
Enterprise Future-Proofing and Complementary Technologies - Perspectives From Our Director of Customer Success

API Strategy, IT Modernization


·  10 Min Read

digitalML’s perspectives on how to future-proof your enterprise


Welcome back to our digitalML spotlight series! Next up we’re speaking to digitalML’s Director of Customer Success, Dick Brown.

In this interview Dick shares his perspectives on:

  • Distributed microservices architecture – and the move to reactive advice for large organizations
  • The 5G rollout and the interesting factors in play
  • Kafka messaging as a complementary, rather than displacing, disruptive technology
  • How future-flexibility is a key goal for large organizations, and how to achieve it

On Distributed Architecture and the 5G rollout…

Gemma: What are 1 or 2 industry trends that are interesting to you right now and why?

Dick: There are two I’m currently keeping up with. One is interesting to me from a consulting and approach perspective; the other is from an industry I track and try to keep my pulse on, and I’m interested in how recent moves are shaking out:

Distributed microservices architecture – and the move to reactive advice for large organizations

The first, is shifting to distributed architectures; how companies are breaking apart their monoliths, and how that takes shape with their downstream consumers. There’s a lot of different ways you can break apart a monolith into a distributed architecture, and allow teams to take advantage of the development autonomy this provides.

The concept has been around for a while, and it’s the finer-grained, natural evolution of service-oriented architecture (SOA). Smaller, more agile teams have taken this concept and run with it. However, when larger companies try to take on this methodology, you can see a breakdown in adoption. I think folks are starting to realize that building microservices is a 5-7 year journey that they’re in the middle of.

For more nimble companies, the move has had a lot more success. The methodology is applied with a greenfield approach – when you’re building something new it’s easier – and you can go straight to right-sized applications and set up frameworks and org structure in a distributed way. But a monolith isn’t going to embrace it the same way; it becomes very difficult, and companies realize the long journey it requires.

As a result, we’re seeing the consultation and advice shift from proactive – “these are the best practices” and “this is what you need to do” – to more reactive: “this is what you should prepare yourself for”. A lot of the information out there is geared towards how to brace your teams and business while staying aggressive and still doing business.

Gemma: So, is it a more pragmatic approach, shifting from an academic exercise to the real implementation of a distributed architecture?

Dick: Yeah, there are only so many recommendations you can make. Over time people have realized how long this takes big companies, so you run out of proactive advice to give. Now the consultants are being very pragmatic: offering sustainable advice that you can work on for the next 1-2 years, while maintaining the overall 5-7 year vision.

5G rollout

The second big industry trend is the 5G rollout. Everything from bidding on different bands, to the acquisition of real estate, to what’s happening with smart cells that can learn how to switch for increased reception.

The telecom industry is something I’ve been involved in in the past, and it always interests me. Not only from a technology perspective – e.g. the hardware and software used to keep it running – but also the political environment it runs through: the bidding wars companies get into, and now the big mergers in the US, with companies trying to maintain competitive advantage. It reminds me of the spectrum auction in the UK between 3, O2 and Vodafone – every company trying to compete with each other, and smaller carriers banding together to say “hey, you have to bar these larger folks from bidding, because we can’t maintain competitive advantage if they do”.

It’s interesting to see how everyone balances doing their best to provide leading service to their customers, while navigating the tightrope of becoming anti-competitive. The state must ask: “Is this going to cause a monopoly and restrict choice for consumers?”

The industry moves super-fast and is always evolving, and seeing the fallout shows just how many factors are at play.

Kafka messaging as a complementary, rather than displacing, disruptive technology

Gemma: What emerging technologies are you currently following?

Dick: One thing I’m following very closely is the world of Kafka; so, event streaming.

The concept is around sending data out to a distribution layer that consumers are subscribed to, without necessarily caring whether the information is picked up. It’s a vastly different strategy from request/response, which is built into the majority of systems.

Command queues and messaging services signal that an action has occurred, and rely on the consumer picking up that message and processing it.

With streaming, you send this data out into the ether. I know people are consuming it (I have an API which shows me that), but I may send the same information 5, 6, 10 times, and how it’s received is really up to the consumer.
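The difference Dick describes can be sketched in a few lines. This is a minimal in-memory illustration of the streaming model – not Kafka’s actual API – where the producer appends to a topic log and never waits on, or tracks, its consumers; each consumer keeps its own read position:

```python
from collections import defaultdict

class EventStream:
    """Toy sketch of the streaming model: an append-only log per topic,
    with each consumer tracking its own offset independently."""

    def __init__(self):
        self.topics = defaultdict(list)   # topic -> append-only event log
        self.offsets = {}                 # (topic, consumer) -> read position

    def publish(self, topic, event):
        # Fire-and-forget: no acknowledgement from any consumer is awaited.
        self.topics[topic].append(event)

    def poll(self, topic, consumer):
        # Every consumer reads from its own offset, so the same events can
        # be delivered to many consumers, each at its own pace.
        pos = self.offsets.get((topic, consumer), 0)
        events = self.topics[topic][pos:]
        self.offsets[(topic, consumer)] = len(self.topics[topic])
        return events
```

Two consumers polling the same topic each receive the full stream independently, which is exactly the inversion from request/response: the provider publishes once and is done, and delivery is the consumer’s concern.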

There’s been a conceptual evolution with certain technologies, where power is flipped from providers to consumers. You have a lot of consumers specifying to providers: “You don’t care whether this message gets to a consumer or not, so I’m going to tell you the information I care about and establish a contract between us. That way, you won’t provide information that varies from this contract and break my consumption. I know it won’t break your sending of the information if I don’t consume it, but it will break my application if I can’t consume the information properly”.
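A minimal sketch of that consumer-driven contract idea: the consumer declares the fields and types it depends on, and the provider checks each outgoing message against that declaration before sending. The field names here are hypothetical, purely for illustration:

```python
# The consumer publishes the shape it relies on (illustrative names).
CONSUMER_CONTRACT = {"order_id": int, "amount": float}

def satisfies_contract(message, contract=CONSUMER_CONTRACT):
    """True if every field the consumer pinned down is present with the
    expected type. Extra fields are fine: the consumer only constrained
    what it actually reads, so the provider stays free to add more."""
    return all(
        field in message and isinstance(message[field], expected)
        for field, expected in contract.items()
    )
```

Real-world versions of this pattern use schema registries or contract-testing tools rather than hand-rolled checks, but the power dynamic is the same: the consumer states what must not change.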

What’s also really interesting to me is seeing how this technology has settled, rather than becoming the brand new thing that replaces everything.

For example, Docker is meant to displace virtual machines (Todd Everett mentioned this in his interview) – you choose one or the other.

But event streaming has settled into a space where you can’t simply drop asynchronous communication from APIs, and you can’t replace all commands and message queues – you still need them and they’re very important.

There are advantages in certain areas for event streaming. So, I think it’s interesting to see how the dust has settled: it’s not a disruptive technology in the displacing sense; it’s a great complement to features that are already there, and folks are finding those unique situations where it becomes advantageous to stream data.

G: That’s a really interesting concept. Do you think we’ll see more of these complementary – as opposed to displacing – technologies moving forward?

D: I think so. There’s a bigger shift now where I see fewer home-run technologies and a lot more bunting (bear with me on the baseball analogies; it’s the World Series right now!).

There’s a lot of open source technology making lives easier, but if you don’t want to use it, or it doesn’t fit certain situations, you don’t have to!

In practice it shows that the technologies we’ve already built have a lot of staying power. They’ll stick around for a long time. There are great ideas behind them, and that reinforces their use in the future – so a lot of the new tools coming out are complementary to them, or introduce new ways to manage or work with them.

Future-flexibility is key for large enterprises – and a hybrid solution is the best way to achieve it

G: What do you think is the biggest challenge for large organizations who want to be better at digital or IT modernization?

D: I think there are a few, but probably the greatest challenge is that as you become a large organization, you’re prone to work with other large organizations. And when you do, the demands on your time and on you as a company become much larger and harder to walk away from.

When you are a large company operating with smaller companies, it’s easy to say “…this is how things go because this is how we do things, and you have to deal with it”. But as you become larger and larger, the people you work with also become larger, and there’s more collateral behind how they choose to operate.

I’m not saying that everyone bullies small companies, and there are always valid reasons to promote an internal best practice, but the contingencies and extenuating circumstances become much greater as you operate with large organizations. It’s harder to walk away from or ignore them, and to enforce your will as easily as before.

It also becomes a lot harder to move. You’re steering a tanker, as you have so many barriers dictating your path forwards.

You’re now in a situation where you want to make a certain move, but can’t because of contractual or technological limitations – because a very strong business partner, consumer or client can’t go down that path with you. If they can’t go forward with you, you’re going to lose revenue, and that can stop you from pursuing those options.

G: So how can they overcome those barriers?

D: I think that’s where you get into hybrid solutions. That’s one of the great things about distributed architecture – you can choose the components that you support from a legacy perspective, while still advancing forward.

Something that’s great in a product like ignite is that when you go into branching development, you have the capability of supporting what’s legacy – what’s standing up for a client – and still maintaining and advancing it moving forward. But then you can have the latest and greatest on top of it, running simultaneously.

Instead of having multiple teams running multiple instances with multiple sets of code, you can have that abstracted out as a design or service layer. That abstracted information can then be republished and repurposed in other places until the client is finally ready to move forward.

It doesn’t stop you and it allows you to still maintain what’s currently there.

“The only thing that’s really consistent in software development, is that nothing’s consistent”

G: Can you share with us a best practice for abstracted service management?

D: One best practice I would strongly adhere to is: pilot teams first, then roll out to larger groups.

Inject yourself into smaller teams that are ready to move and have a strong compulsion to adopt the product, and accommodate their use cases, while keeping a focus on what’s important for the larger organization.

I think it’s important to accept that at the end of the day it’s software – the only thing that’s really consistent in software development is that nothing’s consistent – every solution has a shelf life, and nothing ages gracefully.

Building a solution that works, knowing that it can be changed later, is a best practice everyone should adopt. Whether you’re working in abstracted service management or building your own custom code, building something that has a great use case now but can evolve and adapt in the future brings success.

The power of reusable Data Elements

G: What’s your favorite ignite feature, and why?

D: My favorite ignite feature is one that’s been around for a long time, and I think it sets us apart – the ability to take in data elements and reuse them in future design.

So, there’s this concept that whether it’s REST or SOAP or events or messaging – whatever you’re using – it all comes down to “I have a data element that I’m trying to provide to someone, or one I need to get from someone”.

I think focusing on that and building it as a core feature of the product reinforces our drive towards an agnostic platform that focuses on the use cases.

It’s the truest level of abstraction – forget about how it’s represented, the service layer, the type of profile it’s being used in, or the technology it needs downstream. We provide abstraction of the most core components, which comes down to data model objects.

It reinforces the concept that we tell our customers around being focused on the holistic design, agnostic of a platform or technology – unshackle yourself from that and allow yourself to be truly flexible in the future.
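The reuse idea can be sketched simply: define a data element once as a canonical model, then render it into whatever downstream representation a given profile needs. This is an illustrative sketch of the concept, not ignite’s actual data model or API – the element shape and renderer functions are assumptions for the example:

```python
# One canonical data element, defined once (hypothetical shape).
CUSTOMER = {
    "name": "Customer",
    "fields": {"id": "string", "email": "string", "created": "date"},
}

def as_json_schema(element):
    # REST-style rendering of the same underlying element.
    return {
        "title": element["name"],
        "type": "object",
        "properties": {f: {"type": t} for f, t in element["fields"].items()},
    }

def as_event_schema(element, event):
    # Event/messaging rendering, reusing the identical field set.
    return {
        "event": f"{element['name']}.{event}",
        "payload": sorted(element["fields"]),
    }
```

Because both renderings derive from the one element, a change to the canonical model propagates to every representation – which is the “unshackle yourself from the technology” point above.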

G: And for those data elements, where they are abstracted, does that make them a bit more understandable for people on the business side, looking to drill down to that granularity?

D: I think so. It shows an API designer or a Product Manager the type of information they’re working with, and gives an understanding of whether it’s something folks are going to be able to use, and whether they need to enhance or advance it in a certain way.

The methods that provide that information are where I can actually do different things: expand on a given resource or model, trim it down because it’s inefficiently presented, or move the hierarchy and arrays in a way that’s easier to translate and understand from a consumer’s point of view.

So, I think it does give a simpler context to read a method and understand it.

On digitalML’s rich product history and introducing new processes for customer success

G: What’s the best part about working at digitalML?

D: One of the great things about working at digitalML is the rich history behind every feature and solution. It’s a creative place with a startup culture of “let’s move forward”, but with that history we’ve built up – a lot like some of our customers – we can’t forget everything and leave it behind.

That, and not being a venture capital-backed firm – we make decisions that are best for ourselves and our customers. I really appreciate that perspective in a company: we’re not trying to drive a stock price or valuation, we’re trying to provide the best product possible. A product we can grow our business with, but that also makes our customers really successful.

Everyone who’s been at the company a long time has a really in-depth understanding of not only what the product can do, but also what it has done in the past for several different customers. There are a lot of things ignite has been used for over the years, and having that rich history to pull from is exciting.

In a world where everything’s possible, we have strong, tangible proof: “…yes, we have done that; here’s where we’ve done it before.” Even if we moved away from some features in the past, that trend could now be coming back into the industry, and we can refocus our efforts towards supporting and advancing it.

While we have all this history, we’re also always working towards a better product and driving customer value. That shows in many different ways; we’re currently introducing a bunch of new processes throughout the company to drive visibility, success and support for customers.

G: Finally, any other advice for our customers?

D: I think back to what I was saying about best practices: embrace the idea that software development is a constantly evolving journey. Sometimes it can be tempting to adopt a tool and treat it like a silver bullet, but like I mentioned with events, it’s about finding where the tool best complements what you have, and making yourself future-proof and ready for what’s coming next.

About Dick

Dick Brown, Director of Customer Success

Dick is Director of Customer Success at digitalML, and heads up our customer delivery, support, and solutions teams, to ensure ignite customers achieve their strategic goals. His experience spans program management, software consultancy, sales and marketing across multiple industries. Dick offers a record of success developing and implementing complex enterprise software solutions across international markets, including the UK, Europe, Australia, and emerging markets in Africa and South America.

A huge thanks to Dick for sharing his perspectives this month. See you back here soon for the next installment!
