Containerization, API Cataloging and Evolving APIs – Perspectives from our Principal Consultant
September 20, 2019 | Reading Time: 9 minutes
Welcome to the next installment of digitalML’s spotlight series!
This time around we’re speaking to Principal Consultant Todd Everett, who gives his perspectives on:
More efficient business collaboration tools
Containerization using Docker and Kubernetes
RESTful APIs for exposing just the data you need from large databases
Effective cataloging of APIs and Services in large organizations
Evolving abstracted APIs over time to prevent a waterfall approach and futureproof assets
Proper use cases for emerging technology adoption
On collaboration tools, Containerization and the benefits of RESTful APIs
Gemma: What are a couple of industry trends that are interesting to you right now and why?
Todd: There are three trends I find interesting right now, spanning both older and newer technologies:
The first trend I’m seeing is organizations gradually moving to more efficient collaboration tools, e.g. Microsoft Teams and Slack. They’re moving away from things like email and social media for internal teamwork because those channels just aren’t productive enough – they carry too much noise and are too distracting, so it’s easy to miss things. Moving to something simpler and specifically designed for business needs is making collaboration quicker and easier.
Containerization using Docker and Kubernetes
The second thing is something Ryan spoke about in his article – containerization using Docker and Kubernetes. What makes them so interesting is the ease of use.
Back in the day we had to create virtual machines (VMs) for everything – standing up application servers, databases, etc. – to get an environment of our platform, ignite, up and running. The VMs let me move forward without having to buy and install a new machine every single time. But even then, I still had to install the software on every single machine, and if one went down I’d have to restore from a backup image and carry on from there. And because everyone made their own changes, the machines were hard to support over time.
With Docker, you can create really fine-grained containers – an individual container for an application server, a database, Elasticsearch, etc. – which means if something blows up I can just redeploy that entire container from scratch.
The containers can also be standardized very nicely. If you need to make updates you can do it from the host, or via various other methods – you don’t need to make changes to the container itself.
From a security aspect, you used to have to change every single thing all the time – e.g. I’d run security profiles on one server, then repeat the process on every other server. Now you just do it at development time, and once it’s deployed you don’t have to worry about it again. I can have 10 different versions of an application, against 10 different versions of databases with different patch levels, all running on my desktop at the same time – installed within 15 minutes. It’s beautiful!
All this makes using the cloud so much easier.
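The fine-grained setup Todd describes might look something like the compose file below – a hypothetical sketch in which each component gets its own container that can be torn down and redeployed independently. The service names, images, and versions are illustrative, not ignite’s actual stack.

```yaml
# Hypothetical docker-compose.yml: one fine-grained container per component.
version: "3.8"
services:
  app:
    image: tomcat:9               # application server
    ports:
      - "8080:8080"
    depends_on: [db, search]
  db:
    image: postgres:13            # database
    environment:
      POSTGRES_PASSWORD: example
  search:
    image: elasticsearch:7.17.0   # search index
    environment:
      discovery.type: single-node
```

With a file like this, `docker compose up -d` brings the whole environment up, and if the database “blows up” you can redeploy just that container from scratch without touching the others.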
The final thing is modern RESTful APIs. Not exactly a new trend but what they open up for large enterprises is really important.
The word API has been around for a long time in tech, and there have been a few phases. We used to have everything on the mainframe, managed through the old 3270 green screens. All of an organization’s data was on that mainframe.
These mainframes were first built when there wasn’t much memory or processing power, so the code had to be extremely efficient. As processing power increased and memory became cheap, we put more and more onto these mainframes. They grew so huge they’d have to run batch jobs at night, which would take 4+ hours.
People then started saying “COBOL and its copybooks are a dead language, let’s get all that data off the mainframe and put it onto a Java system”. Trying to move that whole thing to Java was huge. Java was built when memory was readily and cheaply available, so you really didn’t write code with memory savings in mind. You’d end up with the same batch job taking 3 days on Java!
We started moving away from batch in general, although the mainframe still does a good job of it. The focus shifted to getting the data held on the mainframe off of there. There are many ways this could be done, but one that stuck was SOAP interfaces. With SOAP, we would expose the mainframe as an XML interface – one advantage being that XML became ubiquitous, so everyone could read it instead of needing adaptors. But the problem was, we didn’t know how big to make things. With a copybook you’d say “hey, I need this data” and the response would be “well, you’re getting it all” – but I don’t need all the data, only a little bit of it.
That’s how the next iteration of APIs came about. What happened with SOAP was you were essentially exposing whole back-end systems – massive amounts of data coming over as XML. When applications themselves needed to query a database in those back-end systems, they wouldn’t construct SOAP interfaces; they would just make the JDBC calls themselves.
RESTful APIs are the next generation of that – even in our ignite platform we’ve exposed our entire application via RESTful calls. We’re not calling a mainframe though, we’re calling our own application. SOAP would have been too heavy to do this.
What RESTful APIs have done is create interfaces which are lightweight, let you expose just your own data, and size responses correctly – just like you would with a SQL call. APIs allow you to grab only the information you need.
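The “grab only the information you need” idea can be sketched with a sparse-fieldset filter of the kind many REST APIs support, e.g. `GET /reservations/42?fields=id,status`. The record shape and field names below are invented for illustration.

```python
def select_fields(record, fields=None):
    """Return only the requested comma-separated fields; no param means the full record."""
    if not fields:
        return record
    wanted = {f.strip() for f in fields.split(",")}
    return {k: v for k, v in record.items() if k in wanted}

reservation = {
    "id": 42,
    "status": "CONFIRMED",
    "passenger": {"name": "A. Smith", "ff_number": "XY123"},
    "segments": [{"from": "JFK", "to": "LHR"}],
}

# A SOAP-style interface would hand back the whole record every time;
# here the caller sizes the response, like a targeted SQL SELECT.
print(select_fields(reservation, "id,status"))  # → {'id': 42, 'status': 'CONFIRMED'}
```

The contrast with the copybook era is exactly the one Todd draws: the caller, not the back end, decides how much data comes over the wire.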
Cataloging APIs and Services is a challenge for large organizations, but there is an easy way…
Gemma: What do you think is the biggest challenge for large organizations who want to be better at digital or IT modernization?
Todd: The biggest problem large organizations face is gaining an accurate view of all their existing IT assets. They’ve got thousands of APIs and services all over the place in different runtime environments, along with all the endpoints, mappings, dependencies etc.
Recently I went to a meeting with a large airline – they gave us 100 WSDLs for demoing our ignite platform’s capabilities. I imported them into the system and started looking at what they had. It turns out they had 7 reservation services that did almost the same thing – in fact 3 were exactly the same! What’s more, they didn’t even know they had this repetition. Scale this up to the thousands of APIs large organizations have, and that’s a lot of duplicates!
I have seen this scenario many a time – when you need a particular API or service there are two ways of doing things: you can search for an existing one, or you can recreate it. If the search takes you days and you still can’t find it, sometimes it’s just easier to recreate it. An analogy I like to use: I know I have a 7/16” wrench somewhere in my house, but I have no idea where it is, so I just end up going to the hardware store to buy another – it’s easier and quicker than searching.
Cataloging of APIs and services is the way to gain this view, but it’s not totally simple. Sticking with my wrench analogy: it’s not that I’m too lazy, I just don’t know where to put my wrench! I don’t have a toolbox or a specific place for it, so I throw it into my junk drawer with all kinds of other things. But if I do that with everything that doesn’t have a place, my junk drawer soon becomes so big and chaotic that I can’t find anything in there! Eventually I just tip it into the trash can and start from scratch.
That’s exactly what happens with all the APIs, SOAP services etc. in a large org. It becomes a big glob of junk; you can’t tell what affects what, which service is where, and so on. No one knows where to store things properly, so everyone ends up with their own little stash on their desktop. After a while you just reformat and start again.
G: Sounds like chaos! So how can they solve this issue and properly catalog their IT assets?
T: That’s where a platform like ignite comes in. Not only are you cataloging all your services, you’re also doing so in a way which makes sense. You’re not losing things; everything’s searchable as you can tag it with metadata. For example, I know I’m looking for a reservation service but not the exact details. If I search for everything with a “reservation service” tag, I can easily find the exact service I’m looking for. This is because the ignite platform actually tells you how to categorize your services and where to put them.
Another good analogy for this is your email inbox. You create all these email folders to categorize your mail… you’re diligent for maybe 3 weeks… and then you get slammed with something else and all your emails get overwhelming. Well imagine if all your emails were automatically tagged so if you’re looking for an email from a certain customer you just search it and it pops up – then you don’t ever have to worry about your email folders! You’re just able to see what you want when you want to see it.
With ignite you don’t have to spend ages putting things into hierarchical folders. And say I want to put something in more than one folder – I can’t do that unless I copy it! Whereas if it’s tagged, I can have as many tags as I want associated with one email. It’s the same with ignite – you can have multiple taxonomies associated with a service, or even multiple nodes within one taxonomy, to ensure everything is properly organized and services are easily discoverable.
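The tags-beat-folders point can be made concrete with a toy tag index: a service can carry any number of tags and still be found by any one of them, where a folder would force a single location. Service names and tags here are invented; ignite’s real taxonomy model is far richer.

```python
from collections import defaultdict

class Catalog:
    """Toy tag-indexed catalog: search by metadata instead of folder location."""

    def __init__(self):
        self._by_tag = defaultdict(set)

    def add(self, service, *tags):
        # One service can sit under many tags (taxonomy nodes) at once,
        # unlike a folder hierarchy, where it could live in only one place.
        for tag in tags:
            self._by_tag[tag.lower()].add(service)

    def search(self, tag):
        return self._by_tag.get(tag.lower(), set())

catalog = Catalog()
catalog.add("FlightReservationV2", "reservation service", "flights")
catalog.add("HotelReservation", "reservation service", "hotels")
catalog.add("BaggageStatus", "baggage")

# One search surfaces every reservation service, wherever it is deployed –
# including the near-duplicates an organization didn't know it had.
print(sorted(catalog.search("reservation service")))
```

This is the airline scenario in miniature: a single tag query would have surfaced all 7 reservation services side by side, making the duplication obvious.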
Evolving APIs over time prevents analysis paralysis… and abstraction helps futureproof your assets
G: Can you share with us a best practice for abstracted API and service management?
T: Don’t try to satisfy every requirement in one go with a single API design; a design can be evolved over time to keep it lightweight and simple. That was the problem with older SOAP services – they were absolutely huge, because they catered for every single scenario.
My Mom used to say: you can please some people all of the time, but you can’t please all people all of the time. What ends up happening is you create these services through what amounts to analysis paralysis, or a waterfall process. You go and ask everyone what they want out of the service; naturally they all want different things.
Before long you have an enormous service with all this data – accessible via MQ endpoints, JMS endpoints, HTTP endpoints… on and on, because you wanted to please everyone. It becomes so big and unwieldy that if you change one thing, you’re going to break 1,000 other things in 1,000 different ways – so it costs you $1m to make one change!
Figure out what your use case is – what do you need your service to accomplish? Then accomplish it! Just make sure you’re pleasing your key stakeholders; the service can always be evolved in a later version.
Having your services as abstracted designs is a key factor in being able to evolve them, and in futureproofing generally. Technology changes and needs change over time. For example, 5 years ago XML was the trend; right now it’s JSON – but what’s next? If everything is abstracted away from these technologies, you can simply unplug and re-plug into the next thing that comes along. Imagine you’d created all your interfaces using copybooks – they probably wouldn’t be very useful at this point. But because everything is abstracted, you can evolve with whatever technology is the next big thing.
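The unplug/re-plug idea can be sketched as one format-neutral design rendered by interchangeable serializers – yesterday’s XML and today’s JSON from the same definition. The design fields below are hypothetical, not ignite’s actual model.

```python
import json
from xml.etree.ElementTree import Element, SubElement, tostring

# Abstract, format-neutral design of one operation (illustrative fields).
design = {"operation": "getReservation", "fields": ["id", "status", "passenger"]}

def to_json(design):
    """Today's wire format."""
    return json.dumps(design)

def to_xml(design):
    """Yesterday's wire format, generated from the same abstract design."""
    root = Element("operation", name=design["operation"])
    for field in design["fields"]:
        SubElement(root, "field", name=field)
    return tostring(root, encoding="unicode")

# Same design, two generations of technology. When the next format arrives,
# only a new renderer is needed – the design itself never has to change.
print(to_json(design))
print(to_xml(design))
```

The design choice mirrors Todd’s point: the abstraction, not the serialization, is the durable asset.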
G: What’s your favorite ignite feature, and why?
T: ignite’s APIs are really cool. We chose to expose our entire application through RESTful calls. This enables us to rapidly launch new features and continually improve existing functionality.
Sitting on top of that, Developer+ templates* are amazing as they allow you to generate absolutely anything from your designs.
*Note to reader: these are expression-based templates which pull information from any abstracted service design, to enable auto-generation of runtime artifacts for any technology.
The fact we’ve built the platform on APIs makes something like Developer+ really easy and quick to use. Imagine you were writing an output template and had to write SQL queries for everything you did – it would take forever! But being able to fetch that data with an API call makes things slick and quick.
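A rough sketch of the expression-based template idea: expressions pull values out of an abstracted service design to generate a runtime artifact. The template syntax (Python’s `string.Template`), the design fields, and the OpenAPI output are all stand-ins for illustration, not Developer+’s actual DSL.

```python
from string import Template

# Hypothetical abstracted service design, as might be fetched via an API call.
design = {"service": "ReservationLookup", "version": "1.2", "path": "/reservations"}

# Expression-based template: $-placeholders pull fields from the design.
artifact_template = Template(
    "openapi: 3.0.0\n"
    "info:\n"
    "  title: $service\n"
    "  version: '$version'\n"
    "paths:\n"
    "  $path: {}\n"
)

# The same design could feed any number of templates (OpenAPI, WSDL,
# client stubs...) because the data arrives via an API call rather than
# hand-written queries against the catalog's database.
print(artifact_template.substitute(design))
```

Swapping in a different template regenerates a different artifact from the identical design, which is the “generate absolutely anything” property Todd describes.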
On use cases for emerging technology implementation and synergy between colleagues…
G: What’s the best part about working at digitalML?
T: Without doubt, it’s being on the front line of new technologies. In big corporate organizations, sometimes emerging tech won’t be implemented for years. Or, at the other end of the spectrum, you’ll be told to implement a new technology without the use case it’s supposed to help.
I was at a gathering the other night and spoke to someone who had just switched jobs after years and years at a large organization. I asked him why he moved and he said he just couldn’t take any more – “we had these folks from EA finding new tech they thought was cool and just telling us to go use it. We’d ask, what is it for? The response would be: I don’t care, it’s a cool technology; go use it!”
When that happens over years and years, you just build up all these layers of different systems which were only used because somebody mandated that you had to use that technology.
But at digitalML, you’re actually both using and creating a technology based on need and agility. There’s always a use case for what you’re doing; if not you don’t do it.
The other thing I’ve noticed in my life is that the most fun I’ve ever had in a job is where you have a bunch of people who are hungry for new knowledge and want to move together. You build this synergy with each other which propels you all forward and makes life and work a lot more fun! With a small company like digitalML, that’s just the way everybody works. Everyone is on the same team and we’re all working towards a common goal!
Todd is a Principal Consultant at digitalML, providing solutions and support for our most valued ignite customers. He’s a technical expert in our platform and has over 20 years’ experience spanning customer success, enterprise architecture, and development across the tech, insurance, and banking sectors.
Thanks to Todd for giving his perspectives this month. You can also read past interviews in this series here.
See you next month!