DevOps, Containerization, and Security – Perspectives from our Senior DevOps Engineer

Welcome back to our digitalML spotlight series! This time we’re speaking to Senior DevOps Engineer Anirban Das, who shares his perspectives on the benefits of Docker and Kubernetes, the security benefits and challenges of DevOps tools, and communication internally and with partners.


On the benefits of Docker and Kubernetes

Gemma: What are 1 or 2 industry trends that are interesting to you right now and why?

Anirban: I would say containerization using Docker and Kubernetes.

If I rewind 10 years, we used to either build servers manually or deploy VMs from templates. The template would have the majority of the base setup pre-done, but in most cases each server (e.g. a database server, an application server, or a UI server) would still need additional configuration on top after being provisioned from the template.

A lot of the time that configuration would be done manually (and, as time moved forward, with a script). But you’d still have the problem of people coming in and making changes that weren’t properly recorded and didn’t always follow the change control procedure – before you knew it, the server would end up in an unknown state. So if you ever needed to rebuild the server from scratch, there would often be inconsistencies, and it would never be clear from the documentation what was missing.

Then config management and provisioning tools – like Puppet and Chef – came along, which I’ve worked with for the last 6 years. They are great at keeping servers in a known state – you deploy from a template, then apply your Puppet manifests, and Puppet configures the whole server.

The time to do so varies, though. For example, setting up a UI server might take 5 minutes, but setting up an Oracle database server might take an hour – so it’s still time-consuming.

Puppet is great at keeping your servers in a state that’s consistent for your application to run, but it only checks what you’ve configured; if someone goes in and adds something else or changes a file that isn’t managed by Puppet, it won’t be subject to the same checks.

Scaling becomes a bit of an issue too. You can scale horizontally and spin up another server with Puppet, but if the original server takes an hour to provision, it’s going to take another hour to provision the new one. Still, if it’s in demand and you need that resource, you have to do it!

That’s where containerization really helps, because everything’s pre-built. Your images already contain exactly what you need to run your application; you don’t need to add anything to them.

Once you deploy your container, that’s it – no additional installation or configuration is needed because it’s all in the image you created.

It helps with scaling too. If you’re using Kubernetes or Docker Swarm you can configure it to auto-scale – under high traffic the containers scale out automatically, and their start-up time is significantly less than a VM’s. Once usage drops, it automatically scales back down. That enables usage-based, on-demand scaling instead of having to pre-plan capacity.

The thresholds are of course configurable too, so you can optimize your resources consistently.
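To make the auto-scaling idea concrete, here is a minimal sketch of that kind of configuration using the official Kubernetes Python client. The deployment name, namespace, replica counts, and CPU threshold are illustrative assumptions for the example, not digitalML’s actual settings.

```python
# Illustrative sketch: create a HorizontalPodAutoscaler so a Deployment scales
# with CPU usage. Names, namespace, and thresholds are placeholder values.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="my-app-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="my-app"),
        min_replicas=2,                         # scale back down to this when traffic drops
        max_replicas=10,                        # upper bound during traffic spikes
        target_cpu_utilization_percentage=70,   # the configurable threshold
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```

The same thing is usually expressed as a YAML manifest or via kubectl; the point is that the scaling behaviour is declared once up front and the platform acts on it automatically.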


Gemma: What’s a favorite project you’ve worked on during your time at digitalML?

Anirban: I’ve been heavily involved in the migration of our own ignite platform from stand-alone Docker containers to Kubernetes. We were running most of our SaaS customers on these stand-alone containers, and so we weren’t getting any of the nice features I’ve just spoken about!

Migrating most of our customers to Kubernetes has had another benefit too – it’s helped us mature our product. Doing so surfaced some potential improvements to the platform that had previously been overlooked, so it was a great opportunity to find and apply genuine solutions, make ignite even better, and improve our own internal processes.



The security benefits and challenges of DevOps tools

G: Security is naturally a big issue for large enterprises; what implications do DevOps tools have on this?

A: There’s no doubt that security has always been a big concern for large enterprises. For example, I used to work for a payments company, so of course the money transactions taking place on the servers needed to be really secure.

When you’re hosting things on the cloud, security is a big concern – it’s shared infrastructure at the end of the day.

If we were deploying VMs, we’d use something like CIS-CAT, which is a benchmarking tool, to ensure the server hardening had been done correctly. When we moved to Puppet, it would apply those CIS standards and run the scans automatically. The same goes for containers: the base image you’re using needs to be fully compliant and pass a certain benchmark score.

There’s security at different levels, for example:
  1. Server hardening – if someone gains unauthorized access to the server, how much damage can they cause?
  2. Network security – how secure is the network itself? Can someone gain unauthorized access to your network and reach something they’re not meant to? Is your network protected against a DDoS attack that could grind your application to a halt? Is your database open to the network? Your application server needs access to the database, but can any other server or container talk to the database directly and retrieve information?
  3. Storage security – is the data in your database encrypted? If your storage is stolen or copied, can someone recover the data?

Ensuring that no one can access your secure data is a big task. If you’re enforcing all your security standards manually, it takes ages to secure even one stack: first you add network security to your network devices (e.g. firewalls), then you configure your servers with your hardening policies. That can take days per stack, which ultimately has a huge impact on time-to-market.

Whereas if you’re using DevOps tools, you apply your policies upfront – define them once and they’re applied automatically during provisioning. If you then need to update a policy, it updates everywhere automatically too, while staying fully compliant with ITIL practices. And because it’s automated, you’re eliminating human error, which is a big plus.


G: What are some of the security risk factors with using DevOps tools?

A: The problem with new technologies is that security quite often isn’t up to scratch from the outset – it gets locked down later, once the tech is established, and is constantly maturing. That’s why we sometimes see the largest organizations take a conservative stance – the confidence to move is low.


On communication internally and with partners

G: What’s your favorite ignite feature, and why?

A: I’m really excited about the new version of ignite and the value it brings to reusability by organizing your APIs and microservices as bundle-ready digital building blocks.


G: What’s the best part about working at digitalML?

A: The team is great. We’re very communicative within the Engineering team, which helps things get discussed and done really quickly. Sometimes, when you have larger separate teams (e.g. DevOps, Front end, Back end, support), the communication between those teams breaks down, even though they’re interdependent.

There’s also a great issue-raising process which gives clarity on priorities throughout the company as a whole when it comes to our product.


G: Any other advice for our customers?

A: It would definitely be for customers to provide as much insight into their security restrictions as possible – things like what you’re allowed and not allowed to do, and why. A lot of the time that visibility helps us make sure what we’re delivering with the ignite platform is fully compliant with your policies.


About Anirban

Anirban Das, Senior DevOps Engineer at digitalML

Anirban is a Senior DevOps Engineer here at digitalML. With over 8 years’ experience across DevOps, cloud hosting, and automation, he uses his expertise to ensure high availability and optimization of our ignite platform infrastructure for our customers.

A big thanks to Anirban for sharing his views this time around!
What is API Governance? 8 Best Practices for API Governance Success

As APIs, and an API First strategy, gain more recognition throughout enterprises as important factors in digital transformation, we’re seeing a rush to plan, design, and build new APIs at scale. Proper API governance is essential to ensuring your APIs are discoverable, consistent, and reusable.

In fact, Forbes is now stating that “the strategic importance of API governance cannot be underestimated”, and that API governance is a key part of an enterprise’s competitive edge when it comes to digital.

But what exactly is API governance, why do you need it, and what are proven best practices for enforcing it?


What is API governance?

API governance is the practice of applying common rules – covering API standards and security policies – to your APIs. It also quite often involves designing your APIs based on a common data model of approved, reusable resources/model objects (a best practice in itself, which we will come back to later). Finally, governance can be used to ensure your APIs are sufficiently enriched with metadata for them to be easily consumed by a wider audience, both within your enterprise (e.g. product managers) and externally (e.g. partners).

API governance includes API standards, security protocols, important metadata, and an information model approach

The goal of API governance is to ensure proper standardization of your APIs so that they are discoverable, consistent, and reusable.


Who needs API governance?

Simply put, API governance is important for any organization implementing an API First strategy, where APIs are core to their transformation to a digital business. It’s also vital for anyone planning to implement, or already running, a distributed microservices environment.

There’s one use case in particular where API governance is absolutely critical – large enterprises. This is because they require thousands of consistent, secure, and reusable APIs representing both Business and IT functions, rather than a handful of well-documented public APIs in an API portal.

There’s also a regulatory/compliance aspect to the need for API governance. One example is the Open Banking Implementation Entity (OBIE)’s Open Banking Standards in the UK for the big 9 banks. Admittedly, the standards are in an early stage of maturity, but the trend of regulator pressure on organizations to enforce proper API governance and standards, and to be able to demonstrate that they are doing so, is one we expect to see grow dramatically in the coming years.


API governance… no longer the Elephant in the room?

Traditionally, API governance has had a bit of a bad rap and has often been viewed as slowing down development. This is primarily because APIs have been manually written by developers, often with governance as an afterthought – so governance has always been hard to enforce, relying on developers remembering to apply the rules, and hard to check against and resolve without manually massaging the code.

On top of that, there are different architectural styles of API (think SOAP vs REST vs GraphQL vs AsyncAPI, etc.), each with its own recommended coding standards and design patterns to keep up with, and often subsets within those depending on the profile of the API you want to design – that’s a lot of rules to remember!

Thankfully, there is a newer body of opinion, supported by great tooling, that applies governance upstream in the API lifecycle. If implemented in the correct way, and by following the best practices below, API governance can in fact speed up your development of APIs at scale, and ensure you’re getting the best business value out of your investments. That’s regardless of the type of API you’re designing.



8 API governance best practices

1. Have one set of enterprise-wide API governance rules

This sounds like an obvious one, but it’s important to have a set of governance rules that are defined globally rather than only on a LoB or individual group basis. By this we mean adopting not only basic coding standards (e.g. the OpenAPI Specification), but also rules based on what matters to the business.

You also want all your governance rules for the different architectural styles of service (e.g. SOAP, REST) in one place. If they are all centrally located and maintained, everyone using them in your organization has a single source of truth.
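As a rough illustration of what “one source of truth” can look like in practice, here is a hedged sketch of a centrally maintained rule set applied to any parsed API definition. The rule names, severities, and checks are invented for the example and are not a real ruleset.

```python
# Hypothetical central rule set, applied to any parsed API definition (dict).
# Rule names, severities, and checks are illustrative only.
from typing import Callable

# Each rule is (name, severity, check); check returns a list of violation messages.
Rule = tuple[str, str, Callable[[dict], list[str]]]

GOVERNANCE_RULES: list[Rule] = [
    ("info-contact-required", "must",
     lambda spec: [] if spec.get("info", {}).get("contact") else ["info.contact is missing"]),
    ("https-only", "must",
     lambda spec: [f"insecure server url: {s['url']}"
                   for s in spec.get("servers", []) if s.get("url", "").startswith("http://")]),
    ("description-recommended", "should",
     lambda spec: [] if spec.get("info", {}).get("description") else ["info.description is missing"]),
]

def check(spec: dict) -> list[tuple[str, str, str]]:
    """Run every centrally defined rule against one API definition."""
    violations = []
    for name, severity, rule in GOVERNANCE_RULES:
        for message in rule(spec):
            violations.append((name, severity, message))
    return violations
```

Because every team runs the same check function against the same rule list, a rule updated centrally takes effect everywhere at once.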


2. Manage your APIs as abstracted Designs in a holistic service catalog

There are two parts to this best practice. The first is that if your APIs are held as abstracted Designs instead of code, with the technical details (e.g. payloads, parameters, and headers) held in associated Specification(s), governance rules become much easier to both bake in and apply throughout the lifecycle (see later best practices for more on these).

The second is that if they are held as part of a holistic service catalog, with mappings, lineage, and dependencies all documented too, and aligned to taxonomies, it’s much easier to visualize and rationalize your APIs. You gain insight into, and control over, where they are, who owns them, who’s using them, where data flows, etc. Obviously, this is particularly helpful for the regulatory aspect of API governance we discussed earlier.
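As a very rough sketch of what “abstracted Design plus associated Specifications” could look like as data, here is an illustrative structure. It is an assumption for the example, not ignite’s actual data model.

```python
# Hypothetical shape of a catalog entry, separating the abstracted Design
# from its technical Specifications. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Specification:
    style: str          # e.g. "REST", "SOAP", "GraphQL"
    artifact: str       # e.g. path or URL to the OpenAPI/WSDL document
    version: str

@dataclass
class Design:
    name: str
    owner: str
    taxonomy: list[str] = field(default_factory=list)      # business capability tags
    consumers: list[str] = field(default_factory=list)     # who is using it
    depends_on: list[str] = field(default_factory=list)    # lineage/dependencies
    specifications: list[Specification] = field(default_factory=list)

# A catalog is then just an indexed collection of Designs, which makes questions
# like "who owns this?" and "what depends on it?" straightforward to answer.
catalog: dict[str, Design] = {}
```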


3. Use an information model to plan, design, and build your APIs

Many large enterprises either have, or are working to build, information models (known as canonical models in the SOA days). Planning, designing, and building your API methods off a model is recommended because you are then using approved model objects and resources that standardize enterprise information in predetermined structures. This helps with API governance at speed, as it increases the standardization and reusability of your APIs while bypassing the slow process of developers having to define common structures over and over.
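To illustrate the idea, here is a hedged sketch in which an approved “Customer” model object is defined once in the information model and reused by any API design that needs it. The field names and structure are invented for the example.

```python
# Illustrative only: a shared model object defined once and reused as an
# OpenAPI schema component by any API that handles customer data.
CUSTOMER = {  # approved, predetermined structure for "Customer"
    "type": "object",
    "required": ["customerId", "name"],
    "properties": {
        "customerId": {"type": "string"},
        "name": {"type": "string"},
        "email": {"type": "string", "format": "email"},
    },
}

INFORMATION_MODEL = {"Customer": CUSTOMER}

def components_for(model_names: list[str]) -> dict:
    """Build a spec's components/schemas section from approved model objects."""
    return {"components": {"schemas": {m: INFORMATION_MODEL[m] for m in model_names}}}

# Every API that needs customers references the same structure instead of
# redefining it, e.g. components_for(["Customer"]).
```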


4. Apply governance at all stages of the API lifecycle

API governance has traditionally had a tendency to cause roadblocks in development – when things have been overlooked early on but become a bigger issue later in the process. If you can ensure your API governance rules are applied at all stages of the lifecycle – i.e. throughout Plan, Design, Build, and Run – you’ll prevent these roadblocks and help speed up development, while ensuring the outputs are all properly standardized.


5. Bake in your API governance rules

This best practice is key to stopping governance being that Elephant in the room we mentioned earlier. Having to rely on your developers to manually apply your API governance rules, standards, security policies, etc. in their code is no fun for anyone, and not feasible when you’re developing APIs at scale. You may also have different levels of governance to apply, e.g. the Swagger violation levels of “must” vs “should” vs “may”, and rules around which levels need to be met before an API can transition to a certain state (e.g. UAT → PROD). That’s near-impossible to enforce if API governance is a manual process!

Therefore, API governance rules need to be baked in, automatically applied and validated against, with a simple way to rectify violations without having to spend time digging in the code. By doing this, you’re minimizing human error, speeding up development time, and ensuring that all your APIs are standardized and reusable.
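Here is a minimal sketch of what an automated promotion gate might look like, assuming violations come back from validation as (rule, severity, message) tuples; the state names and which severities block which transition are assumptions for illustration.

```python
# Hedged sketch of a lifecycle promotion gate: an API can only move to the
# target state if no violations at a blocking severity remain.
BLOCKING = {"UAT": {"must"}, "PROD": {"must", "should"}}

def can_promote(target_state: str, violations: list[tuple[str, str, str]]) -> bool:
    """violations: (rule_name, severity, message) tuples from automated validation."""
    blocking = BLOCKING.get(target_state, {"must"})
    remaining = [v for v in violations if v[1] in blocking]
    for rule, severity, message in remaining:
        print(f"[{severity}] {rule}: {message}")   # surface what still needs fixing
    return not remaining

# e.g. can_promote("PROD", violations) would gate the UAT -> PROD transition.
```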


6. Implement versioning

APIs often need extending, modifying, or sometimes deprecating, and versioning is therefore crucial to keeping track of this. It’s important to apply your governance rules to ALL versions, to ensure they’re all standardized and properly documented. Governance can also help determine whether a change is backward compatible, so it may be possible to enforce a major version change when an API version is not backward compatible, and therefore prevent breakage for the applications that consume it.
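As a hedged sketch of how tooling might flag a non-backward-compatible change, here is a check that only looks at removed paths and operations between two parsed OpenAPI documents; a real check would also cover parameters, response schemas, and type changes.

```python
# Minimal backward-compatibility check between two parsed OpenAPI specs.
# Only removed paths/operations are considered, as a simplified example.
def removed_operations(old_spec: dict, new_spec: dict) -> list[str]:
    removed = []
    for path, ops in old_spec.get("paths", {}).items():
        new_ops = new_spec.get("paths", {}).get(path)
        if new_ops is None:
            removed.append(path)
            continue
        for method in ops:
            if method not in new_ops:
                removed.append(f"{method.upper()} {path}")
    return removed

def requires_major_bump(old_spec: dict, new_spec: dict) -> bool:
    """A removed path or operation breaks existing consumers, so governance can
    force a major version change rather than a minor one."""
    return bool(removed_operations(old_spec, new_spec))
```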


7. Ensure your API governance rules are met before an API can be deployed

Deploying un-governed APIs, or APIs with governance violations, even into a sandbox environment, is detrimental for large enterprises, costing time and money. You want your enterprise’s API governance rules, as well as generic architectural-style standards (e.g. the OpenAPI Specification), to be validated against, and any violations resolved, while you’re still at design time.

An example of this would be ensuring that all the “must” Swagger violations are addressed before an API can be transitioned from UAT to Production environments.


8. Apply governance rules to your Brownfield services too

It’s one thing to ensure all your shiny new Greenfield APIs are properly governed, but there’s huge value in applying the same standards to your Brownfield services too (e.g. SOAP web services). Doing so provides insight into the overall state of your legacy assets, helps identify high-priority targets for modernization, and speeds up regulatory and compliance-based reporting.



Looking for a solution to help you implement API governance best practices? The ignite platform from digitalML provides a holistic service catalog with an API lifecycle focused on Plan, Design, and Build. ignite provides extensive API governance features, including baked-in governance rules (covering API standards, security policies, and rich metadata applied to APIs and Services), full versioning, and designing off a common information model. For more information on ignite, visit our platform page.