DevOps, Containerization, and Security – Perspectives from our Senior DevOps Engineer
February 11, 2020
Welcome back to our digitalML spotlight series! This time we’re speaking to Senior DevOps Engineer Anirban Das, who shares his perspectives on:
- The benefits of containerization
- Migrating our own customers from stand-alone Docker containers to Kubernetes
- The security benefits and challenges associated with DevOps tools
On the benefits of Docker and Kubernetes
Gemma: What are 1 or 2 industry trends that are interesting to you right now, and why?
Anirban: I would say containerization using Docker and Kubernetes.
If I rewind 10 years, we used to either build servers manually or deploy VMs from templates. The template would have the majority of the base setup done in advance, but in most cases additional configuration was still needed on top to provision each server (e.g. a database server, an application server, or a UI server) from the template.
A lot of the time that configuration would be done manually (then, as time moved forward, using a script). But you’d still have the problem of people coming in and making changes that weren’t properly recorded and didn’t always follow the change control procedure – before you knew it, the server would end up in an unknown state. So if you ever needed to rebuild the server from scratch, there would often be inconsistencies, and it was never clear from the documentation what was missing.
Then config management and provisioning tools – like Puppet and Chef – came along, which I’ve worked with for the last 6 years. They are great at keeping servers in a known state – you deploy from a template, and then apply your Puppet manifests. Puppet configures the whole server.
The time to do so varies, though. For example, if you’re setting up a UI server it might take 5 minutes, but an Oracle database server might take an hour – so it’s still time-consuming.
Puppet is great at keeping your servers in a state that is consistent for your application to run, but it only checks what you’ve configured; if someone goes in and adds something else, or changes a file that isn’t managed by Puppet, it won’t be subject to the same checks.
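As a hypothetical sketch of what a managed resource looks like (the file, module, and service names here are illustrative, not from the interview), a Puppet manifest might declare a config file and its service – anything not declared this way sits outside Puppet’s checks:

```puppet
# Hypothetical example: keep the sshd config and service in a known state.
# Files an admin adds or edits by hand, outside this manifest, are not checked.
file { '/etc/ssh/sshd_config':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0600',
  source => 'puppet:///modules/ssh/sshd_config',
}

service { 'sshd':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ssh/sshd_config'],  # restart sshd if the file changes
}
```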
Scaling becomes a bit of an issue too. You can scale horizontally and spin up another server with Puppet, but if the original server takes an hour to provision, the new one will take an hour too. Still, if it’s in demand and you need that resource, you have to do it!
That’s where containerization really helps, because everything’s pre-built. Your images already contain everything you need to run your application; you don’t need to add anything to them.
Once you deploy your container, that’s it – no additional installation or configuration is needed because it’s all in the image you created.
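As a minimal sketch of that idea (the base image, file names, and command are illustrative assumptions), a Dockerfile bakes all dependencies into the image at build time, so the deployed container needs no further setup:

```dockerfile
# Hypothetical example: everything the app needs is installed at build time,
# so a running container requires no additional installation or configuration.
FROM python:3.8-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# The image is fully self-contained once built.
CMD ["python", "app.py"]
```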
It helps with scaling too. If you’re using Kubernetes or Docker Swarm you can configure it to auto-scale – under high traffic these containers scale automatically, and the start-up time is significantly less than a VM starting up. Once usage reduces, it automatically scales back down. It enables usage-based, on-demand scaling instead of having to plan capacity upfront.
The thresholds are of course configurable too, so you can optimize your resources consistently.
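As a sketch of the kind of configurable thresholds described above (the deployment name, replica counts, and CPU target are illustrative, not from the interview), a Kubernetes HorizontalPodAutoscaler manifest might look like:

```yaml
# Hypothetical example: scale a deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Kubernetes adds pods when average utilization stays above the target and removes them as traffic drops, which is the usage-based scaling described above.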
Gemma: What’s a favorite project you’ve worked on during your time at digitalML?
Anirban: I’ve been heavily involved in the migration of our own ignite platform from stand-alone Docker containers to Kubernetes. We were running most of our SaaS customers on these stand-alone containers, and so we weren’t getting any of the nice features I’ve just spoken about!
Migrating most of our customers to Kubernetes has had another benefit too – it’s helped us mature our product. Doing so helped us identify some potential improvements to the platform which had previously been overlooked. That was a great opportunity to find and apply genuine solutions, making ignite even better and improving our own internal processes.
The security benefits and challenges of DevOps tools
G: Security is naturally a big issue for large enterprises; what implications do DevOps tools have on this?
A: There’s no doubt that security has always been a big concern for large enterprises. For example, I used to work for a payments company, so of course the financial transactions taking place on those servers needed to be really secure.
When you’re hosting things on the cloud, security is a big concern – it’s shared infrastructure at the end of the day.
If we were deploying VMs, we’d use something like CIS-CAT, a benchmarking tool, to ensure the server hardening had been done correctly. When we moved to Puppet, it would apply those CIS benchmark standards and run the scans automatically. The same goes for containers: the base image you’re using needs to be fully compliant and pass a certain benchmark score.
There’s security at different levels, for example:
- Server hardening – if someone gains unauthorized access to the server, how much damage can they cause?
- Network security – How secure is the network itself? Can someone gain unauthorized access to your network and reach something they aren’t meant to? Is your network protected against a DDoS attack that could grind your application to a halt? Is your database open to the network? Your application server needs access to the database, but can any other server or container talk to the database directly and retrieve information?
- Storage security – Is your data in your database encrypted? If your storage is stolen/copied can someone recover the data?
With DevOps tools, on the other hand, you apply your policies upfront – define them once and they’re applied automatically during provisioning. If you then need to update a policy, it updates everywhere automatically too, while remaining fully compliant with ITIL practices. And because it’s automated, you’re excluding human error, which is a big plus.
G: What are some of the security risk factors with using DevOps tools?
A: The problem with new technologies is that security quite often isn’t up to scratch from the outset – it gets locked down later, once the tech is established, and is constantly maturing. That’s why we sometimes see the largest organizations take a conservative stance – the confidence to move is low.
G: What’s your favorite ignite feature, and why?
A: I’m really excited for the new version of ignite and the value it brings to reusability, by organizing your APIs and microservices as bundle-ready digital building blocks.
On communication internally and with partners
G: What’s the best part about working at digitalML?
A: The team is great. We’re very communicative within the Engineering team, which helps things get discussed and done really quickly. Sometimes, when you have larger separate teams (e.g. DevOps, Front end, Back end, support), the communication between those teams breaks down, even though they’re interdependent.
There’s also a great issue-raising process which gives clarity on priorities throughout the company as a whole when it comes to our product.
G: Any other advice for our customers?
A: It would definitely be for customers to provide as much insight into their security restrictions as possible – things like what you’re allowed/not allowed to do, and why. A lot of the time that visibility helps us make sure what we’re delivering with the ignite platform is fully compliant with your policies.
About Anirban
Anirban is a Senior DevOps Engineer here at digitalML. With over 8 years’ experience across DevOps, cloud hosting, and automation, he uses his expertise to ensure a highly available, optimized infrastructure for our ignite platform and our customers.
A big thanks to Anirban for sharing his views this time around!