Cloud Managed Services: Deployments and Managed Services

Lee Hylton / August 2, 2018

 Can I get a deployment to production, please?

This isn’t exactly how most tickets start, but it is a fair summary of most client requests at their most basic. The request sounds simple, yet it involves several complicated mechanics that are refined over time. At the end of the day, the Holy Grail is a process that is as effortless as possible and weighted with trust in your team.

As a managed services team backed by engineers who help bring customers into best practices, we need to be able to execute deployments. They need to be fast, reliable, and simple, particularly given the wealth of options and the efficiency and cost savings inherent in Amazon Web Services. The cloud makes executing with simplicity far easier than on-premise or hosted infrastructure does. For most enterprises, anything that saves time and resources, compute or human, is a win.

Some customer builds we have worked on had already adopted Jenkins; on others, we chose to incorporate it. Jenkins offers several appealing reasons for inclusion in a workload: it is widely adopted, highly flexible, has an enormous number of plugins, and is designed with the cloud in mind. That last part, “designed with the cloud in mind,” really makes life easier. In my varied experience, the difference could convert even the most staunch holdouts.

Jenkins provides a great way to manage cost under variable load by auto scaling build agents during periods of demand. In an on-premise or managed hosting environment, this would be neither simple nor efficient. And because of the wide variety of plugins already available, you rarely need to purchase or develop your own. While writing your own automation is still necessary and should happen, there is no need to reinvent the wheel. This is just one of the many ways to automate scalability while saving money. For even greater efficiency, we often work with clients to scale their build cluster with Docker and Kubernetes, but I digress on that point (we currently ARE, in fact, building an implementation of such a cluster, and will write on its success after some trials and tweaks).
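The scale-out decision behind that cost management can be sketched in a few lines. This is a hypothetical illustration of the logic, not the actual algorithm any particular Jenkins plugin uses; the function name, thresholds, and cap are all illustrative:

```python
# Hypothetical sketch of an auto-scaling decision for build agents:
# launch more workers when the queue backs up, capped to control cost.
# All names and defaults here are illustrative, not from a real plugin.

def agents_to_add(queued_builds: int, idle_agents: int,
                  running_agents: int, max_agents: int = 10) -> int:
    """Return how many worker agents to launch for the current queue."""
    # Idle agents absorb queued work first; only the overflow needs new capacity.
    unmet_demand = max(0, queued_builds - idle_agents)
    # Never scale past the cost cap.
    headroom = max(0, max_agents - running_agents)
    return min(unmet_demand, headroom)
```

With five builds queued, one idle agent, and eight agents already running against a cap of ten, the overflow is four builds but the headroom is two, so only two agents launch. The same calculation on quiet days returns zero, which is exactly where the on-premise comparison falls down: idle hardware there costs the same as busy hardware.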

A segue back to how I opened this article: the customer request for a deployment. There may not be much difference between on-premise and AWS in this regard, but Jenkins has helped my managed services team simplify most complex deployment requests. The time required for, and the errors introduced by, manual deployments can be a frustrating proposition for everyone involved. Jenkins is flexible about incorporating other scripting such as Ansible, bash, Python, and the AWS CLI, to name a few. We are able to break deployments down into smaller components and execute them individually, because deployments aren’t always the same, even when the destination and infrastructure are. The idea here is what I like to call “button-izing”: quite simply, make it a button, because everyone loves an EZ button! Such a creation makes workloads executable by anyone and provides easy-to-read output. That output can then be attached to an escalation or resolved on the spot by the person who pushed the button.
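The “button-izing” pattern above, small named steps with captured, readable output, can be sketched with nothing but the standard library. This is a minimal illustration of the idea, not our actual tooling; the step names and commands are placeholders:

```python
import subprocess

# Minimal "EZ button" sketch: a deployment is a list of small, named
# shell steps. Each step's output is captured so anyone can run the job
# and hand the log to an engineer if something fails. The commands here
# are illustrative placeholders, not a real pipeline.

def run_deployment(steps):
    """Run each (name, command) step, collecting human-readable output."""
    log = []
    for name, cmd in steps:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        status = "OK" if result.returncode == 0 else "FAILED"
        output = result.stdout.strip() or result.stderr.strip()
        log.append(f"[{status}] {name}: {output}")
        if result.returncode != 0:
            break  # stop at the first failure; the log shows exactly where
    return log
```

Because each step is small and named, a failed run stops at the broken step and the log reads like a checklist, which is what makes the output useful to someone who didn’t write the automation.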

An oft-overlooked benefit of this strategy: security. Keys aren’t stored on instances like a jump box, or locally on anyone’s machine. That last part is a big deal, really. Having a pile of keys for different clients and environments scattered across a department’s local computers is a profoundly bad idea. The possible fallout from a stolen laptop (not encrypted, no remote-wipe software, etc.) or a slip-up by a careless or ill-intentioned employee would be catastrophic. Reducing the footprint of access through stricter control measures, while inconvenient, is necessary.
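The pattern that replaces those local key files is runtime injection: the CI system hands the credential to the job as it runs, and nothing lands on disk. A minimal sketch of the consuming side, assuming a hypothetical `DEPLOY_SSH_KEY` environment variable injected by the job:

```python
import os

# Sketch of consuming a credential injected at run time (as CI
# credential-binding features do) instead of reading a key file
# stored on a laptop or jump box. DEPLOY_SSH_KEY is a hypothetical
# variable name chosen for this example.

def load_deploy_key(env_var: str = "DEPLOY_SSH_KEY") -> str:
    """Read a deploy credential from the environment, failing loudly if absent."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; run this step from the CI job that injects it")
    return key
```

Failing loudly when the variable is missing is the point: the script simply cannot be run from a random laptop with a leftover key file, which is the access-footprint reduction described above.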

The TL;DR: supporting customer deployments on AWS infrastructure is easier largely because automation makes scalability simple. Earn customer gratitude with security and with efficiency in both time and infrastructure.