Why Kubernetes?

Lee Hylton / March 30, 2020

In an earlier blog post, we described how virtualization brought cloud technology into the mainstream. It offers computing on demand, allowing enterprises to outsource (to varying degrees) their computing needs to cloud providers. For their part, cloud providers use virtualization to dynamically share large numbers of commodity servers among many users, with great scalability and elasticity.

In another blog post, we described how software development has evolved from large monolithic applications to Agile and DevOps-based techniques that favor a microservice-based approach. Microservices are small, self-contained units of functionality that work in concert to create an application. And containers, as small, self-contained, and portable units of code, form the natural deployment units for microservices. Containerized microservices, deployed on private or public clouds, can then scale elastically as demand fluctuates. This improves resource utilization, reducing both the capital and operating expenses (CAPEX and OPEX) of running an enterprise’s IT infrastructure.

The need

However, the efficiencies from such a transition are easily lost if developers and operations staff must spend their time understanding the distributed computing environment where their containers are deployed, and then managing and maintaining it. In this context, it is worth remembering that containers are deployed in the hundreds to thousands, and managing them manually is non-trivial, if not impossible. Imagine having to know where a large number of containers are deployed, scale and load balance them, upgrade them when needed, create new ones when a server fails, and so on. Some form of automated container management tool is therefore essential if the advantages of this approach are to be realized.

Kubernetes is now the most widely used tool to manage a workload of containerized microservices. The rest of this post provides a high-level view of Kubernetes and how it works. As with any new topic, there is a lot of jargon to master, and Kubernetes is no exception. We’ll try to break it down in simple terms, in a way that hopefully lets you see the big picture without overlooking essential details.

Understanding Kubernetes

The official Kubernetes documentation, maintained by the Cloud Native Computing Foundation, states: “Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.” We’ll probe the key words in that definition – automating deployment, scaling, and management – in what follows.

The easiest way to understand Kubernetes is as a set of abstractions, each of which adds a level of indirection that shields developers from the underlying computing and storage resources. Instead of working directly with servers, whether on-prem machines or virtual machines (VMs) in a private or public cloud, the DevOps team thinks in terms of the computing power needed by their microservice-based application. The underlying distributed computing system that Kubernetes manages takes care of the actual computing resources needed to achieve this.

Kubernetes works with a top-level abstraction called a cluster, a group of computing resources – RAM, storage and CPUs – tied together and managed as one. Each computing resource within a cluster is called a node. A node is also an abstraction, because it can be realized as a developer’s laptop or workstation, an on-prem server, or a virtual machine (VM) in a private or public cloud. In fact, once a containerized microservice is deployed into production, it can run on any node in the cluster, all managed by Kubernetes. One of these nodes is privileged: the master node, the “brains” of the cluster. It hosts the administrative interface through which all commands are entered, and it controls and schedules work on the other nodes, called worker nodes.
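To make this concrete, here is a minimal sketch of a cluster layout. The format below belongs to kind, one popular tool for running a Kubernetes cluster locally on a laptop; the three-node shape (one master, two workers) is purely illustrative.

```yaml
# Hypothetical cluster topology in the config format of the 'kind' tool:
# one control-plane (master) node and two worker nodes, all running locally.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane   # the master node: admin interface, scheduler, etc.
  - role: worker          # worker nodes run the application workloads
  - role: worker
```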

For the next abstraction, a node contains multiple entities called pods, with each pod containing one (or, optionally, more) containers. Kubernetes does not manage applications, microservices, or containers directly. After all, there may be hundreds of applications to deploy, likely relying on an equally large number of containerized microservices. Instead, Kubernetes introduces the pod abstraction, which provides an isolated computing environment for its constituent containers.

Kubernetes manages pods. The pod is the common deployment unit for all containerized microservices and serves as the unit of scaling. This additional level of abstraction caters for use cases where two (or more) containers need to work closely together to achieve some function. By placing these containers in the same pod, one can guarantee that they will always be deployed together, use the same underlying resources, and scale as one unit. A minimal sketch of such a pod follows.
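Here is what a two-container pod might look like as a Kubernetes manifest. The names, images, and the log-forwarding sidecar are all hypothetical; the point is that both containers are declared in one pod and share a volume.

```yaml
# Illustrative pod: a web container and a log-forwarding sidecar deployed
# as a single unit. All names and images here are hypothetical examples.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  volumes:
    - name: logs                 # scratch space shared by both containers
      emptyDir: {}
  containers:
    - name: web                  # the main microservice container
      image: nginx:1.17
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-forwarder        # sidecar that reads the web container's logs
      image: busybox:1.31
      command: ["sh", "-c", "touch /var/log/nginx/access.log; tail -f /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
```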

Automated deployment

Pods are not created directly by developers. Instead, there is yet another abstraction called a deployment. A deployment is a descriptor, written in a simple, human-readable format called YAML, that provides configuration information such as which containerized microservice each pod contains and how many replicas of that pod Kubernetes should create. The descriptor can also include authentication and authorization policies governing access to the containers. A sketch of such a descriptor follows.
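A minimal deployment descriptor might look like the following. The service name, label, image, and replica count are all hypothetical; the structure is what Kubernetes actually consumes.

```yaml
# Illustrative deployment descriptor. Everything named here (orders-service,
# the 'app: orders' label, the image) is a hypothetical example.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3                     # Kubernetes keeps exactly three pods running
  selector:
    matchLabels:
      app: orders                 # which pods this deployment manages
  template:                       # the pod each replica is created from
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example.com/orders:1.0   # the containerized microservice
          ports:
            - containerPort: 8080
```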

It is Kubernetes’ job to determine where these replicas run, i.e., how to distribute the pods across the nodes of the cluster, deploy them automatically, and load balance incoming requests for the container’s service across the replicas.
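In practice, the load-balancing piece is usually expressed as one more Kubernetes object, a Service, which gives the replicas a single stable address. A minimal sketch, matched to the hypothetical deployment above:

```yaml
# Illustrative Service: spreads traffic across every pod whose labels match
# the selector, i.e., the three hypothetical 'orders' replicas above.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders                  # route to pods carrying this label
  ports:
    - port: 80                   # the stable port clients connect to
      targetPort: 8080           # the container port traffic is forwarded to
```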

Automated scaling and management

To summarize the previous section: DevOps engineers create a deployment descriptor that states the needs of the containerized application. Kubernetes uses this descriptor to deploy the containerized microservices as pods on the nodes of the cluster. It then monitors the deployment to ensure that the state of the system matches the deployment descriptor at all times, automatically adding, moving, or removing pods as needed.

If a node fails, Kubernetes automatically recreates its pods on another node within the cluster, so that the number of replicas of each pod stays as specified in the deployment descriptor. Similarly, if a particular microservice needs to be scaled up, the DevOps engineer updates the descriptor with the new number of replicas, and Kubernetes creates and deploys the additional pods within the cluster. If a containerized microservice needs to be upgraded, Kubernetes creates the appropriate number of new pods with the upgraded containers and distributes them across the nodes of the cluster, while gracefully retiring the older version. Both operations are sketched below.
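Both of these operations amount to edits of the same hypothetical descriptor shown earlier: raise the replica count to scale out, or change the image tag to trigger a rolling upgrade.

```yaml
# Scaling and upgrading the hypothetical 'orders-service' deployment are
# both just edits to its descriptor; Kubernetes reconciles the difference.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 10                   # scaled up from 3; Kubernetes adds seven pods
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1          # retire old pods one at a time
      maxSurge: 1                # allow one extra pod during the rollout
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example.com/orders:1.1   # new tag triggers a rolling upgrade
          ports:
            - containerPort: 8080
```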

Open Source

Kubernetes grew out of Google’s experience running containers internally at massive scale on its Borg cluster manager. Google open-sourced the project and donated it to the Cloud Native Computing Foundation, which maintains and evolves it as open source software. A large, worldwide group of developers works on its enhancements. It is now at version 1.18.

Summary

It is a non-trivial task to manage hundreds to thousands of containerized microservices in a production-grade deployment. An automation tool such as Kubernetes is essential to make such deployments possible. Without such a tool, containers would never have achieved the penetration they have.

Kubernetes is now the most widely used container management tool for microservice-based applications. It is used by enterprises of all sizes in many different industries, as these case studies show.

Blue Sentry has particular expertise in Kubernetes and microservices deployment. We can help you build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. After deployment, our Day 2 services can support your environment with ongoing operational managed services.