
Microservices and Testing Environments

Dennis Webb / March 12, 2020

"Everybody has a testing environment. Some people are lucky enough to have a totally separate environment to run production in." (@stahnma)

Luckily we live in the era of cloud computing, and having separate deployment environments isn't as cost-prohibitive as it used to be. Twenty years ago, a separate testing environment meant doubling your hardware costs, a luxury most companies did not have. And even if you could afford the hardware, you had to hire more engineers to maintain the larger infrastructure. Today it's a lot easier to have separate environments for all sorts of uses: development, testing, staging, etc. Tools like Terraform make deploying and maintaining multiple environments easier, requiring fewer engineers. But there is still the expense of extra EC2 instances and RDS databases. For some of our clients, over 70% of their AWS bill is for non-production infrastructure.

Over the past three years, the push for containers has been huge. Almost 80% of our customers are now moving to running their applications inside Docker containers. This is a good thing: Docker helps mold applications into Twelve-Factor applications, and it promotes immutable artifacts and treating servers as cattle rather than pets. Kubernetes adoption among these customers is almost 100%. We at Blue Sentry are excited about the Kubernetes adoption rate because it is the best option for running containers at scale.

Resource Sprawl

When the push for multiple deployment environments happened, the general consensus was that these environments must be completely separate. For a lot of businesses, this meant each environment lived not only on separate EC2 instances in its own VPC, but often in its own AWS account as well. While I'm all for production being as far from non-production as possible, having a separate account for development, QE testing, and UAT can start to complicate things. Even with tools like Terraform, spinning up a new environment means creating many new resources to manage. That in turn adds friction to creating more environments. Testing is something every company can improve on, and any friction that discourages better testing is a bad thing.

Kubernetes Namespaces

With Kubernetes, you can avoid all of this resource sprawl by leveraging namespaces. Namespaces have multiple uses in the Kubernetes ecosystem. Out of the box, Kubernetes comes with pre-defined namespaces: kube-system, kube-public, and default. Resources created by the Kubernetes system go into the kube-system namespace, while the default namespace is the place for user-created resources. As a general rule, I try not to put anything into the default namespace if possible.
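
As a concrete sketch, a dedicated namespace is a one-resource manifest; the name dev and the environment label here are illustrative choices, not prescribed names:

```yaml
# A minimal Namespace manifest; "dev" and its label are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    environment: dev   # a label that policies and tooling can select on
```

Running kubectl create namespace dev achieves the same thing imperatively.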

The textbook use of namespaces is to create isolation in a multi-tenant cluster, where the namespaces on a cluster might be ClientA, ClientB, and ClientC. At Blue Sentry, we've taken the design approach of separating installed software into unique namespaces: logging software (Elasticsearch, Kibana, and Fluentd) goes into a logging namespace, and Istio, by design, is installed into the istio-system namespace. The primary goal is to keep software separated for organizational purposes.

We also leverage namespaces to create separate deployment environments. On many of our clients' Kubernetes clusters you will find namespaces named dev, qa, uat, staging, test, and of course prod. This allows for better utilization of resources and lets you provision new environments at practically no extra expense (depending, of course, on environment size and requirements), as sketched below.
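
One way this plays out in practice, sketched here with a hypothetical app.yaml and image name: leave metadata.namespace out of the manifest and pick the environment at apply time.

```yaml
# Sketch: one manifest, many environments. With no metadata.namespace set,
# the target namespace is chosen at apply time, e.g.:
#   kubectl apply -f app.yaml --namespace dev
#   kubectl apply -f app.yaml --namespace qa
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app                  # namespace supplied per environment at apply time
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example.com/app:1.0   # hypothetical image
```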

Privacy

The primary reason people separate environments into individual clusters, VPCs, or even AWS accounts is privacy. It's a general rule, and a good one, that individual deployment environments do not talk to each other: dev should not communicate with qa, and most definitely neither dev nor qa should ever communicate with production.

How is this separation achieved on a shared Kubernetes cluster? To prevent communication between environments, we define NetworkPolicies that restrict traffic across namespaces. Without them, it would be possible for dev to communicate with qa. Expanding on this concept, some of our clients also use NetworkPolicies to separate a namespace into layers: frontend, experience, process, system, etc., adding another level of separation between their microservices.
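
A minimal sketch of one such policy, assuming a namespace named dev; note that a CNI plugin that enforces NetworkPolicy (Calico, for example) is required for it to take effect:

```yaml
# Restrict ingress for every pod in dev to traffic from dev itself.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: dev
spec:
  podSelector: {}            # select every pod in the dev namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # a bare podSelector matches only pods in this namespace
```

The same policy, stamped out once per namespace, keeps qa, uat, and the rest equally isolated.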

Kubernetes also has Secret resources, used to hold sensitive information such as API keys and database credentials consumed by the running containers. These are typically unique to each environment. The good news is that Kubernetes, by design, does not allow cross-namespace referencing of Secrets: a resource in the dev environment cannot read a production secret.
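
For illustration, here is a hypothetical db-credentials secret as the dev environment might define it; prod would hold its own copy with different values:

```yaml
# Hypothetical per-environment secret. A pod in dev can only mount or
# reference Secrets that live in dev, so the prod copy is unreachable.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: dev
type: Opaque
stringData:
  DB_USER: dev_app                  # illustrative values only
  DB_PASSWORD: not-a-real-password
```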

Node Groups

Another concern with sharing a cluster is: "What if an application in dev goes sideways and crashes the underlying instance?" This is a valid concern, one we've learned about the hard way and have taken many measures to prevent.

Node groups are a method of labeling and allocating groups of host instances to different workloads, commonly used to prevent non-production services from affecting the performance of production workloads. An easy way to separate them is to label your Kubernetes nodes with workload=production or workload=test, which lets you specify which nodes run which software.
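
A sketch of how that label gets consumed, with hypothetical deployment and image names; the nodes themselves would first be labeled with kubectl label node <node-name> workload=production:

```yaml
# Pin a production Deployment onto nodes labeled workload=production.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      nodeSelector:
        workload: production       # schedule only onto production nodes
      containers:
        - name: api
          image: example.com/api:1.0   # hypothetical image
```

For stricter separation, tainting the production nodes keeps any pod without a matching toleration off them entirely.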

At Blue Sentry, we always recommend running a separate cluster for your production resources. One of the many benefits is being able to test upgrades to system software, for example installing a new version of Istio in a test environment before applying it in production. It also removes the very legitimate concern of alpha-quality software having any sort of effect on production.

Another benefit is resource segregation. There is a common belief that containers prevent one workload from affecting another. This is not the case: a container that is not kept in check can consume all of the CPU and/or memory on its underlying instance, causing cascading failures that jump from node to node and ultimately affect the entire cluster.
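
Within a shared cluster, the standard guard against this is resource requests and limits; a minimal sketch with illustrative names and values:

```yaml
# Keep a container "in check": it is throttled at its CPU limit and
# OOM-killed at its memory limit instead of starving the whole node.
apiVersion: v1
kind: Pod
metadata:
  name: worker
  namespace: dev
spec:
  containers:
    - name: worker
      image: example.com/worker:1.0   # hypothetical image
      resources:
        requests:
          cpu: 250m          # what the scheduler reserves for this container
          memory: 256Mi
        limits:
          cpu: "1"
          memory: 512Mi
```

At the namespace level, a ResourceQuota can additionally cap what an entire environment is allowed to consume.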

Cost Comparison

Now that we’ve covered the basics of how we can operate multiple environments in the same Kubernetes cluster, let’s see how the numbers stack up. 

One more saving worth noting up front: a single cluster needs just one load balancer, where three separate clusters need three.

If you were to ignore namespaces and rely on hardware separation with VMs, you would pay for three of everything. Using our standard client separation mentioned earlier, we'll assume the loads are the same and run three Kubernetes clusters with three nodes each. To ensure adequate network throughput, and to avoid worrying about CPU throttling, we'll pay a few cents more per hour for a general purpose m5.large instance type. Our monthly breakdown looks like:

9 × m5.large EC2 instances: $635.00
18 × EBS volumes: $135.00
3 × ELB: $54.00
500 GB data transfer out: $45.00
Total: $869.00

Our consolidated alternative uses the same instance type and size, with EKS managing the control plane:

6 × m5.large EC2 instances: $422.00
12 × EBS volumes: $45.00
EKS control plane: $72.00
500 GB data transfer out: $45.00
Total: $584.00

That's a difference of $285 per month: the consolidated setup costs about 67% of what the separated one does, a savings of roughly a third. This is a small setup, so the dollar figures aren't massive, but at scale that percentage is substantial.