5 Things You Need to Know About Adopting Kubernetes

Kubernetes has become the go-to container orchestration platform for enterprises. Since the COVID-19 pandemic, Kubernetes usage has grown by over 68% as tech architects look to it to handle the increased delivery demands of the post-pandemic age.

Why now? Kubernetes offers faster deployment and better portability, and it helps teams keep pace with modern software development requirements.

Contentstack has adopted Kubernetes as a platform to build the next generation of microservices that power our product, and it offers incredible benefits.

But adopting a transformational technology like Kubernetes is never straightforward. We faced our own set of challenges while implementing it, and we learned quite a few lessons in the process. In this piece, we share what we learned to help you avoid common pitfalls and position yourself for success.

Kubernetes is Not Just an Ops Concern

With the rise of DevOps, more companies are moving toward the “you build it, you run it” approach. The operations team is no longer the only team responsible for maintaining the app. Developers get more operational responsibilities, bringing them closer to the customers and improving issue resolution time.

However, not all developers have exposure to Kubernetes, which can leave them unable to optimize their applications for the platform. Common struggles include:

  • Inability to implement “Cloud Native” patterns
  • Difficulty in communicating and implementing scaling requirements for services
  • Inflexible configuration management for services
  • Inadvertent security loopholes

Developers also struggle to debug issues in production, which can lead to catastrophic outcomes for organizations targeting strict availability SLAs.

We realized this when we started using Kubernetes at Contentstack, and we closed the gap by investing in upskilling our developers in Kubernetes. This has improved our SDLC (Software Development Life Cycle) lead time, and we see better communication between developers and operators along with a clear shift toward “Cloud Native” solutions.

Pay Close Attention to Microservice Delivery

“Microservice delivery” refers to testing, packaging, integrating and deploying microservices to production. Streamlining delivery is an important aspect of managing microservices: simply moving microservice deployments to Kubernetes will not give you immediate benefits if those deployments are not automated. The following are some first steps toward an efficient delivery pipeline:

  • Package your microservices: While containers allow you to bundle the application code, you still need an abstraction for managing the application’s Kubernetes configuration. Kubernetes configuration is required for defining the microservice image, ports, scaling parameters, monitoring, etc. At Contentstack, we use Helm as the package manager.
  • Implement a CI/CD pipeline to automate delivery: A CI/CD (Continuous Integration/Continuous Delivery) pipeline automates the entire process from building and testing apps to packaging and deploying them. Read more about Contentstack’s GitOps-based approach to CI/CD.
  • Automate tests for your application: An ideal test suite gives you fast feedback while being reliable at the same time. You can achieve this by structuring your tests as a pyramid. These automated tests need to be integrated into the CI/CD pipeline so no defective code makes its way into production.
  • Create a strategy for potential rollbacks: Failures are part of software development, and your strategy for dealing with them determines the reliability of your services. Rolling back is one such strategy: re-deploying the last working version when a release fails. A battle-tested rollback procedure, wired into the CI/CD pipeline, needs to be in place so you can handle deployment failures gracefully.
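The packaging and rollback steps above can be sketched with a minimal Deployment manifest of the kind a Helm chart would template for each microservice. The service name, image, registry and resource figures here are hypothetical, not Contentstack’s actual configuration:

```yaml
# Hypothetical Deployment for a microservice called "orders-api"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders-api
  strategy:
    type: RollingUpdate          # roll out new versions gradually
    rollingUpdate:
      maxUnavailable: 1          # keep most replicas serving during a deploy
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          image: registry.example.com/orders-api:1.4.2
          ports:
            - containerPort: 8080
          resources:             # scaling parameters the team can tune per service
            requests:
              cpu: 100m
              memory: 128Mi
```

With a Deployment like this in place, a failed release can be reverted with `kubectl rollout undo deployment/orders-api`, or with `helm rollback` when Helm manages the release.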

Secure Your Workloads

It’s no secret that containerization and microservices bring agility. However, security shouldn’t take a backseat. A recent survey found that about 94% of respondents experienced at least one security incident in their Kubernetes environments in the last 12 months. Most of these security issues were due to misconfiguration or manual errors. Security is one of the top challenges for the adoption of Kubernetes in the enterprise. We took several steps to ensure that our applications on Kubernetes clusters are secure:

  • Scan container images: Docker images are atomic packages of the microservice. These images are a combination of the application code and the runtime environment. These runtimes should be scanned regularly for Common Vulnerabilities and Exposures (CVEs). We use Amazon ECR’s Image Scanning feature to enable this.
  • Secure intra-cluster communication: Microservices running in a cluster need to talk to each other, and they may run across multiple nodes. The communication between them must be encrypted, and each service must be able to talk only to authorized peers. mTLS (mutual TLS) is a great standard for encrypting traffic and authenticating clients. At Contentstack, we use Istio, a service mesh, to automate the provisioning and rotation of mTLS certificates.
  • Manage secrets and credential injection: Microservices need credentials injected into them to connect to databases and other external services, and you must manage these credentials carefully. There are several techniques and tools for this, including version-controlled sealed secrets and HashiCorp Vault. Automating credential management also improves the reliability of your deployments.
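As an illustration of the mTLS point above, Istio can enforce encrypted, authenticated service-to-service traffic mesh-wide with a single resource. This is a generic sketch, not Contentstack’s actual policy:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying in the mesh root namespace makes this mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plain-text traffic between sidecar-injected pods
```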

Invest in Effective Monitoring for Your Services

According to the “Kubernetes and Cloud Native Operations Report 2021,” about 64% of respondents said maintenance, monitoring and automation are the most important goals for their team to become cloud-native. Monitoring is an often overlooked aspect of operations, but it is crucial, especially when moving to platforms like Kubernetes. While Kubernetes may make it very easy to run your services, it may trick you into believing everything will keep working as expected. The fact is, microservices may fail for a variety of reasons. Without effective monitoring in place, your customers may alert you of degraded performance, instead of you catching it first. Contentstack has several measures in place to monitor our services:

  • Use centralized logging tools: When using microservices, having a centralized logging tool is invaluable. It helps developers and operations teams debug and trace issues across several microservices. Without access to centralized logging, you will have to spend a lot of time manually correlating and tracking logs.
  • Create monitors and alerts: For operations, there are several SLAs (Service Level Agreements) and SLOs (Service Level Objectives) that are monitored. Getting alerts (on messaging tools like Slack) on degraded performance will help you take timely action. It will also help you predict and prevent potentially catastrophic issues.
  • Create monitoring dashboards: Comprehensive monitoring dashboards give you a bird’s-eye view of the health of the microservices. Dashboards are a perfect starting point for daily monitoring. At Contentstack, each team that manages a fleet of microservices has its own dashboard. Team members routinely check these dashboards for potential issues. Since both developers and operations teams rely on these dashboards, we can correlate information from both application logs and infrastructure monitors on the same dashboard.
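To make the alerting idea concrete, here is a hypothetical alert rule. It assumes the Prometheus Operator is installed — the post does not name a specific monitoring stack, and the service and metric names are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: orders-api-alerts
spec:
  groups:
    - name: orders-api
      rules:
        - alert: HighErrorRate
          # fire when more than 5% of requests return a 5xx, sustained for 10 minutes
          expr: |
            sum(rate(http_requests_total{app="orders-api", status=~"5.."}[5m]))
              / sum(rate(http_requests_total{app="orders-api"}[5m])) > 0.05
          for: 10m
          labels:
            severity: page
          annotations:
            summary: "orders-api 5xx error rate above 5%"
```

An alert like this can then be routed to a messaging tool such as Slack via Alertmanager, matching the alerting workflow described above.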

Take Advantage of Being ‘Cloud Native’

Kubernetes is an all-encompassing platform that offers many abstractions for solving common infrastructure problems. While these abstractions target infrastructure, they can also solve application-level problems. Being “Cloud Native” means combining such patterns and techniques with cloud-native tools. Here are some examples:

  • Sidecar pattern: The sidecar pattern is the foundation of most service mesh technologies. It involves having a companion application (injected automatically) along with the main application container. In service meshes, it is used for routing and filtering all traffic to the main application container. At Contentstack, we have leveraged this pattern for distributed authorization across the cluster. Each application communicates with an authorization sidecar to validate incoming requests.
  • Kubernetes jobs: Your application may have to process one-off tasks that are not in sync with the request-response cycle, such as batch processing jobs. In the past, we depended on a separate service that kept running in the background looking for new tasks to process. Kubernetes supports “Jobs” out of the box, which let you run such tasks as pods. At Contentstack, we use Jobs for running database migrations before releasing a new version of an application on the cluster.
  • Health probes: Kubernetes has a built-in health check system for your services: it detects when a service’s health is not as expected and, beyond surfacing that status, can automatically restart the service. Read more about how Contentstack configures health probes for its services.
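The Jobs pattern above can be sketched as a one-off database-migration Job. The name, image and migration command are hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: orders-api-migrate-1-5-0
spec:
  backoffLimit: 2              # retry a failed migration at most twice
  template:
    spec:
      restartPolicy: Never     # let the Job controller own retries
      containers:
        - name: migrate
          image: registry.example.com/orders-api:1.5.0   # same image as the app release
          command: ["./manage", "migrate"]               # hypothetical migration entrypoint
```

A release pipeline can then block on `kubectl wait --for=condition=complete job/orders-api-migrate-1-5-0` before rolling out the new application version.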

At Contentstack, we strive to continuously learn and adopt new practices and technology to stay ahead of the curve, and we are glad to share what we learn with the rest of the world. Adopting Kubernetes is a slow but rewarding journey that allows you to take advantage of the latest best practices for creating resource-efficient, performant and reliable infrastructure.