Kubernetes: An Introduction to the Open-Source Container Orchestration Tool

Hemanthhari2000
10 min read · May 21, 2023

Understand the concept of container orchestration and its importance in deploying modern microservices applications.


Introduction

With modern web services, users expect applications to be available around the clock, while developers continuously deploy new versions of those same applications. Running an application under such constraints is a tedious task. Containerization solves part of the problem: you package your software into containers and ship it anywhere, releasing the application and rolling out updates with no downtime. Kubernetes then makes sure your containerized application runs in its cluster with all the resources it needs. In short, Kubernetes is a platform designed to orchestrate containers efficiently.

Kubernetes is a production-ready, open-source system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google and written in Go, and is now maintained by the Cloud Native Computing Foundation. This makes it a natural fit for microservices applications. In this article we will look at Kubernetes in detail and deploy a simple microservices posts application to a Kubernetes cluster.

Overview

Let’s look at topics covered in this article.

  • What is Kubernetes?
  • Why do we need Kubernetes in the first place?
  • Architecture of Kubernetes
  • Prerequisites
  • Implementation
  • Conclusion

What is Kubernetes?

Kubernetes is an open-source, portable, extensible, production-grade container orchestration system for automating the deployment, scaling, and management of containerized applications. It supports multiple deployment environments: on-premises, cloud, and even virtual machines. Kubernetes can manage the execution of containerized applications across a network of machines. It is a platform created to manage the full life cycle of containerized applications and services, using techniques that offer predictability, scalability, and high availability. With Kubernetes, you can specify how your apps should operate and communicate with one another and with the outside world. “k8s” is a common alias for Kubernetes, since there are 8 letters between the “k” and the “s”; I’ll be using the two interchangeably in this article.

Kubernetes provides a lot of functionality out of the box: service discovery and load balancing, storage orchestration, automated rollouts and rollbacks, and automatic bin packing. It is self-healing: k8s restarts failed containers, replaces them, and kills containers that do not respond to user-defined health checks. It also handles secret and configuration management, letting you store and manage sensitive data like passwords, SSH keys, and auth tokens. Secrets and application configuration can be deployed and updated without rebuilding the container image and without exposing the secrets in your stack configuration.
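As a quick illustration of that last point, here is a minimal sketch of a Secret; the name app-secrets and the key DB_PASSWORD are hypothetical, purely for illustration.

apiVersion: v1
kind: Secret
metadata:
  name: app-secrets        # hypothetical name, for illustration only
type: Opaque
stringData:                # stringData accepts plain values; k8s stores them base64-encoded
  DB_PASSWORD: s3cr3t-value

A pod can then pull that value in through an environment variable (env[].valueFrom.secretKeyRef) instead of baking it into the image.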

Why do we need Kubernetes in the first place?

The need for organizations to meet the requirements of microservice-driven architectures is one of the main factors behind k8s’ meteoric rise in popularity. Kubernetes enables organizations to build independent, modular applications and deploy them in containers, which k8s then manages. In production, k8s runs those containers inside objects called pods. A pod is the k8s object that acts as a wrapper around one or more containers and helps manage them. It is generally recommended to run only one application container per pod to avoid tight coupling. A minimal sketch of a pod follows.
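To make the pod concept concrete, here is a minimal sketch of a pod wrapping a single container. The name and image are placeholders, and in practice you rarely create bare pods directly, since higher-level objects like deployments manage them for you.

apiVersion: v1
kind: Pod
metadata:
  name: posts-pod          # hypothetical name
  labels:
    app: posts
spec:
  containers:
    - name: posts          # one application container per pod, as recommended
      image: nginx         # stand-in image for illustration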

Uses of k8s

Kubernetes helps us solve container management problems. Deploying applications in containers is a fairly easy task; the real problem starts when we have to manage those containers. Say a container stops responding or fails outright: without orchestration, developers are expected to troubleshoot manually. Kubernetes automates this process, restarting the container or spinning up a new pod without any downtime. This lets multiple teams work on individual applications that can be deployed with speed and agility. Kubernetes clusters are highly performant and scalable, and k8s integrates well into CI/CD pipelines, which makes it even more developer friendly.

Architecture of Kubernetes

[Figure: a high-level view of the Kubernetes architecture, showing the master (control plane) node and the worker nodes]

The figure above shows a high-level architecture of Kubernetes. At its core, a k8s cluster consists of two kinds of nodes: master nodes and worker nodes. The full architecture of k8s is pretty overwhelming, so I will simplify it as much as I can.

Let’s look into Master Node.

Master Node

The master node, or control plane, makes global decisions about the cluster: scheduling, monitoring worker nodes and pods, and detecting and responding to cluster events. It runs four key processes:

  • apiserver ( kube-apiserver )
  • schedulers ( kube-scheduler )
  • etcd
  • ctrl mgr ( kube-controller-manager )

apiserver ( kube-apiserver )

The API server exposes the k8s API, acting as the front end of the control plane. It is designed to scale horizontally: you can run several instances of kube-apiserver and balance traffic among them. Because it is the face of the k8s cluster, it is often mistaken for the master itself.

schedulers ( kube-scheduler )

The scheduler continuously watches for newly created pods that have no node assigned and selects a node for each of them to run on. Many factors go into a scheduling decision: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines. A small example of influencing those decisions follows.
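For instance, a pod spec can carry a nodeSelector, and the scheduler will only place the pod on nodes whose labels match. This is a minimal sketch; the disktype=ssd label and the pod name are hypothetical.

apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-pod   # hypothetical name
spec:
  nodeSelector:
    disktype: ssd          # schedule only onto nodes labeled disktype=ssd
  containers:
    - name: app
      image: nginx         # stand-in image for illustration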

etcd

etcd is a distributed key-value store that holds all cluster data and state information. It is the only stateful component in k8s and is often called the cluster’s “source of truth”.

ctrl mgr ( kube-controller-manager )

The controller manager ensures the desired state of the cluster is achieved. Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process. It is a daemon that runs in a control loop. Examples: the replication controller, the endpoints controller, the service accounts controller, and more.

Let’s now look into Worker node.

Worker Node

Node components run on every worker node, maintaining running pods and providing the Kubernetes runtime environment. The main processes are as follows:

  • kubelet
  • kube-proxy
  • container runtime

kubelet

The kubelet is the main agent on the node and is often mistaken for the node itself. It watches the apiserver for work, takes a set of PodSpecs, and ensures that the containers described in those PodSpecs are running and healthy. It reports directly back to the control plane.

kube-proxy

kube-proxy is a network proxy that runs on every node in the cluster. It maintains network rules on the node that implement the Service abstraction, routing traffic to the right pods. (Pod IPs themselves are assigned with the help of the CNI network provider.)

container runtime

The container runtime is responsible for actually running the containers, handling the container abstraction for the node. Kubernetes works with any OCI-compliant container runtime, such as containerd, Docker, and (historically) rkt.

Prerequisites

Let’s look at some prerequisites before we start deploying our posts application into a k8s cluster.

  • Kubernetes (preferably via Rancher Desktop)
  • Docker

Implementation

Let’s start with our posts app, where users can create posts and comment on each individual post. The app is built using a microservices architecture with four main services: posts, comments, event-bus, and query.

  • Posts : a service that handles the creation of posts.
  • Comments : a service that handles the creation of comments on a specific post.
  • Event-Bus : a custom message broker service, built from scratch, that delivers messages/events to the other services.
  • Query : a service that serves the posts and comments that have been created.

NOTE: This is not a step-by-step tutorial on how to build applications in Node, so I will focus on how to deploy the app to a Kubernetes cluster. Please feel free to refer to the source code for more info.

The posts application is already built, and it is essentially a boilerplate: you can take any application and deploy it to a Kubernetes cluster the same way. Its main features are that you can create a post and comment on individual posts. I have used a custom event bus here to keep things simple; you can substitute your own message broker if needed. Let’s see how we can deploy the application to a Kubernetes cluster.

Let’s create the deployment file for the frontend, i.e. the client. Create a file called client-depl.yml inside the infra/k8s folder. Since this file describes a deployment, we use the k8s Deployment object by setting kind: Deployment . The apiVersion is set to apps/v1 , and replicas is set to 1 for now. Make sure the matchLabels under selector matches the label specified under the template ’s metadata , as this is required to match the pods with the deployment. One more thing to take care of: containerize your services and push the images to a registry like Docker Hub. I have already containerized my services and pushed them to Docker Hub, so if you are following my source code you should not have any issues. Once we specify all the necessary fields, the deployment file looks something like this.
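A minimal sketch of client-depl.yml is reproduced below; the image name is a placeholder for your own Docker Hub repository.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client          # must match the pod template's label below
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: <your-dockerhub-username>/client   # placeholder: your pushed image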

After the deployment, we also need a Service, which handles networking between our pods, so let’s write the service file as well. Here I will use ClusterIP , the default Service type, which is used for communication within the cluster itself. (You can expose a Service to the public internet using an Ingress or a Gateway; we will use an Ingress to expose our app later.) The service should look something like this.
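A sketch of the accompanying ClusterIP service, assuming the client container listens on port 3000:

apiVersion: v1
kind: Service
metadata:
  name: client-srv
spec:
  type: ClusterIP          # the default type; in-cluster traffic only
  selector:
    app: client            # routes traffic to pods carrying this label
  ports:
    - name: client
      protocol: TCP
      port: 3000           # port the service listens on
      targetPort: 3000     # port the container listens on (assumed 3000)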

That’s it for the client. We now have a deployment, which creates the pod in our cluster, and a service, which attaches networking to it, so our client pod can communicate with other resources. We will do the same for the rest of the services. Note that we can put both the deployment and the service configuration in a single YAML file, separated by --- . With a similar configuration, let’s look at posts.
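A sketch of the combined posts configuration, assuming the posts service listens on port 4000 (the image name is again a placeholder):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts
  template:
    metadata:
      labels:
        app: posts
    spec:
      containers:
        - name: posts
          image: <your-dockerhub-username>/posts   # placeholder: your pushed image
---
apiVersion: v1
kind: Service
metadata:
  name: posts-srv
spec:
  type: ClusterIP
  selector:
    app: posts
  ports:
    - name: posts
      protocol: TCP
      port: 4000           # assumed port for the posts service
      targetPort: 4000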

Similarly, for the query service, the deployment and service configurations are as follows.
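A sketch under the same pattern, assuming the query service listens on port 4002:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: query-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: query
  template:
    metadata:
      labels:
        app: query
    spec:
      containers:
        - name: query
          image: <your-dockerhub-username>/query   # placeholder: your pushed image
---
apiVersion: v1
kind: Service
metadata:
  name: query-srv
spec:
  type: ClusterIP
  selector:
    app: query
  ports:
    - name: query
      protocol: TCP
      port: 4002           # assumed port for the query service
      targetPort: 4002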

For the comments service, the deployment and service configurations are as follows.
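Again a sketch, assuming the comments service listens on port 4001:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: comments-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: comments
  template:
    metadata:
      labels:
        app: comments
    spec:
      containers:
        - name: comments
          image: <your-dockerhub-username>/comments   # placeholder: your pushed image
---
apiVersion: v1
kind: Service
metadata:
  name: comments-srv
spec:
  type: ClusterIP
  selector:
    app: comments
  ports:
    - name: comments
      protocol: TCP
      port: 4001           # assumed port for the comments service
      targetPort: 4001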

And finally, for the event bus, the deployment and service configurations are as follows.
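One last sketch, assuming the event bus listens on port 4005:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-bus-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-bus
  template:
    metadata:
      labels:
        app: event-bus
    spec:
      containers:
        - name: event-bus
          image: <your-dockerhub-username>/event-bus   # placeholder: your pushed image
---
apiVersion: v1
kind: Service
metadata:
  name: event-bus-srv
spec:
  type: ClusterIP
  selector:
    app: event-bus
  ports:
    - name: event-bus
      protocol: TCP
      port: 4005           # assumed port for the event bus
      targetPort: 4005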

As you can see, we have created deployment and service configurations for all the services. Each service now has its own deployment and its own in-cluster network endpoint.

But these deployments are only reachable within the k8s cluster, not from the public internet. To accept traffic from outside and route it into the cluster, let’s create an Ingress configuration using the k8s Ingress object and a set of routing rules. First, install the ingress-nginx controller by following the instructions on this page.

Instead of using localhost, let’s map a domain like posts.com in our hosts file by adding the line 127.0.0.1 posts.com . Once the host is defined, let’s move on to the individual routes. In my application, the route /posts/create should be handled by the posts service, so that service’s name and port number are given there. Similarly, the /posts endpoint should be handled by the query service, and /posts/?(.*)/comments , where ?(.*) acts as a wildcard matching any post ID, goes to the comments service. Finally, /?(.*) matches all routes except the ones mentioned above and is sent to the client. The ingress file should look like this.
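A sketch of the ingress under those rules; the use-regex annotation and the nginx ingress class are assumptions that match a standard ingress-nginx install, and the port numbers are the same assumed ones used above.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"   # enable regex paths like /posts/?(.*)/comments
spec:
  ingressClassName: nginx
  rules:
    - host: posts.com
      http:
        paths:
          - path: /posts/create
            pathType: Prefix
            backend:
              service:
                name: posts-srv
                port:
                  number: 4000
          - path: /posts/?(.*)/comments
            pathType: ImplementationSpecific    # regex path
            backend:
              service:
                name: comments-srv
                port:
                  number: 4001
          - path: /posts
            pathType: Prefix
            backend:
              service:
                name: query-srv
                port:
                  number: 4002
          - path: /?(.*)
            pathType: ImplementationSpecific    # catch-all for everything else
            backend:
              service:
                name: client-srv
                port:
                  number: 3000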

This makes sure that public traffic is routed to the right services inside the cluster. And that’s it. Phewww… it was a really long process, but it’s finally done. All we need to do now is apply all the configuration files and see whether everything works. Apply all the YAML files (from the infra/k8s directory) with this command.

kubectl apply -f .

Once the services are deployed, you can head to the posts.com URL in your browser and see the application there.
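If the page doesn’t come up, a few standard kubectl checks can confirm that everything deployed correctly:

kubectl get deployments   # every deployment should report READY 1/1
kubectl get pods          # every pod should be in the Running state
kubectl get services      # each ClusterIP service should be listed
kubectl get ingress       # the ingress should show posts.com as its host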

[Figure: final output, the posts application running at posts.com in the browser]

Conclusion

In this article, we saw what Kubernetes really is and why we use it. We also deployed an existing application to a k8s cluster, writing a configuration for each service the application had. A Deployment lets us create and manage pods, and a Service provides the networking rules with which communication can be established inside and outside the cluster. Deploying applications to a k8s cluster and managing them becomes very easy, as k8s handles most of the work for us; all we have to do is specify our requirements. I hope this article was useful to you all. I’ll see you in my next article. Until then, as always: code, learn, repeat…
