Containers are all the rage these days, ever since Docker came on the scene and made containers available to us mere mortals. But how do you deploy, manage, and monitor all these containers? Meet Kubernetes, an open-source container management platform!
Containers give us tons of benefits over previous methods of software deployment and infrastructure automation. But once an organization sees the benefit of containers and starts to experience the container equivalent of VM sprawl, it runs into another problem: how to deploy, manage, and monitor all of these containers. Kubernetes is a platform that aims to help.
What is Kubernetes? (K8S)
Kubernetes, or K8S, is an open-source container management platform that helps organizations deploy, scale, and monitor Linux containers. It allows organizations to create container clusters, declare a desired configuration state in a YAML file, and send that file to the K8S cluster services; K8S will then perform whatever actions are necessary to make the environment consistent with the desired state.
Kubernetes was designed and developed by Google, with later help from other companies in the business, such as Red Hat. Google released the first version of Kubernetes in July 2015. Interestingly, Google then donated the project to a newly formed foundation, the Cloud Native Computing Foundation, which operates under the Linux Foundation.
In a nutshell, K8S is a container management platform that lets users define how they want their containerized application to look in a single file, send it to the K8S service, and trust K8S to ensure it's deployed and stays that way.
K8S is built around a cluster. A cluster consists of a master node and multiple worker nodes, and is made up of five main components:
- API server - A REST API interface for all Kubernetes resources.
- Scheduler - Places containers into the cluster based on various policies, metrics, and command-line flags.
- Controller manager - Reconciles the state of the cluster against the desired state.
- kubelet - Interacts with the container runtime (typically the Docker engine) to bring up containers.
- kube-proxy - Manages network connectivity between the containers.
The master node runs the API server, scheduler, and controller manager. It is the control point for all input to the cluster and makes all of the decisions for the cluster.
Each worker node runs the kubelet and kube-proxy components. A worker node is a container host and can manage individual containers or groups of containers called pods. A pod is a grouping of containers that share a network namespace and volumes. The worker nodes are "dumb": they take instructions from the master node and are in charge of bringing up and tearing down containers at its discretion.
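To make the idea of a pod concrete, here is a minimal sketch of a pod spec. The names, images, and paths are purely illustrative; it shows two containers that share a network namespace and a common volume:

```yaml
# Hypothetical pod spec: two containers in one pod, sharing the pod's
# network namespace and an emptyDir volume. Names/images are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.21
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-shipper
      image: busybox:1.35
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```

Because both containers live in the same pod, the second container can read the first container's log files through the shared volume, and the two could also reach each other over localhost.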
Deploying an Application to a Kubernetes Cluster
Once a cluster has been created, it's time to deploy an application to it. This is called creating a deployment. Creating a deployment mostly consists of writing a deployment spec file, which describes how your application, or pod, will be treated. A deployment spec file contains container configuration options such as which images to use, how many pods should be running, the network configuration to use in the cluster, and so on.
The deployment spec file is crucial because it represents the desired state of your application. Once the spec file is complete, simply send it to the cluster and K8S will bring up as many containers as necessary and begin monitoring them.
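As a sketch of what such a spec file might look like (names and images are illustrative, not from any real application), here is a minimal deployment that asks K8S to keep three replicas of a web container running:

```yaml
# Hypothetical deployment spec: K8S will keep three replicas of the
# nginx container running at all times. Names/images are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21
          ports:
            - containerPort: 80
```

Sending the file to the cluster is typically done with `kubectl apply -f deployment.yaml`. If a pod dies, K8S notices the actual state no longer matches the spec and starts a replacement.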
Creating a deployment is, and should be, the easy part. K8S keeps it simple: you create a spec file and send it to the cluster, with no need to worry about how it's implemented. K8S takes care of the rest of the work.
Kubernetes is a big shift from traditional infrastructure deployments. It aims to take away most of the typical infrastructure work we usually do with virtual machines and instead provides a service for setting a desired state. K8S takes infrastructure automation to the next level and frees IT systems administrators and developers from having to worry about how to bring up individual containers and, more importantly, how to monitor those containers to ensure they're always up and maintaining the state they should be in.