Kubernetes in Industry…!!!

Sourabhmiraje
Dec 25, 2020 · 9 min read

Hello folks…

When containers were first introduced in 2008, Virtual Machines, or VMs, were the state-of-the-art option to optimize a data center’s physical resources.

This arrangement worked well enough, but had some flaws:

  • Virtual machines utilized too many resources, because each one required both a complete operating system and emulated instructions to reach the physical CPU.

To solve this problem, containers share a common kernel across all applications, with each application drawing only on the resources it needs.

Containers can run on bare metal while sharing resources, yet no container can access another container’s resources.

How do containers ensure high availability, disaster recovery, or scalability? Container orchestration systems such as Kubernetes (nicknamed K8S) offer a solution.

So that is exactly what we are going to discuss today: Kubernetes, and how the industry is benefiting from k8s.

Now you might be thinking, “What is Kubernetes exactly?”… Let’s jump into the topic.

What is Kubernetes?

Kubernetes (also known as k8s or “kube”) is an open source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.

Kubernetes clusters can span hosts across on-premises, public, private, or hybrid clouds. For this reason, Kubernetes is an ideal platform for hosting cloud-native applications that require rapid scaling, like real-time data streaming through Apache Kafka.

The Rise of Kubernetes

The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.

Originally created by Google and then donated to the Cloud Native Computing Foundation, Kubernetes is widely used in production environments to handle Docker containers and other container tools in a fault-tolerant manner.

As an open-source product, it is available on various platforms and systems. Google Cloud, Microsoft Azure, and Amazon AWS offer officially supported managed Kubernetes services, so manual configuration of the cluster itself is not necessary.

Google generates more than 2 billion container deployments a week, all powered by its internal platform, Borg. Borg was the predecessor to Kubernetes, and the lessons learned from developing Borg over the years became the primary influence behind much of Kubernetes technology.

The popularity of Kubernetes has steadily increased, with four major releases in 2017. K8s was also the most-discussed project on GitHub during 2017, and the project with the second-most reviews.

Why do you need Kubernetes, and what can it do?

Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?

That’s how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
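
For a concrete picture, here is a minimal sketch of a Deployment manifest (the name hello-web and the nginx image are illustrative placeholders, not taken from any particular setup). It declares a desired state of three replicas; if a container goes down, Kubernetes starts a replacement to restore that count:

```yaml
# A minimal, hypothetical Deployment: Kubernetes keeps three replicas
# of this pod running and replaces any replica that fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # placeholder name
spec:
  replicas: 3                # desired state: three identical pods
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.21  # any containerized app image would do
          ports:
            - containerPort: 80
```

A canary deployment follows the same idea: you run a second, smaller Deployment with the new image behind the same Service and shift traffic to it gradually.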

Developers can also create cloud-native apps with Kubernetes as a runtime platform by using Kubernetes patterns. Patterns are the tools a Kubernetes developer needs to build container-based applications and services.

With Kubernetes you can:

  • Orchestrate containers across multiple hosts.
  • Make better use of hardware to maximize resources needed to run your enterprise apps.
  • Control and automate application deployments and updates.
  • Mount and add storage to run stateful apps.
  • Scale containerized applications and their resources on the fly (see the autoscaler sketch after this list).
  • Declaratively manage services, which guarantees the deployed applications are always running the way you intended them to run.
  • Health-check and self-heal your apps with autoplacement, autorestart, autoreplication, and autoscaling.
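
To make the scaling and self-healing bullets concrete, here is a hedged sketch of a HorizontalPodAutoscaler for the hypothetical hello-web Deployment above. It assumes a metrics server is installed in the cluster, and the exact API version available depends on your Kubernetes version:

```yaml
# Hypothetical autoscaler for the hello-web Deployment sketched earlier:
# Kubernetes adds or removes replicas to track CPU usage.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-web
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU passes 70%
```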

However, Kubernetes relies on other projects to fully provide these orchestrated services. With the addition of other open source projects, you can fully realize the power of Kubernetes. These necessary pieces include (among others):

  • Registry, through projects like Docker Registry.
  • Networking, through projects like Open vSwitch and intelligent edge routing.
  • Telemetry, through projects such as Kibana, Hawkular, and Elastic.
  • Security, through projects like LDAP, SELinux, RBAC, and OAuth with multitenancy layers.
  • Automation, with the addition of Ansible playbooks for installation and cluster life cycle management.
  • Services, through a rich catalog of popular app patterns.

How does Kubernetes work?

Let’s get to know some general terms before we look at how it works.

Control plane: The collection of processes that control Kubernetes nodes. This is where all task assignments originate.

Nodes: These machines perform the requested tasks assigned by the control plane.

Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources. Pods abstract network and storage from the underlying container. This lets you move containers around the cluster more easily.
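
A minimal sketch of a two-container pod (the names and images are illustrative assumptions): both containers share the pod’s IP address and can reach each other over localhost:

```yaml
# Hypothetical pod with a main container and a sidecar. Both share
# one IP address, hostname, and lifecycle.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web
      image: nginx:1.21        # serves HTTP on port 80
    - name: log-agent
      image: busybox:1.35      # sidecar sharing the pod's network
      command: ["sh", "-c", "while true; do echo alive; sleep 60; done"]
```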

Replication controller: This controls how many identical copies of a pod should be running somewhere on the cluster.
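
A sketch of what that looks like (note that newer clusters usually express the same idea through a Deployment, which manages ReplicaSets for you):

```yaml
# Hypothetical ReplicationController: keep two copies of this pod
# running somewhere on the cluster at all times.
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc
spec:
  replicas: 2                # desired number of identical pods
  selector:
    app: hello-rc
  template:
    metadata:
      labels:
        app: hello-rc
    spec:
      containers:
        - name: web
          image: nginx:1.21
```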

Service: This decouples work definitions from the pods. Kubernetes service proxies automatically get service requests to the right pod no matter where it moves in the cluster or even if it’s been replaced.
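
A minimal Service sketch, matching the hypothetical hello-web pods from earlier: requests sent to the Service are routed to whichever matching pod is currently running, wherever it lives in the cluster:

```yaml
# Hypothetical Service: a stable name and virtual IP in front of the
# hello-web pods. Kubernetes routes traffic to matching pods only.
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web           # any pod carrying this label receives traffic
  ports:
    - port: 80               # port the Service exposes inside the cluster
      targetPort: 80         # port the pod's container listens on
```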

Kubelet: This service runs on nodes, reads the container manifests, and ensures the defined containers are started and running.

kube-apiserver: Need to interact with your Kubernetes cluster? Talk to the API. The Kubernetes API is the front end of the Kubernetes control plane, handling internal and external requests. The API server determines if a request is valid and, if it is, processes it. You can access the API through REST calls, through the kubectl command-line interface, or through other command-line tools such as kubeadm.

kube-scheduler: Is your cluster healthy? If new containers are needed, where will they fit? These are the concerns of the Kubernetes scheduler. The scheduler considers the resource needs of a pod, such as CPU or memory, along with the health of the cluster. Then it schedules the pod to an appropriate compute node.
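
The resource needs the scheduler weighs come straight from the pod spec. A hedged sketch, with arbitrary example values:

```yaml
# Hypothetical pod spec: the scheduler reads the requests below and
# places the pod on a node with enough spare CPU and memory.
apiVersion: v1
kind: Pod
metadata:
  name: resource-aware-pod
spec:
  containers:
    - name: app
      image: nginx:1.21
      resources:
        requests:
          cpu: "250m"        # a quarter of one CPU core
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"    # hard ceiling enforced at runtime
```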

kubectl: The command line configuration tool for Kubernetes.

A working Kubernetes deployment is called a cluster. You can visualize a Kubernetes cluster as two parts: the control plane and the compute machines, or nodes.

The control plane is responsible for maintaining the desired state of the cluster, such as which applications are running and which container images they use. Compute machines actually run the applications and workloads.

Each node is its own Linux® environment, and could be either a physical or virtual machine. Each node runs pods, which are made up of containers.

Kubernetes runs on top of an operating system and interacts with pods of containers running on the nodes.

The Kubernetes control plane takes the commands from an administrator (or DevOps team) and relays those instructions to the compute machines.

This handoff works with a multitude of services to automatically decide which node is best suited for the task. It then allocates resources and assigns the pods in that node to fulfill the requested work.

The desired state of a Kubernetes cluster defines which applications or other workloads should be running, along with which images they use, which resources should be made available to them, and other such configuration details.

Finally, let’s look at some case studies of how k8s is solving industry challenges…

CASE STUDIES:

Company : Pinterest

Location : San Francisco, California

Industry : Web and Mobile App

Challenge

After eight years in existence, Pinterest had grown into 1,000 microservices and multiple layers of infrastructure and diverse set-up tools and platforms. In 2016 the company launched a roadmap towards a new compute platform, led by the vision of creating the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.

Solution

The first phase involved moving services to Docker containers. Once these services went into production in early 2017, the team began looking at orchestration to help create efficiencies and manage them in a decentralized way. After an evaluation of various solutions, Pinterest went with Kubernetes.

Impact

“By moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins,” says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest. “We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent less instance-hours per-day when compared to the previous static cluster.”

Company : Spotify

Location : Global

Industry : Entertainment

Challenge

Launched in 2008, the audio-streaming platform has grown to over 200 million monthly active users across the world. “Our goal is to empower creators and enable a really immersive listening experience for all of the consumers that we have today — and hopefully the consumers we’ll have in the future,” says Jai Chakrabarti, Director of Engineering, Infrastructure and Operations. An early adopter of microservices and Docker, Spotify had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called Helios. By late 2017, it became clear that “having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community,” he says.

Solution

“We saw the amazing community that had grown up around Kubernetes, and we wanted to be part of that,” says Chakrabarti. Kubernetes was more feature-rich than Helios. Plus, “we wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools.” At the same time, the team wanted to contribute its expertise and influence in the flourishing Kubernetes community. The migration, which would happen in parallel with Helios running, could go smoothly because “Kubernetes fit very nicely as a complement and now as a replacement to Helios,” says Chakrabarti.

Impact

The team spent much of 2018 addressing the core technology issues required for a migration, which started late that year and is a big focus for 2019. “A small percentage of our fleet has been migrated to Kubernetes, and some of the things that we’ve heard from our internal teams are that they have less of a need to focus on manual capacity provisioning and more time to focus on delivering features for Spotify,” says Chakrabarti. The biggest service currently running on Kubernetes takes about 10 million requests per second as an aggregate service and benefits greatly from autoscaling, says Site Reliability Engineer James Wen. Plus, he adds, “Before, teams would have to wait for an hour to create a new service and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes.” In addition, with Kubernetes’s bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.

Almost every company is benefiting from Kubernetes in some way, directly or indirectly.

Thus, with Kubernetes we can automate our deployments in much less time, more efficiently, and with fewer errors. As deployments grow more complex, demand for this technology is increasing day by day.

Please comment below to let me know what you thought of the article…

Any suggestions are always appreciated.

Thank you for reading!
