
How Kubernetes works

Introduction

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.


The name Kubernetes originates from Greek, meaning helmsman or pilot. The abbreviation K8s comes from counting the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes combines more than 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.

For organizations that operate at massive scale, a single Linux container instance isn't enough to satisfy all of their applications' needs. It's common for sufficiently complex applications, such as ones built from communicating microservices, to require multiple Linux containers that talk to one another. That architecture introduces a new scaling problem: how do you manage all of those individual containers? Developers would still have to handle scheduling the deployment of containers to specific machines, managing the networking between them, growing the resources allocated under heavy load, and much more.


Enter Kubernetes, a container orchestration framework: a way to manage the lifecycle of containerized applications across an entire fleet. It's a kind of meta-process that provides the ability to automate the deployment and scaling of several containers at once. Several containers running the same application are grouped together. These containers act as replicas and serve to load balance incoming requests. A container orchestrator then supervises these groups, ensuring that they are operating correctly.

Kubernetes architecture






Multiple components work together to run Kubernetes; they are listed below.

Pods

A Kubernetes pod is a group of containers, and is the smallest unit that Kubernetes administers. Pods have a single IP address that is applied to every container within the pod. Containers in a pod share the same resources such as memory and storage. This allows the individual Linux containers inside a pod to be treated collectively as a single application, as if all the containerized processes were running together on the same host in more traditional workloads. If you are familiar with Docker, then Pods in Kubernetes won't surprise you much. You can run multiple containers inside a Pod, but keep in mind that you can't run the same container twice inside one Pod.
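As a minimal sketch, a Pod manifest with two containers might look like the following (the names and images here are illustrative assumptions, not from any real deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod              # hypothetical Pod name
spec:
  containers:
  - name: web                # main application container
    image: nginx:1.25        # example image
    ports:
    - containerPort: 80
  - name: sidecar            # second container in the same Pod
    image: busybox:1.36
    command: ["sh", "-c", "sleep infinity"]
```

Because both containers share the Pod's network namespace and IP address, the sidecar could reach the web container simply at localhost:80.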

Deployments


Deployments describe the number of desired identical pod replicas to run and the preferred update strategy used when updating the deployment.
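For illustration, a Deployment manifest declaring three identical replicas and a rolling update strategy might be written as follows (the names and image are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                    # desired number of identical pod replicas
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate          # preferred update strategy
    rollingUpdate:
      maxSurge: 1                # at most one extra pod during an update
      maxUnavailable: 0          # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

With this spec, Kubernetes continuously reconciles the actual number of running Pods toward the declared count of three, replacing Pods one at a time during updates.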


Service

An abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
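As a sketch of the idea, a Service that exposes the Pods labeled `app: web` (a hypothetical label matching the Deployment example above would use) behind a single cluster IP and DNS name might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service          # resolvable in-cluster via DNS
spec:
  selector:
    app: web                 # selects the Pods to load-balance across
  ports:
  - protocol: TCP
    port: 80                 # port the Service listens on
    targetPort: 80           # container port traffic is forwarded to
```

Clients inside the cluster can then address the whole replica set by the stable name `web-service` instead of tracking individual Pod IPs.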

Nodes


A Kubernetes cluster must have at least one compute node, although it may have many, depending on the need for capacity. Pods are orchestrated and scheduled to run on nodes, so more nodes are needed to scale up cluster capacity.

Nodes do the work for a Kubernetes cluster. They connect applications with networking, compute, and storage resources.


Control plane

The Kubernetes control plane is the main entry point for administrators and users to manage the various nodes. Operations are issued to it either through HTTP calls or by connecting to a machine and running command-line tools. As the name implies, it controls how Kubernetes interacts with your applications.


Cluster

A cluster is all of the above components put together as a single unit.

API Server

The API server exposes a REST interface to the Kubernetes cluster. All operations against pods, services, and so forth, are executed programmatically by communicating with the endpoints provided by it.

Scheduler

The scheduler is responsible for assigning work to the various nodes. 

Controller manager

The controller-manager is responsible for making sure that the shared state of the cluster is operating as expected. More accurately, the controller manager oversees various controllers which respond to events (e.g., if a node goes down).

Kubelet

A kubelet tracks the state of a pod to ensure that all of its containers are running. It provides a heartbeat message every few seconds to the control plane. If the control plane does not receive that message, the node is marked as unhealthy.

Kube proxy

The kube-proxy maintains network rules on a node and routes Service traffic arriving at that node. It forwards requests to the correct backend containers.

etcd

etcd is a distributed key-value store that Kubernetes uses to share information about the overall state of a cluster. 


Note: If this helped you and you want to learn more about DevOps, I would recommend joining the KodeKloud DevOps course and going for the complete certification path by clicking this link
