
How Kubernetes works

 Introduction

Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.


The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.

For organizations that operate at massive scale, a single Linux container instance isn't enough to satisfy all of their applications' needs. It's common for sufficiently complex applications, for example ones that communicate through microservices, to require multiple Linux containers that talk to one another. That architecture introduces a new scaling problem: how do you manage all of those individual containers? Developers still have to handle scheduling the deployment of containers to specific machines, managing the networking between them, growing the resources allocated under heavy load, and much more.


Enter Kubernetes, a container orchestration framework: a way to manage the lifecycle of containerized applications across an entire fleet. It's a kind of meta-process that provides the ability to automate the deployment and scaling of several containers at once. Several containers running the same application are grouped together; these containers act as replicas and load balance incoming requests. A container orchestrator then supervises these groups, ensuring that they are operating correctly.

Kubernetes architecture






There are multiple components involved in running Kubernetes, which are listed below.

Pods

A Kubernetes pod is a group of containers, and is the smallest unit that Kubernetes administers. Pods have a single IP address that is shared by every container within the pod. Containers in a pod also share resources such as memory and storage. This allows the individual Linux containers inside a pod to be treated collectively as a single application, as if all the containerized processes were running together on the same host in more traditional workloads. If you already know Docker, pods in Kubernetes won't surprise you much. You can run multiple containers inside a pod, but keep in mind that each container in a pod must have a unique name.
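
For illustration, here is a minimal sketch of a pod manifest, applied directly from the shell. The pod name demo-pod and the nginx image are placeholders, not anything from this article:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: nginx
EOF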

Deployments


Deployments describe the number of desired identical pod replicas to run and the preferred update strategy used when updating the deployment.
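
As a sketch, a deployment that asks for three identical replicas and a rolling update strategy might look like the manifest below; demo-deploy and the app: demo labels are placeholder names:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deploy
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: web
        image: nginx
EOF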


Service

An abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
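
Continuing the sketch above, a service that gives a set of pods a single stable address might look like this (again, the names are placeholders, and the selector must match the pod labels):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 80
EOF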

Nodes


A Kubernetes cluster must have at least one compute node, although it may have many, depending on the need for capacity. Pods are orchestrated and scheduled to run on nodes, so more nodes are needed to scale up cluster capacity.

Nodes do the work for a Kubernetes cluster. They connect applications and networking, compute, and storage resources.
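
For example, you can list a cluster's nodes and their status with:

kubectl get nodes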


Control plane

The Kubernetes control plane is the main entry point for administrators and users to manage the various nodes. Operations are issued to it either through HTTP calls or connecting to the machine and running command-line scripts. As the name implies, it controls how Kubernetes interacts with your applications.


Cluster

A cluster is all of the above components put together as a single unit.

API Server

The API server exposes a REST interface to the Kubernetes cluster. All operations against pods, services, and so forth, are executed programmatically by communicating with the endpoints provided by it.
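
kubectl itself is just a client of this REST interface. As a quick sketch, you can hit the same endpoints directly through a local proxy (port 8001 is the kubectl proxy default):

kubectl proxy &
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods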

Scheduler

The scheduler is responsible for assigning work to the various nodes. 

Controller manager

The controller-manager is responsible for making sure that the shared state of the cluster is operating as expected. More accurately, the controller manager oversees various controllers which respond to events (e.g., if a node goes down).

Kubelet

A Kubelet tracks the state of a pod to ensure that all the containers are running. It provides a heartbeat message to the control plane every few seconds. If the control plane stops receiving those heartbeats, the node is marked as unhealthy.

Kube proxy

The Kube proxy routes traffic coming into a node from the service. It forwards requests for work to the correct containers.

etcd

etcd is a distributed key-value store that Kubernetes uses to share information about the overall state of a cluster. 


Note: If you think this helped you and you want to learn more about DevOps, then I would recommend joining the KodeKloud DevOps course and going for the complete certification path by clicking this link

What are Volumes in Docker and how to use them

 Docker Volume




Volumes are the preferred mechanism for persisting data generated by Docker containers. Volumes are completely managed by Docker. There are a couple of important points which may convince you to use volumes over bind mounts.

  • Volumes are easy to back up or migrate.
  • Volumes can be easily shared with other containers.
  • Volumes work on both Linux and Windows.
  • Volumes can be managed by the Docker CLI or the Docker API.
  • New volumes can have their content pre-populated by containers.
  • Volumes are a better choice than persisting data in a container's writable layer.
  • A volume's contents exist outside the lifecycle of a given container.

If your container generates non-persistent state data, then you can consider using a tmpfs mount to avoid storing the data permanently anywhere. And to increase the performance of the container, you can always avoid writing data to the container's writable layer. Always keep your container lightweight.


What is a tmpfs mount

If you are running containers on a Linux system, then you have one more option to use: the tmpfs mount. When you create a container with a tmpfs mount, the container can create files outside of the container's writable layer.

The tmpfs mount is temporary and only persisted in host memory. So when you stop the container, the tmpfs mount is removed and data persisted there won't be available.

The best use case for this is when you want to store sensitive data temporarily and you don't want to persist it in either the host or the container's writable layer.


Limitations on tmpfs mount

If you want to use a tmpfs mount, then you need to remember the two limitations below:
  • Unlike volumes and bind mounts, you can't share tmpfs mounts between containers.
  • This functionality is only available on Linux.

Let's create a tmpfs mount

Start an alpine container and inspect it. Give the container a name; in my case I named it tmptest0.
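
docker run -it -d --name tmptest0 alpine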


docker inspect tmptest0

[

    {

     

            "AutoRemove": false,

            "VolumeDriver": "",

            "VolumesFrom": null,

            "CapAdd": null,

            "CapDrop": null,

            "CgroupnsMode": "host",

            "Dns": [],

            "DnsOptions": [],

            "DnsSearch": [],

            "ExtraHosts": null,

            "GroupAdd": null,

            "IpcMode": "private",

            "Cgroup": "",

            "Links": null,

            "OomScoreAdj": 0,

            "PidMode": "",

            "Privileged": false,

            "PublishAllPorts": false,

            "ReadonlyRootfs": false,

            "SecurityOpt": null,


Now let's start one more container with the alpine image, give it the name tmptest, and attach a tmpfs mount.

docker run -it -d --name tmptest --tmpfs /app alpine


Now inspect this tmptest container.

 "AutoRemove": false,

            "VolumeDriver": "",

            "VolumesFrom": null,

            "CapAdd": null,

            "CapDrop": null,

            "CgroupnsMode": "host",

            "Dns": [],

            "DnsOptions": [],

            "DnsSearch": [],

            "ExtraHosts": null,

            "GroupAdd": null,

            "IpcMode": "private",

            "Cgroup": "",

            "Links": null,

            "OomScoreAdj": 0,

            "PidMode": "",

            "Privileged": false,

            "PublishAllPorts": false,

            "ReadonlyRootfs": false,

            "SecurityOpt": null,

            "Tmpfs": {

                "/app": ""

            },



See the difference: the tmpfs mount has been added.


Let's create the volume

Create the volume with the docker volume create command.

docker volume create new-volume                     

new-volume


Check whether the volume has been created.

docker volume ls

DRIVER    VOLUME NAME

local     new-volume


So we have the volume now. Let's inspect it and check what's inside the volume.

docker volume inspect new-volume

[

    {

        "CreatedAt": "2021-09-22",

        "Driver": "local",

        "Labels": {},

        "Mountpoint": "/var/lib/docker/volumes/new-volume/_data",

        "Name": "new-volume",

        "Options": {},

        "Scope": "local"

    }

]



Start a container with the volume


docker run -d --name voltest -v new-volume:/app alpine


Inspect the container and you'll see the result below:

docker container inspect voltest


"HostConfig": {

            "Binds": [

                "new-volume:/app"

            ],
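
Before cleaning up, here is a quick sketch of the persistence in action: one throwaway container writes a file into the volume, and a second, brand-new container reads it back. The file name data.txt is just an example.

docker run --rm -v new-volume:/app alpine sh -c "echo hello > /app/data.txt"
docker run --rm -v new-volume:/app alpine cat /app/data.txt

The second container prints the file's contents because the data lives in the volume, not in any single container's writable layer.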


Now follow the cleanup with below commands.


Here I am stopping all running containers at once.

docker container stop $(docker ps -q) 


Remove all containers

docker container rm $(docker ps -aq)


Remove the volume

docker volume rm new-volume


That was it on volumes. There is a lot more to volumes, such as creating a volume using a volume driver, which I'll cover in upcoming blogs. That's how we create and use volumes on Docker.

Docker essentials: learning Docker networking

Getting started with Docker networking




In this article we are going to learn how to create a network and use it.

Listing all the Docker networks

Use the docker network ls command. Output below:

docker network ls

NETWORK ID   NAME      DRIVER    SCOPE

e152bd78da   bridge    bridge    local

7e94216ea4   host      host      local

9eb7b364ec   none      null      local



You can see from the above command output that the driver name is bridge. So by default, the bridge driver gets created on Docker.

If you don't specify a driver, the default bridge network driver is used. A bridge network is used when your application runs in standalone containers that need to communicate.

There is a HOST network as well. For a standalone container, it removes network isolation between the container and the Docker host.

User-defined bridge networks are used when you need multiple containers to communicate on the same Docker host.

Host networks are best when the network stack should not be isolated from the Docker host. The container shares the host's networking namespace, and the container does not get allocated its own IP address.

Overlay networks are best when containers running on different Docker hosts need to communicate, or when multiple applications work together as swarm services.

Docker with IPTables

Docker manipulates iptables rules to provide network isolation. If you are running Docker on a Host which is exposed to the Internet, you will probably want to have iptables policies in place to prevent unauthorized access to containers or any other services that are running on the host system.

Docker installs two custom iptables chains named DOCKER-USER and DOCKER and it ensures that incoming packets are always checked by these two chains first.

All of Docker's iptables rules are added to the DOCKER chain. Don't manipulate this chain manually. If you need to add rules which load before Docker's rules, add them to the DOCKER-USER chain. These rules are applied before any rules Docker creates automatically.


Run a container using the HOST network

docker run --rm -d --network host --name my_nginx nginx


Since the container shares the host's network stack, you can verify that no new IP address was allocated to it.

--rm : automatically remove the container when it exits
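
One way to check this, as a quick sketch (the format string is standard docker inspect Go-template syntax):

docker inspect my_nginx --format '{{.NetworkSettings.IPAddress}}'

For a container on the host network this prints an empty string, since no separate IP is allocated.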

Create user defined bridge network

docker network create new_net  


You can inspect the new network 


docker inspect new_net

[

    {

        "Name": "new_net",

        "Id": "8cd40b2f992bd6045824c70163dbe93462200d46897dbff18fe71",

        "Scope": "local",

        "Driver": "bridge",

        "EnableIPv6": false,

        "IPAM": {

            "Driver": "default",

            "Options": {},

            "Config": [

                {

                    "Subnet": "172.xx.0.0/16",

                    "Gateway": "172.xx.0.1"

                }

            ]

        },

        "Internal": false,

        "Attachable": false,

        "Ingress": false,

        "ConfigFrom": {

            "Network": ""

        },

        "ConfigOnly": false,

        "Containers": {

            "f9dca1401da2e6a42f124799c282592f5586cd8587c665ec32bc284": {

                "Name": "my_nginx",

                "EndpointID": "0e14e2e057c56e6583ddb7a6fed7a2545f08d3c0ec95314b3f290c2fc5678",

                

            }

        },

        "Options": {},

        "Labels": {}

    }

]

Now let's run a container on this network:

docker run --rm -d --network new_net --name my_nginx nginx


Suppose a container is already running; how can we connect it to this network?

docker network connect new_net nginx

Let's disconnect a container from the user-defined bridge network:

docker network disconnect new_net alp 



Restrict Connections to the Docker Host

By default, all external source IPs are allowed to connect to the Docker host. To allow only a specific IP or network to access the containers, insert a negated rule at the top of the DOCKER-USER filter chain. For example, the rule below restricts external access from all IP addresses except 192.168.1.1:
 

iptables -I DOCKER-USER -i ext_if ! -s 192.168.1.1 -j DROP


Please note that you will need to change ext_if to correspond with your host's actual external interface. You could instead allow connections from a source subnet. The following rule only allows access from the subnet 192.168.1.0/24:

iptables -I DOCKER-USER -i ext_if ! -s 192.168.1.0/24 -j DROP



That was it, a very short article covering only the basics of networking on Docker, but I'll try to add more on networking in the upcoming days.

Docker essentials: Docker commands for day-to-day use for beginners

Useful Docker commands for day-to-day use




Docker is an awesome technology in the virtualization world. If you are willing to learn Docker, or you have just started learning it, then you should know all these basic Docker commands which will be used in your daily routine.

First we need to understand what Docker Hub is. Docker Hub is an image repository where everyone pushes their images so that others can use those images. Suppose I want to use SQL Server on Docker; then I'll download the image from Docker Hub and use it.

Wait! Did you just say SQL Server on Docker? Isn't SQL Server only for Windows? No, SQL Server is now available on the Linux platform as well, using Docker.

To follow the tutorial below, make sure you have installed Docker and it's up and running.
You can open your terminal or CMD and try running any docker command. If Docker is not installed, you'll get a "command not found" error; if it's installed but the Docker daemon is not running, you'll get an error saying the daemon is not reachable.
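
One quick way to check, for example:

docker version

If the client is installed, you'll see its version details; if the daemon is not running, the Server section will show a connection error instead.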

How to download image from docker hub

When you talk about downloading an image using Docker, it means pulling the image. Use the command below to pull an image. By default, docker pulls from Docker Hub.

docker pull alpine

Using default tag: latest

latest: Pulling from library/alpine

Digest: sha256:e1c082e3d3c45cccac829840a25941e679c25d412c2fa221cf1a824e6a

Status: Image is up to date for alpine:latest

docker.io/library/alpine:latest


The Docker image has been pulled. Let's check the images on your system using the command below.

docker images

REPOSITORY   TAG       IMAGE ID       CREATED       SIZE

alpine       latest    bb3de5531c18   3 weeks ago   5.34MB





You can also use the docker image ls command:

docker image ls

REPOSITORY   TAG       IMAGE ID       CREATED       SIZE

alpine       latest    bb3de5531c18   3 weeks ago   5.34MB



Running an image inside a container

docker container run --name alp -it alpine

/ # 



You can omit the word container; writing container was the old way. Now you don't have to write container to run it:

docker run --name alp -it alpine 

/ # 


Here are some flags and keywords in the above command:

  • run : the command to run the container
  • --name alp : gives a name to your container (it's always a good idea to name your container)
  • -it : runs the container in interactive mode
  • alpine : the image name which you just pulled
  • -d : detach mode

Using detach mode runs your container in the background; you will not be logged in to your container.

docker run --name alp -dit alpine

a6ba34f88715cf3d63879872a65d6a7aeb48572ee7a79fb22ecf751049c49dc2 

To enter a container's shell, use the command below:

docker exec -it alp /bin/sh

/ # 

Since we have started our alp container, let's check if it is running, using docker ps.

docker ps

CONTAINER ID   IMAGE     COMMAND     CREATED         STATUS         PORTS     NAMES

a6ba34f88715   alpine    "/bin/sh"   4 minutes ago   Up 4 minutes             alp



docker ps -a will give you running as well as stopped containers. Use it accordingly.

How to check logs

Check logs using the docker logs container_name command.

docker logs alp 

/ # exit


How to start and stop a container

docker start container_name_or_id and docker stop container_name_or_id

docker stop alp

alp


docker start alp

alp



How to inspect a container

Use docker inspect container_name_or_id:

docker inspect alp

[

    {

        "Id": "a6ba34f88715cf3d638797a79fb22ecf751049c49dc2",

        "Created": "2021-09-19T08:04:17.202282Z",

        "Path": "/bin/sh",

        "Args": [],

        "State": {

            "Status": "running",

            "Running": true,


You can find all the details about the container here, including its IP address.

docker inspect alp | egrep "IPAddress"

            "SecondaryIPAddresses": null,

            "IPAddress": "172.17.0.2",

                    "IPAddress": "172.17.0.2",




Now, very importantly, you always want to check how many images and containers are on your system and how much space they are consuming. Use the docker system df command.

The df subcommand is named after the Linux df command used to check disk utilisation.

docker system df

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE

Images          1         1         5.337MB   0B (0%)

Containers      3         1         15B       10B (66%)

Local Volumes   0         0         0B        0B

Build Cache     0         0         0B        0B



How to check the diff of a container


There is also a very useful command to track changes to files or directories on a container's filesystem. In its output, A marks an added file or directory, C a changed one, and D a deleted one.


docker diff alp

C /root

A /root/.ash_history



How to remove all running containers

First, stop the running containers.

docker stop $(docker ps -q)

a6ba34f88715



docker ps -q gives you the container IDs of running containers, so $(docker ps -q) passes all of them to docker stop.

docker rm alp     

alp


The above removes the alp container.

Remove all stopped containers

docker container prune

WARNING! This will remove all stopped containers.

Are you sure you want to continue? [y/N] y

Deleted Containers:

803ec8603418b8e5ca7cbe9178724ca2714ab3fba48ac406282e78

7a8ee73bef1b766d38594ceeed8bdabf0cbc4e114faa65de71492a


Total reclaimed space: 10B



Remove dangling images

docker image prune

WARNING! This will remove all dangling images.

Are you sure you want to continue? [y/N] y

Total reclaimed space: 0B


Delete the alpine image

docker image rm alpine

Untagged: alpine:latest

Untagged: alpine@sha256:e1c082e3d3c45cccahe679c25d438cc8412c2fa221cf1a824e6a

Deleted: sha256:bb3de5531c18f185667b0be0e400ab244440093de82baf4072e14af3b84

Deleted: sha256:ee420dfed78aeb92dbd73a3fbb59fa5dac4e04639210cc7905



Now check the space again using the docker system df command.

docker system df      

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE

Images          0         0         0B        0B

Containers      0         0         0B        0B

Local Volumes   0         0         0B        0B

Build Cache     0         0         0B        0B



Now you can see there are no images and no containers, and the space was reclaimed.

So these were my essential basic commands which could help you in working with and debugging Docker containers.
In the next article I'll talk about volumes, networks and other stuff.

