Container orchestration: Kubernetes vs Docker Swarm




When deploying applications at scale, you need to plan all your architecture components with current and future strategies in mind. Container orchestration tools help achieve this by automating the management of application microservices across clusters.

A few of the major container orchestration tools are listed below:

  • Docker Swarm
  • Kubernetes
  • OpenShift
  • Hashicorp Nomad
  • Mesos

Today we'll talk about Docker Swarm and Kubernetes and compare their features.

What is container orchestration?

Container orchestration is a set of practices for managing Docker containers at large scale. As containerized applications scale to a large number of containers, container management capabilities become necessary: provisioning containers, scaling up and down, managing networking, load balancing, security and more.

Let's talk Kubernetes

Kubernetes is an open-source, cloud-native infrastructure tool that automates the scaling, deployment and management of containerized applications.

Kubernetes was originally developed by Google and later donated to the Cloud Native Computing Foundation (CNCF) for enhancement and maintenance. Kubernetes is the most popular and most in-demand orchestration tool. It is also complex and a bit difficult to learn compared to Swarm.

Here are a few of the main architecture components of Kubernetes:

Cluster 

A collection of nodes, typically at least one master node and several worker nodes (also known as minions).

Node

A physical or virtual machine (VM).

Control Plane

The set of components that schedules and deploys application instances across the nodes.

Kubelet

An agent process that runs on each node. It is responsible for managing the state of its node and can perform several actions to maintain the desired state.

Pods

Pods are the basic scheduling unit. A pod consists of one or more containers co-located on a host machine that share the same resources.

Deployments, Replicas and ReplicaSets

A Deployment describes the desired state of an application. It creates and manages ReplicaSets, which in turn make sure the specified number of pod replicas is running at any given time.
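As a minimal, hedged sketch of how these pieces fit together (assuming kubectl v1.19+ configured against a cluster; the deployment name web and the nginx image are just placeholders):

kubectl create deployment web --image=nginx --replicas=3
kubectl get deployments
kubectl get pods

If a pod dies, the underlying ReplicaSet notices the shortfall and starts a replacement to restore the desired count.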

Docker Swarm

Docker Swarm is native to the Docker platform. It was developed to maintain application efficiency and availability across different runtime environments by deploying containerized application microservices across multiple clusters.

A mix of Docker Compose, Swarm and overlay networks can be used to manage a cluster of Docker containers.

Docker Swarm is still maturing in terms of functionality when compared to other open-source container orchestration tools.

Here are a few of the main architecture components of Docker Swarm:

Swarm 

A collection of nodes that includes at least one manager node and several worker nodes.

Service

The definition of the tasks that worker or manager nodes are required to execute on the swarm.

Manager node

A node that dispatches work: it manages the swarm and distributes tasks among the worker nodes.

Worker node

A node responsible for running tasks distributed by the swarm's manager node.

Tasks

A set of commands together with the container they run in; tasks are the atomic scheduling unit of a swarm.
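
To make these pieces concrete, here is a minimal, hedged sketch (the service name web and the nginx image are placeholders): initialize a swarm on a manager node, create a replicated service, and watch the manager spread its tasks across the workers:

docker swarm init
docker service create --name web --replicas 3 nginx
docker service ls
docker service ps web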

Choosing the right Orchestrator for your containers

Kubernetes focuses on open-source, modular orchestration, offering an efficient container orchestration solution for high-demand applications with complex configurations.

Docker Swarm emphasizes ease of use, making it most suitable for simple applications that are quick to deploy and easy to manage.

Some fundamental differences between the two

GUI:

Kubernetes features an easy-to-use web UI (the Kubernetes Dashboard) that helps you:
  • Deploy containerized applications on the cluster
  • Manage cluster resources
  • View error logs, deployments and jobs

Unlike Kubernetes, Docker Swarm does not come with a web UI for deploying applications and orchestrating containers, but there are third-party tools that can provide one for Docker.

Availability:

Kubernetes ensures high availability by creating clusters that eliminate single points of failure. You can use stacked control plane nodes, where etcd members are co-located with the control plane nodes of the cluster, or you can use an external etcd cluster, where etcd runs on separate hosts from the control plane nodes.

To maintain high availability, Docker Swarm uses service replication at the node level: a swarm manager deploys multiple instances of the same container as replicas of a service across the swarm.

Scalability:

Kubernetes supports autoscaling at both the cluster level and the pod level. Docker Swarm does not offer built-in autoscaling, but it deploys containers quickly, which gives the orchestrator fast reaction times for scaling on demand.
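
A minimal, hedged illustration of the difference (the name web is a placeholder, and the CPU-based autoscaler assumes a metrics server is installed in the cluster): Kubernetes can scale a deployment automatically, while Swarm scaling is an explicit command:

kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80

docker service scale web=5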

Monitoring: 

Kubernetes offers multiple native logging and monitoring solutions for services deployed within a cluster, and it also supports third-party integrations for event-based monitoring.

Docker Swarm, on the other hand, doesn't offer a monitoring solution like Kubernetes does, so you need to rely on third-party applications for monitoring. As a result, monitoring a Docker Swarm is considered more complex than monitoring Kubernetes.
 

How and why container monitoring is so important



What is container monitoring?

Containers are ephemeral in nature, which makes them difficult to monitor compared to applications on bare-metal servers or even those running on virtualized servers. Monitoring is critical to ensure the availability, performance and security of containers, so container infrastructure requires new monitoring tools and strategies.

Container observability

Visibility and monitoring are essential for keeping a running environment healthy and for optimizing resource usage and costs.

Because each container image can have a large number of running instances, and because new images and versions are introduced at a high pace, problems can spread easily across containers and applications and can disrupt the entire architecture. This makes it critical to identify the root cause of a problem as soon as it occurs.

In large scale containerized environments, this is only possible through dedicated cloud native monitoring tools.

If you are unable to achieve observability, you can run into the problems below:


  • It is very difficult for developers and operations teams to understand what is running and how it is performing. Without observability it is very hard to troubleshoot problems and meet the SLA for a production system.
  • Scalability is also a major challenge without observability. Scaling your application on demand can enhance your users' experience, but if scaling reacts too slowly it can degrade that experience instead.

Challenges with container monitoring 

There are a few challenges in container monitoring:
  • Containers are ephemeral, so provisioning and destroying a container is a very quick process. This is one of their biggest advantages, but in a large, complex production system it makes issues very difficult to track down.
  • Containers share resources: they consume CPU and memory from the host machine. If resources on the host are not monitored, a sudden CPU or memory spike can catch you off guard at any time and bring your production application down (a quick way to watch for this is shown below).
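
As a minimal example of that host-side watching, Docker's built-in stats command streams live CPU and memory usage per container (--no-stream takes a single snapshot instead):

docker stats

docker stats --no-stream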

How, then, can we monitor containers?

You can always use an alerting system to monitor your containers. Setting up alerts across the delivery pipeline can catch impending failures at an early stage.

What are the common features of monitoring tools?

  • Real time monitoring 
  • Performance baseline
  • Anomaly detection
  • Network Performance monitoring 
  • Config monitoring 
  • Dashboards
  • API monitoring
  • Alerting
  • Automation

Here are some well-known container monitoring tools used in modern industry:

Prometheus

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Prometheus collects and stores its metrics as time series data, i.e. metric values are stored with the timestamp at which they were recorded, alongside optional key-value pairs called labels.

Features:

  • A multi-dimensional data model with time series data identified by metric name and key/value pairs
  • PromQL, a flexible query language to leverage this dimensionality
  • Multiple modes of graphing and dashboard support
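
As a small, hedged example (assuming a Prometheus server on its default port 9090), the built-in up metric can be queried over Prometheus's HTTP API; the same expression can also be typed into the expression browser in the Prometheus web UI:

curl 'http://localhost:9090/api/v1/query?query=up'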

Grafana

With Grafana you can visualize, analyze and alert on your systems. No matter where your data is stored, you can create dashboards and monitor it. Your data source can be almost anything: Postgres, MySQL, Redis, Prometheus, etc.
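Since everything else in this series runs in containers, a quick, hedged way to try Grafana is the official grafana/grafana image (3000 is Grafana's default port; the container name is a placeholder):

docker run -d --name grafana -p 3000:3000 grafana/grafana

Then open http://localhost:3000 in a browser and add a data source.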

Apart from the above two, there are a few more popular tools like Elasticsearch and Kibana, Zabbix, Datadog, etc.


How to run PostgreSQL on Docker



Postgres on Docker 

Postgres is one of the most advanced object-relational database management systems (ORDBMS). It implements the majority of the SQL:2011 standard, is ACID compliant, and avoids locking issues by using multiversion concurrency control. So today we are going to run Postgres on Docker.

To start with Postgres, we first need to pull the image from Docker Hub, the default public image registry. Let's run the command below and pull the image:

docker pull postgres

Using default tag: latest

latest: Pulling from library/postgres

a9eb63951c1c: Pull complete 

b192c7f382df: Pull complete 

e7ce3f587986: Pull complete 

4098744a1414: Pull complete 

4c98d6f3399d: Pull complete 

65e57fefc38a: Pull complete 

d61d9528cfd5: Pull complete 

de6b20f44659: Pull complete 

25db13ff0bef: Pull complete 

7f74f4b0e936: Pull complete 

144c847b11fb: Pull complete 

cf0afd1be009: Pull complete 

fe0c14991327: Pull complete 


Now let's check that we have downloaded the image.

docker images

REPOSITORY        TAG       IMAGE ID       CREATED       SIZE

postgres          latest    83ce63c594ee   5 days ago    355MB


Let's run the image and start a container.

docker run --name test -e POSTGRES_PASSWORD=Test@123 -d postgres
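
A hedged side note: the command above leaves Postgres reachable only on Docker's internal network. If you also want to connect from tools running on your host machine, publish the default Postgres port 5432 when starting the container (the container name pgtest is a placeholder):

docker run --name pgtest -e POSTGRES_PASSWORD=Test@123 -p 5432:5432 -d postgres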


Just run the docker ps command to check that the container is running.

docker ps

CONTAINER ID   IMAGE      COMMAND                  CREATED         STATUS   

83ec4a222   postgres   "docker-entrypoint.s…"   2 minutes ago   Up 


Let's enter the container's bash shell by running the command below.

docker exec -it 83ec4a222 bash

root@83ec4a222:/# 


Connect to Postgres now:

psql -h localhost -p 5432 -U postgres -w

psql (14.0 (Debian 14.0-1.pgdg110+1))

Type "help" for help.


You are now connected to Postgres. Let's execute some queries (we'll create a table a little later).

postgres=# \l

                                 List of databases

   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   

-----------+----------+----------+------------+------------+-----------------------

 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 

 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +

           |          |          |            |            | postgres=CTc/postgres

 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +

           |          |          |            |            | postgres=CTc/postgres

(3 rows)


postgres=# 



Let's check the current database name by running the command below.

postgres=# select current_database();

 current_database 

------------------

 postgres

(1 row)


So the current database is postgres. Now let's check how many databases exist on the system.

postgres=# select datname from pg_catalog.pg_database;

  datname  

-----------

 postgres

 template1

 template0

(3 rows)


There are 3 databases in total on the system.

You can list the tables in a database by querying the information schema.

postgres=# select table_name from information_schema.tables limit 10;

      table_name       

-----------------------

 pg_statistic

 pg_type

 pg_foreign_table

 pg_authid

 pg_shadow

 pg_statistic_ext_data

 pg_roles

 pg_settings

 pg_file_settings

 pg_hba_file_rules

(10 rows)
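
And to make good on creating a table, here is a minimal, hedged psql session (the table name users and its columns are just placeholders):

postgres=# create table users (id serial primary key, name text);
CREATE TABLE
postgres=# insert into users (name) values ('alice');
INSERT 0 1
postgres=# select * from users;
 id | name
----+-------
  1 | alice
(1 row)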



We can do a lot more than this with Postgres; this was just a small taste. We can get information about all tables and databases just by using the information schema. Docker is very useful here when we don't want to install Postgres on the system itself: we can run it inside a container and leverage the power of Docker.


How to dockerize your Python application

Dockerize your Python application






Docker is a technology that lets you build, deploy and run your applications. Docker enables you to separate your infrastructure from your application. With Docker, all you need to do is write your code,
dockerize it and distribute it in the form of an image. That way, anyone who is running Docker can use your application.

What does it mean to dockerize an application?

Dockerizing means you write your code on your system, then prepare an image and distribute it over the internet or on Docker Hub. Users don't have to worry about the underlying infrastructure and dependencies.

Let's write a Python program that counts the occurrences of words in a given string.


# Input: string = "Docker is a technology which
#  lets you build, deploy and run your applications."
# Count the occurrences of words in a given string.


def findFreq(s):
    dictt = {}
    strng = s.split(" ")
    strr1 = set(strng)
    for word in strr1:
        # Note: str.count() counts substring occurrences, not whole words,
        # which is why short tokens like 'a' (and the empty string, whose
        # count is len(s) + 1) come out inflated in the sample output below.
        dictt[word] = s.count(word)
    return dictt


if __name__ == "__main__":
    x = input("Enter your string:")
    # raw_input() in Python 2.x, input() in Python 3.x
    print(findFreq(x))

# Output: {'a': 4, 'and': 1, 'run': 1, '': 80,
#  'deploy': 1, 'technology': 1, 'is': 1,
#  'you': 2, 'lets': 1, 'applications.': 1,
#  'which': 1, 'build,': 1, 'Docker': 1, 'your': 1}

Save this file as findfrequency.py. I am saving it in the current directory for convenience, but you can save it anywhere and use the absolute path.


Now let's create a Dockerfile.


FROM python:3

We need Python inside Docker, so we use the FROM instruction. This creates a layer from the python image, meaning our image is based on the official Python image.

Now we need to run our Python file, so we add it to the image.

ADD findfrequency.py /

Use CMD to set the command that executes when a container starts from the image.

CMD ["python", "./findfrequency.py"]

Combine all of the above lines to create the Dockerfile:

FROM python:3
ADD findfrequency.py /
CMD ["python", "./findfrequency.py"]

So we have now created a Dockerfile. I saved it with the name "Dockerfile" in the current directory. When you run the docker build . command, Docker looks for a file named Dockerfile; if the file doesn't exist or the name is wrong, you'll get a "file not found" error.

Now we are ready to build an image from the Dockerfile.

Open the terminal, make sure you are in the directory where you saved your Dockerfile and Python file, and run the command below.

docker build -t myapp .


-t : tags your image with a name. In this case I named my image "myapp".
. (dot) : the build context, i.e. the current directory.

OK, so you have successfully built your image. Now let's check what's inside the image by inspecting it.

docker inspect myapp

[

    {

        "Id": "sha256:c4595feabbd0b9aba4ae67037ea3c43a8c0aaf2abe6f6fd28d25b22a7cf9",

        "RepoTags": [

            "myapp:latest"

        ],

        "RepoDigests": [],

        "Parent": "",

        "Comment": "buildkit.dockerfile.v0",

        "Created": "2021-10-01T08:42:53.450488763Z",

        "Container": "",

        "ContainerConfig": {

            "Hostname": "",

            "Domainname": "",

            "User": "",

            "AttachStdin": false,

            "AttachStdout": false,

            "AttachStderr": false,

            "Tty": false,

            "OpenStdin": false,

            "StdinOnce": false,

            "Env": null,

            "Cmd": null,

            "Image": "",

            "Volumes": null,

            "WorkingDir": "",

            "Entrypoint": null,

            "OnBuild": null,

            "Labels": null

        },

        "DockerVersion": "",

        "Author": "",

        "Config": {

            "Hostname": "",

            "Domainname": "",

            "User": "",

            "AttachStdin": false,

            "AttachStdout": false,

            "AttachStderr": false,

            "Tty": false,

            "OpenStdin": false,

            "StdinOnce": false,

            "Env": [

                

                "LANG=C.UTF-8",

                "PYTHON_VERSION=3.9.7",

                "PYTHON_PIP_VERSION=21.2.4",

                "PYTHON_SETUPTOOLS_VERSION=57.5.0",

                "PYTHON_GET_PIP_SHA256=fa6f3fb93cce234cd4e8dd2be9c247653b52855a48dd44e6b21ff28b"

            ],

            "Cmd": [

                "python",

                "./findfrequency.py"

            ],


You'll see output something like the above. Our Python script appears in the output under the Cmd key.

Let's run the image.

docker run -it myapp   

Enter your string: This is my test to test dockerfile. 

{'': 37, 'is': 2, 'dockerfile.': 1, 'to': 1, 'my': 1, 'test': 2, 'This': 1}


See the output above and pass the desired string to count the words.

So we have successfully dockerized our application. You can share this image with others so that they can use your program without having to worry about installing dependencies that could cause it to crash. A sketch of publishing the image follows below.
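
As a hedged sketch of sharing the image via Docker Hub (your-username is a placeholder for your Docker Hub account, and the 1.0 tag is arbitrary):

docker login
docker tag myapp your-username/myapp:1.0
docker push your-username/myapp:1.0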

How to create a Dockerfile to automate the image build process

 Creating a Dockerfile




A Dockerfile is a text file that contains the set of instructions used to build an image via the command line. It is an automated way to build an image from a set of commands.

docker build is the command that builds an image from a Dockerfile. Let's create a Dockerfile and put some instructions inside it.

touch Dockerfile


The touch command above creates a new file on macOS/Linux. You can create the Dockerfile anywhere, but for convenience I am creating it in the current directory so that I don't have to type its absolute path.

Always remember these instructions when creating a new Dockerfile:

FROM : creates a layer from a base image
RUN : executes commands while building your image
MAINTAINER : records the name of the image's maintainer (a best practice; newer Dockerfiles use a LABEL instead)
CMD : the command that runs inside the container when it starts

I have created the Dockerfile and put the text below inside it.

FROM ubuntu


MAINTAINER DATATIPSS


RUN apt-get update


CMD ["echo","This is my first image using dockerfile"]


I am using ubuntu as the base image and setting the maintainer to DATATIPSS. You can use your own name.

Now build the image from the Dockerfile using the docker build -t myapp . command.

Here -t tags your image with a name, and . (dot) tells Docker to use the current directory (where the Dockerfile lives) as the build context.

docker build -t myapp .

[+] Building 79.3s (6/6) FINISHED                                                                                                                             

 => [internal] load build definition from Dockerfile                                                                                                     0.0s

 => => transferring dockerfile: 152B                                                                                                                     0.0s

 => [internal] load .dockerignore                                                                                                                        0.0s

 => => transferring context: 2B                                                                                                                          0.0s

 => [internal] load metadata for docker.io/library/ubuntu:latest                                                                                         7.9s

 => [1/2] FROM docker.io/library/ubuntu@sha256:9d6a8699fb5c9c39cf71bd6219f0400981c570894cd8cbea30d3424a31f                                          7.3s

 => => resolve docker.io/library/ubuntu@sha256: 9d6a8699fb5c9c39cf71bd6219f0400981c570894cd8cbea30d3424a31f                                          0.0s

 => => sha256: 9d6a8699fb5c9c39cf71bd6219f0400981c570894cd8cbea30d3424a31f 1.42kB / 1.42kB                                                           0.0s

 => => sha256: 9d6a8699fb5c9c39cf71bd6219f0400981c570894cd8cbea30d3424a31f 529B / 529B                                                               0.0s

 => => sha256:54ab604fab8d1b3d1c8e02509cc7031f8541428051401f4122619e5968e16 1.48kB / 1.48kB                                                           0.0s

 => => sha256:ab2d02b1ec420fdb84c9f52abda403b6a0f5de904a2ecda5ae4f5cd6e4d46 27.17MB / 27.17MB                                                         6.6s

 => => extracting sha256:ab2d02b1ec420fdb84c9f52abda403b6ae0f5de904a2ecda5ae4f5cd6e4d46                                                                0.7s

 => [2/2] RUN apt-get update                                                                                                                            63.9s

 => exporting to image                                                                                                                                   0.0s

 => => exporting layers                                                                                                                                  0.0s

 => => writing image sha256:76ff7254c9ca346bfe34c42c3f7c17ad0958e70210f01f9e200cbdadb2e4f   



OK, now you have successfully created your image. Let's check it and run it using the docker run image_id_or_image_name command.

Check images first:

docker images

REPOSITORY        TAG       IMAGE ID       CREATED         SIZE

myapp             latest    76ff7254c9ca   3 minutes ago   92.2MB


Now run the image:

docker run myapp

This is my first image using dockerfile



So we have successfully created our first image and ran it. 

You can also inspect the image to get insights into it. If you are using an image built by someone else, you can always inspect it before running it to check that it is a legitimate image: you can check the author and what commands the image runs.

In this case you can check that the author is DATATIPSS and that under the Cmd key there is only the echo command.

docker image inspect myapp

[

    {

        "Id": "sha256:76ff7254c9ca346bfe34c42c3f7c1958e70210f01f9e200cbdadb2e4f",

        "RepoTags": [

            "myapp:latest"

        ],

        "RepoDigests": [],

        "Parent": "",

        "Comment": "buildkit.dockerfile.v0",

        "Created": "2021-09-28T17:40:34.86456821Z",

        "Container": "",

        "ContainerConfig": {

            "Hostname": "",

            "Domainname": "",

            "User": "",

            "AttachStdin": false,

            "AttachStdout": false,

            "AttachStderr": false,

            "Tty": false,

            "OpenStdin": false,

            "StdinOnce": false,

            "Env": null,

            "Cmd": null,

            "Image": "",

            "Volumes": null,

            "WorkingDir": "",

            "Entrypoint": null,

            "OnBuild": null,

            "Labels": null

        },

        "DockerVersion": "",

        "Author": "DATATIPSS",

        "Config": {

            "Hostname": "",

            "Domainname": "",

            "User": "",

            "AttachStdin": false,

            "AttachStdout": false,

            "AttachStderr": false,

            "Tty": false,

            "OpenStdin": false,

            "StdinOnce": false,

            "Env": [

                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

            ],

            "Cmd": [

                "echo",

                "This is my first image using dockerfile"

            ],

            "ArgsEscaped": true,

            "Image": "",

            "Volumes": null,

            "WorkingDir": "",

            "Entrypoint": null,

            "OnBuild": null,

            "Labels": null

        },

        "Architecture": "arm64",

        "Variant": "v8",

        "Os": "linux",

        "Size": 92173773,

        "VirtualSize": 92173773,

        "GraphDriver": {

            "Data": {

                "LowerDir": "/var/lib/docker/overlay2/76ff7254c9ca346bfe34c42c3f7c1958e70210f01f9e200cbdadb2e4f/diff",

                "MergedDir": "/var/lib/docker/overlay2/c7dp26x65obhv96p76ff9lbo1/merged",

                "UpperDir": "/var/lib/docker/overlay2/c7dp26x65obhv96p76ff9lbo1/diff",

                "WorkDir": "/var/lib/docker/overlay2/c7dp26x65obhv96p76ff9lbo1/work"

            },

            "Name": "overlay2"

        },

        "RootFS": {

            "Type": "layers",

            "Layers": [

                "sha256: 76ff7254c9ca346bfe34c42c3f7c1958e70210f01f9e200cbdadb2e4f",

                "sha256:ecd02c9154cf7dc6fa2257f9378687064ebbb87524aa7191a683650b0c"

            ]

        },

        "Metadata": {

            "LastTagTime": "2021-09-28T17:43:44.89023209Z"

        }

    }

]


That way you can check what is going to run from the image. Once you build your image you can make it publicly available for everyone: you can upload it to Docker Hub and anyone can use it.

You can build images for multiple applications and distribute them over the internet so that anyone can use your Docker images.
