
Containers orchestration: Kubernetes vs Docker swarm




When deploying applications at scale, you need to plan all your architecture components with current and future strategies in mind. Container orchestration tools help achieve this by automating the management of application microservices across all clusters. 

Here are a few of the major container orchestration tools:

  • Docker Swarm
  • Kubernetes
  • OpenShift
  • Hashicorp Nomad
  • Mesos
Today we'll talk about Docker Swarm and Kubernetes and compare their features.

What is container orchestration?

Container orchestration is a set of practices for managing containers at large scale. As soon as containerized applications scale to a large number of containers, container management capabilities become necessary: provisioning containers, scaling up and down, managing networking, load balancing, security, and more.

Let's talk Kubernetes

Kubernetes is an open-source, cloud-native infrastructure tool that automates the scaling, deployment, and management of containerized applications.

Kubernetes was originally developed by Google and later handed over to the Cloud Native Computing Foundation (CNCF) for enhancement and maintenance. Kubernetes is the most popular and most in-demand orchestration tool. It is also complex and a bit more difficult to learn compared to Swarm.

Here are a few of the main architecture components of Kubernetes:

Cluster 

A collection of multiple nodes, typically at least one master node and several worker nodes (also known as minions).

Node

A physical or Virtual Machine(VM)

Control Plane

A component that schedules and deploys application instances across all nodes.

Kubelet

An agent process running on each node. It is responsible for managing the state of its node and can perform several actions to maintain the desired state.

Pods

Pods are the basic scheduling unit. A pod consists of one or more containers co-located on a host machine that share the same resources.

Deployments, Replicas and ReplicaSets

A Deployment describes the desired state of an application. It creates a ReplicaSet, which in turn ensures that the specified number of pod replicas is running at any given time.
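As a quick illustration of how Deployments and ReplicaSets relate, here is a hypothetical kubectl session (the name web is made up, and it assumes kubectl is installed and pointed at a working cluster):

```shell
# A Deployment creates a ReplicaSet, which keeps the pods running
kubectl create deployment web --image=nginx --replicas=3
kubectl get replicasets        # shows the ReplicaSet the Deployment created

# Scaling the Deployment updates the ReplicaSet's desired replica count
kubectl scale deployment web --replicas=5
kubectl get pods               # the ReplicaSet now maintains five pods
```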

Docker Swarm

Docker Swarm is native to the Docker platform. It was developed to maintain application efficiency and availability in different runtime environments by deploying containerized application microservices across multiple clusters.

A mix of docker-compose, swarm mode, and overlay networks can be used to manage a cluster of Docker containers.
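As a sketch of that mix in practice (the network, file, and stack names here are assumptions, and the commands must run on a swarm manager node):

```shell
# Create an overlay network that spans all nodes of the swarm
docker network create -d overlay app-net

# Deploy the services defined in a compose file as a swarm stack
docker stack deploy -c docker-compose.yml myapp

# List the services that the stack is running
docker stack services myapp
```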

Docker Swarm is still maturing in terms of functionality when compared to other open-source container orchestration tools.

Here are a few of the main architecture components of Docker Swarm:

Swarm 

A collection of nodes that includes at least one manager node and several worker nodes.

Service

A task that agent nodes or managers are required to perform on the swarm.

Manager node

A node tasked with delegating work. It manages and distributes tasks among the worker nodes.

Worker node

A node responsible for running tasks distributed by the swarm's manager node.

Tasks

A set of commands to run inside a container; tasks are the atomic unit of scheduling in a swarm.
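To see how services and tasks relate, here is a minimal sketch (the service name web is hypothetical; the commands must run on a manager node):

```shell
# A replicated service: the manager splits it into three tasks,
# each task running one container on some node of the swarm
docker service create --name web --replicas 3 -p 8080:80 nginx

# List the individual tasks backing the service and where they run
docker service ps web
```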

Choosing the right Orchestrator for your containers

Kubernetes focuses on open-source and modular orchestration, offering an efficient container orchestration solution for high demand applications with complex configuration.

Docker Swarm emphasises ease of use, making it most suitable for simple applications that are quick to deploy and easy to manage.

Some fundamental differences between both 

GUI:

Kubernetes features a web user interface (the Dashboard) that helps you:
  • Deploy containerized applications on a cluster
  • Manage cluster resources
  • View error logs, deployments, and jobs
Unlike Kubernetes, Docker Swarm does not come with a web UI to deploy applications and orchestrate containers, but there are third-party tools that provide this for Docker.
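For example, the Dashboard can be deployed into a cluster with kubectl (the version in the URL is an assumption; check the Kubernetes Dashboard releases for the current one):

```shell
# Deploy the Kubernetes Dashboard into the cluster
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Start a local proxy, then open the Dashboard in a browser at
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubectl proxy
```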

Availability:

Kubernetes ensures high availability by creating clusters that eliminate single points of failure. You can use a stacked control plane, where etcd members are co-located with the control plane nodes so the cluster survives a failover, or you can use an external etcd cluster and manage the control plane nodes separately behind a load balancer.

To maintain high availability, Docker Swarm uses service replication at the swarm node level: a swarm manager deploys multiple instances of the same container as replicas of a service across the nodes.

Scalability:

Kubernetes supports autoscaling at both the cluster level and the pod level. Docker Swarm, on the other hand, deploys containers quickly, which gives the orchestration tool fast reaction times that allow for on-demand scaling.
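As a rough sketch of the two approaches (the deployment/service name web is an assumption, and each command targets its own cluster):

```shell
# Kubernetes: pod-level autoscaling driven by CPU usage
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80

# Docker Swarm: manual, but fast, on-demand scaling of a service
docker service scale web=10
```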

Monitoring: 

Kubernetes offers multiple native logging and monitoring solutions for services deployed within a cluster, and it supports third-party integrations for event-based monitoring.

Docker Swarm, on the other hand, doesn't offer a built-in monitoring solution like Kubernetes, so you need to rely on third-party applications for monitoring. As a result, monitoring a Docker Swarm is considered more complex than monitoring Kubernetes.
 
Note: If you think this helped you and you want to learn more stuff on devops, then I would recommend joining the Kodecloud devops course and go for the complete certification path by clicking this link

Getting started with Docker Swarm

Docker Swarm is a container orchestration tool that allows users to manage multiple containers deployed across multiple hosts or machines.

In Docker Swarm you create at least one manager node and one or more worker nodes. The manager manages all the workers.


One of the key benefits of Docker Swarm is high availability. You deploy your application on multiple nodes, and one or more managers manage all the worker nodes, ensuring the high availability of your application.

The second important benefit is load balancing, a crucial part of any application deployed in production. Docker Swarm makes sure that all containers have sufficient resources and that load is balanced properly across all nodes for optimal efficiency.

Scalability also plays a very important role, letting the application scale up when required.

Demo

Let's start with the demo part on Docker Swarm. To perform the demo you need to install Docker first. If you are using an older Docker version, docker-machine is installed along with Docker, but in recent versions it is not, so you need to install docker-machine separately.

You also need VirtualBox to run Docker Swarm, so make sure VirtualBox is installed on your machine.

Now run the command below to check whether docker-machine is on your system (make sure Docker is running). It will print the docker-machine version.

$ docker-machine version
docker-machine.exe version 0.16.2, build bd45ab13



If you get "docker-machine: command not found" on Windows, you need to install docker-machine separately. Install Git Bash first; once it is installed, you can run the commands below.

On Windows, using Git Bash:

$ if [[ ! -d "$HOME/bin" ]]; then mkdir -p "$HOME/bin"; fi && \
curl -L https://github.com/docker/machine/releases/download/v0.16.2/docker-machine-Windows-x86_64.exe > "$HOME/bin/docker-machine.exe" && \
chmod +x "$HOME/bin/docker-machine.exe"



Run the above command in Git Bash. Linux and Mac users can follow the installation instructions in the Docker Machine documentation.




Now that we have the setup, let's create the Docker manager first.

 docker-machine create --driver virtualbox manager1



If this command runs successfully then we are good, but if it produces the error below,

Running pre-create checks...
Error with pre-create check: "This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory"


then you probably need to bypass the VirtualBox VT-X check in your command. Run the command below.

docker-machine create -d virtualbox --virtualbox-memory=4096   --virtualbox-cpu-count=4 --virtualbox-disk-size=40960   --virtualbox-no-vtx-check manager


~ Here, manager is the name of the node; you can give it any name.

While it's running the checks, you can open VirtualBox and see that it is creating a VM for you.



I kept the name default, so my first VM was created with the name default. Run the same command with different names, like below.

docker-machine create -d virtualbox --virtualbox-memory=4096   --virtualbox-cpu-count=4 --virtualbox-disk-size=40960   --virtualbox-no-vtx-check worker1



docker-machine create -d virtualbox --virtualbox-memory=4096   --virtualbox-cpu-count=4 --virtualbox-disk-size=40960   --virtualbox-no-vtx-check worker2




Now that we have created all the nodes, let's check which machines are running on the system using the command below.

$ docker-machine ls

NAME      ACTIVE   DRIVER       STATE     URL                    SWARM   DOCKER      ERRORS
default   -        virtualbox   Running   tcp://192.168.99.100           v19.03.12
worker1   -        virtualbox   Running   tcp://192.168.99.101           v19.03.12
worker2   -        virtualbox   Running   tcp://192.168.99.102           v19.03.12



These IP addresses will be the same for everyone; Docker generates them by default. Note that only the last octet differs: 100, 101, 102.

Let's make the default node the manager node using the command below.

$ docker-machine ssh default

   ( '>')
  /) TC (\   Core is distributed with ABSOLUTELY NO WARRANTY.
 (/-_--_-\)           www.tinycorelinux.net


The above command logs you in to the default node. Once you are in the node, run the command below to make it the manager.

docker swarm init --advertise-addr 192.168.99.100:2376




Output below:

Swarm initialized: current node (c7if56abzumka79k8mg0xbnu0) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token xyz


You can now run the command that was just generated by the manager node; it needs to be run on a worker node. To log in to the worker node, run the command below.

$ docker-machine ssh worker1


Now run the docker swarm join --token command there to join this node to the manager as a worker.
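If you have lost the join command, you can re-print it at any time from the manager node:

```shell
# Run on the manager: prints the full "docker swarm join" command
# (with the real token) for adding worker nodes
docker swarm join-token worker
```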

Check all the nodes by running the docker node ls command on the manager.

So far we have created three nodes in total: one manager and two worker nodes. That was it for preparing the setup and running your Docker swarm. We created multiple machines, then made one of them a manager and joined the other two as workers.
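As a hypothetical next step, you can verify the swarm from the manager node by deploying a small test service (the service name and port are made up):

```shell
# List all swarm nodes: one manager (Leader) and two workers
docker node ls

# Deploy a test service with three replicas spread across the nodes
docker service create --name hello --replicas 3 -p 8080:80 nginx

# Confirm the service is running with 3/3 replicas
docker service ls
```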


