OOM: Out of memory / insufficient memory errors inside a Docker container

How to handle an out of memory error inside a Docker container
Sometimes we run into scenarios where we hit an insufficient memory or out of memory (OOM) error. In this article we'll look at why this error happens and how we can resolve it.

By default, Docker applies no resource limits: a container can use as much of a given resource as the host's kernel scheduler allows. Docker provides various ways to control this, simply by passing flags to the docker run command.

Why mitigating this risk is so important

It is important to ensure that containers do not consume too much of the host's memory. On Linux systems, if the kernel detects that there is not enough memory to perform important system functions, it throws an OOM (out of memory) error and starts killing processes to free up memory. Any process can be killed by the kernel, which means it can kill Docker and Docker containers as well. This is a high risk if your application is running in production: an OOM event kills your container and your production application faces downtime. So it is very important to understand and control how Docker uses memory.

What we can do

  1. Perform some analysis to understand the application's memory requirements before deploying it to production.
  2. Make sure you are using a host with sufficient resources to run your containers and applications.
  3. Control your container's behaviour and restrict its memory consumption.
Docker allows you to set a hard memory limit so that a container can't use more memory than it is assigned.

Let's try to understand the scenario and reproduce the error. First, pull the alpine image with the command below.

docker pull alpine


Once the image has been downloaded, run the docker run command.

docker run -it --name alpine3 alpine


Check the running container

docker ps

CONTAINER ID   IMAGE     COMMAND     CREATED          STATUS          PORTS     NAMES

74f020533   alpine    "/bin/sh"   51 seconds ago   Up 50 seconds             alpine3



When we ran our container we didn't specify any memory limit. Let's check how much memory is available to it by default.

Use the docker stats command to check what's going on inside the container.

CONTAINER ID   NAME      CPU %     MEM USAGE / LIMIT   MEM %     NET I/O     BLOCK I/O    PIDS

74f020533   alpine3   0.00%     500KiB / 1.942GiB   0.02%     866B / 0B   131kB / 0B   1


Memory usage is only 500 KiB, but the container is allowed to consume up to 1.942 GiB, the entire memory available to Docker on this host. That's too much.

Now let's run another container, this time with a hard memory limit.

docker run -it --memory 6m --name alpinememory alpine


6m means 6 MB. You can also use the suffixes b (bytes), k (kilobytes), m (megabytes), and g (gigabytes).
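
A couple of related flags are also worth knowing; the values below are illustrative, not a recommendation. --memory-swap caps memory plus swap (setting it equal to --memory disables swap for the container), and --memory-reservation sets a soft limit that the kernel enforces when the host is under memory pressure.

docker run -it --memory 6m --memory-swap 6m --memory-reservation 4m --name alpinelimits alpine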

Now let's look at docker stats:

CONTAINER ID   NAME           CPU %     MEM USAGE / LIMIT   MEM %     NET I/O           BLOCK I/O     PIDS

a68b65bf36   alpinememory   0.00%     668KiB / 6MiB       10.87%    2.25MB / 68.3kB   31.3GB / 0B   1


Mem usage: 668 KiB, mem limit: 6 MiB.

Now let's go inside the alpinememory container and install Python, and see what happens:

/ # apk add python3

fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/aarch64/APKINDEX.tar.gz

fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/aarch64/APKINDEX.tar.gz



Open another terminal and check docker stats for this container

CONTAINER ID   NAME           CPU %     MEM USAGE / LIMIT   MEM %     NET I/O           BLOCK I/O     PIDS

a68b65bf36   alpinememory   73.76%    6MiB / 6MiB         100.00%   2.25MB / 68.3kB   47.4GB / 0B   2


Mem usage: 6 MiB, mem limit: 6 MiB.

The container has hit its hard limit: it cannot get the memory needed to finish installing Python, and the kernel kills the process.

Killed
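
You can also ask Docker whether the kernel's OOM killer terminated the container itself. A quick check (note: in this demo only the apk process inside the shell was killed, so the flag may still read false; it reports true when the container's main process is OOM-killed):

docker inspect --format '{{.State.OOMKilled}}' alpinememory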



Conclusion:

We always need to plan our memory usage during analysis and set a hard limit on the container. Never leave a container running without memory limits, so that it doesn't cause problems for the host. At the same time, keep in mind not to set the limit too low, or the container won't have the memory it needs to perform its tasks.


What are Volumes in Docker and how to use them

Docker volumes
Volumes are the preferred mechanism for persisting data generated by Docker containers, and they are completely managed by Docker. There are a couple of important points that may convince you to use volumes over bind mounts.

  • Volumes are easy to back up or migrate.
  • Volumes can be easily shared among containers.
  • Volumes work on both Linux and Windows.
  • Volumes can be managed with the Docker CLI or the Docker API.
  • New volumes can have their content pre-populated by a container.
  • Volumes are a better choice than persisting data in a container's writable layer.
  • A volume's contents exist outside the lifecycle of a given container.

If your container generates non-persistent state data, consider using a tmpfs mount to avoid storing the data permanently anywhere. Avoiding writes to the container's writable layer also increases the container's performance. Always keep your containers lightweight.


What is tmpfs mount 

If you are running containers on a Linux system, you have one more option: the tmpfs mount.
When you create a container with a tmpfs mount, the container can create files outside its writable layer.

A tmpfs mount is temporary and persists only in the host's memory. When you stop the container, the tmpfs mount is removed and any data written there is gone.

The best use case is when you want to store sensitive data temporarily and don't want to persist it in either the host's filesystem or the container's writable layer.


Limitations of tmpfs mounts

If you want to use a tmpfs mount, remember these two things:
  • You cannot share a tmpfs mount between containers.
  • This functionality is only available on Linux.

Let's create a tmpfs mount

First, start a plain alpine container and inspect it, so we have a baseline to compare against later. Give the container a name; in my case I named it tmptest0.
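
The original post doesn't show the exact command; a minimal one, mirroring the flags used for tmptest below, would be:

docker run -it -d --name tmptest0 alpine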


docker inspect tmptest0

[

    {

            "AutoRemove": false,

            "VolumeDriver": "",

            "VolumesFrom": null,

            "CapAdd": null,

            "CapDrop": null,

            "CgroupnsMode": "host",

            "Dns": [],

            "DnsOptions": [],

            "DnsSearch": [],

            "ExtraHosts": null,

            "GroupAdd": null,

            "IpcMode": "private",

            "Cgroup": "",

            "Links": null,

            "OomScoreAdj": 0,

            "PidMode": "",

            "Privileged": false,

            "PublishAllPorts": false,

            "ReadonlyRootfs": false,

            "SecurityOpt": null,


Now let's start one more container from the alpine image, give it the name tmptest, and attach a tmpfs mount.

docker run -it -d --name tmptest --tmpfs /app alpine
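
Alternatively, the more explicit --mount syntax does the same thing and also lets you set tmpfs options such as a size limit (the 4m value here is purely illustrative):

docker run -it -d --name tmptest2 --mount type=tmpfs,destination=/app,tmpfs-size=4m alpine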


Now inspect this tmptest container:

 "AutoRemove": false,

            "VolumeDriver": "",

            "VolumesFrom": null,

            "CapAdd": null,

            "CapDrop": null,

            "CgroupnsMode": "host",

            "Dns": [],

            "DnsOptions": [],

            "DnsSearch": [],

            "ExtraHosts": null,

            "GroupAdd": null,

            "IpcMode": "private",

            "Cgroup": "",

            "Links": null,

            "OomScoreAdj": 0,

            "PidMode": "",

            "Privileged": false,

            "PublishAllPorts": false,

            "ReadonlyRootfs": false,

            "SecurityOpt": null,

            "Tmpfs": {

                "/app": ""

            },



See the difference: this container's configuration now includes a Tmpfs entry mounting /app.


Let's create the volume


docker volume create new-volume                     

new-volume


Check whether the volume has been created.

docker volume ls

DRIVER    VOLUME NAME

local     new-volume


So we have the volume now. Let's inspect it and check its details:

docker volume inspect new-volume

[

    {

        "CreatedAt": "2021-09-22",

        "Driver": "local",

        "Labels": {},

        "Mountpoint": "/var/lib/docker/volumes/new-volume/_data",

        "Name": "new-volume",

        "Options": {},

        "Scope": "local"

    }

]



Start a container with the volume


docker run -d --name voltest -v new-volume:/app alpine
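
You can achieve the same thing with the more verbose --mount syntax; the command below is equivalent (voltest2 is just a hypothetical name to avoid a clash):

docker run -d --name voltest2 --mount type=volume,source=new-volume,target=/app alpine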


Inspect the container and you'll see the result below:

docker container inspect voltest


"HostConfig": {

            "Binds": [

                "new-volume:/app"

            ],


Now perform the cleanup with the commands below.


Here I am stopping all running containers at once.

docker container stop $(docker ps -q) 


Remove all containers

docker container rm $(docker ps -aq)


Remove the volume

docker volume rm new-volume


That was it on volumes. There is a lot more to cover, such as creating a volume using a volume driver, which I'll post about in upcoming blogs. That's how we create and use volumes in Docker.

Docker essentials: learning Docker networking

Getting started with Docker networking
In this article we are going to learn how to create and use Docker networks.

Listing all the Docker networks

Use the docker network ls command. Output below:

docker network ls

NETWORK ID   NAME      DRIVER    SCOPE

e152bd78da   bridge    bridge    local

7e94216ea4   host      host      local

9eb7b364ec   none      null      local



You can see in the output above that the driver of the default network is bridge. The bridge driver is what Docker sets up by default.

If you don't specify a network, containers are attached to the default bridge network. Bridge networks are used when your applications run in standalone containers that need to communicate.

There is a host network as well: for a standalone container, it removes network isolation between the container and the Docker host.

User-defined bridge networks are used when you need multiple containers to communicate on the same Docker host.

Host networks are best when the network stack should not be isolated from the Docker host. The container shares the host's networking namespace and does not get allocated its own IP address.

Overlay networks are best when containers running on different Docker hosts need to communicate, or when multiple applications work together as swarm services.

Docker with IPTables

Docker manipulates iptables rules to provide network isolation. If you are running Docker on a host that is exposed to the Internet, you will probably want iptables policies in place to prevent unauthorized access to containers or any other services running on the host.

Docker installs two custom iptables chains named DOCKER-USER and DOCKER and it ensures that incoming packets are always checked by these two chains first.

All of Docker's iptables rules are added to the DOCKER chain. Don't manipulate this chain manually. If you need rules that load before Docker's rules, add them to the DOCKER-USER chain; these are applied before any rules Docker creates automatically.


Run a container on the HOST network

docker run --rm -d --network host --name my_nginx nginx


--rm : automatically removes the container when it exits

Because the container shares the host's network stack, no new IP address is allocated to the container.
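
One way to verify this (a quick check; for a host-networked container the output should be an empty string):

docker inspect --format '{{.NetworkSettings.IPAddress}}' my_nginx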

Create a user-defined bridge network

docker network create new_net  
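
You can also set the driver and addressing explicitly; the subnet and gateway values below are purely illustrative:

docker network create --driver bridge --subnet 172.25.0.0/16 --gateway 172.25.0.1 new_net_custom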


You can inspect the new network 


docker inspect new_net

[

    {

        "Name": "new_net",

        "Id": "8cd40b2f992bd6045824c70163dbe93462200d46897dbff18fe71",

        "Scope": "local",

        "Driver": "bridge",

        "EnableIPv6": false,

        "IPAM": {

            "Driver": "default",

            "Options": {},

            "Config": [

                {

                    "Subnet": "172.xx.0.0/16",

                    "Gateway": "172.xx.0.1"

                }

            ]

        },

        "Internal": false,

        "Attachable": false,

        "Ingress": false,

        "ConfigFrom": {

            "Network": ""

        },

        "ConfigOnly": false,

        "Containers": {

            "f9dca1401da2e6a42f124799c282592f5586cd8587c665ec32bc284": {

                "Name": "my_nginx",

                "EndpointID": "0e14e2e057c56e6583ddb7a6fed7a2545f08d3c0ec95314b3f290c2fc5678",

                

            }

        },

        "Options": {},

        "Labels": {}

    }

]

Now let's run a container on this network:

docker run --rm -d --network new_net --name my_nginx nginx
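
One benefit of a user-defined bridge is automatic DNS resolution between containers. Assuming my_nginx is attached to new_net as above, another container on the same network can reach it by name:

docker run --rm --network new_net alpine ping -c 2 my_nginx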


Suppose a container is already running; how can we connect it to this network? Use docker network connect:

docker network connect new_net nginx

Let's disconnect a container from the user-defined bridge network:

docker network disconnect new_net alp 



Restrict Connections to the Docker Host

By default, all external source IPs are allowed to connect to the Docker host. To allow only a specific IP or network to access the containers, insert a negated rule at the top of the DOCKER-USER filter chain. For example, the rule below restricts external access from all IP addresses except 192.168.1.1:
 

iptables -I DOCKER-USER -i ext_if ! -s 192.168.1.1 -j DROP


Please note that you will need to change ext_if to correspond with your host’s actual external interface. You could instead allow connections from a source subnet. The following rule only allows access from the subnet 192.168.1.0/20:

iptables -I DOCKER-USER -i ext_if ! -s 192.168.1.0/20 -j DROP



That was it: a very short article where I could only touch on the basics of Docker networking,
but I'll try to add more on networking in the upcoming days.

Docker essentials: Docker commands for day-to-day use for beginners

Useful Docker commands for day-to-day use
Docker is an awesome technology in the virtualization world. If you want to learn Docker, or you have just started learning it, you should know these basic commands, which you will use in your daily routine.

First we need to understand what Docker Hub is. Docker Hub is an image repository where everyone pushes their images so that others can use them. Suppose I want to use SQL Server on Docker: I'll download the image from Docker Hub and then use it.

Wait, did you just say SQL Server on Docker? Isn't SQL Server only for Windows? No: SQL Server is now available on the Linux platform as well, using Docker.

To follow the tutorial below, make sure you have installed Docker and it's up and running.
Open your terminal or CMD and try running any docker command. If Docker is not installed you'll get a "command not found" error; if it's installed but the daemon is not running, you'll get an error like "Cannot connect to the Docker daemon".
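
A quick way to check both at once is docker version, which reports the client and, if the daemon is reachable, the server:

docker version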

How to download an image from Docker Hub

Downloading an image with Docker means pulling the image. Use the command below to pull an image. By default, Docker pulls from Docker Hub.

docker pull alpine

Using default tag: latest

latest: Pulling from library/alpine

Digest: sha256:e1c082e3d3c45cccac829840a25941e679c25d412c2fa221cf1a824e6a

Status: Image is up to date for alpine:latest

docker.io/library/alpine:latest
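
By default Docker pulls the latest tag. You can pin a specific version instead (the tag below is just an example):

docker pull alpine:3.14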


The image has been pulled. Let's check the images on your system using the command below.

docker images

REPOSITORY   TAG       IMAGE ID       CREATED       SIZE

alpine       latest    bb3de5531c18   3 weeks ago   5.34MB





You can also use the docker image ls command.

docker image ls

REPOSITORY   TAG       IMAGE ID       CREATED       SIZE

alpine       latest    bb3de5531c18   3 weeks ago   5.34MB



Running image inside container

docker container run --name alp -it alpine

/ # 



You can also omit the container keyword: docker run works exactly the same. (docker container run is simply the newer management-command form of the classic docker run.)

docker run --name alp -it alpine 

/ # 


Here are the flags and keywords in the above command:

  • run : tells Docker to run a container
  • --name alp : gives your container a name (it's always a good idea to name your containers)
  • -it : runs the container in interactive mode with a terminal attached
  • alpine : the name of the image you just pulled
  • -d : detached mode
Using detached mode runs your container in the background; you will not be logged in to the container.

docker run --name alp -dit alpine

a6ba34f88715cf3d63879872a65d6a7aeb48572ee7a79fb22ecf751049c49dc2 

To get a shell inside the container, use the command below:

docker exec -it alp /bin/sh

/ # 

Since we have started our alp container, let's check whether it is running, using docker ps.

docker ps

CONTAINER ID   IMAGE     COMMAND     CREATED         STATUS         PORTS     NAMES

a6ba34f88715   alpine    "/bin/sh"   4 minutes ago   Up 4 minutes             alp



docker ps -a will give you running as well as stopped containers. Use it accordingly.

How to check logs

Check logs using the docker logs container_name command.

docker logs alp 

/ # exit


How to start and stop container

Use docker start container_name_or_id and docker stop container_name_or_id.

docker stop alp

alp


docker start alp

alp



How to inspect container

Use docker inspect container_name_or_id.

docker inspect alp

[

    {

        "Id": "a6ba34f88715cf3d638797a79fb22ecf751049c49dc2",

        "Created": "2021-09-19T08:04:17.202282Z",

        "Path": "/bin/sh",

        "Args": [],

        "State": {

            "Status": "running",

            "Running": true,


You can find all the details about the container here, including its IP address.

docker inspect alp | egrep "IPAddress"

            "SecondaryIPAddresses": null,

            "IPAddress": "172.17.0.2",

                    "IPAddress": "172.17.0.2",




Now, very important: you often want to check how many images and containers are on your system and how much space they are consuming. Use the docker system df command.

The name mirrors the df command on Linux, which reports disk usage.

docker system df

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE

Images          1         1         5.337MB   0B (0%)

Containers      3         1         15B       10B (66%)

Local Volumes   0         0         0B        0B

Build Cache     0         0         0B        0B
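
For a per-image, per-container and per-volume breakdown, add the -v flag:

docker system df -v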



How to check the diff of a container


There is also a very useful command to track changes to files or directories on a container's filesystem. In its output, A means a file was added, C changed, and D deleted.


docker diff alp

C /root

A /root/.ash_history



How to remove containers

First, stop the running containers.

docker stop $(docker ps -q)

a6ba34f88715



$(docker ps -q)

docker ps -q prints only the IDs of running containers, so $(docker ps -q) passes all of those IDs to the surrounding command.

docker rm alp     

alp


The command above removes the alp container.

Remove all stopped containers

docker container prune

WARNING! This will remove all stopped containers.

Are you sure you want to continue? [y/N] y

Deleted Containers:

803ec8603418b8e5ca7cbe9178724ca2714ab3fba48ac406282e78

7a8ee73bef1b766d38594ceeed8bdabf0cbc4e114faa65de71492a


Total reclaimed space: 10B



Remove dangling images

docker image prune

WARNING! This will remove all dangling images.

Are you sure you want to continue? [y/N] y

Total reclaimed space: 0B


Delete the alpine image

docker image rm alpine

Untagged: alpine:latest

Untagged: alpine@sha256:e1c082e3d3c45cccahe679c25d438cc8412c2fa221cf1a824e6a

Deleted: sha256:bb3de5531c18f185667b0be0e400ab244440093de82baf4072e14af3b84

Deleted: sha256:ee420dfed78aeb92dbd73a3fbb59fa5dac4e04639210cc7905



Now check the space again using the docker system df command.

docker system df      

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE

Images          0         0         0B        0B

Containers      0         0         0B        0B

Local Volumes   0         0         0B        0B

Build Cache     0         0         0B        0B



Now you can see that no images or containers remain, and the space has been reclaimed.

So these were my essential basic commands, which should help you work with and debug Docker containers.
In the next article I'll talk about volumes, networks and other topics.

