How to handle out-of-memory errors inside a Docker container
Sometimes we run into scenarios where we hit an insufficient-memory or out-of-memory (OOM) error. In this post we'll look at why this error happens and how we can resolve it.
By default, Docker applies no resource limits: a container can use as much of the host's resources as the kernel scheduler allows. Docker provides several ways to control this, simply by passing flags to the docker run command.
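For example, here are a few of the memory flags docker run accepts. The image name and values below are illustrative, and the commands are guarded so the sketch is a no-op on hosts without Docker:

```shell
if command -v docker >/dev/null 2>&1; then
    # Hard limit: the container may use at most 256 MB of RAM
    docker run --rm --memory 256m alpine echo "hard limit set"

    # --memory-swap is RAM plus swap combined, so this allows 256 MB of swap
    docker run --rm --memory 256m --memory-swap 512m alpine echo "swap capped"

    # Soft limit: memory is reclaimed from this container first
    # when the host runs low on memory
    docker run --rm --memory-reservation 128m alpine echo "soft limit set"
fi

# Swap available to the second container: memory-swap minus memory
swap_mb=$((512 - 256))
```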
Why mitigating this risk is so important
It is important to ensure that a container performing some task does not consume too much of the host's memory. On Linux systems, if the kernel detects that there is not enough memory to carry out an important system task, it raises an OOM (out-of-memory) error and starts killing processes to free up memory. Any process can be killed by the kernel, which means it can kill Docker and Docker containers as well. This is a serious risk in production: if an OOM event kills your container, your application suffers downtime. So it is very important to understand and control how much memory Docker containers use.
What we can do
- We need to perform some analysis to understand the application's memory requirements before deploying it to production.
- Make sure you are using a host with sufficient resources to run your containers and application.
- Control your container's behaviour and restrict its memory consumption.
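For the second point, you can check the host's resources before sizing your containers; a minimal sketch for a Linux host (the docker info step is optional and only runs where Docker is installed):

```shell
if command -v docker >/dev/null 2>&1; then
    # Docker reports the host memory it can see
    docker info --format 'Memory visible to Docker: {{.MemTotal}} bytes'
fi

# Host totals straight from Linux: total memory and CPU count
grep MemTotal /proc/meminfo
nproc
```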
Docker lets you set a hard memory limit so that a container cannot use more memory than it has been assigned.
Let's try to understand the scenario and reproduce the error. First, pull the alpine image:

docker pull alpine

Once the image has been downloaded, start a container with docker run:
docker run -it --name alpine3 alpine
Check the running container
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74f020533 alpine "/bin/sh" 51 seconds ago Up 50 seconds alpine3
When we ran this container we didn't specify any memory limit, so let's check how much is available to it by default. Use the docker stats command to see what's going on inside the container:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
74f020533 alpine3 0.00% 500KiB / 1.942GiB 0.02% 866B / 0B 131kB / 0B 1
Memory usage is only 500 KiB, but the container may consume up to 1.942 GiB, the host's entire memory. That's too much.
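That 1.942 GiB "limit" is simply the host's total memory: with no --memory flag, the cgroup memory limit is effectively unlimited. You can confirm this from inside a container by reading the cgroup files; the exact path depends on whether the host uses cgroup v1 or v2, so this is a sketch covering both:

```shell
if command -v docker >/dev/null 2>&1; then
    # cgroup v2 exposes the limit in memory.max ("max" means unlimited);
    # cgroup v1 uses memory/memory.limit_in_bytes (a huge number when unset)
    docker run --rm alpine sh -c \
        'cat /sys/fs/cgroup/memory.max 2>/dev/null || cat /sys/fs/cgroup/memory/memory.limit_in_bytes'
fi

# The limit docker stats shows with no --memory flag is just the host's total memory, in GiB:
awk '/MemTotal/ {printf "%.3f GiB\n", $2 / 1048576}' /proc/meminfo
```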
Now let's run another container, this time with a hard memory limit.
docker run -it --memory 6m --name alpinememory alpine
Here 6m means 6 MB. Other unit suffixes are b (bytes), k (kilobytes) and g (gigabytes), for example 512k or 2g. Note that Docker requires the --memory value to be at least 6 MB.
Now let's look at docker stats again:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
a68b65bf36 alpinememory 0.00% 668KiB / 6MiB 10.87% 2.25MB / 68.3kB 31.3GB / 0B 1
Mem Usage: 668 KiB, Mem Limit: 6 MiB
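Besides docker stats, docker inspect shows the configured hard limit, reported in bytes; a quick check (assuming the container name alpinememory from above):

```shell
if command -v docker >/dev/null 2>&1; then
    # Prints the configured hard memory limit in bytes (0 means no limit was set)
    docker inspect --format '{{.HostConfig.Memory}}' alpinememory
fi

# For a 6 MB limit this prints 6291456, i.e. 6 * 1024 * 1024 bytes:
six_mb_bytes=$((6 * 1024 * 1024))
echo "$six_mb_bytes"
```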
Now let's go inside the alpinememory container and try to install Python. See what happens:
/ # apk add python3
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/aarch64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/aarch64/APKINDEX.tar.gz
Open another terminal and check docker stats for this container
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
a68b65bf36 alpinememory 73.76% 6MiB / 6MiB 100.00% 2.25MB / 68.3kB 47.4GB / 0B 2
Mem Usage: 6 MiB, Mem Limit: 6 MiB
The container cannot finish installing Python: it hits its 6 MB memory limit and the kernel's OOM killer terminates the process.
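Docker records OOM kills in the container's state, so you can confirm the kill from outside the container. A small helper sketch using docker inspect (the container name alpinememory comes from the run above):

```shell
# Prints "true" if the container's main process was killed by the kernel OOM killer
check_oom() {
    docker inspect --format '{{.State.OOMKilled}}' "$1"
}

# Usage (requires a running Docker daemon):
#   check_oom alpinememory    # should print "true" for the container killed above
```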
Conclusion:
Always analyse your application's memory usage beforehand and set a hard limit on the container accordingly. Never leave a container running in production without a memory limit, so that it can't cause issues for the whole host. At the same time, keep in mind not to set the limit too low, or the container won't have the memory it needs to perform its tasks.
Note: If you think this helped you and you want to learn more about DevOps, then I would recommend joining the Kodecloud DevOps course and going for the complete certification path via this link.