
Default memory limits for a Kubernetes Pod

Understanding memory and other resource consumption is very important in Kubernetes. Whenever we run a Pod, it consumes some amount of memory and CPU depending on the load.

By default, Pods run with unbounded CPU and memory limits. That means any Pod in the system can consume as much CPU and memory as is available on the node that executes it. To avoid this, a user can impose restrictions on the amount of resources a single Pod can use, for a variety of reasons.
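
If you want a feel for how much a Pod without limits could actually grab, one quick way (using the standard kubectl describe command) is to look at the Capacity and Allocatable sections of your nodes:

kubectl describe nodes

A Pod with no limits can, in principle, grow toward that Allocatable figure.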

To impose memory restrictions we have a couple of options:

1. Define memory and CPU requests and limits directly in the Pod or Deployment manifest.

2. Create a LimitRange that sets a default memory limit for every Pod in a specific namespace.

Let's create a namespace first with the command below.

kubectl create namespace memory-demo

Now create a Pod using a YAML file.

apiVersion: v1
kind: Pod
metadata:
  name: mem-demo
  namespace: memory-demo
spec:
  containers:
  - name: mem-demo-cont
    image: polinux/stress
    resources:
      requests:
        memory: "100Mi"
      limits:
        memory: "200Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]

We have set a memory limit of 200Mi. Save the file as mempod.yaml and run the command below to create the Pod.

kubectl create -f mempod.yaml

Note: Mi and MB are different units, but they are close in size.
Mi (mebibyte) = 1,048,576 bytes (1024 x 1024)
MB (megabyte) = 1,000,000 bytes (1000 x 1000)
So 200Mi is roughly 209.7 MB.

Now let's check the memory settings of the Pod above using the command below.

kubectl get pod mem-demo -n memory-demo -o yaml | grep -i resources -A 4

O/P:

resources:
  limits:
    memory: 200Mi
  requests:
    memory: 100Mi

Now let's understand what happens when a Pod exceeds its memory limit. We'll recreate the Pod so that it tries to use more memory than it is allowed. Take the same mempod.yaml file and simply lower the memory limit below what the stress command needs.

apiVersion: v1
kind: Pod
metadata:
  name: mem-demo
  namespace: memory-demo
spec:
  containers:
  - name: mem-demo-cont
    image: polinux/stress
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "50Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]

Before creating the Pod again we need to delete the existing one first using the command below.

kubectl delete pod mem-demo -n memory-demo

kubectl create -f mempod.yaml


Check whether the new Pod is in a running state by running the command below.

kubectl get pods -n memory-demo

NAME       READY   STATUS             RESTARTS   AGE
mem-demo   0/1     CrashLoopBackOff   6          9m34s

So it's not running; it has failed. Let's find out why using the kubectl describe command.

kubectl describe pods mem-demo -n memory-demo | grep -i state -A 5
State:          Waiting
  Reason:       CrashLoopBackOff
Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    1

It says OOMKilled, which means the container exceeded its memory limit and was killed by the out-of-memory (OOM) killer.
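
If you prefer a more targeted check, something like the following should also work; it simply pulls the termination reason out of the Pod's status:

kubectl get pod mem-demo -n memory-demo -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

This should print OOMKilled.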

Instead of assigning memory requests and limits to individual Pods, we can define default memory limits at the namespace level, and every Pod created in that namespace will pick up those defaults.

To do this we'll create a LimitRange in the memory-demo namespace.

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container

Save it as limit.yaml and apply it with the command below.

kubectl apply -f limit.yaml -n memory-demo

The limit range has been created, and we can verify it with the command below.

kubectl get limits -n memory-demo

NAME              CREATED AT
mem-limit-range   2022-08-07T13:27:36Z
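
To see the default values it will apply, you can also describe it (kubectl describe on a LimitRange lists the default request and default limit):

kubectl describe limitrange mem-limit-range -n memory-demo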

Now create a Pod the imperative way, using kubectl run.

kubectl run nginx --image=nginx -n memory-demo

Now check the resources assigned to the container; they should be the defaults set by the limit range.
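
One way to check is to dump the Pod definition and look at the container's resources block (assuming the Pod is named nginx, as created above; the snippet below is the relevant part of that output):

kubectl get pod nginx -n memory-demo -o yaml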

spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    resources:
      limits:
        memory: 512Mi
      requests:
        memory: 256Mi

That's how we can use a LimitRange to set default memory limits.
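
Once you're done experimenting, you can clean everything up by deleting the namespace, which removes the Pods and the LimitRange along with it:

kubectl delete namespace memory-demo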

Note: If you think this helped you and you want to learn more stuff on devops, then I would recommend joining the Kodecloud devops course and go for the complete certification path by clicking this link

OOM: Out of memory / Insufficient memory error inside a Docker container

How to handle an out of memory error inside a Docker container

Sometimes we run into scenarios where we encounter an insufficient memory or out of memory (OOM) error. Today we'll try to find out why this error happens and how we can resolve it.

By default, Docker imposes no resource limits; a container can use as much of the host's resources as the kernel scheduler allows. Docker provides several ways to control this, simply by passing some flags to the docker run command.

Why mitigating this risk is so important

It is important to make sure a Docker container doesn't consume too much of the host's memory while performing its tasks. On Linux, if the kernel detects that there is not enough memory to perform some system task, it raises an out of memory (OOM) error and starts killing processes to free up memory. Any process can be killed by the kernel, including the Docker daemon and Docker containers. This is a serious risk: if your application is running in production and its container is killed because of an OOM event, your application faces downtime. So it is very important to understand and control memory usage in Docker.
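
If you suspect this has already happened on a host, one quick check (it needs sufficient privileges to read the kernel log) is to look for OOM-killer entries:

dmesg | grep -i "killed process"

Entries like "Out of memory: Killed process ..." indicate the kernel's OOM killer has been at work.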

What we can do

  1. Perform some analysis to understand the application's memory requirements before deploying it to production.
  2. Make sure you are using a host that has sufficient resources to run your containers and applications.
  3. Control your container's behaviour and restrict how much memory it can consume. Docker allows you to set a hard memory limit so that a container can't use more memory than it is assigned; a sketch of this is shown just after this list.
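
For illustration only, a hard limit on a throwaway container might look like this (--memory and --memory-swap are standard docker run flags; the 256m value and the container name limited-alpine are just placeholder choices; setting --memory-swap equal to --memory also stops the container from dipping into swap):

docker run -it --memory 256m --memory-swap 256m --name limited-alpine alpine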

Let's try to understand the scenario and reproduce the error. First, pull the alpine image with the command below.

docker pull alpine


Once the image has been downloaded, start a container with docker run.

docker run -it --name alpine3 alpine


Check the running container

docker ps

CONTAINER ID   IMAGE     COMMAND     CREATED          STATUS          PORTS     NAMES
74f020533      alpine    "/bin/sh"   51 seconds ago   Up 50 seconds             alpine3



When we ran our container we didn't specify any memory limit. Let's check how much is available for it by default.

Use the docker stats command to check what's going on inside the container.
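
For a one-off snapshot instead of a live stream, something like this works (--no-stream is a standard docker stats flag; alpine3 is the container started above):

docker stats --no-stream alpine3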

CONTAINER ID   NAME      CPU %     MEM USAGE / LIMIT   MEM %     NET I/O     BLOCK I/O    PIDS
74f020533      alpine3   0.00%     500KiB / 1.942GiB   0.02%     866B / 0B   131kB / 0B   1


See, the memory usage is only 500KiB, but the maximum memory it can consume is 1.942GiB, essentially the whole host. That's too much.

Now let's run another container and run it with hard memory limits.

docker run -it --memory 6m --name alpinememory  alpine


6m means 6 megabytes. Other suffixes are supported too: b for bytes, k for kilobytes, m for megabytes, and g for gigabytes (for example 6b, 6k, 6g).

Now let's see the docker stats

CONTAINER ID   NAME           CPU %     MEM USAGE / LIMIT   MEM %     NET I/O           BLOCK I/O     PIDS
a68b65bf36     alpinememory   0.00%     668KiB / 6MiB       10.87%    2.25MB / 68.3kB   31.3GB / 0B   1


Mem usage: 668KiB, mem limit: 6MiB.

Let's go inside the new alpine container and try to install Python. See what happens:

/ # apk add python3

fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/aarch64/APKINDEX.tar.gz

fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/aarch64/APKINDEX.tar.gz



Open another terminal and check docker stats for this container

CONTAINER ID   NAME           CPU %     MEM USAGE / LIMIT   MEM %     NET I/O           BLOCK I/O     PIDS
a68b65bf36     alpinememory   73.76%    6MiB / 6MiB         100.00%   2.25MB / 68.3kB   47.4GB / 0B   2


Mem usage: 6MiB, mem limit: 6MiB. The container has hit its limit.

As you can see, the container is unable to install Python because of the low memory limit, and the process was killed by the kernel:

Killed
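
To confirm from outside the container, one quick check is Docker's own OOM flag (.State.OOMKilled is a standard field in docker inspect output; depending on the Docker version it may only be set once the container has exited):

docker inspect --format '{{.State.OOMKilled}}' alpinememory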



Conclusion:

We always need to plan our memory usage during analysis and set a hard limit on the container. Never leave a container running without memory limits, so that it can't cause problems for the rest of the host. At the same time, keep in mind not to set the limit too low, or the container won't have the memory it needs to perform its tasks.

If you liked the above article, then do check out the latest article about exposing a Pod on a specific port using the link below.
Expose HTML application pod to specific port

Note: If you think this helped you and you want to learn more stuff on devops, then I would recommend joining the Kodecloud devops course and go for the complete certification path by clicking this link
