Understanding memory and other resource consumption is very important in Kubernetes. Whenever we run a Pod, it consumes some amount of memory and CPU depending on the load.
By default, Pods run with unbounded CPU and memory limits. That means any Pod in the system can consume as much CPU and memory as is available on the node that runs it. To avoid this, a user can impose restrictions on the amount of resources a single Pod can use.
To impose memory restrictions we have a few options:
1. Define memory and CPU requests and limits directly in the Pod or Deployment manifest.
2. Create a LimitRange that sets default memory limits for every Pod in a specific namespace.
Let's create a namespace first with the below command.
kubectl create namespace memory-demo
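The same namespace can also be created declaratively. A minimal manifest sketch, equivalent to the imperative command above:

```yaml
# Declarative equivalent of "kubectl create namespace memory-demo".
apiVersion: v1
kind: Namespace
metadata:
  name: memory-demo
```

Apply it with kubectl apply -f to get the same result.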
Now create a Pod using a YAML file.
apiVersion: v1
kind: Pod
metadata:
  name: mem-demo
  namespace: memory-demo
spec:
  containers:
  - name: mem-demo-cont
    image: polinux/stress
    resources:
      requests:
        memory: "100Mi"
      limits:
        memory: "200Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
We have set a memory limit of 200Mi. Save the file as mempod.yaml and run the below command to create the Pod.
kubectl create -f mempod.yaml
Note: Mi and MB are different units, though close in size.
Mi (mebibyte) = 1024 × 1024 bytes = 1,048,576 bytes
MB (megabyte) = 1000 × 1000 bytes = 1,000,000 bytes
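The difference between the two units is easy to check with shell arithmetic:

```shell
# Compare one mebibyte (Mi) with one megabyte (MB) in bytes.
MI=$((1024 * 1024))   # 1 Mi = 1,048,576 bytes
MB=$((1000 * 1000))   # 1 MB = 1,000,000 bytes
echo "1Mi = $MI bytes, 1MB = $MB bytes"
echo "Difference: $((MI - MB)) bytes"
```

So 1Mi is about 4.9% larger than 1MB, which is why Kubernetes treats them as distinct suffixes.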
Now let's verify the resource settings of the above Pod using the below command.
kubectl get pod mem-demo -n memory-demo -o yaml | grep -i resources -A 4
O/P:
    resources:
      limits:
        memory: 200Mi
      requests:
        memory: 100Mi
Let's see what happens when a Pod exceeds its memory limit. We'll create the Pod again, this time with a limit lower than what the container tries to allocate. Use the same mempod.yaml file and just lower the memory values.
apiVersion: v1
kind: Pod
metadata:
  name: mem-demo
  namespace: memory-demo
spec:
  containers:
  - name: mem-demo-cont
    image: polinux/stress
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "50Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
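This Pod is expected to fail: the stress worker allocates 150M while the container limit is only 50Mi. A quick sanity check of the numbers (assuming stress interprets the M suffix as mebibytes):

```shell
# Container memory limit vs. what the stress worker tries to allocate.
LIMIT=$((50 * 1024 * 1024))    # 50Mi limit, in bytes
ALLOC=$((150 * 1024 * 1024))   # 150M allocation, in bytes
if [ "$ALLOC" -gt "$LIMIT" ]; then
  echo "allocation exceeds limit -> expect OOMKilled"
fi
```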
Before creating the same Pod again, we need to delete the existing one using the below command.
kubectl delete pod mem-demo -n memory-demo
kubectl create -f mempod.yaml
Check whether the new Pod is in the Running state by running the below command.
kubectl get pods -n memory-demo
NAME READY STATUS RESTARTS AGE
mem-demo 0/1 CrashLoopBackOff 6 9m34s
So it's not running; it has failed. Let's debug why it failed using the kubectl describe command.
kubectl describe pods mem-demo -n memory-demo | grep -i state -A 5
State:       Waiting
  Reason:    CrashLoopBackOff
Last State:  Terminated
  Reason:    OOMKilled
  Exit Code: 1
The reason OOMKilled (OOM = Out Of Memory) means the container exceeded its memory limit and was killed.
Instead of assigning memory requests and limits to individual Pods, we can set default memory limits at the namespace level, so every Pod created in that namespace inherits them.
To do this we'll create a LimitRange in the memory-demo namespace.
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
Save it as limit.yaml and apply it to the namespace.
kubectl apply -f limit.yaml -n memory-demo
The LimitRange has been created, and we can verify it with the below command.
kubectl get limits -n memory-demo
NAME CREATED AT
mem-limit-range 2022-08-07T13:27:36Z
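Defaults are not the only thing a LimitRange can set. As a sketch going beyond the original example (the min and max fields here are an assumption, not part of this walkthrough), a LimitRange can also enforce hard bounds that reject Pods outside the range:

```yaml
# Hypothetical extension of mem-limit-range: defaults plus hard bounds.
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi     # default limit applied to containers
    defaultRequest:
      memory: 256Mi     # default request applied to containers
    max:
      memory: 1Gi       # containers may not request/limit more than this
    min:
      memory: 100Mi     # containers may not request/limit less than this
    type: Container
```

With min and max set, a Pod asking for memory outside the 100Mi–1Gi range would be rejected at creation time.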
Now create a Pod using the imperative command:
kubectl run nginx --image=nginx -n memory-demo
And check the memory limits on the container (for example with kubectl get pod nginx -n memory-demo -o yaml); they should be the defaults set by the LimitRange.
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    resources:
      limits:
        memory: 512Mi
      requests:
        memory: 256Mi
That's how we can use a LimitRange to set default memory limits.
Note: If you think this helped you and you want to learn more about DevOps, I would recommend joining the Kodecloud DevOps course and going for the complete certification path by clicking this link.