How to Search for an Image Across Multiple Pods in Kubernetes Like a Pro



Introduction

Sometimes we need to find out which Pods are running a particular image, but we are not sure where it is used. For a single Pod, the describe command shows all of its details, including the image. The problem starts when you have many Pods and can't run a separate describe command against each one to hunt for that image. So we'll do it differently, using a loop, like a pro.

Prerequisites

You need a running Kubernetes cluster and kubectl configured on your machine. You can check this by running the command below.

kubectl get pods

If you see an error like "command not found", you probably don't have kubectl installed on your machine, so you need to install it first.
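
As an extra sanity check (assuming kubectl is on your PATH), you can also confirm the client version and verify that it can reach a cluster:

kubectl version --client
kubectl cluster-info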

Let's start

First we'll create multiple Pods on our cluster with different images. The code is below:

for img in nginx3 alpine alpine2 alpine3
do
  kubectl run $img --image=$img
done


Results:
pod/nginx3 created
pod/alpine created
pod/alpine2 created
pod/alpine3 created

So I used a bash for loop, passed in multiple image names, and created multiple Pods at once.

Now let's check the status of the Pods. Since I used image names that don't exist (and alpine's container exits right after starting), several Pods are not in the Running state.

kubectl get pods

NAME      READY   STATUS             RESTARTS      AGE
alpine    0/1     CrashLoopBackOff   4 (56s ago)   3m10s
alpine2   0/1     ImagePullBackOff   0             3m10s
alpine3   0/1     ImagePullBackOff   0             3m10s
nginx     1/1     Running            0             3h38m
nginx2    1/1     Running            0             3h33m
nginx3    0/1     ImagePullBackOff   0             3m10s

Now that we have multiple Pods, we can look up their images using our favourite for loop.

Let's list each Pod and its image.

for pds in $(kubectl get pods --no-headers -o custom-columns=":metadata.name")
do
  echo "*************Pod_Name: $pds**************"
  kubectl describe pods $pds | grep -i image:
done

Results:

********************Pod_Name: alpine***************
Image: alpine
********************Pod_Name: alpine2**************
Image: alpine2
********************Pod_Name: alpine3**************
Image: alpine3
*******************Pod_Name: nginx*****************
Image: nginx
*******************Pod_Name: nginx2****************
Image: nginx
*******************Pod_Name: nginx3****************
Image: nginx3

Now we can see which Pod is using which image. But this is still not quite what we want, because it prints every Pod and image in the default namespace. What I really need is to search for one specific image, e.g. alpine3, across all Pods.

for pds in $(kubectl get pods --no-headers -o custom-columns=":metadata.name")
do
  echo "************Pod_Name: $pds**************"
  kubectl describe pods $pds | grep -i "image: alpine3"
done


Just put alpine3 in your grep pattern and check the results below. You can also wrap the grep in an if condition so that only the matching Pods are printed:

for pds in $(kubectl get pods --no-headers -o custom-columns=":metadata.name")
do
  if kubectl describe pods $pds | grep -q -i "image: alpine3"
  then
    echo "***********Pod_Name: $pds*****************"
  fi
done

*************Pod_Name: alpine3**************

Conclusion

Combining your shell scripting skills with kubectl can be very useful in Kubernetes, and you can get a lot done quickly. This example searched for images across multiple Pods, but you can use the same script to find other properties as well, such as environment variables and configs. With shell scripts you can automate a lot of work on Kubernetes.
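
For example, a small variation of the same loop (just a sketch, if you wanted to look at environment variables instead of images) could look like this:

for pds in $(kubectl get pods --no-headers -o custom-columns=":metadata.name")
do
  echo "*************Pod_Name: $pds**************"
  # print the Environment section that kubectl describe shows for each container
  kubectl describe pods $pds | grep -i -A 3 "environment:"
done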


Default memory limits for a Kubernetes Pod

Understanding memory and other resource consumption is very important in Kubernetes. Whenever we run a Pod, it consumes some amount of memory and CPU, depending on the load.

By default, Pods run with unbounded CPU and memory limits. That means any Pod in the system is able to consume as much CPU and memory as is available on the node that runs it. To avoid this, you can impose restrictions on the amount of resources a single Pod can use.

To impose memory restrictions we have a few options:

1. Define the memory and CPU requests and limits directly in the Pod or Deployment spec (a minimal sketch of this follows right after this list).

2. Create a LimitRange that sets a default memory limit for every Pod in a specific namespace.
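
As a rough sketch of option 1 (the names here are just examples), the resources block sits inside the container spec of the Deployment template:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
          limits:
            cpu: "200m"
            memory: "200Mi"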

Let's create a namespace first with the command below.

kubectl create namespace memory-demo

Now create a Pod using a YAML file.

apiVersion: v1
kind: Pod
metadata:
  name: mem-demo
  namespace: memory-demo
spec:
  containers:
  - name: mem-demo-cont
    image: polinux/stress
    resources:
      requests:
        memory: "100Mi"
      limits:
        memory: "200Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]

We have set a memory request of 100Mi and a limit of 200Mi; the stress command allocates about 150M, which stays under that limit. Save the file as mempod.yaml and run the command below to create the Pod.

kubectl create -f mempod.yaml

Note: Mi and MB are different units, though close in size.
Mi (mebibyte) = 1024 KiB = 1,048,576 bytes
MB (megabyte) = 1000 kB = 1,000,000 bytes
For example, the 200Mi limit above is 200 × 1,048,576 bytes ≈ 209.7 MB.

Now let's check the memory requests and limits of the Pod above using the command below.

kubectl get pod mem-demo -n memory-demo -o yaml | grep -i resources -A 4

Output:

  resources:
    limits:
      memory: 200Mi
    requests:
      memory: 100Mi

Now let's see what happens when a Pod tries to exceed its memory limit. We'll recreate the Pod with a limit that is too low. Take the same mempod.yaml file and just lower the memory values:

apiVersion: v1
kind: Pod
metadata:
  name: mem-demo
  namespace: memory-demo
spec:
  containers:
  - name: mem-demo-cont
    image: polinux/stress
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "50Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]

Before creating the Pod again, we need to delete the existing one first using the command below.

kubectl delete pod mem-demo -n memory-demo

kubectl create -f mempod.yaml


Check whether the new Pod is in the Running state by running the command below.

kubectl get pods -n memory-demo

NAME       READY   STATUS             RESTARTS   AGE
mem-demo   0/1     CrashLoopBackOff   6          9m34s

So it's not running; it has failed. Let's find out why using the kubectl describe command.

kubectl describe pods mem-demo -n memory-demo | grep -i state -A 5
State:          Waiting
  Reason:       CrashLoopBackOff
Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    1

It says OOMKilled, which means the container tried to use more memory than its limit allows (stress allocates about 150M while the limit is only 50Mi), so it was killed.
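
If you prefer to pull out just the termination reason instead of grepping the describe output, you can also query it with jsonpath (assuming the container has already restarted at least once, so lastState is populated):

kubectl get pod mem-demo -n memory-demo -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'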

Instead of assigning memory requests and limits to every individual Pod, we can define default memory limits at the namespace level, so that every Pod created in that namespace gets those defaults.

To do this we'll create a LimitRange in the memory-demo namespace.

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container

Save it as limit.yaml and apply it:

kubectl apply -f limit.yaml -n memory-demo

The LimitRange has been created, and we can verify it with the command below.

kubectl get limits -n memory-demo

NAME              CREATED AT
mem-limit-range   2022-08-07T13:27:36Z
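
You can also describe the LimitRange to see the default request and limit values it will apply:

kubectl describe limitrange mem-limit-range -n memory-demo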

Now create a Pod the imperative way:

kubectl run nginx --image=nginx -n memory-demo

And check the memory limits on the container; they should be the defaults set by the LimitRange.
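
One way to check is to dump the Pod definition and look at the containers section (the output below is trimmed to the relevant part):

kubectl get pod nginx -n memory-demo -o yaml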

spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    resources:
      limits:
        memory: 512Mi
      requests:
        memory: 256Mi

That's how we can use a LimitRange to set default memory limits.

Note: If you think this helped you and you want to learn more about DevOps, I would recommend joining the Kodecloud DevOps course and going for the complete certification path by clicking this link.
