
Navigating Data Flow in Kubernetes: Unraveling Ingress and Egress Concepts


What is Ingress and Egress? An Introduction:

In the ever-evolving landscape of information technology and container orchestration, terms like "ingress" and "egress" are integral to understanding how data traverses within Kubernetes clusters. As organizations increasingly adopt containerized applications, the proper management of ingress and egress points becomes crucial for ensuring secure and efficient communication between microservices. In this article, we will explore the significance of ingress and egress within the context of Kubernetes, shedding light on their roles in facilitating seamless data flow.

Defining Ingress and Egress in general:

  1. Ingress: Ingress refers to the entry point of data into a network or system. It is the pathway through which external data or traffic enters a local network. This can include data from the internet, other networks, or external devices. Ingress points are typically managed and monitored to control the type and volume of incoming data, ensuring network security and optimal performance.

  2. Egress: Conversely, egress is the exit point for data leaving a network. It represents the pathway through which data flows out of a system to external destinations. Egress points are strategically managed to regulate the outbound traffic, preventing unauthorized access and safeguarding sensitive information from leaving the network without proper authorization.


Defining Ingress and Egress in Kubernetes:


  1. Ingress in Kubernetes: In the Kubernetes ecosystem, ingress refers to the entry point for external traffic into the cluster. It serves as a way to manage external access to services within the cluster, acting as a traffic controller. Ingress resources allow users to define routing rules, hostnames, and TLS settings, directing incoming requests to the appropriate services (a minimal example follows this list).

  2. Egress in Kubernetes: Egress, on the other hand, involves the outbound traffic from pods within the cluster to external services or destinations. Managing egress in Kubernetes is crucial for controlling which external resources a pod can access and ensuring that communication adheres to security and compliance standards.
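To make this concrete, here is a minimal sketch of an Ingress resource with a routing rule, hostname, and TLS settings. It is an illustration only: the hostname, Service name, and TLS secret are hypothetical placeholders.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress            # hypothetical name
spec:
  tls:
  - hosts:
    - app.example.com          # placeholder hostname
    secretName: web-tls        # TLS certificate stored as a Secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service  # hypothetical Service inside the cluster
            port:
              number: 80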

Importance of Ingress and Egress in Kubernetes:

  1. Service Discovery: Ingress resources enable service discovery by providing a standardized way to route external traffic to services within the cluster. This simplifies the process of exposing and accessing services, enhancing the overall scalability and flexibility of Kubernetes applications.
  2. Security Policies: Ingress controllers, such as Nginx Ingress or Traefik, allow for the implementation of security policies at the entry point of the cluster. This includes SSL/TLS termination, rate limiting, and web application firewall capabilities, bolstering the security posture of the entire Kubernetes deployment.
  3. Egress Control: Kubernetes Network Policies can be leveraged to enforce egress controls, specifying which pods are allowed to communicate with external resources and under what conditions. This ensures that only authorized communication occurs, mitigating potential security risks (see the sketch after this list).
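As an illustration of point 3, here is a minimal NetworkPolicy sketch that only allows selected pods to reach one destination range. The pod label, CIDR, and port are assumptions made for the example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress        # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend             # assumed pod label
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24      # assumed allowed destination range
    ports:
    - protocol: TCP
      port: 443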

Practical Applications in Kubernetes:

  1. Ingress Controllers: Deploying and configuring Ingress controllers play a pivotal role in managing external access to services. These controllers are responsible for processing and implementing the rules defined in Ingress resources, directing traffic to the appropriate services within the cluster.
  2. Egress Policies: Utilizing Kubernetes Network Policies allows organizations to define fine-grained controls over egress traffic. This is particularly important in scenarios where strict compliance requirements or data sovereignty regulations need to be adhered to.
  3. API Gateway Integration: Ingress points can be integrated with API gateways to manage external access to microservices, enabling features like authentication, rate limiting, and request transformation. This ensures a secure and streamlined interaction between external clients and services within the Kubernetes cluster.

Conclusion:


Ingress and egress play pivotal roles in shaping the data flow within Kubernetes clusters. As organizations embrace container orchestration and microservices architectures, understanding and effectively managing these entry and exit points are essential for building resilient, secure, and scalable applications. By leveraging the capabilities provided by Kubernetes Ingress and implementing robust egress controls, organizations can navigate the complexities of modern application deployment with confidence.

The Power Duo: Kubernetes and Microservices




In the realm of modern software development, two buzzwords have taken center stage: Kubernetes and microservices. These two technologies have revolutionized how applications are built, deployed, and managed, ushering in a new era of scalability, flexibility, and efficiency.

Microservices, an architectural style, breaks down applications into smaller, loosely-coupled services that can be developed, deployed and scaled independently. This approach promotes agility and accelerates development, allowing teams to focus on specific functionalities. However, managing these services manually can be complex and resource-intensive.


Enter Kubernetes, an open-source container orchestration platform. Kubernetes automates the deployment, scaling and management of containerized applications, making it a perfect match for microservices. It provides tools for automating updates, load balancing and fault tolerance, ensuring seamless operation across a dynamic environment.


Together, Kubernetes and microservices offer several benefits. They enable organizations to swiftly respond to market demands by deploying updates to individual services without disrupting the entire application. Autoscaling ensures optimal resource utilization, and inherent fault tolerance enhances reliability.


In conclusion, the synergy between Kubernetes and microservices has reshaped the software development landscape. Organizations embracing this duo can innovate faster, deliver robust applications, and effectively navigate the complexities of modern IT infrastructure.


How to Expose a Kubernetes Pod to a Specific Port for Running an application

If you are running an application on Kubernetes, you may want to expose a specific port on a pod so that you can access it from the outside world. Kubernetes provides several ways to do this, and we are going to use one of them.

Step 1: Let's create an HTML application 

I am going to create an EMI calculator using HTML and JavaScript and save it as index.html.

<!DOCTYPE html>
<html>
<head>
  <title>EMI Calculator</title>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <style>
    form {
      display: flex;
      flex-direction: column;
      align-items: center;
      margin-top: 50px;
    }
    input[type="number"], select {
      padding: 10px;
      margin-bottom: 20px;
      width: 300px;
      border-radius: 5px;
      border: none;
      box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
    }
    input[type="submit"] {
      padding: 10px;
      width: 200px;
      background-color: #4CAF50;
      color: white;
      border: none;
      border-radius: 5px;
      cursor: pointer;
    }
    input[type="submit"]:hover {
      background-color: #3e8e41;
    }
  </style>
</head>
<body>
  <h1>EMI Calculator</h1>
  <form onsubmit="calculateEMI(); return false;">
    <label for="principal">Loan Amount:</label>
    <input type="number" id="principal" name="principal"
           placeholder="Enter loan amount in INR" required>

    <label for="interest">Interest Rate:</label>
    <input type="number" id="interest" name="interest"
           placeholder="Enter interest rate in %" required>

    <label for="tenure">Loan Tenure:</label>
    <select id="tenure" name="tenure" required>
      <option value="">--Select Loan Tenure--</option>
      <option value="12">1 Year</option>
      <option value="24">2 Years</option>
      <option value="36">3 Years</option>
      <option value="48">4 Years</option>
      <option value="60">5 Years</option>
    </select>

    <input type="submit" value="Calculate EMI">
  </form>

  <div id="result"></div>

  <script>
    function calculateEMI() {
      // Get input values
      let principal = document.getElementById('principal').value;
      let interest = document.getElementById('interest').value;
      let tenure = document.getElementById('tenure').value;

      // EMI formula: P * r / (1 - (1 + r)^-n), where r is the monthly rate
      let monthlyInterest = interest / 1200; // annual % rate -> monthly decimal rate (12 months * 100)
      let monthlyPayment =
        (principal * monthlyInterest) / (1 - (1 / Math.pow(1 + monthlyInterest, tenure)));
      let totalPayment = monthlyPayment * tenure;

      // Display result
      document.getElementById('result').innerHTML = `
        <h2>EMI Calculation Result</h2>
        <p>Loan Amount: INR ${principal}</p>
        <p>Interest Rate: ${interest}%</p>
        <p>Loan Tenure: ${tenure} months</p>
        <p>Monthly EMI: INR ${monthlyPayment.toFixed(2)}</p>
        <p>Total Payment: INR ${totalPayment.toFixed(2)}</p>
      `;
    }
  </script>
</body>
</html>

Now your html application is ready. 

Step 2: Dockerize your application

Let's create a file named Dockerfile in the same location with the below content.

FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html

Here we are using the nginx server, which will host our index file (the EMI calculator).

Step 3: Build an image for your application

Use below command to build an image

docker build -t emi .

Here -t stands for tag, and emi is the image name (it gets the implicit tag latest).

The . is the current directory, so the docker build command will look for the Dockerfile in the current directory.

=> [internal] load build definition from Dockerfile
=> => transferring dockerfile: 201B
=> [internal] load .dockerignore
=> => transferring context: 2B
=> [internal] load metadata for docker.io/library/nginx:alpine
=> [internal] load build context
=> => transferring context: 79B
=> [1/2] FROM docker.io/library/nginx:alpine
=> CACHED [2/2] COPY index.html /usr/share/nginx/html/index.html

You would see output like this above.

Now check if the image has been created with the below command.

docker images

You would see the tag name as emi in your result.

Step 4: Create a Pod

Since we have already built an image, now it's time to run it on the cluster. Note that the manifest below creates a single bare Pod from that image, not a Deployment object.

apiVersion: v1
kind: Pod
metadata:
  name: emi
  namespace: default
spec:
  containers:
  - name: emi
    image: emi:latest
    imagePullPolicy: Never
  restartPolicy: Never

Save it as deployment.yaml (the file name doesn't matter to Kubernetes; the kind field determines what gets created).

Now run the below command to create a deployment:

kubectl apply -f deployment.yaml

Once the command completes, let's verify with the kubectl get pods command like below.

kubectl get pods
NAME READY STATUS RESTARTS AGE
emi 1/1 Running 0 7s

Step 5: Access it via browser

Since we have created our application, we want to access it via the browser. For that we can use kubectl's port forwarding, which works for TCP connections only.

kubectl port-forward emi 8087:80

Once the port-forward is running, access localhost:8087 in the browser.
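As an aside, another of the "several ways" mentioned earlier is a NodePort Service. This is only a sketch, under the assumption that you add an app: emi label to the pod (the pod above has no labels, and a Service selects pods by label); the service name and node port are also placeholders:

apiVersion: v1
kind: Service
metadata:
  name: emi-service          # hypothetical service name
spec:
  type: NodePort
  selector:
    app: emi                 # assumes the pod carries this label
  ports:
  - port: 80                 # port the Service exposes inside the cluster
    targetPort: 80           # port nginx listens on in the container
    nodePort: 30087          # assumed node port in the allowed 30000-32767 range

With this in place, the app would be reachable at <node-ip>:30087 without a port-forward.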


Finally, we have created an application, dockerized it, run it in a Pod, and accessed it via the browser. That was it about spinning up an application in a Pod.

Note: If you think this helped you and you want to learn more stuff on devops, then I would recommend joining the Kodecloud devops course and go for the complete certification path by clicking this link

Know about Kubernetes Security

 Introduction




Kubernetes has become the most popular container orchestration tool, enabling organizations to deploy and manage containerized applications at scale. However, this popularity has also made it an attractive target for cybercriminals. Kubernetes security is critical to safeguarding your containerized applications and data. In this article, we will discuss the risks involved in Kubernetes security and how to harden pod security with code.

Risk


Kubernetes security risks come from different areas, including:

  • Container images: Container images used to create pods may contain vulnerabilities that can be exploited by attackers.

  • API server: The Kubernetes API server is a central point of control for managing Kubernetes clusters. An attacker who gains access to the API server can control the entire cluster.

  • Network security: Kubernetes allows pods to communicate with each other and the outside world. Without proper network security, an attacker can intercept network traffic or launch a denial-of-service attack.

  • Authorization and access control: Access to Kubernetes resources should be restricted based on the principle of least privilege. If authorization and access control are not properly implemented, an attacker can gain access to sensitive data and resources.


Hardening Pod security with code


Hardening pod security involves implementing security best practices at the code level. Here are some tips for hardening pod security:

  • Use least privilege: Grant the minimum level of privileges necessary for pods to function. Use role-based access control (RBAC) to enforce these privileges.

  • Use security contexts: Kubernetes security contexts allow you to set security policies for pods. You can use security contexts to specify a range of settings, such as user IDs, file permissions, and capabilities.

  • Use container image scanning: Use tools such as Aqua Security, Anchore, or Trivy to scan container images for vulnerabilities before deploying them in Kubernetes.

  • Use network policies: Use network policies to restrict pod-to-pod communication and ingress/egress traffic.

  • Implement secure service accounts: Kubernetes service accounts provide authentication tokens for pods to access the Kubernetes API server. Use RBAC to restrict the permissions of service accounts (a minimal RBAC sketch appears at the end of this section).

  • Monitor Kubernetes API server activity: Monitor Kubernetes API server activity for any suspicious activity or unauthorized access.

Here's an example of how to harden pod security by using a security context and a network policy. Note that a NetworkPolicy is a separate Kubernetes resource and cannot be embedded in a Pod spec, so the YAML below defines two objects:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-pod              # label added so the NetworkPolicy below can select this pod
spec:
  securityContext:
    runAsUser: 1000          # pod-level default: containers run as this non-root user
  containers:
  - name: my-container
    image: my-image
    securityContext:
      runAsUser: 1000
      capabilities:
        add:
        - NET_ADMIN
    ports:
    - containerPort: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-pod-ingress
spec:
  podSelector:
    matchLabels:
      app: my-pod            # selects the pod above
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: my-app        # only pods with this label may connect


In this example, we have defined a pod named "my-pod" that includes a container named "my-container" and a security context that specifies the user ID and capabilities of the container. We have also defined a NetworkPolicy that restricts incoming traffic to the pod only from pods labeled with "app: my-app".

The "securityContext" section of the YAML file specifies the following settings:

"runAsUser": The container runs as user ID 1000, which is a non-root user. This reduces the risk of privilege escalation attacks.
"capabilities": The container has added the "NET_ADMIN" capability, which allows it to perform network administration tasks. By limiting the container's capabilities, we reduce the risk of a container being used to launch an attack.

The "networkPolicy" section of the YAML file specifies the following settings:

"ingress": This network policy restricts incoming traffic to the pod only from pods labeled with "app: my-app". This helps prevent unauthorized access to the pod.

By using security contexts and network policies in this way, we can help harden pod security and reduce the risk of a Kubernetes security breach.
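The list above also recommends least privilege via RBAC for service accounts. Here is a minimal, hedged sketch of what that could look like; the service account, role, and namespace names are assumptions made for illustration:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa               # hypothetical service account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader           # hypothetical role name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]     # read-only access, nothing more
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io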


Conclusion


Kubernetes security is critical for protecting your containerized applications and data. The risks involved in Kubernetes security come from different areas, including container images, API server, network security, and authorization and access control. To harden pod security, implement security best practices at the code level, including using least privilege, security contexts, container image scanning, network policies, secure service accounts, and monitoring Kubernetes API server activity. By following these best practices, you can minimize the risk of a Kubernetes security breach and ensure the security of your containerized applications.


Note: If you think this helped you and you want to learn more stuff on devops, then I would recommend joining the Kodecloud devops course and go for the complete certification path by clicking this link

How to delete a Pod in Kubernetes - Beginner tutorial

Delete a pod using kubectl delete pod





Sometimes we encounter issues on a running pod and decide to delete it, for example because we can't create a new pod with the same name: you can't have two pods with the same name in the same namespace.

There are two ways to delete a pod: 
  • Using delete command
  • Using delete command with force keyword
The first way of deleting the pod is called a graceful delete. Before deleting any pod, we first need a pod to delete, so let's create one.

Create a pod

To create a pod we need to run below command in our terminal.

kubectl run nginx --image=nginx --restart=Never


The above command will run the nginx image with the pod name nginx itself.
Here the --restart=Never flag tells Kubernetes to create a single pod rather than a Deployment.

Now check if the pod is running using below command

kubectl get pod


Output:

NAME    READY   STATUS              RESTARTS   AGE

nginx   0/1     ContainerCreating   0          8s


Just wait a few seconds and the pod will be created.

kubectl get pod

NAME    READY   STATUS    RESTARTS   AGE

nginx   1/1     Running   0          2m6s


Since we have created our pod, let's now try to delete it. To delete a pod you can use the below command.

kubectl delete pods nginx


Here in the syntax you just need to pass the pod name. In this case I passed nginx.

This may take some time depending upon pod usage. But if you want the pod to be deleted quickly, you can use the force flag in the command.

A pod is not deleted automatically when a node is unreachable. The pods running on an unreachable node enter the Terminating or Unknown state after a timeout. Pods may also enter these states when the user attempts graceful deletion of a pod on an unreachable node. The only ways in which a pod can be deleted/removed from the apiserver are as follows:
  • The node object is deleted.
  • The kubelet on the unresponsive node starts responding, kills the pod, and removes the pod from the apiserver.
  • Force deletion of the pod by the user.
The recommended best practice is to follow the first two options above. If a node is confirmed to be dead, then delete the node object. Normally the system deletes the pod once it's no longer running on a node, or the node is deleted by an administrator.

Delete the pod forcefully using below command:

kubectl delete pod nginx --force 

Output:

pod "nginx" force deleted


If even after the above command the pod is still stuck in Unknown state, you can use the below command to remove the pod from the cluster.

kubectl patch pod nginx -p '{"metadata":{"finalizers":null}}'


That was it about deleting the pod using kubectl command. Be careful while deleting a pod especially using the force keyword.

Note: If you think this helped you and you want to learn more stuff on devops, then I would recommend joining the Kodecloud devops course and go for the complete certification path by clicking this link

How Kubernetes works

 Introduction

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.


The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.

For organizations that operate at a massive scale, a single Linux container instance isn't enough to satisfy all of their applications' needs. It's common for sufficiently complex applications, such as ones that communicate through microservices, to require multiple Linux containers that talk to one another. That architecture presents a new scaling problem: how do you manage all those individual containers? Developers still have to take care of scheduling the deployment of containers to specific machines, managing the networking between them, growing the resources allocated under heavy load, and much more.


Enter Kubernetes, a container orchestration framework: a way of managing the lifecycle of containerized applications across an entire fleet. It's a kind of meta-process that provides the ability to automate the deployment and scaling of several containers at once. Several containers running the same application are grouped together; these containers act as replicas and load balance incoming requests. A container orchestrator then oversees these groups, ensuring that they are operating correctly.

Kubernetes architecture






There are multiple components which are involved to run Kubernetes which are listed below.

Pods

A Kubernetes pod is a group of containers, and is the smallest unit that Kubernetes administers. Pods have a single IP address that is applied to every container within the pod. Containers in a pod share the same resources such as memory and storage. This allows the individual Linux containers inside a pod to be treated collectively as a single application, as if all the containerized processes were running together on the same host in more traditional workloads. If you already know Docker, then pods in Kubernetes won't surprise you much. You can run multiple containers inside a Pod, but keep in mind that every container in a Pod must have a unique name (a minimal sketch follows below).
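To illustrate, here is a minimal sketch of a pod with two containers; the pod name, the sidecar, and the images are hypothetical and only serve to show the structure:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar       # hypothetical pod name
spec:
  containers:
  - name: web                  # container names must be unique within the pod
    image: nginx
    ports:
    - containerPort: 80
  - name: log-agent            # hypothetical sidecar; shares the pod's IP and storage
    image: busybox
    command: ["sh", "-c", "tail -f /dev/null"]   # keep the container running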

Deployments


Deployments describe the number of desired identical pod replicas to run and the preferred update strategy used when updating the deployment.
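For instance, a minimal Deployment sketch might look like the following; the name, label, and image are placeholder assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                    # hypothetical deployment name
spec:
  replicas: 3                  # number of desired identical pod replicas
  strategy:
    type: RollingUpdate        # preferred update strategy
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx           # placeholder image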


Service

An abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
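A minimal Service sketch, assuming the pods carry an app: web label (as in the Deployment sketch above), might look like this:

apiVersion: v1
kind: Service
metadata:
  name: web-service            # hypothetical service name
spec:
  selector:
    app: web                   # routes traffic to pods with this label
  ports:
  - protocol: TCP
    port: 80                   # port the Service exposes
    targetPort: 80             # port the containers listen on

Kubernetes gives this Service a stable IP and a DNS name (web-service) and load-balances across the matching pods.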

Nodes


A Kubernetes cluster must have at least one compute node, although it may have many, depending on the need for capacity. Pods are orchestrated and scheduled to run on nodes, so more nodes are needed to scale up cluster capacity.

Nodes do the work for a Kubernetes cluster. They connect applications and networking, compute, and storage resources.


Control plane

The Kubernetes control plane is the main entry point for administrators and users to manage the various nodes. Operations are issued to it either through HTTP calls or connecting to the machine and running command-line scripts. As the name implies, it controls how Kubernetes interacts with your applications.


Cluster

A cluster is all of the above components put together as a single unit.

API Server

The API server exposes a REST interface to the Kubernetes cluster. All operations against pods, services, and so forth, are executed programmatically by communicating with the endpoints provided by it.

Scheduler

The scheduler is responsible for assigning work to the various nodes. 

Controller manager

The controller-manager is responsible for making sure that the shared state of the cluster is operating as expected. More accurately, the controller manager oversees various controllers which respond to events (e.g., if a node goes down).

Kubelet

A Kubelet tracks the state of a pod to ensure that all the containers are running. It provides a heartbeat message every few seconds to the control plane. If the control plane does not receive that message, the node is marked as unhealthy.

Kube proxy

The Kube proxy routes traffic coming into a node from the service. It forwards requests for work to the correct containers.

etcd

etcd is a distributed key-value store that Kubernetes uses to share information about the overall state of a cluster. 


Note: If you think this helped you and you want to learn more stuff on devops, then I would recommend joining the Kodecloud devops course and go for the complete certification path by clicking this link

Creating a Pod inside a Namespace

 Creating a Pod inside a Namespace




We are going to understand what a namespace is in programming and what a namespace is in the Kubernetes world; both are actually very much the same. So today we'll talk about namespaces: how to create them, check them, and create a Pod in a namespace.

Namespace

A namespace is a set of names that are used to identify and refer to objects of various kinds. A namespace ensures that all of the objects have unique names so that they can be easily identified. You can correlate this with a schema in a SQL Server database, where you can have multiple tables with the same name but in different schemas.

Similarly, you can have multiple pods with the same name but in different namespaces.

How to check all available namespaces?

You can run kubectl get namespaces to get all available namespaces on cluster.

kubectl get namespaces

NAME              STATUS   AGE

default           Active   2d

kube-public       Active   2d

kube-system       Active   2d


You can also run kubectl get ns, where ns is the short form of namespace.

How to create a namespace

To create a namespace you need to run kubectl create namespace namespace_name

kubectl create namespace test

namespace/test created


kubectl get namespaces         

NAME              STATUS   AGE

default           Active   82d

kube-public       Active   82d

kube-system       Active   82d

test              Active   5s


You can see test namespace has been created. 
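You can also create a namespace declaratively. The manifest below is equivalent to the command above; apply it with kubectl apply -f namespace.yaml (the file name is an assumption for the example):

apiVersion: v1
kind: Namespace
metadata:
  name: test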

Now you'll notice there are namespaces other than test that I didn't create. Let me explain.

  • kube-system: Namespace for objects created by the Kubernetes system
  • default: The default namespace; when you don't specify a namespace, objects are created here
  • kube-public: Created automatically and readable by all users. This namespace is mostly reserved for cluster usage.

Let's create a Pod in the test namespace now using the below command.

kubectl run mypod --image=nginx -n test

pod/mypod created


Let's check the pod, making sure to look in the test namespace.

kubectl get pods -n test

NAME    READY   STATUS    RESTARTS   AGE

mypod   1/1     Running   0          2m10s


I have created mypod in the prod namespace as well (after creating the prod namespace the same way). So we can have an application with the same name but in a different namespace.

kubectl run mypod --image=nginx -n prod

pod/mypod created


kubectl get pods -n prod               

NAME    READY   STATUS    RESTARTS   AGE

mypod   1/1     Running   0          14s



Conclusion

Namespaces are very useful when deploying your application on a cluster. It's always a best practice to create a namespace and deploy your application into it. Think of a scenario where one team creates a pod named test-pod and another team also tries to name a pod test-pod; in that case, the second Pod creation will fail due to the duplicate name. So best practice says: create your pod inside a namespace.

Note: If you think this helped you and you want to learn more stuff on devops, then I would recommend joining the Kodecloud devops course and go for the complete certification path by clicking this link

Debugging your pod on Kubernetes?

 Debugging the pods on Kubernetes




Debugging is very important to identify issues in a Pod. Sometimes your application may behave differently or incorrectly, your Pod may have stopped, or some other issue is happening inside your Pod. You can always debug to identify the issue and fix it.

So the most basic thing we do to debug an issue is to start by checking the logs. Logs are a crucial part of any application and play a very important role: if anything goes wrong, we can always check the logs and analyse them. Along those lines, we are going to check the logs, events, and definition of the Pod to identify the issues.

To start with it, we first need to run the pod. You can follow below command to run the pod.


kubectl run mypod --image=nginx


Here mypod is the name of the pod, and the nginx image will be used.

Check if the pod is running. 

kubectl get pods


Result below:

NAME    READY   STATUS              RESTARTS   AGE

mypod   0/1     ContainerCreating   0          5s


It is in the ContainerCreating state. We can wait and see if it reaches the Running state.

NAME    READY   STATUS    RESTARTS   AGE

mypod   1/1     Running   0          21s


So it's in running state. We can also use yaml file to create the pod.

Let's break something and see what issues and statuses we get. I'll delete the pod and recreate it with a wrong image name.

To delete the Pod use below command:


kubectl delete pod mypod

pod "mypod" deleted


Now since the pod has been deleted, let's recreate it with a wrong image.

kubectl run mypod --image=nginx-myimage-123


Run it and the pod will be created. But wait, did we check whether the pod is actually running?
Check it using the kubectl get pods command.

kubectl get pods

NAME    READY   STATUS         RESTARTS   AGE

mypod   0/1     ErrImagePull   0          9s


You can see under the status column that there is an error: ErrImagePull. We can guess that there was an error during the image pull. We know the cause because we put in the wrong image name, but in a real-life scenario we wouldn't know whether our image name is wrong. So we can check the events.

To check the events we can use describe command:

kubectl describe pod mypod


Once we run this command, it will give us the result in key/value format. Just scroll to the bottom and look for the events.

Events:

  Type     Reason     Age                From               Message

  ----     ------     ----               ----               -------

  Normal   Scheduled  34s                default-scheduler  Successfully assigned default/mypod to docker-desktop

  Normal   BackOff    26s                kubelet            Back-off pulling image "nginx-myimage-123"

  Warning  Failed     26s                kubelet            Error: ImagePullBackOff

  Normal   Pulling    12s (x2 over 33s)  kubelet            Pulling image "nginx-myimage-123"

  Warning  Failed     6s (x2 over 27s)   kubelet            Failed to pull image "nginx-myimage-123": rpc error: code = Unknown desc = Error response from daemon: pull access denied for nginx-myimage-123, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

  Warning  Failed     6s (x2 over 27s)   kubelet            Error: ErrImagePull


Here we can see it says the repository either does not exist or we don't have access to it.

So checking the events is a very important step; it can tell us what happened when we ran our kubectl run command. There is one more place to check: the logs. Checking the logs can give us some insights, and sometimes, when the events don't tell us much, the logs are very helpful. In this case the issue is only with the image name and was already clear from the events, but let's check the logs anyway by running the below command.

kubectl logs mypod


Result:

Error from server (BadRequest): container "mypod" in pod "mypod" is waiting to start: trying and failing to pull image


In the logs we can also see that it is "trying and failing to pull image".

If a specific container is failing inside the pod, you can pass the container name with the -c flag.

kubectl logs mypod -c container_name

If our container has ever crashed previously, then we can use the --previous flag in the command.

kubectl logs --previous mypod -c container_name

Conclusion 

These are the basic ways by which we can check and fix issues in a Pod. Events can be very helpful, but sometimes logs are more helpful; it depends on the issue. It's always better to check the events first and then go to the logs.

Hope this can be helpful!

Note: If you think this helped you and you want to learn more stuff on devops, then I would recommend joining the Kodecloud devops course and go for the complete certification path by clicking this link
