Understanding Kubernetes Security

 Introduction




Kubernetes has become the most popular container orchestration tool, enabling organizations to deploy and manage containerized applications at scale. However, this popularity has also made it an attractive target for cybercriminals. Kubernetes security is critical to safeguarding your containerized applications and data. In this article, we will discuss the risks involved in Kubernetes security and how to harden pod security with code.

Risk


Kubernetes security risks come from different areas, including:

  • Container images: Container images used to create pods may contain vulnerabilities that can be exploited by attackers.

  • API server: The Kubernetes API server is a central point of control for managing Kubernetes clusters. An attacker who gains access to the API server can control the entire cluster.

  • Network security: Kubernetes allows pods to communicate with each other and the outside world. Without proper network security, an attacker can intercept network traffic or launch a denial-of-service attack.

  • Authorization and access control: Access to Kubernetes resources should be restricted based on the principle of least privilege. If authorization and access control are not properly implemented, an attacker can gain access to sensitive data and resources.


Hardening Pod security with code


Hardening pod security involves implementing security best practices at the code level. Here are some tips for hardening pod security:

  • Use least privilege: Grant the minimum level of privileges necessary for pods to function. Use role-based access control (RBAC) to enforce these privileges.

  • Use security contexts: Kubernetes security contexts allow you to set security policies for pods. You can use security contexts to specify a range of settings, such as user IDs, file permissions, and capabilities.

  • Use container image scanning: Use tools such as Aqua Security, Anchore, or Trivy to scan container images for vulnerabilities before deploying them in Kubernetes.

  • Use network policies: Use network policies to restrict pod-to-pod communication and ingress/egress traffic.

  • Implement secure service accounts: Kubernetes service accounts provide authentication tokens for pods to access the Kubernetes API server. Use RBAC to restrict the permissions of service accounts.

  • Monitor Kubernetes API server activity: Monitor Kubernetes API server activity for any suspicious activity or unauthorized access.

Here's an example of how to harden pod security by using a security context and a network policy in Kubernetes YAML configuration files. Note that a NetworkPolicy is a separate resource, not a field of the pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: my-container
    image: my-image
    ports:
    - containerPort: 80
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-pod-policy
spec:
  podSelector:
    matchLabels:
      app: my-app
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: my-app

In this example, we have defined a pod named "my-pod" that includes a container named "my-container" and a security context that specifies the user ID and capabilities of the container. We have also defined a NetworkPolicy that restricts incoming traffic so that the pod accepts connections only from pods labeled with "app: my-app".

The "securityContext" section of the YAML file specifies the following settings:

"runAsUser": The container runs as user ID 1000, which is a non-root user. This reduces the risk of privilege escalation attacks.
"capabilities": The container has added the "NET_ADMIN" capability, which allows it to perform network administration tasks. By limiting the container's capabilities, we reduce the risk of a container being used to launch an attack.

The "networkPolicy" section of the YAML file specifies the following settings:

"ingress": This network policy restricts incoming traffic to the pod only from pods labeled with "app: my-app". This helps prevent unauthorized access to the pod.

By using security contexts and network policies in this way, we can help harden pod security and reduce the risk of a Kubernetes security breach.
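The least-privilege and service-account recommendations above can be sketched with RBAC manifests as well. This is a minimal illustration, and the names used here (my-sa, pod-reader, read-pods) are hypothetical:

```yaml
# A dedicated service account for the workload
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
  namespace: default
---
# A Role granting only read access to pods in this namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
# Bind the Role to the service account, nothing more
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A pod that sets serviceAccountName: my-sa then holds a token that can only list and read pods in its own namespace, which limits the blast radius if the pod is compromised.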


Conclusion


Kubernetes security is critical for protecting your containerized applications and data. The risks involved in Kubernetes security come from different areas, including container images, API server, network security, and authorization and access control. To harden pod security, implement security best practices at the code level, including using least privilege, security contexts, container image scanning, network policies, secure service accounts, and monitoring Kubernetes API server activity. By following these best practices, you can minimize the risk of a Kubernetes security breach and ensure the security of your containerized applications.


Note: If you think this helped you and you want to learn more stuff on devops, then I would recommend joining the Kodecloud devops course and go for the complete certification path by clicking this link

What is Apache Airflow

What is Airflow?


Apache Airflow is an open-source platform for developing, scheduling, and monitoring batch-oriented workflows. Airflow's extensible Python framework allows you to write code and create DAGs for your workflows.

What is a DAG?


A DAG is a directed acyclic graph, which means that when you create or design one, no cycle may be formed. For example, if task A executes task B, and task B executes task C, then task C must not call task A again; otherwise a cycle is formed. In other words, there must be no circular dependencies among the tasks.
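The acyclicity rule above can be illustrated in plain Python, with no Airflow installation required. Airflow performs a similar validation when it parses a DAG file; this is only a sketch of the idea:

```python
# Detect a cycle in a task-dependency graph using depth-first search.
# The graph is a dict mapping a task name to the tasks it triggers.

def has_cycle(graph):
    """Return True if the directed graph contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited, in progress, done
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:    # back edge -> cycle
                return True
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(visit(n) for n in graph if color[n] == WHITE)

# A -> B -> C is a valid DAG...
print(has_cycle({"A": ["B"], "B": ["C"], "C": []}))     # False
# ...but adding C -> A forms a cycle, which a DAG must never contain
print(has_cycle({"A": ["B"], "B": ["C"], "C": ["A"]}))  # True
```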

A DAG represents a workflow, which is a collection of tasks. Below is how a DAG looks:




Airflow has a nice GUI through which you can create variables and handle other admin tasks. You can also view your DAGs in the GUI and trigger or cancel them. Everything can be managed via the GUI.

Use Case for Airflow


Suppose you have a Unix job that processes a text file at a specific time and then calls another job, which does its part. But sometimes the file is missing; your first job still runs and fails, and the dependent job also runs and processes no data. To overcome this, we can use Airflow and create a DAG in which the tasks depend on each other: the first job is triggered only when the file arrives, and the next job runs only once the first job has succeeded.

Let's write your first DAG


from datetime import datetime
from airflow import DAG
from airflow.decorators import task
from airflow.operators.bash import BashOperator

# A DAG represents a workflow, a collection of tasks
with DAG(dag_id="demo", start_date=datetime(2022, 1, 1), schedule="0 0 * * *") as dag:

    # Tasks are represented as operators
    hello = BashOperator(task_id="hello", bash_command="echo hello")

    @task()
    def airflow():
        print("airflow")

    # Set dependencies between tasks
    hello >> airflow()

In the above code, you first import DAG, the task decorator, and BashOperator from airflow.

When your workflow needs to execute Linux commands, use BashOperator. Similarly, when you need to write a Python function and execute it, use PythonOperator. In summary, BashOperator is used to execute bash commands and PythonOperator is used to execute Python callables.
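To make the contrast concrete, here is a minimal sketch of a DAG that uses both operators. It assumes a working Airflow 2.x installation; the dag_id, task ids, and the greet function are illustrative names, not from the original article:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def greet():
    # A plain Python callable executed by PythonOperator
    print("hello from Python")


with DAG(dag_id="operators_demo",
         start_date=datetime(2022, 1, 1),
         schedule=None) as dag:
    # BashOperator runs a shell command
    run_shell = BashOperator(task_id="run_shell", bash_command="date")
    # PythonOperator runs a Python function
    run_python = PythonOperator(task_id="run_python", python_callable=greet)

    # The shell task runs first, then the Python task
    run_shell >> run_python
```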

Once you have imported everything required, the next step is to define your DAG. When the Airflow scheduler parses the file, it looks for the line containing "with DAG" and creates a DAG based on the config you passed.

After that, BashOperator is used to create a task named "hello" that executes a shell command. Remember that you need to give each task a unique task id. On the last line you define the flow, i.e. which task has to run first and which last.

Once you save the file, the Airflow scheduler will automatically pick up the new file and create the DAG. If any error occurs, it will be displayed at the top of the Airflow UI.




Conclusion


Airflow is an awesome framework to use. It can be very useful for orchestrating legacy Unix jobs and flows. We just need to choose Airflow for the right use cases, and it can solve those problems easily.


How to delete a Pod in Kubernetes - Beginner tutorial

Delete a pod using kubectl delete pod





Sometimes we encounter issues with a running pod and decide to delete it so that we can recreate it, because we can't create a new pod with the same name: two pods cannot share a name in the same namespace.

There are two ways to delete a pod:
  • Using the delete command
  • Using the delete command with the force flag
The first way is called a graceful delete. Before we can delete a pod, we first need to create one.

Create a pod

To create a pod, run the below command in your terminal.

kubectl run nginx --image=nginx --restart=Never


The above command runs the nginx image in a pod named nginx.
The --restart=Never flag tells Kubernetes to create a single pod rather than a Deployment.

Now check whether the pod is running using the below command.

kubectl get pod


Output:

NAME    READY   STATUS              RESTARTS   AGE

nginx   0/1     ContainerCreating   0          8s


Just wait a few seconds and the pod will be running.

kubectl get pod

NAME    READY   STATUS    RESTARTS   AGE

nginx   1/1     Running   0          2m6s


Now that we have created our pod, let's try to delete it. To delete a pod, you can use the below command.

kubectl delete pods nginx


Here you only need to pass the pod name; in this case I passed nginx.

Deletion may take some time depending on the pod's termination grace period. If you want the pod to be deleted quickly, you can use the force flag in the command.

A pod is not deleted automatically when a node is unreachable. Pods running on an unreachable node enter the Terminating or Unknown state after a timeout. Pods may also enter these states when a user attempts a graceful deletion of a pod on an unreachable node. The only ways such a pod can be removed from the apiserver are as follows:
  • The node object is deleted.
  • The kubelet on the unresponsive node starts responding, kills the pod, and removes it from the apiserver.
  • The user force-deletes the pod.
The recommended best practice is to use the first two options. If a node is confirmed to be dead, delete the node object. Normally, the system deletes a pod once it is no longer running on a node or the node has been deleted by an administrator.
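As a sketch of the recommended path, the node-deletion option looks like this. The node name worker-2 is hypothetical, and these commands assume a configured kubectl pointing at your cluster:

```shell
# List nodes and confirm the dead node's status (e.g. NotReady)
kubectl get nodes

# Delete the node object; pods scheduled on it are then
# removed from the apiserver automatically
kubectl delete node worker-2
```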

Delete the pod forcefully using below command:

kubectl delete pod nginx --force 

Output:

pod "nginx" force deleted


If the pod is still stuck in the Unknown state even after the above command, you can use the below command to remove the pod from the cluster.

kubectl patch pod nginx -p '{"metadata":{"finalizers":null}}'


That was it about deleting a pod using kubectl. Be careful when deleting pods, especially with the force flag.

