Understanding the replica set in Kubernetes

What is a Replica?

Replica means copy, so an exact copy of a running Pod is called a replica. A ReplicaSet is a way to maintain a stable set of replica Pods running at any given time. It guarantees the availability of a specified number of identical Pods.

How does it work?

A ReplicaSet is defined with a selector that specifies how to identify the Pods it can acquire, a number of replicas indicating how many Pods it should maintain, and a Pod template specifying the data of the new Pods it should create. When a ReplicaSet needs to create a new Pod, it uses this Pod template.


ReplicaSet vs ReplicationController

ReplicaSet and ReplicationController do the same thing: both ensure that the number of Pods specified in replicas is running at any time. The only difference is in the selectors used to match Pods. A ReplicaSet uses set-based selectors, while a ReplicationController uses equality-based selectors.
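
For instance, a set-based selector can match a label against a set of values with matchExpressions, which an equality-based selector cannot express. A minimal sketch (the tier values here are illustrative):

selector:
  matchExpressions:
    - key: tier
      operator: In
      values:
        - frontend
        - cache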

Compare the full example manifests below:

Replication Controller

apiVersion: v1
kind: ReplicationController
metadata:
  name: repcon
spec:
  replicas: 3
  selector:
    app: repcon
  template:
    metadata:
      name: repcon
      labels:
        app: repcon
    spec:
      containers:
      - name: repcon
        image: repcon/rc
        ports:
        - containerPort: 80


ReplicaSet

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: RS
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: redis
        image: gcr.io/google_samples/gb-frontend:v3


Save it as ReplicaSet.yaml in the current directory and run the command below.

kubectl apply -f ReplicaSet.yaml

Now you can check the ReplicaSet with the command below:

kubectl get rs

or

kubectl get replicaset
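
The output should look similar to the following sketch (your counts and ages will vary):

NAME       DESIRED   CURRENT   READY   AGE
frontend   3         3         3       20s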



Creating a Pod inside a Namespace





We are going to understand what a namespace is in programming and what a namespace is in the Kubernetes world; the two concepts are actually very similar. Today we'll talk about namespaces: how to create one, how to inspect it, and how to create a Pod inside it.

Namespace

A namespace is a set of names that are used to identify and refer to objects of various kinds. A namespace ensures that all of its objects have unique names so that they can be easily identified. You can correlate this with schemas in a SQL Server database, where you can have multiple tables with the same name as long as they are in different schemas.

Similarly, you can have multiple Pods with the same name as long as they are in different namespaces.

How to check all available namespaces?

You can run kubectl get namespaces to list all available namespaces on the cluster.

kubectl get namespaces

NAME              STATUS   AGE

default           Active   2d

kube-public       Active   2d

kube-system       Active   2d


You can also run kubectl get ns, where ns is the short form of namespace.

How to create a namespace

To create a namespace, run kubectl create namespace namespace_name:

kubectl create namespace test

namespace/test created


kubectl get namespaces         

NAME              STATUS   AGE

default           Active   82d

kube-public       Active   82d

kube-system       Active   82d

test              Active   5s


You can see the test namespace has been created.
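
You can also create a namespace declaratively from a manifest. A minimal sketch (save it as test-namespace.yaml; the filename is just an example):

apiVersion: v1
kind: Namespace
metadata:
  name: test

kubectl apply -f test-namespace.yaml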

You'll also notice namespaces other than test that I didn't create. Let me explain.

  • kube-system: The namespace for objects created by the Kubernetes system
  • default: The default namespace; when you don't specify one, objects are created here
  • kube-public: Created automatically and readable by all users; this namespace is mostly reserved for cluster usage

Let's create a Pod in the test namespace now using the command below.

kubectl run mypod --image=nginx -n test

pod/mypod created


Let's check the Pod; make sure you look in the test namespace.

kubectl get pods -n test

NAME    READY   STATUS    RESTARTS   AGE

mypod   1/1     Running   0          2m10s


I have created mypod in a prod namespace as well (created beforehand with kubectl create namespace prod), so we can have an application with the same name in different namespaces.

kubectl run mypod --image=nginx -n prod

pod/mypod created


kubectl get pods -n prod               

NAME    READY   STATUS    RESTARTS   AGE

mypod   1/1     Running   0          14s



Conclusion

Namespaces are very useful when deploying your application on a cluster, and it's a best practice to create a namespace and deploy your application into it. Think of a scenario where one team creates a Pod named test-pod and another team also tries to name a Pod test-pod. In that case the second Pod creation will fail due to the duplicate name, unless each team creates its Pod inside its own namespace.


Debugging your Pods on Kubernetes




Debugging is very important for identifying issues in a Pod. Sometimes your application may behave differently or incorrectly, your Pod may have stopped, or some other issue may be happening inside it. You can always debug to identify the issue and fix it.

The most basic way to debug an issue is to start by checking the logs. Logs are a crucial part of any application and play a very important role: if anything goes wrong, we can always check and analyse them. Along the same lines, we are going to check the logs, events, and definition of a Pod to identify the issue.

To start, we first need to run a Pod. You can use the command below.


kubectl run mypod --image=nginx


Here mypod is the name of the Pod, and the nginx image will be used.

Check if the pod is running. 

kubectl get pods


Result below:

NAME    READY   STATUS              RESTARTS   AGE

mypod   0/1     ContainerCreating   0          5s


It is in the ContainerCreating state. We can wait and see whether it completes and reaches the Running state.

NAME    READY   STATUS    RESTARTS   AGE

mypod   1/1     Running   0          21s


So it's in the Running state. We can also use a YAML file to create the Pod, as shown below.
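
A minimal sketch of such a manifest (save it as mypod.yaml; the filename is just an example):

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: nginx

kubectl apply -f mypod.yaml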

Let's break something and see what issues and statuses we get. I'll delete the Pod and recreate it with a wrong image name.

To delete the Pod, use the command below:


kubectl delete pod mypod

pod "mypod" deleted


Now that the Pod has been deleted, let's recreate it with the wrong image.

kubectl run mypod --image=nginx-myimage-123


Run it and the Pod will be created successfully. But wait, did we check whether the Pod is actually running? Check it using the kubectl get pods command.

kubectl get pods

NAME    READY   STATUS         RESTARTS   AGE

mypod   0/1     ErrImagePull   0          9s


Under the STATUS column you can see there is an error: ErrImagePull. We can guess that an error occurred while pulling the image. We know that because we deliberately used a wrong image name, but in a real-life scenario we wouldn't know whether our image name is wrong, so we check the events.

To check the events, we can use the describe command:

kubectl describe pod mypod


Once we run this command, it will give us the result in key/value format. Just scroll to the bottom and look for the events.

Events:

  Type     Reason     Age                From               Message

  ----     ------     ----               ----               -------

  Normal   Scheduled  34s                default-scheduler  Successfully assigned default/mypod to docker-desktop

  Normal   BackOff    26s                kubelet            Back-off pulling image "nginx-myimage-123"

  Warning  Failed     26s                kubelet            Error: ImagePullBackOff

  Normal   Pulling    12s (x2 over 33s)  kubelet            Pulling image "nginx-myimage-123"

  Warning  Failed     6s (x2 over 27s)   kubelet            Failed to pull image "nginx-myimage-123": rpc error: code = Unknown desc = Error response from daemon: pull access denied for nginx-myimage-123, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

  Warning  Failed     6s (x2 over 27s)   kubelet            Error: ErrImagePull


Here we can see it says the repository either does not exist or we don't have access to it.
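
If you only want the events without the rest of the describe output, you can also query them directly; a sketch, assuming the Pod lives in the default namespace:

kubectl get events --field-selector involvedObject.name=mypod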

Checking the events is a very important step: it tells us what happened after we ran our kubectl run command. There is one more thing we can check, and that is the logs. Sometimes we don't get much information from the events, and in those scenarios checking the logs can be very helpful. In this case the issue was only with the image, and that was already clear from the events, but let's check the logs anyway by running the command below.

kubectl logs mypod


Result:

Error from server (BadRequest): container "mypod" in pod "mypod" is waiting to start: trying and failing to pull image


In the logs, too, we can see that it's at the "trying and failing to pull image" step.

If a specific container is failing inside the Pod, you can run the command below.

kubectl logs mypod -c container_name

If our container has ever crashed previously, we can add the --previous flag.

kubectl logs --previous mypod -c container_name

Conclusion 

These are the basic ways to check and fix issues in a Pod. Events can be very helpful, but sometimes the logs help more; it depends on the issue. It's usually best to check the events first and then go to the logs.

Hope this can be helpful!


How to scan vulnerabilities for Docker images

 Vulnerability scanning for Docker




Today we use Docker a lot. It enables developers to package an application into a container: a standardized executable component combining the application source code with the OS libraries and dependencies required to run that code in any environment. We create Docker images and distribute them to others, but how sure are we that an image is secure and free of vulnerabilities?

Suppose an image with a lot of vulnerabilities is used in your production system. A hacker can find those weaknesses and exploit them easily, so identifying the vulnerabilities in your image is a very important part of the security of your system.

Vulnerability scanning  

Vulnerability scanning is the process of identifying security weaknesses and flaws in a system. It is an integral part of a vulnerability management program, whose purpose is to protect organizations from data breaches. Vulnerability scanning of local Docker images allows teams to review the security state of their container images and act on issues identified during the scan.

Docker scan runs on the Snyk engine, giving users visibility into the security posture of their Dockerfiles and images. Users trigger vulnerability scans through the CLI and use the CLI to view the results. The scan results contain a list of Common Vulnerabilities and Exposures (CVEs).
  
I recommend upgrading to the latest version of the Docker scan tool.

Let's check the options available for docker scan using the help command.

docker scan --help


Usage: docker scan [OPTIONS] IMAGE


A tool to scan your images


Options:

      --accept-license    Accept using a third party scanning provider

      --dependency-tree   Show dependency tree with scan results

      --exclude-base      Exclude base image from vulnerability scanning (requires --file)

  -f, --file string       Dockerfile associated with image, provides more detailed results

      --group-issues      Aggregate duplicated vulnerabilities and group them to a single one (requires --json)

      --json              Output results in JSON format

      --login             Authenticate to the scan provider using an optional token (with --token), or web base token if empty

      --reject-license    Reject using a third party scanning provider

      --severity string   Only report vulnerabilities of provided level or higher (low|medium|high)

      --token string      Authentication token to login to the third party scanning provider

      --version           Display version of the scan plugin


Now you can see all the options available with docker scan. Let's check the version using the command below.

docker scan --accept-license --version



If you have a version earlier than v0.11.0, docker scan is not able to detect the Log4j vulnerability CVE-2021-44228; you must update your Docker Desktop to 4.3.1 or higher.

How to scan

You can run the docker scan command just by passing the image name.

docker scan my-image


The above command will print a scan report to the terminal.
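
You can combine the options from the help output above; for example (my-image and the Dockerfile path are placeholders):

docker scan --file Dockerfile --exclude-base my-image

docker scan --severity high my-image

The first command ties the results to your Dockerfile for more detail and skips the base image's vulnerabilities; the second reports only high-severity issues.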

Scan images during Development and Production

Building an image from a Dockerfile, or rebuilding it, can introduce new vulnerabilities into the system, so scanning the image during development should be part of the normal workflow. You can automate this process, for example (see the sketch below):
 image_building ==> docker scan image ==> push to Docker Hub/private registry
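
A minimal shell sketch of that pipeline, assuming an image tag my-image and a registry you are logged in to:

docker build -t my-image .
docker scan --accept-license --severity high my-image && docker push my-image

The && ensures the push only happens when the scan exits successfully.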

For production systems, whenever a new vulnerability is discovered, running a scan is a good way to detect it in your system. Periodic scanning of containers is a good practice.

Ending thoughts

Building secure images is a continuous process. Follow best practices to build efficient, scalable, and secure images. Start with your base image, and always choose images from official and verified publishers, because otherwise you don't know what's inside them.


Running your first Pod on Kubernetes

What is Kubernetes?




Kubernetes is an open source, cloud native infrastructure tool that automates scaling, deployment and management of containerized applications. 

Kubernetes was originally developed by Google and was later handed over to the Cloud Native Computing Foundation (CNCF) for enhancement and maintenance. It is the most popular and in-demand orchestration tool, though it is complex and a bit difficult to learn compared to Swarm.

Here are a few main architecture components of Kubernetes:

Cluster 

A collection of multiple nodes, typically at least one master node and several worker nodes (also known as minions).

Node

A physical or virtual machine (VM).

Control Plane

The component that schedules and deploys application instances across all nodes.

Kubelet

An agent process running on each node. It is responsible for managing the state of its node and can perform several actions to maintain the desired state.

Pods

Pods are the basic scheduling unit. A Pod consists of one or more containers that are co-located on a host machine and share the same resources.

How to run your first Pod on Kubernetes

Before you begin, you need a Kubernetes cluster running on your system with kubectl configured against it. kubectl is the command-line tool that communicates with your cluster.

The easiest way to start is to get Docker Desktop for Windows/Mac. Start Docker Desktop, go to Settings, and find the Kubernetes tab; enable it and it will install Kubernetes on your system.



Once done, you can run the command below to check that the Kubernetes cluster is running.

kubectl cluster-info


This command gives you information about your Kubernetes cluster. Now that we have checked that our cluster is up and running, we'll deploy our first Pod.

To check the Pods running on the system, run the command below:

kubectl get pods 


No Pods are running currently, so you'll see no output. To run a Pod, execute the command below:

kubectl run ng --image=nginx 


Here ng is the name I have given the Pod; you can give it any name. Now check whether the Pod is running:

kubectl get pods            

NAME    READY   STATUS    RESTARTS   AGE

ng      1/1     Running   0          98s


So our first Pod is running. 

A Pod can run more than one container. Behind the scenes you are actually running containers with an added abstraction layer, which is called a Pod. But remember, you can't have two containers with the same name inside one Pod.
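
A minimal sketch of a Pod manifest with two containers (the names and the sleep command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: multi
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]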

You can add -o wide to your get pods command to get more information about running Pods.

kubectl get pods -o wide    

NAME    READY   STATUS    RESTARTS   AGE   IP          NODE             NOMINATED NODE   READINESS GATES

So you get more info. 

Note: 

kubectl get pods checks running Pods in the default namespace. Kubernetes has a concept of namespaces, so you can have multiple namespaces. When you install Kubernetes, there are at least two namespaces by default:

  1. default
  2. kube-system

kubectl get pods --all-namespaces -o wide

By running the above command you can see all Pods running across all namespaces.


What are some more flags/options for running a Pod?

# Start a single instance of busybox and keep it in the foreground; don't restart it if it exits.

Command Below:


kubectl run -i --tty busybox --image=busybox --restart=Never



# Start a replicated instance of nginx.

Command below:

kubectl run nginx --image=nginx --replicas=3

Note that the --replicas flag of kubectl run was removed in newer kubectl versions (v1.18+); there you can create a Deployment instead:

kubectl create deployment nginx --image=nginx --replicas=3



Sometimes you need to stop and start a Pod, like you stop and start containers in Docker. But in Kubernetes it's not possible to stop a Pod and resume it later. You can edit the Pod's YAML file and redeploy your changes, or you can delete your Pod and easily recreate it.

kubectl delete pod ng                  

pod "ng" deleted


We have successfully deleted the Pod.


That's how you can start your first Pod on Kubernetes. Kubernetes is the most popular container orchestrator: you can run multiple Pods at scale and monitor them easily. Pods are an essential part of the Kubernetes system and are used to manage containers indirectly. This post has covered the basics of starting a Pod and deleting it.


Containers orchestration: Kubernetes vs Docker swarm




When deploying applications at scale, you need to plan all your architecture components with current and future strategies in mind. Container orchestration tools help achieve this by automating the management of application microservices across all clusters. 

There are a few major container orchestration tools, listed below:

  • Docker Swarm
  • Kubernetes
  • OpenShift
  • Hashicorp Nomad
  • Mesos
Today we'll talk about Docker Swarm and Kubernetes and compare them in terms of features.

What is container orchestration 

Container orchestration is a set of practices for managing Docker containers at large scale. As soon as containerized applications scale to a large number of containers, container management capabilities are needed: provisioning containers, scaling up and down, managing networking, load balancing, security, and more.

Let's talk Kubernetes

Kubernetes is an open source, cloud native infrastructure tool that automates scaling, deployment and management of containerized applications. 

Kubernetes was originally developed by Google and was later handed over to the Cloud Native Computing Foundation (CNCF) for enhancement and maintenance. It is the most popular and in-demand orchestration tool, though it is complex and a bit difficult to learn compared to Swarm.

Here are a few main architecture components of Kubernetes:

Cluster 

A collection of multiple nodes, typically at least one master node and several worker nodes (also known as minions).

Node

A physical or virtual machine (VM).

Control Plane

The component that schedules and deploys application instances across all nodes.

Kubelet

An agent process running on each node. It is responsible for managing the state of its node and can perform several actions to maintain the desired state.

Pods

Pods are the basic scheduling unit. A Pod consists of one or more containers that are co-located on a host machine and share the same resources.


Docker Swarm

Docker Swarm is native to the Docker platform. It was developed to maintain application efficiency and availability in different runtime environments by deploying containerized application microservices across multiple clusters.

A mix of docker-compose, Swarm, and overlay networks can be used to manage a cluster of Docker containers.

Docker Swarm is still maturing in terms of functionality compared to other open-source container orchestration tools.

Here are a few main architecture components of Docker Swarm:

Swarm 

A collection of nodes that includes at least one manager and several worker nodes.

Service

The definition of tasks that agent nodes or managers are required to execute on the swarm.

Manager node

A node tasked with delegating work: it manages and distributes tasks among worker nodes.

Worker node

A node responsible for running tasks distributed by the swarm's manager node.

Tasks

A set of commands.
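
To make these concepts concrete, here is a minimal sketch that initializes a swarm and runs a replicated service (the service name web is illustrative):

docker swarm init
docker service create --name web --replicas 3 -p 80:80 nginx
docker service ls

The manager schedules the three replica tasks across the available nodes.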

Choosing the right Orchestrator for your containers

Kubernetes focuses on open-source, modular orchestration, offering an efficient container orchestration solution for high-demand applications with complex configurations.

Docker Swarm emphasises ease of use, making it most suitable for simple applications that are quick to deploy and easy to manage.

Some fundamental differences between the two

GUI:

Kubernetes features a web user interface (the Dashboard) that helps you:
  • Deploy containerized applications on a cluster
  • Manage cluster resources
  • View error logs, deployments, and jobs
Unlike Kubernetes, Docker Swarm does not come with a web UI to deploy applications and orchestrate containers, but some third-party tools can provide this for Docker.

Availability:

Kubernetes ensures high availability by creating clusters that eliminate single points of failure. You can use stacked control plane nodes, which ensure availability by co-locating etcd members with the control plane nodes of a cluster, or you can use an external etcd cluster for the data store while managing the control plane nodes separately.

To maintain high availability, Docker Swarm uses service replication at the node level: a swarm manager deploys multiple instances of the same container, with replicas of the service on each.

Scalability:

Kubernetes supports autoscaling at both the cluster level and the pod level. Docker Swarm, on the other hand, deploys containers quickly, which gives the orchestration tool fast reaction times and allows on-demand scaling.

Monitoring: 

Kubernetes offers multiple native logging and monitoring solutions for services deployed within a cluster, and it supports third-party integrations for event-based monitoring.

Docker Swarm, on the other hand, doesn't offer a built-in monitoring solution like Kubernetes does, so you need to rely on third-party applications for monitoring. As a result, monitoring a Docker Swarm is considered more complex than monitoring Kubernetes.
 

How and why container monitoring is so important



What is container monitoring?

Containers are ephemeral in nature, which makes them difficult to monitor compared to applications on bare-metal servers or even those running on virtualized servers. Monitoring is critical to ensure the availability, performance, and security of containers, so container infrastructure requires new monitoring tools and strategies.

Container observability

Visibility and monitoring are essential to a running environment and to optimizing resource usage and costs.

Because each container image can have a large number of running instances, and because of the high pace at which new images and versions are introduced, problems can easily spread across containers and applications and can disrupt the entire architecture. This makes it critical to identify the root cause of a problem as soon as it occurs.

In large-scale containerized environments, this is only possible with dedicated cloud-native monitoring tools.

If you are unable to achieve observability, the consequences include the following:


  • It is very difficult for developers and operations teams to understand what is running and how it is performing. Without observability it is very difficult to troubleshoot problems and meet the SLA for a production system.
  • Scalability is also a major challenge without observability. Scaling your application on demand can enhance your users' experience, but if scaling is too slow it degrades it.

Challenges with container monitoring 

There are a few challenges in container monitoring:
  • Containers are ephemeral, so provisioning and destroying a container is a very quick process. This is one of their biggest advantages, but in a complex, large production system it makes issues very difficult to pin down.
  • Containers share resources: they consume resources from the host machine. If resources on the host machine are not monitored, a CPU or memory spike can surprise you at any time and bring your running production application down.

How can we monitor containers, then?

You can always use an alerting system to monitor your containers. Setting up alerts across the delivery pipeline can catch the risk of system failure at an early stage.

What are the common features of monitoring tools?

  • Real time monitoring 
  • Performance baseline
  • Anomaly detection
  • Network Performance monitoring 
  • Config monitoring 
  • Dashboards
  • API monitoring
  • Alerting
  • Automation

Here are some popular container monitoring tools used in modern industry:

Prometheus

Prometheus is an open-source systems monitoring and alerting toolkit, originally built at SoundCloud. Prometheus collects and stores its metrics as time series data, i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.

Features:

  • A multi-dimensional data model, with time series data identified by metric name and key/value pairs
  • PromQL, a flexible query language to leverage this dimensionality
  • Multiple modes of graphing and dashboard support
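
As a taste of how Prometheus is pointed at targets, here is a minimal sketch of a prometheus.yml scrape configuration (the job name and target address are illustrative):

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: my-app
    static_configs:
      - targets: ['localhost:9090']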

Grafana

With Grafana you can visualise, analyse, and alert on your systems. No matter where your data is stored, you can create dashboards and monitor it; your data source can be anything: Postgres, MySQL, Redis, etc.

Apart from the above two, there are a few more popular tools such as Elasticsearch and Kibana, Zabbix, Datadog, etc.


How to run PostgreSQL on Docker



Postgres on Docker 

Postgres is one of the most advanced object-relational database management systems (ORDBMS). Postgres implements the majority of the SQL:2011 standard, is ACID compliant, and avoids locking issues using multiversion concurrency control (MVCC). Today we are going to run Postgres on Docker.

To start with Postgres, we first need to pull the image from Docker Hub, the default public image registry. Let's run the command below to pull the image:

docker pull postgres

Using default tag: latest

latest: Pulling from library/postgres

a9eb63951c1c: Pull complete 

b192c7f382df: Pull complete 

e7ce3f587986: Pull complete 

4098744a1414: Pull complete 

4c98d6f3399d: Pull complete 

65e57fefc38a: Pull complete 

d61d9528cfd5: Pull complete 

de6b20f44659: Pull complete 

25db13ff0bef: Pull complete 

7f74f4b0e936: Pull complete 

144c847b11fb: Pull complete 

cf0afd1be009: Pull complete 

fe0c14991327: Pull complete 


Now let's check that we have downloaded the image.

docker images

REPOSITORY        TAG       IMAGE ID       CREATED       SIZE

postgres          latest    83ce63c594ee   5 days ago    355MB


Let's run the image and start a container.

docker run --name test -e POSTGRES_PASSWORD=Test@123 -d postgres
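
Here --name names the container, -e sets the POSTGRES_PASSWORD environment variable the image requires, and -d runs it detached. If you also want to reach Postgres from the host and keep data across container restarts, a sketch (the pgdata volume name is illustrative):

docker run --name test -e POSTGRES_PASSWORD=Test@123 -p 5432:5432 -v pgdata:/var/lib/postgresql/data -d postgres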


Just run the docker ps command to check whether the container is running:

docker ps

CONTAINER ID   IMAGE      COMMAND                  CREATED         STATUS   

83ec4a222   postgres   "docker-entrypoint.s…"   2 minutes ago   Up 


Let's enter the container's bash shell by running the command below:

docker exec -it 83ec4a222 bash

root@83ec4a222:/# 


Now connect to Postgres:

psql -h localhost -p 5432 -U postgres -w

psql (14.0 (Debian 14.0-1.pgdg110+1))

Type "help" for help.


You are connected to Postgres now. Let's run some queries.

postgres=# \l

                                 List of databases

   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   

-----------+----------+----------+------------+------------+-----------------------

 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 

 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +

           |          |          |            |            | postgres=CTc/postgres

 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +

           |          |          |            |            | postgres=CTc/postgres

(3 rows)


postgres=# 



Let's check the current database name by running the command below.

postgres=# select current_database();

 current_database 

------------------

 postgres

(1 row)


So the current database is postgres. Now we'll check how many databases there are on the system.

postgres=# select datname from pg_catalog.pg_database;

  datname  

-----------

 postgres

 template1

 template0

(3 rows)


There are a total of 3 databases on the system.

You can check all tables in a database by querying the information schema.

postgres=# select table_name from information_schema.tables limit 10;

      table_name       

-----------------------

 pg_statistic

 pg_type

 pg_foreign_table

 pg_authid

 pg_shadow

 pg_statistic_ext_data

 pg_roles

 pg_settings

 pg_file_settings

 pg_hba_file_rules

(10 rows)
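
To round this off, here is a small sketch of creating a table and querying it (the table and column names are illustrative):

postgres=# CREATE TABLE items (id serial PRIMARY KEY, name text);
CREATE TABLE
postgres=# INSERT INTO items (name) VALUES ('first'), ('second');
INSERT 0 2
postgres=# SELECT * FROM items;
 id |  name
----+--------
  1 | first
  2 | second
(2 rows)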



We can do a lot more than this on Postgres; this was just a small taste. We can get information about all tables and databases just by using the information schema. Docker is very useful here when we don't want to install Postgres on the system: we can run it inside a container and leverage the power of Docker.

