
Running a Docker container as a non-root user

Docker is a revolutionary technology in the world of DevOps. Today Docker makes application deployments easy and fast. But did you know that when you start a Docker container and log into it, you are logged in as root by default? So today we are going to see how this root login works and how we can control it.

Sometimes your container needs extra permissions, so you don't want to restrict it from running as root. But that is not always the case. Sometimes you just want to run your container and bring up some services, that's it.

Let's do it step by step. I am assuming that you already have Docker installed. Start by downloading the Alpine Linux image. I use it in most tutorials because it's small in size.

Run the command below:


docker pull alpine


The above command downloads the Alpine image to our system. One thing I want to mention here is that Docker pulls images from Docker Hub by default. Docker Hub is a repository of images.

Now, let's run the image using the command below:


docker run -it --name mycontainer alpine


This will log you into the Alpine shell.

docker run -it --name mycontainer alpine

/ # 


Now check that you are root. The '#' prompt character itself already indicates a root shell.

docker run -it --name mycontainer alpine

/ # whoami

root

/ # ls

bin    dev    etc    home   lib    media  mnt    opt    proc   root   run    sbin   srv    sys    tmp    usr    var

/ # 


See, you are root. You might wonder what's bad about that; I'll explain as we go on. Let's run another container from the same image and name it mycontainer2.

docker run -it --name mycontainer2 alpine

/ # whoami

root

/ # ls

bin    dev    etc    home   lib    media  mnt    opt    proc   root   run    sbin   srv    sys    tmp    usr    var

/ # 


Let's check how many containers are running:


docker ps

CONTAINER ID   IMAGE     COMMAND     CREATED          STATUS          PORTS     NAMES

ab9d6a6f8dd1   alpine    "/bin/sh"   31 seconds ago   Up 30 seconds             mycontainer2

78ccfe7b5275   alpine    "/bin/sh"   4 minutes ago    Up 4 minutes              mycontainer


So mycontainer and mycontainer2 are both running. Now I'll check the networks with the command below.

docker network ls

NETWORK ID   NAME      DRIVER    SCOPE

e152bd78da   bridge    bridge    local

7e9c316ea4   host      host      local

9eb8c364ec   none      null      local


By default Docker uses a bridge network. When you create a container, Docker assigns an IP address to it.
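
If you want to see which subnet those addresses come from, you can inspect the default bridge network. The subnet shown below is Docker's usual default, but it may differ on your machine.

docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'

172.17.0.0/16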

Now I am going to check the IP addresses of the containers using the inspect command.

docker inspect mycontainer | egrep "IPAddress"

            "SecondaryIPAddresses": null,

            "IPAddress": "172.17.0.3",

                    "IPAddress": "172.17.0.3",


Similarly for mycontainer2:


docker inspect mycontainer2 | egrep "IPAddress"

            "SecondaryIPAddresses": null,

            "IPAddress": "172.17.0.4",

                    "IPAddress": "172.17.0.4",



So I have the IP addresses of both containers. Now let's check whether we can ping one container from the other.

I tried to ping mycontainer from mycontainer2, and the result is below.

/ # ping 172.17.0.3

PING 172.17.0.3 (172.17.0.3): 56 data bytes

64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.306 ms

64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.323 ms

64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.349 ms

64 bytes from 172.17.0.3: seq=3 ttl=64 time=0.321 ms

64 bytes from 172.17.0.3: seq=4 ttl=64 time=0.388 ms


OK, Docker creates a network and both containers are on the same network, so they are able to ping each other.
Let's ping my host machine's IP address from mycontainer2.

# ping 192.168.X.X

PING 192.168.X.X (192.168.X.X): 56 data bytes

64 bytes from 192.168.X.X: seq=0 ttl=37 time=1.762 ms

64 bytes from 192.168.X.X: seq=1 ttl=37 time=1.640 ms

64 bytes from 192.168.X.X: seq=2 ttl=37 time=1.541 ms


So the container was able to ping the host as well. Suppose an attacker gets access to a container that is running with root privileges: they may find a way to break out and get into the host system.

So we'll limit the container to run as a non-root user.

Let's create a user on mycontainer2 using the adduser command.

/ # adduser pd

Changing password for pd

New password: 

Bad password: too short

Retype password: 

passwd: password for pd changed by root

/ # su pd

$ whoami

pd

/ $ 



The user has been added. Now type exit, and mycontainer2 will stop automatically. Run the command below to restart mycontainer2.

docker start mycontainer2

mycontainer2



Now open a shell in mycontainer2 as the non-root user:

docker exec -it --user pd mycontainer2 /bin/sh

/ $ 

/ $ whoami

pd

/ $ 


We have just entered the container as a non-root user. As a final step, let's check whether we can still do the same things as before, like pinging the other container or the host machine.

/ $ ping 172.17.0.3

PING 172.17.0.3 (172.17.0.3): 56 data bytes

ping: permission denied (are you root?)

/ $ 


Ah, a permission error this time, which is exactly what we wanted.

Success!

That's how we can tighten Docker security by running containers as a non-root user.
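
If you don't want to pass --user every time, you can also bake the non-root user into your own image. Here is a minimal sketch; the image name myalpine and the user pd are just examples, not anything Docker provides by default.

FROM alpine

RUN adduser -D pd

USER pd

Build it with docker build -t myalpine . and any container started from it, for example with docker run -it myalpine /bin/sh, runs as pd by default.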

Note: If you think this helped you and you want to learn more about DevOps, then I would recommend joining the Kodecloud DevOps course and going for the complete certification path by clicking this link

Docker Security for containers




Docker is the most popular containerisation technology. Used properly, it can increase the level of security compared to running applications directly on the host. On the other hand, some misconfigurations can lower the level of security or even introduce new vulnerabilities. There are common container security risks and vulnerabilities during the development cycle that can be exploited by attackers:


  1. Using insecure images

  2. Containers running with the privileged flag

  3. Unrestricted communication between containers

  4. Containers running rogue or malicious processes

  5. Containers that are not properly isolated from the host

Using insecure images:

Containers are built using images. Images are useful for building containers because you can reuse the various components of an image instead of building a container image from scratch. However, like any piece of code, images or their dependencies could contain vulnerabilities. 

You should always try to use images from trusted sources. What I mean by that is, if I am going to download an MSSQL image from Docker Hub, it should come from Microsoft only. Other images could also be used to get MSSQL, but you don't know which of them might contain vulnerabilities.
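
One way to reduce this risk is Docker Content Trust, which makes the client refuse images that are not signed by their publisher. As a rough sketch, you can enable it for your shell session like this:

export DOCKER_CONTENT_TRUST=1

docker pull alpine

With the variable set, docker pull will fail for images that do not have valid signatures.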


Containers running with the privileged flag

Anyone with even a modest knowledge of containers probably knows what a privileged container is. Containers running with the privileged flag can do almost anything the host can do: they run with all capabilities and gain access to the host's devices. This means that if an attacker breaches a container running with the privileged flag, they can do almost anything on the host system.

Executing container engines with the --privileged flag tells the engine to launch the container process without any further "security" lockdown.

docker run --privileged -t -i --rm ubuntu:latest bash 

If the container genuinely needs extra privileges, prefer granting individual capabilities with --cap-add instead:

docker run --cap-add SYS_ADMIN ...


Unrestricted communication between containers

Docker containers are very similar to LXC containers, and they have similar security features. When you start a container with docker run, behind the scenes Docker creates a set of namespaces and control groups for the container.

Namespaces provide the first and most straightforward form of isolation: processes running within a container cannot see, and even less affect, processes running in another container, or in the host system.

Each container also gets its own network stack, meaning that a container doesn’t get privileged access to the sockets or interfaces of another container.

Running containers (and applications) with Docker implies running the Docker daemon. This daemon requires root privileges unless you opt-in to Rootless Mode, and you should therefore be aware of some important details. First of all, only trusted users should be allowed to control your Docker daemon. This is a direct consequence of some powerful Docker features. Specifically, Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container. This means that you can start a container where the /host directory is the / directory on your host; and the container can alter your host filesystem without any restriction. 
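
To see why that matters, here is the kind of command the paragraph above describes. It is shown only for illustration and is best tried on a disposable test machine, since the container gets full write access to the host's filesystem under /host:

docker run -it --rm -v /:/host alpine /bin/sh

/ # ls /host

If you do need to share a host directory, mount only the specific path you need and add :ro to make it read-only where possible.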

Containers running rogue or malicious processes

It can be hard to tell whether one of your containers is running malicious code when there are lots of containers. To limit what any single container can do, use Docker's --cap-add option to grant only the Linux capabilities a container needs to run properly and achieve its goal, and use --cap-drop to remove all unnecessary capabilities, as in the sketch below.
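
As an example of that approach, the container below is started with every capability dropped and only NET_RAW added back, which is enough for ping to work; the address 172.17.0.1 is the usual default bridge gateway and is only an illustration.

docker run --rm -it --cap-drop ALL --cap-add NET_RAW alpine ping -c 1 172.17.0.1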


Containers that are not properly isolated from the host

Any misconfiguration of a container can put the host at risk.

Don’t share the host’s network namespace. Doing so could put you at risk of a container shutting down the Docker host.

Don’t share the host’s process namespaces. Doing so would allow a container to see all of the processes running on a host system, which leaves the processes on the host at risk of being manipulated or shut down.
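
For reference, these are the kinds of flags those warnings refer to; a container started like the example below shares the host's network and process namespaces and should generally be avoided:

docker run -it --network host --pid host alpine /bin/sh

Inside such a container, ps can see every process on the host, and the container binds directly to the host's network interfaces.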

There are a few practices that can be followed:

  • Always keep the host and Docker up to date.

Known container-escape vulnerabilities typically end with the attacker escalating to root/administrator privileges, so patching Docker Engine and Docker Machine is crucial to prevent them.
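
On a Debian or Ubuntu host that installed Docker from Docker's official apt repository, the upgrade looks roughly like the command below; the package names are an assumption and may differ if Docker was installed another way.

sudo apt-get update && sudo apt-get install --only-upgrade docker-ce docker-ce-cli containerd.io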

  • Do not expose the Docker daemon socket (even to the containers)

The Docker socket /var/run/docker.sock is the UNIX socket that Docker listens on. It is the primary entry point for the Docker API. The owner of this socket is root, and giving someone access to it is equivalent to giving them unrestricted root access to your host.

Do not enable the TCP Docker daemon socket either. If you are running the Docker daemon with -H tcp://0.0.0.0:XXX or similar, you are exposing unencrypted and unauthenticated direct access to the Docker daemon. If the host is connected to the internet, this means the Docker daemon on your machine can be used by anyone on the public internet.
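
A quick way to check whether your daemon is listening on TCP at all (assuming the ss utility is available on the host) is:

sudo ss -lntp | grep dockerd

If this prints nothing, the daemon is only reachable through the local UNIX socket, which is what you want in most setups.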

  • Set User


Configuring the container to use an unprivileged user is the best way to prevent privilege escalation attacks. This can be accomplished in a few different ways, as follows:


At runtime, using the -u option of docker run:


docker run -u 4000 alpine         


At build time: simply add a user in the Dockerfile and switch to it. For example:

FROM alpine

RUN addgroup -S myuser && adduser -S -G myuser myuser

USER myuser

  • Add the --no-new-privileges flag

Always run your Docker images with --security-opt=no-new-privileges in order to prevent privilege escalation through setuid or setgid binaries.
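
For example, starting an interactive Alpine container with this option would look something like the command below (the exact value accepted by --security-opt can vary slightly between Docker versions):

docker run -it --security-opt=no-new-privileges alpine /bin/sh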

In Kubernetes, this can be configured in the security context using the allowPrivilegeEscalation field.

  • Disable inter-container communication (--icc=false)

 

By default, inter-container communication (icc) is enabled, which means that all containers can talk to each other over the docker0 bridge network. This can be disabled by running the Docker daemon with the --icc=false flag. If icc is disabled (icc=false), containers that need to communicate must be linked explicitly using the --link=container_name_or_id:ALIAS option.
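
Instead of passing the flag on the command line every time, the setting can also be made persistent in the daemon configuration file. This is a sketch that assumes the default location /etc/docker/daemon.json:

{
  "icc": false
}

After editing the file, restart the daemon (for example with sudo systemctl restart docker) for the change to take effect.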

Conclusions

Docker containers are, by default, quite secure, especially if you run your processes as non-privileged users inside the container. You just need to handle the configuration of containers carefully.

Note: If you think this helped you and you want to learn more about DevOps, then I would recommend joining the Kodecloud DevOps course and going for the complete certification path by clicking this link
