Docker.sock bind mount not preserving host ownership

With Docker Desktop v2.47, bind mounting /var/run/docker.sock preserved the host ownership of root:docker inside the container.

But since v2.48 and v2.49, the ownership of docker.sock inside the container changes from root:docker to root:root.

For example,

$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker:cli ls -l /var/run/docker.sock
srw-rw---- 1 root root 0 Apr 10 19:39 /var/run/docker.sock

Interestingly, if I change the ownership to root:docker inside the container, the change persists even when creating new containers. Restarting Docker Desktop, however, resets the behaviour again.
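
To illustrate what I mean by changing the ownership (the 999 is just a placeholder for the host's docker group ID):

$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli chown root:999 /var/run/docker.sock
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli ls -ln /var/run/docker.sock

The second container, started afterwards, still shows the changed group.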

What has changed since v2.48 to cause this?

Let's see if I understand the problem. So the problem is that when you mount the Docker socket, you expect it to be owned by the docker group, but it is owned by the root group, right?

I don't know what the old group was, but root seems to be the new group, and it is not changed when you mount it. The socket is inside the virtual machine and it doesn't have to be owned by any particular group. In fact, using the docker group is actually insecure, or at least makes it harder to find out who executed what, because members of that group can run docker commands and get root access without using sudo. sudo commands are logged, but docker commands are not.

Regarding what changed, I don't know, but it seems to me they removed the containerd client from the virtual machine, which used to help me get more information about the system and debug it. It is still possible, but it will be harder.

When you change the ownership inside the container, it means you change it inside the virtual machine. So other containers will mount the same, already changed socket.

On my host machine, /var/run/docker.sock is owned by root:docker, which I believe is correct. To be able to run docker commands on the host machine (without sudo), I add my non-root user to the docker group. Everything works fine on the host.
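
For reference, the usual way I do that (logging out and back in afterwards so the new group membership takes effect):

$ sudo usermod -aG docker "$USER"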

My use case is that I need to run docker commands inside certain containers, the so-called "Docker outside of Docker" scenario. To achieve this, I bind mount /var/run/docker.sock into the container. This works because my containers run as a non-root user that is also part of the docker group (same group ID as on the host machine).
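
Roughly the kind of setup I mean (the image, the 1000 user ID, and the getent lookup are placeholders, not my exact commands; the point is that the container user's supplementary group matches the host's docker GID):

$ DOCKER_GID=$(getent group docker | cut -d: -f3)
$ docker run --rm -u 1000 --group-add "$DOCKER_GID" -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps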

But since v2.48, the ownership of docker.sock is no longer root:docker but root:root. I now either need to run the containers as root or chown docker.sock back to root:docker on entry.

Maybe I'm going about this wrong for my use case.

So first of all, I guess you opened your topic in the wrong category, as you write about Docker on the host as if it were a Linux host. On Windows, there is no /var/run/docker.sock, only in a WSL2 distribution if the WSL2 integration is enabled. Either way, when you mount the docker socket, it is mounted from inside the virtual machine, not from the physical host when you use Docker Desktop. So which platform are you using and what kind of Docker?

I'm running Docker Desktop on Windows 11 with WSL2 enabled. When I say host, I'm referring to WSL.

Regarding the forum category choice, it was an update to Docker Desktop for Windows that caused the issue. Apologies if I got the category wrong, but I'm not sure where the issue lies.

When you say virtual machine, are you referring to WSL (via Hyper-V) or perhaps another Docker engine virtualisation layer?

To clarify, when I run "ls -la /var/run/docker.sock" on the WSL host, the ownership is root:docker. When I run it in the container with v2.47, I get the same root:docker. But after upgrading to v2.48+, I get root:root instead.

I thought that bind mounting files from the host to the container always preserved the ownership. I'm guessing this might be a new security feature included in Docker Desktop, or maybe in the underlying Docker engine?

I referred to it in general. In your case it is WSL2, but the actual socket is in the container of the Docker daemon. There are multiple layers of containers and virtualization in Docker Desktop, especially with WSL2, in which a distribution is actually a container too; then there are containerd containers, and in one of them runs dockerd. What you see in your WSL2 distribution is probably not the same file. What happens when you mount it I would have to test, but I can't right now. If you see that the owners are different, it can't be the same file. When you run docker commands from the Windows host, in PowerShell for example, the socket is definitely mounted from the virtual machine. Maybe it worked differently with WSL2 before and they changed it.

You can test it by running ls -i /var/run/docker.sock in the WSL distribution and in the container. If you get a different inode number, the file is not the same.
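
For example (reusing the docker:cli image from the earlier example; any image with ls would do):

$ ls -i /var/run/docker.sock
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli ls -i /var/run/docker.sock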

Yes, but bind mounting is not a simple bind mount when you use Docker Desktop. It often can't be. Docker Desktop can bind mount files from a WSL2 distribution into a Docker container, but how that is implemented in the case of special files like the Docker socket can change.

Thanks for the detailed explanation.

I've just run the inode test on Docker Desktop v2.47 and v2.48.

On v2.47, the inodes are the same. On v2.48 they are indeed different.

Knowing this, I'm not sure of the best way forward. Is there a way to change the ownership of the docker.sock that actually gets used in the container? I could just prefix all docker commands from now on with sudo, but that feels wrong.

Another option, I suppose, is to run chown root:docker in an entrypoint script. But this feels wrong too, as it's making a change outside of the container and could affect other containers.
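
If I went that way, the entrypoint would be something small like this (just a sketch; it has to run as root, and the docker group has to exist inside the container):

#!/bin/sh
# fix the socket group, then hand over to the real command
chown root:docker /var/run/docker.sock
exec "$@"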

You could try socat, which can redirect a TCP socket to a unix socket or the other way around. I don't think I have ever tried it, but I guess it could redirect one unix socket to another unix socket, one with the "right" owner and one owned by root, but I'm not sure it would solve the problem.
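
Something along these lines, purely as a sketch (the proxy path is made up; socat itself needs access to the original socket):

$ socat UNIX-LISTEN:/tmp/docker-proxy.sock,fork,mode=660,group=docker UNIX-CONNECT:/var/run/docker.sock

You would then mount /tmp/docker-proxy.sock into the container (or point DOCKER_HOST at it) instead of the original socket.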

Here is another example for socat

Or this one which I used in a CI/CD pipeline

This way the CI container could use the Docker socket without actually mounting the file. If I remember correctly, I needed it because the user in the CI container was not root. If the two containers share the network namespace or both use the host network, the TCP socket can listen on localhost, so it is not accessible to remote users.
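
Roughly like this, just to illustrate the idea (the port and addresses are examples):

$ socat TCP-LISTEN:2375,fork,reuseaddr,bind=127.0.0.1 UNIX-CONNECT:/var/run/docker.sock

and then, in the CI container that shares the network namespace:

$ DOCKER_HOST=tcp://127.0.0.1:2375 docker ps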

Hi @rimelek !

I am facing the same problem as the OP, and your solution actually worked just fine! It is the only one that persists when restarting the Docker engine.

But it seems to me that both solutions - binding the host's docker.sock directly into the container or creating a bridge container - are basically the same regarding security.

Am I right to assume that privilege escalation would be easy in both cases, and that my host machine would be compromised if any malicious software ran in the containers?
I am running Jenkins as a non-root user, but the bridge container runs as root. I don't know what this implies. Also, the OP's approach, making the jenkins user part of the docker group, would imply that anyone who gains access to the container can reach the host.

My use case is my graduation final paper, for which I'm using Jenkins and Docker to create a robust CI/CD pipeline for a project. But I would really like to elaborate on the security aspects of my decisions. I want to know what is a potential risk and what is safe, so that I know what to do in a production environment for a big project. Could you shed some light on this topic for me? Thanks in advance!

I believe my comments confused you, because I wrote about different things. My suggestion to use socat was not meant as a better or more secure way to make the Docker socket available in containers.

The above statement is true in general. Accessing the Docker socket on the host from a container is a more special case. In a container you usually don't have sudo, but even if you do, using that sudo would only add logs to the auth log file inside the container, so it wouldn't help.

You don't want to change the owner of the socket on the host, because there can be only one owning group, and there is no guarantee that the group ID of the "docker" group doesn't belong, inside the container, to another group which should not have access to the socket. So when you need the socket in a container, you can have more problems than when you need it outside containers.

To answer the question: yes, you are right. If you have access to the Docker socket, it doesn't matter whether it is a unix socket or a TCP socket. Depending on how security features like SELinux or AppArmor are set up, you could do harm on the host, but for full root access you need privileges that you don't have by default. Of course, someone with access to the socket can run a privileged container.

Yes, if you have no policy enforcement and you allow everything on the Docker API. Of course, if you can still mount the unix socket and have no problem with using the docker group, at least not all users in the container would have access to the socket. I already mentioned what the problem is with the docker group in containers.

The level of security risk also depends on your software architecture. If there is one process that needs access to the Docker socket, you can run it in a container that runs nothing else but that process, which manages containers, together with an API server that receives requests and rejects what you don't want to allow. Don't forward ports to it from the outside; allow access only on a specific Docker network, or share the network namespace as I did in my examples if you use TCP sockets.
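
A minimal sketch of the network-isolation part of that idea (the image and names are just examples; plain socat forwards everything, so it does not filter requests by itself, it only keeps the API off any outside network):

$ docker network create --internal docker-api
$ docker run -d --name docker-api-proxy --network docker-api \
    -v /var/run/docker.sock:/var/run/docker.sock \
    alpine/socat TCP-LISTEN:2375,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock
$ docker run --rm --network docker-api -e DOCKER_HOST=tcp://docker-api-proxy:2375 docker:cli docker ps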

You could implement it your own way or use Open Policy Agent.