Docker Community Forums

Share and learn in the Docker community.

Looking for a Workaround to restrict users in a host from accessing containers


Needed your kind suggestion to know whether there is a workaround to restrict user A from accessing containers created by user B (both users share the same host machine). Docker is running as root. I understand this is by design; I was curious whether it could be solved by using rootless Docker. I have never used rootless Docker before and am unsure whether it can help isolate users on a host.
Any other workaround would also be highly appreciated.


I’m assuming your dev machine is the container host

My machine is not a container host; it is a Windows 10 dev machine with Docker for Windows installed. It has only the 10.0.75.x interface related to Docker, and no 172.x.x.x interface, so it cannot communicate with 172.x.x.x addresses directly. The host machine is a Linux VM running on Hyper-V, called MobyLinuxVM.

As I’ve mentioned, this will solve the issue:

route /P add MASK
If I were using Linux (I have never used it with Docker), my dev machine would presumably also be the Docker host, so I could access the Docker internal network 172.x.x.x directly without manually adding routes to the routing table.

What I want is a comment on this issue from the Docker team, and to know whether they are going to integrate the Windows 10 dev machine with Docker's internal networks more deeply.

Thanks for your reply. I am using a Linux host where my Docker daemon is running. This Linux host is shared by multiple users. From my research so far, it looks like once a user can run docker commands on a host, there is no way to restrict that user from accessing any container on the host. This is not a Docker bug; it is by design.
I would still welcome suggestions to work around this, or to know whether rootless Docker would help.
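For what it's worth, rootless Docker gives each Linux user their own daemon and their own socket, so user A's client simply cannot see user B's containers. A minimal sketch of the per-user setup, assuming `dockerd-rootless-setuptool.sh` is available (it ships with the docker-ce-rootless-extras package) and that `newuidmap`/`newgidmap` are installed:

```shell
# run once per user, as that user (not as root)
dockerd-rootless-setuptool.sh install

# point the client at this user's own daemon socket
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock

# containers started here belong to this user's daemon only;
# another user's "docker ps" (against their own socket) will not list them
docker run -d --name mine alpine sleep infinity
```

The trade-off is that each user runs a completely separate daemon: there is no shared daemon, no shared images, and no shared networks between users.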


Ignore lewish95: it's a bot that may or may not respond with something related to your original post. So far its AI seems to have no, or terrible, semantic recognition.

As the Docker engine always runs as root, there is only a global context. Whoever has been granted access to docker.sock will be able to do whatever they want in this context. Though, with an authorization plugin you could restrict what a user is able to do.
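To illustrate why socket access is all-or-nothing: anyone who can talk to docker.sock can trivially become root on the host, so per-user restrictions inside a single daemon are not enforceable. A hypothetical one-liner showing the problem (don't run this casually):

```shell
# any user who can reach /var/run/docker.sock can mount the host's
# root filesystem into a container and read (or write) it as root:
docker run --rm -v /:/host alpine cat /host/etc/shadow
```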

On the other hand, you can take a look at Podman, which is a drop-in replacement for Docker. Most commands are identical (even the bloody --format options) and, as far as I have seen, it provides a per-user context. For me the most challenging part with it is.
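A sketch of what that per-user context looks like in practice (the container name `web` is just an example): each user's rootless Podman keeps its own storage and state under that user's home directory, so one user's containers are invisible to another user's `podman ps`.

```shell
# logged in as user A:
podman run -d --name web docker.io/library/nginx
podman ps            # lists "web"

# logged in as user B (a separate account):
podman ps            # empty: B has their own storage under
                     # ~/.local/share/containers and their own state
```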

If you want a clean and polished solution: take a look at Kubernetes. It does everything you are looking for, effortlessly. Though, the learning curve is steep. While learning Docker is basically comparable to learning how to ride a children's balance bike, learning Kubernetes is more like learning to fly multiple types of planes. In this comparison, learning Docker Swarm (no, Swarm does not solve your problem) would be something like learning how to ride a bike.
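In Kubernetes, the isolation asked for above maps naturally onto namespaces plus RBAC: give each user their own namespace and bind a role scoped to it. A minimal sketch, with the user and namespace names (`alice`, `team-a`) purely illustrative:

```shell
# one namespace per user or team
kubectl create namespace team-a

# let alice manage workloads only inside team-a, using the
# built-in "edit" ClusterRole scoped down via a RoleBinding
kubectl create rolebinding alice-edit \
  --clusterrole=edit \
  --user=alice \
  --namespace=team-a
```

With that in place, alice can create and inspect pods in `team-a` but gets "forbidden" errors for every other namespace.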


Thank you for spending time on this and for your suggestions. I will give Podman and Kubernetes a try, but for my current project much has already been built and deployed with Docker. I understand there is no way to restrict users (and even if there were, it could always be bypassed as long as docker.sock access is present).

Take a look at Sysbox. It's a new type of runc that works under Docker and allows you to deploy rootless containers inside of which you can run Docker (both the CLI and the daemon, and even systemd and k8s too) in total isolation from the underlying host. E.g.:

docker run --runtime=sysbox-runc -it nestybox/ubuntu-bionic-systemd-docker

deploys such a container. These are not privileged containers, they are protected via the Linux user-namespace (i.e., rootless).

You can deploy multiple of these on the same host and assign them to your users. Users can access them via ssh, or perhaps by exposing the inner dockerd port on host ports … whatever you think is best. This way each user gets a dedicated Docker environment, totally isolated from the host Docker and from the others.
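A sketch of what that per-user layout could look like (the container names, host ports, and ssh access path are assumptions; whether an SSH server runs inside depends on how the image is built):

```shell
# one isolated Docker-in-Docker environment per user
docker run -d --runtime=sysbox-runc --name dev-alice -p 2201:22 \
  nestybox/ubuntu-bionic-systemd-docker
docker run -d --runtime=sysbox-runc --name dev-bob -p 2202:22 \
  nestybox/ubuntu-bionic-systemd-docker

# each user then reaches only their own environment, e.g.:
ssh -p 2201 someuser@host   # alice's inner dockerd, invisible to bob
```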

Hope that helps!