I’m running a CI pipeline and using a VS Code devcontainer to execute commands. The pipeline runs inside an agent container, and the repository is checked out there.
The problem I’m facing is related to volumes: I can’t get the repository to be available inside the devcontainer. I’d prefer not to rely on host-mounted volumes.
Are there any alternatives to using host volumes in this setup?
Is it possible to mount or share the filesystem from the agent container into the devcontainer, or otherwise make the repository accessible inside the devcontainer?
This is your second topic with a title containing “DinD”, but I realized I’m not sure you are actually using DinD (Docker in Docker). Can you clarify where your Docker daemon is running? If it is running in the devcontainer, and the containers started in the devcontainer are not visible on the host, that is Docker in Docker. If you only have the docker client in the devcontainer, and it connects to the Docker daemon running on the host so that all containers run on the same host where the devcontainer is running, that is more like a remote Docker daemon. The solution depends on which kind of environment you have.
In the case of Docker in Docker, you would need to share a folder with the agent (if that is running in the devcontainer). There is no mounting from one container to another, but if your user in the devcontainer has access to the Docker daemon’s data root, you can actually see all the files in the containers.
If the agent is not a container running in the devcontainer but next to it, you will definitely need a common folder, for example a named volume. That doesn’t have to be directly available from the host (if you use Docker Desktop, for example); you just need to mount it to both containers that need to share files.
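A minimal sketch of that idea, with made-up image and volume names just for illustration:

```sh
# create a named volume once; the daemon manages its storage
docker volume create shared-data

# mount the same volume into both containers at the same path
docker run -d --name agent        -v shared-data:/mnt/shared-data my-agent-image
docker run -d --name devcontainer -v shared-data:/mnt/shared-data my-devcontainer-image

# anything one container writes under /mnt/shared-data is visible to the other
```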
I have an agent container that starts a devcontainer, and I can’t mount from one container to another.
So, I think it is Docker in Docker.
But I can see the devcontainer from the host, so I need to share the same folder.
You mention:
How can I do this? By doing a mount of /etc/docker/daemon.json?
I don’t see how you came to the conclusion that you are using DinD based on something that never works, unless you were just not specific enough. What do you mean by mounting from container to container? Mounting a devcontainer to another container that was started with a docker command executed in the devcontainer? It doesn’t matter where the client (the docker command) is. It either connects to a local daemon or a remote one. In this context I mean “local” as running inside the devcontainer and “remote” as running on the same host on which the devcontainer is running. If you have DinD, that means you should be able to mount a folder to the agent container running inside the devcontainer, but you couldn’t mount a folder from the devcontainer to another container running NEXT TO the devcontainer and not inside it.
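To illustrate the local-vs-remote distinction (the address below is only an example, not your actual setup), the same docker client can talk to either daemon depending on DOCKER_HOST:

```sh
# default: the client talks to the local daemon through the Unix socket
docker ps

# same client binary, pointed at a daemon running somewhere else
DOCKER_HOST=tcp://192.168.1.10:2375 docker ps
```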
There is nothing to mount here. I mentioned the filename only as a way to confirm that you are using DinD. That file can be (but is not always) on the host where the Docker daemon is running. You can also check whether you see anything under /var/lib/docker in the devcontainer. If you do, you probably have DinD, but the best check is whether you can see a dockerd process running in the devcontainer. If the `pidof` command is available in the devcontainer, you can try running
pidof dockerd
If that returns a number, you have dockerd in the devcontainer, so you have DinD. Then you could show us the output of docker ps to show that the agent is running in that devcontainer. After that we will have a better understanding of what your environment looks like.
Let me clarify my setup and what I meant.
Topology
- There is an agent container that starts the devcontainer.
- The agent container is running with --network=host.
- Docker commands are executed from inside the agent container.
What I meant by “mounting container to container”
- I’m not trying to mount between arbitrary containers.
- What I want is to mount the workspace from the agent container into the devcontainer.
DinD confirmation
Inside the devcontainer, running `pidof dockerd` returns `24`.
Now I think I understand it. Thank you for the clarification. So you indeed have Docker in Docker, but not because you cannot mount from the agent container to the devcontainer; that wouldn’t work even without DinD. But now I know that the agent container is not running in the devcontainer; it just works as a Docker client connecting to a daemon. Am I correct to assume that the agent container starts the devcontainer by connecting to the Docker daemon on the host, and then the same agent container connects to the daemon in the devcontainer to run containers there?
If the goal is to use the agent container to run another container in the devcontainer and share data between the two, you would indeed need to share the data between the devcontainer and the agent container by mounting the same volume to both. Then the devcontainer could share it with the containers running inside it. A Docker volume created in the devcontainer itself would not help, as the agent container would not have access to it.
So the setup would look like this (see the sketch below the list):

- A shared-data volume or bind mount created on the host.
- That shared-data volume or bind mount mounted into both the agent container and the devcontainer, for example at “/mnt/shared-data”.
- The docker client in the agent container mounts “/mnt/shared-data” to anywhere in the container that it creates in the devcontainer. The shared data would not be mounted from the agent container, but you could use the same source path, since both the agent and the devcontainer would have the data in the same folder.
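A minimal sketch of that setup. All names are hypothetical, and I’m assuming the devcontainer runs its own daemon (DinD usually needs --privileged) and that your agent can already reach that inner daemon somehow:

```sh
# on the host: create the shared volume and mount it into both containers
docker volume create shared-data
docker run -d --name agent        -v shared-data:/mnt/shared-data my-agent-image
docker run -d --name devcontainer --privileged -v shared-data:/mnt/shared-data my-devcontainer-image

# from the agent, talking to the daemon INSIDE the devcontainer:
# the bind-mount source path is resolved on the devcontainer's filesystem,
# where the same data is already visible at /mnt/shared-data
DOCKER_HOST=tcp://<devcontainer-address>:2375 \
  docker run --rm -v /mnt/shared-data:/workspace my-job-image ls /workspace
```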
Since the devcontainer is a remote “machine” in the sense you are using it, there is no way to directly mount from the client to the remote.
There is one more thing you could do, but it depends on your pipeline and the size of the data. If you want to share data, you could sync it with Compose Watch: Use Compose Watch | Docker Docs
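A sketch of what that could look like; the service name and paths are placeholders, and as far as I know watch mode applies to services defined with a local build context:

```yaml
services:
  workspace:
    build: .                 # watch mode works on services built from local source
    develop:
      watch:
        - action: sync       # copy changed files into the running container
          path: ./repo       # local folder to watch (placeholder)
          target: /workspace # destination inside the container (placeholder)
```

You would then start it with `docker compose watch` (or `docker compose up --watch`).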
I never tried Compose Watch like that though. The other way would be using “docker cp” to copy the data to a container in the devcontainer: docker container cp | Docker Docs. That container could be a helper container created only to be able to copy data to a volume through it. And the third option could be building an image that already contains the data, since “docker build” sends files from the local machine to a remote daemon.
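A sketch of the helper-container approach, with hypothetical volume and path names:

```sh
# while the client is pointed at the daemon inside the devcontainer:
# create a stopped helper container with the target volume attached
docker create --name copy-helper -v job-data:/data alpine

# copy the repository contents from the client side into the volume
docker cp ./repo/. copy-helper:/data

# the helper is no longer needed; the data stays in the job-data volume
docker rm copy-helper
```

Any container you later start with the job-data volume mounted will see the copied files.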