I want to mount the home directory of a parent container into the home directory of a child container using docker in docker (e.g. -v $HOME:$HOME).
From what I have seen, it seems that, because the parent container is using the host's docker daemon, it is mounting the host's home directory instead of the parent container's home directory.
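To make that concrete, here is a minimal sketch (image names are hypothetical) of why this happens: the daemon that resolves the -v path is the host's daemon, so the path is looked up on the host's filesystem, not inside the parent container.

```shell
# Start a "parent" container that talks to the HOST's Docker daemon
# through the mounted socket (image name is an example):
docker run -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-parent-image

# Inside that parent, this request is sent to the HOST daemon, so
# /root here is resolved as /root on the HOST, not in the parent:
docker run -it -v /root:/root my-child-image
```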
I tried a “true” docker in docker setup where I ran the docker daemon in the parent container - and mounting the parent container's home directory into the child's home directory worked fine. However, I would prefer to avoid this, because it seems it will mean we need to download the same image every time we launch the child.
I did try using the --volumes-from my-parent-container option, which seemed to change the home directory - but the files I wanted are still not visible…
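One likely cause, sketched below with example names: --volumes-from only shares mounts that the source container actually declares (named volumes or bind mounts); it does not expose the rest of the parent's filesystem. If the parent's home directory is plain container filesystem rather than a volume, the child will not see those files.

```shell
# Give the parent a named volume at its home directory
# (container/volume/image names are examples):
docker volume create parent-home
docker run -d --name my-parent-container \
  -v parent-home:/root \
  my-parent-image

# The child inherits the parent's mounts, so it sees the same
# volume at the same path:
docker run --rm --volumes-from my-parent-container \
  my-child-image ls /root
```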
Does anyone know of anything obvious I might be missing - or alternative options?
There is no “true” and “false” Docker in Docker. You are not using Docker in Docker; you are just using a docker client inside a docker container to manage other containers on the host. Is there any benefit to doing it? Is it some kind of automated build pipeline?
What you are actually trying to do is mount a folder from another container. The closest solution to your issue is what @deanayalon suggested, but I still don’t understand why the first container is needed if you still run containers normally on the host. Maybe it is a dev container or something. If that’s the case, then yes, you need a volume for the dev container’s home.
The terminology is confusing. “docker in docker” (dind) is a docker image which has a docker client and the ability to start its own docker daemon. In that case, calling it a “true” docker in docker seems to make sense, as you can run your own docker daemon - and your own containers - within that container.
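For reference, a minimal sketch of that “true” dind setup using the official docker:dind image (network and container names are examples). The inner daemon keeps its own image cache, which is why a fresh inner daemon re-downloads images:

```shell
# Create a network so the client can reach the inner daemon by name:
docker network create dind-net

# Start the inner daemon; --privileged is required, and clearing
# DOCKER_TLS_CERTDIR makes it listen on plain TCP port 2375:
docker run -d --privileged --network dind-net --network-alias docker \
  -e DOCKER_TLS_CERTDIR="" docker:dind

# Point a client at the inner daemon; containers started here are
# children of the inner daemon, not siblings on the host:
docker run --rm --network dind-net \
  -e DOCKER_HOST=tcp://docker:2375 \
  docker:cli docker info
```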
The bit I didn’t mention was that I wanted to use this in GitLab, which recommends its own “docker-in-docker” setup. But what they are actually doing is what is described above: creating sibling containers using the host's (GitLab Runner's) daemon rather than the container's own daemon.
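For context, the sibling-container variant corresponds to socket binding on the runner side. A sketch of the relevant config.toml excerpt (values are the common defaults, not taken from the thread): because jobs talk to the host daemon through the mounted socket, containers they start are siblings of the job container, not children.

```toml
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:24.0"
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
```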
The solution I found was indeed as described by @deanayalon. Although my setup in GitLab had a few other caveats… They are described in an answer I managed to find elsewhere:
I’m not sure the title should have been changed. The question is very much related to docker-in-docker. That’s what I would have searched for.
The term “Docker in Docker” is misleading … You are running a Docker daemon inside a Docker container. Since calling it “Docker daemon in Docker container” would be long and wouldn’t sound as good, we call it “Docker in Docker” in short.
So the client can be anywhere. That has nothing to do with dind.
If you saw it in the GitLab documentation, it would not be the only mistake they have made.
Maybe. But since you started the topic with the statement that you would not like to use the “true” dind, I think it was best not to confuse people by indicating we were talking about dind when we were not.
But that is all not important now. What is important is that you found a solution. Also, thank you for sharing the related links.
If it’s about retaining and reusing job artifacts from one job in later jobs: GitLab has support for pipeline artifacts and caches. You most likely want a non-tainted build environment for each job container, so relying on volumes might defeat the purpose of non-tainted build environments - except, of course, if the volume is used to store caches, or is used by the GitLab server to store artifacts.