Mounting home directories from another container

Hi.

I want to mount the home directory of a parent container into the home directory of a child container using Docker in Docker (e.g. -v $HOME:$HOME).

From what I have seen, it seems that, because the parent container is using the host’s Docker daemon, it is mounting the host’s home directory instead of the parent container’s home directory.

I tried a “true” Docker in Docker setup where I ran the Docker daemon in the parent container, and mounting the parent container’s home directory into the child’s home directory worked fine. However, I would prefer to avoid this, because it seems it will mean we need to download the same image every time we launch the child.

I did try using the --volumes-from my-parent-container option, which seemed to change the home directory, but the files I wanted are still not visible…

Does anyone know of anything obvious I might be missing - or alternative options?

How about creating a Docker volume, and mounting it to both containers?

At first, the volume is empty; when first mounted into a container, Docker pre-populates it with the image’s default contents for that path.

Then simply mount it into the second container as well.

There is no “true” and “false” Docker in Docker. You are not using Docker in Docker; you are just using a Docker client inside a Docker container to manage other containers on the host. Is there any benefit to doing it? Is it some kind of automated build pipeline?

What you are actually trying to do is mount a folder from another container. The closest solution to your issue is what @deanayalon suggested, but I still don’t understand why the first container is needed if you still run containers normally on the host. Maybe it is a dev container or something. If that’s the case, then yes, you need a volume for the dev container’s home.

By “true”, I assume they mean docker:dind, which runs its own daemon, rather than using the host’s Docker daemon (Docker-out-of-Docker).

I know, but it is important to know that this is not Docker in Docker, although the title indicated that. I have since changed the title.

Thanks all.

The terminology is confusing. “Docker in Docker” (dind) is a Docker image which has a Docker client and the ability to start its own Docker daemon. In that case, calling it a “true” Docker in Docker seems to make sense, as you can run your own Docker daemon, and your own containers, within that container.

The bit I didn’t mention was that I wanted to use this in GitLab, which recommends its own “docker-in-docker” setup. But what they are actually doing is what is described above: creating sibling containers using the host’s (GitLab Runner’s) daemon rather than the container’s own daemon.

The solution I found was indeed as described by @deanayalon, although my setup in GitLab had a few other caveats… They are described in an answer I managed to find elsewhere:

I’m not sure the title should have been changed. The question is very much related to docker-in-docker. That’s what I would have searched for.

I totally agree and I stated that too in my blog post:

https://dev.to/rimelek/you-run-containers-not-dockers-discussing-docker-variants-components-and-versioning-4lpn#docker-in-docker

The term “Docker in Docker” is misleading … You are running a Docker daemon inside a Docker container. Since calling it “Docker daemon in Docker container” would be long and wouldn’t sound as good, we call it “Docker in Docker” in short.

So the client can be anywhere. That has nothing to do with dind.

If you saw it in the GitLab documentation, this would not be the only mistake they made :slight_smile:

Maybe. But since you started the topic with the statement that you would not like to use the “true” dind, I think it was best not to confuse people by indicating we were talking about dind when we were not.

But that is all not important now. What is important is that you found a solution. Also, thank you for sharing the related links.

If it’s about retaining and reusing job artifacts from one job in later jobs: GitLab has support for pipeline artifacts and caches. You most likely want a non-tainted build environment for each job container, so relying on volumes might defeat the purpose of non-tainted build environments, except of course if the volume is used to store caches, or is used by the GitLab server to store artifacts.

Please ignore my post, if I got it wrong.

Yes, I also found out about this thanks to your question.