As you can see from the project's Dockerfile, I'm pretty far along with my ros2 development container. I'm at the point where I need to go into the workspace to do the final installation steps. Do you know what to do?
Indeed, but without knowing what you tried and what error message you got, it would have been hard to help.
The VOLUME instruction is just metadata in the Dockerfile until the container starts. You still need to create the folders at build time. That can happen by copying a file into the image with the COPY instruction or by running mkdir in a RUN instruction.
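A minimal sketch of the mkdir approach; the base image and workspace path are assumptions for illustration, not taken from the original project:

```dockerfile
# Create the workspace folders at build time instead of relying on VOLUME.
# /ros2_ws/src is a hypothetical path; use your actual workspace layout.
FROM ros:humble
RUN mkdir -p /ros2_ws/src
WORKDIR /ros2_ws
```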
I also don't recommend using the VOLUME instruction. Many image maintainers have already stopped using it, as volumes defined in a Dockerfile can't be undone, and users are forced to override the mount point if they don't want a pile of unused anonymous volumes.
So, to address a few points in your comment: I use the corresponding Dockerfile as part of my GitLab project. I need these VOLUME commands so that the built and started container is connected to the correct folders in my host environment. I edit my created packages with Neovim, which runs on my host operating system, and I need the container so that I can build the packages correctly.
Parts of the ros2 installation process take place directly in the created folders. I think that’s stupid too. Nevertheless, I have to intercept this somehow. Do you happen to know what I have to do?
Defining the volume in the Dockerfile is not necessary, and if you want to share your image, it is not even recommended. You can still define the volume paths when you start the container and get the same result, except that you will have more control over where the files are. Since you mentioned editing the files: I have a blog post that was inspired by people who wanted to edit files on volumes and asked where the volumes were on the filesystem. You should not edit anything under the Docker data root, where the default anonymous volumes and even named volumes live, but there are solutions: Everything about Docker volumes - DEV Community
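For example, instead of a VOLUME instruction, a bind mount can be passed at start time; the image name and paths below are made-up placeholders, not taken from the thread:

```shell
# Bind-mount a host folder into the container at run time.
# "my-ros2-image" and both paths are hypothetical; adjust to your project.
docker run -it --rm \
  -v "$HOME/projects/ros2_debug_ws:/ros2_ws" \
  my-ros2-image
```

This way the host-side location is explicit and visible in the command, rather than ending up as an anonymous volume under the Docker data root.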
That was just a sidenote, not the answer to your main question.
It would help if you could share the error messages; otherwise we are just guessing.
So my guess is that you got an error that the source path didn't exist, or that the variable didn't exist. The source path of the COPY instruction is always relative to your build context. Even if you use a variable in the path, it must be defined with ARG or ENV. It will not use the $HOME that exists on your host machine. And even if the variable is defined, an absolute path will not work. Just imagine if I could create a Dockerfile, ask you to build the image, and docker build copied your home folder with all your secrets into the image and sent it somewhere. Everything has to be in the build context.
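A short sketch of how a variable can legitimately appear in a COPY path; the ARG name and paths are assumptions for illustration:

```dockerfile
# Variables used in COPY must be declared with ARG or ENV in the Dockerfile;
# host shell variables like $HOME are not visible at build time.
ARG WS_DIR=/ros2_ws
# The source path is always relative to the build context,
# never an absolute path on the host.
COPY ./src ${WS_DIR}/src
```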
I see you changed the RUN instruction to use mkdir. If you don’t have anything to copy, that is the easiest and the right way.
So what you're telling me is that I don't even need the VOLUME command inside the Dockerfile, right?
So at the moment I'm running the installation of the ros2 environment in several steps, because running the installation steps entirely in the Dockerfile didn't work. First, the corresponding repositories have to be loaded into one of the created folders by my build script. Then I start the container and manually run colcon build; source install/setup.bash, which sets up the ros2 environment properly. If I install everything in the Dockerfile, wouldn't I have the problem that part of the content of the created folders exists on my host system and the other part in the container? Is that even possible?
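The two-step workflow described above might look roughly like this; the repository URL, container name, and paths are hypothetical placeholders, not the actual project values:

```shell
# Step 1: clone the repositories into the bind-mounted workspace on the host.
git clone https://gitlab.example.com/my/ros2_package.git \
  ~/ros2_debug_ws/src/ros2_package

# Step 2: build inside the already-running container (named "ros2_dev" here).
docker exec -it ros2_dev \
  bash -c "cd /ros2_ws && colcon build && source install/setup.bash"
```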
Sorry, I'm not sure what you mean. If you describe the whole installation process in the Dockerfile, everything will be on the filesystem layers of the final image. Then, when you start the container from the image, depending on where you mount volumes, the contents of some folders are copied to the host, and the folder on the host is mounted into the container exactly where the original files were copied from.
It is also important to note that everything is on your host system anyway, as a container is mainly isolation. A process can't see the whole host, but the process can be seen from outside the container, including its files. You just mount some files into the isolated environment so that the process can see those too.
Yes. You can define the volumes in a compose yaml file or as arguments of the docker run command.
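A minimal compose sketch of that idea; the service name, image name, and paths are assumptions for illustration:

```yaml
# docker-compose.yml sketch: the bind mount is defined here at run time,
# not with a VOLUME instruction baked into the Dockerfile.
services:
  ros2_dev:
    image: my-ros2-image
    volumes:
      - ./ros2_debug_ws:/ros2_ws
```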
That means ros2_debug_ws can’t be found on the host and $HOME is just ignored as an empty variable.
This is the solution that has been working for a few days now. However, it has several shortcomings.
Cloning the git repositories into a folder with a real and an imaginary - container-side - part just didn’t work. Maybe that’s for maths and not dockerfiles.
Either I configure colcon - the ROS 2 build tool - or I use this self-implemented colcon command. Configuring colcon and embedding the configuration in the Dockerfile is a whole new chapter in itself, so I won't do it now.
The COPY instruction seems tidier than the VOLUME instruction. In any case, it seems that I don't need the VOLUME commands anymore.
It works though. So I’m done for now and have written some documentation about it.