Hi guys! I’m running a Docker image containing an etlegacy server. Everything runs fine until I try to create a bind mount so that I can access the server config from the host machine to edit config files and add custom maps.
When I do this, the container’s config directory is shadowed by the host directory. I know this is the expected behavior, but I’m writing to check whether there is a workaround. I only want to expose the container’s config directory to the host, not the other way around. Here is my docker-compose.yml:
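(Simplified here; the image name and the container-side path are placeholders, the `./data` bind mount is the relevant part.)

```yaml
services:
  etlegacy:
    image: etlegacy-server            # placeholder image name
    ports:
      - "27960:27960/udp"             # default ET server port
    volumes:
      # the host folder ./data is mounted over the config directory in the container
      - ./data:/legacy/server/etmain  # container path is a placeholder
```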
A container is an encapsulated unit. If you feel the need to share resources from the host with the container, you can do so. The other way around is not possible, regardless of what you try.
Though, there might be a solution close enough to what you want…
If you use Docker volumes (-v volumename:/container/path) instead of binds (-v /host/path:/container/path), the default behavior is to copy pre-existing files from the container’s target folder into the volume on first use. This way you at least have a copy of the files in your volume, and the now “complete” volume is mounted on top of the container folder (like a bind would be).
The fun part is: you can create volumes backed by a bind, and they still have the copy-content-on-first-use mechanism.
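In a compose file that looks roughly like this (the image name, the container path, and the host path in device are just examples):

```yaml
services:
  etlegacy:
    image: etlegacy-server                     # placeholder image name
    volumes:
      - etlegacy-config:/legacy/server/etmain  # example container path

volumes:
  etlegacy-config:
    driver: local
    driver_opts:
      type: none                    # bind an existing folder instead of creating a filesystem
      o: bind
      device: /srv/etlegacy/config  # example host path
```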
I am not sure if the copy action is only done for paths declared as VOLUME in the Dockerfile and whether the host path used in device needs to pre-exist (I assume it does).
Solved it by creating an entrypoint.sh script and moving the files into the bind mount at runtime instead of at build time. Now the files are showing up in ./data on my host.
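The script does roughly this (the paths are placeholders for wherever the image keeps its files):

```sh
#!/bin/sh
# Populate the bind mount at runtime instead of at build time:
# only copy the defaults if the mounted config directory is still empty.
CONFIG_DIR=/legacy/server/etmain      # mount target, placeholder path
DEFAULTS_DIR=/legacy/defaults/etmain  # where the image keeps its defaults, placeholder path

if [ -z "$(ls -A "$CONFIG_DIR" 2>/dev/null)" ]; then
    cp -r "$DEFAULTS_DIR"/. "$CONFIG_DIR"/
fi

# hand over to the actual server command
exec "$@"
```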
Actually, it’s good practice to mimic the volume’s copy-on-first-use mechanism.
I missed that you have full control over the Dockerfile. Of course, adding this behavior to the entrypoint script makes life easier for the end user, especially for beginners who do not yet understand the difference between bind mounts and volumes.