Docker compose volumes not working

I just got a new laptop and wanted to migrate my Docker setup from my old laptop.
But somehow the way I previously worked is not functioning with this fresh install.

What works today on my old laptop is the following workflow:

I create docker-compose files, and when I need volumes I use the following syntax (the example here uses the blackbox image):

  blackbox:
    image: prom/blackbox-exporter:v0.22.0
    container_name: blackbox
    restart: unless-stopped
    ports:
      - 9115:9115
    volumes:
      - /var/lib/docker/volumes/otel/blackbox.yml:/etc/blackbox/blackbox.yml
    command: --config.file=/etc/blackbox/blackbox.yml

For this to work I need to create an otel folder and place the blackbox.yml file in it, all from the following Windows directory:

\\wsl$\docker-desktop-data\data\docker\volumes

So the compose path /var/lib/docker/volumes/otel is mapped to \\wsl$\docker-desktop-data\data\docker\volumes\otel.

But on the new laptop, with the same Ubuntu and Docker Desktop setup, the mapping is not working. As soon as I run docker-compose, Ubuntu actually creates a folder inside /var/lib/ instead of using the virtual docker-desktop-data mapping.

What am I missing here?
Or do I still need to do some extra configuration?

Why that strange solution? Never write anything manually or via scripts in /var/lib/docker! Let only Docker write there!

If you need to mount a file into the container, just place it anywhere on your machine. In the case of a single YAML file, especially one you don't need to write from inside the container, it will not make the container slower.

    volumes:
      - ./blackbox.yml:/etc/blackbox/blackbox.yml

Update:

If you still want to save the file on the Linux filesystem, you can create a custom entrypoint or command and generate the blackbox.yml from environment variables when the container starts.
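
For example, a minimal sketch (the PROBE_TIMEOUT variable and the generated http_2xx module are made up just to illustrate the idea; I write the file to /tmp because the image runs as a non-root user, and the doubled $$ keeps Compose from interpolating the variable itself):

  blackbox:
    image: prom/blackbox-exporter:v0.22.0
    ports:
      - 9115:9115
    environment:
      PROBE_TIMEOUT: 5s   # hypothetical variable, just to show the idea
    entrypoint:
      - /bin/sh
      - -c
      - |
        # generate the config, then hand over to the exporter
        cat > /tmp/blackbox.yml <<EOF
        modules:
          http_2xx:
            prober: http
            timeout: $${PROBE_TIMEOUT}
        EOF
        exec /bin/blackbox_exporter --config.file=/tmp/blackbox.yml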

Thanks for the input… the question is more: why do my other 2 Windows machines map this folder automatically?
I do know I can map any folder if I want, but the docker-desktop ones already have the correct permissions, so it is super easy.

And super bad. I can't answer the original question, because I would never try something like that. Even if you mount a folder from the WSL distribution, do not change /var/lib/docker, and do not bind mount a file from a folder that is meant for volumes.

By the way, Docker volumes have another subfolder called _data, and if you change the filesystem manually, Docker's volume database will not know about it; you could later create an actual volume with the same name and then delete it, also deleting the file that you created manually.
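
For example, this is how Docker itself reports where a volume really lives:

    $ docker volume create otel
    otel
    $ docker volume inspect otel --format '{{ .Mountpoint }}'
    /var/lib/docker/volumes/otel/_data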

If you have a problem with permissions, then you need to solve that problem, but not this way. You can do what I recommended in my previous post, or create a volume with docker volume create (or better, define it in the compose file) and mount it into the container. Your /etc/blackbox folder will be copied to the volume automatically (which is useful if the folder contains other files as well), and you can use docker cp to copy the blackbox.yml to the volume mounted in the container. Note that you can copy files into stopped containers as well, in case the container stops because of the lack of a correct configuration file.
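
A minimal sketch of that alternative (the volume name blackbox-config is just an example):

  blackbox:
    image: prom/blackbox-exporter:v0.22.0
    container_name: blackbox
    volumes:
      - blackbox-config:/etc/blackbox   # named volume, managed entirely by Docker
    command: --config.file=/etc/blackbox/blackbox.yml

volumes:
  blackbox-config:

and then, from the folder where your file is:

    docker cp ./blackbox.yml blackbox:/etc/blackbox/blackbox.yml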

I mention it only as an alternative, but I would prefer my previous recommendation, the not-super-easy but better way: creating a custom entrypoint or command and generating the config file. That way your config file would be correct immediately, and you would not need to restart the container.

That is my whole point: I do want Docker to handle it. On my 2 other Windows machines Docker does it too, but when it is hosted with WSL2, that auto mapping should be in place so that you can access the files from your Windows environment.

It feels like my new install is not complete.
How is your Windows setup?

As long as you write anything into the Docker data root from the host, you don't let Docker handle it. The WSL distribution is not Docker. Docker Desktop just uses WSL2 to have a virtual Linux OS.

I use Docker Desktop on my Windows machine only when I try to help someone. Otherwise I install a WSL2 Ubuntu distribution, install Docker CE on that Ubuntu, save my files on Ubuntu's filesystem (for example in my home directory, like /home/rimelek/projects/projectname), and connect to WSL from Visual Studio Code as a remote host. Or sometimes I mount the folder on Windows and work locally, although I prefer the full remote-host solution and use the mounted WSL folder on Windows only for browsing some files outside of my project.
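
For reference, a rough sketch of that install inside the WSL2 Ubuntu (this uses Docker's convenience script; depending on your WSL configuration you may also need to enable systemd or start the docker service manually):

    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh
    sudo usermod -aG docker $USER   # then log out and back in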

So I don't know what the difference is between your two machines, but maybe this is the first sign that you should not use that method.

The goal of Docker Desktop is to give you the same experience on each platform. What you do now would not work on macOS or Linux. I strongly recommend using platform-independent solutions. It is not always possible, but it is in this case. I get that it seemed like a good idea, and maybe you will be able to find out what went wrong compared to your other machine, but I still don't recommend it.

I know the feeling when something looks so easy and you don't want to use something that looks 5 times harder, but I also know the feeling when you realize how much trouble an easy but not recommended solution has caused you, and then how easy that 5-times-harder solution feels when finally everything works and everything is predictable and portable. :slight_smile:

I do agree about the ‘not the same effect’ on each platform.
But I was still hoping that someone could point out why I get different behaviour on the other machines.
I do think I did the same setup/install.

Maybe I will try out your solution. Any tips on making the links folders writable from my Windows host?
So that I can copy the files over through Windows (rather than the Linux cp command line)?

Sorry for disappearing. I had to check my “unread” list to find the topic again.

I don't know what that “links” folder is. Everything depends on the exact use case. Windows has a different way of managing permissions, so when you mount a folder from the host, that folder is world-writable from the container. I didn't remember what happens when you create a file from the container in the mounted folder, so I tried it now, and I could edit it from the host. This is again not something that would work everywhere, but at least you don't have to write into what is basically a system folder.

On Linux, you could make the files writable by a group, so even if the owner is, for example, UID 33 (usually www-data) but the group is GID 1000 (on Linux that is the main group of the first user), the group could write the files, and so your user could write the files.
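
For example, a sketch using the IDs above (run as root, inside the container or wherever the files live; the path is just the one from the earlier example):

    chown -R 33:1000 /etc/blackbox    # owner www-data, group = your user's main group
    chmod -R g+rwX /etc/blackbox      # the group may read/write files and enter directories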

I already gave you an idea of how you could generate a YAML file when the container starts, but if you need a whole folder, then you

  • need proper permissions, groups, and ownerships
  • save everything on a volume which you can mount in another container running an FTP or a Samba server. If you choose a Samba server, you could mount the volume as a shared folder on Windows if you configure it properly (a rough sketch follows this list).
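
A rough sketch of the Samba idea (the dperson/samba image and its -s option are assumptions on my part, so check that image's documentation before relying on this):

  samba:
    image: dperson/samba
    volumes:
      - data:/share        # the same named volume your application writes to
    ports:
      - 445:445            # SMB, so Windows can map it as a network drive
    # "-s name;path" defines a share; further fields control read-only/guest access
    command: ["-s", "share;/share"]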

These are not always easy solutions, but working with containers is not always easy :slight_smile: It just helps a lot when you know enough to get through these issues.

I never edit anything generated in containers. If I need to for debugging, I set the permissions manually or do it as root from the container.

Another idea is using a container as a development environment. Visual Studio Code can use a container as a remote host: it runs a small server component in the container so you can connect to it from VSCode and work as if it were your local machine. You can even use the terminal and every GUI feature, like browsing and editing files.

Let’s say you run “blackbox” as the container that runs your application. “blackbox” has a “data” (or links??) volume so everything is on a Linux filesystem. You want to edit files on that volume, so you run a “devenv” container like:

  devenv:
    image: bash
    volumes:
      - data:/app/data
    # run as the same UID that generates the files (33 in the earlier example)
    user: "33"
    init: true
    command:
      - sleep
      - inf

volumes:
  data:   # the named volume shared with the blackbox service

Run Visual Studio Code, choose “Remote Explorer”, then “Containers”, and open “/app” in the devenv container. If you run the devenv container as the same user that generates files in the blackbox container, you can edit those files.
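
If you only need a quick shell on that volume without VS Code, the usual Compose commands work too:

    docker compose up -d devenv
    docker compose exec devenv bash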

Now, I didn't try it, so my compose file example above could be wrong, but something similar should work. I created a quick screenshot.

I ran only one container as an example without compose, but compose would just make it easier.

Or you could use the “official” Dev Environments feature provided by Docker in Docker Desktop, but that is for another purpose, although the idea is the same: working in a non-Linux environment and still using Linux as a lightweight development environment.

Note that you still need to set file and folder permissions if you want to edit files created by different users.