Create local volume with custom mount options

I would like to create a volume in my docker-compose.yml file with custom mount options (uid set to the host user).

volumes:
  my_volume:
    driver: local
    driver_opts:
      #type: ""
      #device: ""
      o: "uid=${UID:-1000}"

However, I have no clue what to use for type and device. The only documentation I could find on the topic uses either tmpfs or nfs, but I just want a local volume and it should not get deleted together with the container.

Where can I find documentation on these options or what should I use in this case? Thank you.

If you just need to mount a directory with its own content, you can use a simple bind mount after you set the owner manually:

services:
  yourservice:
    volumes:
      - ./path/to/your/host/dir:/path/inside/the/container
sudo chown -R 1000 ./path/to/your/host/dir
docker-compose up -d

If you want the content inside the container to be copied back to your host, you can do what you started to do, except I don’t know a way to set the UID as you tried. But I think you don’t have to, since you can change the owner of the folder inside the Docker image, so when you mount the volume, it will change the owner on the host.

volumes:
  my_volume:
    driver: local
    driver_opts:
      type: none
      device: "/path/to/your/host/dir"
      o: bind

I actually learned this way here on the forum not long ago.

Relevant part of the Dockerfile:

FROM yourbaseimage

RUN chown -R 1000 /path/inside/the/container

If you want it to work with any ID, you can set the ID as a build argument:

FROM yourbaseimage
ARG content_owner=1000
RUN chown -R $content_owner /path/inside/the/container

And the build args in docker-compose.yml:

services:
  yourservice:
    build:
      context: .
      args:
        content_owner: 1000

You could even use an environment variable instead of 1000, for example:
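A sketch (untested; note that in bash, UID is a shell variable that is usually not exported, so you may need to export it or put it in an .env file for compose to see it):

services:
  yourservice:
    build:
      context: .
      args:
        # falls back to 1000 if UID is not set in the environment
        content_owner: "${UID:-1000}"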

Bind mounts are mentioned in the docs, but I haven’t come across this “named bind mount” variant there either. That could be my fault or the fault of the documentation, I don’t know.


To add some further notes:

A named volume backed by a bind mount cannot mount the filesystem with a different user id, like it is possible with a remote share. I am afraid the container process’s uid/gid will be used to access the local filesystem.

If you desperately want it, then a feasible workaround could be to expose the folder(s) using nfs, and then do what you want with an nfs-backed volume.
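For example (an untested sketch; it assumes an NFS server at 192.168.1.10 exporting /exported/dir, and the uid mapping itself would happen on the server side, e.g. with the all_squash/anonuid export options):

volumes:
  my_volume:
    driver: local
    driver_opts:
      type: nfs
      # the leading colon means the path is on the server given in "addr"
      device: ":/exported/dir"
      o: "addr=192.168.1.10,rw,nfsvers=4"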


Thank you both! That makes sense.

Creating an empty folder and setting the owner correctly before mounting the volume into the container worked.

But I think you don’t have to, since you can change the owner of the folder inside the Docker image, so when you mount the volume, it will change the owner on the host.

This did not work for me. The volume gets mounted when the container starts, not when the image gets built. So in order to change the permissions, I would have to create a script specifically to set the permissions of the volume, switch users, and then run my app.

Changing the permissions of the empty folder the volume gets mounted into doesn’t have any effect.

To conclude:

  • binding a host directory with either the volume syntax or the driver options creates the directory owned by root on the host (if it doesn’t exist)
  • to avoid having to change the owner to use the volume with a non-root user inside the container, one has to manually create the host directory and set the correct permissions first (see the commands below)
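In shell terms, the workaround looks roughly like this (paths and IDs are placeholders):

mkdir -p ./path/to/your/host/dir
sudo chown -R 1000:1000 ./path/to/your/host/dir   # match the container user's UID/GID
docker-compose up -d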

That’s true, but the permissions on the host folder will be set when you start the container. It doesn’t work with the usual bind mount, yes, but it works with the named bind mount when it is initialized. At least with Docker 20.10.10, which is almost the latest version, but I don’t think that matters. Maybe you misunderstood me, so here is an example to try:

Run on the host

mkdir -p $HOME/volumes/test

Dockerfile

FROM ubuntu:20.04
RUN mkdir /app && chown -R 33:33 /app

Note: I intentionally used 33 for UID and GID to make the change more noticeable.

docker-compose.yml

version: "3.7"

volumes:
  test:
    driver: local
    driver_opts:
      type: none
      device: "$HOME/volumes/test"
      o: bind

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - test:/app

Start the project:

docker-compose up -d --build

Check the permissions:

ls -la $HOME/volumes/test
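If the initialization works as described, the owner shown for the folder should be the UID/GID set in the image (33 maps to www-data on Ubuntu), so the output should look something like:

drwxr-xr-x 2 www-data www-data 4096 Nov 10 10:00 .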

So I have a docker-compose project and a container that builds a Linux image with a Windows mount. In the container I’m running into an issue where terraform tries to do a chmod on some provider files it installs, which fails because, according to the documentation, chmod is not supported for such things with a Windows mount.

Docker for Windows currently implements host-mounted volumes based on the Microsoft SMB protocol, which does not support fine-grained, chmod control over these permissions.

This may be a solution: Allow chmod on CIFS mount - Super User

Is it possible to set ‘noperm’ from a docker compose file?

Interesting. I can’t find the source of the quote anywhere. Where did you find it? Wasn’t it an old source? Maybe the Hyper-V backend uses the SMB protocol, I don’t know, but I don’t remember when I last used the Hyper-V backend for Linux containers. WSL2 is the recommended and default backend. As far as I know, Docker Desktop for Windows (with the WSL backend) uses drvfs, and you can change permissions, but only from the container’s point of view. So you can make a script file executable for the container when you are in the container. You can also change the owner, but it won’t change the file ownership on the host.

CIFS is not the only problem by the way. Permissions on Windows and Linux are different so normally you can’t set the flags on the host and make Linux understand it or the other way around. I’m not entirely sure how drvfs does its job, because I almost never use Docker Desktop on Windows.

We discussed in multiple topics why it is better to store the data on a Linux filesystem, so I’m not going to go into details here, but you can create default named volumes in the WSL2 distribution of Docker Desktop or even in another WSL distribution. If you turn on WSL integration in Docker Desktop, open the WSL distribution, save the data on the filesystem of WSL (not on a mounted folder from Windows), and use a bind mount from the WSL distro.
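For example (a sketch; the exact path depends on your WSL distro and user):

services:
  app:
    volumes:
      # a path on the WSL distro's own ext4 filesystem,
      # not a mounted Windows folder like /mnt/c/...
      - /home/youruser/appdata:/app/data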

Since the topic is about custom parameters: if your Docker Desktop uses drvfs, the problem doesn’t even exist. If you are not using it, I don’t know the answer, but if setting noperm is possible, I would guess it would be an option like in my previous post,

except that you would have something like this (not tested at all):

volumes:
  test:
    driver: local
    driver_opts:
      type: cifs
      device: "//$IP_ADDRESS/sharename"
      o: noperm,$otheroptions

But I also think that even if Docker Desktop used cifs, it wouldn’t be used by Docker CE inside its virtual machine, which just mounts the folder that is already available in the virtual machine, so you can’t change the cifs parameters.

I second that! WSL2 itself does not use cifs to access the Windows host’s filesystem. Binding a Windows host path to Docker won’t use a volume, so it’s more likely to happen on the distribution level. If it used a volume, we would be able to see it with docker volume ls.

The `metadata` mount option in the distribution’s settings in /etc/wsl.conf enables metadata support for Windows filesystems. I assume Docker Desktop has this enabled in their distribution:

[automount]
...
# DrvFs-specific options can be specified.
options = "metadata,uid=1003,gid=1003,umask=077,fmask=11,case=off"

Further details on Drvfs:

Note: the article is older; there is no need to unmount and remount the Windows drive, as the wsl.conf setting already takes care of it.


So in the case of trying to mount a Windows filesystem (for users of Docker Desktop), are you saying we should add the following to the wsl.conf on Windows? Currently mine is empty…

[automount]
...
# DrvFs-specific options can be specified.
options = "metadata,uid=1003,gid=1003,umask=077,fmask=11,case=off"

I am currently using Docker Desktop with WSL2, and when doing a terraform init,
terraform is able to install the modules and providers, but at some point it tries to make the provider’s lib/binary executable by doing a chmod (even though the file and all mounted files already have permissions set to 755); this fails with ‘operation not permitted’.

I found these articles discussing the issue and the default 755 (or 777?) perms that are broadly applied to Windows mounts, given the lack of fine-grained permissions support.

Docker for Windows shows permission errors on shared volumes | TechTarget

Changing file permissions from a mounted file from inside the docker container (Windows 10) - Stack Overflow

So you shouldn’t need to enable it, and you definitely shouldn’t try to manually change anything in Docker Desktop’s WSL distributions… I checked the distributions: docker-desktop-data has no wsl.conf, while docker-desktop has this:

[automount]
root = /mnt/host
crossDistro = true
options = "metadata"

Well, there is a pretty fine-grained permission system on Windows, but it is for Windows and not for Linux operating systems, so Linux will not understand it.

About the links: both of the links you shared are about older versions. The TechTarget page is 5 years old, the other one is 2 years old. The documentation they refer to in the answer has changed since then and no longer contains the quoted part.

It can happen when the user in the container is not root. This is the only case in which I could reproduce the issue with the same error message. Normally it would be “Permission denied”, but I get “Operation not permitted” when the folder is mounted from Windows and the user in the container is not root.

Do you have the latest Docker Desktop version?

I do have the latest Desktop version, and I am running in the container as root. If I touch a file, for instance, the owner is shown as root.
Where are you seeing the two wsl.conf files for docker-desktop-data and docker-desktop? I see only a .wslconfig in my home on Windows.

Could you drop this file in a folder that is a Windows mount/volume in a Linux (Debian 12 in my case) container and try a terraform init in that folder?

providers.tf:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=3.53.0"
    }
    azapi = {
      source  = "Azure/azapi"
      version = "~>0.4.0"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "azapi" {}

If you execute wsl -d docker-desktop cat /etc/wsl.conf in a Windows terminal, you should get this output:

[automount]
root = /mnt/host
crossDistro = true
options = "metadata"

Since docker-desktop-data is only to store persistent data, the command won’t work for it.

By default TF_DATA_DIR is set to .terraform, which is where terraform downloads the providers and, if used, remote modules. You could try whether pointing TF_DATA_DIR to another folder in the container filesystem solves the problem; if so, you could mount a named volume (not a Windows host path) there to persist its content as a workaround.

I did, and it worked without any problem. I created a “terraform” folder in my user home so all files were owned by me.

It worked, but I had to install terraform with its dependencies. Have you tried the official terraform image?

hashicorp/terraform

I just opened the distributions’ root folders from the file browser by clicking on the “Linux” icon under “Network” and then clicking on “docker-desktop” and “docker-desktop-data”.

Interesting. So you have a Windows folder mounted in a Linux container, from where you ran terraform init.
OK, then I really need to look at these permissions - maybe I’m missing something there.

And yes, I installed the official and latest version of terraform in the container as well as on my desktop, from HashiCorp.

This is a good idea - I think it should work. I could probably redirect it to a container-local path using TF_DATA_DIR.
It would mean I would have to terraform init each time I bring the container up though…as I don’t have a Linux filesystem on this laptop to mount.

That’s why I mentioned mounting a named volume to that container path. It will be managed by Docker and will be on a Linux filesystem. If the named volume is mounted to the container path TF_DATA_DIR points to, there is no need to init every time the container is run, unless of course you modify the providers or add a new resource.

This part I don’t get. Doesn’t this require having a Linux volume to mount…where you can point the TF_DATA_DIR? As mentioned, that’s not my use case…I don’t have a native Linux volume to mount in this case.
If I did, I’d just work off that and not have this issue at all, I think.

Docker manages named volumes for you and stores them on its Linux filesystem in the docker-desktop-data distribution. You don’t need to do anything except define the named volume in your compose file and use it in the volume mapping.

I used tf_cache as an example:

services:
  myservice:
    ...
    volumes:
      - tf_cache:/.terraform
    environment:
      TF_DATA_DIR: /.terraform
volumes:
  tf_cache: {}
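A rough usage sketch (assuming terraform is installed in the image and the project files are in the container’s working directory):

docker-compose up -d
docker-compose exec myservice terraform init   # providers land in the tf_cache volume
docker-compose restart myservice               # the cache survives restarts and recreation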

Ah, got it. Makes sense now. Let me try this.