Docker Community Forums


Docker Volume File permission


I am going through the articles about Docker volumes and understand there are different types, like bind mounts, volumes, etc.

When it comes to the underlying file model, is a single copy shared, or are there multiple copies? I mean, say there are 100 containers sharing the same file system: will each container see one shared file, or will each container have its own copy of the file? And how are the updates handled - with some locks managed by Docker?

I am afraid you are confusing how volumes and containers work.

Each container created from an image shares the files baked into the image layers of that particular image, and gets a copy-on-write (COW) layer on top to write data. If a container modifies data, the change lands in its private COW layer. This, though, is not related to volumes.
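To make the COW idea concrete, here is a rough stand-in using plain file copies (real containers use an overlay filesystem, not copies, so treat this only as an analogy): the "image" file stays read-only and shared, while each "container" writes into its own private copy.

```shell
#!/bin/sh
# Analogy only: the "image" is shared read-only; each writing
# "container" gets its own private copy, so one container's change
# never leaks into the image or into its siblings.
image=$(mktemp)
echo "from image" > "$image"

cow1=$(mktemp); cp "$image" "$cow1"   # container 1's writable layer
cow2=$(mktemp); cp "$image" "$cow2"   # container 2's writable layer

echo "changed by container 1" >> "$cow1"

cat "$image"   # still just "from image"
cat "$cow2"    # also unchanged
```

With 100 replicas of the same image, the same change would have to happen 100 times, once per private layer.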

Volumes are used to map an external resource into the container's filesystem. The resource itself is managed outside the container.

Would you mind elaborating on this further? Do you mean these details are left to the OS?

It depends on what type of volume you use. Docker comes with a default volume plugin that lets you use CIFS/NFS remote shares or local folders. Other plugins allow you to use block devices, cloud-native storage, or whatever else…

Depending on the volume plugin, a volume may be mountable to only a single container (the block-device kind); others, like NFS/CIFS remote shares or local folders, can be bound to many containers.

Usually a volume plugin mounts a resource into /var/lib/docker/volumes/{volumename}/, and when the volume is used in a container, under the hood Docker uses mount --bind src dst to make the folder accessible inside the container.
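Since mount --bind needs root, here is a stand-in using a hard link that shows the one property that matters for this discussion: two paths backed by the very same file, so a write through either path is immediately visible through the other (a real bind mount does this at the mount-point level rather than per file).

```shell
#!/bin/sh
# Stand-in for mount --bind: "dst" is a second name for the same
# underlying file as "src" - there is only one copy of the data.
workdir=$(mktemp -d)
echo "hello" > "$workdir/src"
ln "$workdir/src" "$workdir/dst"   # "dst" now aliases "src"

echo "world" >> "$workdir/dst"     # write through one path...
cat "$workdir/src"                 # ...and both lines show through the other
```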

Ok, so it is NOT completely left to the OS and the storage driver, as described at https://docs.docker.com/storage/storagedriver/select-storage-driver/. What do you think in terms of control over locking and permissions - say there are 100 containers sharing the same file system, how are the updates handled? Does each container have its own local copy, or is it a soft link to one file shared by every container?

Your questions are somewhat ambiguous to me… why would you want 100 containers to share the same volume? It depends on the abilities of the filesystem/remote share underneath. But if you are referring to containers and images, then you have it wrong all along.

Ok, say there are 100 containers which are mapped to the same host volume, like

docker run -it -v myvol:/myvol1 alpine sh
docker run -it -v myvol:/myvol2 alpine sh

… docker run -it -v myvol:/myvol100 alpine sh

So, when I said sharing earlier, I meant that all of these 100 containers share the same host volume "myvol" under different mount paths. This mounting is controlled by storage drivers, as I believe it should be? And also, if all of these containers are doing updates/inserts at the same time, does Docker have anything to do with managing these updates?

Yes and no. If you let Docker implicitly create the volume myvol, then it will be of type local, created with the bind option. Thus Docker will simply mount --bind it and leave everything after that to the filesystem implementation in the OS. If you created the volume beforehand with a different volume driver, then that driver will be responsible.

Docker itself and volume plugins are not involved in how the volumes are used. You are aware that a Docker container is just an isolated process, running in its own namespaces, relying on devices in its own namespace and some funny iptables magic, aren't you? Docker is just the glue that makes your container experience very user-friendly.

It seems it eventually leads us to the boring answer "it depends" :slight_smile: . Thanks for taking the extra time to write your answers, I really appreciate it.

So, in the above, can it be said that there are 100 different copies, since all the containers are isolated, and when one of them is modified it is the responsibility of Docker (in turn, the OS) to record the update in the writable layer via the storage driver? (it depends, again :slight_smile: )

I also see that the "overlay2" storage driver is backed by "extfs", so the locking must be at the file level:

Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true

Overlay2 is used for images and containers, not for volumes.
A container uses all layers of the image as read-only layers and puts a private copy-on-write layer on top. Whenever something in a container is changed (deleted, added, altered), it happens in that container's COW layer. If you have 100 replicas of the same container, the changes happen separately in each one of them.

When it comes to volumes: it is not much different from sharing a folder. If you have 100 processes trying to change a file and there is no file lock, the outcome is random. Even with a file lock the outcome would still be random, as the order in which each of these containers modifies the same file cannot be determined. You need to take care of this in your application code :wink:
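A small demo of that point, with plain processes standing in for containers (flock(1) from util-linux assumed available): each individual append is serialised by the lock, so no line is lost or torn, but the ordering across writers is still whatever the scheduler produces - the "random outcome" above.

```shell
#!/bin/sh
# Three concurrent writers append to one shared file. flock serialises
# each append, yet the interleaving of writers differs from run to run -
# locking protects integrity, not ordering.
shared=$(mktemp)
lock=$(mktemp)
for w in 1 2 3; do
  (
    for i in 1 2 3; do
      flock "$lock" sh -c "echo writer$w-line$i >> '$shared'"
    done
  ) &
done
wait
wc -l < "$shared"   # always 9 lines - but their order varies per run
```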

Thanks a lot for clarifying this, I didn't find it anywhere in the documentation. I get it now: the volumes have nothing to do with storage drivers, as changes are stored in the top writable layer (which is controlled by the storage driver).

yep, we got it :slight_smile: