Created a Swarm with 3 hosts (1 manager and 2 workers). The following is executed on the manager.
nfs-common is installed on all 3 hosts.
Mounting the NFS share directly from the host's terminal works fine: sudo mount 192.168.10.4:/volume1/docker_volumes/portainer ~/nfs
Steps to reproduce
Create a docker volume for my NFS share: docker volume create --driver local --name portainer --opt type=nfs --opt device=:/volume1/docker_volumes/portainer --opt o=addr=192.168.10.4,rw,nolock
Inspect the volume configuration to find the mount point on the host: docker volume inspect portainer
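The output has roughly this shape (values here are illustrative, not a verbatim copy from my host; the Mountpoint assumes the usual data root of the snap-packaged docker):

```json
[
    {
        "CreatedAt": "2023-01-01T00:00:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/snap/docker/common/var-lib-docker/volumes/portainer/_data",
        "Name": "portainer",
        "Options": {
            "device": ":/volume1/docker_volumes/portainer",
            "o": "addr=192.168.10.4,rw,nolock",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
```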
Two observations:
– The “Mountpoint” in the docker volume inspect output indicates that you are using the snap version of docker.
– The path in the mount command in “Setup” does not match the path in the docker volume create command or the docker volume inspect output.
Thank you for pointing out the snap version. I actually didn't think of that. Would you recommend not using the snap version? Are there differences between the snap version and the apt version? I could try to use the apt version instead.
Sorry, that was my bad. I pasted from two different tests. I've corrected it in my initial post, so it corresponds to the test case.
Unfortunately, the inconsistency was not the problem.
I removed the snap installation of docker and followed the guide you linked to. It is now installed via apt, from the apt repository in the official docker guide.
I experience the same issue.
I did the following:
– Tried the exact same swarm setup on the apt installation as I did with the snap installation.
– Removed all worker nodes, so there's only the manager.
A docker volume is merely a handle that knows how to mount a (remote) volume. The remote share will be mounted on a node when a container uses the volume, and is unmounted again when the container is stopped. On the node the container runs on, the volume will be mounted at ${docker data root}/volumes/${volume name}/_data (by default, the data root is /var/lib/docker/ on most linux distros).
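A quick way to see this behavior (throwaway container here is hypothetical, volume name taken from this thread):

```bash
# resolve the host path backing the volume
# (the directory exists even while nothing is mounted there)
docker volume inspect --format '{{ .Mountpoint }}' portainer

# consume the volume, then check the mount table: the NFS share
# should only appear while the container is running
docker run -d --name voltest -v portainer:/data alpine sleep 300
findmnt -t nfs,nfs4
docker rm -f voltest
```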
I can tell from experience that the local driver with type nfs works regardless of whether the nfs-common package is installed (I am actually surprised that it does!).
Instead of creating volumes from the cli, I prefer to put my volume declarations in docker compose files, so they get created during swarm stack deployments:
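For example, something along these lines (the portainer image is only an assumption to make the sketch self-contained; the NFS options are the ones from this thread):

```yaml
version: "3.8"

services:
  portainer:
    # assumed image, just so a service consumes the volume
    image: portainer/portainer-ce:latest
    volumes:
      - portainer-data:/data

volumes:
  portainer-data:
    driver: local
    driver_opts:
      type: nfs
      # nolock deliberately omitted, see further below
      o: addr=192.168.10.4,rw
      device: :/volume1/docker_volumes/portainer
```

Deployed with docker stack deploy -c docker-compose.yml portainer. Note that stack deploy prefixes the volume name with the stack name, so it shows up as portainer_portainer-data in docker volume ls.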
Note: even though this uses the local driver, the volume will be created on each node where a consuming service task (which creates the container) is scheduled for the first time. Once created, volume declarations are immutable: regardless of whether you change the volume in the docker compose file, they won't be updated. In order to pick up changed values, you will need to delete the volumes (manually on each node!) and let a stack (re-)deployment create them again.
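A sketch of that dance, using the stack/volume names from the compose example above:

```bash
# on the manager: remove the stack so no container holds the volume anymore
docker stack rm portainer

# on EACH node that ever ran a task of the service
# (remember that stack deploy prefixed the volume name with the stack name):
docker volume rm portainer_portainer-data

# redeploy; the volumes are created again from the updated declaration
docker stack deploy -c docker-compose.yml portainer
```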
Can you share how you start the service/stack that uses the volume?
Two more things:
– Seems we both run our nfs server on a Synology.
– You should definitely lose the nolock option, as it is asking for trouble with any payload that relies on file locks (e.g. postgres, etcd3, …).
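Applied to the docker volume create command from the first post, that just means dropping nolock from the mount options:

```bash
docker volume create --driver local \
  --name portainer \
  --opt type=nfs \
  --opt device=:/volume1/docker_volumes/portainer \
  --opt o=addr=192.168.10.4,rw
```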
Thank you for all your inputs. I managed to get it up and running.
I was naive and thought the NFS share would be mounted as soon as I did a docker volume create. It makes sense that a container needs to use the volume before it takes up the resource.
I also worked out a docker compose yml file, and I have removed the nolock option too.