Docker (Swarm) Volume problem with NFS

OS: Ubuntu 20.04.1
Docker: 19.03.11, build dd360c7
Setup:

  1. Created a Swarm with 3 hosts (1 manager and 2 workers). The following commands are executed on the manager.
  2. nfs-common is installed on all 3 hosts.
  3. Mounting the NFS share directly from the hosts terminal works fine: sudo mount 192.168.10.4:/volume1/docker_volumes/portainer ~/nfs

Steps to reproduce
Create a Docker volume pointing at my NFS share:
docker volume create --driver local --name portainer --opt type=nfs --opt device=:/volume1/docker_volumes/portainer --opt o=addr=192.168.10.4,rw,nolock
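As a sanity check (not part of the original steps), each node can confirm it actually sees the export before the volume is created; showmount ships with the nfs-common package mentioned in the setup:

```shell
# List the exports advertised by the NFS server (run on each swarm node)
showmount -e 192.168.10.4

# The export /volume1/docker_volumes/portainer should appear in the output.
```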

Inspect the volume configuration to find the mount point on the host:
docker volume inspect portainer

{
    "CreatedAt": "2020-12-06T07:30:11+01:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/snap/docker/common/var-lib-docker/volumes/portainer/_data",
    "Name": "portainer",
    "Options": {
        "device": ":/volume1/docker_volumes/portainer",
        "o": "addr=192.168.10.4,nolock,rw",
        "type": "nfs"
    },
    "Scope": "local"
}

Create a test file in the mounted volume:
touch /var/snap/docker/common/var-lib-docker/volumes/portainer/_data/test.txt

Actual result:
The test file only persists locally in the host's Docker volume directory; it never appears on the NFS share.

Expected result:
The test file should be visible on the NFS share, both on the NFS server itself and from other hosts that mount the share.

Two observations:
– The “Mountpoint” in the docker volume inspect output indicates that you are using the snap version of Docker.
– The path in the mount command under “Setup” does not match the path in the docker volume create command or the docker volume inspect output.

  1. Thank you for pointing out the snap version. I hadn’t actually thought of that. Would you recommend not using the snap version? Are there differences between the snap and apt versions? I could try the apt version instead.

  2. Sorry, that was my mistake. I pasted from two different tests. I’ve corrected my initial post so it corresponds to the test case.

I would highly recommend using the packages from the official Docker repositories: Install Docker Engine on Ubuntu | Docker Docs.

So the inconsistency wasn’t the problem? I can only speak for Docker on Ubuntu 18.04: I’ve been using NFSv4 volumes for ages without any issue.

Unfortunately, the inconsistency was not the problem.

I removed the snap Docker installation and followed the guide you linked. Docker is now installed via apt from the official Docker repository.

I experience the same issue.

I did the following:

  • Tried the exact same swarm setup on the apt installation as with the snap installation.
  • Removed all worker nodes, leaving only the manager.
  • Tried both the nfs4 and nfs options.

None of it worked. :frowning:

I ran the following:

docker volume create --driver local --name portainer --opt type=nfs4 --opt device=:/volume1/docker_volumes/portainer --opt o=addr=192.168.10.4,rw,nolock

And got the following in the inspect:

$ docker volume inspect portainer 
[
{
    "CreatedAt": "2020-12-06T21:49:42+01:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/portainer/_data",
    "Name": "portainer",
    "Options": {
        "device": ":/volume1/docker_volumes/portainer",
        "o": "addr=192.168.10.4,rw,nolock",
        "type": "nfs4"
    },
    "Scope": "local"
}
]

I still see an empty directory here:

$ sudo ls -la /var/lib/docker/volumes/portainer/_data
total 8
drwxr-xr-x 2 root root 4096 Dec 6 21:49 .
drwxr-xr-x 3 root root 4096 Dec 6 21:49 ..

Could there be anything I haven’t installed? I followed the official installation guide.

Shouldn’t I be able to see the content of the NFS share in the directory /var/lib/docker/volumes/portainer/_data ?

A Docker volume is merely a handle that knows how to mount a (remote) share. The remote share is mounted on a node when a container uses the volume and is unmounted again when the container stops. On the node where the container runs, the volume is mounted at ${docker data root}/volumes/${volume name}/_data (the default data root is /var/lib/docker on most Linux distributions).
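This lazy-mount behavior can be observed directly. A minimal sketch, reusing the portainer volume from above (the container name nfs-test is hypothetical):

```shell
# Nothing is mounted yet -- the volume is only a handle:
mount | grep portainer || echo "not mounted"

# Start a container that uses the volume; only now is the NFS share mounted:
docker run -d --name nfs-test -v portainer:/data alpine sleep 300
mount | grep portainer   # should now show the 192.168.10.4 NFS mount

# Files written through the container land on the share:
docker exec nfs-test touch /data/test.txt

# Removing the container unmounts the share again:
docker rm -f nfs-test
```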

I can tell from experience that the local driver with type nfs works regardless of whether the package nfs-common is installed (I am actually surprised that it does!).

Instead of creating volumes from the CLI, I prefer to declare my volumes in Docker Compose files, so they get created during swarm stack deployments:

volumes:
  nfs:
    driver_opts:
      type: nfs
      o: addr=192.168.200.19,nfsvers=4
      device: :/volume1/docker_volumes/application

Note: even though this uses the local driver, the volume will be created on each node where a consuming service task (which creates the container) is scheduled for the first time. Once created, volume declarations are immutable: changes you make to the volume options in the Docker Compose file will not be applied to an existing volume. To pick up changed values, you need to delete the volume (manually, on each node!) and let a stack (re-)deployment create it again.
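The delete-and-redeploy cycle looks roughly like this (the stack name mystack is hypothetical; stack volumes are prefixed with the stack name):

```shell
# Changing driver_opts in the compose file does NOT update an existing volume.
# Remove the stale volume on every node where it was created:
docker volume rm mystack_nfs

# Redeploy the stack so the volume is recreated with the new options:
docker stack deploy -c docker-compose.yml mystack
```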

Can you share how you start the service/stack that uses the volume?

Two more things:
– It seems we both run our NFS server on a Synology.
– You should definitely lose the nolock option, as it is asking for trouble with any workload that relies on file locks (e.g. PostgreSQL, etcd3, …).
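For example, the volume from the first post can be recreated without nolock (same server and export path as above; nfsvers=4 is optional but pins the protocol version):

```shell
# Remove the old volume and recreate it without the nolock option
docker volume rm portainer

docker volume create --driver local --name portainer \
  --opt type=nfs \
  --opt device=:/volume1/docker_volumes/portainer \
  --opt o=addr=192.168.10.4,rw,nfsvers=4
```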

Hi @meyay

Thank you for all your input. I managed to get it up and running.

I was naive and thought the NFS share would be mounted as soon as I ran docker volume create. It makes sense that a container has to use the volume before it takes up the resource. :slight_smile:

I also managed to work out a Docker Compose YAML file, and I have removed the nolock option too.