Docker Swarm NFS share for data?

Maybe a dumb question, but I have a swarm with 3 workers and 1 master. On the master I changed /lib/systemd/system/docker.service and added the data root: ‘ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --data-root=/opt/docker’ so my data is on an NFS share. Do I need to do something on the workers too? (I just mounted the NFS share on all machines at /opt/docker.)

Hello,

You need the nfs-common package installed on the other nodes.
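On Debian/Ubuntu-based nodes that is, for example:

sudo apt-get update && sudo apt-get install -y nfs-common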

The /etc/exports file on the NFS server needs an entry covering each swarm node: list each node's IP or hostname, the subnet containing the swarm nodes (e.g. xx.xx.xx.xx/24), or a wildcard domain (e.g. *.domain.com).
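For example, a single export line covering a subnet could look like this (the path and subnet here are placeholders):

/exported/filesystem  192.168.1.0/24(rw,sync,no_subtree_check)

Run ‘exportfs -ra’ on the server after editing the file so the change takes effect.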

I'm not sure about doing this with docker service create flags (there is a rough sketch further down), but a docker-compose YAML file does support NFS. A basic setup includes the following in your config file:

volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=INSERT_HOSTNAME_OR_IP,rw"
      device: "INSERT_HOSTNAME_OR_IP:/exported/filesystem"

Where /exported/filesystem is the folder you export in /etc/exports, rather than the target folder inside the container.

Then reference ‘nfs-data’ in the configuration of the service that requires it.
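A minimal sketch of that reference (the service name, image, and container path are just placeholders):

services:
  app:
    image: nginx:latest
    volumes:
      - nfs-data:/usr/share/nginx/html

And for completeness, the docker service create route should look roughly like this, untested on my side, with the same placeholder address and export path (the quoted volume-opt protects the comma inside the mount options):

docker service create --name app \
  --mount 'type=volume,source=nfs-data,target=/usr/share/nginx/html,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/exported/filesystem,"volume-opt=o=addr=INSERT_HOSTNAME_OR_IP,rw"' \
  nginx:latest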

You can skip mounting the NFS share on each swarm node's filesystem; with this setup the nodes do not need the export mounted at a path like ‘/mnt/dataonnfs’ (or /opt/docker from your original post). If you do pre-mount the NFS share on each node, the volume configuration may need to be different.

The mount options you put on the ‘o:’ line in the docker-compose YAML, and the export options you set in /etc/exports, both have a significant impact on the security and performance of the service using the NFS share.
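For illustration only, here is a commonly seen combination; treat the exact option values as assumptions to tune for your workload, not as recommendations:

# Client side, the ‘o:’ line in the compose file
o: "addr=INSERT_HOSTNAME_OR_IP,rw,nfsvers=4,hard,timeo=600"

# Server side, /etc/exports
/exported/filesystem  192.168.1.0/24(rw,sync,root_squash,no_subtree_check)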