Best practice for persistent data using swarm

Hi -

I currently have a small swarm set up, running a variety of applications. I have an NFS server that provides a shared space for the containers that need it (mounted at /shared). The containers then bind mount sub-folders within the /shared NFS volume for their needs. Is this generally a good or a bad thing - having all of the containers use the same NFS mount and binding off of a subfolder? Or should I break this down into smaller NFS exports and use volumes with the local driver to mount each export as needed? Thanks in advance!


If your data does not need to be shared across many containers, use individual exports so that one export does not affect the others. Currently, if you unmount the shared NFS resource, all containers are affected. If you configure an NFS mount per container (assuming they are not sharing the same data), you gain more isolation, security, and granularity.
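For reference, a per-container NFS export can be mounted as a named volume with the built-in local driver, so each container only sees its own export. This is a minimal sketch; the server address (192.168.1.10) and export paths are placeholders for your own setup:

```shell
# Create one NFS-backed volume per application, each pointing at its
# own export on the NFS server (addresses/paths are examples).
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw,nfsvers=4 \
  --opt device=:/exports/app1-data \
  app1-data

# A container then mounts only its own volume:
docker run -d --mount source=app1-data,target=/data app1-image
```

Unmounting or losing one export now only affects the containers using that volume.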

If you have lots of such individual mounts, you may need to write a script to monitor and manage them.
If you stay with the current setup, configuring disk quotas on the NFS server would be a good idea.


It is generally not a good thing to share a single NFS mount across all containers. Security, for example, is a potential concern.

If your containers can move among nodes, you should use a single export and have the containers bind mount sub-folders. If you break it down into smaller NFS exports, then when a container moves to a new node, you need a way to mount its NFS export on that node.
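As a sketch of the single-export approach: assuming /shared is the NFS mount present on every node, a swarm service can bind mount its own subfolder, and the bind works on whichever node the task lands on (service and image names here are hypothetical):

```shell
# /shared must already be NFS-mounted on every node in the swarm.
# Each service binds only its own subfolder of the shared export.
docker service create \
  --name app1 \
  --mount type=bind,source=/shared/app1,target=/data \
  app1-image
```

The trade-off is the one described above: the subfolder must exist on every node, and unmounting /shared affects every service at once.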

That said, NFS is probably not the best practice for data persistence in swarm, but it would be OK for a small swarm setup as long as the performance is acceptable to the application.


So what are the best practices? :slight_smile:

A distributed file system such as GlusterFS, Ceph, etc., would scale better than NFS. You could also use volumes directly.

Thanks for replying.

Do you have a link about that? I spent yesterday trying to compare the different solutions and finally went with NFS because:
1 - I already know it, and it is easy to set up
2 - I didn't find any resources ranking the other solutions above it (except your comment)


For a small cluster, NFS will work well.

For GlusterFS, you could google "swarm + glusterfs" - for example, using GlusterFS with Docker swarm.

For using volume, the discussion in Data(base) persistence in docker swarm mode may help.

Regarding the NFS solution:

Is it better to share a folder on the different hosts and then create local volumes on top of the shared folder, OR to directly create the volumes using the NFS options of the local driver?

My guess is that in case of a network problem, the first solution would prevent the containers from crashing, but it might also cause data corruption, since the data exposed to the containers is no longer the same.

I didn't look into how the NFS driver handles volumes. I suspect it would allow multiple mounts to the same volume. In case of a network problem, if the driver allows the volume to be mounted on another node while it is still mounted on the previous node, two nodes will write to the same files.
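The two options being compared above can be sketched like this. Both use the built-in local driver; the paths and server address are placeholders, not a tested setup:

```shell
# Option 1: a local volume bound to a folder on the host's existing
# NFS mount -- the host OS, not Docker, manages the NFS client.
docker volume create --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/shared/app1 \
  app1-data

# Option 2: a volume that carries the NFS details itself, so Docker
# performs the NFS mount when a container using the volume starts.
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/exports/app1 \
  app1-data-nfs
```

With option 2, any node that runs a task using the volume will attempt its own NFS mount, which is exactly the double-mount scenario described above if the old node has not released it.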