Persistent data across a swarm

You have to choose between:
– Remote file share (CIFS/NFS)
– Storage cluster (Ceph/GlusterFS)
– Container-native storage (Portworx, StorageOS)
– Provider-specific storage (AWS, Azure, or another storage vendor)

Remote file shares are easy to use because they work with Docker's built-in local volume driver, as shown in the sketch below. Everything else requires an installed volume plugin, plus either kernel drivers or the setup of the storage cluster itself. If the bandwidth, latency, and IOPS of the share are sufficient, there is nothing wrong with NFS or CIFS (though NFSv4 is recommended over CIFS).
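
As a minimal sketch, an NFSv4 share can be declared directly in a stack file using the local driver; the service name, server address, and export path below are placeholders:

```yaml
version: "3.7"
services:
  web:
    image: nginx:alpine
    volumes:
      - nfs-data:/usr/share/nginx/html
volumes:
  nfs-data:
    driver: local          # built-in driver, no plugin required
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw,nfsvers=4"   # placeholder NFS server address
      device: ":/exports/web-data"       # placeholder export path
```

Because the volume definition lives in the stack file, every node mounts the same share, so a rescheduled container finds its data regardless of where it lands.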

You might want to take a look at Ceph combined with the Rex-Ray volume plugin (see the sketch below). Portworx also looks promising: it consists of kernel drivers and a command-line tool, and it takes care of replicating volumes across the local storage of the nodes, so you get local block-device speed and still have data replication. If a container dies and respawns on a different node, the replica on that node becomes the master and syncs its changes to the other replicas.
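
To illustrate the Rex-Ray route, here is a rough sketch assuming the rexray/rbd plugin has already been installed on every node and can reach the Ceph cluster; the image, volume name, and size are made up, and the available driver_opts vary by plugin version:

```yaml
version: "3.7"
services:
  app:
    image: myorg/app:latest   # placeholder image
    volumes:
      - app-data:/data
volumes:
  app-data:
    driver: rexray/rbd        # requires: docker plugin install rexray/rbd on every node
    driver_opts:
      size: 20                # size in GB; check your plugin version for supported opts
```

The plugin creates the RBD volume on demand and attaches it on whichever node the task is scheduled, which is exactly what a plain local volume cannot do.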

I personally use StorageOS for my development environment, even though Docker Swarm is not officially supported: it was a headache to set up, but it runs stably most of the time. I use it with a free developer license, which allows creating a storage cluster with a total of 500 GB of storage.

In production environments, databases are typically operated outside of Docker. In test environments, it is not uncommon to run the database in its own Docker stack.
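
For the test-environment case, a minimal sketch of such a database stack might look like this (image, credentials, and node label are placeholders; deploy with `docker stack deploy -c db-stack.yml db`):

```yaml
version: "3.7"
services:
  postgres:
    image: postgres:13-alpine
    environment:
      POSTGRES_PASSWORD: changeme      # test-only placeholder credential
    volumes:
      - db-data:/var/lib/postgresql/data
    deploy:
      placement:
        constraints:
          - node.labels.db == true     # pin to the node that holds the local volume
volumes:
  db-data:                             # plain local volume; does not move between nodes
```

The placement constraint is only needed because a plain local volume stays on one node; with one of the replicated storage options above, the service could be rescheduled freely.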