Docker draining a swarm node losing data

Hello, I’m fairly new to docker so excuse my ignorance.
I’m running a swarm setup with one manager and two workers; the manager is running Portainer. I’m trying to run Jenkins as a service so that it can drain between nodes (I think this is something that should work). I have it set up to use a volume for persistent data, but when it gets drained from one node to another, it boots back up at the ‘Unlock Jenkins’ screen, meaning it has lost its data. If I drain it back to the original node, it gets its data back, so I assume the volume is not moving with it. I did read, and confirmed on my server, that when you create a volume it gets created on all nodes — is there a point to that if the data doesn’t move with it?

I might be misunderstanding how this should work, but is there any way to get a Jenkins container running as a service to move to a new node and keep its data? I’ve tried googling this issue for hours, but I can’t seem to find anything related to draining a Jenkins container running as a service.

OS: RHEL 7
docker version:
Server:
Version: 17.11.0-ce-rc2
API version: 1.34 (minimum version 1.12)
Go version: go1.8.3
Git commit: d7062e5
Built: Wed Nov 1 22:12:05 2017
OS/Arch: linux/amd64

A standard volume is local to each node. If you want the volume to be shared when the container moves to another node, you need to use a volume plugin backed by some form of central storage, e.g. NFS. There is a list at https://docs.docker.com/engine/extend/legacy_plugins/#network-plugins
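To make the idea concrete, here is a rough sketch of what using such a plugin looks like. The driver name `some-storage-plugin` is a placeholder — substitute whichever plugin from that list you actually install — and `/var/jenkins_home` is the data directory the official Jenkins image uses:

```shell
# Create a named volume through a shared-storage volume plugin
# ("some-storage-plugin" is a placeholder for a real installed plugin).
docker volume create --driver some-storage-plugin jenkins_home

# Mount that volume into the Jenkins service; because the plugin backs
# the volume with central storage, every node resolves the same data
# when the task is rescheduled after a drain.
docker service create \
  --name jenkins \
  --mount type=volume,source=jenkins_home,target=/var/jenkins_home,volume-driver=some-storage-plugin \
  jenkins/jenkins:lts
```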

Persistent shared volumes seem to be something that Swarm struggles with. Most of the volume plugins are vendor-specific; I’m not aware of any that allow you to simply mount an NFS share or something. I think you might be able to mount NFS using the standard volume driver, but I’ve never tried.