Managing persistent data for a database in Docker swarm mode?

What’s the best practice for handling persistent data in Docker swarm mode services?

I can start a Docker swarm with Elasticsearch as a service, but I assume that as soon as I upgrade to a newer version of the container image, the Docker container is killed and a new one is created, deleting all persisted data.
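For concreteness, here is a minimal sketch of the kind of lifecycle I mean (the image tags are just examples):

```sh
# Start Elasticsearch as a swarm service from a pinned image tag.
docker service create --name elasticsearch \
  --replicas 3 \
  elasticsearch:5.6.4

# Later, roll the service to a newer tag; swarm replaces each task's
# container, and anything written inside the old containers is lost.
docker service update --image elasticsearch:5.6.5 elasticsearch
```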

With Elasticsearch specifically, I could upgrade only half of the containers, give the cluster time to re-sync, and then upgrade the rest. But that makes the total duration of the upgrade across all containers unpredictable.
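As far as I can tell, that staged rollout could be expressed with swarm’s built-in rolling-update settings; a sketch, assuming a 120s pause is enough for the cluster to re-sync (tune `--update-delay` to your actual re-sync time):

```sh
# Replace one task at a time, pausing between tasks so the remaining
# nodes can re-replicate shards before the next container restarts.
docker service update \
  --update-parallelism 1 \
  --update-delay 120s \
  --image elasticsearch:5.6.5 \
  elasticsearch
```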

So how should I manage Docker swarm services with persistent data like a database?

I know that, at least on a standalone Docker engine, the ES container creates a volume and persists its data there (I remember collaborating on the official image a while ago). If you upgrade that container to a new version and attach the same volume, it will keep using the original data, so no data loss there.
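On a standalone engine that flow would look roughly like this (the volume name `esdata` is hypothetical; `/usr/share/elasticsearch/data` is the data path the official image declares):

```sh
# Create a named volume and mount it at the image's data path.
docker volume create esdata
docker run -d --name es \
  -v esdata:/usr/share/elasticsearch/data \
  elasticsearch:5.6.4

# Upgrade: remove the old container and start the new image with the
# same volume; the data in esdata is left untouched.
docker rm -f es
docker run -d --name es \
  -v esdata:/usr/share/elasticsearch/data \
  elasticsearch:5.6.5
```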

I don’t know if that would be the case on a swarm mode engine, though. But I know that Elasticsearch includes sharding and quorum management when configured to run in cluster mode. If so, I’d assume you could rely on those capabilities and stop worrying about shards getting deleted when containers are terminated: with enough replicas, Elasticsearch would re-replicate the data onto the remaining Elasticsearch nodes.
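If you still want the volume approach in swarm mode, my best guess is a `--mount` on the service plus placement constraints, since the default local volume driver keeps data on whichever node runs the task (node labels and names here are hypothetical):

```sh
# Label the nodes that should hold Elasticsearch data.
docker node update --label-add es=true node1

# Run one task on each labeled node, each with a node-local volume at
# the data path; the volume outlives container replacement on that node.
docker service create --name elasticsearch \
  --mode global \
  --constraint 'node.labels.es==true' \
  --mount type=volume,source=esdata,target=/usr/share/elasticsearch/data \
  elasticsearch:5.6.4
```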