What’s the best practice for handling persistent data in Docker Swarm mode services?
I can start a Docker swarm with Elasticsearch as a service, but I assume that as soon as I update the service to a newer version of the container image, the old container is killed and a new one is created, deleting all persisted data.
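For context, here is a minimal sketch of how such a service might be created (the service name `es`, the volume name `esdata`, and the image tag are placeholders, not my exact setup):

```shell
# Create a replicated Elasticsearch service with a named volume
# mounted at Elasticsearch's default data path. The volume is
# created on whichever node each task lands on.
docker service create \
  --name es \
  --replicas 3 \
  --mount type=volume,source=esdata,target=/usr/share/elasticsearch/data \
  elasticsearch:5.0
```

Note that with a plain local volume, the data stays on the node where the container ran; if the scheduler moves a task to a different node, the new container won’t see the old volume.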
With Elasticsearch specifically, I could upgrade only half of the containers at a time, allow some time for the cluster to re-sync, and then upgrade the rest. But that imposes an unknown time constraint on the full upgrade of all containers.
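A staggered upgrade like that can be approximated with Swarm’s rolling-update settings (service name and image tag are again placeholders; the delay is a guess, since the actual re-sync time is the unknown part):

```shell
# Roll out the new image one task at a time, pausing between tasks
# to give the cluster a chance to re-sync before the next restart.
docker service update \
  --update-parallelism 1 \
  --update-delay 120s \
  --image elasticsearch:5.1 \
  es
```

The problem remains that `--update-delay` is a fixed pause, not a check that re-syncing has actually finished.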
So how should I manage Docker swarm services with persistent data like a database?