Hi, I’ve got my stack running the way I want on my swarm, but one of the main apps in my set of services is a big stateful app with long-lived TCP connections (it’s a game lobby server). I’d like to be able to push new config files to these containers without restarting them. Is there a way to do this? The new config feature would seem to be what I want, but doing an update on the service brings the containers down and then back up, which is exactly what I need to avoid.
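For context, here’s a sketch of the kind of stack file I mean (service and config names are placeholders). As I understand it, swarm configs are immutable, so rotating one means creating a new config object and running `docker service update --config-rm/--config-add`, and that update is what stops and recreates each task:

```yaml
# Sketch of a stack file using the swarm config feature; names/paths
# are placeholders. Rotating the config looks roughly like:
#   docker config create lobby_conf_v2 ./lobby.conf
#   docker service update \
#       --config-rm lobby_conf_v1 \
#       --config-add source=lobby_conf_v2,target=/etc/lobby/lobby.conf \
#       mystack_lobby
# and that service update restarts the containers.
version: "3.3"
services:
  lobby:
    image: example/lobby-server:latest
    configs:
      - source: lobby_conf_v1
        target: /etc/lobby/lobby.conf
configs:
  lobby_conf_v1:
    file: ./lobby.conf
```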
It looks like Kubernetes can do this with its ConfigMaps? At least according to this Medium post; it’s hard to tell from their documentation: https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-part-2-3dc37111f0dc
Obviously, locally on each node I can just use docker cp to stuff the files directly into the container.
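That per-node approach looks something like this (container name, paths, and the reload signal are all placeholders; this assumes the app re-reads its config on SIGHUP, which depends on the server):

```shell
# Find the task container for the service on this node
docker ps --filter "name=mystack_lobby" --format '{{.Names}}'

# Copy the updated config straight into the running container
# (no restart, but also no swarm-level coordination)
docker cp ./lobby.conf mystack_lobby.1.abc123:/etc/lobby/lobby.conf

# Ask the app to reload, assuming it handles SIGHUP that way
docker kill --signal=SIGHUP mystack_lobby.1.abc123
```

The obvious downside is that I’d have to repeat this on every node running a task, which is what I’m hoping the swarm can do for me.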
Is there a way to push files to the swarm from a manager without restarting the containers in the services?
As a workaround, I can map a local volume on each of the machines and update that over ssh, use an NFS volume served from the manager, use some file-sync tool, or store the shared files in S3 and poll them, but I’m wondering if there’s an official Docker way to do this?
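The bind-mount variant of that workaround would be a sketch like this, assuming a host directory that I keep in sync out-of-band (rsync/NFS/etc.; paths are placeholders). The containers see file changes immediately, though the app still has to notice them (re-read on signal, inotify, or polling):

```yaml
# Workaround sketch: bind-mount a host directory on every node and
# sync its contents out-of-band; no service update, so no restart.
version: "3.3"
services:
  lobby:
    image: example/lobby-server:latest
    volumes:
      - /srv/lobby/config:/etc/lobby:ro
```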