Docker Community Forums


Migrate Docker Swarm to a new IP network

Hey y'all,

I recently started a project to simplify my network design, and with that my internal network IP changed from to

But it looks like Docker Swarm and all of the worker/manager nodes have not automatically migrated to that new network, i.e. when I run docker node ls on a manager, it shows all workers as offline.

And the join token is also no longer correct, as it still looks something like this:

docker swarm join --token tokengoeshere

Is there a way to migrate all of the worker/manager nodes to the new network (

Docker Swarm does NOT like it if you change hostnames or IPs on swarm nodes. If you had a setup with 3+ manager nodes earlier, you pretty much broke your swarm.

You can try the following:
– on each node: run docker swarm leave to make the node leave the swarm, except on one of the manager nodes, which has to remain.
– on the remaining manager node: remove each node using docker node rm ${nodename} (of course you need to replace ${nodename} with the actual node names)

Once your manager node is the only node left, rejoin your nodes:
– on the remaining manager node: docker swarm join-token manager to get the join-token for manager nodes
– on the remaining manager node: docker swarm join-token worker to get the join-token for worker nodes

Then paste the join command from the manager node on your future worker/manager nodes to make them join the swarm. After that you should be good to go.
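Assuming hypothetical node names worker1 and worker2 and a surviving manager, the drain-and-rejoin sequence above might look like this (each command must run on the host indicated in the comments):

```shell
# On each node that should leave (all except the surviving manager):
docker swarm leave

# On the surviving manager: remove the now-down nodes from the member list.
# worker1 and worker2 are placeholders for your actual node names.
docker node rm worker1
docker node rm worker2

# Still on the surviving manager: print the join commands for both roles.
docker swarm join-token manager
docker swarm join-token worker

# On each future worker/manager node: paste the join command printed above,
# which has the general shape:
# docker swarm join --token <token> <manager-ip>:2377
```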

Another option is to force all nodes to leave the swarm:
– run docker swarm leave --force on all nodes, then initialize the swarm on one of your future swarm manager nodes with docker swarm init.
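The force-leave variant boils down to two steps; the IP below is a placeholder for one of your new manager addresses:

```shell
# On every node, managers included: abandon the old swarm state.
docker swarm leave --force

# On one future manager node: start a fresh swarm on the new network.
# 10.0.0.10 is a placeholder for that node's new IP.
docker swarm init --advertise-addr 10.0.0.10
```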

Hey @meyay ,

Would you know whether existing containers/volumes will persist or be lost?

Hard to tell. Swarm resources should be completely gone once a node leaves the swarm. If you force the swarm leave on all nodes, you will lose swarm-aware resources like secrets, configs and (overlay) networks.

If you used the local driver to create named volumes, they should stay untouched, as they are local to the machine and not swarm-aware resources. I am pretty sure that containers created through swarm service tasks will be gone. Usually such containers are created using docker stack deploy, and they are easy to re-create with a new stack deployment. Containers are meant to be disposable… why would you need to retain yours?

Please be more specific regarding your volumes. Can you share the output of `docker volume ls`? (Anonymize the name column if you like, but please leave the rest untouched.)

If you want to be on the safe side, you might think about reverting the IPs to the old ones, creating backups of everything, and then performing your IP changes again.

No reason for containers to persist. I was more concerned about local volumes. Sample output:

sudo docker volume ls
DRIVER    VOLUME NAME
local     a
local     b
local     c
local     d

Good for you: those are created using the local driver. As such they are not swarm resources and should survive the nodes leaving the swarm.

You can still archive their content (of course the containers using them should be in a stopped state for this): tar cvzf container_a.tar.gz /var/lib/docker/volumes/a/. Better safe than sorry :slight_smile:

I am quite certain that you won't need the backup – but surprises still happen sometimes, and it's better to have a safety net.
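A sketch of the backup-and-restore round trip, using a scratch directory in place of /var/lib/docker/volumes/a so it can run without Docker; the paths and file names are illustrative only:

```shell
# Stand-in for a local volume's directory on disk.
VOLDIR=$(mktemp -d)
echo "hello" > "$VOLDIR/data.txt"

# Back up the volume directory (containers using it should be stopped).
# -C archives relative to VOLDIR, so the restore is location-independent.
tar czf /tmp/volume_a_backup.tar.gz -C "$VOLDIR" .

# Restore into a fresh directory and verify the content survived.
RESTORE=$(mktemp -d)
tar xzf /tmp/volume_a_backup.tar.gz -C "$RESTORE"
cat "$RESTORE/data.txt"   # prints "hello"
```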


I will recreate my swarm then, as I extracted all of the stacks from the current manager node.


If you only have a single manager, just make the worker nodes leave and let them rejoin. No need for extra violence ^^

Sadly that doesn't work, as the docker swarm init command doesn't change the listen address.

Ya, you are right. I did forget about the advertise address, which is baked into the configuration.

I would use the docker swarm leave --force approach then and create a new cluster, rather than using the --force-new-cluster parameter. Both result in a new cluster, though the latter is the less clean sibling of a fresh one.

Lol, yeah. That's the last time I'm doing a major network overhaul… I knew I forgot something. Thanks for the help :slight_smile:

I made the same mistake on a Docker Enterprise cluster used for integration tests… and of course close to the due date of a release *cough*.

UCP and DTR are so tightly coupled that a broken swarm layer pretty much messes up the whole environment. That day I lost a lot of images in the DTR and needed to re-push them… An existing DTR cannot be joined to a newly installed UCP…

lesson learned :slight_smile: