Docker Swarm overlay network - containers can't communicate

Hello everyone,

Tl;dr: Docker Swarm containers on an overlay network can’t ping each other

I am trying to create a swarm that includes my desktop (Windows with Docker Desktop) and an Ubuntu VM (the swarm manager, running on the same desktop PC).

Creating the swarm and joining it works fine. I then created an attachable overlay network, which should span all my nodes. I also created a simple nginx service in global mode, attached to my overlay network, to make the network available on all nodes.

To test the connectivity I created two Ubuntu containers (one on the Ubuntu VM, one on Windows) and joined them to the network. The problem is that the containers can’t ping each other.
I have also opened the required swarm ports (2376, 2377, 7946, 4789) on both Windows and the Ubuntu VM.
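For reference, the setup described above can be sketched as follows (the network, service, and container names are example values, not taken from the original post):

```shell
# On the manager (Ubuntu VM): create an attachable overlay network
docker network create --driver overlay --attachable my-overlay

# Run nginx in global mode (one task per node) on that network
docker service create \
  --name nginx-test \
  --mode global \
  --network my-overlay \
  nginx

# On each node: start a standalone test container attached
# to the same attachable overlay network
docker run -dit --name test1 --network my-overlay ubuntu
```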

Do you know how I can make the connection work?

Could you share how you are deploying these services, or your docker-compose file? It would help us understand and reproduce the problem.

Have you tried using tasks.<service-name> instead of the plain service name when you ping the other services?

Container discovery

For most situations, you should connect to the service name, which is load-balanced and handled by all containers (“tasks”) backing the service. To get a list of all tasks backing the service, do a DNS lookup for tasks.<service-name>.
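As a quick sketch of that lookup, run from inside any container attached to the overlay network (the service name `nginx-test` is an example; the base Ubuntu image may need `dnsutils` installed first):

```shell
# Resolve the service's virtual IP (load-balanced entry point)
nslookup nginx-test

# Resolve the individual task IPs backing the service
# (one A record per running task)
nslookup tasks.nginx-test
```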

Moreover, if you are trying to ping or use the service from outside the network, you will have to publish the needed ports from the containers. Then you should be able to reach any service from any public IP in the swarm cluster.


I didn’t use services to test the connectivity. I created normal containers and tried to ping the IP of the other container, which only worked if they were on the same host.

I only created a global nginx service to make the overlay network available on all nodes.

If I understand you correctly, you have created a swarm with an attachable network. This swarm has an nginx service running in global mode, and you created two containers outside the swarm using the same attachable network. Is that correct?

Yes. The only purpose of the two containers is to ping each other over the swarm network, which stretches across two PCs.

And the only purpose of the nginx service is to make the overlay network available on all nodes. As nodes don’t seem to see all swarm networks automagically.

You shouldn’t need an nginx service to make the overlay network available on all nodes when you are working in swarm mode. In swarm mode, by default all nodes participate in the routing mesh, which does that for you, and any swarm service you create will be attached to the ingress overlay network unless you connect it to a user-defined overlay network. See:

The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node. The routing mesh routes all incoming requests to published ports on available nodes to an active container.

Use the --publish flag to publish a port when you create a service. target is used to specify the port inside the container, and published is used to specify the port to bind on the routing mesh.

To do so, you should create the service using docker service create:

$ docker service create \
  --name <SERVICE-NAME> \
  --publish published=<PUBLISHED-PORT>,target=<CONTAINER-PORT> \
  <IMAGE>

This will create your services inside the swarm and attach them to the ingress overlay network, but you can also specify other overlay networks. That said, I would not create containers outside the swarm on the same overlay network unless there is a good reason for it. Have you tried deploying these two containers inside the swarm (one on the worker node and one on the manager node) and pinging between them? You can use placement constraints for that.
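A minimal sketch of that test using placement constraints (service names, network name, and image are example values):

```shell
# Pin one single-replica service to the manager...
docker service create --name ping-a \
  --network my-overlay \
  --constraint node.role==manager \
  ubuntu sleep infinity

# ...and one to a worker
docker service create --name ping-b \
  --network my-overlay \
  --constraint node.role==worker \
  ubuntu sleep infinity

# Then exec into ping-a's task container and try:
#   ping tasks.ping-b
```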

I would also check that any extra port you open (apart from those reserved for swarm) has the proper security rules. For example, if you want to ping, you will need to allow ICMP on both the Windows host and the Ubuntu VM.
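On the Ubuntu side, allowing inbound echo requests can look like this (an iptables sketch; your distribution may manage the firewall through ufw or firewalld instead):

```shell
# Accept inbound ICMP echo requests (ping) on the Ubuntu VM
sudo iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT

# On Windows, the equivalent is enabling the built-in
# "File and Printer Sharing (Echo Request - ICMPv4-In)" rule
# in Windows Defender Firewall.
```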

If that doesn’t work, I would look at the network policies and the hypervisor used to run the VMs. Some hypervisors use port 4789 or other swarm ports themselves and block the nested traffic as a result. If that is the case, you can change the swarm data-path port like this:

docker swarm init --advertise-addr <MANAGER-IP> --data-path-port 5789

That said, it seems you are using a VM and your desktop PC to create two nodes on the same physical machine. Have you considered using docker-machine instead?

docker-machine create node1
docker-machine create node2
docker-machine ssh node1
docker-machine ssh node2

This whole setup is just for testing purposes only. Later on I will create a proper setup.

Creating the two containers via a service doesn’t work either.

When I use two Linux VMs on my desktop the connection works. Super strange.

Which containers are you using?
Are you using Windows containers on your Desktop PC and Linux containers on your Ubuntu VM?
If so, could you try using Linux containers on your desktop PC (Windows) instead? Sometimes that helps.

From the Docker Desktop menu, you can toggle which daemon (Linux or Windows) the Docker CLI talks to. Select Switch to Windows containers to use Windows containers, or select Switch to Linux containers to use Linux containers (the default).
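Besides the menu, the switch can also be done from a PowerShell prompt via the Docker Desktop CLI helper (the install path below is the default; adjust it if Docker Desktop is installed elsewhere):

```shell
# Switch the Docker Desktop daemon to Linux containers
& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchLinuxEngine
```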