Swarm: not able to access service from worker node

Hi All,

I am new to the Docker world. While learning, I have created the setup below:
1. Virtual machine - Ubuntu 20 running on VMware Workstation 15 Player, IP 192.168.0.106. I am able to access the internet from this VM (call it VM1) and to ping it from my physical host OS (Windows 10).
2. Virtual machine - Ubuntu 20 running on VMware Workstation 15 Player, IP 192.168.0.105. I am able to access the internet from this VM (call it VM2) and to ping it from my physical host OS (Windows 10).
3. Now I have created the swarm from VM1 as follows:
sudo docker swarm init --advertise-addr 192.168.0.106:2377 --listen-addr 192.168.0.106:2377
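For a swarm to work across hosts, these ports must be reachable between the nodes: TCP 2377 for cluster management, TCP and UDP 7946 for node-to-node communication, and UDP 4789 for overlay network traffic. A minimal sketch for opening them, assuming ufw is the active firewall on the Ubuntu VMs:

sudo ufw allow 2377/tcp   # cluster management
sudo ufw allow 7946/tcp   # node-to-node communication
sudo ufw allow 7946/udp   # node-to-node communication
sudo ufw allow 4789/udp   # overlay network (VXLAN) traffic

If UDP 4789 is blocked, the ingress routing mesh in particular stops working.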
4. Then I added VM2 to the swarm as follows:
sudo docker swarm join --token SWMTKN-1-4i56y47l6o4aycrmg7un21oegmfmwnllcsxaf4zxd05ggqg0zh-9qp67bejerq9dhl3f0suaauvl 192.168.0.106:2377 --advertise-addr 192.168.0.105:2377 --listen-addr 192.168.0.105:2377
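To confirm the join took effect, the swarm state can be checked locally on VM2; a quick sketch:

sudo docker info --format '{{.Swarm.LocalNodeState}}'   # should print "active"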
5. After that I checked the swarm details:
sudo docker node ls
ID                          HOSTNAME       STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
ogka7rdjohri9elcbjjcpdlbp * ubuntumaster   Ready    Active         Leader           19.03.12
7qu9kiprcz7oowfk2ol31k1mx   ubuntuslave    Ready    Active                          19.03.13
6. Then I deployed the nginx service from VM1 as follows:
sudo docker service create -d --name myweb1 --mode global -p9090:80 nginx:1.19.3
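The -p9090:80 short form publishes the port through the ingress routing mesh on every node. The equivalent long form makes that explicit; a sketch of the same deployment:

sudo docker service create -d --name myweb1 --mode global \
  --publish published=9090,target=80,mode=ingress \
  nginx:1.19.3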
7. Service status:
sudo docker service ls
ID             NAME     MODE     REPLICAS   IMAGE          PORTS
ke1o9cbm3e0t   myweb1   global   2/2        nginx:1.19.3   *:9090->80/tcp
8. Service details:
sudo docker service ps myweb1
ID             NAME                               IMAGE          NODE           DESIRED STATE   CURRENT STATE            ERROR   PORTS
gd8oliwngf3    myweb1.ogka7rdjohri9elcbjjcpdlbp   nginx:1.19.3   ubuntumaster   Running         Running 14 minutes ago
1o4q8dlt94jj   myweb1.7qu9kiprcz7oowfk2ol31k1mx   nginx:1.19.3   ubuntuslave    Running         Running 14 minutes ago
9. Now I am able to access nginx from VM1 using the URLs 192.168.0.106:9090 and localhost:9090, but I am not able to access nginx from VM2 using 192.168.0.105:9090 or localhost:9090. My understanding is that nginx is running on both VMs and should be accessible on both (see the sketch after step 10).
10. In both VMs I can see that the nginx container is running.
VM1 :

sudo docker container ls

CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS    NAMES
7a4e13e49dfd   nginx:1.19.3   "/docker-entrypoint.…"   16 minutes ago   Up 15 minutes   80/tcp   myweb1.ogka7rdjohri9elcbjjcpdlbp.egd8oliwngf35wwpjcieew323

VM2:

sudo docker container ls

CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS    NAMES
999062110f0    nginx:1.19.3   "/docker-entrypoint.…"   16 minutes ago   Up 16 minutes   80/tcp   myweb1.7qu9kiprcz7oowfk2ol31k1mx.1o4q8dlt94jj4uufysnhsbamd
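Since the port is published through the ingress routing mesh, every swarm node should answer on 9090 whether or not it runs a task. A quick way to probe this from each VM, assuming curl and ss (from iproute2) are installed:

curl -I http://192.168.0.106:9090   # should return HTTP/1.1 200 OK
curl -I http://192.168.0.105:9090   # same, if the mesh is working
sudo ss -tulnp | grep 9090          # is dockerd listening on 9090 on this node?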

Please guide me if I am making any mistakes.

TIA,
Deb

Hello Deb,

What is the result of docker node ls? I see you posted only one node.

As for the service, the docker service create command with --replicas 2 does not match the docker service ls output, which reports the service in “global” mode.

Likewise, docker service ps does not show the 2 tasks I would expect, which would also show on which node each one is running.

First, fix docker node ls: you must see 2 nodes, either 2 managers or 1 manager and 1 worker, depending on the token you used.

Second, deploy your service again and check it with docker service ps to see where it is running; if you use global mode, the service should be running on all nodes (a sketch of this follows below).
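A sketch of that redeploy-and-check, reusing the service name and image from the post above (everything else is standard docker CLI):

sudo docker service rm myweb1
sudo docker service create -d --name myweb1 --mode global -p 9090:80 nginx:1.19.3
sudo docker service ps myweb1   # the NODE column shows where each task runs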

Regards
Giacomo

There were some errors while copying the details; I have corrected them accordingly.

Try this:

  1. Check whether a process is listening on port 9090 on VM2 (something like: netstat -tulnp | grep 9090).

  2. If the port is open, then try a curl from VM1 towards VM2.

If that works, maybe you should check the “proxy” configuration on VM2.

If the port is not in a listening state on VM2, then try to deploy another service on another port using https://hub.docker.com/r/containous/whoami (see the sketch below).
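A sketch of that test, assuming port 9091 is free (the service name and port are only examples):

sudo docker service create -d --name whoamitest --mode global -p 9091:80 containous/whoami
curl http://192.168.0.105:9091   # the reply reports which container answered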

Regards
Giacomo

In my case, I have 2 physical nodes. On the manager node the application is running perfectly, but on the worker node, swarm does not assign the port.
For example:
MANAGER —> 9090:8080
WORKER —> —:—
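One thing worth checking here: with ingress-mode publishing, the port mapping belongs to the service rather than to the individual container, so docker ps on a worker can show an empty port column even when the routing mesh is forwarding correctly. The published ports can be read off the service itself; a sketch, with <service> standing in for the real service name:

sudo docker service inspect --format '{{json .Endpoint.Ports}}' <service>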