Docker Community Forums

Share and learn in the Docker community.

Cannot expose port of a container attached to an overlay network to a private IP address

I have a problem with Docker Swarm. I have published a port of a container attached to an overlay network of my swarm, but the port is only reachable from within each host of the swarm.
In other words, I cannot access the port from outside the swarm.

I have 4 machines:

  • host 1 : member of swarm
  • host 2 : member of swarm
  • host 3 : member of swarm
  • host 4 : not member of swarm

and one container:

  • vault container exposing TCP port 8200; below is the relevant part of the docker-compose file:

        image: vault:1.3.2
        ports:
          - "8200"
        environment:
          VAULT_API_ADDR: http://vault:8200

    and the Vault configuration:

        {
          "ui": true,
          "backend": {
            "file": {
              "path": "/vault/file"
            }
          },
          "listener": {
            "tcp": {
              "address": "",
              "tls_disable": 1
            }
          },
          "default_lease_ttl": "168h",
          "max_lease_ttl": "720h"
        }
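For reference, a stack like this would be deployed with something along these lines (a sketch; the stack name test matches the docker stack services output later in the thread):

```shell
# Deploy the compose file above as a swarm stack named "test"
docker stack deploy -c docker-compose.yml test

# Verify the service and its published port
docker stack services test
```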

When I start my stack, Docker creates my container and an overlay network associated with it.

The command docker stack services returns:

    a**@alaska:~$ docker stack services test
    ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
    rhjg9jc0guyy        test_vault          replicated          1/1                 vault:1.4.2         *:30000->8200/tcp

When I run telnet on port 30000 from host1, host2, or host3, it connects successfully, but it times out when I run it from host4.
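The telnet check can also be scripted; a small sketch (the node address is a placeholder, since the real IPs aren't given in the thread):

```shell
#!/usr/bin/env bash
# Probe the published port; pass the address of any swarm node as $1.
NODE_IP="$1"     # placeholder — substitute a real swarm node IP
PORT=30000

# /dev/tcp is a bash built-in pseudo-device; timeout avoids hanging on drops
if timeout 5 bash -c "exec 3<>/dev/tcp/$NODE_IP/$PORT"; then
  echo "port $PORT reachable on $NODE_IP"
else
  echo "timeout/refused — this is the failing case seen from host4"
fi
```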

I have read the Docker documentation, and it seems that my swarm exposes the port only on the public IP address. Is there a way to expose the port of my container on a private IP address?
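One thing worth trying (a sketch, not something suggested in the thread): publishing the port in host mode bypasses the ingress routing mesh entirely, so the port is bound directly on the node(s) running a task, on whatever interfaces those nodes have:

```shell
# Sketch: host-mode publishing skips the routing mesh.
# The port is only bound on nodes that actually run a task.
docker service create --name vault \
  --publish published=30000,target=8200,mode=host \
  vault:1.3.2
```

The trade-off is that you lose the mesh's load balancing and must reach the specific node running the task.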

I have the exact same problem.


  • docker run is accessible from other hosts (v17 and v19)
  • docker service create on Docker 17 is accessible from other hosts
  • docker service create on Docker 19 is NOT accessible from other hosts

I have been running a stack on Docker version 17.05.0-ce, build 89658be for some time.

Due to a bug, I upgraded to Docker version 19.03.11, build 42e35e61f3, and found the exact same problem described in this thread: a stack that was previously working is now no longer accessible outside of the host.

I have removed my own stack from the equation and simply used nginx as an example:

    docker run --publish published=8080,target=80 nginx

is accessible from my jump host.

    docker service create --name nginx --publish published=8080,target=80 nginx

is not accessible from the jump host (but is accessible on the host itself via curl).

I am running on Docker 19 too. I think that for now, I'll downgrade Docker to 18 or 17.

During the time that no ingress network exists, existing services which do not publish ports continue to function but are not load-balanced. This affects services which publish ports, such as a WordPress service which publishes port 80.

Inspect the ingress network using docker network inspect ingress, and remove any services whose containers are connected to it. These are services that publish ports, such as a WordPress service which publishes port 80. If all such services are not stopped, the next step fails.
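The inspection step can be narrowed with a format filter; a sketch (the service name here is only an example):

```shell
# List the names of containers attached to the ingress network
docker network inspect ingress \
  --format '{{range .Containers}}{{.Name}}{{"\n"}}{{end}}'

# Remove each port-publishing service before deleting the network, e.g.:
docker service rm nginx    # "nginx" is an illustrative service name
```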

Remove the existing ingress network:

    $ docker network rm ingress

    WARNING! Before removing the routing-mesh network, make sure all the nodes
    in your swarm run the same docker engine version. Otherwise, removal may not
    be effective and functionality of newly created ingress networks will be
    impaired.
    Are you sure you want to continue? [y/N]
Create a new overlay network using the --ingress flag, along with the custom options you want to set. This example sets the MTU to 1200, along with a custom subnet and gateway.

    $ docker network create \
      --driver overlay \
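The command above is cut off; a complete sketch, where the subnet, gateway, MTU, and network name are illustrative values of my own, not taken from the docs excerpt:

```shell
# Recreate the ingress network with custom options.
# Subnet/gateway/MTU/name below are placeholders — adjust to your network.
docker network create \
  --driver overlay \
  --ingress \
  --subnet= \
  --gateway= \
  --opt com.docker.network.driver.mtu=1200 \
  my-ingress
```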

I’ve been doing some experimenting…

Starting with docker-ce-18:

    [centos@swarm4 ~]$ yum list installed | grep docker
                                        1.2.13-3.2.el7             @docker_ce_stable
    docker-ce.x86_64                    3:18.09.9-3.el7            @docker_ce_stable
    docker-ce-cli.x86_64                1:19.03.11-3.el7           @docker_ce_stable
    [centos@swarm4 ~]$ docker service create --name nginx --publish published=8080,target=80 nginx
    overall progress: 1 out of 1 tasks
    1/1: running   [==================================================>]
    verify: Service converged

Curling from another machine works.

Upgrading docker to 19, the service still works:

    [centos@swarm4 ~]$ yum list installed | grep docker
                                        1.2.13-3.2.el7             @docker_ce_stable
    docker-ce.x86_64                    3:19.03.11-3.el7           @docker_ce_stable
    docker-ce-cli.x86_64                1:19.03.11-3.el7           @docker_ce_stable
    [centos@swarm4 ~]$ docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
    72e09861c746        nginx:latest        "/docker-entrypoint.…"   14 seconds ago      Up 7 seconds        80/tcp              nginx.1.wf51l1q5mb96pv331vuk24bs2

Even deleting and recreating the service still works.

However, if you leave and destroy the swarm, and re-init it (on version 19), then it is not accessible.

It appears that a swarm created pre-19 is accessible, but one created on 19 is not accessible…

I have filed a bug.

You can work around this by installing version 18, initializing the swarm, then upgrading to version 19.
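A rough sketch of that workaround on CentOS, using the package versions that appear in the yum listings earlier in the thread (exact version strings are assumptions — check yum list docker-ce --showduplicates for what your repo offers):

```shell
# Sketch only — versions taken from the yum output above; adjust as needed.
sudo yum install -y docker-ce-18.09.9 docker-ce-cli-18.09.9
sudo systemctl enable --now docker

# Initialize the swarm (and its ingress network) while on 18.x
docker swarm init

# Upgrade the engine in place; swarm state and ingress are preserved
sudo yum update -y docker-ce-19.03.11 docker-ce-cli-19.03.11
sudo systemctl restart docker
```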

I have tried to create a service with a published port on every “19.03” Docker patch version.
The service is reachable on Docker 19.03.4, but from 19.03.5 onward, the service is not reachable.