Docker Swarm on Ubuntu 24.04 (3 VMs created on Proxmox) not accepting HTTP requests

I am unable to get a Docker swarm configured to accept HTTP connections to an nginx service.

I created 3 brand-new virtual machines on my Proxmox install. Each VM has Ubuntu Server 24.04 installed with no further configuration, and no firewall is running in either Proxmox or Ubuntu:

root@nuc-c1:~# ufw status
Status: inactive

The IP addresses are 10.0.0.231, 10.0.0.232, and 10.0.0.233. I’ve run apt update and apt upgrade on each and rebooted. I’ve installed Docker as described in the official Docker documentation. I can also run nginx directly on each node and access it with no problem; I removed nginx after running this test.

I am able to set up the Docker swarm with 10.0.0.231 as the manager and 10.0.0.232 and 10.0.0.233 as workers.

root@nuc-c1:~# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
ejtb52tjhotqo61rivm8tqty2 *   nuc-c1     Ready     Active         Leader           28.3.2
uhq6mf9unpshfrk24w1ztp3hf     nuc-c2     Ready     Active                          28.3.2
qhfj14hjlz8rceb3hkv8t7gp8     nuc-c3     Ready     Active                          28.3.2

I create an nginx service and scale it:

root@nuc-c1:~# docker service create --name nginx --publish published=80,target=80 nginx
h218git04bc1kagd9lm3xytgj
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service h218git04bc1kagd9lm3xytgj converged

root@nuc-c1:~# docker service scale nginx=3
nginx scaled to 3
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service nginx converged
root@nuc-c1:~# docker service ps nginx
ID             NAME      IMAGE          NODE      DESIRED STATE   CURRENT STATE                ERROR     PORTS
vbxqrf6xwz4r   nginx.1   nginx:latest   nuc-c1    Running         Running about a minute ago
c685thb5htjv   nginx.2   nginx:latest   nuc-c3    Running         Running 38 seconds ago
tx61wknxm6s3   nginx.3   nginx:latest   nuc-c2    Running         Running 38 seconds ago

root@nuc-c1:~# docker container ls
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
2cb26a4ce621   nginx:latest   "/docker-entrypoint.…"   14 minutes ago   Up 14 minutes   80/tcp    nginx.1.vbxqrf6xwz4rkdm38r0r9ahbs

When I try to access nginx from my laptop browser at http://10.0.0.231, my connection times out.

I am seeing TCP traffic on 10.0.0.231 on port 80 when I use tcpdump:

tcpdump -i ens18 dst port 80

And I can curl successfully from 10.0.0.231 itself:

root@nuc-c1:~# curl http://10.0.0.231
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

But I can’t curl to any of the other nodes (times out):

curl http://10.0.0.232
curl http://10.0.0.233

Here is what the docker networks look like:

root@nuc-c1:~# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
7297ed406809   bridge            bridge    local
67597c64fdd3   docker_gwbridge   bridge    local
5d484ccded12   host              host      local
dv00l5t3qcgq   ingress           overlay   swarm
27801b5b1e48   none              null      local

And here is what the ingress network looks like:

root@nuc-c1:~# docker network inspect ingress
[
    {
        "Name": "ingress",
        "Id": "dv00l5t3qcgq79mghlxcy8zz8",
        "Created": "2025-07-18T12:50:59.894891478Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": true,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "2cb26a4ce62142bc7124bd6d3d8e1eddd5ffec38dc6071ad4ac6d50e2a2c4732": {
                "Name": "nginx.1.vbxqrf6xwz4rkdm38r0r9ahbs",
                "EndpointID": "01667c27411f707ae88f6451bc6b41c7587dd72bf2ef9adab1df81ac9c38b995",
                "MacAddress": "02:42:0a:00:00:ae",
                "IPv4Address": "10.0.0.174/24",
                "IPv6Address": ""
            },
            "ingress-sbox": {
                "Name": "ingress-endpoint",
                "EndpointID": "c71318a6646a31410b4384fec5a94f1b0066904f010a85eb97a39f5b22600f02",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "f7a8192a9430",
                "IP": "10.0.0.231"
            },
            {
                "Name": "167df137fe24",
                "IP": "10.0.0.232"
            },
            {
                "Name": "95c2e18cf098",
                "IP": "10.0.0.233"
            }
        ]
    }
]

And the docker_gwbridge:

root@nuc-c1:~# docker network inspect docker_gwbridge
[
    {
        "Name": "docker_gwbridge",
        "Id": "67597c64fdd389ffb7018d4b879bec3a5e08691c2d3cdef1d936120495ceacff",
        "Created": "2025-07-17T21:07:09.559971447Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "2cb26a4ce62142bc7124bd6d3d8e1eddd5ffec38dc6071ad4ac6d50e2a2c4732": {
                "Name": "gateway_25632968d00c",
                "EndpointID": "cfef0de29bc74db35f52ea81aaa82fa923a579174abcee65466e80894210113b",
                "MacAddress": "8a:32:74:fe:31:18",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "ingress-sbox": {
                "Name": "gateway_ingress-sbox",
                "EndpointID": "88dbfe8c29f304b4e52b247c9685149542c40abe3c59fc2ccf691ab902619266",
                "MacAddress": "32:1c:f1:62:f9:65",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.name": "docker_gwbridge"
        },
        "Labels": {}
    }
]

FWIW, here is what nmap is showing:

root@nuc-c1:~# nmap 10.0.0.231
Starting Nmap 7.94SVN ( https://nmap.org ) at 2025-07-19 20:26 UTC
Nmap scan report for manager-1 (10.0.0.231)
Host is up (0.0000040s latency).
Not shown: 998 closed tcp ports (reset)
PORT   STATE    SERVICE
22/tcp open     ssh
80/tcp filtered http

root@nuc-c1:~# nmap 10.0.0.232
Starting Nmap 7.94SVN ( https://nmap.org ) at 2025-07-19 20:28 UTC
Nmap scan report for worker-1 (10.0.0.232)
Host is up (0.000030s latency).
Not shown: 998 closed tcp ports (reset)
PORT   STATE    SERVICE
22/tcp open     ssh
80/tcp filtered http
MAC Address: BC:24:11:A8:E6:FC (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 1.29 seconds

root@nuc-c1:~# nmap 10.0.0.233
Starting Nmap 7.94SVN ( https://nmap.org ) at 2025-07-19 20:28 UTC
Nmap scan report for worker-2 (10.0.0.233)
Host is up (0.00023s latency).
Not shown: 998 closed tcp ports (reset)
PORT   STATE    SERVICE
22/tcp open     ssh
80/tcp filtered http
MAC Address: BC:24:11:56:0A:7B (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 1.27 seconds

And finally, here is the iptables:

iptables -S
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-BRIDGE
-N DOCKER-CT
-N DOCKER-FORWARD
-N DOCKER-INGRESS
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-FORWARD
-A DOCKER ! -i docker_gwbridge -o docker_gwbridge -j DROP
-A DOCKER ! -i docker0 -o docker0 -j DROP
-A DOCKER-BRIDGE -o docker_gwbridge -j DOCKER
-A DOCKER-BRIDGE -o docker0 -j DOCKER
-A DOCKER-CT -o docker_gwbridge -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-CT -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-FORWARD -j DOCKER-INGRESS
-A DOCKER-FORWARD -j DOCKER-CT
-A DOCKER-FORWARD -j DOCKER-ISOLATION-STAGE-1
-A DOCKER-FORWARD -j DOCKER-BRIDGE
-A DOCKER-FORWARD -i docker_gwbridge -o docker_gwbridge -j DROP
-A DOCKER-FORWARD -i docker_gwbridge ! -o docker_gwbridge -j ACCEPT
-A DOCKER-FORWARD -i docker0 -j ACCEPT
-A DOCKER-INGRESS -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-INGRESS -p tcp -m tcp --sport 80 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-INGRESS -j RETURN
-A DOCKER-ISOLATION-STAGE-1 -i docker_gwbridge ! -o docker_gwbridge -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o docker_gwbridge -j DROP

I have no idea why this isn’t working. I’m sure I did something basic incorrectly, because there is no way that this is so broken. Please let me know if you see something or if you need more information on anything.

Your commands look good to me.

What doesn’t look good, though, is that the VMs’ subnet is within the default IP range Docker uses for overlay networks, which is 10.0.0.0/8.
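To make the overlap concrete: the ingress subnet 10.0.0.0/24 that swarm carved out of that pool contains the hosts’ own addresses. A quick pure-shell sanity check (the net24 helper is just for illustration):

```shell
# Compare the /24 network portion of the host's IP against the ingress
# subnet shown by "docker network inspect ingress" (10.0.0.0/24).
net24() {
  IFS=. read -r a b c _ <<EOF
$1
EOF
  echo "$a.$b.$c.0"
}

host_ip=10.0.0.231
if [ "$(net24 "$host_ip")" = "10.0.0.0" ]; then
  echo "overlap: $host_ip is inside the ingress subnet 10.0.0.0/24"
fi
```

With the host, the ingress gateway (10.0.0.1), and the ingress sandbox (10.0.0.2) all claiming addresses in the same range, it is easy for replies to get routed into the wrong network.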

You can configure it in the file /etc/docker/daemon.json. If the file doesn’t exist, or it exists but the configuration block is missing, the default values are used.

You can check the current settings using this command: docker info --format '{{json .Swarm.Cluster.DefaultAddrPool}}' (prefix with sudo if required)

The default setting in /etc/docker/daemon.json would look like this:

    {
      "default-address-pools": [
        {
          "base": "10.0.0.0/8",
          "size": 24
        }
      ]
    }

You could change it to a different subnet:

    {
      "default-address-pools": [
        {
          "base": "192.168.0.0/16",
          "size": 24
        }
      ]
    }

I think you will need to remove the existing overlay networks so they can be re-created using the new range.
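One more note: after editing /etc/docker/daemon.json the daemon needs a restart (sudo systemctl restart docker, or a reboot), and a JSON syntax error in that file will prevent dockerd from starting at all. A small sketch that validates the file first (the /tmp path and inlined copy are purely for illustration):

```shell
# Write the example configuration and check that it parses as JSON
# before restarting dockerd; a malformed daemon.json stops the daemon.
cat <<'EOF' > /tmp/daemon.json
{
  "default-address-pools": [
    { "base": "192.168.0.0/16", "size": 24 }
  ]
}
EOF

python3 -m json.tool < /tmp/daemon.json > /dev/null \
  && echo "daemon.json: valid JSON"
# then copy to /etc/docker/daemon.json and: sudo systemctl restart docker
```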

You might find this discussion helpful:

So I removed the swarm entirely. I added the following daemon.json file:

root@nuc-c1:/etc/docker# cat daemon.json
{
  "default-address-pools": [
    {
      "base": "172.24.0.0/13",
      "size": 24
    }
  ]
}

I rebooted the machine and ran docker info; it looks like daemon.json did get picked up.

root@nuc-c1:/etc/docker# docker info
Client: Docker Engine - Community
 Version:    28.3.2
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.25.0
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.38.2
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 1
 Server Version: 28.3.2
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 CDI spec directories:
  /etc/cdi
  /var/run/cdi
 Swarm: active
  NodeID: 2u6191cksk83ldbfb3tnvwr5b
  Is Manager: true
  ClusterID: qf207u0fv12njwfuu8gld0gmc
  Managers: 1
  Nodes: 3
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 10.0.0.231
  Manager Addresses:
   10.0.0.231:2377
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05044ec0a9a75232cad458027ca83437aae3f4da
 runc version: v1.2.5-0-g59923ef
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.8.0-64-generic
 Operating System: Ubuntu 24.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.755GiB
 Name: nuc-c1
 ID: 1b737159-35d6-4384-b96c-275b7759d629
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  ::1/128
  127.0.0.0/8
 Live Restore Enabled: false
 Default Address Pools:
   Base: 172.24.0.0/13, Size: 24

However, the ingress network still looks like it is within the 10.0.0.x range:

root@nuc-c1:/etc/docker# docker network inspect  ingress
[
    {
        "Name": "ingress",
        "Id": "kifl2mnxyyw4etxct98nhh4pv",
        "Created": "2025-07-20T00:01:31.290725587Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": true,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ingress-sbox": {
                "Name": "ingress-endpoint",
                "EndpointID": "55f1a6b0c873d033594743f6c29b4cc74501f8eae02cf813204ecf39676e78b6",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "d8e89b13072f",
                "IP": "10.0.0.231"
            },
            {
                "Name": "da2d89bf0588",
                "IP": "10.0.0.232"
            },
            {
                "Name": "8cdadb67cdea",
                "IP": "10.0.0.233"
            }
        ]
    }
]

And when I go ahead and create the service with this command, I get the same failed result:

docker service create --name nginx --replicas 3 --publish published=80,target=80 nginx
root@nuc-c1:/etc/docker# docker network inspect  ingress
[
    {
        "Name": "ingress",
        "Id": "kifl2mnxyyw4etxct98nhh4pv",
        "Created": "2025-07-20T00:01:31.290725587Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": true,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "3c871402ace92541f02b6a2e3f7d192ad1c62e94b6c1f5a0a6b6918d2979274f": {
                "Name": "nginx.3.levzote77ewzm3ypx96w1m67m",
                "EndpointID": "3570a2b027ce9d365ef346393ab74f535b2ff5557ab31891b750f1749f2d0a12",
                "MacAddress": "02:42:0a:00:00:10",
                "IPv4Address": "10.0.0.16/24",
                "IPv6Address": ""
            },
            "ingress-sbox": {
                "Name": "ingress-endpoint",
                "EndpointID": "55f1a6b0c873d033594743f6c29b4cc74501f8eae02cf813204ecf39676e78b6",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "d8e89b13072f",
                "IP": "10.0.0.231"
            },
            {
                "Name": "da2d89bf0588",
                "IP": "10.0.0.232"
            },
            {
                "Name": "8cdadb67cdea",
                "IP": "10.0.0.233"
            }
        ]
    }
]

And here are the container list, the network details, and finally the docker container inspect:

root@nuc-c1:/etc/docker# docker container ls
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
3c871402ace9   nginx:latest   "/docker-entrypoint.…"   51 seconds ago   Up 50 seconds   80/tcp    nginx.3.levzote77ewzm3ypx96w1m67m
root@nuc-c1:/etc/docker# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "abd21b501e4f85b69f8eb42638af31f52f1da71a24c8162aae884fa90d27c285",
        "Created": "2025-07-19T23:53:30.978085352Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.24.0.0/24",
                    "Gateway": "172.24.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
root@nuc-c1:/etc/docker# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
abd21b501e4f   bridge            bridge    local
67597c64fdd3   docker_gwbridge   bridge    local
5d484ccded12   host              host      local
kifl2mnxyyw4   ingress           overlay   swarm
27801b5b1e48   none              null      local
root@nuc-c1:/etc/docker# docker network inspect docker_gwbridge
[
    {
        "Name": "docker_gwbridge",
        "Id": "67597c64fdd389ffb7018d4b879bec3a5e08691c2d3cdef1d936120495ceacff",
        "Created": "2025-07-17T21:07:09.559971447Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "3c871402ace92541f02b6a2e3f7d192ad1c62e94b6c1f5a0a6b6918d2979274f": {
                "Name": "gateway_8b61786bd2b8",
                "EndpointID": "13bc614e6f326c925da4113df7998ba4d37ae4b3f9b238b4a17dac63674916a7",
                "MacAddress": "16:40:06:b6:10:8e",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "ingress-sbox": {
                "Name": "gateway_ingress-sbox",
                "EndpointID": "8280c31c6e88e3e59c26f25fe317d1abac51d514fef4a098d74f798cbb92422a",
                "MacAddress": "3e:db:95:cf:08:b1",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.name": "docker_gwbridge"
        },
        "Labels": {}
    }
]
root@nuc-c1:/etc/docker# docker container ls
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
3c871402ace9   nginx:latest   "/docker-entrypoint.…"   10 minutes ago   Up 10 minutes   80/tcp    nginx.3.levzote77ewzm3ypx96w1m67m
root@nuc-c1:/etc/docker# docker container inspect 3c871402ace9
[
    {
        "Id": "3c871402ace92541f02b6a2e3f7d192ad1c62e94b6c1f5a0a6b6918d2979274f",
        "Created": "2025-07-20T00:54:48.851538951Z",
        "Path": "/docker-entrypoint.sh",
        "Args": [
            "nginx",
            "-g",
            "daemon off;"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 4286,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2025-07-20T00:54:49.092628412Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:22bd1541745359072c06a72a23f4f6c52dbb685424e0d5b29008ae4eb2683698",
        "ResolvConfPath": "/var/lib/docker/containers/3c871402ace92541f02b6a2e3f7d192ad1c62e94b6c1f5a0a6b6918d2979274f/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/3c871402ace92541f02b6a2e3f7d192ad1c62e94b6c1f5a0a6b6918d2979274f/hostname",
        "HostsPath": "/var/lib/docker/containers/3c871402ace92541f02b6a2e3f7d192ad1c62e94b6c1f5a0a6b6918d2979274f/hosts",
        "LogPath": "/var/lib/docker/containers/3c871402ace92541f02b6a2e3f7d192ad1c62e94b6c1f5a0a6b6918d2979274f/3c871402ace92541f02b6a2e3f7d192ad1c62e94b6c1f5a0a6b6918d2979274f-json.log",
        "Name": "/nginx.3.levzote77ewzm3ypx96w1m67m",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "docker-default",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "bridge",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "ConsoleSize": [
                0,
                0
            ],
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "private",
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "Isolation": "default",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": null,
            "PidsLimit": null,
            "Ulimits": [],
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/interrupts",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware",
                "/sys/devices/virtual/powercap"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ],
            "Init": false
        },
        "GraphDriver": {
            "Data": {
                "ID": "3c871402ace92541f02b6a2e3f7d192ad1c62e94b6c1f5a0a6b6918d2979274f",
                "LowerDir": "/var/lib/docker/overlay2/ec78d8dc6af94ad3ed106e66c19da806a3f2a77900977acb400ea1e0b9b0e7df-init/diff:/var/lib/docker/overlay2/d457d47f4e336d46bdd7af027991988e25ce8b73231ee343c56e0cd6a6f34dc5/diff:/var/lib/docker/overlay2/bc291cff888b66726f28ca47d983965771e919024feb7a5075107fda21af7890/diff:/var/lib/docker/overlay2/c90bff7465f3f9fbfa1fc885739e27c142b3ffeb5ad79c9f6fb0c0f431ded563/diff:/var/lib/docker/overlay2/9a00d8a9c77b4fb555d1fa73fb24dd4630fa1f341691ad801ae8b4bf8fe05d88/diff:/var/lib/docker/overlay2/8d80fa5073def9aac1deddb56cfb19380cfb8886cefa7e2fcd348157f7708316/diff:/var/lib/docker/overlay2/1c3f345aefd291962cb424e05d76d30a9456e4afd955f4a2b068c749aba4d3ed/diff:/var/lib/docker/overlay2/048c685205ac5c7ed43d1bfdcb77cbd8b6984169bef43e528210e28f9fd0cd75/diff",
                "MergedDir": "/var/lib/docker/overlay2/ec78d8dc6af94ad3ed106e66c19da806a3f2a77900977acb400ea1e0b9b0e7df/merged",
                "UpperDir": "/var/lib/docker/overlay2/ec78d8dc6af94ad3ed106e66c19da806a3f2a77900977acb400ea1e0b9b0e7df/diff",
                "WorkDir": "/var/lib/docker/overlay2/ec78d8dc6af94ad3ed106e66c19da806a3f2a77900977acb400ea1e0b9b0e7df/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [],
        "Config": {
            "Hostname": "3c871402ace9",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "80/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "NGINX_VERSION=1.29.0",
                "NJS_VERSION=0.9.0",
                "NJS_RELEASE=1~bookworm",
                "PKG_RELEASE=1~bookworm",
                "DYNPKG_RELEASE=1~bookworm"
            ],
            "Cmd": [
                "nginx",
                "-g",
                "daemon off;"
            ],
            "Image": "nginx:latest@sha256:f5c017fb33c6db484545793ffb67db51cdd7daebee472104612f73a85063f889",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": [
                "/docker-entrypoint.sh"
            ],
            "OnBuild": null,
            "Labels": {
                "com.docker.swarm.node.id": "2u6191cksk83ldbfb3tnvwr5b",
                "com.docker.swarm.service.id": "cgmc6itushqu1sw29nt0gcm90",
                "com.docker.swarm.service.name": "nginx",
                "com.docker.swarm.task": "",
                "com.docker.swarm.task.id": "levzote77ewzm3ypx96w1m67m",
                "com.docker.swarm.task.name": "nginx.3.levzote77ewzm3ypx96w1m67m",
                "maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>"
            },
            "StopSignal": "SIGQUIT"
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "8b61786bd2b84a4083863ee0964cf2aeb4f31c985b8796fdb2c305d96527ba2e",
            "SandboxKey": "/var/run/docker/netns/8b61786bd2b8",
            "Ports": {
                "80/tcp": null
            },
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "ingress": {
                    "IPAMConfig": {
                        "IPv4Address": "10.0.0.16"
                    },
                    "Links": null,
                    "Aliases": null,
                    "MacAddress": "02:42:0a:00:00:10",
                    "DriverOpts": null,
                    "GwPriority": 0,
                    "NetworkID": "kifl2mnxyyw4etxct98nhh4pv",
                    "EndpointID": "3570a2b027ce9d365ef346393ab74f535b2ff5557ab31891b750f1749f2d0a12",
                    "Gateway": "",
                    "IPAddress": "10.0.0.16",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "DNSNames": [
                        "nginx.3.levzote77ewzm3ypx96w1m67m",
                        "3c871402ace9"
                    ]
                }
            }
        }
    }
]

Should I be creating an overlay network and using it when I create the service?

I should have written it more specifically: you need to remove the existing networks, because their configuration is immutable; configuration changes will not affect them after creation.

Something along these lines should do the trick:

# delete old ingress 
docker network rm ingress

# create new ingress based on new address pool
docker network create \
  --driver overlay \
  --ingress \
  ingress

I gathered the info from here: https://docs.docker.com/engine/swarm/networking/#customize-ingress

Not there yet; still something wrong. I removed the swarm and the ingress network, then rebooted the Ubuntu instance. When I create the swarm, it creates the ingress network but seems to ignore the daemon.json config that specifies the subnet. In the commands below you can see that there is no ingress network before the swarm is created; after I create the swarm, the ingress network is still on the 10.0.0.X network:

root@nuc-c1:~# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
08313bc75302   bridge            bridge    local
67597c64fdd3   docker_gwbridge   bridge    local
5d484ccded12   host              host      local
27801b5b1e48   none              null      local
root@nuc-c1:~# docker swarm init --advertise-addr 10.0.0.231
Swarm initialized: current node (sy9jtcl5xnipl8ajx62eoatrr) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-255yjusqafzfa1lizeulkecbs3ghjcb8thdoyr0hazvd28d6zu-8a81b1nzsaf9gis598iueopii 10.0.0.231:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
root@nuc-c1:~# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
08313bc75302   bridge            bridge    local
67597c64fdd3   docker_gwbridge   bridge    local
5d484ccded12   host              host      local
z1qgwemf00h7   ingress           overlay   swarm
27801b5b1e48   none              null      local
root@nuc-c1:~# docker network inspect ingress
[
    {
        "Name": "ingress",
        "Id": "z1qgwemf00h7j6z666v04b4jb",
        "Created": "2025-07-20T11:45:23.211616806Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": true,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ingress-sbox": {
                "Name": "ingress-endpoint",
                "EndpointID": "6b0409caa60e1aeeea27783f6f6c21a7710214ba6044b2ec6e4cb0b4296df357",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "c802d11bd6b1",
                "IP": "10.0.0.231"
            }
        ]
    }
]
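The subnet in that output is the heart of the problem: the default ingress network (10.0.0.0/24) contains the very addresses the swarm nodes use on the LAN. A quick sketch with Python's `ipaddress` module, using the values copied from the outputs above, shows the overlap:

```python
import ipaddress

# Default ingress subnet, as reported by `docker network inspect ingress`
ingress = ipaddress.ip_network("10.0.0.0/24")

# LAN addresses of the three swarm nodes
nodes = ["10.0.0.231", "10.0.0.232", "10.0.0.233"]

# Every node address falls inside the ingress subnet, so traffic destined
# for the physical hosts collides with the overlay's own addressing.
for ip in nodes:
    print(ip, ipaddress.ip_address(ip) in ingress)  # each prints True
```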

Also, I cannot create an ingress network before I create the swarm.

And if I try to create an ingress network after creating the swarm, I get an error that an ingress network already exists; even creating one under a different name fails with the same error.

It looks like the bridge network is using the daemon configuration:

root@nuc-c1:~# docker network inspect bridge 
[
    {
        "Name": "bridge",
        "Id": "08313bc753026d428ff1497b9f388d14f89a046d84af8ce0e4eefc09772d62f3",
        "Created": "2025-07-20T11:43:04.957236856Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.24.0.0/24",
                    "Gateway": "172.24.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
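I did not paste my daemon.json above, but for reference, what I mean by "the daemon configuration" is a `default-address-pools` entry along these lines (the values here are a reconstruction, chosen to be consistent with the 172.24.0.0/24 bridge subnet shown above):

```json
{
  "default-address-pools": [
    { "base": "172.24.0.0/16", "size": 24 }
  ]
}
```

As the outputs show, local bridge networks honor this setting, but the ingress network the swarm creates does not.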

Update…

When I rm the ingress and create a new one with this command:

root@nuc-c1:~# docker network create \
  --driver overlay \
  --ingress \
  --subnet=10.11.0.0/16 \
  --gateway=10.11.0.2 \
  ingress

Then, when I create the service, it works.

Question:

  • Why do I have to create a custom ingress network? I've read a ton: the official Docker documentation, many Q&A posts, etc. Nobody mentions this as a fundamental step in getting a swarm working. Why doesn't the default install with the default ingress work out of the box?
  • Is there a daemon.json configuration that tells Docker to use a subnet and gateway, as I have done on the command line, so that when the swarm starts it configures a usable ingress network?

Do you remember what I wrote in my first post:

Your network intersects with the ingress network, and potentially with other subnets from that range, if your host is connected or routes to them.

Nobody talks about it because it works out of the box for everyone who doesn't share your LAN IP range. For instance, I have known and used swarm mode since Docker 1.13, and I have never had to change it in any environment I worked with.

I would have assumed that setting the default address pool on every node and then initializing the swarm would create an ingress network within that range. It surprises me that the ingress doesn't respect the setting; I consider this a bug. You can report it in the upstream project (https://github.com/moby/moby/issues), which is the foundation of what Docker packages as docker-ce.

Update: it seems it can be provided as an argument when the swarm is initialized:

docker swarm init \
   --default-addr-pool 10.11.0.0/16 \
   --default-addr-pool-mask-length 24
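One way to picture what those two flags do (a sketch with Python's `ipaddress` module, not Docker's actual allocator): the 10.11.0.0/16 pool is carved into /24 blocks, and each network the swarm creates takes a block from it. Crucially, none of the 10.0.0.x node addresses fall inside that pool anymore:

```python
import ipaddress

# --default-addr-pool 10.11.0.0/16, --default-addr-pool-mask-length 24
pool = ipaddress.ip_network("10.11.0.0/16")
blocks = list(pool.subnets(new_prefix=24))

print(len(blocks))    # 256 /24 blocks available to the swarm
print(blocks[0])      # 10.11.0.0/24
print(blocks[1])      # 10.11.1.0/24

# The node LAN addresses no longer overlap the pool
print(ipaddress.ip_address("10.0.0.231") in pool)  # False
```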