Docker exposed ports are not accessible from all remote networks

Hi, I’m a new starter.

I found an odd behavior. My setup is the following:

My client sits on LAN1.
The Docker Linux server sits on LAN2.
Between those two LANs, there is an OPNsense firewall.
The OPNsense firewall is running a VPN client to a VPN provider.
I have a port forwarding rule from the external VPN provider IP to my Docker Linux server. This works fine for any local (non-Docker) ports.
“netstat -tuplen” lists all expected ports on the Linux server (Docker and non-Docker).

LAN1 to LAN2 communication:
If I connect to a port hosting a web interface, let’s say TCP 3333, it works fine.
Any communication from LAN1 to LAN2 works without issue.

LAN1 to VPN IP:
Communication to ports not hosted by Docker is fine; ports hosted by Docker behave oddly. If I do a “telnet <external VPN IP> 3333”, the port appears to be open and accepts TCP connections, but trying to access the very same web interface results in a timeout. I can see that the traffic is forwarded fine through OPNsense and reaches the Linux server on the correct port, yet I can’t access the web interface. If I redirect the same rule to a port hosting a natively installed web interface, everything is fine.
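
For reference, these are roughly the tests I ran (the IPs are placeholders):

```
# TCP connect via the VPN IP: the port accepts the connection
telnet <external VPN IP> 3333

# HTTP request to the same port via the VPN IP: times out
curl -v --max-time 10 http://<external VPN IP>:3333/

# same request against the server's LAN2 IP: works fine
curl -v http://<LAN2 server IP>:3333/
```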

So my feeling is that something on the Linux server is blocking this communication, but I have no clue what is happening here.

In short: LAN1 to LAN2 communication works. Communication from the Internet through the VPN only works for native ports, not for Docker ports.

ANY help is much appreciated.

Please share how you create those containers (either the exact docker run commands, or if Compose is used, the content of the compose file). Furthermore, if the containers are attached to a network that is created with docker network create or is external: true in the compose file, please also share the output of docker network inspect <networkname>. Lastly, we need the output of docker info.

Make sure to use code blocks (the </> icon, or three backticks ``` on separate lines before and after the output) when you share the outputs.
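
For example (the network name is a placeholder; docker network ls shows the existing networks):

```
docker network ls
docker network inspect <networkname>
docker info
```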

Hi @meyay, thanks for the swift response.

First, I use Docker Compose for all of my containers. Here is the compose file for the one container I used for testing:

services:
  bitmagnet:
    image: ghcr.io/bitmagnet-io/bitmagnet:latest
    container_name: bitmagnet
    ports:
      # API and WebUI port:
      - "3333:3333"
      # BitTorrent ports:
      - "3334:3334/tcp"
      - "3334:3334/udp"
    restart: unless-stopped
    environment:
      - POSTGRES_HOST=postgres
      - POSTGRES_PASSWORD=postgres
    #      - TMDB_API_KEY=your_api_key
    volumes:
      - ./config:/root/.config/bitmagnet
    command:
      - worker
      - run
      - --keys=http_server
      - --keys=queue_server
      # disable the next line to run without DHT crawler
      - --keys=dht_crawler
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:16-alpine
    container_name: bitmagnet-postgres
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    #    ports:
    #      - "5432:5432" Expose this port if you'd like to dig around in the database
    restart: unless-stopped
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=bitmagnet
      - PGUSER=postgres
    shm_size: 1g
    healthcheck:
      test:
        - CMD-SHELL
        - pg_isready
      start_period: 20s
      interval: 10s

I did try it with

networks:
  host:
    external: true

at the end, but this had no effect.

The output of docker network inspect for the default created network is this:

[
    {
        "Name": "bitmagnet_default",
        "Id": "5b230620e5edecdb4c1603890cb12fdae0488c1ad427365bfc87a0f06a4f4659",
        "Created": "2025-05-02T14:38:50.223475345+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "03af13daec95e914a37cbae0320c46e929bd98d6b78f9dfa8aada035bdb57ca3": {
                "Name": "bitmagnet",
                "EndpointID": "5905bc007d8fe95a2a5a6ffd4f551eaf9fe56ff11faf394a82f24833821231bd",
                "MacAddress": "aa:d5:4d:ab:6b:fc",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "5e1c1fd459a273edb12858d542088c3f0e692bc5ba93dc8ac1440080cc10ae17": {
                "Name": "bitmagnet-postgres",
                "EndpointID": "d017578a2659867c8515d808346b73a603b275727f831f6b6c481b5ba72ed9c1",
                "MacAddress": "9e:03:62:33:74:82",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.config-hash": "0d2035fd2be972420d0b35a699a00c1e8df43c57b81eb4a0c3828a3b3b24e5af",
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "bitmagnet",
            "com.docker.compose.version": "2.35.1"
        }
    }
]

and here is the output of docker info:

Client: Docker Engine - Community
 Version:    28.1.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.23.0
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.35.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 5
  Running: 5
  Paused: 0
  Stopped: 0
 Images: 9
 Server Version: 28.1.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05044ec0a9a75232cad458027ca83437aae3f4da
 runc version: v1.2.5-0-g59923ef
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.1.0-34-amd64
 Operating System: Debian GNU/Linux 12 (bookworm)
 OSType: linux
 Architecture: x86_64
 CPUs: 6
 Total Memory: 31.05GiB
 Name: Deb2
 ID: a72346ea-0a39-4590-8061-f108f9c1431c
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  ::1/128
  127.0.0.0/8
 Live Restore Enabled: false

Do you think it’s the same issue as this: Exposed ports with net=container: - #5 by sycolth

Sounds like it.

Thank you for sharing the details.

Now we can see that you use a user-defined bridge network and publish ports that are bound to all IPs on the host.

So either the IP range of the VPN subnet collides with one of the Docker user-defined networks, or indeed the return route for the responses is missing, either on the host itself or on the default gateway that the host uses. It could very well be the same issue sycolth had.

This doesn’t even work for me.

If you want a container to use host networking, then network_mode: host is the way to go (see: https://docs.docker.com/reference/compose-file/services/#network_mode). Note: you can either attach containers to one or more Docker networks, or use network_mode. The two are mutually exclusive.
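
A minimal sketch of what I mean, using your service as an example (everything apart from the image is illustrative):

```yaml
services:
  bitmagnet:
    image: ghcr.io/bitmagnet-io/bitmagnet:latest
    network_mode: host   # binds directly to the host's network stack
    # no ports: section - port publishing does not apply with network_mode: host
```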

Thanks, I’ve tried this for both containers and for just the main one; the issue is that it can’t connect to the postgres container.

It only works if I add

networks:
  host:
    external: true

to the compose file.

I don’t know how to identify what the issue is.
The route back should be fine, at least from the LAN2 IP; I’m not sure about from inside the container, but it’s working fine with natively installed apps.

I’ve checked the route from the LAN2 IP to 8.8.8.8, and no hop on the path uses the 172 range. The VPN itself uses networks in the 10 range and is just one hop away from the public IP.

To me, this looks like some sort of blocking when the request does not come from a private IP address range. Or something like that?

If a container uses network_mode: host, the postgres container must publish a port, so that the container using the host network can access it via localhost:<published host port>. Or you use host networking for the postgres container as well; then it’s localhost:5432.
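
For the first option, a sketch of the relevant part (the published host port is illustrative):

```yaml
services:
  postgres:
    image: postgres:16-alpine
    ports:
      - "5432:5432"   # published, so a host-networked container can reach it via localhost:5432
```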

Note: DNS-based service discovery only exists for user-defined networks. With network_mode: host, the container can’t be attached to any user-defined network.
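
To illustrate the difference for the POSTGRES_HOST setting:

```yaml
environment:
  # on a user-defined network, the service name resolves via Docker's embedded DNS:
  # - POSTGRES_HOST=postgres
  # with network_mode: host on both containers, there is no service discovery:
  - POSTGRES_HOST=localhost
```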

Docker does not block anything. People complain about the opposite: that it opens ports in the firewall as soon as you publish a port. It doesn’t care where ingress traffic originates from.

Bridge networks are private NATed networks. Egress traffic is masqueraded so that the container IP gets replaced by the host IP and looks like traffic originating from the host: it will use whatever routes exist in the routing table and whichever interfaces allow reaching them.

Ingress traffic is handled pretty much the same way home-grade routers handle port forwarding from a WAN port to a LAN ip:port: instead of a WAN IP, you use the Docker host’s IP, and instead of forwarding to a LAN ip:port, you publish a container port on a host port.
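
If you want to see the NAT rules Docker creates for this, you can inspect the nat table on the host (a sketch; the exact rules depend on your networks and published ports):

```
# masquerading rules for egress traffic from the bridge networks
sudo iptables -t nat -S POSTROUTING | grep MASQUERADE

# DNAT rules for published (ingress) ports
sudo iptables -t nat -S DOCKER
```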

Sorry, you lost me on this. I have no idea what to alter.
Can you adjust the compose file, and I’ll check if it runs better?

Can you put in your own words what you understood, and what part you didn’t understand, so I can try to fill in the gap?

I don’t think your Docker networking is the general issue; it’s more likely that routes back to the VPN subnet are missing.
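
A quick way to check from the Docker host (the IP is a placeholder for a client coming in via the VPN):

```
# shows which interface/gateway the host would use for the response
ip route get <VPN client IP>

# the full routing table for comparison
ip route show
```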

It depends on what you want.

So, it seems I do not understand what you are suggesting I should do.

As soon as I add network_mode: host to the bitmagnet container, it can’t reach the postgres DB. Adding network_mode: host to the postgres container does not help. I think you understood this and proposed doing something with localhost:5432, but I don’t know where to add it or how.

As well, the YAML formatting drives me mad. When I try to publish the ports as listed in the example (5432:5432), the YAML does not work whatsoever. I have no clue how to format this properly.

Here is your updated compose file (I added comments where I changed things):

services:
  bitmagnet:
    image: ghcr.io/bitmagnet-io/bitmagnet:latest
    container_name: bitmagnet
    network_mode: host # added by meyay
    # meyay: commenting out port publishing, as there is no port publishing when network_mode is used!
    # ports:
    #   # API and WebUI port:
    #   - "3333:3333"
    #   # BitTorrent ports:
    #   - "3334:3334/tcp"
    #   - "3334:3334/udp"
    restart: unless-stopped
    environment:
      # meyay: modified to talk to the postgres container with network_mode: host
      - POSTGRES_HOST=localhost
      - POSTGRES_PASSWORD=postgres
    #      - TMDB_API_KEY=your_api_key
    volumes:
      - ./config:/root/.config/bitmagnet
    command:
      - worker
      - run
      - --keys=http_server
      - --keys=queue_server
      # disable the next line to run without DHT crawler
      - --keys=dht_crawler
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:16-alpine
    container_name: bitmagnet-postgres
    network_mode: host # added by meyay
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    #    ports:
    #      - "5432:5432" Expose this port if you'd like to dig around in the database
    restart: unless-stopped
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=bitmagnet
      - PGUSER=postgres
    shm_size: 1g
    healthcheck:
      test:
        - CMD-SHELL
        - pg_isready
      start_period: 20s
      interval: 10s

Note: I have no idea what bitmagnet does, but it looks strange that you specify POSTGRES_DB on the database container while the application container has no configuration for it. I assume it’s the default configuration.

Thank you very much, now I understand what you mean.

I think I understood. Will play around with it now.

Welcome!

Good idea! That’s how things end up in the mid- and long-term memory :slight_smile:

1 Like