Port forwarding into container on overlay network with docker swarm does not work

Generally I treat port forwarding into a container as an anti-pattern unless:

  • you have a specific service that can only run as a single replica and is configured to update stop-first, or
  • the service is deployed in global mode.
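
For the single-replica stop-first case, the relevant deploy settings would look roughly like this (a sketch; the service name and ports are placeholders):

services:
  myservice:
    image: myimage
    deploy:
      replicas: 1
      update_config:
        order: stop-first # old task stops before the new one binds the port
    ports:
      - target: 8080
        published: 8080
        mode: host

With start-first (the rolling-update-friendly order), the new task would try to bind the host port while the old one still holds it, which is exactly the conflict described below.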

The problem with doing it outside those cases is exactly what you are experiencing: port conflicts during updates. The same issue came up in Docker Swarm - Zero Downtime.

What I would recommend instead is to plop a reverse proxy in front of your dashboards and database UI, which I am presuming are web based.

I would recommend adding a simple Caddy image with a config that does the reverse proxying. Mark it as global, and you can expose it on a different port from your main web app server.

Something like


services:
  ...
  dashboards:
    image: caddy
    configs:
      - source: dashboards-caddyfile
        target: /etc/caddy/Caddyfile
    volumes:
      - dashboards-caddy-data:/data
      - dashboards-caddy-config:/config
    deploy:
      mode: global
      resources: # Limit the resources no need to let it go wild.
        reservations:
          cpus: "1.0"
          memory: 256M
        limits:
          cpus: "1.0"
          memory: 256M
    ports:
      - target: 80
        published: 12345 # assuming no HTTPS
        mode: host # if you want the real IP sent
    cap_add:
      - NET_BIND_SERVICE
  ...
configs:
  dashboards-caddyfile:
     ...
volumes:
  dashboards-caddy-data:
     ... # can be an EFS/NFS mount
  dashboards-caddy-config:
     ... # can be an EFS/NFS mount

The Caddyfile would look something like (see reverse_proxy (Caddyfile directive) — Caddy Documentation for reference)

:80 {
  reverse_proxy /yubi/* yubi:15443
  reverse_proxy /dashboard1/* dashboard1:7000
  reverse_proxy /dashboard2/* dashboard2:9000
}

Note this also assumes that your backends have the notion of a path PREFIX; otherwise you’d have to publish additional ports for each one.
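
If a backend cannot be configured with a prefix, another option is to have Caddy strip the prefix before proxying, using the handle_path directive (a sketch, reusing the hypothetical dashboard1 backend from above):

:80 {
  handle_path /dashboard1/* {
    # the matched /dashboard1 prefix is stripped before the request is proxied
    reverse_proxy dashboard1:7000
  }
}

Keep in mind this only rewrites the request path; if the backend generates absolute links or redirects, it still needs to be prefix-aware.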
