Forbid docker-swarm node from publishing ports on the WAN interface

I have configured a single-node/manager docker swarm + Portainer on a remote machine with a WAN interface and an internal VLAN interface. I've configured my first stack:

version: '3.8'
services:
  django-postgresql:
    image: postgres:14.5-bullseye
    hostname: psql
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_ADMIN_PASSWORD}
    networks:
      - django-backend
    ports:
      - "127.0.0.1:5432:5432"
    expose:
      - 5432
    volumes:
      - postgresql-django-data:/var/lib/postgresql/data
      - postgresql-django-scripts:/opt/postgresql-django-scripts

networks:
  django-backend:

volumes:
  postgresql-django-data:
  postgresql-django-scripts:

I DO NOT want my services accessible on the WAN interface! How can I make docker swarm bind published ports to selected interfaces only (localhost or the private VLAN)?

Swarm mode doesn't allow mapping a port using host-ip:host-port:container-port; it only allows host-port:container-port. Swarm is intended as a multi-node orchestrator, and since a published port can end up on any node, pinning it to a specific host IP doesn't make sense there, so it was never implemented for swarm.

You have plenty of options:

  • use plain docker with docker compose, or run a single-node Kubernetes cluster with a distro like k3s instead
  • do not publish ports, and create your own iptables rules to map host ports to container ports (you are on your own here)
  • use the host network for your containers and block traffic from the WAN interface (you are on your own here; see the sketch after this list)
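For the host-network option, a host-networked container uses the host's network stack directly, so ordinary INPUT-chain filtering applies. A minimal sketch, assuming eth0 is the WAN interface and 5432 is the port the container listens on (both are assumptions to adapt):

# Block the WAN side for a host-networked container listening on 5432;
# traffic arriving on the VLAN interface or on loopback is unaffected.
iptables -A INPUT -i eth0 -p tcp --dport 5432 -j DROP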

The easiest solution would be to use plain docker with docker compose and use the host-ip:host-port:container-port mapping.
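For illustration, a minimal compose sketch of that mapping (names mirror the stack above; put your VLAN address in place of 127.0.0.1 if you want it reachable on the internal network instead):

services:
  django-postgresql:
    image: postgres:14.5-bullseye
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_ADMIN_PASSWORD}
    ports:
      # With plain docker compose the host IP prefix is honoured,
      # so this binds 5432 on loopback only, not on the WAN interface.
      - "127.0.0.1:5432:5432"
    volumes:
      - postgresql-django-data:/var/lib/postgresql/data

volumes:
  postgresql-django-data:

Deployed with docker compose up -d rather than docker stack deploy, the port never appears on the WAN interface.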


Thanks. I've deployed a few services in the docker swarm cluster using the following template:

services:
  postgres:
    # port 5432
    container_name: company-postgres
    hostname: company.postgres
    image: postgres:15
    restart: unless-stopped
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: ${POSTGRES_ADMIN_PASSWORD}
      PGDATA: /data/postgres
    volumes:
      - postgres-data:/data/postgres
      - postgres-scripts:/opt/scripts
    networks:
      - postgres-network
    logging:
      driver: local
      options:
        max-size: "50m"
        max-file: "20"

networks:
  postgres-network:
    attachable: true

volumes:
  postgres-data:
  postgres-scripts:

Access from outside the overlay networks is provided by nginx for HTTP and haproxy for TCP traffic. These two services are deployed via docker compose, so I can safely expose 443 on the WAN interface and a port for haproxy on the internal VLAN interface. Both containers are attached to a few docker swarm networks, and I reach the swarm services by hostname, for example company.postgres. This works until a swarm container restarts: it gets a new internal IP, so nginx/haproxy running under docker compose can no longer reach the swarm service by its hostname. I have to restart the nginx/haproxy containers so they re-attach to the swarm networks, and then it works again.
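For reference, the compose side attaches to the attachable swarm network roughly like this (simplified sketch; the address and file path are placeholders):

services:
  haproxy:
    image: haproxy:2.8
    restart: unless-stopped
    ports:
      # Published only on the internal VLAN address (example address).
      - "10.0.0.5:5432:5432"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    networks:
      - postgres-network

networks:
  postgres-network:
    # The attachable overlay network created by the swarm stack;
    # it may carry the stack name as a prefix, e.g. mystack_postgres-network.
    external: true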

How can I overcome this issue?

FYI: this single-node docker swarm cluster is my sandbox for gaining experience before running a production environment with at least 3 machines: a manager and two swarm nodes. It will host some internal services that should only be reachable on the VLAN interface - that's the reason why I'm playing with haproxy and nginx OUTSIDE docker swarm.

Ideally nginx/haproxy run in an attachable overlay network, and the target services are attached to the same network. Btw, nginx supports layer-4 traffic by declaring a stream block, which can be used for TCP.
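A minimal sketch of such a stream block, assuming the swarm service is reachable as company.postgres on the shared overlay network (the name is taken from the template above, the port is an assumption):

stream {
    server {
        # Layer-4 (TCP) proxying; this block lives outside the http block.
        listen 5432;
        proxy_pass company.postgres:5432;
    }
}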

What you experience in nginx is caused by DNS caching. You can mitigate DNS caching like this: NGINX swarm redeploy timeouts - #5 by meyay
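The general idea (a sketch, not necessarily identical to the linked post; the upstream name and port are placeholders) is to point nginx at Docker's embedded DNS and put the upstream in a variable, so the name is re-resolved instead of being cached for the lifetime of the worker:

http {
    server {
        listen 443;  # TLS configuration omitted for brevity

        location / {
            # 127.0.0.11 is Docker's embedded DNS server; re-resolve every 10s.
            resolver 127.0.0.11 valid=10s;
            # Using a variable forces nginx to resolve the name at request time
            # instead of once at startup.
            set $upstream http://company.webapp:8000;
            proxy_pass $upstream;
        }
    }
}

haproxy has an equivalent mechanism: a resolvers section pointing at 127.0.0.11, referenced from the server lines of a backend.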

Though, I would highly recommend using Traefik instead of nginx (and haproxy), as it updates the reverse proxy configuration based on (swarm) service labels or container labels (non-swarm services) whenever a task/container is created or deleted. I have never experienced any DNS caching issues with Traefik.
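To illustrate the label-driven approach, a sketch using Traefik v2 conventions (the hostname, ports, and network name are assumptions): Traefik watches the swarm API and reconfigures itself whenever tasks come and go, so there is nothing to restart.

services:
  traefik:
    image: traefik:v2.11
    command:
      - "--providers.docker.swarmMode=true"
      - "--providers.docker.exposedByDefault=false"
      - "--entryPoints.web.address=:80"
    ports:
      # Published via the swarm routing mesh (all interfaces).
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy
    deploy:
      placement:
        constraints: [node.role == manager]

  whoami:
    image: traefik/whoami
    networks:
      - proxy
    deploy:
      labels:
        # Routing is declared on the service; Traefik picks it up automatically.
        - "traefik.enable=true"
        - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
        - "traefik.http.services.whoami.loadbalancer.server.port=80"

networks:
  proxy:
    driver: overlay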


Setting up DNS caching in haproxy and nginx solved the issues. Thanks.