Docker stack, ufw, nginx

Hi all!

My issue is that docker stack deploy -c compose.yml site still writes iptables rules, even though I have { "iptables": false } in my daemon.json file.

Does anyone know how to stop docker stack deploy from writing iptables rules that listen on public ports? Or am I missing something obvious?
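For context, the relevant setting in /etc/docker/daemon.json (a minimal sketch; any other settings are omitted) looks like this:

```json
{
  "iptables": false
}
```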

I’m using nginx as a reverse proxy to forward all my docker services to my domain. However, the ports are still opened publicly by docker.

When I run a container with something like docker run -d -p 5000:3000 site, port 5000 is not reachable from the public internet. With docker stack deploy, however, it is.

I have a docker stack with a compose file like this:

version: "3"
services:
  site:
    image: repo/site
    deploy:
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 5
        window: 120s
    ports:
      - 5000:3000
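(For comparison, compose file format 3.2+ also has a long-form ports syntax; with mode: host the port is published only on the node running the task, bypassing the swarm ingress mesh. A sketch, untested in this setup:)

```yaml
ports:
  - target: 3000
    published: 5000
    protocol: tcp
    mode: host
```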

I’ve allowed my containers to access the internet by manually adding the rules below. The second one is for docker_gwbridge, which I believe swarm needs so its containers can reach the internet.

*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
COMMIT

*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING ! -o docker_gwbridge -s 172.19.0.0/16 -j MASQUERADE
COMMIT
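(These are iptables-save fragments; the equivalent one-off commands would roughly be the following, assuming the same subnets and bridge names:)

```shell
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 172.19.0.0/16 ! -o docker_gwbridge -j MASQUERADE
```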

My ufw status verbose

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp (OpenSSH)           ALLOW IN    Anywhere
443/tcp (Nginx HTTPS)      ALLOW IN    Anywhere
80/tcp (Nginx HTTP)        ALLOW IN    Anywhere
22/tcp (OpenSSH (v6))      ALLOW IN    Anywhere (v6)
443/tcp (Nginx HTTPS (v6)) ALLOW IN    Anywhere (v6)
80/tcp (Nginx HTTP (v6))   ALLOW IN    Anywhere (v6)

and my iptables -L | grep -A 5 -B 2 -i docker

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-INGRESS  all  --  anywhere             anywhere
ufw-before-logging-forward  all  --  anywhere             anywhere
ufw-before-forward  all  --  anywhere             anywhere
ufw-after-forward  all  --  anywhere             anywhere
ufw-after-logging-forward  all  --  anywhere             anywhere
ufw-reject-forward  all  --  anywhere             anywhere
--
ufw-track-output  all  --  anywhere             anywhere

Chain DOCKER-INGRESS (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:5000
ACCEPT     tcp  --  anywhere             anywhere             state RELATED,ESTABLISHED tcp spt:5000
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:50000
ACCEPT     tcp  --  anywhere             anywhere             state RELATED,ESTABLISHED tcp spt:50000
--
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Chain ufw-after-forward (1 references)
target     prot opt source               destination

So your containers are instances behind a subdomain, and you have an additional nginx container load-balancing requests to the correct containers?

    _ A
   / 
LB -- B
   \_ 
      C

If so, you could just drop the ports section and use the DNS entry tasks.wwwA (wwwA being your service name) within the load balancer. It will be resolved to one of the healthy wwwA tasks.
If you want to be in control yourself, you can do a DNS lookup and get all the A records for the particular service.

# nslookup tasks.myservice
Server:    127.0.0.11
Address 1: 127.0.0.11
Name:      tasks.myservice
Address 1: 10.0.0.13 myservice.1.ygx1zy6cuay9w3ownyplzux4w.proxy
Address 2: 10.0.0.15 myservice.2.iutlkj12kdu90s0dldkdk4dnl.proxy
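In nginx specifically, you can have the tasks.<service> name re-resolved at proxy time by pointing nginx at Docker's embedded DNS server (a rough sketch; the service name myservice and port 8080 are assumptions):

```nginx
# inside an nginx container attached to the same overlay network
resolver 127.0.0.11 valid=10s;

server {
    listen 80;
    location / {
        # using a variable forces nginx to resolve the name per request
        set $backend tasks.myservice;
        proxy_pass http://$backend:8080;
    }
}
```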

That’s how Prometheus is able to scrape Docker services, by the way.

scrape_configs:
  - job_name: 'myservice'
    dns_sd_configs:
      - names: ['tasks.myservice']
        type: A
        port: 8080

If I misunderstood your question, this might help you go in another direction. With compose file version 3.3 you can use this:

version: '3.3'
services:
  httpcheck:
    image: qnib/plain-httpcheck
    networks:
      - outside

networks:
  outside:
    external:
      name: "host"

This does not publish a port per service, but instead uses the host's network, so the ports are bound directly on the host running the docker engine.

docker run -ti --rm --net=host qnib/httpcheck curl localhost:8080
Welcome: 127.0.0.1

But since it uses the host network, only one container (task/process) on the host can bind to that port.

$ docker service update --replicas=2 http_httpcheck
$ docker logs <failed container>
[II] qnib/init-plain script v0.4.28
> execute entrypoint '/opt/entry/00-logging.sh'
> execute entrypoint '/opt/entry/10-docker-secrets.env'
[II] No /run/secrets directory, skip step
> execute entrypoint '/opt/entry/99-remove-healthcheck-force.sh'
> execute entrypoint '/opt/qnib/entry/00-delay.sh'
> Wait for 0s
> execute CMD 'go-httpcheck'
2017/10/02 09:49:40 listen tcp4 0.0.0.0:8080: bind: address already in use

Thanks for your response.

I actually have a LB outside of docker, but I can try creating an nginx container and resolving via DNS. I will reply here once I have completed the steps you suggested.