I’m not sure why dockerd is listening on :::80 and :::443.
My only published ports are 80 and 443, and only on the load-balancer node (a worker node). So I’m just curious why they are also open on a manager node.
By default a published port for a swarm service will use the ingress routing mesh. As such the port will be bound on every node in the cluster for the routing mesh.
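As a sketch of what that looks like (service name and check are illustrative, not from your setup): publishing a port in the default ingress mode makes dockerd bind it on every node, even nodes that run no task of the service.

```shell
# Publish port 80 in the default (ingress) mode -- "web" is a placeholder name
docker service create --name web --publish published=80,target=80 nginx

# On ANY node in the swarm -- manager or worker, running a task or not --
# dockerd is now listening on the published port for the routing mesh:
ss -tlnp | grep ':80'
```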
Thank you @meyay for your quick response!
I read this: Use overlay networks | Docker Documentation, but it’s not clear to me how to set up swarm for production.
Ideally I’d have one node with a load-balancer container (in my case Caddy) with its ports exposed to the internet on eth0, and the rest of the swarm on the private eth1.
I did init the swarm on eth1; I naively thought that would do the trick.
Is it even a problem that every node in the cluster exposes the ports of the routing mesh?
I hope you used the private eth1 address as advertise address and used it to join the nodes to the cluster. You don’t want to expose the cluster backend to the internet…
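To illustrate (the 10.0.0.x addresses below are placeholders for the private eth1 IPs), keeping the cluster backend on the private interface looks roughly like this:

```shell
# On the first manager: advertise and listen only on the private eth1 address
docker swarm init --advertise-addr 10.0.0.10 --listen-addr 10.0.0.10

# On each additional node: join via that private address as well
docker swarm join --token <worker-token> 10.0.0.10:2377
```

That keeps the swarm management traffic (port 2377, plus 7946 and 4789 for node communication and the overlay data plane) off the public interface.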
By default Docker will bind a published port to all available IPs. Swarm does not let you restrict which IP the ports get published on, while plain Docker containers do allow that.
It really depends on what you’re trying to achieve. Starting with docker compose file version 3.2, you can use the long syntax to publish ports with mode: host, which binds the port only on the node the container is currently running on. Assuming you use a placement constraint that forces the container to run on one specific node, the port will only be bound on that node.
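A minimal compose sketch of that, assuming a node with the hypothetical hostname lb-node:

```yaml
version: "3.2"
services:
  caddy:
    image: caddy:2            # example image
    ports:
      - target: 80            # container port
        published: 80         # host port
        protocol: tcp
        mode: host            # bind only on the node running the task
      - target: 443
        published: 443
        protocol: tcp
        mode: host
    deploy:
      placement:
        constraints:
          - node.hostname == lb-node   # placeholder hostname
```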
If you have a (cloud) load balancer in front of your swarm, you can simply assign a node label to one or more nodes, deploy the service in mode: global, and publish the ports in mode: host with a placement constraint that matches your node label. Then you can use the machines carrying the node label as targets for the (cloud) load balancer to get an HA setup.
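Sketching that (node names and the label key are made up for the example): first label the nodes that should receive traffic,

```shell
docker node update --label-add loadbalancer=true lb-node-1
docker node update --label-add loadbalancer=true lb-node-2
```

then constrain the globally deployed service to those nodes:

```yaml
    deploy:
      mode: global             # one task per matching node
      placement:
        constraints:
          - node.labels.loadbalancer == true
```

With mode: host ports, only lb-node-1 and lb-node-2 listen on 80/443, and those two machines become the targets of the external load balancer.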
Added bonus: if you have no other load balancer in front of your Caddy container, it will be able to see the client’s IP address. Ingress will always replace the client IP with a container-network-internal bridge IP (btw. the same holds true if you use plain Docker containers in a bridge network and publish ports).
I hadn’t run into that problem yet, but that’s great to know. I think I never noticed the issue because I have a WAF in front that forwards everything with XFF, so the applications are able to use it (for HTTP at least).