I am using Docker for Azure. My swarm has 1 manager and 3 worker nodes. I run two services, nginx and PHP, in global mode, so there are 4 instances of each across the swarm (a manager also functions as a worker by default).
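For reference, the setup is deployed roughly like this (service and image names here are illustrative, not the exact stack file):

```yaml
# deployed with: docker stack deploy -c stack.yml web
version: "3.3"
services:
  nginx:
    image: nginx:latest
    deploy:
      mode: global          # one task per node, including the manager
    ports:
      - "80:80"
      - "443:443"
    networks:
      - backend
  php:
    image: php:fpm
    deploy:
      mode: global          # one php task per node as well
    networks:
      - backend
networks:
  backend:
    driver: overlay
```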
Docker for Azure automatically configures a load balancer and publishes ports via load balancer rules, which is great.
I see that the Azure external load balancer has a backend pool containing the 3 worker nodes, but it does not include the manager node.
In the case where a service task is running on the manager node, how does traffic flow? From what I can gather, a call to nginx, which has ports 80 and 443 published externally, hits the load balancer, which would only distribute requests across the 3 worker nodes (even though the manager is also running nginx).
PHP seems different: the request to nginx gets passed on to PHP using the service name over the Docker overlay network, which distributes requests across all 4 instances of php.
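Concretely, the nginx-to-PHP hop looks something like this (the upstream name is assumed to match the service name, and port 9000 assumes php-fpm defaults):

```nginx
# /etc/nginx/conf.d/default.conf (fragment)
location ~ \.php$ {
    # "php" resolves via Docker's embedded DNS to the service's virtual IP,
    # which load-balances across all php tasks on the overlay network
    fastcgi_pass php:9000;
    fastcgi_index index.php;
    include fastcgi_params;
}
```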
Am I thinking about this correctly? If so, what is the best way to set something up that involves external ports published on the load balancer? Our swarm isn't big enough to justify a manager-only node, so the manager might as well host tasks too, but the automatic port publishing and backend-pool configuration on the load balancer don't seem to fully support this without manual intervention.
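The manual intervention I have in mind would be something like adding the manager's NIC to the load balancer's backend pool by hand, e.g. (every resource name below is a placeholder for whatever Docker for Azure actually created):

```shell
# Add the manager node's NIC IP configuration to the external LB backend pool.
# Resource names are placeholders; they would need to be looked up first.
az network nic ip-config address-pool add \
  --resource-group docker-swarm-rg \
  --nic-name swarm-manager-nic \
  --ip-config-name ipconfig1 \
  --lb-name externalLoadBalancer \
  --address-pool default
```

But I'd rather not fight the template's automation if there is a supported way to do this.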