I have a working setup for Docker Swarm with a Django app and Nginx (as a Docker service). But I ran into the common problem that I can’t get the client’s IP address. I read a lot about it and decided to try using the host’s network for Nginx, as many people recommend.
I can say from experience that using the long syntax to publish ports in host mode indeed does what you are looking for.
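For reference, a minimal sketch of what I mean, assuming nginx publishes port 80 (the service name and ports are placeholders):

```yaml
version: "3.7"
services:
  nginx:
    image: nginx:latest
    ports:
      # long syntax: mode host binds the port directly on the node
      # running the task and bypasses the ingress routing mesh,
      # so nginx sees the real client IP
      - target: 80
        published: 80
        protocol: tcp
        mode: host
```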
Though, since you didn’t share a complete compose file, it is impossible to say whether you actually defined networks or not. I have never used a compose file for stack deployments without explicitly declaring and assigning the networks.
As the lack of networks is the only difference between what I know works and your configuration, I suggest declaring and assigning a network in your stack and your services.
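As a sketch of what I mean (the network name "backend" and the overlay driver are just an example, not taken from your stack):

```yaml
version: "3.7"
services:
  nginx:
    networks:
      - backend
  app:
    networks:
      - backend

networks:
  backend:
    driver: overlay   # overlay networks span the swarm nodes
```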
Still, I would strongly recommend taking a look at Traefik and using it instead of creating your self-configured nginx reverse proxy. Note: with swarm deployments, the labels that configure the reverse proxy rules need to be service deploy labels, not container labels.
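For illustration only, this is roughly how it looks with Traefik v2 (the router name, domain and port are placeholders I made up):

```yaml
version: "3.7"
services:
  app:
    image: myorg/django-app:latest   # placeholder image
    deploy:
      labels:
        # deploy labels, not container labels, for swarm stacks
        - "traefik.enable=true"
        - "traefik.http.routers.app.rule=Host(`example.com`)"
        - "traefik.http.services.app.loadbalancer.server.port=8000"
```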
I attached the full compose config at the end of my reply (with the new ports; I replaced some lines with …, but they don’t contain any docker/network-related information). As you can see, I don’t have any network configuration at all, so it’s fully the default one for swarm. The compose file version is 3.7 and the OS is Linux.
Traefik could be good, but I already have a fully configured Nginx with CI/CD integration, so I would not like to remove it. And yes, I use the service name in the Nginx conf.
Could you perhaps share a working network configuration with me?
And one more important question: if I set the port mode to host, does it fully turn off the swarm load balancer? So in the domain configuration, will I have to point to the manager’s or the worker server’s IP? (Once the configuration works, of course.)
Your example looks fine, though I would suggest temporarily removing the driver opts from the network declaration and re-adding the network attachment in the service definition.
I guess by “fully turn off” you mean whether a published port with mode: host bypasses the ingress mesh load balancer? Of course it does. Would you expect a different behavior from mode: host?
If you bind a host port, you bind the port on that particular host. If you have 3 worker nodes, but run a single replica, the port will only be bound on the host where the container task is running!
If you want your nginx service to stick to a specific worker node, you could add a node label and use it as a placement constraint.
Add node label: docker node update --label-add mylabel=true {node name or id}
Add placement constraint:
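A sketch of how that could look in the compose file (the label name matches the example command above):

```yaml
services:
  nginx:
    deploy:
      placement:
        constraints:
          # schedule the task only on nodes carrying the label
          - node.labels.mylabel == true
```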
If you want your nginx service to run on all worker nodes, you can configure your deployment as a global service (= exactly 1 instance per host). If you want your nginx service to be restricted to a subset of the worker nodes, you can use node labels as placement constraints.
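A sketch of a global deployment, optionally combined with the node-label constraint from above:

```yaml
services:
  nginx:
    deploy:
      mode: global                        # exactly one task per matching node
      placement:
        constraints:
          - node.labels.mylabel == true   # restrict to labelled nodes only
```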
Thank you, your detailed explanation made everything clearer.
And I finally figured out what was wrong. The problem was actually the nginx build cache: after I changed the ports to host mode, I cleared the cache (and containers, networks, etc.) and rebuilt everything. That solved the problem.
My experience for other readers:
It’s enough to change the ports settings of the nginx service; there was no need for further network configuration.
Pay attention to the build cache
And don’t forget to change the IP address (DNS record) to point to the host running Nginx instead of the swarm load balancer.