I’ve enabled swarm configuration on my home docker host and I am not understanding how to give a swarm service a static IP address on my home network that is accessible to browsers on my home network.
I have the sample hello-world service running in a docker swarm stack that publishes port 80.
Without the swarm configuration, using a macvlan network I could assign an IP address to the container, and the container would be available at that address.
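For reference, the non-swarm setup looked roughly like this (the subnet, gateway, and parent interface are just what my home LAN happens to use):

```sh
# Non-swarm macvlan setup (sketch): network values are specific to my LAN,
# adjust to yours
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 lan_macvlan

# The container gets a LAN-visible static IP
docker run -d --name web --network lan_macvlan --ip 192.168.1.50 nginx
```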
I’d like to do something similar within a swarm. The closest I could find via googling was this post from 2020 using shell scripts and iptables.
We had a need to publish separate docker swarm services on the same ports, but on separate specific IP addresses. Here’s how we did it.
Docker adds rules to the DOCKER-INGRESS chain of the nat table for each published port. The rules it adds are not IP-specific, hence normally any published port will be accessible on all host IP addresses. Here’s an example of the rule Docker will add for a service published on port 80:
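(Reconstructed from memory; the DNAT target is whatever address Docker assigned to the ingress sandbox, so treat it as a placeholder.)

```
-A DOCKER-INGRESS -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.18.0.2:80
```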
I really would like to avoid hacking shell scripts and handle this with a custom configuration to a docker ingress network but have not yet been able to find examples.
Note: I enabled the swarm configuration to give me access to the Secrets functionality
Can I turn this problem on its head and somehow have physical machines on my home network participate in the swarm service discovery process? For example, can the swarm be an mDNS provider?
You can use extra_hosts: to inject name resolution, which only makes sense if your local dns server does not already provide name resolution for these hosts.
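For example (hostname and address are placeholders):

```yaml
services:
  app:
    image: nginx
    extra_hosts:
      # static entry injected into the container's /etc/hosts
      - "nas.home.lan:192.168.1.20"
```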
The container nameserver will use the configured nameservers from the host as upstream. Thus, usually a container can look up the same dns entries as the host and can reach whatever the host can reach.
Is there any chance you could point me to some documentation on this topic?
Name resolution from the physical workstations (Windows, ChromeOS, macOS) is the problem I want to solve. The docker dnsrr service handles name resolution between the containers; I was just looking for a way to allow clients not participating in the swarm to also leverage the dnsrr service.
Maybe I need to put a software load balancer between the physical network and the swarm (single node) services as shown in this picture.
The client cannot participate in the swarm. Outside clients are supposed to use the published service port, which then distributes traffic to the service tasks in dnsrr manner. That is, if you configured the service to use endpoint-mode: dnsrr; otherwise the service will use a vip and load balance the traffic itself.
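In a stack file that looks like this (note the compose key is endpoint_mode; also, a dnsrr service can't use the ingress routing mesh, so its ports have to be published in host mode):

```yaml
services:
  web:
    image: nginx
    deploy:
      endpoint_mode: dnsrr   # default is "vip"
    ports:
      - target: 80
        published: 8080
        mode: host           # required with dnsrr; no routing mesh
```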
What’s your exact objective? Do you want the clients to be able to reach services based on their domain name? If so, then you need to register those domains in your dns and resolve them to the ip of your swarm nodes, and then you need a reverse proxy with rules that forward traffic for a specific domain to a specific service. Though, this is on service level, not on service task (=container) level.
Some resolvers, like PiHole or Unbound, allow you to inject dns overrides, which can help with registering the domains and resolving them to the swarm node ips. Docker itself is not responsible for dns resolution for outside clients.
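As a sketch, an Unbound override could look like this (domain and IP are placeholders):

```
# unbound.conf, inside the server: section
local-zone: "home.lan." transparent
local-data: "whoami.home.lan. IN A 192.168.1.10"
```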
When it comes to the reverse proxy, using Traefik is highly recommended. Traefik allows configuring reverse proxy rules for swarm services using service labels. Updating the proxy rules is then bound to the lifecycle of the service/containers, so that rules are applied when something is deployed and removed when the deployment is removed.
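A minimal sketch, assuming Traefik v2+ with its swarm provider enabled and attached to a shared overlay network (domain, network name, and port are placeholders). Note that for swarm services the labels must sit under deploy:, not at service level:

```yaml
services:
  whoami:
    image: traefik/whoami
    networks:
      - proxy
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.whoami.rule=Host(`whoami.home.lan`)
        - traefik.http.services.whoami.loadbalancer.server.port=80

networks:
  proxy:
    external: true   # shared overlay network that Traefik is attached to
```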
There is a lot in here for me to chew on. There are two goals.
The first is just to enable swarm mode on my home network to familiarize myself with it and to host services for my physical home computers to consume.
But the ultimate goal is to present options for my employer to host legacy apps in a swarm backed by a docker cluster in an enterprise environment. The legacy apps are primarily used by end users on physical machines. The key here is that we are an educational institution that does not have the capital budget for expensive licensed solutions.
Whatever I can piece together in my home environment is what I will end up proposing to my employer.
I am quite sure you are running your own internal dns solution in your institution, so the name resolution part should be easy to cover. Just add the domains you want and point their A records to the swarm cluster. In your home environment you can still use PiHole or Unbound to cover the dns resolution part, and use the organization's dns server in the target environment.
When it comes to Traefik: you don't necessarily need Traefik EE, the free version is usually sufficient. You also don't have to use Traefik at all; you can create your own reverse proxy with nginx or apache. Though personally I like how reliably it works with Traefik, as it doesn't have the dns caching issues that you might experience with self-created reverse proxy rules on nginx or apache.
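For comparison, a hand-rolled nginx rule of the kind I mean (names are placeholders). The caching issue comes from nginx resolving the upstream name only once at startup, unless you force runtime resolution via a resolver directive and a variable:

```nginx
server {
    listen 80;
    server_name whoami.home.lan;

    location / {
        # re-resolve via Docker's internal DNS instead of caching at startup
        resolver 127.0.0.11 valid=10s;
        set $upstream http://whoami:80;
        proxy_pass $upstream;
    }
}
```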
So the only cost you have will be the time you invest.
Perhaps not the same objective as the original poster, but in my case, I’d like to be able to access internal docker containers via a secure VPN. For example, to be able to do database maintenance without exposing the port or needing to know what host the db was deployed to. Right now, I can only seem to find some pretty hack-y approaches to this, like using “split dns” with Docker’s internal resolver or always having to manually jump onto the container to figure out its IP address to be able to connect to it via VPN.
Giving it a static IP would solve this… though additionally, ideally this would work in conjunction with ip_range, so that a segment of the subnet is excluded from automatic allocation… which also appears not to be implemented in compose v3 (now versionless? or something?), at least.
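For reference, what I mean is the ipam ip_range option on the network definition, something like this (subnets are placeholders; whether a swarm stack actually honors it is exactly what seems to be missing):

```yaml
networks:
  backend:
    driver: overlay
    ipam:
      config:
        - subnet: 10.0.9.0/24
          ip_range: 10.0.9.0/25   # auto-allocate only from this half,
                                  # leaving the rest free for static use
```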
Anyway, any thoughts on alternative approaches for how to connect to the service without exposing it would be appreciated. We could hypothetically do this entirely with our own networking layer in front of docker swarm, but that feels like duplicating what the overlay network already does.
v3 was created with swarm in mind and was meant to be used for swarm stack deployments, while v2 was aimed at compose deployments. Compared to v2, some elements were either not supported (like ipv4_address) or moved to a different location, sometimes even with a new name (like restart vs. restart_policy). The new compose file reference unified the syntax, so that a compose file works on a recent docker version regardless of whether it is deployed as a compose project or a swarm stack. The availability of a feature still depends on whether it is deployed as a compose project or a swarm stack.
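To illustrate the relocation with the restart example (under the unified reference both keys can coexist in one file; each deployment type simply ignores the other's key):

```yaml
services:
  app:
    image: nginx
    restart: unless-stopped    # v2-style, used by compose deployments
    deploy:
      restart_policy:          # v3/swarm-style, used by stack deployments
        condition: on-failure
```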
Since no release notes mentioned swarm services to be able to use fixed ip addresses, it is safe to say that it’s still not supported.
Thanks for the note, but I think this solves a different issue – we do have VPN working already; the issue is name resolution of the IP addresses / container names within Swarm. That said, perhaps I missed some aspect of what wg-easy does.
I’m currently approaching this by writing a script that polls the current containers on a specific overlay network via the docker API and feeds a dnsmasq container to resolve names/IPs. It feels like something that should be easier to do, since it seems like a commonly desirable setup with Swarm for ongoing ops and maintenance of things like DBs.
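For what it's worth, the core of the script is little more than this sketch (the network name, hosts file path, and dnsmasq wiring are specific to my setup; also note that for overlay networks, docker network inspect only lists containers on the local node unless you pass --verbose on a manager):

```sh
#!/bin/sh
# Regenerate a dnsmasq hosts file from the containers attached to an
# overlay network, then tell dnsmasq to reload it.
NETWORK=backend                       # overlay network to watch
HOSTSFILE=/etc/dnsmasq.d/swarm.hosts  # referenced via addn-hosts=

docker network inspect "$NETWORK" --format \
  '{{range .Containers}}{{.IPv4Address}} {{.Name}}{{"\n"}}{{end}}' \
  | sed 's|/[0-9]*||' > "$HOSTSFILE"  # strip the /24 CIDR suffix

pkill -HUP dnsmasq                    # SIGHUP re-reads addn-hosts files
```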