Assign first-class IP address with swarm-service

I’ve enabled swarm mode on my home docker host, and I don’t understand how to give a swarm service a static IP address on my home network that browsers on that network can reach.

I have the sample hello-world service running as a docker stack in the swarm, publishing port 80:

version: "3.9"

services:
  hello:                    # service name assumed; elided in the original
    image: nginxdemos/hello
    ports:
      - 80:80

The service is accessible if I use the docker host’s IP address, but I’d like the service to be reachable at its own dedicated address.

Without the swarm configuration, using a macvlan network I could assign the IP address to the container, and the container would be available at that address:

version: '3.9'

services:
  hello:                    # service name assumed; elided in the original
    image: nginxdemos/hello
    ports:
      - 80:80
    mac_address: 00:10:FA:6E:38:8A
    networks:
      my_macvlan:
        ipv4_address:       # static address was assigned here; value elided in the original

networks:
  my_macvlan:
    name: my_macvlan
    external: true
I’d like to do something similar within a swarm. The closest I could find via googling was this post from 2020 that uses shell scripts and iptables.

We had a need to publish separate docker swarm services on the same ports, but on separate specific IP addresses. Here’s how we did it.

Docker adds rules to the DOCKER-INGRESS chain of the nat table for each published port. The rules it adds are not IP-specific, hence normally any published port will be accessible on all host IP addresses. Here’s an example of the rule Docker will add for a service published on port 80:

iptables -t nat -A DOCKER-INGRESS -p tcp -m tcp --dport 80 -j DNAT --to-destination
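The approach in the linked post presumably swaps that catch-all rule for IP-specific variants. A rough sketch of the idea, with placeholder addresses (the real DNAT target has to match whatever rule Docker actually inserted):

```shell
# delete the catch-all rule Docker added for port 80
iptables -t nat -D DOCKER-INGRESS -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.18.0.2:80

# re-add it, restricted to one specific host IP address
iptables -t nat -A DOCKER-INGRESS -d 192.168.1.50/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.18.0.2:80
```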

I would really like to avoid hacking shell scripts and instead handle this with a custom configuration of the docker ingress network, but I have not yet been able to find examples.

Note: I enabled swarm mode to get access to the Secrets functionality.

This looks like the relevant docker issue:

Static/Reserved IP addresses for swarm services

ipv4_address and ipv6_address were never implemented for swarm. It doesn’t make sense to have a single IP for replicated or global services.

If you need to use a static IP for the container, you will need to use single-node docker compose deployments.

Thank you for confirming what I suspected.

Can I turn this problem on its head and somehow have physical machines on my home network participate in the swarm service discovery process? For example, can the swarm be an mDNS provider?

You can use extra_hosts: to inject name resolution, which only makes sense if your local DNS server does not already provide name resolution for these hosts.
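For example, a minimal sketch of extra_hosts: in a stack file (service name, hostnames, and addresses are made up for illustration):

```yaml
services:
  hello:
    image: nginxdemos/hello
    extra_hosts:
      # injected into the container's /etc/hosts
      - "nas.home.lan:192.168.1.20"
      - "printer.home.lan:192.168.1.21"
```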

The container’s nameserver will use the host’s configured nameservers as upstream. Thus, a container can usually look up the same DNS entries as the host and reach whatever the host can reach.

Is there any chance you could point me to some documentation on this topic?

Name resolution from the physical workstations (Windows, ChromeOS, macOS) is the problem I want to solve. The docker dnsrr service handles name resolution between the containers; I was just looking for a way to let clients not participating in the swarm also leverage the dnsrr service.

Maybe I need to put a software load balancer between the physical network and the swarm (single node) services as shown in this picture.

Sorry if I am asking silly questions.

Clients can not participate in the swarm. Outside clients are supposed to use the published service port, which then distributes traffic to the service tasks in dnsrr manner. That is, if you configured the service with endpoint_mode: dnsrr; otherwise the service will use a VIP and load balance the traffic itself.
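For reference, a minimal sketch of switching a service to dnsrr in a stack file (service name assumed):

```yaml
services:
  hello:
    image: nginxdemos/hello
    deploy:
      endpoint_mode: dnsrr   # the default is "vip"
      replicas: 2
    ports:
      # the ingress routing mesh requires a VIP, so a dnsrr service
      # has to publish its ports in host mode
      - target: 80
        published: 80
        mode: host
```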

What’s your exact objective? Do you want the clients to be able to reach services based on their domain name? If so, you need to register those domains in your DNS and resolve them to the IP of your swarm nodes, and then you need a reverse proxy with rules that forward traffic for a specific domain to a specific service. Though, this is on the service level, not on the service task (=container) level.

Some resolvers, like Pi-hole or Unbound, allow injecting DNS overrides, which can help with registering the domains and resolving them to the swarm node IPs. Docker itself is not responsible for DNS resolution for outside clients.
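In Unbound, such an override might look like this (zone and addresses are placeholders, not from this thread):

```
server:
  local-zone: "home.lan." static
  local-data: "hello.home.lan. IN A 192.168.1.10"
  local-data: "hello.home.lan. IN A 192.168.1.11"
```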

When it comes to the reverse proxy, using Traefik is highly recommended. Traefik allows configuring reverse proxy rules for swarm services using service labels. Updating the proxy rules is then bound to the lifecycle of the service/containers, so that rules are applied when something is deployed and removed when the deployment is removed.
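A hedged sketch of what such labels can look like in a stack file (domain, network, and service names are assumptions, not from this thread):

```yaml
services:
  hello:
    image: nginxdemos/hello
    networks:
      - proxy
    deploy:
      labels:
        # in swarm mode Traefik reads labels from deploy.labels,
        # not from container-level labels
        - traefik.enable=true
        - traefik.http.routers.hello.rule=Host(`hello.home.lan`)
        - traefik.http.routers.hello.entrypoints=web
        # the target port must be declared explicitly, since the
        # service does not need to publish it
        - traefik.http.services.hello.loadbalancer.server.port=80

networks:
  proxy:
    external: true
```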


Thank you again for the feedback.

There is a lot in here for me to chew on. There are two goals.

The first is for my home network: simply enable swarm mode to familiarize myself with it and host services for my home physical computers to consume.

But the ultimate goal is to present options to my employer for hosting legacy apps in a swarm backed by a docker cluster in an enterprise environment. The legacy apps are primarily used by end users on physical machines. The key here is that we are an educational institution that does not have the capital budget for expensive licensed solutions.

Whatever I can piece together in my home environment is what I will end up proposing to my employer.

Thank you again.

Don’t worry, you can set up those things for free.

I am quite sure you are running your own internal DNS solution in your institution, so the name resolution part should be easy to cover. Just add the domains you want and point their A records to the swarm cluster. In your home environment you can still use Pi-hole or Unbound to cover the DNS resolution part, and use the organization’s DNS server in the target environment.

When it comes to Traefik: you don’t necessarily need Traefik EE, the free version is usually sufficient. You also don’t have to use Traefik at all; you can build your own reverse proxy with nginx or apache. Though, personally I like that it works very reliably with Traefik, as it doesn’t have the DNS caching issues you might experience with self-created reverse proxy rules on nginx or apache.

So the only cost you have will be the time you invest.
