Docker Community Forums


How can I statically define or modify the internal swarm load balancer IP addresses?

I have a test environment where I deploy containers running an application (via docker run) with static network configuration, and the randomly assigned addresses of those load balancers overlap the IPs I defined earlier. Those LBs are not useful for my use case: one only becomes active when a container runs on a given node, and its IP is assigned at random. Before resorting to dirty workarounds, like running a dummy container on each swarm node and then building my configuration from the remaining IPs, I want to know whether it is possible to define their IPs, or to define a range of IPs that those LBs may use.

Thank You

It is not possible to define the IP address of the virtual IP in front of a swarm service, a service task, or the resulting container.

What does that mean? Do you use IPs for container-to-container communication? :roll_eyes:

The endpoint_mode is either vip or dnsrr. If you don't want a VIP, use dnsrr, which returns a multi-value DNS entry for the service name instead of its VIP…
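For illustration, a minimal sketch of creating a service in dnsrr mode (the service and image names here are placeholders, and this assumes a swarm and an attachable overlay network already exist):

```shell
# Create a service whose name resolves directly to the task IPs,
# with no virtual IP placed in front of it.
docker service create \
  --name myapp \
  --endpoint-mode dnsrr \
  --network my-network \
  myapp-image
```

Note that dnsrr cannot be combined with ports published through the ingress routing mesh, since the mesh depends on the VIP.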

Beats me why you even have to care about the IPs… When I still had Swarm in production, I never cared about the containers' IP addresses. Never had to! If you do, you are probably using something in a way it is not intended to be used.

Hi, thank you for your answer. As I said before, it's a test environment; that's why I need the IPs of the containers and why I set them manually via docker run --network my-network --ip="10.0.0.X". I assume you are suggesting the use of container names for inter-container communication, but one of my fears was the initial latency of the DNS engine, which was not suitable for my use case. My solution was to use a wider subnet mask, which reduced the probability of an IP collision between the internal LB and my containers.
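The effect of widening the subnet can be quantified with a rough birthday-style estimate (my own sketch, not from the thread; it assumes the address allocator picks uniformly from the free pool):

```python
def collision_probability(n_usable: int, n_static: int, n_random: int) -> float:
    """Chance that at least one of n_random auto-assigned addresses
    lands on one of n_static pre-chosen addresses, assuming uniform
    assignment without replacement from a pool of n_usable addresses."""
    p_clear = 1.0
    for i in range(n_random):
        p_clear *= (n_usable - n_static - i) / (n_usable - i)
    return 1.0 - p_clear

# /24 subnet: 254 usable addresses, 100 static container IPs,
# 16 auto-assigned load-balancer endpoints -> collision almost certain.
print(collision_probability(254, 100, 16))

# /16 subnet: 65534 usable addresses -> collision becomes unlikely.
print(collision_probability(65534, 100, 16))
```

With the thread's numbers, a /24 makes a collision all but guaranteed, while a /16 drops the probability to a few percent.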

This is a little bit confusing: so my-network is a swarm-scoped overlay network, and you attach plain containers to it?

Now I am curious what load balancers you are talking about… Virtual IPs belong to a swarm service, not to an overlay network. If you start containers with docker run, you run plain containers and as such shouldn't be affected by any swarm-specific detail at all.

Something is not adding up here…

OK, I can explain. First, yes, you can attach plain containers (which the docs call standalone containers) to a swarm-scoped overlay network via docker run --network my-network --ip="10.0.0.X":

NETWORK ID          NAME                DRIVER              SCOPE
<ID>                my-network         overlay             swarm
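For reference, the setup being described can be reproduced roughly like this (command sketch; the network name is from the thread, the image name is a placeholder):

```shell
# On a swarm manager: create a swarm-scoped overlay network that
# standalone containers are allowed to attach to.
docker network create \
  --driver overlay \
  --attachable \
  my-network

# On any swarm node: run a plain (standalone) container on that
# network with a manually chosen IP.
docker run -d --network my-network --ip 10.0.0.12 myapp-image
```

Without --attachable, only swarm service tasks may join the overlay network, and the docker run above would fail.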

In this image (Swarm Architecture) you can see the swarm load balancers I was referring to. From my understanding of the swarm architecture, they are used for external traffic coming into the swarm services/containers. When a container or service is attached to the overlay network, Docker internally creates an LB on each node in the cluster; you can see it when I run docker network inspect my-network:

"Name": "my-network",
"Id": "mo8rcf8ozr05qrnuqh64wamhs",
"Created": "2020-11-16T01:59:20.100290182Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
    "Driver": "default",
    "Options": null,
    "Config": [
        {
            "Subnet": "",
            "Gateway": ""
        }
    ]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
    "Network": ""
},
"ConfigOnly": false,
"Containers": {
    "95b8e9c3ab5f9870987c4077ce264b96a810dad573a7fa2de485dd6f4b50f307": {
        "Name": "unruffled_haslett",
        "EndpointID": "422d83efd66ae36dd10ab0b1eb1a70763ccef6789352b06b8eb3ec8bca48410f",
        "MacAddress": "02:42:0a:00:01:0c",
        "IPv4Address": "",
        "IPv6Address": ""
    },
    "lb-my-network": {
        "Name": "my-network-endpoint",
        "EndpointID": "192ffaa13b7d7cfd36c4751f87c3d08dc65e66e97c0a134dfa302f55f77dcef3",
        "MacAddress": "02:42:0a:00:01:08",
        "IPv4Address": "",
        "IPv6Address": ""
    }
}

Because a new "random" IP address was assigned to this lb-my-network on every node, the probability that it would collide with my static configuration was high, since I was using a /24 subnet: my cluster has 16 machines running 100+ containers, plus the 16 internal load-balancer endpoints, in a network with only 254 usable IPs, so I often ran into problems.
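One mitigation the thread does not mention, but which docker network create supports, is constraining the pool that auto-assignment draws from with --ip-range; static addresses chosen outside that range then cannot collide with auto-assigned ones such as the lb-* endpoints. A sketch (the network name is the thread's, the image name and address ranges are placeholders):

```shell
# Auto-assigned addresses (including the per-node lb-* endpoints)
# come only from 10.0.255.0/24; the rest of 10.0.0.0/16 stays free
# for manually assigned static IPs.
docker network create \
  --driver overlay \
  --attachable \
  --subnet 10.0.0.0/16 \
  --ip-range 10.0.255.0/24 \
  my-network

# A static IP outside the ip-range can never be double-assigned.
docker run -d --network my-network --ip 10.0.1.10 myapp-image
```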

Implementing this feature is non-trivial for a number of reasons.

There are two possible feature requests here:
- allow a static IP for the service (virtual IP);
- allow a static IP for the container (task).

When looking at static IP addresses for containers, things become complicated, because a service can be backed by multiple tasks (containers). Specifying a single IP address for those won't work; also, what should happen when scaling, or when updating a service (in which case new tasks are created to replace the old ones)?

Just +1's don't help get this implemented; explaining your use case, or helping to find a design to implement this, would on the other hand be useful.

I feel like I can't explain my problem, which is now solved. My use case: I need to run an application on a cluster of 16 physical machines, and the machines lack support for virtualization, at least at my scale (I need to run hundreds of instances of my application). Docker was a good solution because it could isolate resources per container and ease other things I needed to implement. Note that in my case there won't be any communication outside the Docker swarm, nor any service needing replication. My application had static IP configuration to talk to the other machines; yes, I could have specified the container names in the configuration file, facilitating the whole process, but at the time that solution didn't occur to me. So what I did was define each container with a static IP; note that these containers are attached to the swarm network via docker run, not via docker service create. The internal architecture of Docker Swarm uses an internal LB, which was conflicting with my static IP addresses, and I wanted to statically define its IP address. I don't know whether that is possible, but I have already come up with a solution that works for my use case. I'm not requesting any feature, just asking if it's possible.

"Swarm load balancer" is an ambiguous term. What you see in the illustration is the ingress routing mesh, which is only used when ports are published for swarm services. Each swarm service will have a VIP (unless dnsrr is used) in front of its tasks. However, none of this is related to your situation if you run plain Docker containers: neither the VIP nor the ingress routing mesh is involved. Plain Docker containers have no option to configure vip or dnsrr.

Would you mind sharing the chain of evidence that brought you to believe your containers would use a swarm load balancer?

I guess you have a whole different problem: double assignment of IPs, between the addresses the swarm network's IPAM driver assigns and your manually assigned ones. I would be surprised if you suffered any penalty from using service names, because DNS resolution results are usually cached in the client's network stack…

Good luck!

Yes, I'm aware of that, and that's why I use plain containers. I didn't need any of those things like replication, etc.

Yes, I can. I actually never claimed that, but in the output of docker network inspect my-network you can see a "thing" (I don't actually know whether it is a container) called my-network-endpoint which has an IP address, and that IP address was conflicting with the containers I wanted to deploy. For instance: I deploy the first container with its IP on one node of the swarm, and that my-network-endpoint is created on that swarm node; as you can imagine, if I then accidentally tried to deploy a container with that same IP, the container would time out and return an error. I read the documentation about Docker Swarm, and the endpoint exists because of the ingress network and the Docker routing mesh, as you stated.

Yes, you are right; I will probably switch to addressing standalone containers by name, or even to creating services.

Thank you very much for your opinion and help. I just wanted a way to deploy several instances of my application and see what happens in a LAN environment; ideally, I didn't need anything related to the ingress network, just the multi-host networking provided by the overlay network driver. If you have a better suggestion, it would be welcome.