I am running Docker on a headless Ubuntu Linux installation across multiple servers where I have Ignition instances running. Recently the customer requested we change the IPs Docker uses to avoid IP conflicts. I added the following to all my docker-compose files and took down and rebuilt all my Docker instances. Now we are seeing sporadic disconnections between the various Ignition instances. Is there something I am doing wrong in my docker-compose, or a step I am missing?
Just out of curiosity: how do you run Docker across multiple servers? It doesn't look like you joined the servers into a Swarm cluster, and it doesn't look like you are using a node-spanning overlay network.
Is it safe to assume that the Ignition instances you refer to are running as containers on the Docker hosts?
I assume you refer to user-defined Docker networks.
Since you mention neither Swarm nor an overlay network, I am curious how they communicate with each other. Via the published host port of each container on the Docker host?
The only way they talk to each other is through Ignition's gateway network, and that is the connection that is periodically disrupted; I apologize, I realize that part was unclear. And yes, each Ignition instance is its own container.
Sorry, I was typing out another message. For more context: I have 4 servers, and outside of Docker I can reach them at 10.10.10.10, 10.10.10.11, 10.10.10.12, and 10.10.10.13. For Ignition, each of these servers runs 2 containers that I reach via ports 8088 and 8089. This is also how they reach each other. Sometimes the primary instance at 10.10.10.10:8088 loses communication with the instances on the other servers.
All the servers have the same Docker-defined network, 10.11.6.0/24, in my docker-compose YAML file. Best I can figure right now, the problem may be that I need to use 10.11.6.0/24 on only one server and then 10.11.7.0/24 through 10.11.9.0/24 for the other three.
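For reference, here is a minimal sketch of what that setup looks like in a compose file; the service and network names are hypothetical, and the point is that every host pins the same subnet, so containers on different hosts can end up with identical container IPs (e.g. 10.11.6.2 on two hosts at once):

```yaml
# Hypothetical compose fragment (service/network names are made up).
# The same file, with the same subnet, is deployed on every host.
services:
  ignition:
    image: inductiveautomation/ignition
    ports:
      - "8088:8088"   # published host port used for the gateway network
    networks:
      - ignition-net

networks:
  ignition-net:
    driver: bridge
    ipam:
      config:
        - subnet: 10.11.6.0/24
```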
The bridge network's subnet range shouldn't be relevant:
ingress traffic is addressed to the host IP and the published host port for the container port.
egress traffic is NATed (masqueraded) with the host IP, so when it arrives as ingress traffic on another host, the receiver only sees the source host's IP address, not the container's IP address.
What could be an issue, though, is if the nodes have a route to the subnet 10.11.6.0/24 or to a broader subnet like 10.11.0.0/16: in that case traffic destined for those addresses would be routed into the local bridge instead of out to the other host.
So far I have only looked at the network (IP) and transport (TCP/UDP) layers of the OSI model. Those are handled by Docker.
Docker will not modify anything beyond the network and transport layers.
So if the payload Ignition sends to the gateway includes the Ignition instance's IP (which is the container IP), and the gateway relies on it, this might indeed cause a problem when the same container IP is registered for more than one instance. I have no idea how Ignition works and how it handles this: maybe it's a problem, maybe it's not.
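To make the failure mode concrete, here is a toy sketch (this is not Ignition's actual protocol, and all the names are made up): a "gateway" that keys remote instances by the IP address they self-report in their payload. If every host uses the same bridge subnet, two different instances can report the identical container IP and clobber each other's registration:

```python
# Toy model of a registry keyed by self-reported IP (hypothetical, not
# Ignition's real gateway-network protocol).
gateway_registry = {}

def register(reported_ip, instance_name):
    """Register an instance under the IP it claims in its payload.

    Returns the instance that just lost its registration, if any.
    """
    evicted = gateway_registry.get(reported_ip)
    gateway_registry[reported_ip] = instance_name
    return evicted

# Both hosts use the same 10.11.6.0/24 compose subnet, so both containers
# happen to get 10.11.6.2 and report it to the gateway:
print(register("10.11.6.2", "backup-on-host-11"))  # prints None
print(register("10.11.6.2", "backup-on-host-12"))  # prints backup-on-host-11
```

The second registration silently evicts the first, which would look exactly like a sporadic disconnection of whichever instance lost the race.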
Your idea of using a different subnet on each Docker host is definitely worth trying. If that doesn't help, I would recommend asking Ignition support.
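A sketch of what that per-host split could look like (network name hypothetical): each host's compose file gets its own subnet, so container IPs are unique across all four servers.

```yaml
# Hypothetical fragment for the host at 10.10.10.10:
networks:
  ignition-net:
    driver: bridge
    ipam:
      config:
        - subnet: 10.11.6.0/24

# 10.10.10.11 would use 10.11.7.0/24,
# 10.10.10.12 would use 10.11.8.0/24,
# 10.10.10.13 would use 10.11.9.0/24.
```

After changing the subnets you would need to take the stacks down and bring them back up again so the containers pick up addresses from the new ranges.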