Hello,
I’m a complete beginner with Docker and Swarm.
I’m experimenting with Swarm on a Pi cluster.
Coming from a “classical” cluster software background, I would like to understand whether it’s possible to reach a service via a virtual IP which is not the node IP but could be shared among the nodes in case of failure.
I did some tests and managed to run a service relying on the routing mesh, but of course in that case I have to use a node IP to reach the service and, in case of failure, switch to another node’s IP.
I would simply like to refer to an IP which could move among the swarm nodes.
Is it possible, and how?
The routing mesh works regardless of which node you use to access a published port. If an overlay network is assigned to your containers, it will be spun across all nodes that run at least one of those containers. Instead of working with IPs, you should use the service name or a network alias to access your container. Combined with a load balancer that checks the availability of the target nodes, this might be a solution. Though, it shifts the problem to the load balancer.
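Just as an illustration of what I mean, a minimal sketch (the service and network names `web` and `app-net` are placeholders):

```
# create an attachable overlay network and a service on it
docker network create --driver overlay --attachable app-net

# publish port 8080 through the routing mesh; on app-net the service
# is reachable by its name "web" instead of any IP
docker service create --name web --network app-net \
  --publish published=8080,target=80 nginx

# http://<any-node-ip>:8080 reaches "web" via the routing mesh;
# other containers on app-net can simply use http://web
```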
The question is: are you aiming for a virtual IP for the Docker Engine or for the container you are running?
There are Docker containers that run keepalived (requiring the NET_ADMIN capability) to make the Docker Engine reachable on its static interface plus an additional virtual IP. The virtual IP is shifted from the master node to a slave node once the master becomes unavailable, and back once it is available again.
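A rough sketch of that pattern, purely as an illustration (the image name and config path are placeholders, not a recommendation for a specific image):

```
# run keepalived with host networking and NET_ADMIN so it can add/remove
# the virtual IP on the node's physical interface; the keepalived.conf
# (VRRP priority, virtual_ipaddress, peers) lives on each node
docker run -d --name keepalived \
  --network host \
  --cap-add NET_ADMIN \
  -v /etc/keepalived/keepalived.conf:/etc/keepalived/keepalived.conf:ro \
  some-keepalived-image
```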
I would like to have a virtual IP for the container.
I am building up some home automation to manage my house using Home Assistant, which is running in a container. The availability of this container is critical. So I thought of using two Raspberry Pis (the second one should take care of keeping the service up if the first fails) with GlusterFS to have a common FS with some configuration needed by Home Assistant. In this case any other application which talks to Home Assistant should point to an IP different from the IP of the Raspberry…
Your requirements seem contradictory to me. Docker does not play by machine virtualisation rules. Especially Docker networking is quite different compared to bare-metal or VM environments.
Just to be clear:
Can you share the compose.yml for your swarm stack? Or paste the commands you run to create your containers?
And which IP are you trying to use? The Docker host’s IP? Or the container IP?
I’m figuring out how to do this.
My container is based on an image which I pull from Docker Hub.
My problem, I think, is that I still think in the “old” cluster-ware way, where there are nodes with their own IPs and some “clustered” instances reachable at their own IPs, different from the node IPs. These instances should be active on just one node at a time and, in case of failure, should become active on another node. That way, if I point to the instance IP, I can reach it regardless of the node it is running on.
In our test environment we run a tool in a swarm cluster that specifically can have only a single replica. It is not cluster-aware in any way. The tool is created as a Docker service with a replica count of one. Whenever the node fails, the container is immediately scheduled by swarm to run on a different node and is usually usable again within a few seconds. Further replicas would corrupt the database files (it uses some sort of SQLite database).
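As a sketch, that setup is essentially just this (the name, image and port are placeholders):

```
# single-replica service; if the node running the task fails,
# swarm reschedules it on another available node
docker service create \
  --name single-tool \
  --replicas 1 \
  --publish published=8080,target=8080 \
  some-image
```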
Like I already wrote, for swarm stacks/services the ingress network is spun across all nodes and routes the incoming traffic to the target node and container, regardless of which node you entered the ingress network on.
The problem that your client application needs to know all nodes and must perform a health check (to not accidentally send traffic to an unhealthy node) can be eliminated by using a load balancer (nginx does layer 4 + 7). As far as I am concerned, it should even be possible to run a keepalived container on all three nodes and let it manage the additional virtual IP. This would at least provide a static entry IP into your Docker Swarm cluster. Once the incoming request reaches the cluster, the ingress network takes care of forwarding it to the target container on the node it is running on.
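A hedged sketch of the keepalived-on-every-node idea, assuming Docker 20.10 or newer (where `docker service create` gained `--cap-add`) and a keepalived.conf already present on each node (the image name is a placeholder):

```
# run keepalived on every node (global mode) with host networking,
# so the virtual entry IP can float between the nodes
docker service create \
  --name keepalived \
  --mode global \
  --network host \
  --cap-add NET_ADMIN \
  --mount type=bind,source=/etc/keepalived/keepalived.conf,target=/etc/keepalived/keepalived.conf,readonly \
  some-keepalived-image
```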
What you describe as cluster-aware is an active-passive replica set. Clustered software usually allows all nodes to run actively, though most elect a leader and forward read/write operations to the leader, or at least serve read operations themselves (Raft, ZooKeeper, Paxos). There are not many consensus algorithms that allow all nodes to write at once (“Blockchain”, Egalitarian Paxos, Hedera Hashgraph). That said, if your software does not implement an active-passive replica set itself (e.g. MySQL does), I am confident that it can’t be run in Docker. If the application does support active-passive replication itself, the only thing a virtual IP does is prevent incoming traffic/events on the passive instance. Though, if the passive instance still performs tasks and writes into the same filesystem, data corruption is not unlikely. It really depends on the application.
If your application already supports active-active replication or a real cluster mode, then operating it with Docker is feasible (we run Zookeeper, Kafka and Consul clusters in Docker Swarm).