Docker Community Forums

Share and learn in the Docker community.

Docker swarm high availability on nodes


(Peithegn) #1

I’m experimenting with docker swarm. With VirtualBox I’ve set up three ubuntu machines (each of them running Docker 17.12.0 CE) on my host system (windows 7).

I’ve created a docker swarm with the docker swarm init command on one of the ubuntu instances (the manager), and connected the two others as workers with the docker swarm join command. On the manager node I’ve created a service with docker service create --name test -p 80:80 --replicas 3 myuser/webapp.
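The setup described above boils down to the following commands. `MANAGER_IP` and `JOIN_TOKEN` are placeholders: the real token is printed by `docker swarm init` and is not shown here.

```shell
# On the manager node: initialize the swarm.
docker swarm init --advertise-addr MANAGER_IP

# On each worker node: join the swarm with the token printed
# by `docker swarm init` on the manager.
docker swarm join --token JOIN_TOKEN MANAGER_IP:2377

# Back on the manager: create the service with three replicas,
# publishing port 80 through the ingress routing mesh.
docker service create --name test -p 80:80 --replicas 3 myuser/webapp
```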

My webapp is running fine in a container on all nodes in the swarm, and I can reach it in a browser from my host on the different ip-addresses assigned to the virtual ubuntu machines - e.g. 192.168.56.103/webapp on my worker2.

However, if I shut down worker2, I obviously can’t reach it on that address anymore. What is the best way to ensure high availability here, so that my webapp stays reachable even if one of the nodes goes down?

In a live environment the question would be: how do I avoid a single point of entry to my application (running in a swarm) when users access it via their browsers?


(Eldeberde) #2

Hi.

In a production environment you can use an external load balancer for that.
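As a rough sketch of that idea (not a production-ready setup): run an Nginx reverse proxy on a separate machine, with all swarm node IPs as upstreams, so a dead node is simply skipped. The `NODEx_IP` values are placeholders for your node addresses.

```shell
# Write a minimal Nginx config that load-balances across the swarm nodes.
# If a node is down, Nginx retries the request on the next upstream.
cat > nginx.conf <<'EOF'
events {}
http {
  upstream swarm_nodes {
    server NODE1_IP:80;
    server NODE2_IP:80;
    server NODE3_IP:80;
  }
  server {
    listen 80;
    location / {
      proxy_pass http://swarm_nodes;
    }
  }
}
EOF

# Run the load balancer itself as a container on the external machine.
docker run -d --name lb -p 80:80 \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx
```

Users then point their browsers at the load balancer’s address instead of an individual node.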

For non-production environments I usually deploy a keepalived container on each host, which manages a shared virtual IP. When a node goes down, keepalived moves the virtual IP to another node:

sudo docker run -d --restart=always --name KeepAlived --cap-add=NET_ADMIN --net=host -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['NODE1_IP','NODE2_IP','NODE3_IP']" -e KEEPALIVED_VIRTUAL_IPS=VIRTUAL_IP/32 -e KEEPALIVED_PRIORITY=65 -e KEEPALIVED_INTERFACE=NETWORK_INTERFACE osixia/keepalived:1.3.5

You need to replace NODEx_IP, VIRTUAL_IP and NETWORK_INTERFACE with the values for your setup.
Regards
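A quick way to check that failover actually works (assuming the placeholders above have been filled in): find which node currently holds the virtual IP, shut that node down, and confirm the app stays reachable.

```shell
# On each node: does this host currently hold the virtual IP?
ip addr show NETWORK_INTERFACE | grep VIRTUAL_IP

# From the host machine: the app should keep responding on the
# virtual IP even after the node that held it is shut down.
curl http://VIRTUAL_IP/webapp
```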