We are trying to accomplish health-aware DNS load-balancing.
According to the conversation in https://github.com/moby/moby/pull/27279, this should be possible from Docker 1.13 on. However, it does not seem to work on a setup running Docker 1.13.1.
We are not using Docker Swarm; we start services via docker-compose and use an external overlay network. A ping to a service started twice (the first instance healthy, the second unhealthy) resolves to both IPs, while we would expect to receive only the healthy one. Why isn't the unhealthy one removed from DNS?
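For anyone wanting to reproduce this: here is roughly how we check what the embedded DNS returns versus what Docker thinks the health status is. The network, service, and container names (appnet, web, web_1) are placeholders from our setup, and the commands assume a running Docker daemon:

```shell
# Resolve the service name from a throwaway container attached
# to the same external overlay network as the service.
docker run --rm --network appnet busybox nslookup web

# Compare against the health status Docker itself reports
# for each of the two containers.
docker inspect --format '{{.State.Health.Status}}' web_1
```

In our case nslookup returns both container IPs even when one of the containers reports "unhealthy".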
Thanks for any hint,
Since you are not using Swarm, how are you creating services with 2 containers?
How are you creating the health checks? Are you using Docker's built-in health check support?
My understanding is that unhealthy containers are removed from load balancing only in Swarm. In the non-swarm case, the container is marked as unhealthy but not removed from DNS.
We are starting services with docker-compose on 2 different servers. They are part of the same overlay network, so they can see each other.
We are using compose file format version 2.1, where healthchecks are supported (https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck). The healthchecks work fine; docker ps clearly shows the correct status.
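For reference, this is the shape of the compose file we use. The image, test command, and intervals shown here are illustrative, not our exact values:

```yaml
version: "2.1"
services:
  web:
    image: nginx:alpine              # placeholder image
    networks:
      - appnet                       # the pre-existing external overlay network
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 10s
      timeout: 3s
      retries: 3
networks:
  appnet:
    external: true
```

With this, docker ps shows "(healthy)" or "(unhealthy)" next to the container status as expected.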
Regarding your remark: "My understanding is that unhealthy containers are removed from load balancing only in Swarm. In non-swarm case, container is marked as unhealthy, but not removed from dns." - what does Swarm do differently? Does it use a real load balancer (as opposed to simple round-robin DNS)?
Swarm uses a service IP (VIP) mechanism and uses IPVS to do the load balancing. It's very different from the non-swarm scenario, which does simple round-robin DNS load balancing. Based on what you are seeing, I am pretty sure that unhealthy containers are not removed from DNS, which is something Docker does not support outside Swarm.
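You can see the virtual IP a swarm service gets like this (the service name web is just an example, and this assumes a node in swarm mode):

```shell
# Show the virtual IP(s) assigned to a swarm service; DNS resolves
# the service name to this VIP, and IPVS spreads connections from
# the VIP across the healthy tasks behind it.
docker service inspect --format '{{json .Endpoint.VirtualIPs}}' web
```

Clients only ever see the VIP, which is why individual unhealthy tasks never show up in DNS in this mode.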
I think Swarm supports both modes: vip and dnsrr.
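A sketch of the two, assuming swarm mode and a pre-created overlay network (names and image are made up):

```shell
# VIP (the default): the service name resolves to one virtual IP,
# and IPVS load-balances behind it.
docker service create --name web --network appnet nginx:alpine

# DNS round-robin: no VIP; the service name resolves directly
# to the IPs of the individual tasks.
docker service create --name web-dnsrr --endpoint-mode dnsrr \
  --network appnet nginx:alpine
```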
So DNS-based load balancing should work as well - what does Swarm do differently when using this mode?
Roxana, I think you need to use Swarm to get the VIP. My understanding is that docker-compose is limited to a single-server setup.
docker-compose is NOT limited to a single-server setup - whatever gave you that idea? It would be pretty limiting if that were true.
If you want to distribute load across a cluster, for whatever reason (resilience, scalability, etc.), then you need to be using an orchestrator of some sort (Swarm, Mesos, k8s, Rancher, et al.). The point is that regardless of whether you use one node or a thousand, the abstraction is the same: you treat your thousand-node cluster as if it were just one giant single node (yes, you can segment it using additional metadata and constraints, but that's not the point I'm making). If supporting tools like docker-compose worked differently, it would be a complete nightmare to manage promotion of your services from non-production to production environments, where again the principle you want to follow is the immutable server pattern.
So your issue likely has nothing to do with using docker-compose. If you want to be sure, try launching your service using just the regular docker client; all things being equal, you should see identical behaviour.
Hi Goffinf. What you say is true, but what I was saying is that when you have a cluster you should not use the docker-compose command to deploy your services; you should use the docker stack deploy command instead. Deploying services with the docker-compose command is designed for a single node (development, for example), while in swarm mode you should use docker stack deploy (and this command supports the familiar docker-compose file format):
docker stack deploy -c [your-docker-compose-file] …
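For completeness, the contrast looks like this (file and stack names are placeholders):

```shell
# Single node / development: run the compose file directly.
docker-compose -f docker-compose.yml up -d

# Swarm cluster: deploy the same compose file as a stack,
# which creates swarm services (with VIPs, IPVS, etc.).
docker stack deploy -c docker-compose.yml mystack
```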
Also, not everyone is using Docker Swarm as their orchestrator, and some images do not support swarm mode (Consul was an example of this a while back, although that may have changed by now).
Just trying to keep the advice balanced.