Docker Community Forums


Consul DNS round robin works for host but not for containers

(Craig1234) #1

I've set up Consul and Registrator, and both seem to be working well: all my containers are registering as services. Consul's DNS is bound to the docker0 bridge IP, and all containers point to this IP as their DNS server.
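For reference, a minimal sketch of the setup described above. The bridge IP (172.17.0.1), ports, and image names are assumptions for illustration, not taken from the original post:

```shell
# Consul serving DNS (port 8600) and HTTP (8500) on the docker0 bridge IP
docker run -d --name consul \
  -p 172.17.0.1:53:8600/udp \
  -p 172.17.0.1:8500:8500 \
  gliderlabs/consul-server -bootstrap

# Registrator with -internal, so container-internal IPs get registered
docker run -d --name registrator \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator -internal consul://172.17.0.1:8500

# Application containers resolve names through Consul's DNS
docker run -d --dns 172.17.0.1 myorg/appserver
```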

I have one service backed by 2 containers, and because I start Registrator with the -internal switch, both container IPs are registered in Consul's DNS. I can ping the service name (appserver.service.consul) from the host server, which I've also set to use Consul's DNS, and I get round-robin responses as expected. However, the containers do not behave the same: if I docker exec -it bash into them and ping appserver.service.consul, I always get the same IP back.

I've installed dnsutils in one of the containers so I can dig the Consul DNS server directly, and I get both A records returned. But I don't understand why I never get a round-robin response. Does anyone have any ideas on why round robin isn't working for my containers?
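One way to narrow this down is to compare what the DNS server returns with what the resolver actually picks. A sketch, assuming the Consul DNS is on 172.17.0.1 and the service name from the post:

```shell
# dig queries Consul directly and should list both A records:
dig +short @172.17.0.1 appserver.service.consul

# ping goes through glibc's getaddrinfo, which sorts the returned
# addresses per RFC 3484/6724 rules before picking one; that sorting
# can defeat DNS round robin and keep returning the same IP:
ping -c1 appserver.service.consul
```

If dig shows both records but ping is stuck on one, the DNS side is fine and the reordering is happening in the client's resolver library.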


(Jjohnston) #2

Did you ever figure this out? We have the same problem right now with the same setup. If we use nslookup or dig, we get the IPs of both services, but if we use ping, it only brings back one IP. This also happens in our Apache (httpd) container, which only ever serves up one IP. We do have disablereuse=On on the ProxyPass directive, which should prevent any connection caching in Apache. From our research, the only common Linux tools that do DNS caching are nscd and dnsmasq, and neither of those is in this container.
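For context, a sketch of the relevant httpd configuration; the path and backend name are placeholders, not taken from the post:

```apache
# disablereuse=On forces a new backend connection per request, so
# connection pooling cannot pin all traffic to one backend IP.
ProxyPass        "/app" "http://appserver.service.consul/" disablereuse=On
ProxyPassReverse "/app" "http://appserver.service.consul/"
```

Even with reuse disabled, each new connection still resolves the name through the container's resolver, which is where the single-IP behaviour appears.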

The really strange thing is that, for us, it only happens in this container. The container extends the httpd image and only adds some Apache configuration. My plan now is to keep stripping this container down and see if there is something special about it, but it looks pretty stripped down as-is. The httpd image also extends the same base image as our other containers (debian:jessie), so I am not very hopeful.

(Nathan Le Claire) #3

I’d highly suggest opening an issue at with a detailed, minimally reproducible example and CCing @mrjana and @mavenugo. They should be able to help you get sorted out if the problem is reproducible.

(Jjohnston) #4

@nathanleclaire, I will do that once I try out a few more things. It was starting to look like my global DNS settings were the problem: with those settings in place, I could break it just by firing up a debian:jessie container and watching ping always return one IP. Once I took out my global DNS settings, my debian:jessie container would round robin with ping. But firing up my httpd container, it is still stuck getting only one IP (from ping or Apache). If I run that same container with --net=host, then it round robins. I want to see how this behaves on Red Hat 7 as well. Once I have that, I will throw together an example, and hopefully it is reproducible. I keep checking nslookup and dig, and it looks like Consul is totally fine.

(Craig1234) #5

No, I ended up parking Consul and Registrator for now. My workaround was to use dnsmasq on the host, add it as a DNS server for the containers, and update the A records whenever a container spawned.
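A sketch of that dnsmasq workaround, assuming illustrative names and IPs (the service name, container IPs, and file path are placeholders):

```shell
# One host-record per container IP; dnsmasq rotates between them.
cat <<'EOF' | sudo tee /etc/dnsmasq.d/containers.conf
host-record=appserver.service.consul,172.17.0.2
host-record=appserver.service.consul,172.17.0.3
EOF

# Re-run this (regenerating the file first) whenever a container
# spawns or dies, then reload dnsmasq to pick up the change:
sudo systemctl restart dnsmasq

# Containers then use the host's dnsmasq, here assumed on the bridge IP:
docker run -d --dns 172.17.0.1 myorg/appserver
```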

I’ve since moved to the latest versions of Docker and Docker Compose and use network aliases and the internal Docker DNS to group my services (so I’ve dropped running dnsmasq on the host). Round robin doesn’t work with that setup either, but at least if a container in a service disappears, the DNS is updated instantly and the other containers can still find the rest of the containers in that service. At this stage that’s all I need for HA, but it would be perfect to have RR working out of the box; then I could use it to load balance the backend. I’m using HAProxy to load balance the front end.
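A minimal sketch of the network-alias approach described above; the service names, alias, and image are assumptions for illustration:

```yaml
# docker-compose.yml: both containers share the alias "appserver", so
# Docker's embedded DNS (127.0.0.11 inside containers) returns both
# container IPs for that name on the user-defined network.
version: "2"
services:
  app1:
    image: myorg/appserver
    networks:
      backend:
        aliases:
          - appserver
  app2:
    image: myorg/appserver
    networks:
      backend:
        aliases:
          - appserver
networks:
  backend:
    driver: bridge
```

Aliases only work on user-defined networks, not the default bridge, which is one reason behaviour differs between setups.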

(Jjohnston) #6

I was able to reproduce this in a simple, minimal setup. I created an issue ticket at

If we need to, our fallback is to repackage our containers on the heavier ubuntu image. We have our own base images, so we can get away with doing that. Hopefully the Docker team is able to reproduce the error and it makes sense to them.

(Campbech) #7

Have you tried using

I know this is the solution gliderlabs had for doing proper load balancing across all of the service instances.