I am running into a weird error when I set --net=host for a Redis cluster composed of Redis containers. All containers are up and running on the same host box, and netstat -tunap shows that all containers are listening on the correct ports. I am able to configure the cluster with redis-cli when I am on the box.
However, when I attempt to access it via redis-cli from another box, I get a "no route to host" error. I see the same thing with telnet, so this is not a redis-cli misconfiguration. It seems the Docker containers are only accepting local connections. Note: when I create my own custom Docker bridge network (via docker network create), I don't get these "no route to host" errors.
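For what it's worth, the custom-bridge setup that avoids the errors looks roughly like this (the network name, container name, ports, and host IP below are placeholders, not my actual values):

```shell
# Placeholder names/ports; the real cluster has more nodes.
docker network create redis-net

# Run a Redis node on the custom bridge, publishing its port to the host:
docker run -d --name redis-7000 --net redis-net -p 7000:7000 \
    redis redis-server --port 7000 --cluster-enabled yes

# From another box, both of these succeed on the custom bridge
# (and fail with "no route to host" under --net=host):
redis-cli -h <host-ip> -p 7000 ping
telnet <host-ip> 7000
```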
Has anyone else encountered this issue with --net=host? Any ideas/thoughts would be greatly appreciated.
Maybe you're binding to 127.0.0.1 in the containerized process, not 0.0.0.0?

As a side note, I'd recommend avoiding --net=host unless you definitely know that you need it; it's a bit of overkill.
First of all, thanks a bunch for getting back to me so quickly; much appreciated!
Excellent question re: binding. I did confirm with netstat -tunap that the container is binding to 0.0.0.0, which is what makes this so puzzling.
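For anyone checking the same thing, this is the distinction I'm relying on (6379 is a placeholder port; with --net=host the container shares the host's network namespace, so this can be run directly on the host):

```shell
# Placeholder port; filter the host's listeners for the Redis port:
netstat -tunap | grep 6379
# A listener bound to all interfaces shows 0.0.0.0:6379 in the
# "Local Address" column, along the lines of:
#   tcp  0  0 0.0.0.0:6379  0.0.0.0:*  LISTEN  1234/redis-server
# whereas a loopback-only bind would show 127.0.0.1:6379 instead.
```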
I agree re: --net=host. I actually tried out the uber-cool Docker SDN option introduced in 1.9. The problem is that these are Redis cluster containers: if I access one Redis container and the CRUD operation involves a key on another Redis container, Redis attempts to open a connection to the downstream container, and then I get "no route to host" because the forwarding goes to an IP address internal to my SDN. Note: if there is a fix for this, I will gladly use it instead of --net=host.
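In case it helps anyone who finds this later: if a newer Redis is an option, Redis 4.0 added cluster-announce-* directives for exactly this NAT'd/bridged scenario, letting each node advertise a host-reachable address instead of its internal one. A redis.conf sketch, where the IP and ports are placeholder values:

```
# redis.conf fragment -- placeholder IP/ports; requires Redis 4.0+.
cluster-enabled yes
cluster-announce-ip 10.0.0.5        # host-reachable address, not the SDN-internal one
cluster-announce-port 7000
cluster-announce-bus-port 17000
```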
Again, thanks for getting back to me so quickly and I look forward to hearing what you think.
So you are running the container with --net=host? Any chance you can provide a minimal reproducible example?
Yes, using --net=host for now, but I would love to move to net= if possible. Sure, I can put together an example; will do ASAP.