NO ROUTE TO HOST on network request from a container to host-ip:port published by another container

This is really weird to me. I have container A started with “-p 8080:80”, and container B (on the same host) connects directly to host-ip:8080; the log then says: NO ROUTE TO HOST.
I tested the following things:

  1. from within container B, “telnet host-ip 8080” also prints “NO ROUTE TO HOST”;
  2. from my laptop, “telnet host-ip 8080” succeeds, which suggests there is no firewall blocking the port on host-ip;
  3. with the --link option, container A can reach containerB:80.

So, my question is this: as long as host-ip:8080 is listening, any program, whether in another container or on another “physical machine”, should be able to communicate with that port without any difference, right?

Can anyone answer that? Thanks.
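For reference, a rough reproduction of the setup described above could look something like this (the container names, images, and the 192.168.10.4 host IP are placeholders, not the actual environment):

# container A: publish nginx's port 80 as port 8080 on the host
docker run -d --name containerA -p 8080:80 nginx

# container B: try to reach the published port via the host's LAN IP
docker run -it --rm --name containerB busybox telnet 192.168.10.4 8080
# from the laptop the same telnet succeeds; from containerB it fails with "No route to host"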

Could you please paste how you started the containers? In case you used docker stack deploy or docker-compose up, please paste the compose file as well.

In case you are using Docker for Mac (or a similar setup that uses a virtual machine), make sure that the IP you are referring to is the one of the docker engine’s host, not the one on which the client is running.

The IP of the host can be found using this:

docker run -ti --rm --net=host qnib/httpcheck ip -o -4 add |grep eth
4: eth0    inet 192.168.65.2/24 brd 192.168.65.255 scope global eth0\       valid_lft forever preferred_lft forever

Thanks. Since the dev network can’t be reached from the Internet, I’m sorry I can’t paste the exact script.
I can describe the steps more specifically:

1. The server running docker has an IP like '192.168.10.4'; call it 'appHost'.
2. Since it is just for a test, I use something like "docker run --name nginx_tmp -d -p 10000:80 nginx nginx -g 'daemon off;'" to start a temporary nginx server, which has an IP like '172.17.0.2';
3. I start the whole system from docker-compose.yml, in which a service called 'app_tomcat' adds a host entry like '- some-srv:192.168.10.4' in the extra_hosts section (see the sketch after this list). app_tomcat has an IP like '172.18.0.1';
4. Then inside 'app_tomcat', "telnet 192.168.10.4 10000" returns NO ROUTE TO HOST.
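A rough sketch of the compose service from step 3 (the image, port mapping, and names are illustrative, not the real file):

app_tomcat:
  image: tomcat                      # illustrative image
  ports:
    - "8081:8080"
  extra_hosts:
    - "some-srv:192.168.10.4"        # lets the container resolve the docker host's LAN IP by name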

I solved this by adding tmpnginx to the docker-compose file; I just want to figure out why…

Thanks again.

This is a “known” bug. Everyone can access this port, except for containers on the same host. You have to allow it with the firewall (yes, this is a firewall/docker issue).


I have the same problem.

I have three hosts.

192.168.50.40 - docker swarm manager
192.168.50.41 - docker swarm worker, PostgreSQL installed.
192.168.50.42 - docker swarm worker

On host 192.168.50.41, I’ve installed PostgreSQL (native, not docker).

Webapp containers running on the other two hosts, that is, on 192.168.50.40 and .42, are able to connect to PG at 192.168.50.41:5432.

But containers on host 192.168.50.41 itself are not.

# ping goes well
root@a7c1a37c3d54:/# ping -c 2 192.168.50.41
PING 192.168.50.41 (192.168.50.41) 56(84) bytes of data.
64 bytes from 192.168.50.41: icmp_seq=1 ttl=64 time=0.272 ms
64 bytes from 192.168.50.41: icmp_seq=2 ttl=64 time=0.096 ms

--- 192.168.50.41 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.096/0.184/0.272/0.088 ms

# but psql fails
root@a7c1a37c3d54:/# psql -h 192.168.50.41 -p 5432 -U postgres
psql: could not connect to server: No route to host
        Is the server running on host "192.168.50.41" and accepting
        TCP/IP connections on port 5432?

If this is a bug and I have to set up the firewall on host 50.41, how should I set it up? (Especially on CentOS 7 and Ubuntu 16.04.)

Today I tried this on CentOS 7:

I added a rich rule in /etc/firewalld/zones/public.xml to include my containers’ subnet:

   <rule family="ipv4">
     <source address="172.18.0.0/16"/>
     <accept/>
   </rule>

Then I restarted firewalld: systemctl restart firewalld

After that, my container could connect to a service on the host.
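To verify that the rule was actually picked up after the restart, listing the rich rules of the zone should show it (assuming it was added to the public zone, as above):

firewall-cmd --zone=public --list-rich-rules
rule family="ipv4" source address="172.18.0.0/16" accept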


Thanks, it works for me.

Thanks! This worked like a charm!

One thing worth noting is that this rule applies to the default docker network (172.18.0.0/16).

If you have a container running on a different network, let’s say 173.16.0.0/16, then it is necessary to add a rule for it as well.
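If you’re not sure which subnet a given user-defined network uses, inspecting it prints the range to whitelist (the network name and the 172.19.0.0/16 output are just examples):

docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' my_custom_net
172.19.0.0/16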


I changed my OS from CentOS to Ubuntu 16.04 and I met this problem again… :’(

From inside my container,
I can ssh to any other host node in the swarm,
but I cannot ssh to the host that is running this container.

# ssh 192.168.53.31
ssh_exchange_identification: read: Connection reset by peer

UFW (the Ubuntu firewall) is running but inactive. I tried stopping the ufw service but it did not help.

Any help would be appreciated.

P.S.

Ooops,

The network administrator in my company set /etc/hosts.allow and hosts.deny to deny all sshd connections except from several IP address ranges.

I added 172.17. (for docker run) and 172.18. (for docker swarm services) to hosts.allow and it worked. :slight_smile:
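For anyone hitting the same thing, the hosts.allow entry was along these lines (the trailing dots make the patterns match the whole 172.17.x.x and 172.18.x.x ranges; the exact ranges depend on your docker networks):

# /etc/hosts.allow
sshd: 172.17. 172.18.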

Can you please explain why we should whitelist 172.18.0.0/16?
In my case I’ve whitelisted 172.17.0.0/16 (the docker0 interface) and 172.18.0.0/16 (the docker_gwbridge interface). Only after whitelisting both is it working fine.

The same didn’t work when I whitelisted just 172.17.0.0 and 172.18.0.0 (without the /16).

Hello,

Are you asking about the difference between 172.17.0.0/16 and 172.17.0.0 (without /16)?

As far as I know, 172.17.0.0/16 means the network prefix is the first 16 bits. This matches all IP addresses that begin with 172.17, for example 172.17.20.30, while 172.17.0.0 matches only that single address.

@gypark, you misunderstood the question. I’m trying to figure out why we need to whitelist the entire subnets of the docker0 and docker_gwbridge interfaces.

That worked for me! But I used the command line (sorry, I hate the XML).

firewall-cmd --permanent --zone=public --add-rich-rule='rule family=ipv4 source address=172.17.0.0/16 accept' && firewall-cmd --reload

Thanks!


Facing the same issue but none of the above suggestions are working for me.

I’m still having the same issue as well. Adding the rich rule didn’t help. I’m running docker on CentOS 8.

This is the post I made regarding the issue I’m facing.

Any help would be highly appreciated.

I did the same config, but it still does not work:

  <rule family="ipv4">
    <source address="172.17.0.0/16"/>
    <accept/>
  </rule>

Any idea what else I could do?

Hello,
I had the same problem on CentOS 8. I turned off the firewall with “systemctl stop firewalld.service” and it’s working now. So it is a problem with the firewall configuration.



Thanks, it helped me find the problem. In my case, it was an existing setup using Vagrant and two VMs based on the “centos/7” box plus Docker, which I migrated to the “generic/centos7” box. The latter box includes firewalld.service by default, whereas the former doesn’t.