This is really weird to me. I have a container A running with “-p 8080:80”, and a container B (on the same host) that visits host-ip:8080 directly; the log then says: NO ROUTE TO HOST.
I tested the following things:
1. From within container B, “telnet host-ip 8080” also prints “NO ROUTE TO HOST”;
2. From my laptop, “telnet host-ip 8080” succeeds, which I took to mean there is no firewall on host-ip;
3. With the --link option, container A can reach containerB:80.
So, my question is this: as long as host-ip:8080 is listening, any program, whether in another container or on another “physical machine”, should be able to communicate with that port without any difference, right?
Could you please paste how you started the containers? In case you used docker stack deploy or docker-compose up, please paste the compose file as well.
In case you are using Docker for Mac (or a similar setup that uses a virtual machine), make sure that the IP you are referring to is the one of the Docker engine’s host, not the one on which the client is running.
The IP of the host can be found using this:
docker run -ti --rm --net=host qnib/httpcheck ip -o -4 add |grep eth
4: eth0 inet 192.168.65.2/24 brd 192.168.65.255 scope global eth0\ valid_lft forever preferred_lft forever
Thanks. Since the dev network can’t be reached from the Internet, I’m sorry I can’t paste the exact script.
I can describe the steps more specifically:
1. The server running Docker has an IP like '192.168.10.4'; call it 'appHost'.
2. Since this is just for testing, I used "docker run --name nginx_tmp -d -p 10000:80 nginx nginx -g 'daemon off;'" to start a temporary nginx server, which has an IP like '172.17.0.2';
3. I start the whole system from docker-compose.yml, in which a service called 'app_tomcat' adds a host entry like '- some-srv:192.168.10.4' in the extra_hosts section. app_tomcat has an IP like '172.18.0.1';
4. Then, inside 'app_tomcat', "telnet 192.168.10.4 10000" gives NO ROUTE TO HOST.
I worked around this by adding tmpnginx to the docker-compose file; I just want to figure out why…
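For reference, a minimal sketch of the setup described in the steps above (a hypothetical reconstruction, since the real compose file could not be posted; the image name is assumed):

```yaml
# Hypothetical reconstruction of the described docker-compose.yml (not the original file)
version: "2"
services:
  app_tomcat:
    image: tomcat                    # assumed image
    extra_hosts:
      - "some-srv:192.168.10.4"      # host entry from step 3
```

Adding nginx_tmp as a service in this file presumably works because app_tomcat can then reach it by container name over the shared compose network, instead of going out through 192.168.10.4 and the host firewall.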
This is a “known” bug. Everyone can access this port, except containers on the same host. You have to allow it through the firewall (yes, this is a firewall/Docker issue).
On host 192.168.50.41, I’ve installed PostgreSQL (natively, not in Docker).
Webapp containers running on the other two hosts, 192.168.50.40 and .42, are able to connect to PG at 192.168.50.41:5432.
But containers on host 192.168.50.41 itself are not.
# ping goes well
root@a7c1a37c3d54:/# ping -c 2 192.168.50.41
PING 192.168.50.41 (192.168.50.41) 56(84) bytes of data.
64 bytes from 192.168.50.41: icmp_seq=1 ttl=64 time=0.272 ms
64 bytes from 192.168.50.41: icmp_seq=2 ttl=64 time=0.096 ms
--- 192.168.50.41 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.096/0.184/0.272/0.088 ms
# but psql fails
root@a7c1a37c3d54:/# psql -h 192.168.50.41 -p 5432 -U postgres
psql: could not connect to server: No route to host
Is the server running on host "192.168.50.41" and accepting
TCP/IP connections on port 5432?
If this is a bug and I have to set up the firewall on host .41, how should I set it up? (Especially on CentOS 7 and Ubuntu 16.04.)
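A hedged sketch of what “allowing it” can look like on the two distributions mentioned (CentOS 7 ships firewalld; Ubuntu 16.04 commonly uses ufw). The subnet below assumes the default docker0 bridge range, 172.17.0.0/16; adjust it if your bridge uses a different range:

```shell
# CentOS 7 (firewalld): trust traffic whose source is the Docker bridge subnet
firewall-cmd --permanent --zone=trusted --add-source=172.17.0.0/16
firewall-cmd --reload

# Ubuntu 16.04 (ufw): allow the same subnet to reach the PostgreSQL port
ufw allow from 172.17.0.0/16 to any port 5432
```

Both changes need root privileges, and the firewalld rule only takes effect after the reload.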
Can you please explain why we should whitelist 172.18.0.0/16?
In my case I’ve whitelisted 172.17.0.0/16 (the docker0 interface) and 172.18.0.0/16 (the docker_gwbridge interface). Only after whitelisting both did it work fine.
The same didn’t work when I whitelisted 172.17.0.0 and 172.18.0.0 (without the /16 prefix).
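As for why the whole subnets are needed: traffic from a container to its own host keeps a source address from the bridge subnet (it is not rewritten to the host’s IP), so the firewall sees connections coming from 172.17.x.x or 172.18.x.x. Rather than assuming the default ranges, the actual subnets on a given host can be read off the networks themselves; the network names below are the standard ones:

```shell
# Print the subnet of the default bridge (docker0)
docker network inspect bridge -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
# Print the subnet of docker_gwbridge (present on swarm nodes)
docker network inspect docker_gwbridge -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```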
Are you asking about the difference between 172.17.0.0/16 and 172.17.0.0 (without /16)?
As far as I know, 172.17.0.0/16 means the network prefix is the first 16 bits. This matches all IP addresses that begin with 172.17, for example 172.17.20.30, while 172.17.0.0 matches only that single address.
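The prefix-length rule is easy to check with a quick script (a standalone illustration; the addresses are arbitrary examples):

```shell
# Check CIDR membership with Python's ipaddress module
python3 - <<'EOF'
import ipaddress
net = ipaddress.ip_network("172.17.0.0/16")            # first 16 bits are the network prefix
print(ipaddress.ip_address("172.17.20.30") in net)     # inside the /16 -> True
print(ipaddress.ip_address("172.18.20.30") in net)     # first 16 bits differ -> False
EOF
```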
@gypark, you misunderstood the question. I’m trying to figure out why we need to whitelist the docker0 and docker_gwbridge interfaces’ subnets entirely.
Hello,
I had the same problem on CentOS 8. I turned off the firewall with “systemctl stop firewalld.service” and it’s working now. So it is a problem with the firewall configuration.
Thanks, it helped me find the problem. In my case, it was an existing setup using Vagrant and two VMs based on the “centos/7” box + Docker that I migrated to the “generic/centos7” box. The latter box includes firewalld.service by default, whereas the former doesn’t.