I am struggling with a pathetically simple test that involves communication between a server in a sandbox01 network and a Docker container running on my “Docker Host” server. This machine is in the same subnet as the other nodes in the sandbox01 network: it has an interface called ens34 on the 10.* address range, and an eth0 interface on the 9.* network, which allows it to access the outside world (to download packages, Docker images, etc.).
Anyway, here is a little diagram to illustrate what I have: Diagram illustrating communication between networks and the Docker container
BLUE BOX = WORKS FINE
RED BOX = DOES NOT WORK (why?)
The problem: a node in the sandbox01 subnet (10.* network) cannot communicate with the container. e.g., someserver.sandbox01 → mydocker2 : ens34 :: docker0 :: vethXXX → container
The mystery: After many tests, it was confirmed that the container can’t communicate with any other node in the 10.* network. It doesn’t behave as expected: replies were supposed to go out through its gateway, docker0 (172.17.0.1), and follow the routing table on the Docker host to reach “someserver.sandbox01” (10.1.21.59). It only works when we let iptables MASQUERADE the traffic. However, Docker automatically adds this rule: -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -c 0 0 -j MASQUERADE
Note the "! -o docker0" there: that rule masquerades container traffic leaving through any interface other than docker0, so traffic going out via ens34 should already be covered. Yet something is messing up the communication…
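For context, this is roughly how I have been inspecting that NAT rule and testing variants of it on the host (a sketch, assuming the default docker0 bridge on 172.17.0.0/16; interface names are specific to my box):

```shell
# list the NAT POSTROUTING rules Docker installed, with packet counters,
# to see whether the MASQUERADE rule is actually matching anything
iptables -t nat -L POSTROUTING -n -v

# same rules in iptables-save syntax, which matches the rule quoted above
iptables-save -t nat | grep MASQUERADE

# a narrower, hand-added variant that only masquerades container traffic
# leaving through ens34 (towards the sandbox01 subnet)
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 -o ens34 -j MASQUERADE
```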
The container responds fine to any communication coming through the 9.* IP (eth0) – i.e., I can send requests from my laptop – but never through the 10.* one (ens34). If I run a terminal inside the container, the container can ping ALL the IP addresses reachable via the host’s routes, EXCEPT, EXCEPT!!! the IP addresses in the 10.* range. Why???
[root@mydocker2 my-nc-server]# docker run -it -p 8080:8080 --name nc-server nc-server /bin/sh
sh-4.2# ping 22.214.171.124
PING 126.96.36.199 (188.8.131.52) 56(84) bytes of data.
64 bytes from 184.108.40.206: icmp_seq=1 ttl=117 time=124 ms
64 bytes from 220.127.116.11: icmp_seq=2 ttl=117 time=170 ms
^C
--- 18.104.22.168 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 124.422/147.465/170.509/23.046 ms
sh-4.2# ping 22.214.171.124
PING 126.96.36.199 (188.8.131.52) 56(84) bytes of data.
64 bytes from 184.108.40.206: icmp_seq=1 ttl=63 time=1.37 ms
64 bytes from 220.127.116.11: icmp_seq=2 ttl=63 time=0.837 ms
^C
--- 18.104.22.168 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
sh-4.2# ping 10.1.21.5
PING 10.1.21.5 (10.1.21.5) 56(84) bytes of data.
^C
--- 10.1.21.5 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 2999ms
sh-4.2# ping 10.1.21.60
PING 10.1.21.60 (10.1.21.60) 56(84) bytes of data.
^C
--- 10.1.21.60 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 2999ms
For some reason, this interface here doesn’t play well with Docker:
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.21.18 netmask 255.255.255.0 broadcast 10.1.21.255
Could this be related to the fact that the eth0 is the primary NIC for this Docker host?
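In case it helps anyone narrow this down, these are the kinds of checks I can run on the host (a sketch; chain contents and sysctl values will differ per box). I mention the FORWARD chain because, if I remember correctly, the RHEL/CentOS iptables service ships a final REJECT rule that silently drops forwarded traffic, which would fit the "works only when iptables is stopped" symptom:

```shell
# watch whether the replies ever leave docker0 and reach ens34
tcpdump -ni docker0 icmp
tcpdump -ni ens34 icmp

# inspect the FORWARD chain for a catch-all REJECT/DROP rule
# (and whether its packet counters climb during a failed ping)
iptables -L FORWARD -n -v

# reverse-path filtering can also drop asymmetric replies
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.ens34.rp_filter

# and forwarding must be enabled for docker0 <-> ens34 traffic at all
sysctl net.ipv4.ip_forward
```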
The workaround: In mydocker2 we need to stop iptables and add a new sub-interface under ens34 →
service iptables stop
ifconfig ens34:0 10.171.171.171 netmask 255.255.255.0
And in someserver.sandbox01 we need to add a new route →
route add -net 10.171.171.0 netmask 255.255.255.0 gw 10.1.21.18
Then the communication between them works. I know… bizarre, right?
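For the record, the same workaround expressed with the iproute2 tools, which is what I’d use on newer boxes (10.171.171.171 is just the arbitrary address picked above, and none of this survives a reboot):

```shell
# on mydocker2: secondary address on ens34 instead of the ifconfig alias
ip addr add 10.171.171.171/24 dev ens34 label ens34:0

# on someserver.sandbox01: route the new subnet via mydocker2's ens34 address
ip route add 10.171.171.0/24 via 10.1.21.18
```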
In case any of you wants to ask, no, I don’t want to use the " --net=host " option to replicate the interfaces from the docker host to my container.
So, thoughts? Suggestions? Ideas?