Accessing containers inside an overlay network from the host machine is unstable

I’ve set up a Docker Swarm cluster (v1.11.2) to which I deploy services via docker-compose. I’ve created an external overlay network and defined it as the default network for each service in the docker-compose.yml files.

version: "2"
services:
    service1:
        image: my.reg:5000/service1:latest
        ports:
        - "10080:10080"
        dns_search:
        - .
        environment:
        - "affinity:container!=service1*" 

networks:
    default:
        external:
            name: myoverlay
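
For reference, the overlay network itself was created up front with something along these lines (the --subnet value is illustrative; it just matches the 10.0.0.0/16 range the containers end up in):

docker network create \
    --driver overlay \
    --subnet 10.0.0.0/16 \
    myoverlay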

I’m using my own Consul cluster outside of the Docker world for service registration, with one Consul agent running natively on each swarm node. The Docker containers obviously have to advertise themselves in Consul with their overlay address, so that they can find each other that way.
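
Roughly, each node-local agent is started like this (data dir and join address are placeholders, not my exact setup):

# -client 0.0.0.0 exposes the HTTP API on all interfaces,
# including docker_gwbridge (172.18.0.1)
consul agent \
    -data-dir /var/consul \
    -bind <node-ip> \
    -client 0.0.0.0 \
    -retry-join <consul-server-ip>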

As each container gets one address from the overlay network on eth0 and one from the local docker_gwbridge subnet 172.18.0.0/16 on eth1, I decided to have all containers talk to Consul via 172.18.0.1, since the native Consul processes listen on 0.0.0.0.
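
This is easy to check from the host (assuming ip is available in the image):

docker exec -it <container> ip -4 addr show
# eth0 -> 10.0.x.x   (overlay network "myoverlay")
# eth1 -> 172.18.x.x (docker_gwbridge)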

So: registration always goes to 172.18.0.1, and each container advertises itself with an address from 10.0.0.0/16.
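
In each container’s entrypoint the registration boils down to something like this sketch (service name and port are just examples; it assumes curl is available in the image and that the agent’s HTTP API listens on the default port 8500):

#!/bin/sh
# Determine this container's overlay address (eth0).
OVERLAY_IP=$(ip -4 -o addr show eth0 | awk '{print $4}' | cut -d/ -f1)

# Register with the node-local Consul agent via docker_gwbridge,
# but advertise the overlay address so other containers reach us there.
curl -s -X PUT http://172.18.0.1:8500/v1/agent/service/register \
    -d "{\"Name\": \"service1\", \"Address\": \"${OVERLAY_IP}\", \"Port\": 10080}"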

Registration works fine, and access from the containers to the outside world is not an issue either. To reach the containers from the host, I added a static route on each swarm node:

route add -net 10.0.0.0 netmask 255.255.0.0 gw 172.18.0.1
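
The same route in iproute2 syntax, plus a quick sanity check that the kernel picked it up:

ip route add 10.0.0.0/16 via 172.18.0.1
ip route get 10.0.0.3    # should print a route via 172.18.0.1 dev docker_gwbridge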

But connections through this route are unstable. Sometimes I can ping a container on the overlay network from the host machine; then, all of a sudden, the connection can no longer be established. What I get then is:

dennis@testing-01:~$ ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
From 172.18.0.1 icmp_seq=1 Destination Host Unreachable
dennis@testing-01:~$ ifconfig 
docker0   Link encap:Ethernet  HWaddr 02:42:18:4d:3f:56  
      inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
      inet6 addr: fe80::42:18ff:fe4d:3f56/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:5673 errors:0 dropped:0 overruns:0 frame:0
      TX packets:5179 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0 
      RX bytes:897736 (897.7 KB)  TX bytes:4439952 (4.4 MB)

docker_gwbridge Link encap:Ethernet  HWaddr 02:42:28:14:0e:12  
      inet addr:172.18.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
      inet6 addr: fe80::42:28ff:fe14:e12/64 Scope:Link
      UP BROADCAST MULTICAST  MTU:1500  Metric:1
      RX packets:185952 errors:0 dropped:0 overruns:0 frame:0
      TX packets:269592 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0 
      RX bytes:21059294 (21.0 MB)  TX bytes:30183068 (30.1 MB)
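
For what it’s worth, these are the checks I can run on the node the next time the ping starts failing (tcpdump may need to be installed first):

# Is there a failed or stale neighbour entry for the container?
ip neigh show to 10.0.0.3

# Does the ICMP traffic actually leave via docker_gwbridge?
tcpdump -ni docker_gwbridge icmp and host 10.0.0.3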

I don’t really understand why this is so unreliable. Can anyone point me in the right direction?


I want communication between a Docker container on an overlay network and a VM. How can I achieve this? Any solutions?