Network Bridge Docker0 can't connect to local network

Hello,

I have a problem with the Docker Desktop environment on my MacBook.
From within a running container I can't connect to our enterprise network. If I try, for example, to call a REST service hosted on a server in the enterprise network, the container isn't able to reach it.

Yesterday I figured out the problem. The docker0 bridge uses the subnet 172.17.0.0/16 by default, and in our enterprise we use the same subnet.
The container that tries to connect to the server runs in a different network, created by docker-compose.

So after changing the docker0 subnet to 172.26.0.1/16, I was able to ping the server and even to call the service.

This morning, after returning to work, the solution no longer worked.

So my question: what am I doing wrong? After inspecting the network, it seems like nothing has changed.

Greets
Chris

A network declared in docker-compose is private to the stack. If the stack is "down"ed, the network will be removed, and a new one with a random IP range but the same name will be created.

Unless the IP ranges collide with your real networks, there is no need to bend the Docker networks.
If you want containers of different stacks to communicate with each other, create an external network, declare it in your compose file as external and assign it to your services, then use the {container name} to address the other containers. If {container name} alone does not work, try {container name}.{network name}.
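As a rough sketch (the network name shared_net, the service name web and the image are placeholders, not taken from this thread): first create the network once on the host:

docker network create shared_net

Then declare it as external in each stack's docker-compose.yml:

version: "3.7"
services:
  web:
    image: nginx
    networks:
      - shared_net
networks:
  shared_net:
    external: true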

Why do you even want to mix containers of your compose stack with containers from Docker0?

Hey Metin,

thanks for your reply. I don't have problems with the communication between docker-compose stacks. My problem is that the container (running in a stack) isn't able to ping a server within our enterprise network.
The stacks run within a network with the range 172.20.0.0/16, for example.
Even from those containers it isn't possible to ping or reach a server running in the 172.17.0.0/16 network of our enterprise.

If I ping the DNS name, e.g. server1.example.com, the IP, e.g. 172.17.1.44, is resolved. Then the container tries to ping that IP, resulting in something like "Host not reachable"…

So after changing the IP range of the docker0 bridge yesterday evening, everything was working fine. It seems to me that the daemon is looking for the IP within its own network. We used Wireshark to analyse the traffic on my Mac. When I pinged e.g. google.de, Wireshark reported the traffic. When we then pinged an internal server, no traffic was reported in Wireshark.

To me this looks like there is no communication from the daemon to the requested system, because the daemon is looking within its own network.

Yep, there is a collision between your local network(s) and the Docker networks. Changing the default bridge is the right way to go.

See: https://success.docker.com/article/how-do-i-configure-the-default-bridge-docker0-network-for-docker-engine-to-a-different-subnet

Of course, replace the value of bip with a CIDR that fits your environment.
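For example, a minimal daemon.json (assuming 172.26.0.1/16 is actually free in your environment; adjust the value as needed):

{
  "bip": "172.26.0.1/16"
}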

This is strange. When you updated the bip value to 172.20.0.0/16, the possible networks should not collide with your real network. Is the default gateway of your host pointing to an IP within the new range? Did you restart the Docker service? It should clean up the old iptables and route entries that prevented the communication and replace them with new rules matching your new network range.
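A rough sketch of what that looks like on a Linux host (with Docker Desktop on a Mac the engine runs inside a VM and is restarted from the whale menu instead):

sudo systemctl restart docker                   # recreate the bridge and its rules
sudo iptables -t nat -S | grep MASQUERADE       # the masquerade rules should now reference the new range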

Yes, of course I restarted everything. Look at the networks:

Docker0:

[{
    "Name": "bridge",
    "Id": "e5ac96f3406b02eb0e075a4f53aefc55569909effa6777dd2f875770b65f12f9",
    "Created": "2019-07-17T10:13:51.044271198Z",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
        "Driver": "default",
        "Options": null,
        "Config": [
            {
                "Subnet": "172.26.0.1/16",
                "Gateway": "172.26.0.1"
            }
        ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
        "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {},
    "Options": {
        "com.docker.network.bridge.default_bridge": "true",
        "com.docker.network.bridge.enable_icc": "true",
        "com.docker.network.bridge.enable_ip_masquerade": "true",
        "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
        "com.docker.network.bridge.name": "docker0",
        "com.docker.network.driver.mtu": "1500"
    },
    "Labels": {}
}] 

Docker Compose Network:

[
{
    "Name": "web_default",
    "Id": "4f93b4419641a319332bae8c74287a27d3cf5b582926af8eab827eae6aec0cff",
    "Created": "2019-07-16T14:50:03.749649496Z",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
        "Driver": "default",
        "Options": null,
        "Config": [
            {
                "Subnet": "172.18.0.0/16",
                "Gateway": "172.18.0.1"
            }
        ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
        "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
        "006a5cfd822b66cbaf9d77163a665c466886c1e982488f01b28ecda740f96b70": {
            "Name": "web_nginx_1",
            "EndpointID": "5738999bce1b7336fba83d98881289ad59c0e60b05deb68ef165617dad585000",
            "MacAddress": "02:42:ac:12:00:05",
            "IPv4Address": "172.18.0.5/16",
            "IPv6Address": ""
        },
        "34ada97feaef9a257111ec8b48c497afb74cd5c3a66dbdb672b3e6c719e35a8e": {
            "Name": "web_redis_1",
            "EndpointID": "92b66e1442478943d4d5db74f20e4a5629715cd8b28cdbd43a505590eb088686",
            "MacAddress": "02:42:ac:12:00:03",
            "IPv4Address": "172.18.0.3/16",
            "IPv6Address": ""
        },
        "64a46e3035afeb1bf731a40266b01f8d926b5e3202bb9f1b3b14215b0b977fad": {
            "Name": "web_mysql_1",
            "EndpointID": "f35fff8103d41b14cf5896b913a5f67e90196413b9b01b95bd72b994b4c45dac",
            "MacAddress": "02:42:ac:12:00:02",
            "IPv4Address": "172.18.0.2/16",
            "IPv6Address": ""
        },
        "a81291693d97105835993e69e1b3571143ed222fb5efdd6649a68e6331af7bd8": {
            "Name": "web_php-fpm_1",
            "EndpointID": "f001e95dc6eabd96a6414a267879f18f3a0b69d9f885de44cba54a0ba16dfb08",
            "MacAddress": "02:42:ac:12:00:04",
            "IPv4Address": "172.18.0.4/16",
            "IPv6Address": ""
        }
    },
    "Options": {},
    "Labels": {}
}

]

There was also a strange behaviour when changing the bip to 172.18.0.0/24. After this change the daemon didn't start and I had to reset my Docker environment.

The configurations look correct to me.
Seems like routes and/or iptables configurations have not been cleaned up properly.

Though, isn't 172.18.0.0/16 usually reserved for the docker_gwbridge network?

Maybe it is, I didn’t know that.

Ok, so you also don't have an idea what else could be the reason?

From my perspective: I would check the output of 'route' first to see whether a conflicting route exists, then check whether some bogus iptables rules exist that shouldn't. Apart from that, check the chain from your host's NICs to their gateway, and check whether the gateway itself has routes that overlap with your container IP ranges.

I assume that, if there is a firewall, all swarm-specific ports are opened or the firewall is disabled.

Can you give me a short introduction on how to do that? How can I check the 'route'?

Thanks a lot.

On Linux, route is actually the command :slight_smile:
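A rough sketch of what to look at (run on a Linux host, or inside the Docker Desktop VM on a Mac; the exact output depends on your setup):

route -n            # kernel routing table; look for entries overlapping 172.17.0.0/16
ip route            # the same information with newer tooling
sudo iptables -S    # list all rules; look for stale DOCKER / MASQUERADE entries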

You might want to ask your (Linux/Unix) ops guys in the company for assistance.

Would be nice if we had one :laughing:


Hi Christian, did you manage to find the root cause?
It's happening several times for us, and the only way we get over this problem now is to kill the VM and run the Docker stack on a new VM. The problem doesn't go away even after setting a non-conflicting IP range for containers, recreating the /var/lib/docker folder, etc.

Hi Sree,
yes, I found a workaround that solved the problem. I changed the daemon.json to the following.

{
  "experimental": false,
  "debug": true,
  "default-address-pools": [
    {
      "base": "172.26.0.0/16",
      "size": 24
    }
  ]
}

On a Mac you can find this file here:

~/.docker/daemon.json
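After editing the file, a sketch of how you might apply and verify the change (this assumes the stack is called web as earlier in the thread, and that docker-compose is used):

# restart Docker Desktop, then recreate the stack so its network is
# allocated from the new 172.26.0.0/16 pool
docker-compose down
docker-compose up -d

# check which subnet the recreated network got
docker network inspect web_default --format '{{json .IPAM.Config}}'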

Hope that helps you.

Cheers Chris