Host excluded from bridge network

Expected behavior

Host should be able to access the containers by their IP addresses.

Actual behavior

On a bridge network setup I am not able to perform any operations using the containers’ IP addresses.


On v1.11.1-beta11 I can’t access any of the containers from the host by using their IP addresses. I can execute commands and get into the boxes by running bash through docker exec, but whenever I try to access the containers by their IP addresses, those operations fail.

Essentially, I’m creating a bridged network and adding 4 containers to it, each with some ports exposed. On a Linux host I don’t have any issues pinging those containers from the host and accessing their open ports, whereas on Mac I’m not able to ping them or reach them at all. One difference I noticed is that on Mac there is no docker0 interface.

pinata diagnose -u
OS X: version 10.10.5 (build: 14F27) version v1.11.1-beta11
Running diagnostic tests:
[OK] docker-cli
[OK] Moby booted
[OK] driver.amd64-linux
[OK] vmnetd
[OK] osxfs
[OK] db
[OK] slirp
[OK] menubar
[OK] environment
[OK] Docker
[OK] VT-x
Docker logs are being collected into /tmp/20160512-110629.tar.gz
Most specific failure is: No error was detected

Host is OS X 10.10.5

Steps to reproduce the behavior

  1. create a bridge network with “docker network create samplenet”
  2. create a container and add it to the network with --net=samplenet
  3. ping the container’s IP address
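
The three steps above can be sketched as a single script. The `alpine` image and the inspect template are my additions, not from the report, and the guard makes it a no-op on machines without Docker:

```shell
#!/bin/sh
# Repro sketch: create a user-defined bridge network, attach a container,
# then try to ping its IP from the host (this last step fails on Docker for Mac).
if command -v docker >/dev/null 2>&1; then
    docker network create samplenet
    docker run -dit --name sample --net=samplenet alpine
    ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' sample)
    ping -c 3 "$ip" || echo "host cannot reach $ip"
    result=ok
else
    result=skipped    # docker CLI not present on this machine
fi
```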

Some extra information:

$ docker network ls
NETWORK ID          NAME                DRIVER
3b26e78bb9e3        bridge              bridge
38155239309b        host                host
ae14bf65d3b9        none                null
e3f14e4b91ad        mynet               bridge


$ docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "e3f14e4b91ada13a2571944b586b0a79581c4e81d03379225618c85197562961",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "",
                    "Gateway": ""
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "3718cd10214b551ff5c2f6fc5756d65be17994aab5844154145c07210f34af3c": {
                "Name": "driver",
                "EndpointID": "781a087f324cb84d09dbfb3621e8fce2d1a7f8a5271bb3afdce123ecc64822f8",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "",
                "IPv6Address": ""
            },
            "b23d578dcd7b9c288aaa860c9872cd6e977422810cc25a56199ac2afba27e344": {
                "Name": "prov",
                "EndpointID": "4adfd0f5d40b7a3a0a295a424b5c3cedef0da22c65af285db66710f532b2508d",
                "MacAddress": "02:42:ac:12:00:05",
                "IPv4Address": "",
                "IPv6Address": ""
            },
            "b336457f467a7221bc89e667db324259125e9e80604c113e8ba24b5031431a63": {
                "Name": "sdk",
                "EndpointID": "7afda3f559ee2e3a140acfa0f396edd5c5b82fd0b0cbeae4ffb74b9091c049c1",
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "",
                "IPv6Address": ""
            },
            "c1929b7d6c2ac578a6a00a262d8b554deb92aad17d6ef7151d2001199845f519": {
                "Name": "runtime",
                "EndpointID": "1f3489511231ec60b0392e44fa33932903e156e42236cd1a350036c73820d5da",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

…docker commands used…

docker network create mynet
docker run -dit --name runtime --net mynet runtime:latest
docker run -dit --name driver --net mynet driver:latest
docker run -dit --name sdk --net mynet sdk:latest
docker run -dit --name prov --net mynet prov:latest
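
To see which IPs those four containers actually received, the inspect output can be narrowed with a Go template. This is a sketch using the same `mynet` name, guarded for machines without Docker:

```shell
#!/bin/sh
# Print "name IPv4Address" for each container attached to mynet.
if command -v docker >/dev/null 2>&1; then
    docker network inspect \
        -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{printf "\n"}}{{end}}' mynet
    result=ok
else
    result=skipped    # docker CLI not present on this machine
fi
```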

ifconfig output (note there is no docker0 interface)

$ ifconfig
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
	inet6 ::1 prefixlen 128
	inet netmask 0xff000000
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
	nd6 options=1<PERFORMNUD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
en0:
	ether a4:5e:60:ee:09:bb
	inet6 fe80::a65e:60ff:feee:9bb%en0 prefixlen 64 scopeid 0x4
	inet netmask 0xffffff00 broadcast
	nd6 options=1<PERFORMNUD>
	media: autoselect
	status: active

	ether 6a:00:00:56:4f:f0
	media: autoselect <full-duplex>
	status: inactive

	ether 6a:00:00:56:4f:f1
	media: autoselect <full-duplex>
	status: inactive

	ether 06:5e:60:ee:09:bb
	media: autoselect
	status: inactive
awdl0:
	ether 5a:5f:f9:42:c9:5c
	inet6 fe80::585f:f9ff:fe42:c95c%awdl0 prefixlen 64 scopeid 0x8
	nd6 options=1<PERFORMNUD>
	media: autoselect
	status: active
bridge0:
	ether a6:5e:60:ee:2b:00
		id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
		maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
		root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
		ipfilter disabled flags 0x2
	member: en1 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 5 priority 0 path cost 0
	member: en2 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 6 priority 0 path cost 0
	nd6 options=1<PERFORMNUD>
	media: <unknown type>
	status: inactive

I’m having the same issue. Can’t reach the container by its IP.

Yes, at present we cannot route to containers from the host. We do have a tracking issue for this, but it is not clear when or if we can resolve it.

Hi, thanks. So this also prevents routing domain names to a webserver in a container, right? For example, I’m running a webserver in a container with a virtual host with a domain name. Normally I would add a line to /etc/hosts on my Mac host:

Where the IP address is that of the container running the webserver. But that’s not (yet) possible, if I understand this issue correctly?
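
For what it’s worth, a pattern that still works on Docker for Mac is to publish the webserver’s port to the host and point the hosts entry at the loopback address instead of at the container IP. A sketch, where the `nginx` image, the port numbers, and the `myvhost.local` name are placeholders, not from this thread:

```shell
#!/bin/sh
# Publish container port 80 on host port 8080, then address the vhost via loopback.
if command -v docker >/dev/null 2>&1; then
    docker run -d --name web -p 8080:80 nginx
    # /etc/hosts entry (added manually, needs sudo):  127.0.0.1  myvhost.local
    # then:  curl http://myvhost.local:8080/
    result=ok
else
    result=skipped    # docker CLI not present on this machine
fi
```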

Thanks @justincormack. Is the issue public? In that case, do you have the link?

For a person relatively new to Docker networking, is there any known workaround? I would have imagined this is a fairly common scenario for testing: we launch several containers, run tests over them, and then access some resources on the containers (e.g. HTTP pages, MySQL queries, …) to check the results.

No, the internal issue tracker is not public at present.

The easiest thing is to run the tests against the containers from another container, as containers have access to each other’s IPs. You have to do that if you use overlay networks anyway, as the host can never access those, so it is more consistent.
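
That workaround in concrete form: run the checks from a throwaway container attached to the same user-defined network, where the other containers are reachable by IP and (on user-defined networks) by name. A sketch, assuming the `mynet`/`runtime` names from above and a `busybox` image:

```shell
#!/bin/sh
# From a sibling container on mynet, the "runtime" container is reachable;
# from the Mac host itself it is not.
if command -v docker >/dev/null 2>&1; then
    docker run --rm --net mynet busybox ping -c 1 runtime
    result=ok
else
    result=skipped    # docker CLI not present on this machine
fi
```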

Right, well, it was pretty consistent until trying Docker for Mac :wink:

It was so consistent that it is the first thing you face when going to the docs: "When Docker starts, it creates a virtual interface named docker0 on the host machine. It randomly chooses an address and subnet from the private range defined by RFC 1918 that are not in use on the host machine, and assigns it to docker0. "

I mean, I already thought of your workaround, and thanks for suggesting it, but it would be great if Docker worked consistently with the docs on every platform. Hopefully it will get addressed soon.


OS X does not allow setting IP addresses on a bridge interface; Linux is unusual in allowing this.

FreeBSD will also let you assign an IP to a bridge. I’m pretty sure that OpenBSD does as well.

Is this related to the same problem I have when doing the opposite: trying to connect to the host from within the container using the host IP?

I get the host ip like this
docker exec -i xdebugdocker_php_1 /sbin/ip route|awk '/default/ { print $3 }'

I want Xdebug to connect to my editor on my computer. At the moment this only works if I use the Wi-Fi IP of my local computer.

I would really appreciate a solution for this, since it is a huge pain for me.
Since my containers expose a lot of ports, always manually looking up the random localhost ports that the container ports have been assigned to is not an option.
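
One thing that may help with that last point: host ports only become random when containers are started with `-P`; explicit `-p hostport:containerport` mappings stay fixed across restarts, so there is nothing to look up. A sketch with placeholder ports and image name:

```shell
#!/bin/sh
# Pin host ports explicitly instead of letting -P choose random ones.
if command -v docker >/dev/null 2>&1; then
    docker run -d --name xdebug-web -p 9000:9000 -p 8080:80 some/php-image
    docker port xdebug-web    # prints the fixed host:container mappings
    result=ok
else
    result=skipped    # docker CLI not present on this machine
fi
```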