Support tap interface for direct container access (incl. multi-host)

Expected behavior

define networking type for xhyve (-s 2:0,virtio-tap,tap)

Actual behavior

current networking is hardcoded to -s 2:0,virtio-vpnkit


Docker for Mac: version: mac-v1.12.0-beta18-3-gec40b14
OS X: version 10.11.4 (build: 15E65)
logs: /tmp/20160710-185710.tar.gz
failure: docker ps failed: Failure("docker ps: timeout after 10.00s")
[ERROR] docker-cli
Connection refused (ECONNREFUSED) connecting to /var/run/docker.sock: check if service is running
Connection refused (ECONNREFUSED) connecting to /Users/birdman/Library/Containers/com.docker.docker/Data/s60: check if service is running
docker ps failed
[OK] app
[OK] menubar
[OK] virtualization
[OK] system
[OK] osxfs
[OK] db
[OK] slirp
[OK] moby-console
[OK] logs
[OK] vmnetd
[OK] env
[OK] moby
[OK] driver.amd64-linux

Tap networking allows traffic to be routed between the host and the containers rather than NATed/proxied.
This will enable host networking as well as macvlan and even multi-host networking.
I tested it manually with xhyve and it seems to work.
I saw that in older Docker for Mac versions the com.docker.driver.amd64-linux executable allowed you to provide an xhyve.args file.


starting the daemon manually with:

sudo /Applications/ -A -m 4G -c 4 -u -s 0:0,hostbridge -s 31,lpc -s 2:0,virtio-tap,tap1 -s 3,virtio-blk,file:///Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2,format=qcow -s 4,virtio-9p,path=/Users/birdman/Library/Containers/com.docker.docker/Data/s40,tag=db -s 5,virtio-rnd -s 6,virtio-9p,path=/Users/birdman/Library/Containers/com.docker.docker/Data/s51,tag=port -s 7,virtio-sock,guest_cid=3,path=/Users/birdman/Library/Containers/com.docker.docker/Data,guest_forwards=2376 -l com1,autopty=/Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty,log=/Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/console-ring -f kexec,/Applications/,/Applications/,earlyprintk=serial console=ttyS0 com.docker.driverDir="/Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux", com.docker.database="com.docker.driver.amd64-linux" ntp=gateway -F /Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/

allows me to assign a routable IP address to moby:

moby:~# ip addr add dev eth0
moby:~# ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=0.349 ms
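
For completeness, the host side of that setup looks something like the following. This is a sketch only; the addresses here are hypothetical placeholders, not the ones used above:

# On the OS X host: tap1 appears once hyperkit opens /dev/tap1.
# Put it on the same (hypothetical) subnet as moby's eth0:
sudo ifconfig tap1 netmask up

# moby's eth0 (e.g. should now answer pings from OS X:
ping -c 1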


and to prove the point:

docker -H tcp:// network create -d macvlan --subnet --ip-range --gateway -o parent=eth0 net2

docker -H tcp:// run -itd --name alp2 --net net2 alpine /bin/sh

docker -H tcp:// exec -it alp2 ip addr sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: sit0@NONE: mtu 1480 qdisc noop state DOWN qlen 1
link/sit brd
3: ip6tnl0@NONE: mtu 1452 qdisc noop state DOWN qlen 1
link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
4: ip6gre0@NONE: mtu 1448 qdisc noop state DOWN qlen 1
link/[823] 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
14: eth0@ip6gre0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 02:42:0a:01:00:80 brd ff:ff:ff:ff:ff:ff
inet scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe01:80/64 scope link
valid_lft forever preferred_lft forever

PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=64 time=0.408 ms
64 bytes from icmp_seq=1 ttl=64 time=0.204 ms

OS X can ping the container…
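
For anyone reproducing this, the OS X side also needs a route for the macvlan subnet via moby's tap address, along the lines of (hypothetical addresses; macOS route syntax):

# Route the macvlan subnet via moby's eth0 address on the tap link:
sudo route -n add -net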


+1 on this feature

I thought this was a hyperkit limitation, but it turned out to be a configuration ability issue (or lack thereof)
reference here:

Though it would be ideal if Docker came with tap interfaces installed, similar to VirtualBox and VMware. Worst case, we should be able to extend Moby's feature set if the feature is available via third-party software (brew install tuntap).
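
For reference, the third-party route might look like the following. This is a sketch; the package name follows the suggestion above, and the device paths assume the common tuntap kext:

# Install the tuntap kernel extension (on newer Homebrew this may be
# a cask rather than a formula):
brew install tuntap

# Once the kext is loaded, the character devices the VM can open appear:
ls -l /dev/tap0 /dev/tap1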


Forgot to mention the use case for this: it is a heavy blocker for us.

We have a 3rd-party library we use, and a lot of its ports are hard-coded. This prevents us from starting more than one service at a time locally for testing purposes. This is where the tap interface is ideal, as we can route over it to the container and get direct access to the IP address, avoiding the whole port-conflict issue.

And for those wondering, changing that 3rd party library to not contain hardcoded ports would be quite a heavy overhaul and not something we are equipped to do.


When looking for similar responses I found several people asking for the same feature:

(IP Routing to container - #12 by akbhargava)
(Network bridge on host - #3 by vijaybose)

That being said, I'm not posting here without suggestions.
Ideally the GUI would allow you to set the network driver, along with some additional options that change based on the driver selected (e.g. a list of tap interfaces if tap is selected), and a link to instructions for installing tap interfaces on your host. For now, to keep things simple, Docker for Mac does not need to provide the tap package itself (similar to Kitematic).


Would love to see this issue fixed.

This is a blocker for many users.

This feature is crucial for us to move forward with Docker on our development machines. We can still make it work with Toolbox, but this would be much easier to install and maintain, I'm sure.


I would be very interested in this as well. Any news?


Maybe it will be fixed with macOS Sierra?

The statement on OS X limitations is misleading/not true.
A few posts above I proved that it works.
It's a limitation in the Docker client that we cannot select a tap interface.


related issue:

Issue #171 above was closed in favour of an earlier feature request;
however, it's still about the same issue.

That GitHub issue seems to be getting no attention, which seems odd considering the amount of detail in this thread. Any chance @michaelhenkel could post his observations directly into the issue to see if it sparks some life into it?

@michaelhenkel I tried to run the docker daemon on Sierra and got:

Artems-MacBook-Pro:abcp artemkaint$ sudo /Applications/ -A -m 4G -c 4 -u -s 0:0,hostbridge -s 31,lpc -s 2:0,virtio-tap,tap1 -s 3,virtio-blk,file:///Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2,format=qcow -s 4,virtio-9p,path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s40,tag=db -s 5,virtio-rnd -s 6,virtio-9p,path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s51,tag=port -s 7,virtio-sock,guest_cid=3,path=/Users/artemkaint/Library/Containers/com.docker.docker/Data,guest_forwards=2376 -l com1,autopty=/Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty,log=/Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/console-ring -f kexec,/Applications/,/Applications/,earlyprintk=serial console=ttyS0 com.docker.driverDir="/Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux", com.docker.database="com.docker.driver.amd64-linux" ntp=gateway -F /Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/
open of tap device /dev/tap1 failed
mirage_block_open: block_config = file:///Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 and qcow_config = None
com.docker.hyperkit: [INFO] image has 0 free sectors and 244203 used sectors
mirage_block_open: block_config = file:///Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 and qcow_config = None returning 0
virtio-9p: initialising path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s40,tag=db
virtio-9p: initialising path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s51,tag=port
vsock init 7:0 = /Users/artemkaint/Library/Containers/com.docker.docker/Data, guest_cid = 00000003
linkname /Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
COM1 connected to /dev/ttys004
COM1 linked to /Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
rdmsr to register 0x64e on vcpu 0
rdmsr to register 0x34 on vcpu 0

and the script hung for a long time.
I suppose the problem is that the tap kernel extension is missing on Sierra. What OS did you use?
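
In case it helps, the "open of tap device /dev/tap1 failed" line usually means the device node doesn't exist. A quick check (assuming the tuntap kext mentioned earlier in the thread):

# Is the tuntap kext loaded, and do its device nodes exist?
kextstat | grep -i tap
ls -l /dev/tap*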

Not bad, I'll give it a try and write up some feedback or a PR if applicable!

@almirkadric @strayerror I know this is off topic, but could you explain the exact issue with the hyperkit/Docker configuration that blocks this functionality? The solution looks good, but I'm struggling to understand the underlying problem. Also, if there is currently no way (outside of external hacks) for host/service<->container communication, where does the address come from?

Good timing. I just implemented the above the other day and got a nice externalised helper setup for this. I plan to turn it into a module that can be installed and used when needed. My implementation is a bit different to the above, but it behaves exactly the same way as Docker for Windows, so I opted to head in that direction.

To summarise the issue: currently Docker for Mac uses hyperkit under the hood to provide a Host Virtual Machine on which the containers are created. However, the configuration for this virtual machine is hard-coded into the Docker for Mac executable and is not configurable in any way. Why does this matter? Well, one part of this configuration is the network settings: how the Host Virtual Machine talks to your physical machine. Currently this has been hardcoded to "virtio-vpnkit". What this driver does is create a VPN of sorts (using a unix socket on your physical machine) and bind specific ports to your physical machine (like a NAT, or exactly a NAT?). However, this means you cannot directly access the containers via their internal IPs, as there is no network interface bridging the physical machine and the Host Virtual Machine, and thus no way to create a network route to get to them.

What the above, and my implementation (hack!), do is inject an additional driver argument into the binary, which adds a tap interface to the Host Virtual Machine. This creates an eth1 interface on the Host VM bound to the /dev/tap1 block device. Whenever the Host VM starts and opens this file descriptor, the tap system creates a network interface called tap1. This is the bridge between the Host VM and the physical machine. From here you can assign an IP to the interface on your physical machine and an IP address to the interface inside the Host VM, and create a route to your desired container network segment, giving you access to the containers via their internal IP addresses.
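
Concretely, the per-boot wiring just described might look like this (a sketch only; every address below is a hypothetical placeholder):

# 1. Inside the Host VM, where eth1 appeared via the injected
#    virtio-tap hyperkit argument:
ip addr add dev eth1
ip link set eth1 up

# 2. On the physical machine, where tap1 appeared once the VM
#    opened /dev/tap1:
sudo ifconfig tap1 netmask up

# 3. Route the desired container network segment via the VM's
#    tap address:
sudo route -n add -net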

Why is this needed? Why not just map to the physical machine's ports? Well, in some instances ports are not configurable (SDKs with hardcoded ports, etc.), and if you need to run up a cluster with many of these containers you will run into port conflicts. To avoid port conflicts, the solution is to have multiple IPs you can connect to which all listen on the same port. But to do this you need a virtual network where this IP segment will live, hence the need for the above hacks; see the sketch below.
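
To illustrate the port-conflict point, with routed container IPs two instances of a service with the same hard-coded port can coexist. The network name, image name, and subnet here are hypothetical:

# Two containers, same hard-coded port, different routable IPs:
docker network create --subnet projectnet
docker run -d --name svc1 --net projectnet --ip my-service
docker run -d --name svc2 --net projectnet --ip my-service
# With a host route to (as sketched above), each
# instance is reachable on its own IP at the same port.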

Ideally, what Docker for Mac should be doing (not hyperkit; this really is not a hyperkit issue, as hyperkit already provides the arguments to do it) is allowing the user to configure the drivers attached to the Host Virtual Machine. What shape or form this takes is entirely up to the Docker for Mac team, but looking above and at all the related issues, something is required by the community.

Now, as for your other question, "where does the address come from?": I'm assuming you're talking about the IP address of the eth0 interface inside the Host Virtual Machine, right? If so, try to ping that IP from your physical machine (not using docker commands); you will not be able to reach it. The reason is that this IP only exists within the Host VM and has not been bridged to your physical machine. Because you can't route to that address, you also can't route over it to get to your container IPs.

In theory you could implement a bridge on your physical machine which connects the unix socket to a virtual network interface, but I am currently unaware of any solution which can do this (other than tap, as above, which requires a different driver to work). It is also much more complicated than the above, and honestly, the above approach gives the user the power to decide how they want this to work, if for some reason they need to change it.

If something above doesn’t make sense, or you still need further details please let me know.

P.S. This problem doesn't exist on any other platform; it is purely a Docker for Mac issue. On Windows there is a hvint0 interface which bridges to the physical machine, and on Linux there is no VM in the first place, so you can just create the routes without issues.


@strayerror @michaelhenkel @dimitrasz
I just pushed up my version of the shim installer here. I chose not to contribute to your shim installer @strayerror, as my version works a little differently in terms of architecture: it doesn't bring up a docker network (which pollutes iptables with forward blocking and forces you to keep your containers on that network's subnet). Instead it creates a full bridge, allowing you to create many networks and route to them all (say, a network subnet for every project you might currently be working on), as sketched below. This is essentially how Docker for Windows behaves, keeping the behaviour consistent if you happen to work in both environments.
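
The resulting routing model is simple: over the one tap bridge, each project network just gets its own host route (subnets here are hypothetical placeholders):

# One tap bridge, many routed project subnets:
sudo route -n add -net    # project A
sudo route -n add -net    # project B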

Also, I split the install phase and the up phase to create fewer moving parts on repetitive runs (i.e. when the Host Virtual Machine is restarted).

Let me know what you think!!!