Support tap interface for direct container access (incl. multi-host)

+1
This is a blocker for many users.

This feature is crucial for us to move forward with Docker on our development machines. We can still make it work with Toolbox, but this would be much easier to install and maintain, I’m sure.

6 Likes

I would be very interested in this as well. Any news?

1 Like

Maybe it will be fixed with macOS Sierra?

The statement on OS X limitations is misleading/not true. A few posts above I proved that it works. The limitation is in the Docker client: we cannot select a tap interface.

Michael

related issue: https://github.com/docker/for-mac/issues/171

Issue #171 above was closed in favour of an earlier feature request; however, it’s still about the same issue.

That GitHub issue seems to be getting no attention, which seems odd considering the amount of detail there is in this thread. Any chance @michaelhenkel could post his observations directly into the issue to see if it sparks some life into it?

@michaelhenkel I’ve tried to run the Docker daemon on Sierra and got

Artems-MacBook-Pro:abcp artemkaint$ sudo /Applications/Docker.app/Contents/MacOS/com.docker.hyperkit -A -m 4G -c 4 -u -s 0:0,hostbridge -s 31,lpc -s 2:0,virtio-tap,tap1 -s 3,virtio-blk,file:///Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2,format=qcow -s 4,virtio-9p,path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s40,tag=db -s 5,virtio-rnd -s 6,virtio-9p,path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s51,tag=port -s 7,virtio-sock,guest_cid=3,path=/Users/artemkaint/Library/Containers/com.docker.docker/Data,guest_forwards=2376 -l com1,autopty=/Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty,log=/Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/console-ring -f kexec,/Applications/Docker.app/Contents/Resources/moby/vmlinuz64,/Applications/Docker.app/Contents/Resources/moby/initrd.img,earlyprintk=serial console=ttyS0 com.docker.driverDir="/Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux", com.docker.database="com.docker.driver.amd64-linux" ntp=gateway -F /Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/hypervisor.pid
Password:
open of tap device /dev/tap1 failed
mirage_block_open: block_config = file:///Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 and qcow_config = None
com.docker.hyperkit: [INFO] image has 0 free sectors and 244203 used sectors
mirage_block_open: block_config = file:///Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 and qcow_config = None returning 0
mirage_block_stat
virtio-9p: initialising path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s40,tag=db
virtio-9p: initialising path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s51,tag=port
vsock init 7:0 = /Users/artemkaint/Library/Containers/com.docker.docker/Data, guest_cid = 00000003
linkname /Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
COM1 connected to /dev/ttys004
COM1 linked to /Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
rdmsr to register 0x64e on vcpu 0
rdmsr to register 0x34 on vcpu 0

and the script has been hanging for a long time.
I suppose the problem is that the tap kernel extension is missing in Sierra. What OS have you used?
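
For anyone who hits the same “open of tap device /dev/tap1 failed” error: a quick sanity check is whether the tuntaposx kernel extension is installed and loaded at all. The commands below are only a rough sketch; the kext name and install path are typical for a tuntaposx install and may differ on your machine.

# List loaded kernel extensions and look for the tun/tap driver
kextstat | grep -i tun
# If it is missing, try loading the tap kext from its usual tuntaposx install location
sudo kextload /Library/Extensions/tap.kext
# The driver should then expose the /dev/tapN devices that the hyperkit argument refers to
ls /dev/tap*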

Not bad, I’ll give it a try and write up some feedback or a PR if applicable!

@almirkadric @strayerror I know this is off topic, but could you explain what exactly in the hyperkit Docker configuration blocks this functionality? The solution looks good, but I’m struggling to actually understand the underlying problem. Also, if there is currently no way (outside of external hacks) for host/service<->container communication, where does the 192.168.65.1 address come from?

@strayerror
Good timing, I just implemented the above the other day and got a nice externalised helper setup for this. I plan to turn it into a module that can be installed and used when needed. My implementation is a bit different from the above, but it behaves exactly the same way as Docker for Windows, so I opted to head in that direction.

To summarise the issue: Docker for Mac currently uses hyperkit under the hood to provide a Host Virtual Machine on which the containers are created. However, the configuration for this virtual machine is hard coded into the Docker for Mac executable and is not configurable in any way. Why does this matter? Well, one part of this configuration is the network settings, i.e. how the Host Virtual Machine talks to your physical machine. Currently this is hardcoded to “virtio-vpnkit”. What this driver does is create a VPN of sorts (using a Unix socket on your physical machine) and bind specific ports to your physical machine (much like a NAT). However, this means you cannot directly access the containers via their internal IPs, as there is no network interface bridging the physical machine and the Host Virtual Machine, and thus you cannot create a network route to get to them.
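
To make that concrete, here is roughly what the relevant hyperkit slot arguments look like. This is only a sketch: the vpnkit socket path is a placeholder and the spare slot number is an assumption, but the virtio-tap form matches the command posted further up this thread.

# Default, hard coded by Docker for Mac: guest networking goes through vpnkit over a Unix
# socket on the host, so only explicitly published ports ever reach your Mac
-s 2:0,virtio-vpnkit,path=/Users/<you>/Library/Containers/com.docker.docker/Data/<vpnkit-socket>

# What the hack injects: an extra tap-backed NIC, which appears as eth1 inside the Host VM
# and as tap1 on the Mac once the VM opens /dev/tap1
-s <free-slot>,virtio-tap,tap1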

What the above and my implementation (hack!) do is inject an additional driver argument into the binary, which adds a tap interface to the Host Virtual Machine. This creates an eth1 interface inside the Host VM, backed by the /dev/tap1 device. Whenever the Host VM starts and opens this file descriptor, the tap system creates a network interface called tap1 on the physical machine. This is the bridge between the Host VM and the physical machine. From here you can assign an IP to the interface on your physical machine and an IP address to the interface inside the Host VM, and create a route to your desired container network segment, thus giving you access to the containers via their internal IP addresses.
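
A minimal sketch of those steps, assuming the tap1/eth1 naming above; the addresses and the container subnet are only examples, so pick ones that do not clash with your own networks.

# On the macOS host, once the Host VM is up and tap1 exists
sudo ifconfig tap1 inet 10.0.75.1 netmask 255.255.255.0 up

# Inside the Host VM (e.g. attached to its console/tty), give the new eth1 the peer address
ip addr add 10.0.75.2/24 dev eth1
ip link set eth1 up

# Back on the host, route the container subnet via the VM side of the tap link
sudo route -n add -net 172.18.0.0 -netmask 255.255.0.0 10.0.75.2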

Why is this needed? Why not just map to the physical machine’s ports? Well, in some instances ports are not configurable (SDKs with hardcoded ports, etc.), and if you need to spin up a cluster with many of these containers you will run into port conflicts. To avoid port conflicts, the solution is to have multiple IPs you can connect to, all listening on the same port. But to do this you need a virtual network where this IP segment will live, hence the need for the above hacks.
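
As an example, assuming the tap bridge and route from the sketch above are in place, two instances of a service with a fixed port can live side by side on their own IPs (the network name, subnet and image here are just placeholders):

# A user-defined network whose subnet falls inside the route added earlier
docker network create --subnet 172.18.1.0/24 cluster-net

# Two containers listening on the same hardcoded port, but on different IPs
docker run -d --net cluster-net --ip 172.18.1.10 --name node1 redis
docker run -d --net cluster-net --ip 172.18.1.11 --name node2 redis

# Both are reachable from the Mac on the default port, without any -p/--publish mappings
nc -vz 172.18.1.10 6379
nc -vz 172.18.1.11 6379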

Ideally, what Docker for Mac should be doing (not hyperkit, as this really is not a hyperkit issue; hyperkit already provides the arguments to do it) is allowing the user to configure the drivers attached to the Host Virtual Machine. What shape or form this takes is entirely up to the Docker for Mac team, but looking at this thread and all the related issues, something is clearly required by the community.

Now, as for your other question, “where does the 192.168.65.1 address come from?”: I’m assuming you’re talking about the IP address of the eth0 interface inside the Host Virtual Machine, right? If so, try to ping that IP from your physical machine (not using Docker commands); you will not be able to reach it. The reason is that this IP only exists within the Host VM and has not been bridged to your physical machine. Because you can’t route to 192.168.65.1, you also can’t route over 192.168.65.1 to get to your container IPs.
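
A quick way to see this for yourself from the macOS host (output abridged; behaviour may vary between Docker for Mac versions):

# The VM-internal address does not answer, because nothing bridges the host into that network
ping -c 2 192.168.65.1
# Request timeout for icmp_seq 0 ...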

In theory you could implement a bridge on your physical machine which connects the Unix socket to a virtual network interface, but currently I am unaware of any solution which can do this (other than the tap approach above, which requires a different driver to work). It is also much more complicated than the above, and honestly, having the above would give users the power to decide how they want that to work if, for some reason, they need to change it.

If something above doesn’t make sense, or you still need further details please let me know.

P.S. This problem doesn’t exist on any other platform; it is purely a Docker for Mac issue. On Windows there is an hvint0 interface which is a bridge to the physical machine, and on Linux there is no VM being run in the first place, so you can just create the routes without issues.

1 Like

@strayerror @michaelhenkel @dimitrasz
I just pushed up my version of the shim installer here: https://github.com/AlmirKadric-Published/docker-tuntap-osx. I chose not to contribute to your shim installer, @strayerror, as my version works a little differently in terms of architecture: it doesn’t bring up a Docker network (which pollutes iptables with forward-blocking rules and forces you to keep your containers on that network’s subnet). Instead it creates a full bridge, allowing you to create many networks and route to them all (say, a network subnet for every project you might currently be working on). This is essentially how Docker for Windows behaves, keeping the behaviour consistent if you happen to work in both environments.

Also, I split the install phase and the up phase to have fewer moving parts on repeat runs (i.e. whenever the Host Virtual Machine is restarted).
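
Roughly, usage looks like this (a sketch only; check the repo README for the exact script names and route values, as they may differ):

git clone https://github.com/AlmirKadric-Published/docker-tuntap-osx.git
cd docker-tuntap-osx

# One-off install: patches the Docker for Mac hyperkit invocation to add the tap device
./sbin/docker_tap_install.sh

# After every Docker for Mac restart: bring the tap link up and assign the host/VM addresses
./sbin/docker_tap_up.sh

# Then add routes for whichever container subnets you want to reach via the VM side of the link
sudo route -v add -net 172.18.0.0 -netmask 255.255.0.0 10.0.75.2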

Let me know what you think!!!

2 Likes

Time to make some waves; this issue has been going on for too long and enough is enough.





Yes, that makes perfect sense now! Thank you so much for taking the time to explain. I’ll check out your project and will let you know! Thanks!

1 Like

@almirkadric

Happy you got it working for your use case! It seems to offer more familiar ground to those who already have experience with Docker for Windows, as well as added extensibility regarding multiple Docker networks.

As a Linux user (without regular access to a Mac) trying to better support co-workers in a local environment with a single Docker network, the aims of my shim were very tightly scoped: achieve the minimum amount of difference between the experience on Linux and that on OSX, and support at least one Docker network of which the host was automatically a member. To that end, your solution probably isn’t ideal for me, and the extra steps around creating a privileged container and having to manually configure routes make me shy away from it.

That said, I know there’s a definite hunger in the community for multiple-network support, and I’m glad you’ve (if you’ll excuse the pun) bridged the gap :smiley: and indeed that you’re willing to champion this issue, as the sooner our hacks are no longer required and this functionality has proper baked-in support, the better for everyone!

:confetti_ball:

1 Like

Guys, I managed to route traffic to and from containers using a DNS-based solution for Docker on Linux and on Mac. I tried the tap solution but it didn’t work for me.

There are some issues, like not being able to maintain an open connection to a container for a long time, but you can access services inside the container without publishing any ports.

https://github.com/zanaca/docker-dns

1 Like

@zanaca Looks interesting, will take a deeper look once I get the chance.

Regarding your issues with the TAP solution, could you clarify what problems you had?
That way we can help others who may stumble upon this thread.

Amazing!
After searching some hundreds of web pages, I finally found this working (with minor adjustments/typos).
Thanks! It’s great!

1 Like

I detailed a workaround for K3s with TunTap and MetalLB but it feels kludgy for sure. Dropping a link here in case it proves valuable for anyone who doesn’t want to run Kubernetes in a *nix VM on Mac.