Docker Community Forums


Support tap interface for direct container access (incl. multi-host)


(Michaelhenkel) #1

Expected behavior

define networking type for xhyve (-s 2:0,virtio-tap,tap)

Actual behavior

current networking is hardcoded to -s 2:0,virtio-vpnkit

Information

Docker for Mac: version: mac-v1.12.0-beta18-3-gec40b14
OS X: version 10.11.4 (build: 15E65)
logs: /tmp/20160710-185710.tar.gz
failure: docker ps failed: Failure("docker ps: timeout after 10.00s")
[ERROR] docker-cli
Connection refused (ECONNREFUSED) connecting to /var/run/docker.sock: check if service is running
Connection refused (ECONNREFUSED) connecting to /Users/birdman/Library/Containers/com.docker.docker/Data/s60: check if service is running
docker ps failed
[OK] app
[OK] menubar
[OK] virtualization
[OK] system
[OK] osxfs
[OK] db
[OK] slirp
[OK] moby-console
[OK] logs
[OK] vmnetd
[OK] env
[OK] moby
[OK] driver.amd64-linux

tap networking allows traffic to be routed between the host and the containers rather than NATing/proxying it.
This would enable host networking as well as macvlan and even multi-host networking.
I tested it manually with xhyve and it seems to work.
I saw that in older Docker for Mac versions the com.docker.driver.amd64-linux executable allowed providing an xhyve.args file.
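For reference, a quick sanity check that the tap backend is even available on the host; this is a sketch assuming the third-party tuntap kext is installed (e.g. via `brew install tuntap`), which is what provides the /dev/tapN devices:

```shell
# Check that the tuntap kext has created the tap devices:
ls /dev/tap*        # should list /dev/tap0, /dev/tap1, ...
# The feature request, then, is to let Docker for Mac boot hyperkit with
#   -s 2:0,virtio-tap,tap1
# instead of the hardcoded
#   -s 2:0,virtio-vpnkit
```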


(Michaelhenkel) #2

starting the daemon manually with:

sudo /Applications/Docker.app/Contents/MacOS/com.docker.hyperkit -A -m 4G -c 4 -u -s 0:0,hostbridge -s 31,lpc -s 2:0,virtio-tap,tap1 -s 3,virtio-blk,file:///Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2,format=qcow -s 4,virtio-9p,path=/Users/birdman/Library/Containers/com.docker.docker/Data/s40,tag=db -s 5,virtio-rnd -s 6,virtio-9p,path=/Users/birdman/Library/Containers/com.docker.docker/Data/s51,tag=port -s 7,virtio-sock,guest_cid=3,path=/Users/birdman/Library/Containers/com.docker.docker/Data,guest_forwards=2376 -l com1,autopty=/Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty,log=/Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/console-ring -f kexec,/Applications/Docker.app/Contents/Resources/moby/vmlinuz64,/Applications/Docker.app/Contents/Resources/moby/initrd.img,earlyprintk=serial console=ttyS0 com.docker.driverDir="/Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux", com.docker.database="com.docker.driver.amd64-linux" ntp=gateway -F /Users/birdman/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/hypervisor.pid

allows me to assign a routable IP address to moby:

moby:~# ip addr add 10.1.0.2/24 dev eth0
moby:~# ping 10.1.0.1
PING 10.1.0.1 (10.1.0.1): 56 data bytes
64 bytes from 10.1.0.1: seq=0 ttl=64 time=0.349 ms
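For completeness, the host (OS X) side of this setup; a sketch assuming the third-party tuntap kext is installed, since that is what backs /dev/tap1:

```shell
# Once hyperkit opens /dev/tap1, a tap1 interface appears on OS X.
# Give it the peer address so moby's 10.1.0.2 becomes reachable:
sudo ifconfig tap1 10.1.0.1 netmask 255.255.255.0 up
# Verify from the host:
ping -c 1 10.1.0.2
```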


(Michaelhenkel) #3

and to prove the point:

docker -H tcp://10.1.0.2:2375 network create -d macvlan --subnet 10.1.0.0/24 --ip-range 10.1.0.128/25 --gateway 10.1.0.1 -o parent=eth0 net2

docker -H tcp://10.1.0.2:2375 run -itd --name alp2 --net net2 alpine /bin/sh
93b85398e5356541d8f843c1ce19171cc3c56d217e889c7132dc0c539932c612

docker -H tcp://10.1.0.2:2375 exec -it alp2 ip addr sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: sit0@NONE: mtu 1480 qdisc noop state DOWN qlen 1
link/sit 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: mtu 1452 qdisc noop state DOWN qlen 1
link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
4: ip6gre0@NONE: mtu 1448 qdisc noop state DOWN qlen 1
link/[823] 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
14: eth0@ip6gre0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 02:42:0a:01:00:80 brd ff:ff:ff:ff:ff:ff
inet 10.1.0.128/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe01:80/64 scope link
valid_lft forever preferred_lft forever

ping 10.1.0.128
PING 10.1.0.128 (10.1.0.128): 56 data bytes
64 bytes from 10.1.0.128: icmp_seq=0 ttl=64 time=0.408 ms
64 bytes from 10.1.0.128: icmp_seq=1 ttl=64 time=0.204 ms

OS X can ping the container…
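A note on why no extra route was needed here: the macvlan range 10.1.0.128/25 lies inside 10.1.0.0/24, which is already on-link for the host once tap1 carries 10.1.0.1/24. If the container network used a different subnet, the host would additionally need a route via moby's tap address (the 10.2.0.0/24 subnet below is hypothetical):

```shell
# Example only: reach a container network 10.2.0.0/24 via the VM at 10.1.0.2
sudo route -n add -net 10.2.0.0/24 10.1.0.2
```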


(Almirkadric) #4

+1 on this feature

I thought this was a hyperkit limitation, but it turned out to be a configurability issue (or lack thereof).
Reference here: https://github.com/docker/hyperkit/issues/45

Ideally Docker would come with tap interfaces installed, similar to VirtualBox and VMware. Worst case, we should be able to extend Moby's feature set when the capability is available via third-party software (brew install tuntap).


Network bridge on host
Connect directly to container
IP Routing to container
(Almirkadric) #5

Forgot to mention the use case for this; it is a heavy blocker for us.

We use a 3rd-party library in which a lot of the ports are hardcoded. This prevents us from starting more than one service at a time locally for testing purposes. This is where the tap interface is ideal: we can route over it to the container and reach its IP address directly, avoiding the whole port-conflict issue.

And for those wondering, changing that 3rd party library to not contain hardcoded ports would be quite a heavy overhaul and not something we are equipped to do.


(Almirkadric) #6

When looking for similar requests I found several people asking for the same feature:


(IP Routing to container)
(Network bridge on host)

That being said, I'm not posting here without suggestions.
Ideally the GUI could let you set the network driver, along with additional options that change based on the selected driver (e.g. a list of tap interfaces when tap is selected), plus a link to instructions for installing tap interfaces on your host. To keep things simple for now, "Docker for Mac" does not need to provide the tap package itself (similar to how Kitematic works).


(Frank) #7

+1
Would love to see this issue fixed.


(Nicornk) #8

+1
This is a blocker for many users.


(Jens Ulrich Hjuler Pedersen) #9

This feature is extremely crucial for us to move forward with Docker for our development machines. We can still make it work with Toolbox, but this would be much easier to install and maintain, I’m sure.


(Onigoetz) #10

I would be very interested in this as well. Any news?


(Awtom) #11

Maybe it will be fixed with macOS Sierra?


(Michaelhenkel) #12

The statement on OS X limitations is misleading/not true.
A few posts above I proved that it works.
It's a limitation in the Docker client: we cannot select a tap interface.

Michael


(Stefan Foulis) #13

related issue: https://github.com/docker/for-mac/issues/171


(Almirkadric) #14

Issue #171 above was closed in favour of an earlier feature request;
however, it's still about the same issue.


(Mahoney266) #15

That GitHub issue seems to be getting no attention, which considering the amount of detail there is in this thread seems odd - any chance @michaelhenkel could post his observations directly into the issue to see if it sparks some life into it?


(Artemkaint) #16

@michaelhenkel I’ve tried to run docker daemon on sierra and have got

Artems-MacBook-Pro:abcp artemkaint$ sudo /Applications/Docker.app/Contents/MacOS/com.docker.hyperkit -A -m 4G -c 4 -u -s 0:0,hostbridge -s 31,lpc -s 2:0,virtio-tap,tap1 -s 3,virtio-blk,file:///Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2,format=qcow -s 4,virtio-9p,path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s40,tag=db -s 5,virtio-rnd -s 6,virtio-9p,path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s51,tag=port -s 7,virtio-sock,guest_cid=3,path=/Users/artemkaint/Library/Containers/com.docker.docker/Data,guest_forwards=2376 -l com1,autopty=/Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty,log=/Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/console-ring -f kexec,/Applications/Docker.app/Contents/Resources/moby/vmlinuz64,/Applications/Docker.app/Contents/Resources/moby/initrd.img,earlyprintk=serial console=ttyS0 com.docker.driverDir="/Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux", com.docker.database="com.docker.driver.amd64-linux" ntp=gateway -F /Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/hypervisor.pid
Password:
open of tap device /dev/tap1 failed
mirage_block_open: block_config = file:///Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 and qcow_config = None
com.docker.hyperkit: [INFO] image has 0 free sectors and 244203 used sectors
mirage_block_open: block_config = file:///Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 and qcow_config = None returning 0
mirage_block_stat
virtio-9p: initialising path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s40,tag=db
virtio-9p: initialising path=/Users/artemkaint/Library/Containers/com.docker.docker/Data/s51,tag=port
vsock init 7:0 = /Users/artemkaint/Library/Containers/com.docker.docker/Data, guest_cid = 00000003
linkname /Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
COM1 connected to /dev/ttys004
COM1 linked to /Users/artemkaint/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
rdmsr to register 0x64e on vcpu 0
rdmsr to register 0x34 on vcpu 0

and the script hung for a long time.
I suppose the problem is that the tap extension is missing on Sierra. Which OS did you use?


(Strayerror) #17

For anyone still needing it I’ve put together a repo to automate setting up a named bridge network in Docker for Mac, using the ideas in this thread, such that the host (OSX) can see the containers, and containers can see the host:

https://github.com/mal/docker-for-mac-host-bridge

It’s unable to automatically handle Docker daemon restarts just yet (see the warning in the Readme), but it’s usable and tested with 17.03 and 17.04. Hopefully some of you will find it useful too until this is better supported.


(Almirkadric) #18

Not bad, I'll give it a try and write up some feedback or a PR if applicable!


(Dimitrasz) #19

@almirkadric @strayerror I know this is off topic, but could you explain what the exact issue with the hyperkit docker configuration is that blocks this functionality? The solution looks good, but I'm struggling to actually understand the underlying problem. Also, if there is currently no way (outside of external hacks) for host/service<->container communication, where does the 192.168.65.1 address come from?


(Almirkadric) #20

@strayerror
Good timing: I just implemented the above the other day and got a nice externalised helper setup for this. I plan to turn it into a module that can be installed and used when needed. My implementation is a bit different from the above, but it behaves exactly the same way as Docker for Windows, so I opted to head in that direction.

To summarise the issue: Docker for Mac uses hyperkit under the hood to provide a host virtual machine on which the containers are created. However, the configuration for this virtual machine is hardcoded into the Docker for Mac executable and is not configurable in any way. Why does this matter? One part of this configuration is the network settings, i.e. how the host VM talks to your physical machine. Currently this is hardcoded to "virtio-vpnkit". What this driver does is create a VPN of sorts (using a unix socket on your physical machine) and bind specific ports to your physical machine (like a NAT). This means you cannot directly access the containers via their internal IPs, because there is no network interface bridging the physical machine and the host VM, and thus no way to create a network route to reach them.

What the above, and my implementation (hack!), do is inject an additional driver argument into the binary, which adds a tap interface to the host VM. This creates an eth1 interface in the host VM that binds to the /dev/tap1 block device. Whenever the host VM starts and opens this file descriptor, the tap system creates a network interface called tap1 on the physical machine. This is the bridge between the host VM and the physical machine. From there you can assign an IP to the interface on your physical machine and an IP to the interface inside the host VM, then create a route to your desired container network segment, giving you access to the containers via their internal IP addresses.
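The steps described above can be condensed into a sketch (the addresses and the container subnet are examples, and the tuntap kext is assumed to be installed):

```shell
# Inside the Host VM (moby), where the tap-backed NIC appears as eth1:
ip addr add 10.1.0.2/24 dev eth1
# On the physical machine, tap1 appears once the VM opens /dev/tap1;
# give it the peer address and route the container segment via the VM:
sudo ifconfig tap1 10.1.0.1 netmask 255.255.255.0 up
sudo route -n add -net 172.18.0.0/16 10.1.0.2   # example container subnet
```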

Why is this needed? Why not just map to the physical machine's ports? Well, in some instances ports are not configurable (SDKs with hardcoded ports, etc.), and if you need to run up a cluster with many of these containers you will run into port conflicts. To avoid port conflicts, the solution is to have multiple IPs you can connect to which all listen on the same port. But to do this you need a virtual network where this IP segment lives; hence the need for the above hacks.
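The port-conflict point can be demonstrated without Docker at all: two listeners on the same port coexist as long as each binds its own IP address (python3 in a heredoc is used here only as a portable socket tool, and the port number is arbitrary):

```shell
python3 - <<'EOF'
import socket
# Same hardcoded port for both "services": no conflict, because each
# socket binds a distinct address (127.0.0.1 and 127.0.0.2).
PORT = 54917
a = socket.socket(); a.bind(("127.0.0.1", PORT)); a.listen(1)
b = socket.socket(); b.bind(("127.0.0.2", PORT)); b.listen(1)
print("OK: same port on two IPs")
a.close(); b.close()
EOF
```

Binding 127.0.0.2 works out of the box on Linux; on OS X you would first add it as a loopback alias (`sudo ifconfig lo0 alias 127.0.0.2`).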

Ideally, what Docker for Mac should be doing is allowing the user to configure the drivers attached to the host VM (not hyperkit; it really is not a hyperkit issue, since hyperkit already provides the arguments to do this). What shape or form that takes is entirely up to the Docker for Mac team, but judging from the posts above and all the related issues, something is required by the community.

Now as for your other question, "where does the 192.168.65.1 address come from?": I'm assuming you're talking about the IP address of the eth0 interface inside the host VM, right? If so, try to ping that IP from your physical machine (not using docker commands); you will not be able to reach it. The reason is that this IP only exists within the host VM and has not been bridged to your physical machine. Because you can't route to 192.168.65.1, you also can't route over 192.168.65.1 to get to your container IPs.

In theory you could implement a bridge on your physical machine that connects the unix socket to a virtual network interface, but currently I am unaware of any solution that can do this (other than tap, as above, which requires a different driver to work). It is also much more complicated than the above, and honestly, having the above would give users the power to decide how they want this to work if for some reason they need to change it.

If something above doesn’t make sense, or you still need further details please let me know.

P.S. This problem doesn't exist on any other platform; it is purely a Docker for Mac issue. On Windows there is an hvint0 interface which bridges to the physical machine, and on Linux there is no VM being run in the first place, so you can just create the routes without issues.