Confused about networking defaults, how to get VM IP with native port forwarding off?

Running Beta 10, I’m trying to run a container with a DNS server in it, so it exposes port 53. If I use native port forwarding, this clashes with an existing DNS server.

How do I go back to the old behaviour where I could bind to the VM address which doesn’t have any services listening by default?

@lox99 For now, you can run pinata set native/port-forwarding false; that will revert to the older behaviour of exposing ports on the VM’s IP instead of localhost. This is likely to be deprecated in a later beta.
If you could explain your use case for running the two DNS servers, one inside Docker, one outside, that would be really helpful!
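
For the DNS case in the original question, that would look something like this; the image name is a placeholder, and you may need to restart Docker for Mac for the setting to take effect:

    pinata set native/port-forwarding false
    # Hypothetical DNS image; publish 53 on both UDP and TCP
    docker run -d --name dns -p 53:53/udp -p 53:53 my/dns-image

With forwarding off, the published port should then show up on the VM’s address rather than on the Mac’s localhost.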

When Docker initially announced the Mac native beta, it sounded like Docker Machine would still be a supported option to run Docker in a VM.

(The behavior you describe is identical to the way Docker on Linux works, and this is one of two reasons my normal, involved setup is to run Consul as a DNS server with --net host. Also, that specific setup is still broken on Beta 10, but I think it’s the same --net host issue everyone else was having with Beta 9.)

On Docker for Linux you cannot run a DNS server locally and publish one as well, so this is just the same. You should be able to run the native one bound to a different address though.
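
To make the Linux comparison concrete, the usual split there is to keep the local resolver on one address and publish the container’s port 53 on another; the addresses and image name below are only examples:

    # Local DNS server configured to listen only on 127.0.0.1
    # (e.g. dnsmasq with listen-address=127.0.0.1)
    # Container's DNS published only on the machine's LAN address:
    docker run -d --name dns -p 192.168.1.10:53:53/udp -p 192.168.1.10:53:53 my/dns-image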

I would like to run Consul in a container as a DNS server, and I’m looking for a way to set the daemon’s DNS option to the bridge IP address so that containers automatically use Consul as the DNS server to locate different services. However, with Docker for Mac I’m not following how to get the bridge (docker0) IP so that all the containers can access Consul. Any ideas on how to get this working?
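
For what it’s worth, you can read the bridge gateway from the daemon itself, but on Docker for Mac that address lives inside the Linux VM rather than on the Mac. A sketch of a more portable approach is to skip docker0 entirely and give the DNS container a fixed address on a user-defined network; the subnet and the example/consul-dns image below are illustrative, not from this thread:

    # Gateway of the default bridge (typically 172.17.0.1)
    docker network inspect bridge | grep Gateway

    # Portable alternative: fixed container IP on a user-defined network, passed via --dns
    docker network create --subnet 172.30.0.0/24 dns-net
    docker run -d --name consul --net dns-net --ip 172.30.0.2 example/consul-dns
    docker run --rm --net dns-net --dns 172.30.0.2 alpine nslookup web.service.consul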

Thanks, that sort of works. We are using a DNS server for service discovery. The issue (and frankly the issue with relying on localhost for exposed ports) is that things are often already on the ports you require. OS X binds to port 53 for mDNSResponder:

~  › sudo lsof -i | grep 'domain (LISTEN)'
mDNSRespo   92 _mdnsresponder   31u  IPv4 0x1ea42481b1556ccb      0t0    TCP *:domain (LISTEN)
mDNSRespo   92 _mdnsresponder   32u  IPv6 0x1ea42481b12e4c7b      0t0    TCP *:domain (LISTEN)

This leads to clashes, and it keeps leading to clashes as you get more containers and more complicated setups.
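
The blunt workaround, at the cost of non-standard ports on the Mac side, is to publish the container’s port 53 on a host port that nothing else owns; 5300 and the image name here are arbitrary:

    # Publish the container's DNS on an unused host port
    docker run -d --name dns -p 5300:53/udp -p 5300:53 my/dns-image
    # Query it explicitly on that port from the Mac
    dig @127.0.0.1 -p 5300 myservice.example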

Using localhost seems like a terrible, terrible idea to me. Surely it would make more sense to provide a virtual interface/IP for each Docker network?

I am having the same issue, though my use case is different.
We have some services that are shared between multiple teams. These services run on localhost and use certain ports. My team also has some projects that use Docker, and these projects expose ports that clash with the services running on localhost. We could change the port mappings of our containers, but that would create a lot of work, and it also no longer gives us the “write once, run everywhere” behaviour that Docker provided before.
I would love to have docker.local back.

I don’t think docker.local should come back; there are too many issues with .local resolution. A different interface for Docker, or a different interface per Docker network, would be my preference, and a .docker resolver for those would make me happy.

Anyone from the Docker team inclined to comment?

I had been failing to get --net=host to work at all since installing Beta 10 (I did not have 9).
Thanks for the native/port-forwarding false workaround; that indeed makes it possible again.
If it is deprecated, however, I don’t understand how Docker’s host networking is going to work at all.
What am I missing?

--net=host is an absolute requirement. I’m trying to get Kubernetes working (by running the k8s services in containers, not as daemons on the host). This works with a normal VM and in Docker for Mac; however, in Docker for Mac there is no way to reliably reach the host VM from the Mac since docker.local is now gone.

My Kubernetes configuration:

The docker-compose file:

--net host would be awesome, but it is also extremely complex.

To understand why, let’s examine how Docker Mac works, in particular the network stack.

Docker Mac includes a Linux VM (nicknamed “Moby”). It runs as a regular process on OS X. If you have already used something like QEMU, this is very similar: a regular process, running as a regular user (not root, no special privileges). You may also know that QEMU on its own is extremely slow and limited. To make Docker Mac usable, it relies on Hypervisor.framework (which gives access to virtualization instructions; think of it as hardware acceleration for VMs!) and on a number of other subsystems.

One subsystem named VPNKit is responsible for network connectivity. In a traditional VM environment, when a process inside the VM tries to send packets to the outside world, the packets go to a virtual network interface, are passed to the hypervisor, and the hypervisor injects the packets on your local network interface. Injecting arbitrary packets is not something that ordinary (non-privileged) processes can do. Therefore, when a process inside Moby (the Linux VM) tries to connect to the outside world, VPNKit intercepts the outgoing traffic and reconstructs it using normal (non-privileged) system calls. In other words, when you do curl google.com in a container on Docker Mac, VPNKit will:

  • detect that you are trying to open a TCP connection to the outside world
  • establish a TCP connection (using normal system API calls like socket(), connect(), etc.)
  • reconstruct the TCP session so that the Linux VM (and ultimately your container running curl) thinks that it’s talking to the actual service

This is (kind of!) similar to masquerading, but taken to the “network traffic analysis / userland reconstruction” level. The idea is not new; this is how QEMU “slirp mode” works.

By the way, VPNKit is called VPNKit because this technique (which might seem overly complicated!) is the only way to make Docker Mac work reliably with all kinds of VPNs out there. If you are using any kind of enterprise VPN, you need this special mode to get your container traffic going.

Now, why is --net host complicated? Because VPNKit works automatically for outgoing traffic, but it needs help for incoming traffic. When an external connection is made to, e.g., port 80, VPNKit won’t be able to handle it unless it’s already listening on port 80. When you start a container with -p 80:1234, VPNKit is informed that you want to expose something on port 80, and it will set up a listening socket there. Otherwise (e.g. if you run a container in host mode) it has no way to know.
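
You can see this from the Mac side if you’re curious: publish a port, and the listener that shows up is the Docker for Mac process itself rather than anything inside the VM (the exact process name for VPNKit, something in the com.docker.* family, has varied between betas):

    docker run -d -p 8080:80 nginx
    sudo lsof -iTCP:8080 -sTCP:LISTEN
    # Expect a com.docker.* / vpnkit process listening on *:8080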

So, what’s the solution?

  1. If you don’t need the upsides of VPNKit, you can always revert to Docker-in-VirtualBox using the Docker Toolbox. In fact, I have Docker Mac and the Docker Toolbox installed side by side on this machine and it works like a charm. Problem solved!

  2. If you have ideas to make it work, you can contribute to VPNKit. It is open source, and your contributions will not only make Docker Mac better, but also be usable by other projects (e.g. if some day other container engines try to have a native experience like Docker Mac).

  3. If you think that Docker Inc. (the company) should spend more time and resources on this, you can vote with your wallet 🙂 Keep in mind that Docker Inc. does not charge a cent for Docker Mac, and that while Docker Mac itself is not open source, all the key components (VPNKit and others) are open source and available to the community.

Thank you!

Thanks for the reply, Jérôme. I can understand that host networking would be complicated for those reasons. The sticking point that is echoed in a lot of these posts isn’t so much --net host support in Docker for Mac, but a way to route traffic into docker containers from the Mac without mapping ports to localhost. Any decently complex docker setup is going to have port collisions with things running on localhost already or with each other.

Is the idea that at some point docker networks might be route-able from the Mac side, or is this a “your pull requests are welcome” thing?

Thanks for the great write-up, @jpetazzo. I don’t think that much information about Docker for Mac networking existed in any one place.

  1. If you don’t need the upsides of VPNKit, you can always revert to Docker-in-VirtualBox using the Docker Toolbox. In fact, I have Docker Mac and the Docker Toolbox installed side by side on this machine and it works like a charm. Problem solved!

The one thing that draws us to Docker for Mac is filesystem events propagating from OS X / macOS into containers. The lack of that is a major blocker especially for front-end JS/CSS development. My understanding is the osxfs component of Docker for Mac provides this, and that osxfs is not integrated in Docker Toolbox. I’d love to find out that that’s not true.

Any decently complex docker setup is going to have port collisions with things running on localhost already or with each other.

You’re absolutely right. In the long run, I can imagine that Moby (the Linux VM that powers Docker Mac) might run a custom kernel with an ad-hoc TCP stack that will defer all attempts to bind TCP ports to the OSX host. In other words, if you try to map a port that is already bound on the OSX host, you’d get EADDRINUSE, giving you a good hint about what’s going on.
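
For comparison, this is roughly how that failure already behaves on Docker for Linux, where the bind happens directly on the host (the image name is a placeholder):

    # On a Linux host where a local resolver already owns port 53:
    docker run -d -p 53:53/udp my/dns-image
    # ...fails at start-up with an "address already in use" error,
    # because the daemon tries to bind 0.0.0.0:53 directly on the host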

From a technical standpoint, this is not very hard; but it would be a custom kernel patch, which would have almost zero chance of being integrated upstream (I can’t imagine kernel maintainers accepting a patch to facilitate the work of a 3rd party app for the special case when the kernel is running in a VM on OSX!), so maintenance would be costly in the long run.

Perhaps there are better solutions (my knowledge of these topics is only partial).

Is the idea that at some point docker networks might be route-able from the Mac side, or is this a “your pull requests are welcome” thing?

Ideally, yes, that would be wonderful. But that poses a significant problem: I think that this would require a virtual interface on the OS X side, and there are security implications linked to that. I’ll be straight honest here: I don’t understand the security implications, and even if I did, I don’t know how much I could tell (all I know is that a report was made to Apple’s security team and that work is in progress there, but that’s pretty much it). 😕

That being said, I do most of my Docker work on Linux but I rarely (if ever) need to route Docker networks from my machine. When I need to “break into” a network, I spin up an alpine container connected to the network and I do my thing. If I need to connect to a private service, I often start a one-off container (e.g. jpetazzo/hamba) to redirect traffic. This is particularly helpful because this workflow translates seamlessly to clusters (where I wouldn’t be able to connect to a random container anyway). I hope this makes sense!
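
Concretely, the “break into the network” workflow is just this (the network and service names are examples):

    docker network create mynet
    docker run -d --name web --net mynet nginx
    # One-off debug container on the same network; names resolve via Docker's embedded DNS
    docker run --rm -it --net mynet alpine sh -c 'wget -qO- http://web/'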

The one thing that draws us to Docker for Mac is filesystem events propagating from OS X / macOS into containers. The lack of that is a major blocker especially for front-end JS/CSS development. My understanding is the osxfs component of Docker for Mac provides this, and that osxfs is not integrated in Docker Toolbox. I’d love to find out that that’s not true.

You’re right. It’s a choice: either the VPN integration plus filesystem sharing, or the more flexible network access with VirtualBox.

However, you can get filesystem sharing with VirtualBox if you set it up yourself. It’s a minor PITA but it’s totally doable (that’s how people were doing it before Docker Mac). I agree that it’s not totally straightforward (and heck, that was one of the huge motivations for Docker Mac!) but it doesn’t require extra software or licenses or whatevers.
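
If you go that route, the usual starting point is docker-machine with the VirtualBox driver, which shares /Users into the VM by default (inotify events don’t propagate over that share, though); the path below is just an example:

    docker-machine create -d virtualbox default
    eval "$(docker-machine env default)"
    # /Users is shared into the boot2docker VM by default, so bind-mounts under it work:
    docker run --rm -v /Users/me/project:/src alpine ls /src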

Thank you!
