Docker Community Forums

Share and learn in the Docker community.

Access host (not vm) from inside container


(Mitack) #8

I understand not being able to connect to host loopback. I’m not even able to do that on Linux. It was basically an example of a use case-- I want to run a service on the host (python -m http.server, for example) where the ports are not exposed publicly, but I still want to be able to reach them (host + port) from within a container. Virtualbox + boot2docker/toolbox/etc. provide this mechanism with the virtual interface. I can run a service on my OSX host without a publicly exposed port, and can reach it from within any container without any special --add-host options. And I don’t have to be connected to a network at the time, and I can move my laptop from network to network without any loss of functionality.

(Alex Sherwin) #9

It appears to me like the whole point of Docker native is to make it feel like xhyve/hyper-v are not really there, and you’re clearly going through a lot of network and filesystem shenanigans to get us there, which is great.

But if I run a container and can’t simply “curl http://myhost.local” in some sane way, you’re cutting off a whole segment of developers from how they need to work.

Elaborate hacks and manual processes are essentially one-offs and won’t be accepted as a mainstream solution, which makes this a barrier to entry as an everyday workplace tool

It honestly can’t be that hard to accomplish this, we’re just talking about forwarding some packets at the end of the day… Compared to the filesystem work, this should be a piece of cake (I would imagine)

(Alex Sherwin) #10

Using this methodology does seem to work, i.e.

sudo ifconfig lo0 alias
docker run -it --rm busybox ping

or with hostname mapping for app config convenience

docker run -it --rm --add-host=docker.local: busybox ping docker.local

But again… this is a manual process, making it a barrier to entry/adoption
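To make the manual process concrete, here is a sketch of the full loop. The alias address, the port, and the docker.local name are arbitrary picks of mine, not Docker defaults; and since the steps need root and a running daemon, the sketch only prints the commands rather than executing them:

```shell
# Hypothetical alias address; any address that can't collide with
# networks you actually use will do.

# The steps below need root and a running docker daemon, so this
# sketch only prints them:
cat <<EOF
sudo ifconfig lo0 alias $ALIAS_IP
python3 -m http.server --bind $ALIAS_IP 8000
docker run --rm --add-host=docker.local:$ALIAS_IP busybox wget -qO- docker.local:8000
EOF
```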

If you’re concerned with coming up with non-conflicting IP schemes, then just make this an option in the settings panel. Give us a textbox to enter any IPv4 address into and automatically manage the lo0 alias on OS X.

This at least gives teams a consistent way to operate via an intuitive interface

(Dave Tucker) #11

@asherwin how would you achieve the same result on Docker running on a Linux machine?
Would it be assumed that the default gateway (from inside the container) is an address at which the host can be reached? E.g, within the container ip route | awk '/default/ { print $3 }' would equal the address on docker0.

(Alex Sherwin) #12

At the risk of repeating myself, isn’t the point of Docker native to make the end user feel like the Linux VM is “invisible”? The whole setup is such that you never need to directly interact with that VM.

When running natively on Linux, the host you’re on is useful in other ways. You control this OS, configure its networking, and maybe run non-containerized apps listening on various network ports that you might want your containers to talk to.

In this scenario, I’m saying that OS X/Windows should act in place of the actual Linux host in the “normal” setup… You’re abstracting away the xhyve/hyper-v VM entirely, so my default expectation would be that the containers would be able to communicate with the “host”, which for all intents and purposes here is OS X/Windows, not the underlying VM…

Granted, that would take a lot of network magic. So if the above workaround with a lo0 alias were easily configured in a pretty settings panel, and the alias creation/removal were automatically managed by the app on start/shutdown, that would be good enough to get packets flowing from container -> host in some sane, easily understood manner

(Dave Tucker) #13

Yes. It seems we’re in violent agreement here. We’ve gone a long way to try and achieve this and we want to make sure that the experience for Mac users is the same as the experience on Linux.

My question is perhaps more related to this statement:

You can’t curl myhost.local in Docker on Linux either, unless you manually inject something like with --add-host.

Imagine you are on a Linux desktop, running Docker and need to achieve the exact same use case as you’ve explained already. What do you do?

I’ve already outlined how I might do it… but I’d like to see how others might achieve the same thing. I’m hoping this will lead us in the direction of a solution that works for Docker for Mac users (i.e. some networking magic we have to do on the VM to make this work).

I don’t think Docker for Mac managing loopback interfaces is a good idea for the reasons @justincormack mentioned, plus we’d be diverging from the Linux UX by adding a magic IP or name that’s reachable from within all containers.

(Alex Sherwin) #14

I can live with it as-is, but I guess I’m still stuck on the purpose of having docker feel native on OS X/Windows

I understand the programmer-centric thinking of all things being the same on all platforms, but let’s be realistic here: you’re doing Docker native for OS X/Windows so developers can more seamlessly integrate docker into their everyday development workflows, which I applaud, because I’ve been doing this manually with VMs since the early versions of docker, with various hacks to get ports and such exposed out to the host (OS X) etc.

I can’t imagine that even a stretch goal here is to expect people to run containers via docker native for real-world production usage. With that in mind, what’s the harm in making something easier to use for developers? I’m not suggesting a docker cli UX change; we’re just talking about making some well-known IP (maybe off by default, and configurable in the docker native settings) that’s always on and can be routed to from the docker containers, and this lives solely in the realm of the docker native VM and the supporting application that manages it.

If your goal is to make the Linux VM invisible, then your “host running containers” is OS X, not Linux, and without the above there’s an obvious hurdle to making this a seamless integration into your development flow.

Here’s the scenario I envision:

Hey new guy, we all develop using docker native to support our apps, we run a custom nginx for authentication which must proxy all your apps, but we let you develop/run your apps from your native IDE. Go install docker native, and pull the latest custom nginx and set it up to proxy to the port you pick in your properties. Oh? What IP do you proxy to? Well that depends, every time you switch networks between office/home/vpn or get a new DHCP address you’re going to have to figure out your IP and go and re-configure your proxy, because, there’s no reasonable way for us to pre-configure things to “just work” for you. By the way you have to be online so your network interfaces are up.

Is this an elaborate example? Yes… but it’s one that’s already true for me, and I can easily envision lots of scenarios where you need containers to reach out to things running natively on your “host” (OS X, not Linux)

Again… I can live with this, I just think you’re missing an opportunity to make things easier on developers

(Justin Cormack) #15

The main issue is that we do not want to require root access on your Mac (we currently do use it, but only at initial install, and we are trying to remove those use cases too). Adding IP addresses requires root access, and as OSX does not have any kind of dummy interface, it is unclear what kind of interface to add. We can’t give it a name that containers would know about, so overall it does not seem terribly useful, and you may as well just use the lo0 hack.

(Mitack) #16

The "lo0 hack" is basically the default behavior on Linux, as is root access. The default docker install creates the docker0 interface, and the container route/gateway to the host.
If this were done during install and documented, I think that would go a long way to making this feel closer to a standard Linux docker installation.
The documentation should note that this must be used in conjunction with pinata network=hostnet. With network=hostnet and the lo0 alias, I can achieve what I wanted, which is apps running on native OSX host that are reachable from containers, not reachable publicly, not subject to changes if I switch networks, and not subject to issues if I’m completely disconnected from any network.
With network=nat, I haven’t yet found a way to reach the host at all in any scenario, so I’m not sure what the point of it is-- unless the point is to have containers in a 100% isolated environment only reachable by each other.

(Dteoh) #17

I managed to get it working with network=nat. After I switched to nat, ifconfig showed a new network interface bridge100. I was able to successfully map hostnames to the bridge100 IP address and access services on the host.

(Mitack) #18

@dteoh Care to share specifics?

$ IP=$(ifconfig | grep -A 3 bridge100 | grep inet | cut -d ' ' -f 2)
$ python3 -m http.server --bind $IP

In another terminal

$ IP=$(ifconfig | grep -A 3 bridge100 | grep inet | cut -d ' ' -f 2)
$ ping $IP
$ curl ${IP}:8000
$ docker run -it --rm busybox ping -c 1 $IP
$ docker run -it --rm --add-host=dh:${IP} buildpack-deps:curl curl --connect-timeout 5 dh:8000

The local commands work.
The docker commands fail.
The docker commands work if I use net=hostnet and the lo0 hack address.

(Martinpeverelli) #19

I’d also be interested in this, as this is the only missing link preventing me from using docker for my dev environment.
Also, I’m a total newbie with Docker, so it would be cool if the solution involved as little hackery as possible.

In my scenario, I want a container running php to be able to reach the database engine on the host (osx), repeated for X projects. I want the app services to be containers, but the DBs to live on my local machine for easier access, persistence, backup, handling, etc.

(Dteoh) #20

In one terminal:

$ docker version
Client:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   5604cbe
 Built:        Wed Apr 27 00:34:20 2016
 OS/Arch:      darwin/amd64

Server:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   8b63c77
 Built:        Tue May 10 10:39:20 2016
 OS/Arch:      linux/amd64
$ uname -a
Darwin laptop.local 14.5.0 Darwin Kernel Version 14.5.0: Thu Apr 21 20:40:54 PDT 2016; root:xnu-2782.50.3~1/RELEASE_X86_64 x86_64
$ python -m SimpleHTTPServer
Serving HTTP on port 8000 ...

In another terminal:

$ ifconfig
    ether 02:9a:9d:f9:9b:64
    inet netmask 0xffffff00 broadcast
        id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
        maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
        root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
        ipfilter disabled flags 0x2
    member: en5 flags=3<LEARNING,DISCOVER>
            ifmaxaddr 0 port 11 priority 0 path cost 0
    nd6 options=1<PERFORMNUD>
    media: autoselect
    status: active
$ docker run --rm --add-host=dh: -it alpine ping dh
PING dh ( 56 data bytes
64 bytes from seq=0 ttl=63 time=0.406 ms
64 bytes from seq=1 ttl=63 time=0.473 ms
64 bytes from seq=2 ttl=63 time=0.400 ms
64 bytes from seq=3 ttl=63 time=0.427 ms
--- dh ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.400/0.426/0.473 ms
$ docker run --rm --add-host=dh: -it alpine wget http://dh:8000
Connecting to dh:8000 (
index.html           100% |*******************************|   178   0:00:00 ETA

I think the problem is that you are binding services to the bridge. My understanding is that the bridge is forwarding the packets onto localhost.

Anyway, I personally switched to the lo0 alias solution because:

  • I don’t have to modify the default docker settings
  • I am in control of the alias IP address which means not having to parse ifconfig output in scripts

(Dteoh) #21

I think aliasing loopback is the simplest solution. I also have the same requirements as you: accessing MySQL on the host. The only additional thing I had to do, which was MySQL specific, was not to use the root database user (because you have to explicitly grant extra permissions if connecting from outside of localhost) but to make a separate database user.
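For anyone repeating this, here is a sketch of the MySQL side. The `appuser`, `secret`, `appdb` names and the `172.%` host pattern are all placeholders of mine; the only real point is that the grant must name a non-localhost host so connections arriving from the container network are accepted:

```shell
# Hypothetical grant statements for a container-reachable MySQL user.
# 'appuser', 'secret', 'appdb' and the 172.% host pattern are placeholders.
sql=$(cat <<'EOF'
CREATE USER 'appuser'@'172.%' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'172.%';
FLUSH PRIVILEGES;
EOF
)
printf '%s\n' "$sql"
# Apply it on the host, e.g.: mysql -u root -p -e "$sql"
```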

(Mitack) #22

One problem with the lo0 hack is it’s not persistent-- you have to re-add the alias every time you reboot.
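One way to make the alias survive reboots (my own workaround sketch, not anything Docker for Mac ships) is a launchd job that re-runs ifconfig at boot. The label, install path, and address below are placeholders:

```shell
# Generate a launchd job that re-adds the lo0 alias at boot.
# The com.example label and the address are placeholders.
plist=$(cat <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.example.lo0-alias</string>
  <key>ProgramArguments</key>
  <array>
    <string>/sbin/ifconfig</string>
    <string>lo0</string>
    <string>alias</string>
    <string></string>
  </array>
  <key>RunAtLoad</key><true/>
</dict>
</plist>
EOF
)
printf '%s\n' "$plist"
```

Writing this to /Library/LaunchDaemons/com.example.lo0-alias.plist and loading it with sudo launchctl load should re-create the alias on every boot.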
Also, the solution you provided, @dteoh, binds to all addresses, making the server public, which is one condition I’m trying to avoid-- hence the reason for binding to the bridge address (or localhost, or the alias, or whatever). If there’s another way to accomplish this, that’s what I’m looking for and am open to.
When binding to a specific address, this (non-public, host reachable from containers) works on Linux and on Docker for Mac beta with lo0 hack address. On Linux, this is the only possible way (?) to accomplish this.

(Christoph Kluge) #23

I found this topic while I was searching for another issue which I discovered (I will mention later)

Host networking works for me with the host’s hostname, and I switch between multiple networks daily (home, office, coffeeshops). The only catch I discovered was that I had to set a dhcp-client-id, because sometimes the dhcp server overrides your host’s hostname.

sikei:docker christophkluge$ hostname

inside the docker container I can easily use

root@b6661fe0491f:/var/www# ping sikei
PING sikei ( 56 data bytes
64 bytes from icmp_seq=0 ttl=37 time=0.162 ms
64 bytes from icmp_seq=1 ttl=37 time=0.242 ms

Another issue I see is that latency to the host is much higher than latency to a different container. Since I’m not leaving the host machine, I would expect the two scenarios to give much closer results.

first example: roughly 3x the latency

  1. ping host system
  2. ping another container

$ ping host -c 10
round-trip min/avg/max/stddev = 0.187/0.300/0.394/0.053 ms
$ ping mysql -c 10
round-trip min/avg/max/stddev = 0.076/0.092/0.120/0.000 ms

second example: roughly 10x slower (probably depends on the app + queries + amount of data)

  1. setup mysql database on your host system (import any kind of dump)
  2. setup mysql database inside a container (import the same dump)
  3. run your application pointing to host-system database
  4. run your application pointing to container-database

$ pinata diagnose
OS X: version 10.11.4 (build: 15E65) version v1.11.1-beta13.1
Running diagnostic tests:
[OK] Moby booted
[OK] driver.amd64-linux
[OK] vmnetd
[OK] osxfs
[OK] db
[OK] slirp
[OK] menubar
[OK] environment
[OK] Docker
[OK] VT-x
Docker logs are being collected into /tmp/20160529-001229.tar.gz
Most specific failure is: No error was detected

(Garethadams) #24

For the moment, assume I’m happy with a workaround to find the IP address of the host. However, even if I know what that is, I don’t seem to actually be able to make a connection other than to ping.

From an Ubuntu machine with Docker installed:

ubuntu:~$ echo "Hello World" | nc -l 4321 &
[1] 19761
ubuntu:~$ docker run --rm -ti byrnedo/alpine-curl
GET / HTTP/1.1
User-Agent: curl/7.47.0
Accept: */*

Hello World
[1]+  Done                    echo "Hello World" | nc -l 4321

but on OS X with the Docker for Mac beta:

osx:~ $ echo "Hello World" | nc -l 4321 &
[2] 44802 44803
osx:~ $ docker run --rm -ti byrnedo/alpine-curl
curl: (7) Failed to connect to port 4321: Connection refused

However, if the listening port was set up by Docker, then the dockerised curl is actually able to access it!

osx:~ $ echo "Hello World" | docker run -p 4321:4321 --rm -i alpine nc -l -p 4321 &
[1] 80783 80784
osx:~ $ docker run --rm -ti byrnedo/alpine-curl
GET / HTTP/1.1
User-Agent: curl/7.47.0
Accept: */*

Hello World
[1]  + 80783 done       echo "Hello World" |
       80784 done       docker run -p 4321:4321 --rm -i alpine nc -lk -p 4321

All I see here is people using ping to test communication between Docker and a Mac host, but I don’t know if anyone’s got any further than that at all?

(Christoph Kluge) #25

Hey @garethadams,

docker does not expose the gateway ports for some reason (tbh I don’t know why)… my workarounds are the following:

a) use the loopback alias as described above ( - see answer #7)
b) use the host’s “real” IP address inside the network (in most cases some 192.168.x.x address)

I would suggest sticking with a), as it’s more reliable and consistent if you often change networks (home, office, cafes…)

(PC) #26

Connections from the host to a container show up in the container as coming from But connections from the container to go to the xhyve vm and not the host.

Perhaps xhyve could remap or proxy every port except 2375 back to the host?

That way, access to the magic would behave exactly the way it behaves on linux.

This lack of consistency between linux and mac is tripping up my project.

(Joshua Chaitin-Pollak) #27

There has been a series of issues, going back 3 years, asking for a dockerhost hostname inside the container. Here is the latest one, which seems to have recent activity, and someone is taking it on:

Hopefully it will work on Docker for Mac.