Docker Community Forums

Share and learn in the Docker community.

Access host (not vm) from inside container


(Dteoh) #17

I managed to get it working with network=nat. After I switched to nat, ifconfig showed a new network interface bridge100. I was able to successfully map hostnames to the bridge100 IP address and access services on the host.

(Mitack) #18

@dteoh Care to share specifics?

$ IP=$(ifconfig | grep -A 3 bridge100 | grep inet | cut -d ' ' -f 2)
$ python3 -m http.server --bind $IP

In another terminal

$ IP=$(ifconfig | grep -A 3 bridge100 | grep inet | cut -d ' ' -f 2)
$ ping $IP
$ curl ${IP}:8000
$ docker run -it --rm busybox ping -c 1 $IP
$ docker run -it --rm --add-host=dh:${IP} buildpack-deps:curl curl --connect-timeout 5 dh:8000

The local commands work.
The docker commands fail.
The docker commands work if I use net=hostnet and the lo0 hack address.
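As an aside, the ifconfig parsing in the snippet above can be made a bit more robust with awk (the interface name and the canned sample output below are illustrative only, since bridge100 exists only under network=nat):

```shell
# Extract the first "inet" address from ifconfig-style output; awk avoids
# the fragile grep -A 3 | cut pipeline and its quoting pitfalls.
parse_inet() { awk '$1 == "inet" { print $2; exit }'; }
iface_ip()   { ifconfig "$1" 2>/dev/null | parse_inet; }

# Demo against canned output (the address here is made up):
sample='bridge100: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    inet 192.168.64.1 netmask 0xffffff00 broadcast 192.168.64.255'
printf '%s\n' "$sample" | parse_inet
```

On a machine where bridge100 exists, `iface_ip bridge100` would return its address directly.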

(Martinpeverelli) #19

I’d also be interested in this, as this is the only missing link preventing me from using docker for my dev environment.
Also, I’m a total newbie on Docker, so it would be great if the solution involved as little hackery as possible.

In my scenario, I want a container running PHP to be able to reach the database engine on the host (OS X); repeat for X projects. What I want is for the app services to be containers, but for the DBs to live on my local machine for easier access, persistence, backup, handling, etc.

(Dteoh) #20

In one terminal:

$ docker version
Client:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   5604cbe
 Built:        Wed Apr 27 00:34:20 2016
 OS/Arch:      darwin/amd64

Server:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   8b63c77
 Built:        Tue May 10 10:39:20 2016
 OS/Arch:      linux/amd64
$ uname -a
Darwin laptop.local 14.5.0 Darwin Kernel Version 14.5.0: Thu Apr 21 20:40:54 PDT 2016; root:xnu-2782.50.3~1/RELEASE_X86_64 x86_64
$ python -m SimpleHTTPServer
Serving HTTP on port 8000 ...

In another terminal:

$ ifconfig
    ether 02:9a:9d:f9:9b:64
    inet netmask 0xffffff00 broadcast
        id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
        maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
        root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
        ipfilter disabled flags 0x2
    member: en5 flags=3<LEARNING,DISCOVER>
            ifmaxaddr 0 port 11 priority 0 path cost 0
    nd6 options=1<PERFORMNUD>
    media: autoselect
    status: active
$ docker run --rm --add-host=dh: -it alpine ping dh
PING dh ( 56 data bytes
64 bytes from seq=0 ttl=63 time=0.406 ms
64 bytes from seq=1 ttl=63 time=0.473 ms
64 bytes from seq=2 ttl=63 time=0.400 ms
64 bytes from seq=3 ttl=63 time=0.427 ms
--- dh ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.400/0.426/0.473 ms
$ docker run --rm --add-host=dh: -it alpine wget http://dh:8000
Connecting to dh:8000 (
index.html           100% |*******************************|   178   0:00:00 ETA

I think the problem is that you are binding services to the bridge. My understanding is that the bridge is forwarding the packets onto localhost.

Anyway, I personally switched to the lo0 alias solution because:

  • I don’t have to modify the default docker settings
  • I am in control of the alias IP address which means not having to parse ifconfig output in scripts
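For reference, the lo0 alias approach referred to here looks roughly like the sketch below. The alias address is arbitrary (10.254.254.254 is a hypothetical pick of an otherwise-unused address), and the commented commands are illustrative:

```shell
# Hypothetical alias IP; any address nothing else on your networks uses will do.
ALIAS_IP=10.254.254.254

# On the host (requires sudo, and must be re-run after every reboot):
#   sudo ifconfig lo0 alias "$ALIAS_IP"

# Bind your host service to $ALIAS_IP, then give containers a stable name for it:
#   docker run --rm --add-host=dockerhost:"$ALIAS_IP" alpine ping -c 1 dockerhost
echo "containers reach the host at $ALIAS_IP"
```

Because the alias IP is chosen by you, scripts can hard-code it instead of parsing ifconfig output.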

(Dteoh) #21

I think aliasing loopback is the simplest solution. I also have the same requirements as you: accessing MySQL on the host. The only additional thing I had to do, which was MySQL specific, was not to use the root database user (because you have to explicitly grant extra permissions if connecting from outside of localhost) but to make a separate database user.

(Mitack) #22

One problem with the lo0 hack is that it’s not persistent: you have to re-add the alias every time you reboot.
Also, the steps you provided, @dteoh, bind to all addresses, making the server public, which is one condition I’m trying to avoid; hence binding to the bridge address (or localhost, or an alias, or whatever). If there’s another way to accomplish this, that’s what I’m looking for and open to.
When binding to a specific address, this (non-public, host reachable from containers) works on Linux, and on the Docker for Mac beta with the lo0 hack address. On Linux, this is the only possible way (?) to accomplish this.
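One way to make the alias survive reboots (an untested sketch; the label, filename, and IP below are all hypothetical) is a launchd daemon that re-adds it at boot:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Save as /Library/LaunchDaemons/com.user.lo0-alias.plist, then:
       sudo launchctl load /Library/LaunchDaemons/com.user.lo0-alias.plist -->
  <key>Label</key>
  <string>com.user.lo0-alias</string>
  <key>ProgramArguments</key>
  <array>
    <string>/sbin/ifconfig</string>
    <string>lo0</string>
    <string>alias</string>
    <string>10.254.254.254</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
```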

(Christoph Kluge) #23

I found this topic while searching for another issue I discovered (which I’ll mention below).

Host networking works for me using the host’s hostname, and I switch between multiple networks daily (home, office, coffee shops). The only catch I ran into was having to set a dhcp-client-id, because the DHCP server sometimes overrides the host’s hostname.

sikei:docker christophkluge$ hostname

Inside the Docker container I can simply use:

root@b6661fe0491f:/var/www# ping sikei
PING sikei ( 56 data bytes
64 bytes from icmp_seq=0 ttl=37 time=0.162 ms
64 bytes from icmp_seq=1 ttl=37 time=0.242 ms

Another issue I see is that latency to the host is much higher than latency to a different container. Since I’m not leaving the host machine, I would expect the two scenarios to be much closer.

first example: roughly 3x more latency

  1. ping host system
  2. ping another container

$ ping host -c 10
round-trip min/avg/max/stddev = 0.187/0.300/0.394/0.053 ms
$ ping mysql -c 10
round-trip min/avg/max/stddev = 0.076/0.092/0.120/0.000 ms

second example: roughly 10x slower (probably depends on the app + queries + amount of data)

  1. setup mysql database on your host system (import any kind of dump)
  2. setup mysql database inside a container (import the same dump)
  3. run your application pointing to host-system database
  4. run your application pointing to container-database

$ pinata diagnose
OS X: version 10.11.4 (build: 15E65) version v1.11.1-beta13.1
Running diagnostic tests:
[OK] Moby booted
[OK] driver.amd64-linux
[OK] vmnetd
[OK] osxfs
[OK] db
[OK] slirp
[OK] menubar
[OK] environment
[OK] Docker
[OK] VT-x
Docker logs are being collected into /tmp/20160529-001229.tar.gz
Most specific failure is: No error was detected

(Garethadams) #24

For the moment, assume I’m happy with a workaround to find the IP address of the host. However, even when I know what that is, I don’t seem to be able to make any connection other than ping.

From an Ubuntu machine with Docker installed:

ubuntu:~$ echo "Hello World" | nc -l 4321 &
[1] 19761
ubuntu:~$ docker run --rm -ti byrnedo/alpine-curl
GET / HTTP/1.1
User-Agent: curl/7.47.0
Accept: */*

Hello World
[1]+  Done                    echo "Hello World" | nc -l 4321

but on OS X with the Docker for Mac beta:

osx:~ $ echo "Hello World" | nc -l 4321 &
[2] 44802 44803
osx:~ $ docker run --rm -ti byrnedo/alpine-curl
curl: (7) Failed to connect to port 4321: Connection refused

However, if the listening port was set up by Docker, then the dockerised curl is actually able to access it!

osx:~ $ echo "Hello World" | docker run -p 4321:4321 --rm -i alpine nc -l -p 4321 &
[1] 80783 80784
osx:~ $ docker run --rm -ti byrnedo/alpine-curl
GET / HTTP/1.1
User-Agent: curl/7.47.0
Accept: */*

Hello World
[1]  + 80783 done       echo "Hello World" |
       80784 done       docker run -p 4321:4321 --rm -i alpine nc -lk -p 4321

All I see here is people using ping to test communication between Docker and a Mac host, but I don’t know if anyone’s got any further than that at all?

(Christoph Kluge) #25

Hey @garethadams,

Docker does not expose the gateway ports for some reason (tbh I don’t know why)… my workarounds are the following:

a) use the loopback alias as described above - see answer #7
b) use the host’s “real” IP address inside the network (in most cases some 192.168.*.* address)

I would suggest sticking with a), as it’s more reliable and consistent if you change networks often (home, office, cafés…)
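Workaround b) can be scripted too. On macOS, `ipconfig getifaddr` prints an interface’s current address (en0 is typically the primary NIC; both en0 and the fallback address below are illustrative, the fallback just keeps the sketch runnable off macOS):

```shell
# Current LAN IP of en0 (Wi-Fi on most MacBooks); the fallback address is
# made up and only used where macOS's ipconfig is unavailable or en0 is down.
HOST_IP=$(ipconfig getifaddr en0 2>/dev/null || echo "192.168.1.23")
echo "$HOST_IP"

# Then, e.g.:
#   docker run --rm --add-host=dockerhost:"$HOST_IP" alpine ping -c 1 dockerhost
```

The caveat from earlier in the thread applies: this address changes whenever you change networks.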

(PC) #26

Connections from the host to a container show up in the container as coming from But connections from the container to go to the xhyve vm and not the host.

Perhaps xhyve could remap or proxy every port except 2375 back to the host?

That way, access to the magic would behave exactly the way it behaves on Linux.

This lack of consistency between linux and mac is tripping up my project.

(Joshua Chaitin-Pollak) #27

There has been a series of issues going back three years asking for a dockerhost hostname inside the container. Here is the latest one, which has recent activity and someone taking it on:

Hopefully it will work on Docker for Mac.

(Ograycode) #28

I’m glad this thread exists. Our current setup is to have docker running the applications and postgresapp running on the host for easier data persistence, backup and restores. Not being able to easily access the database on the host is really keeping my team from adopting the beta.

(Alex Sherwin) #29

It seems to me like the Docker employees have gone quiet on this thread.

It also seems that where they left it is that they believe it’s functionally equivalent to how it works on Linux.

We as users clearly disagree, since the goal here is to run all our dev tools natively on OS X/Windows and have the containers work directly and seamlessly with them, without major headaches.

It sounds like they haven’t proposed any changes to fix this and expect the hacks/workarounds to be sufficient.

That’s pretty disappointing, because I think they’re missing the boat on this one.

(Alexandre) #30

There’s an issue on GitHub to request to add a dockerhost host entry in the containers, no matter the platform.

I think this is very much related to what is being discussed here, isn’t it?


(Mitack) #31

From the official documentation,

Unfortunately, due to limitations in OSX, we’re unable to route traffic to containers, and from containers back to the host.

Since reaching the host from a container is possible and built in on Linux, and possible with Docker Toolbox on Mac and Windows, but officially not possible with Docker for Mac, I’m sticking with Docker Toolbox and abandoning hope for this native app.

(Marzolfb) #32

+1 from me, I think.

I think this thread illustrates my exact problem but I will highlight a different use case.

I work in a corporate environment that requires outbound internet access to go through an authenticating proxy. To keep my credentials from being exposed everywhere in config files and whatnot, I’ve set up a local proxy running on my laptop that forwards to the corporate proxy. So I’ve got my Mac OS proxy settings pointing to

Using Docker for Mac, there are two kinds of problems that arise from this.

First, if I want to search/pull/run a container from Docker Hub, for example using something simple like “docker search hello”, I get this error in a Terminal window:

Error response from daemon: Get http: error connecting to proxy dial tcp getsockopt: connection refused

I can get around this by finding the actual IP address of my host and setting my proxy environment variables to that address (btw, this only works if I change the system settings; I can’t just export these in the terminal window and have Docker for Mac pick them up). But as the original poster said, this IP address changes. The way it works with Docker Toolbox/Machine/Engine is that I’m able to use VirtualBox’s address to access the host, so I have the proxy settings inside the Docker Machine VM configured to point to

Now, even if I solve that problem, I have a second one. Let’s say I’ve got a container that needs outbound internet access. I haven’t tried this use case, but I’m almost certain it will fail for the same reason: I can’t point my http_proxy env variable inside the container at a reachable address. In the Docker Toolbox/Machine/Engine world, I can do this “easily” with “export http_proxy=”. I suspect this will work if I use the host’s actual IP address, but again it’s always changing, so it’s really not useful.
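For what it’s worth, the usual pattern for the second problem (sketched here; the alias IP and port are hypothetical, matching the lo0 workaround discussed earlier) is to pass the proxy to the container explicitly:

```shell
# Proxy reachable from containers via a stable host alias (hypothetical values):
PROXY_URL="http://10.254.254.254:3128"
echo "$PROXY_URL"

# Containers then pick it up from the environment, e.g.:
#   docker run --rm -e http_proxy="$PROXY_URL" -e https_proxy="$PROXY_URL" \
#     buildpack-deps:curl curl -sI https://www.docker.com
```

Because the alias is fixed, the same PROXY_URL works regardless of which network the laptop is on.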

By the way, I’ve also tried eliminating my local proxy and going direct to the corporate proxy but that doesn’t work either:

Error response from daemon: Get Proxy Authentication Required

So I’ve probably got three distinct issues here that could be documented separately, but I don’t know whether addressing the original poster’s problem will address all three or not…

(Ograycode) #33

I was able to reproduce the old behaviour (being able to talk to the host from inside the container) by doing the following:

  1. Add an alias to lo0 on the host machine: sudo ifconfig lo0 alias
  2. Open up postgresapp by changing:
       • the pg_hba.conf value to
       • the postgresql.conf value #listen_addresses = 'localhost' to listen_addresses = '*'

I do not recommend keeping postgres that wide open unless you are on a trusted network.

You can now reach the host’s db from inside the container; from host to container it is localhost:port.
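For reference, the two Postgres changes look roughly like this. The CIDR is hypothetical; it must match the client address Postgres actually sees, and trust is convenient but wide open, so prefer md5 on shared networks:

```text
# pg_hba.conf: trust clients arriving from the hypothetical lo0 alias address
host    all    all    10.254.254.254/32    trust

# postgresql.conf: accept connections beyond localhost, as quoted above
listen_addresses = '*'
```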

(Michael Friis) #34

Thanks for detailing your use case. I’d be curious to understand in greater detail why you’re not interested in also containerizing Postgres. Is it because you don’t have database init scripts that set up the database contents for development?

(Ograycode) #35

My use case is that I want an easy way to set up Postgres with the various add-ons (PostGIS being the most important) and have the data persist between container launches. I’ve found the easiest solution to be running postgresapp at the version my apps use.

In the past, I’ve tried using shared volumes to persist data, but Postgres has rather strict rules on ownership that have proven impossible to satisfy. I’ve tried configurations ranging from host -> vm -> container to just vm -> container, and none worked as expected.

Making this a bit more complicated, some of the containers need to share data in the database: they use the same tables, and the database is their integration point. While far from ideal, and frankly a bad design, it’s what I have to work with at the moment.

Then there’s the issue of tooling. It’s straightforward to dump a database remotely and reload it locally, and there’s lots of documentation around it. In addition, most of my data is on Heroku, and they provide a command for exactly that scenario which just works. Adding Docker to the mix makes it that much more complicated and requires me to figure it out, and I’m no sysadmin or database admin.

Just to top it off, the database I need to have on hand is ~25GB in size. So doing a fresh db load on each container startup isn’t possible.

Finally, whatever solution that is used needs to be easily replicated by my entire team so they can get their work done as well.

So really, it came down to: do I spend an unknown amount of time setting up Postgres in Docker (when I don’t manage databases in production, and pay someone else to deal with that pain), or do I spend 10 minutes downloading postgresapp, run the Heroku command they provide, and have the containers hit the host machine?

(Michael Friis) #36

Got it, thanks - makes sense.