Docker Community Forums


Access host (not vm) from inside container

I found this topic while searching for another issue I've discovered (I'll mention it later).

Host networking works for me using the host's hostname, and I switch between multiple networks daily (home, office, coffee shops). The only catch I discovered was that I had to set a DHCP client ID, because sometimes the DHCP server overrides your host's hostname.

sikei:docker christophkluge$ hostname

inside the docker container I can easily use

root@b6661fe0491f:/var/www# ping sikei
PING sikei ( 56 data bytes
64 bytes from icmp_seq=0 ttl=37 time=0.162 ms
64 bytes from icmp_seq=1 ttl=37 time=0.242 ms

Another issue I see is that latency to the host is much higher than latency to a different container. Since I'm not leaving the host machine, I would expect the results in both scenarios to be much closer.

first example: roughly 3x the latency

  1. ping host system
  2. ping another container

$ ping host -c 10
round-trip min/avg/max/stddev = 0.187/0.300/0.394/0.053 ms
$ ping mysql -c 10
round-trip min/avg/max/stddev = 0.076/0.092/0.120/0.000 ms

second example: roughly 10x slower (probably depends on the app, the queries, and the amount of data)

  1. setup mysql database on your host system (import any kind of dump)
  2. setup mysql database inside a container (import the same dump)
  3. run your application pointing to host-system database
  4. run your application pointing to container-database

$ pinata diagnose
OS X: version 10.11.4 (build: 15E65) version v1.11.1-beta13.1
Running diagnostic tests:
[OK] Moby booted
[OK] driver.amd64-linux
[OK] vmnetd
[OK] osxfs
[OK] db
[OK] slirp
[OK] menubar
[OK] environment
[OK] Docker
[OK] VT-x
Docker logs are being collected into /tmp/20160529-001229.tar.gz
Most specific failure is: No error was detected


For the moment, assume I’m happy with a workaround to find the IP address of the host. However, even if I know what that is, I don’t seem to actually be able to make a connection other than to ping.

From an Ubuntu machine with Docker installed:

ubuntu:~$ echo "Hello World" | nc -l 4321 &
[1] 19761
ubuntu:~$ docker run --rm -ti byrnedo/alpine-curl
GET / HTTP/1.1
User-Agent: curl/7.47.0
Accept: */*

Hello World
[1]+  Done                    echo "Hello World" | nc -l 4321

but on OS X with the Docker for Mac beta:

osx:~ $ echo "Hello World" | nc -l 4321 &
[2] 44802 44803
osx:~ $ docker run --rm -ti byrnedo/alpine-curl
curl: (7) Failed to connect to port 4321: Connection refused

However, if the listening port was set up by Docker, then the dockerised curl is actually able to access it!

osx:~ $ echo "Hello World" | docker run -p 4321:4321 --rm -i alpine nc -l -p 4321 &
[1] 80783 80784
osx:~ $ docker run --rm -ti byrnedo/alpine-curl
GET / HTTP/1.1
User-Agent: curl/7.47.0
Accept: */*

Hello World
[1]  + 80783 done       echo "Hello World" |
       80784 done       docker run -p 4321:4321 --rm -i alpine nc -lk -p 4321

All I see here is people using ping to test communication between Docker and a Mac host, but I don’t know if anyone’s got any further than that at all?

1 Like

Hey @garethadams,

docker does not expose the gateway ports for some reason (tbh I don't know why)… my workarounds are the following:

a) use the loopback alias as described above (see answer #7)
b) use the host's "real" IP address inside the network (in most cases some 192.168.x.x address)

I would suggest sticking with a), as it's more reliable and consistent if you often change networks (home, office, cafés…).
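For option b), a minimal sketch of how people usually find the host address from inside a container on plain Linux is to parse the default gateway out of the routing table (on Docker for Mac that same address belongs to the xhyve VM, not the Mac, which is exactly the problem discussed here). This assumes `ip route` is available in the image:

```shell
# gateway_from_routes: pull the gateway address out of `ip route` output.
gateway_from_routes() {
  awk '/^default/ { print $3; exit }'
}

# Inside a container you would typically run:
#   HOST_IP=$(ip route | gateway_from_routes)
# Here it is fed a canned routing table just to show the parsing:
printf 'default via 172.17.0.1 dev eth0\n172.17.0.0/16 dev eth0\n' | gateway_from_routes
# -> 172.17.0.1
```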

Connections from the host to a container show up in the container as coming from the gateway address. But connections from the container back to that address go to the xhyve VM, not the host.

Perhaps xhyve could remap or proxy every port except 2375 back to the host?

That way, access to that magic address would behave exactly the way it behaves on Linux.

This lack of consistency between Linux and Mac is tripping up my project.


There has been a series of issues, going back 3 years, asking for a dockerhost hostname inside the container. Here is the latest one, which has recent activity and someone taking it on:

Hopefully it will work on Docker for Mac.

1 Like

I’m glad this thread exists. Our current setup is to have docker running the applications and postgresapp running on the host for easier data persistence, backup and restores. Not being able to easily access the database on the host is really keeping my team from adopting the beta.

It seems to me like Docker employees have gone quiet on this thread.

Seems to me like where they left it is that they believe it’s functionally equivalent to how it works on Linux.

I think we, as users, clearly disagree, since the goal here is to run all our dev tools natively on OS X/Windows and have the containers work directly and seamlessly with them, without major headaches.

Sounds to me like they didn't propose any changes to fix this and expect the hacks/workarounds to be sufficient.

I think that’s pretty disappointing because I think they’re missing the boat on this one

1 Like

There's an issue on GitHub requesting that a dockerhost host entry be added to containers, no matter the platform.

I think this is very much related to what is being discussed here, isn’t it?


1 Like

From the official documentation,

Unfortunately, due to limitations in OSX, we're unable to route traffic to containers, and from containers back to the host.

Reaching the host from a container is built in on Linux, and possible with Docker Toolbox on Mac and Windows, but it's officially not possible with Docker for Mac, so I'm sticking with Docker Toolbox and abandoning hope for this native app.


+1 from me, I think.

I think this thread illustrates my exact problem but I will highlight a different use case.

I work in a corporate environment that requires outbound internet access to go through an authenticating proxy. To avoid exposing my credentials everywhere in config files and whatnot, I've set up a local proxy running on my laptop that forwards to my corporate proxy, and I've got my Mac OS proxy settings pointing at that local proxy.

Using Docker for Mac, there are two kinds of problems that arise from this.

First, if I want to search/pull/run a container from Docker Hub, for example using something simple like "docker search hello", I get this error in a Terminal window:

Error response from daemon: Get http: error connecting to proxy dial tcp getsockopt: connection refused

I can get around this by finding the actual IP address of my host and setting my proxy environment variables to that IP address (btw, this only works if I change the system settings; I can't just export these in the terminal window and have Docker for Mac pick them up). But like the original poster said, this IP address changes. The way it works with Docker Toolbox/Machine/Engine is that I'm able to use VirtualBox's address to access the host, so I have the proxy settings inside the docker-machine VM configured to point there.

Now, even if I solve that problem, I have a second one. Let's say I've got a container that needs outbound Internet access. I haven't tried this use case, but I'm almost certain it will fail for the same reason: I can't point the http_proxy env variable inside the container at a reachable address. In the Docker Toolbox/Machine/Engine world, I can do this "easily" with "export http_proxy=". I suspect using the host's actual IP address would work, but again, it's always changing, so it's really not useful.
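A sketch of what that second case needs, assuming you have settled on some stable host address: build the proxy URL once and hand it to the container as environment variables. The address and port below are placeholders, not real values from this thread:

```shell
# Assemble the proxy URL from whatever stable host address you settled on.
HOST_IP="192.0.2.1"                  # placeholder; use your lo0 alias or LAN IP
PROXY_URL="http://${HOST_IP}:3128"   # 3128 is an assumed local-proxy port
echo "$PROXY_URL"
# -> http://192.0.2.1:3128

# Then pass it in so the app inside the container can reach the proxy:
#   docker run --rm -e http_proxy="$PROXY_URL" -e https_proxy="$PROXY_URL" \
#     alpine wget -qO- http://example.com/
```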

By the way, I've also tried eliminating my local proxy and going directly to the corporate proxy, but that doesn't work either:

Error response from daemon: Get Proxy Authentication Required

So, I've probably got 3 actual issues here that could be documented separately, but I just don't know whether addressing the original poster's problem will address all 3 or not…

I was able to reproduce the old behaviour (being able to talk to the host from inside the container) by doing the following:

  1. Add an alias IP to lo0 on the host machine: sudo ifconfig lo0 alias <alias-ip>
  2. Open up postgresapp by changing the pg_hba.conf entry to allow connections from that address, and the postgresql.conf value #listen_addresses = 'localhost' to listen_addresses = '*'

I do not recommend keeping postgres that wide open unless you are on a trusted network.

You can now reach the host's db from inside the container, and from host to container it is localhost:port.
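For reference, the two Postgres changes above might look like the following config fragments. The CIDR range is a hypothetical example for Docker's default bridge network; tighten it to your setup, and prefer md5 auth over trust so the database isn't left wide open:

```conf
# postgresql.conf - listen on all interfaces, not just localhost
listen_addresses = '*'

# pg_hba.conf - allow password-authenticated connections from the Docker
# network (172.17.0.0/16 is a hypothetical example range)
host    all    all    172.17.0.0/16    md5
```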


Thanks for detailing your use case. I'd be curious to understand in greater detail why you're not interested in also containerizing Postgres. Is it because you don't have sufficient database init scripts that set up database contents for development?

My use case is that I want an easy way to set up Postgres and its various add-ons (PostGIS being the most important), with the data persisting between container launches. I've found the easiest solution to be running postgresapp at the version my apps use.

In the past, I've tried using shared volumes to persist data, but Postgres has rather strict rules on ownership which have proven impossible to satisfy. I've tried different configurations, from host -> vm -> container to just vm -> container, and none have worked as expected.

Making this a bit more complicated, some of the containers need to share data in the database – they use the same database tables, and the database is their integration point. While far from ideal, and frankly a bad design, it's what I have to work with at the moment.

Then there is the issue of tooling. It's straightforward to dump a database remotely and reload it locally, with lots of documentation around it. In addition, most of my data is on Heroku, and they provide a command for exactly that scenario which just works. Adding Docker to the mix makes it that much more complicated and requires me to figure it out, and I'm no sysadmin or database admin.

Just to top it off, the database I need to have on hand is ~25GB in size. So doing a fresh db load on each container startup isn’t possible.

Finally, whatever solution that is used needs to be easily replicated by my entire team so they can get their work done as well.

So really, it came down to do I spend an unknown amount of time setting up postgres in docker (when I don’t manage databases in production, and pay someone to deal with that pain) or do I spend 10 minutes to download postgresapp, run the heroku command they provide and have the container hit the host machine.

Got it, thanks - makes sense.

I was trying a bunch of ways to do this and just settled on the alias on lo0.
I just added this to my bash profile:

ifconfig lo0 | grep <alias-ip> >/dev/null
if [ $? -ne 0 ]; then
  sudo ifconfig lo0 alias <alias-ip>
fi
echo "docker host alias is in good shape, move along."

My use case was that I am running apt-cacher-ng on my mac natively. Using just a brew install apt-cacher-ng and running it as a service all the time. This speeds up test-kitchen and other configuration management tool testing.


I added

COPY <script.sh> /root

to my Dockerfile, where the contents of the script are as follows.

#!/bin/sh
set -x

# Fall back to the gateway when HOST_IP was not passed in as a build-arg
if [ -z "${HOST_IP}" ]; then
  HOST_IP=$(route -n | awk '/^<gateway>/ {print $2}')
fi

if [ $? -eq 0 ]; then
    cat >> /etc/apt/apt.conf.d/30proxy <<EOL
Acquire::http::Proxy "http://$HOST_IP:$APT_PROXY_PORT"; DIRECT;
EOL
    cat /etc/apt/apt.conf.d/30proxy
    echo "Using host's apt proxy"
else
    echo "No apt proxy detected on Docker host"
fi

Then, when building my Docker images:

docker build -t newcontainer --build-arg APT_PROXY_PORT=3142 --build-arg HOST_IP= .

This allows me to run the cacher locally and programmatically check for it inside the container, setting the apt proxy options accordingly. This also works on Linux or docker-machine: just leave off the build-arg for HOST_IP and it will use the older method of grabbing the gateway interface instead of the aliased interface… This should in theory work with xhyve, but the gateway interface is somehow hidden or locked down from the VM… Some insight into why you can't access the gateway interface in xhyve vs. VirtualBox would be nice…
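For completeness, the consuming side of those build-args might look roughly like this Dockerfile fragment. The script name is an assumption, since the post elides it:

```dockerfile
# Build args; HOST_IP defaults to empty so plain Linux / docker-machine builds
# fall through to the gateway-detection path inside the script.
ARG APT_PROXY_PORT=3142
ARG HOST_IP=

# Copy in the proxy-detection script (name is hypothetical) and run it.
COPY check-apt-proxy.sh /root/check-apt-proxy.sh
RUN APT_PROXY_PORT=${APT_PROXY_PORT} HOST_IP=${HOST_IP} sh /root/check-apt-proxy.sh
```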

1 Like

I think my issue is related to this, can anybody here confirm whether this is the case?

I found this solution, which persists the setup and makes it visible in the OS X Network Preferences:

a) make the Loopback device visible in System Preferences
$ sudo networksetup -createnetworkservice Loopback lo0

b) set up the IP
$ sudo networksetup -setmanual Loopback <ip-address> <netmask>

Use case: debugging php server with Xdebug Client on MacOS

1 Like

For Docker for Mac, there are some workarounds in the networking sections of the docs.

docker.for.mac.localhost is a special DNS name that resolves to the host IP; you can use it from inside your containers to connect to the host.
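If a script has to run both under Docker for Mac and on plain Linux, one defensive sketch is to try that DNS name first and fall back to an explicitly supplied address. HOST_IP here is a hypothetical variable you would pass in yourself, e.g. with docker run -e HOST_IP=…:

```shell
# Prefer the Docker for Mac name when it resolves; otherwise require HOST_IP.
resolve_host_addr() {
  if getent hosts docker.for.mac.localhost >/dev/null 2>&1; then
    echo "docker.for.mac.localhost"
  else
    echo "${HOST_IP:?HOST_IP must be set when not on Docker for Mac}"
  fi
}

# Example: outside Docker for Mac this prints the fallback address.
HOST_IP=192.0.2.1 resolve_host_addr
```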

1 Like

I created a Docker container for doing exactly that.
You can then simply use the container name via DNS to access the host system, e.g. curl http://dockerhost:9200/


I created a Docker container for doing exactly that.
You can then simply use the container name via DNS to access the host system, e.g. curl http://dockerhost:9200/

It works perfectly. Thanks :heart:

1 Like