Docker Community Forums

Share and learn in the Docker community.

"docker build" cannot access other container hosts

I am running a Ruby gems server (“geminabox”) in Docker:

docker run --name geminabox -d -p 9292:9292 -e RUBYGEMS_PROXY=true lheinlen/geminabox

I have a Dockerfile (“myapp”) that needs to download some gems from that Ruby gems server:

FROM ruby:2.2.0
RUN gem sources --add http://geminabox:9292 && gem install some-gem

I can docker build the Dockerfile if I add an entry mapping "$(docker-machine ip)" to "geminabox" in my local /etc/hosts.
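For reference, that /etc/hosts workaround can be scripted on the host like this (a sketch assuming the docker-machine setup from this thread; `tee -a` is just one way to append with sudo):

```shell
# Append a line mapping the docker-machine VM's IP to the "geminabox"
# hostname, so the host-side `docker build` can resolve it by name.
echo "$(docker-machine ip) geminabox" | sudo tee -a /etc/hosts
```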

But, ideally, I don’t want to expose the geminabox server outside of Docker. I want the entire build to be done within the Docker machine.

So I run my own lightweight Docker-in-Docker container:

docker run --name my-dind --link geminabox:geminabox -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/bin/docker -it my-dind bash

Within that bash instance, I run these commands:

$ getent hosts
localhost
localhost ip6-localhost ip6-loopback
geminabox d7479780fda8
b77d4991bb23

Looks good! I see geminabox.

$ gem sources --add http://geminabox:9292
http://geminabox:9292 added to sources

Looks good!

$ docker build -f Dockerfile -t myapp .
Sending build context to Docker daemon 872.4 kB
Step 1 : FROM ruby:2.2.0
 ---> bc5beaf30723
Step 2 : RUN gem sources --add http://geminabox:9292 && gem install some-gem
 ---> Running in f3dac56fa51c
Error fetching http://geminabox:9292:
	no such name (http://geminabox:9292/specs.4.8.gz)

I’m unable to make docker build connect to the geminabox container.

Any suggestions?


I may have found an answer to my own question.

There is an open feature request to allow docker build to use the network hosts of the host container, e.g. by running docker build --net=host.
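For what it's worth, later Docker releases did add a --network flag to docker build. Assuming a reasonably recent engine, the requested behavior looks roughly like this (note that --network=host gives RUN steps the daemon host's network and name resolution, not a linked container's):

```shell
# Run build-time RUN steps on the host network, so they can reach
# services the Docker host itself can reach by name or IP.
docker build --network=host -t myapp .
```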

Some workarounds are also documented there, including appending an echo "host IP hostname" >> /etc/hosts to every Dockerfile RUN command that needs to resolve those hosts.
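Concretely, that workaround inside a Dockerfile might look like the sketch below. The IP 172.17.0.2 is only a placeholder for the geminabox container's actual address, which you would have to look up yourself:

```dockerfile
FROM ruby:2.2.0
# Workaround: append the host entry in the same RUN step that needs it,
# since /etc/hosts edits do not persist across build steps.
RUN echo "172.17.0.2 geminabox" >> /etc/hosts \
    && gem sources --add http://geminabox:9292 \
    && gem install some-gem
```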

Why go through the extra song and dance of running a local Ruby Gem server to install a gem (or several) instead of just bundling them with the code? Doesn’t using Docker-in-Docker, or hacking your network / host resolution just to make this possible, seem a bit complex just to get some code from local point A to local point B?

It seems to me that your build might be better off expressed as something like a series of container runs which end up dumping all the deps in vendor/cache (using volumes or docker cp), and then baking the final image based on COPY-ing that in. I’m no Ruby expert though.
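Nathan's suggestion could be sketched like this (a rough outline, not a tested recipe; the bind-mount layout is an assumption, and `bundle package` caches the app's gems into vendor/cache):

```shell
# 1. Use a throwaway container, linked to geminabox, to download the
#    gems into vendor/cache in the build context on the host.
docker run --rm --link geminabox:geminabox -v "$PWD":/app -w /app \
    ruby:2.2.0 bundle package --all

# 2. Bake the final image from the cached gems; the build itself
#    never needs to reach the gem server.
docker build -t myapp .
```

The corresponding Dockerfile would then COPY vendor/cache in and run bundle install --local, so no network access is needed during docker build.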

Thanks for your reply, Nathan! Much appreciated.

For a small project, working directly off the file system and avoiding a private Ruby Gem server can probably work fine. In my organization, everyone works off a central private Ruby Gem server, so I am thinking ahead to how I might replicate that behavior inside Docker, so that teams could run and test their build systems in isolation, with close functional parity to the real build system.

I’ve been using --build-arg as a workaround for now to pass the IP of the rubygems server into docker build, and it’s working quite well so far.
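That --build-arg workaround might look something like this sketch (the ARG name GEM_SERVER is my own label, not from the thread):

```dockerfile
FROM ruby:2.2.0
# The gem server's IP is injected at build time, sidestepping name
# resolution inside the build container entirely.
ARG GEM_SERVER
RUN gem sources --add http://$GEM_SERVER:9292 && gem install some-gem
```

Invoked with, for example, `docker build --build-arg GEM_SERVER=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' geminabox) -t myapp .`.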

I was also having a similar sort of issue. I was searching for a solution and came across your thread. Thank you all for posting answers; they were helpful to me.
Best regards!