Binding to internal network error

Got a funny issue here:

OPTIONS="--selinux-enabled --bip= -H tcp:// -H unix:///var/run/docker.sock"

I am using --bip to change the default network (really have to in my case) and I am also trying to make docker listen on a TCP port - but only in the internal network.

The funny issue is that the “docker0” adapter (and thus the internal network’s host IP) only gets created after the docker daemon runs for the first time. I have to remove the “-H tcp://”, start the daemon, stop it, put the “-H …” back, and start the daemon again.

Maybe the binding should wait until the adapter docker0 is created?
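Until the daemon handles that ordering itself, one crude workaround is to wait for docker0 to exist before (re)starting docker with the TCP binding in place. A minimal sketch, assuming a Linux host where interfaces appear under /sys/class/net; the `systemctl restart docker` usage line is illustrative and depends on your init system:

```shell
#!/bin/sh
# Poll until a network interface exists, or give up after N one-second tries.
# Assumes Linux, where each interface shows up as /sys/class/net/<name>.
wait_for_iface() {
  iface="$1"
  tries="${2:-30}"   # default: 30 attempts
  while [ "$tries" -gt 0 ]; do
    [ -e "/sys/class/net/$iface" ] && return 0
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# Illustrative usage: once docker0 exists, restart the daemon with the
# full OPTIONS (including "-H tcp://...") enabled.
# wait_for_iface docker0 && systemctl restart docker
```

This avoids the manual stop/start dance, but it is still a workaround rather than a fix for the underlying ordering problem.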

This might be something you should bring up as a GH issue.

It is however a little worrying - as you’d then be giving full access to the Docker daemon to all containers. The approach we normally take is to give access only to those containers that need it (and we decide to trust) by bind-mounting the docker socket into them:

docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock debian

Thanks, Sven.

I am aware of that and I always prefer to bind the socket.

But in this particular case I prefer to expose the port instead: this is a droneio-dedicated host where droneio itself runs in a container. I am using drone to build & publish Docker images (kinda like what Docker Hub does). Note that droneio must use a container for project builds (specified in .drone.yml), so if the project itself relies on a docker client for builds you get something like this:

docker host (docker daemon)
  |- drone container (binds to socket, no big deal)
     |- "docker-client" container (".drone.yml" picks an image that holds just a docker client and git)
       |- docker image builds ("docker build" commands in ".drone.yml" build section)
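For reference, the build section in such a setup might look roughly like this in .drone.yml; the image name and the daemon endpoint are hypothetical, and the exact YAML keys may differ between drone versions:

```yaml
# Hypothetical .drone.yml sketch; "example/docker-client" and the
# tcp://172.17.42.1:2375 endpoint are assumed values, not from this thread.
image: example/docker-client
script:
  - docker -H tcp://172.17.42.1:2375 build -t myorg/myimage .
  - docker -H tcp://172.17.42.1:2375 push myorg/myimage
```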

I find the “docker-daemon-in-docker” pattern for running droneio in a container rather messy, so despite the logical hierarchy shown above, all “docker blablabla” commands run against the only docker daemon around (the host itself).
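Concretely, the nested build containers can reach the host daemon by deriving the endpoint from the --bip address. A sketch, assuming a bridge address of 172.17.42.1/16 and port 2375 (both hypothetical; substitute whatever you actually put in OPTIONS):

```shell
# Assumed values for illustration; use your real --bip and -H tcp:// port.
BIP="172.17.42.1/16"

# The host side of the docker0 bridge is the address part of --bip.
HOST_IP="${BIP%/*}"
export DOCKER_HOST="tcp://${HOST_IP}:2375"
echo "$DOCKER_HOST"   # tcp://172.17.42.1:2375

# Inside the docker-client container, build steps then need no -H flag:
# docker build -t myorg/myimage .
```

Since the docker client honours the DOCKER_HOST environment variable, setting it once in the build image keeps the .drone.yml commands free of endpoint details.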