How to publish ports to a "host" container that has nested Docker "subcontainers" inside of it?

I’m having trouble getting nginx, running inside a Docker container, to talk to web services running in “subcontainers” that were launched from within that same container.

I should probably explain a bit more: basically, what I’m trying to do is test an Ansible playbook against a throwaway container, but that playbook itself sets up some Docker services. I’m not sure if this technically counts as “Docker-in-Docker”, but what’s gotten me most of the way there is a Dockerfile for the “host”:

FROM ubuntu:22.04
ENTRYPOINT ["sleep", "infinity"]

Which I run as:

docker build -t fake-unbuntu-vm .
docker run --name ansible-test --detach --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  fake-unbuntu-vm

And then, within that ansible-test “host” container, my Ansible playbook installs the distro’s Docker packages and uses docker-compose to bring up the various services whose setup I want to verify before I unleash the playbook on a client’s real Ubuntu server.
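
(For context, the manual equivalent of what the playbook does inside the container is roughly the following; the package names and project path are illustrative, not the playbook’s literal steps:)

docker exec -it ansible-test bash

# then, inside the “host” container:
apt-get update && apt-get install -y docker.io docker-compose nginx

# bring up the services under test (project path hypothetical)
cd /opt/app && docker-compose up -d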

The problem is that the subcontainers, defined as e.g.

services:
  middleware:
    container_name: web-${APP_DEPLOY}
    user: '$APP_UGID'
    read_only: true
    # …snip…
    environment:
      - PORT=8000
    ports:
      - '$APP_PORT:8000/tcp'

(with APP_DEPLOY values like “prod”/“dev” and APP_PORT values like 9000/9100) can’t be reached from nginx in the “host” container. The HTTP servers running in those middleware subcontainers are only accessible from the top-level actual host!
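
(For reference, each deployment is brought up roughly like this; the project names, UGID, and exact invocations are illustrative rather than my exact commands:)

APP_DEPLOY=prod APP_UGID=1000:1000 APP_PORT=9000 docker-compose -p prod up -d
APP_DEPLOY=dev  APP_UGID=1000:1000 APP_PORT=9100 docker-compose -p dev up -d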

To summarize the situation I have:

  • real Debian VM hosting the Docker daemon
    • fake Ubuntu “host” container (with /var/run/docker.sock bind mount)
      • nginx running directly (i.e. no subcontainer) from the Ubuntu package, listening on port 80
      • docker-compose service subcontainer(s) publishing an HTTP listener on port 9000
      • docker-compose service subcontainer(s) publishing an HTTP listener on port 9100

What I want is for nginx in the “host” container to be able to reverse proxy to the subcontainers on ports 9000/9100.

But instead, all the services get published on the real Debian VM host, and nginx (and curl, etc.) in the fake Ubuntu “host” can’t make any connection to them, even though the nested docker-compose was told to publish them at that level.

root@ansible-test:/# docker ps
CONTAINER ID   IMAGE             COMMAND                CREATED          STATUS          PORTS                    NAMES
a38d8c97e44b   dev_middleware    "node middleware.js"   42 minutes ago   Up 42 minutes   0.0.0.0:9100->8000/tcp   web-dev
c42dbc34682d   prod_middleware   "node middleware.js"   42 minutes ago   Up 42 minutes   0.0.0.0:9000->8000/tcp   web-prod
0b67c24bf299   fake-unbuntu-vm   "sleep infinity"       42 minutes ago   Up 42 minutes                            ansible-test

root@ansible-test:/# curl -v localhost:9000
*   Trying 127.0.0.1:9000...
* connect to 127.0.0.1 port 9000 failed: Connection refused
*   Trying ::1:9000...
* Immediate connect fail for ::1: Cannot assign requested address
* Failed to connect to localhost port 9000 after 2 ms: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 9000 after 2 ms: Connection refused

This isn’t a huge surprise (since all the containers are ultimately running on the Debian VM rather than inside the Ubuntu container), but what I don’t know is how, or even whether, I could get docker-compose to publish the ports in the right place?
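
(For what it’s worth, checking where the listeners actually live seems to confirm this; a sketch, assuming ss from iproute2 is installed in both places:)

# on the real Debian VM: docker-proxy holds the published ports
sudo ss -ltnp | grep -E ':(9000|9100)'

# inside the ansible-test container: no such listeners exist
docker exec ansible-test ss -ltn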

There is no subcontainer or “fake host” here. You just mounted the original Docker socket from the host, so the docker client in your so-called “host container” talks to the Docker daemon on the host. Docker-in-Docker means you run the Docker daemon itself inside a Docker container, so you don’t need to mount the Docker socket.

https://hub.docker.com/_/docker

Look for the tags containing “dind”; you can find usage examples there too.
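
For illustration, a minimal true Docker-in-Docker setup looks something like this (the container name is arbitrary, and TLS is disabled here only to keep the sketch short):

# run a Docker daemon inside a container (requires --privileged)
docker run -d --privileged --name dind-test \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind

# containers started through the inner daemon are truly nested,
# including any ports they publish
docker exec dind-test docker run --rm hello-world
docker exec dind-test docker ps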

Thanks, and yeah: when reviewing some earlier work with this setup more closely, I noticed I had already had to work around similar issues with bind mounts as well.

So despite the /var/run/docker.sock mount generally being recommended over true Docker-in-Docker (which, as far as I can tell, was discouraged almost from the day it appeared), the tools simply aren’t aware of, and have no support for, a handed-around connection to the Docker daemon socket:

  • any and all file mounts are resolved relative to the actual Docker daemon’s filesystem, regardless of which chroot/container the compose/run command is invoked from (see the sketch after this list)
  • similarly, any and all networking is set up relative to the actual Docker daemon, not relative to the namespace from which the daemon is being accessed
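
The first point is easy to demonstrate from inside the ansible-test container with something like this (the paths are hypothetical):

# create a file inside the “host” container…
mkdir -p /tmp/marker && echo "from the container" > /tmp/marker/hello

# …but the -v path is resolved by the daemon on the Debian VM, so unless
# the VM also happens to have /tmp/marker, the mount comes up empty:
docker run --rm -v /tmp/marker:/data ubuntu:22.04 cat /data/hello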

In this particular case, I think I’ve outgrown using a Docker container as a “fake VM”. It made working through many of the earlier details of my automated setup quicker and easier to test, but at this phase I’m hitting the inherent limitations of using a Docker container as if it were a real server. It’s probably better to start testing the final integration steps against an actual VM (despite the longer boot times, etc.) rather than getting too ambitious with these (not really nested!) “subcontainers”.

I’ve used Docker for testing Ansible roles too, but do you really need a container that runs the Docker daemon?