Restricting container ports to the host machine from docker-compose (a la "docker run -p 127.0.0.1:80:80")

Overview

Using docker-compose, I’m trying to figure out how to set up my docker-compose based containers so that:

  • The containers have open ports, available only to the host (no public-access holes in my host firewall)

  • The containers have no access to ports open on the host (at least not without hitting the host firewall)

So far, a solution eludes me…

docker “run -p”

On Linux, I see that:

  • “docker run -p 8080:8080” pokes a hole in iptables (the firewall) that grants the whole world access to port 8080 of the container (notably, this rule is not visible via firewall-cmd or ufw)

  • “docker run -p 127.0.0.1:8080:8080” should update iptables so that the port is only available to the host machine
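
The difference between the two can be seen in the NAT rules Docker inserts. A sketch for illustration (“myimage” is a hypothetical image name, and exact chain names can vary by Docker version):

```shell
# Binds 0.0.0.0:8080 — Docker adds a DNAT rule reachable from any interface
docker run -d -p 8080:8080 myimage

# Binds 127.0.0.1:8080 — the DNAT rule only matches traffic addressed to loopback
docker run -d -p 127.0.0.1:8080:8080 myimage

# Inspect the rules Docker inserted (DOCKER chain in the nat table);
# these bypass firewall-cmd/ufw, which is why they don't show up there
sudo iptables -t nat -L DOCKER -n
```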

docker-compose “expose”

What I don’t see is how to replicate this from docker-compose. We’ve got developers launching a local cluster of containers, with local ports, and I don’t want these visible to the outside world.

The docker-compose “expose” option doesn’t seem to support an IP specification.
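
That said, the “ports” mapping (as opposed to “expose”) does appear to accept a host IP in its short syntax. A minimal sketch, assuming a hypothetical “web” service:

```yaml
version: "3"
services:
  web:
    build: .
    ports:
      # host_ip:host_port:container_port — bound to loopback only,
      # mirroring "docker run -p 127.0.0.1:8080:8080"
      - "127.0.0.1:8080:8080"
```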

Alt Approach: “network_mode: host”

In my docker-compose.yml, I can configure “network_mode: host”. This seems to work great in that:

  1. All ports defined as “EXPOSE” in my Dockerfiles are visible to the host, without restriction

    • NOTE: That without “network_mode: host”, you’d have to explicitly set “expose” (docker-compose) or “-p” (docker) to fully expose these ports
  2. No special inbound firewall rules are created for the ports (yay. protected from the outside world.)
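
For reference, the host-networking variant is just this (a minimal sketch, again assuming a hypothetical “web” service):

```yaml
version: "3"
services:
  web:
    build: .
    # Shares the host's network namespace: EXPOSEd ports are reachable
    # on the host directly, and no iptables DNAT rules are created
    network_mode: host
```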

BUT, now my host isn’t protected from the containers. The containers can hit every open port on my host without being affected by the firewall.

Now What?

I’m tempted to set up an SSH proxy server inside the docker-compose cluster, with a secure SSH key, but man, that seems like overkill.

Am I approaching this all wrong?
