How to use multiple containers with same port and without making those ports available to the host machine

Hi there,

I have two container images foo and bar that both expose port 80. I want to run them in the same pod together with an nginx container as a sidecar for TLS encryption. The sidecar listens on port 443 and forwards all requests to foo, which in turn consumes bar. The TLS encryption terminates in the sidecar, which passes the requests on to foo unencrypted.

Now, that all works fine with a compose file similar to this one:

version: "3.9"
services:
  foo:
    image: local-registry/foo
    ports:
      - "8080:80"
  bar:
    image: local-registry/bar
    ports:
      - "9090:80"
  sidecar:
    image: local-registry/my-nginx
    ports:
      - "443:443"

In this case, sidecar forwards the requests to port 8080 of foo. However, as far as I understand it, the ports 8080 and 9090 are now reachable from outside the pod, i.e., from the host machine, too? For security reasons, I would prefer to only make port 443 available to the host and keep the rest private.

How can I remap the exposed ports 80 of both foo and bar (in order to avoid conflict) without opening them to the host machine at the same time?

As far as I understand it, the yaml-attribute expose cannot do a remapping and is only there for documentation purposes. At least that is what the Dockerfile documentation says about EXPOSE:

The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.

So I assume the same holds for the expose yaml attribute in the compose file.

Any advice?

Good evening @svdhero
my idea would be to NOT make the ports 80 of foo and bar available to the outside world (i.e., skip the ports section for foo and bar).
Then change your sidecar settings to forward to foo:80 (instead of 172.17.0.1:8080) and change foo to consume from bar:80 instead of 172.17.0.1:9090.
In other words: you can access the other services using their names and their ports without exposing them to the outside world.
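Applied to the compose file from the question, that could look roughly like this (a sketch that keeps the original image names; only the sidecar publishes a port):

```yaml
version: "3.9"
services:
  foo:
    image: local-registry/foo   # still listens on :80, but only inside the compose network
  bar:
    image: local-registry/bar   # same here: no ports section, nothing reachable from the host
  sidecar:
    image: local-registry/my-nginx
    ports:
      - "443:443"               # the only port published to the host machine
```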

I’m no expert when it comes to nginx, but you probably also want to take a look at Traefik (a reverse proxy). With Traefik you actually don’t need to publish any ports on any container, as it routes everything internally (exclusively inside the docker network).


Thank you for the interesting hint. I haven’t heard of Traefik and I will look into it.
However, for my original question, the TLS sidecar scenario was just one example of many use cases for port number overlapping of containers inside a pod.

@matthiasradde Are you saying that the mere fact that both foo and bar expose the same port does not result in a conflict inside the pod? That would be awesome. Coming from podman pods, that is a positive surprise.

However, what would I do if, for some other reason, I still wanted to remap the ports? Is there any way to only publish/change a container’s port internally to the pod without exposing it to the host?

@svdhero You can verify with docker container inspect <containerid|containername> that each container has a different IP-address:

            "MacAddress": "",
            "Networks": {
...
                    "Gateway": "172.19.0.1",
                    "IPAddress": "172.19.0.7",
...
                }
            }

So there is no conflict if multiple containers are using the same port (:80 in this case).

You can access one container from another using its container-name, service-name, or ip-address, though the ip-address is not a good idea because it might change every time you (re)start the container. This can be done without exposing a container’s port to the outside world.
You only need to expose a port if you directly want to access a container from another host.

You only have a problem if you want to publish (see it as a kind of port-forwarding from the docker-host to a docker-container) multiple containers at the same docker-host port.
So it is not possible to publish foo's port :80 to docker-host’s port :1080 and bar's port :80 to docker-host’s port :1080, because only one service can listen on that port at a time. In this case you need a reverse proxy (e.g., nginx) or a loadbalancer (e.g., Traefik) to handle this traffic and forward it to the correct container according to rules (e.g., hostname a to container foo, hostname b to container bar; /foo/* to container foo, /bar/* to container bar; …)
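Such hostname-based rules could be sketched in nginx like this (the server names are hypothetical, and the TLS certificate directives are omitted for brevity):

```nginx
# Route by hostname to the matching service; Docker's embedded DNS
# resolves the service names foo and bar inside the compose network.
server {
    listen 443;
    server_name a.example.com;      # assumed hostname for foo
    location / {
        proxy_pass http://foo:80;
    }
}

server {
    listen 443;
    server_name b.example.com;      # assumed hostname for bar
    location / {
        proxy_pass http://bar:80;
    }
}
```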

@matthiasradde Thank you for all the details. I’ve learned a lot. That was so helpful.

May I, nevertheless, repeat my last question? What if the application foo for some reason is hardcoded and expects bar to listen on port 4711? Is there any way to map bar's port 80 to port 4711 without exposing it to the public, i.e., to the host machine? Or would I then also need a reverse proxy (nginx) merely for the internal forwarding from port 4711 to bar:80?

@svdhero As far as I know, this is not possible with docker alone.

There are some different ideas to do this:

  • Check bar's documentation if you can configure the port used by the service to :4711 (so you don’t need to publish it to the outside world) - maybe with some environment-variables passed to the container or with a conf-file mounted into the container.
  • You can publish bar's port :80 to docker-host’s port :4711 but deny access to this port from the outside world by local firewall-rules - not such a good idea, but it should work, too :slight_smile:
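The second option could be sketched in the compose file like this (a sketch only; the firewall rule itself would have to be maintained on the host, outside of compose):

```yaml
services:
  bar:
    image: local-registry/bar
    ports:
      - "4711:80"   # bar now answers on the host's port 4711;
                    # access from outside must be blocked by host firewall rules
```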

You might want to rethink your deployment model. What I would do is run foo and bar in their own pods, with nginx in its own pod redirecting traffic. Then every pod gets its own IP address and its own set of ports, and they can all listen on port 80 without any conflicts. If you define the foo and bar services as ClusterIP, no one will be able to reach them from outside the cluster. That sounds like it might solve your problem, because only nginx will be exposed to the public. This is how I have nginx handle all TLS termination so that my services don’t need to implement it.
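In Kubernetes terms, such a cluster-internal service could look roughly like this (a sketch; the name and the app label are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bar
spec:
  type: ClusterIP        # reachable only from inside the cluster, never from the host
  selector:
    app: bar             # assumed pod label
  ports:
    - port: 80           # port the service exposes inside the cluster
      targetPort: 80     # port the container actually listens on
```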

I know you mentioned using nginx as a sidecar. It sounds like something that a service mesh like Istio will do for you right out of the box.

Docker does not implement the pod concept and as such has no idea of sidecar or init containers. What docker provides is much simpler: all services are siblings that by default share nothing (unless you make them).

@svdhero can you share your point of view what a pod in a docker context would be? A compose stack is not a pod…

You have some options to mimic the behavior of pods to an extent:

  • use depends_on to let a service wait for another (a rather limited facet of what init containers can be used for)
  • hook a service into the network namespace of a different service or container to mimic the behavior of a k8s pod where all containers share the same network namespace (this requires the applications inside the service containers not to bind the same container port, as localhost is the same for all containers sharing a network namespace)
  • declare the same volume for multiple services
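The network-namespace option from the list above could be sketched in a compose file like this (a sketch with the image names from the question; note that foo and bar could then no longer both bind :80):

```yaml
services:
  foo:
    image: local-registry/foo
  bar:
    image: local-registry/bar
    network_mode: "service:foo"   # bar joins foo's network namespace;
                                  # foo can then reach bar on localhost
```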

May I suggest this excellent self-paced training? It should give you a good foundation in how docker works and how it can be used.

@meyay
Thank you for clarifying. I started my container journey with Podman and have only recently begun looking into Docker Compose. I always assumed that what you call a “compose stack” would be the same concept as a pod. For me it is not important to have a pod per se, I only need containers that “work together privately” and that only expose a subset of secured ports to the outside world.

@matthiasradde and @rofrano
The only reason why I was so insistent in my questions was that I wanted to understand the big picture and the concept of a compose stack. For my particular use case, it is enough to forward to foo:80 and bar:80, respectively, in my nginx.conf, exactly as @matthiasradde suggested in his first response.

Thank you very much everybody for helping me understand the various aspects of Docker Compose networking. That was so helpful. :+1: