Containers not honoring Docker Compose port assignments

I’ve been testing a lot of images lately and this is a problem I’ve run into enough that I’m starting to think I’m doing something wrong. About 50% of the time when I’m testing an image, port forwarding works exactly as I expect it to, but the other 50% of the time I get the behavior described below. All my containers run behind a reverse proxy, each with its own subdomain. Currently I’m experiencing this problem with the Xwiki image. Here is my compose file:

version: '3.9'
networks:
  my_net:
    external: true
services:
  xwiki_db:
    image: postgres:16.2-alpine3.19
    container_name: xwiki_db
    volumes:
      - ./db/:/var/lib/postgresql/data/
    environment:
      - POSTGRES_ROOT_PASSWORD=xwiki
      - POSTGRES_PASSWORD=xwiki
      - POSTGRES_USER=xwiki
      - POSTGRES_DB=xwiki
      - POSTGRES_INITDB_ARGS='--encoding=UTF8'
    networks:
      - my_net
  xwiki:
    image: xwiki:16.1.0-postgres-tomcat
    container_name: xwiki
    depends_on:
      - xwiki_db
    ports:
      - '1234:8080'
    environment:
      - DB_USER=xwiki
      - DB_PASSWORD=xwiki
      - DB_HOST=xwiki_db
    volumes:
      - ./data/:/usr/local/xwiki/
    networks:
      - my_net

All my containers are on the Docker network my_net, and it was created with this command:

sudo docker network create my_net

The Xwiki container has the IP address 172.17.0.6.

My reverse proxy catches the request and forwards it as expected, but the connection is refused by the Xwiki container, so I get a 502 Bad Gateway error. Indeed, if I run the command sudo curl 172.17.0.6:1234 I get this error:

curl: (7) Failed to connect to 172.17.0.6 port 1234 after 0 ms: Connection refused

What is more curious is that I can run the command sudo curl 172.17.0.6:8080 and get no error. Further, I can change the reverse proxy configuration to use port 8080 instead of port 1234 and, without changing anything in the docker-compose.yml config posted above, access Xwiki from a browser. This also makes no sense based on the output of sudo docker ps:

CONTAINER ID    IMAGE                           COMMAND                   CREATED           STATUS           PORTS                     NAMES
110110abcdef    xwiki:16.1.0-postgres-tomcat    "docker-entrypoint.s…"    19 minutes ago    Up 19 minutes    0.0.0.0:1234->8080/tcp    xwiki

In an attempt to simplify the environment and eliminate possible network conflicts, I deleted all my containers, networks, and the /var/lib/docker directory, then recreated the my_net network, the reverse proxy container, and the Xwiki container, but I got the same results.

What is going on here?

A lack of understanding 🙂

Let’s sort this out:

  1. User-defined networks provide DNS-based service discovery. Every bridge network created on the command line or through Docker Compose is a user-defined network.
  2. For container-to-container communication, you can use the service or container name and the container port, as long as both containers share a network.
  3. For host-to-container or LAN-to-container communication, you must use the Docker host's hostname or IP and the mapped host port.

If your reverse proxy is in a container as well, you need to apply what 2) says, otherwise what 3) says.
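For example, with the compose file from the question (the first command assumes a shell inside another container attached to my_net, such as the reverse proxy container; the names and ports are the ones from the thread):

curl http://xwiki:8080        # rule 2: service name + container port, container to container
curl http://localhost:1234    # rule 3: run on the Docker host, mapped host port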

Working with Docker can be frustrating if you have to rely on trial and error while building your solutions. Understanding the concepts and how things actually need to be done will result in a much better experience.

That said, may I suggest this excellent free self-paced training? https://container.training/intro-selfpaced.yml.html

In other words, port mapping does not map a port from an IP address to the same IP address. You tried to use the container IP with both ports, but there is nothing listening on port 1234 in the container. You made that port available on the host. Using that port on the container's IP address was never going to work.
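One quick way to see which side of the mapping each port lives on is docker port, which prints the published mappings for a container (the container name and mapping below are the ones from the thread; the exact output may vary slightly between Docker versions):

sudo docker port xwiki
8080/tcp -> 0.0.0.0:1234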

I think I'm still missing something. The important bit of my Nginx config follows this formula for accessing all containers:

proxy_pass http://$container_name:$container_port$request_uri

In this case that is:

proxy_pass http://xwiki:1234$request_uri

Seems like this is what you described already?

Regarding the port mapping in the docker-compose.yml file, isn't the first port always the host port if no IP is specified? I.e. 0.0.0.0, as the docker ps output in my OP indicated?

If your reverse proxy is running as a native process on the host, and the hostname of the docker host happens to be xwiki, then this would be #3 of my previous post.

Note: if the reverse proxy is running inside a container, there is no need to publish ports for any reverse proxy target containers, as they should communicate using a container network.

If the mapping provides no host IP, it will bind the host port to all IPs available on the host using 0.0.0.0.
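The two forms look like this in a compose file (alternatives, not meant to be combined; the 127.0.0.1 variant is a hypothetical example for restricting the binding to one interface):

ports:
  - '1234:8080'             # no host IP: binds 0.0.0.0:1234 on the host to 8080 in the container
  - '127.0.0.1:1234:8080'   # host IP given: published only on the loopback interface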


@meyay Is your server that runs the configuration with the proxy_pass setting in a container other than xwiki or xwiki_db?

I would think that if you want to set proxy_pass, and the server you configured to do that is on the same network, then you would use:

proxy_pass http://xwiki:8080$request_uri

Since 1234 would be the host port and 8080 is the container port where the service runs. However, this is all a bit confusing, since I would think that proxy_pass is part of the xwiki container and not the xwiki_db container running Postgres. But I may have it backwards.

Also, why did you make the network external?

Is your server that runs the configuration with the proxy_pass setting in a container other than xwiki or xwiki_db?

Yes, my reverse proxy is in its own container.

I would think that if you want to set proxy_pass, and the server you configured to do that is on the same network, then you would use:

The proxy pass example you provided does work, and in fact I mentioned in my OP that I used that configuration. The fact that it works is exactly what confuses me.

I think I'm narrowing in on my misunderstanding. Let me ask this. I'm operating under the assumption that when I start a container, I can specify an arbitrary port for its service to listen on. I don't want Xwiki to listen on port 8080. If it uses port 8080 internally, that is fine, but I don't want that exposed on my host machine or my Docker network. I thought the way to accomplish this was to specify {port I want container to listen on}: {port container uses internally} in the container's docker-compose.yml file. Is it simply not possible, for all images, to specify the container port they listen on? Some images provide a way to change this, like Transmission, Gitlab, and others, and I think this is where I'm getting confused.

Since 1234 would be the host port and 8080 is the container port where the service runs. However, this is all a bit confusing, since I would think that proxy_pass is part of the xwiki container and not the xwiki_db container running Postgres. But I may have it backwards.

Everything in my reverse proxy for this example is for Xwiki’s web interface and not its DB.

Also, why did you make the network external?

I created a user-defined Docker network because, when I was reading about the default network vs user-defined networks, it seemed that user-defined networks were more performant, and I think there were other benefits which I can't remember at this moment. I also needed IPv6 support at one time for some testing, so I had re-created my_net with IPv6 support. Is there some reason I would want to stay with the default network?

I understand what I was missing now. My thinking that I could pick an arbitrary port for a service to listen on was incorrect. A service listens on the port it was made to listen on, and unless the service has some configuration mechanism to change its listen port, like Transmission and Gitlab have, the port the image dictates the service listens on is set in stone. In the case of my Xwiki example, that port is 8080. It doesn't matter if another container has a service that also listens on 8080, because as long as the ports aren't published via the docker-compose.yml ports configuration, there will be no port mapping conflict. This additionally improves my security, which I was concerned about, because the way I was doing it, every container was exposing a host port. I thought that was dumb but just the way it worked; now I realize my error. Thanks guys.
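In compose terms, that conclusion amounts to dropping the ports section from the xwiki service in the file from the OP, since the proxy container reaches the container port directly over my_net (a sketch):

  xwiki:
    image: xwiki:16.1.0-postgres-tomcat
    container_name: xwiki
    # no 'ports:' entry: nothing is published on the host;
    # the reverse proxy container reaches xwiki:8080 over my_net
    networks:
      - my_net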

I’m operating under the assumption that when I start a container, I can specify an arbitrary port for its service to listen on.

This is correct: you can specify any port that is not in use for the service to listen on; this is the container port. You can also choose the host port to map to that container port.

I don’t want Xwiki to listen on port 8080. If it uses port 8080 internally, that is fine, but I don’t want that exposed on my host machine or my Docker network.

Which we see is what you have done in your docker-compose.yml. You have exposed the xwiki service to the host on port 1234, and it runs on port 8080 inside its container.

I thought the way to accomplish this was to specify {port I want container to listen on}: {port container uses internally} in the container's docker-compose.yml file.

Yes, and you have done that.

Now to help you understand the network setup.

You have made an external Docker network. By default, IP addresses are automatically assigned (I believe via DHCP) to each container that runs on this network. I also believe they start with 172.17.0.* at the time of writing this. See here for more details: Networking overview: IP address and hostname

When you tried to run

sudo curl 172.17.0.6:1234

there was a problem: you mixed up a Docker container IP with a host port. That IP, at the time, belonged to your xwiki container and is part of the Docker network, so the correct combination would have been 172.17.0.6:8080. You would also need to run that from inside a container on the Docker network my_net for it to work without making other Docker changes.

I see you already solved everything 🙂 Just some additional notes:

I don't think there is a difference regarding performance, but the benefit you probably mean was mentioned by @meyay.

He also mentioned that the proxy and the application container have to be on the same network.

If you run the proxy container and the app container in separate compose projects (which is what usually makes sense), the network has to be external so it is not deleted when the proxy compose project is deleted. So that must have been your real reason for the external network. All networks created by compose are user-defined networks; they don't have to be external.

That is actually the default subnet of the default Docker bridge. The first user-defined network starts with 172.18., the second with 172.19., but it is all configurable.
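For instance, the subnet can be chosen explicitly when creating the network with the --subnet flag (the range here is a hypothetical example):

sudo docker network create --subnet 172.30.0.0/16 my_net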

If you use nginx and create your own configuration, you might want to read this post about nginx DNS caching, as it shows how to work around nginx's DNS caching issue.


Good to know. I was actually already using indirect targets as the post you linked to suggested, but I didn’t realize that by itself would prevent caching, so I was also using the proxy_no_cache directive. Apparently I don’t actually need that though.

The proxy_no_cache directive is related to caching a response.

The DNS caching issue is caused by nginx's behavior of resolving a hostname once and then caching it indefinitely. Using a variable as the proxy_pass target solves the problem of a cached entry resolving to an orphaned container IP after a container is re-created due to configuration changes and receives a new container IP. You might still want to set the resolver to the embedded resolver of the user-defined network and give cached entries a short validity timespan.
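A minimal sketch of that pattern (127.0.0.11 is the embedded DNS server of user-defined Docker networks; the server_name and the 30s validity are example values, not from the thread):

server {
    listen 80;
    server_name wiki.example.com;

    # embedded DNS server of the user-defined network, short cache validity
    resolver 127.0.0.11 valid=30s;

    location / {
        # assigning the target to a variable forces nginx to re-resolve
        # the name at request time instead of caching it indefinitely
        set $upstream http://xwiki:8080;
        proxy_pass $upstream$request_uri;
    }
}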

Though, ultimately I would not even bother with nginx, and would use traefik + service labels instead, as that allows you to dynamically configure the reverse proxy rules based on container start/stop events.
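As a sketch, the compose side of that looks roughly like this (Traefik v2 label syntax; the router name and domain are hypothetical, and a Traefik container attached to my_net is assumed):

  xwiki:
    image: xwiki:16.1.0-postgres-tomcat
    networks:
      - my_net
    labels:
      - traefik.enable=true
      - traefik.http.routers.xwiki.rule=Host(`wiki.example.com`)
      - traefik.http.services.xwiki.loadbalancer.server.port=8080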

NginxProxy can do the same.