How to configure Docker networking with docker-compose to enable remote access to a Docker container via IP?

This question seems to be quite common, but I’ve never found a satisfactory answer to it. In addition, setting up the network with docker-compose instead of plain docker causes extra confusion, e.g. because of Support IPAM gateway in version 3.x. To illustrate the question, refer to this diagram (draw.io image with embedded diagram data) showing a typical network setup of IoT applications during development:

[diagram: remote_host_network_setup]

As a developer I want to be able to access the application (here: Django, but it could be any other backend framework) running inside a Docker container on an Ubuntu Server from another machine (Ubuntu Desktop). Here the IP addresses of the Ubuntu Desktop and the Ubuntu Server are assigned statically (another common configuration during development would be dynamically assigned IP addresses via DHCP from a router in a LAN). It seems reasonable to use a custom bridge network for the Docker network, plus forwarding from the Ubuntu Server’s physical IP address to the IP address of the nginx container in the custom bridge network.

The simplified docker-compose.yml, with the external network created via docker network create --gateway 10.5.0.1 --subnet 10.5.0.0/16 custom_bridge, is:

version: "3.5"
services:
  nginx:
    networks:
      nw_containers:
        ipv4_address: 10.5.0.2
    expose:
      - "80"
      - "8080"
      - "1883"
      - "9001"
    ports:
      - "80:8000"
      - "8000:8000"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - django
      - mosquitto
  db:
    networks:
      nw_containers:
        ipv4_address: 10.5.0.3
  django:
    networks:
      nw_containers:
        ipv4_address: 10.5.0.4
  mosquitto:
    networks:
      nw_containers:
        ipv4_address: 10.5.0.5
networks:
  nw_containers:
    external:
      name: custom_bridge

The simplified, relevant part of nginx.conf is:

http {
  upstream django {
    server django:8080;
  }
  server {
    listen 8000;
    listen [::]:8000;

    # Django, development and location specific `proxy_pass`es, should be similar to other backend frameworks
    location /admin {
      proxy_pass http://django/admin;
    }
    # web server related `proxy_pass`es not shown
  }
}

stream {
  upstream mosquitto {
     server mosquitto:1883;
  }
  server {
    listen 1883;
    listen [::]:1883;
    proxy_pass mosquitto;
  }
}

How do I have to configure this development network setup to enable access to the web app from a remote machine (Ubuntu Desktop) with docker-compose v3?

Beautiful diagram, though it’s completely unclear why publishing a port of the nginx reverse proxy container and using host ip:{published port} is insufficient for what you are trying to achieve…

For instance, if you publish port 8443 for the nginx container (and handle TLS termination inside), you would use https://192.168.1.2:8443 to access the nginx container, which internally leverages the reverse proxy rules you created to forward traffic to the target container. Bear in mind that a custom network has a built-in DNS server that allows network-internal communication between containers using service names.
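As a sketch (port 8443 and the TLS handling are just an example, not taken from your setup):

services:
  nginx:
    ports:
      # publish container port 8443 on host port 8443; nginx would
      # terminate tls inside and forward to the target containers
      # via its reverse proxy rules
      - "8443:8443"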

Update: I see you updated your post. Lose the “expose” declarations in your compose file; they are only used by linked containers (which is super legacy, and the same is true for depends_on). Also, you have an inconsistency between the published ports of the nginx service and the ports you listen on inside the nginx.conf.

@meyay I’ve updated and fixed the info in the question as much as possible.

People usually do not even understand my questions without a diagram like this :slight_smile: The problem is that I’m able to access the Django admin backend via 10.5.0.1/admin or localhost/admin. However, the frontend is not rendered at all because the frontend files cannot be served appropriately. I’ll try to fix that… The access to the MQTT broker via 10.5.0.2:1883 works just fine.

It has been a while since I last worked with docker-compose. There is also no deprecation warning about expose in the docker-compose file reference v3.

There is also no deprecation warning in the docs about depends_on and startup order. As nginx needs to know the service names during startup, django and mosquitto have to be started first. How do I have to enforce this nowadays? Or do you suggest migrating to something like microk8s right at the beginning of development?
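Would something like a healthcheck-based condition be the way to go? A sketch of what I mean (as far as I can tell, the long depends_on syntax with condition is not part of the v3 file format I currently use, only of newer Compose versions, and the wget command is just a guess for my image):

services:
  django:
    healthcheck:
      # assumes wget exists in the image and the app answers on 8080
      test: ["CMD-SHELL", "wget -q -O /dev/null http://localhost:8080/admin || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 5
  nginx:
    depends_on:
      django:
        # wait until the healthcheck passes, not just until the
        # container process is started
        condition: service_healthy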

From the remote host? That’s impossible. It is bad practice to access a container by its internal IP. This is the reason why no one actually cares about the internal IPs. You would need to manipulate the routes for the 10.5.0.0/16 network on the hosts of your 192.168.1.0 network to allow them to access the container directly - which would be massively bad practice.
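Just to illustrate what that route manipulation would mean (and why you should not do it), on the Ubuntu Desktop it would be something like:

# route traffic for the container subnet via the docker host -
# purely illustrative, do not do this
sudo ip route add 10.5.0.0/16 via 192.168.1.2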

Do you know what it is used for? It really does nothing unless you use container links. I consider it legacy because as soon as you move your stuff to swarm, links will not work.

Won’t work with swarm either. To fix the availability problem you can tweak your nginx.conf to use the internal network’s DNS server AND introduce a variable in proxy_pass to prevent nginx from caching the target IP, see: NGINX swarm redeploy timeouts - #5 by meyay (this even works for dynamic container IPs). nginx does not need to be started after the other containers, though the other containers need to be started for the traffic to be forwarded successfully… which eventually happens.
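The gist of the linked post, as a sketch (127.0.0.11 is the embedded DNS server every custom docker network provides; the port and path are taken from your config):

http {
  server {
    listen 8000;

    # resolve service names via docker's embedded dns server and
    # re-resolve every 10s instead of caching the ip forever
    resolver 127.0.0.11 valid=10s;

    location /admin {
      # using a variable forces nginx to look up the name at request
      # time, so nginx starts fine even if django is not up yet
      set $django_upstream django;
      proxy_pass http://$django_upstream:8080;
    }
  }
}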

The whole world silently agreed that Kubernetes is the orchestrator of choice. Kubernetes gives fine-grained control and is way more powerful than docker-compose or swarm will ever be.

I know. I never tried to access containers directly via IPs until now. However, my line manager wants me to do it, and I tried in bridge and macvlan network mode :slightly_smiling_face: During development it does not matter anyway.

“Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.” Mh, right, I missed the bold part. Using expose without using service names makes no sense. W.r.t. container management in production I fully agree, of course.

Thanks for the reference. This will help me for sure.

Mh, I’m pretty sure I had “service not reachable” issues when the nginx container was started before django and mosquitto.

:+1:

Good for him if he is able to express what he wants. Though, is he able to think through what the solution actually needs? Isn’t that more a task for a (solution) architect?

I NEVER used macvlan on the job during the past six years; it was never really necessary.

In a custom network, all containers are free to communicate with other containers in the same network. While Kubernetes has network policies that allow defining which pod is allowed to communicate with which other pods and ports, there is no counterpart for plain docker, docker-compose, or docker swarm.
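For comparison, a Kubernetes NetworkPolicy sketch (the pod labels are made up) that would only allow nginx pods to reach django pods on port 8080:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nginx-to-django
spec:
  # applies to pods labeled app=django
  podSelector:
    matchLabels:
      app: django
  ingress:
    # only pods labeled app=nginx may connect, and only on port 8080
    - from:
        - podSelector:
            matchLabels:
              app: nginx
      ports:
        - protocol: TCP
          port: 8080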

:slightly_smiling_face:

I’m glad to hear that. It wasn’t really fun to mess around with it.

I’d love to play around with Kubernetes. However, I’m a software developer whose job it is to write code. We don’t have a DevOps engineer/Docker Captain who would usually take care of things like this :slightly_smiling_face:

If we could program a bot with this text, it would answer 30% of the questions in this forum.


I got rid of the static IPs and did a cleanup. The setup now looks as follows; “IPv4” in the diagram means the address is dynamically assigned in custom_bridge. The custom bridge (using the default bridge network is considered legacy) is created with docker network create -d bridge custom_bridge. There is a redis container as well.

[diagram: remote_host_network_setup]

docker-compose.yml

version: "3.5"
services:
  nginx:
    image: nginx:1.17.9-alpine
    hostname: nginx
    container_name: c_nginx
    networks:
      - nw_containers
    ports:
      - "80:80"
      - "1883:1883"
      - "9001:9001"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./edge_frontend/www:/var/www
      - ./backend/edge_backend/static:/var/www/static:ro
    depends_on:
      - django
      - mosquitto
  mosquitto:
    image: eclipse-mosquitto:1.6.8
    hostname: mosquitto
    container_name: c_mosquitto
    networks:
      - nw_containers
    # ensure correct permissions: sudo chown -R 1883:1883 ./mosquitto/
    user: 1883:1883
    environment:
      - PUID=1883
      - PGID=1883
    volumes:
      - ./mosquitto/config/mosquitto.conf:/mosquitto/config/mosquitto.conf
      - ./mosquitto/data:/mosquitto/data
      - ./mosquitto/logs:/mosquitto/log
  redis:
    image: redis:5.0.7-alpine
    hostname: redis
    container_name: c_redis
    networks:
      - nw_containers
    volumes:
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf:ro
    command: [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
  database:
    image: postgres:12.0-alpine
    hostname: database
    container_name: c_database
    networks:
      - nw_containers
    environment:
      - POSTGRES_USER=blub
      - POSTGRES_PASSWORD=bla
      - POSTGRES_DB=blub
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  django:
    build:
      context: ./backend
    entrypoint:
      - /code/entrypoint.sh
    command: daphne --bind 0.0.0.0 -p 8080 edge_backend.asgi:application
    stdin_open: true
    tty: true
    hostname: django
    container_name: c_django
    networks:
      - nw_containers
    volumes:
      - ./backend/edge_backend:/code/edge_backend/
      - ./backend/aos_fixture.json:/code/aos_fixture.json:ro
      - ./backend/entrypoint.sh:/code/entrypoint.sh:ro
      - ./data/config:/mnt/data/config
      - ./data/model:/mnt/data/model
    depends_on:
      - mosquitto
      - redis
      - database
networks:
  # docker network create -d bridge custom_bridge:
  # subnet: 172.18.0.0/16
  # gateway: 172.18.0.1
  nw_containers:
    external:
      name: custom_bridge
volumes:
  postgres_data:

The configuration of the custom bridge network looks like this:

$ docker network inspect custom_bridge 
[
    {
        "Name": "custom_bridge",
        "Id": "2718019ded6706f707db9a45c3446051ed3528b64217663d951d72c4cf99e082",
        "Created": "2020-07-28T12:35:32.592398458+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

nginx/nginx.conf

worker_processes  auto;  # 1 worker process per cpu

events {
  worker_connections  1024;  # explicit default
}

http {
  upstream django {
    server django:8080;
  }

  server {
    listen 80;
    listen [::]:80;

    # serve static files of the frontend,
    # /var/www/static contains static files of the backend
    location / {
      root /var/www;
      try_files $uri $uri/ /index.html;
    }

    # proxy admin url to asgi server (daphne) running on port 8080
    location /admin {
      proxy_pass http://django/admin;
    }
    # proxy frontend/backend integration api requests and requests to the interactive
    # graphiql interface to asgi server (daphne) running on port 8080
    location /graphql {
      proxy_pass http://django/graphql;
    }

    # proxy graphql subscription api to django channels backend redis (default port 6379)
    location /graphql/subscriptions {
      proxy_pass http://django/graphql/subscriptions;

      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";

      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Host $server_name;
    }
  }
}

stream {
  upstream mosquitto {
    server mosquitto:1883;
    server mosquitto:9001;
  }

  server {
    listen 1883;
    listen [::]:1883;
    listen 9001;
    listen [::]:9001;
    proxy_pass mosquitto;
  }
}

If I try to connect to the MQTT broker from the server (the machine where the dockerized application runs) via localhost:1883 or 172.18.0.1:1883, the connection works just fine, meaning nginx forwards to the MQTT broker correctly. All ports which need to be accessible from outside the containers are open (STATE: open): nmap 172.18.0.1 -p 80, nmap 172.18.0.1 -p 1883, nmap 172.18.0.1 -p 9001. All ports required for docker-network-internal communication only are closed (STATE: closed): nmap 172.18.0.1 -p 6379, nmap 172.18.0.1 -p 5432, etc.

What’s not working is the serving of static files via nginx: if I access the Django admin interface via localhost/admin, I get the form. This means nginx forwards HTTP traffic to django correctly. However, the frontend assets are not shown for the admin interface (localhost/admin), and the frontend in general is not shown (localhost) either. These problems relate to the nginx configuration.

W.r.t. the network configuration, the remaining point not yet clear to me is how to forward from the server’s static IP 192.168.1.2 to 172.18.0.1 so that I can access the application from a remote machine (192.168.1.3). The answer for the case with dynamic IP assignment to the Ubuntu Server and Ubuntu Desktop is not clear to me either.

The nginx web server issue is solved. I simply missed include mime.types;, default_type application/octet-stream; and sendfile on; from the default nginx config file :upside_down_face:

worker_processes  auto;  # 1 worker process per cpu

events {
  worker_connections  1024;  # explicit default
}

http {
  include mime.types;

  default_type application/octet-stream;

  sendfile on;

  upstream django {
    server django:8080;
  }

  server {
    listen 80;
    listen [::]:80;

    # serve static files of the frontend,
    # /var/www/static contains static files of the backend
    location / {
      root /var/www;
      try_files $uri $uri/ /index.html;
    }

    # proxy admin url to asgi server (daphne) running on port 8080
    location /admin {
      proxy_pass http://django/admin;
    }
    # proxy frontend/backend integration api requests and requests to the interactive
    # graphiql interface to asgi server (daphne) running on port 8080
    location /graphql {
      proxy_pass http://django/graphql;
    }

    # proxy graphql subscription api to django channels backend redis (default port 6379)
    location /graphql/subscriptions {
      proxy_pass http://django/graphql/subscriptions;

      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";

      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Host $server_name;
    }
  }
}

stream {
  upstream mosquitto {
    server mosquitto:1883;
    server mosquitto:9001;
  }

  server {
    listen 1883;
    listen [::]:1883;
    listen 9001;
    listen [::]:9001;
    proxy_pass mosquitto;
  }
}

Uhm, that’s where the port mapping of the nginx container comes into play. You already map the required ports from the host to the nginx container:

Thus, you should be able to access your endpoints like this:
port 80, location / → http://192.168.1.2
port 80, location /admin → http://192.168.1.2/admin
port 80, location /graphql → http://192.168.1.2/graphql
port 80, location /graphql/subscriptions → http://192.168.1.2/graphql/subscriptions

Though, I am not sure the server block listening on both port 1883 and port 9001 works like this. I am not sure that nginx is able to pick the correct upstream target depending on the incoming listen port… I would assume that it treats the upstream servers as interchangeable and balances among them. I also find a proxy_pass declaration outside a location context confusing. I am not saying it doesn’t work, maybe it does, I just find it unusual…
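If it turns out nginx does balance across both ports, splitting the stream config into one server block per port with a direct target should fix it (just a sketch, untested):

stream {
  # one server block per listen port, each forwarding to the matching
  # mosquitto port, instead of one upstream balancing across both
  server {
    listen 1883;
    listen [::]:1883;
    proxy_pass mosquitto:1883;
  }

  server {
    listen 9001;
    listen [::]:9001;
    proxy_pass mosquitto:9001;
  }
}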

If it really works, the endpoints should be:
port 1883 → 192.168.1.2:1883
port 9001 → 192.168.1.2:9001

If none of those endpoints are accessible from the client computer (IP 192.168.1.3), you might want to check the firewall configurations on the docker host and your client computer. It should work.

Not sure what this is about? So 192.168.1.2 is a DHCP-assigned IP?

Yepp. All communication between host and container network should go through dedicated ports only.

Yepp. Everything works like it should so far.

Hm, good point. I have to think about that.

I’m able to access the web application just fine! Seems like I had some stupid misconfiguration of the LAN :dizzy_face:

I’ve only tried the network setup with static IP address assignment so far.

Did we sort out everything now?

I don’t have enough time to test the dynamic IP address network setup today; I’ll do more extensive testing tomorrow. You really helped me a lot! Thanks! In case something does not work, I’ll append to this thread or open a new one.

Hint: docker binds published ports to 0.0.0.0 (= all interface IPs) by default, thus dynamically assigned IPs won’t be a problem. Typically you want a static IP for your server anyway.
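If you ever want to bind a published port to a single interface instead, you can prefix the mapping with the host IP (a sketch, using your server’s IP):

services:
  nginx:
    ports:
      # bind container port 80 to one host ip instead of 0.0.0.0
      - "192.168.1.2:80:80"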

Thanks for the hint!