How to set up Nginx on a Linux VM as a load balancer between multiple containers communicating on the same port?

I am designing a program that sends messages from a client PC to a Linux VM (Ubuntu) hosting the Docker containers (the Docker host). Each container runs a program that receives the message and then responds to the client PC over UDP, and potentially TCP, on a port I specify.

This worked perfectly with a single container, with no issues. However, if I add another container, it causes a conflict on that same port. So I found this link (How to Expose Multiple Containers On the Same Port), decided to go with Nginx, and set it up on the Linux VM with the following settings:

sudo cat /etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log debug;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

# TCP and UDP Load Balancing
stream {
    # TCP Load Balancing
    upstream tcp_backend {
        server 172.17.0.2:8889;  # Container 1 (TCP)
        server 172.17.0.3:8890;  # Container 2 (TCP)
        server 172.17.0.4:8891;  # Container 3 (TCP)
    }

    server {
        listen 8888;            # TCP load balancer listens on port 8888
        proxy_pass tcp_backend; # Forward requests to the TCP backend
        proxy_timeout 20s;      # Timeout for proxied requests
    }

    # UDP Load Balancing
    upstream udp_backend {
        server 172.17.0.2:8889;  # Container 1 (UDP)
        server 172.17.0.3:8890;  # Container 2 (UDP)
        server 172.17.0.4:8891;  # Container 3 (UDP)
    }

    server {
        listen 8888 udp;        # UDP load balancer listens on port 8888
        proxy_pass udp_backend; # Forward requests to the UDP backend
        proxy_timeout 20s;      # Timeout for proxied requests
    }
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;

    ##
    # Gzip Settings
    ##

    gzip on;

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

#mail {
#   # See sample authentication script at:
#   # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#   # auth_http localhost/auth.php;
#   # pop3_capabilities "TOP" "USER";
#   # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#   server {
#       listen     localhost:110;
#       protocol   pop3;
#       proxy      on;
#   }
#
#   server {
#       listen     localhost:143;
#       protocol   imap;
#       proxy      on;
#   }
#}
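
After editing this file, the configuration can be validated and reloaded with the standard nginx commands:

# check the config syntax, then reload nginx without dropping connections
sudo nginx -t && sudo systemctl reload nginx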

With this in place, I can see the communication passing through Nginx, as follows:

sudo tcpdump -i any -n port 8887 or port 8888
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
11:15:12.874895 eth0  In  IP 192.168.1.2.63075 > 192.168.1.3.8887: UDP, length 315
11:15:12.875338 eth0  In  IP 192.168.1.2.63075 > 172.17.0.1.8887: UDP, length 315
11:15:13.463923 docker0 Out IP 172.17.0.1.32893 > 172.17.0.2.8888: UDP, length 315
11:15:13.463928 vethf560a7c Out IP 172.17.0.1.32893 > 172.17.0.2.8888: UDP, length 315
11:15:27.752880 eth0  In  IP 192.168.1.2.63075 > 192.168.1.3.8887: UDP, length 315
11:15:27.753394 eth0  In  IP 192.168.1.2.63075 > 172.17.0.1.8887: UDP, length 315
11:15:28.457848 docker0 Out IP 172.17.0.1.32893 > 172.17.0.2.8888: UDP, length 315
11:15:28.457852 vethf560a7c Out IP 172.17.0.1.32893 > 172.17.0.2.8888: UDP, length 315
11:15:42.764085 eth0  In  IP 192.168.1.2.63075 > 192.168.1.3.8887: UDP, length 315
11:15:42.764804 eth0  In  IP 192.168.1.2.63075 > 172.17.0.1.8887: UDP, length 315
11:15:43.458021 docker0 Out IP 172.17.0.1.32893 > 172.17.0.2.8888: UDP, length 315
11:15:43.458026 vethf560a7c Out IP 172.17.0.1.32893 > 172.17.0.2.8888: UDP, length 315
11:15:43.520311 vethf560a7c P   IP 172.17.0.2.47084 > 172.17.0.1.8888: Flags [S], seq 1001230011, win 32120, options [mss 1460,sackOK,TS val 2027316222 ecr 0,nop,wscale 7], length 0
11:15:43.520320 docker0 In  IP 172.17.0.2.47084 > 172.17.0.1.8888: Flags [S], seq 1001230011, win 32120, options [mss 1460,sackOK,TS val 2027316222 ecr 0,nop,wscale 7], length 0
11:15:43.520335 docker0 Out IP 172.17.0.1.8888 > 172.17.0.2.47084: Flags [S.], seq 2759787398, ack 1001230012, win 31856, options [mss 1460,sackOK,TS val 312322409 ecr 2027316222,nop,wscale 7], length 0
11:15:43.520338 vethf560a7c Out IP 172.17.0.1.8888 > 172.17.0.2.47084: Flags [S.], seq 2759787398, ack 1001230012, win 31856, options [mss 1460,sackOK,TS val 312322409 ecr 2027316222,nop,wscale 7], length 0
11:15:43.520348 vethf560a7c P   IP 172.17.0.2.47084 > 172.17.0.1.8888: Flags [.], ack 1, win 251, options [nop,nop,TS val 2027316222 ecr 312322409], length 0
11:15:43.520350 docker0 In  IP 172.17.0.2.47084 > 172.17.0.1.8888: Flags [.], ack 1, win 251, options [nop,nop,TS val 2027316222 ecr 312322409], length 0
11:15:43.520504 docker0 Out IP 172.17.0.1.40166 > 172.17.0.2.8888: Flags [S], seq 1240504208, win 32120, options [mss 1460,sackOK,TS val 312322409 ecr 0,nop,wscale 7], length 0
11:15:43.520508 vethf560a7c Out IP 172.17.0.1.40166 > 172.17.0.2.8888: Flags [S], seq 1240504208, win 32120, options [mss 1460,sackOK,TS val 312322409 ecr 0,nop,wscale 7], length 0
11:15:43.520519 vethf560a7c P   IP 172.17.0.2.8888 > 172.17.0.1.40166: Flags [R.], seq 0, ack 1240504209, win 0, length 0
11:15:43.520524 docker0 In  IP 172.17.0.2.8888 > 172.17.0.1.40166: Flags [R.], seq 0, ack 1, win 0, length 0
11:15:43.520611 docker0 Out IP 172.17.0.1.8888 > 172.17.0.2.47084: Flags [F.], seq 1, ack 1, win 249, options [nop,nop,TS val 312322409 ecr 2027316222], length 0
11:15:43.520614 vethf560a7c Out IP 172.17.0.1.8888 > 172.17.0.2.47084: Flags [F.], seq 1, ack 1, win 249, options [nop,nop,TS val 312322409 ecr 2027316222], length 0
11:15:43.521143 vethf560a7c P   IP 172.17.0.2.47084 > 172.17.0.1.8888: Flags [.], ack 2, win 251, options [nop,nop,TS val 2027316223 ecr 312322409], length 0
11:15:43.521148 docker0 In  IP 172.17.0.2.47084 > 172.17.0.1.8888: Flags [.], ack 2, win 251, options [nop,nop,TS val 2027316223 ecr 312322409], length 0
11:15:43.559266 vethf560a7c P   IP 172.17.0.2.47084 > 172.17.0.1.8888: Flags [P.], seq 1:473, ack 2, win 251, options [nop,nop,TS val 2027316261 ecr 312322409], length 472
11:15:43.559272 docker0 In  IP 172.17.0.2.47084 > 172.17.0.1.8888: Flags [P.], seq 1:473, ack 2, win 251, options [nop,nop,TS val 2027316261 ecr 312322409], length 472
11:15:43.559284 docker0 Out IP 172.17.0.1.8888 > 172.17.0.2.47084: Flags [R], seq 2759787400, win 0, length 0
11:15:43.559287 vethf560a7c Out IP 172.17.0.1.8888 > 172.17.0.2.47084: Flags [R], seq 2759787400, win 0, length 0
11:15:57.904098 eth0  In  IP 192.168.1.2.63075 > 192.168.1.3.8887: UDP, length 315
11:15:57.904098 eth0  In  IP 192.168.1.2.63075 > 172.17.0.1.8887: UDP, length 315
11:15:58.464881 docker0 Out IP 172.17.0.1.32893 > 172.17.0.2.8888: UDP, length 315
11:15:58.464885 vethf560a7c Out IP 172.17.0.1.32893 > 172.17.0.2.8888: UDP, length 315
11:16:12.932868 eth0  In  IP 192.168.1.2.63075 > 192.168.1.3.8887: UDP, length 315
11:16:12.933161 eth0  In  IP 192.168.1.2.63075 > 172.17.0.1.8887: UDP, length 315
11:16:13.456929 docker0 Out IP 172.17.0.1.48185 > 172.17.0.2.8888: UDP, length 315
11:16:13.456935 vethf560a7c Out IP 172.17.0.1.48185 > 172.17.0.2.8888: UDP, length 315

192.168.1.2 -> Client IP; 192.168.1.3 -> Docker Host IP

Here is the typical command I use to run the Docker container:

docker run -it --cap-add=all --hostname debian -p 8889:8888/tcp -p 8889:8888/udp --dns-opt='options single-request' --sysctl net.ipv6.conf.all.disable_ipv6=1 --name Container1 image
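
For reference, the resulting port mapping can be confirmed with docker port; for the command above, the output should look roughly like this:

docker port Container1
8888/tcp -> 0.0.0.0:8889
8888/udp -> 0.0.0.0:8889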

However, the issue I am facing now is that the containers are not sending any information back to the client PC. Is there a mistake in my Nginx configuration, or is something missing?

Also, would it be better to run Nginx as a Docker container, or to keep it on the Docker host?

Just to be sure: you want to use Nginx as a load balancer, as in one incoming port should be load balanced across multiple backend instances running replicas of the same configuration?

Some odd observations:

  • The target containers use different container ports. Why?
  • The target containers are not addressed by their container or service name, but by their container IPs. This is bad practice, though name-based lookup will not work on the default Docker bridge network anyway.
  • Your target containers are attached to the default Docker bridge.
  • It seems you don't use Docker Compose to configure your containers. You should start using it, as it will make your life much easier; Composerize can help translate docker run commands into Docker Compose files (see the sketch after this list).
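
As a rough sketch, your docker run command from above might translate to a Compose file like this (the service name container1 is a placeholder, and "image" stands for your actual image name):

# rough Compose equivalent of the docker run command quoted above
services:
  container1:
    image: image
    hostname: debian
    cap_add:
      - ALL
    ports:
      - "8889:8888/tcp"
      - "8889:8888/udp"
    dns_opt:
      - single-request
    sysctls:
      net.ipv6.conf.all.disable_ipv6: 1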

Your posts do not provide enough information to answer this question.

Thank you for your reply. Here are my comments on your questions.

That's right. I want the messages to be dispersed across all containers. Each container will then respond to the client's PC with its hostname (which I will try to make unique in the docker run command) to confirm that all messages have arrived successfully at all containers.

Because if I set up Nginx with the following configuration:

# TCP and UDP Load Balancing
stream {
    # TCP Load Balancing
    upstream tcp_backend {
        server 172.17.0.2:8888;  # Container 1 (TCP)
        server 172.17.0.3:8888;  # Container 2 (TCP)
        server 172.17.0.4:8888;  # Container 3 (TCP)
    }

    server {
        listen 8888;            # TCP load balancer listens on port 8888
        proxy_pass tcp_backend; # Forward requests to the TCP backend
        proxy_timeout 20s;      # Timeout for proxied requests
    }

    # UDP Load Balancing
    upstream udp_backend {
        server 172.17.0.2:8888;  # Container 1 (UDP)
        server 172.17.0.3:8888;  # Container 2 (UDP)
        server 172.17.0.4:8888;  # Container 3 (UDP)
    }

    server {
        listen 8888 udp;        # UDP load balancer listens on port 8888
        proxy_pass udp_backend; # Forward requests to the UDP backend
        proxy_timeout 20s;      # Timeout for proxied requests
    }
}

Then, when I try to run the container:

docker run -it --cap-add=all --hostname debian -p 8888:8888/tcp -p 8888:8888/udp --dns-opt='options single-request' --sysctl net.ipv6.conf.all.disable_ipv6=1 --name Container1 image

I get an error that TCP port 8888 is already in use:

docker: Error response from daemon: driver failed programming external connectivity on endpoint Container1 (ef1e24d84356efcb8505056cd8f3c989f0c7ca803cc82124ac8d514d3f61f898): failed to bind port 0.0.0.0:8888/tcp: Error starting userland proxy: listen tcp4 0.0.0.0:8888: bind: address already in use.

It turns out that Nginx is already listening on 8888, hence I cannot run the container using the command I mentioned in the quote.
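
One quick way to confirm which process already holds the port on the host (assuming the iproute2 ss tool is installed):

# list listening TCP/UDP sockets with owning processes, filtered for port 8888
sudo ss -tulpn | grep ':8888'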

Sorry, the issue here is that the service is the same across all the containers, so I am not sure what the best approach would be. When I use a single container without Nginx, it works, because I added a route on the client PC targeting the Docker gateway; hence I thought it would also work if I added Nginx and sent the messages through it.

I am using Visual Studio Code to create Dockerfiles, then build the images and create the containers.

I asked this question because I currently have Nginx installed directly on the Linux VM to disperse the messages; the alternative would be to create a Docker container from the Nginx image and copy the configuration across into that container.

For simple setups, I tend to use nginx-proxy with its TLS companion to manage everything automatically, even Let's Encrypt TLS/SSL certificates, configured purely through environment variables on the target container.
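
For HTTP services, the basic nginx-proxy pattern looks roughly like this (app.example.com and my-image are placeholders; check the nginx-proxy README for the authoritative invocation):

# run the proxy; it watches the Docker socket and generates its own nginx config
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro nginxproxy/nginx-proxy
# any container started with VIRTUAL_HOST set is picked up automatically
docker run -d -e VIRTUAL_HOST=app.example.com my-image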

For more complex setups including Docker Swarm, Traefik can do similar configuration discovery.
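
As a sketch of what that discovery can look like with Traefik v2, a plain TCP service can be exposed purely via container labels (the router/service name msg and entrypoint name msg-entry are placeholders; an entrypoint with that name must exist in Traefik's static configuration):

# Compose snippet: Traefik's docker provider discovers this service via labels
services:
  target1:
    image: image
    labels:
      - "traefik.enable=true"
      - "traefik.tcp.routers.msg.rule=HostSNI(`*`)"
      - "traefik.tcp.routers.msg.entrypoints=msg-entry"
      - "traefik.tcp.services.msg.loadbalancer.server.port=8888"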

Thank you for your reply. So you use the Dockerfile from the link and then specify which port(s) should be used in the docker run command?

If the load balancer/reverse proxy is a container, the target containers don't need to publish ports, as they will communicate through the container network.

Here is an example of how it might look like:

# create user defined network
docker network create web
# create and start reverse proxy/lb container attached to user defined network
docker run -d --network web -p 8888:8888/tcp -p 8888:8888/udp {whatever parameters your reverse proxy container needs} image
# create and start target container accessible through reverse proxy/lb
docker run -it --cap-add=all --hostname debian --dns-opt='options single-request' --sysctl net.ipv6.conf.all.disable_ipv6=1 --name target1 --network web image
docker run -it --cap-add=all --hostname debian --dns-opt='options single-request' --sysctl net.ipv6.conf.all.disable_ipv6=1 --name target2 --network web image
docker run -it --cap-add=all --hostname debian --dns-opt='options single-request' --sysctl net.ipv6.conf.all.disable_ipv6=1 --name target3 --network web image

Please notice how the containers are attached to the user-defined network via the argument --network web. User-defined networks provide DNS-based service discovery, which allows one container to communicate with another using its container name instead of an IP.
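
Name resolution can be verified from inside one of the target containers, for example (assuming the image ships glibc's getent; minimal images may need another lookup tool):

# resolve target2 by container name from inside target1
docker exec target1 getent hosts target2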

Now your upstream could look like this:

    upstream tcp_backend {
        server target1:8888;  # Container 1 (TCP)
        server target2:8888;  # Container 2 (TCP)
        server target3:8888;  # Container 3 (TCP)
    }

Though you will want to use the approach that @bluepuma77 suggested, as it will update the reverse proxy/lb configuration dynamically.

Thank you for the explanation, I will look into it and @bluepuma77 approach as well.