Docker connection refused between two containers

Hey,
I deployed two containers on the same network (I tried both the default bridge and a custom network I created for testing).
When building the image for container n2, I exposed port 5000.
I'm using n1 with nginx as a reverse proxy, and that part is doing its job.
However, when it redirects to n2 the connection is refused.
Since I'm using the same network, I'm redirecting to 0.0.0.0:5000.
When I enter n1 and check with nmap whether n2 really has port 5000 open, I find it's filtered.

Does anyone know how to open it, or what the problem might be?

Check to make sure they're both on the same network. They probably aren't, and you'll probably have to create your own network and add them both to it in the docker run commands.

You can tell by doing a docker network ls to find which network you think they are on, and then a docker network inspect of that network; it'll show you which containers are attached to that network.

I'm not sure how you're trying to connect, but you will probably have better luck connecting via the container name, not the IP.
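For example, something like this (replace test with whatever network you think they are on):

# list all networks on this host
docker network ls
# show details of one network, including its "Containers" section
docker network inspect test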

Check to make sure they're both on the same network. They probably aren't

They are

{
        "Name": "test",
        "Id": "ecf3d90b067c958f67e0bf7becaff902247ed8c637138928d28bb2a79f0c6a44",
        "Created": "2019-01-29T19:47:48.045805396Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "954f6416daeb4073a6382c2d2ea510084bdf8418232b14fca11e962449ae1444": {
                "Name": "n1",
                "EndpointID": "444c2d5844b63498e58facc7ee43bac7bf0c4ceefaa9dc08ea9a2c272b9e50ce",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "c9716668c1ce042bae148f3ec17cc1e985559903f01735be0114c5097ab11271": {
                "Name": "n2",
                "EndpointID": "d1bed1cfac03388fd285adacff42aa4ac7a6ff18ea26b11822eb5c29f4b05140",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }

you'll probably have to create your own network and add them both to it in the docker run commands.
You can tell by doing a docker network ls to find which network you think they are on, and then a docker network inspect of that network; it'll show you which containers are attached to that network.

I just did it as stated above and it is still not working.

The nginx conf file is the following.
Both 0.0.0.0:5000 and 172.18.0.3 are refused when trying to connect:

server{
  listen 80;
  location \ {
    proxy_pass "http:0.0.0.0:5000";
    proxy_set_header Host $host;
    proxy_redirect          off;
    proxy_set_header        X-NginX-Proxy true;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}

I have zero experience with nginx so I can't help you there. Try connecting to http://n2:5000 instead of 0.0.0.0.

You never ever use IP addresses when working with docker containers. :smiley:

If you change the 0.0.0.0 in the nginx config to the name of the container (n1 or n2) it will work. Docker has an internal DNS that uses the container name as host.

Look at a previous reply I did a couple of days ago on a similar topic.
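As a quick sanity check before reloading nginx, the name lookup can be verified from inside the proxy container, roughly like this (it assumes the official nginx image, which ships getent):

# resolve the n2 container name through Docker's embedded DNS
docker exec n1 getent hosts n2
# should print the container's network address, e.g. 172.18.0.3  n2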

I just changed it to n2, restarted nginx afterwards, and I'm sure the conf is loaded, as nginx -T shows the following:

# configuration file /etc/nginx/conf.d/default.conf:
server {
  listen 80;
  location / {
    proxy_pass "http://n2:5000";
    proxy_set_header Host $host;
    proxy_redirect          off;
    proxy_set_header        X-NginX-Proxy true;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}

But it is still not working.
I'm getting the following error:

[error] 706#706: *13 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://172.18.0.3:5000/", host: "172.17.0.1:8080"

2019/01/30 15:00:28 [error] 706#706: *13 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://172.18.0.3:5000/favicon.ico", host: "172.17.0.1:8080", referrer: "http://172.17.0.1:8080/"

Even though I'm using the name of the container, in the end it resolves to the real IP.

I tried using telnet to check whether I could connect from inside the nginx container, and it failed:

# telnet n2 5000
Trying 172.18.0.3...
telnet: Unable to connect to remote host: Connection refused

I tried nmap to check the port and it's closed. I don't know why, since I exposed it in the Dockerfile.

Where are you running nmap and telnet from? Inside the nginx container or your localhost?

EXPOSE tells Docker that this container should expose this port (or ports) WITHIN the defined network. Unless you are running with network=host, it should work. If you are running with network=host, you will get a port collision; it would be like two different applications trying to start something on port 8080.

Using the -p in docker run or ports: in docker-compose.yml binds the internal port to a host port.
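Roughly, the difference looks like this (just a sketch; myapp-image is a placeholder for an image whose application listens on port 5000):

# reachable from other containers on the "test" network as myapp:5000,
# but not published on the host
docker run -d --network test --name myapp myapp-image

# -p additionally binds the container port to a host port, so the service
# is also reachable from the host as localhost:5000
docker run -d --network test --name myapp-published -p 5000:5000 myapp-image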

Take a good look at the example in the link I had in the last reply.

Then tell us exactly how you started the containers please.

Where are you running nmap and telnet from? Inside the nginx container or your localhost?

Inside the nginx container, which is on the same network as the n2 container.

Unless you are running with network=host, it should work. If you are running with network=host, you will get a port collision; it would be like two different applications trying to start something on port 8080.

They are using the network I've created, named test, not the host network.

Then tell us exactly how you started the containers please.

For n1:

sudo docker run -it -d -p 8080:80 --network="test" --name n1 nginx

As for n2:

sudo docker run -it -d --network="test" --name n2 n2

I also tried using the --expose flag on the command line for n2, even though it's in the Dockerfile, to make sure it's really exposed.
I also tried -p :5000, but this shouldn't matter, since I'm trying to access it from within the network, so exposing it to the outside shouldn't be needed.

This thread has been dormant for a little while, but I'm running into the same problem.

I'm using jenkinsci/blueocean with nginx, letsencrypt-nginx-proxy-companion and jwilder/nginx-proxy to docker-compose up a Jenkins server with https provided by letsencrypt. Everything seems to be starting up fine, and the jwilder/nginx-proxy container receives the request and correctly attempts to route it to the nginx container, but gets connection refused on port 80:

nginx-proxy_1                        | nginx.1    | 2019/06/16 19:43:37 [error] 90#90: *1 connect() failed (111: Connection refused) while connecting to upstream, client: <my IP>, server: jenkins.<my domain>.com, request: "GET / HTTP/2.0", upstream: "http://172.18.0.4:80/", host: "jenkins.<my domain>.com"
nginx-proxy_1                        | nginx.1    | jenkins.<my domain>.com <my IP> - - [16/Jun/2019:19:43:37 +0000] "GET / HTTP/2.0" 502 575 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36"
nginx-proxy_1                        | nginx.1    | 2019/06/16 19:43:37 [error] 90#90: *1 connect() failed (111: Connection refused) while connecting to upstream, client: <my IP>, server: jenkins.<my domain>.com, request: "GET /favicon.ico HTTP/2.0", upstream: "http://172.18.0.4:80/favicon.ico", host: "jenkins.<my domain>.com", referrer: "https://jenkins.<my domain>.com/"
nginx-proxy_1                        | nginx.1    | jenkins.<my domain>.com <my IP> - - [16/Jun/2019:19:43:37 +0000] "GET /favicon.ico HTTP/2.0" 502 575 "https://jenkins.<my domain>.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36"

I have confirmed that all containers are on the same network, and I added a busybox-curl container to the network to verify that I can ping the nginx container from inside the network. But if I try to curl the nginx container by name or by container IP address, I get the same connection refused on port 80 that the proxy container is getting:

/ # ping 172.18.0.4
PING 172.18.0.4 (172.18.0.5): 56 data bytes
64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.165 ms
64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.092 ms
64 bytes from 172.18.0.4: seq=2 ttl=64 time=0.120 ms
^C
--- 172.18.0.4 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.092/0.125/0.165 ms
/ # curl http://jenkins_nginx_1
curl: (7) Failed to connect to jenkins_nginx_1 port 80: Connection refused
/ # curl http://172.18.0.4
curl: (7) Failed to connect to 172.18.0.4 port 80: Connection refused
/ # exit

Sincerely hoping a Docker genius can help me out :slight_smile:

PS, here is my docker-compose.yml:

version: '2'

services:

  jenkins:
    image: jenkinsci/blueocean:1.17.0
    volumes:
      - 'jenkins_data:/var/jenkins_home'
      - '/var/run/docker.sock:/var/run/docker.sock'
    ports:
      - '8080:8080'
      - '8443:8443'
    environment:
      - 'JENKINS_OPTS=--httpPort=8080 --httpsPort=8443'

  nginx:
    image: nginx
    links:
      - jenkins
    volumes:
      - "./etc/nginx/conf.d/:/etc/nginx/conf.d/"
    expose:
      - "80"
    environment:
      VIRTUAL_HOST: jenkins.<my domain>.com
      LETSENCRYPT_HOST: jenkins.<my domain>.com
      LETSENCRYPT_EMAIL: info@<my domain>.com

  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./nginx/vhost.d:/etc/nginx/vhost.d"
      - "./nginx/html:/usr/share/nginx/html"
      - "./nginx/certs:/etc/nginx/certs"
      - "/var/run/docker.sock:/tmp/docker.sock:ro"

  letsencrypt-nginx-proxy-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    volumes_from:
      - "nginx-proxy"

volumes:
  jenkins_data:
    driver: local

Hi Matthewdb,
Add a 'hostname' field to your yml file; use that to form the internal network.

example yml:

jenkins:
  image: jenkinsci/blueocean:1.17.0
  hostname: jenkins1
  ...
nginx:
  image: nginx
  hostname: nginx1

Use 'docker network ls' and 'docker network inspect <network>' to find out which names are used in the network. Please check the "Name" field; possibly jenkins1 and nginx1. Use that name in the server configuration. To reach Jenkins from nginx, use 'jenkins1' in the nginx config.
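If you only need the attached names, a format filter can pull them straight out of the inspect output (a sketch; substitute your own network name for <network>):

docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' <network>
# prints something like: jenkins1 nginx1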

"Containers": {
    "removedlongid0": {
        "Name": "jenkins1",
        "EndpointID": "removed",
        "MacAddress": "02:42:ac:18:00:03",
        "IPv4Address": "172.24.0.3/16",
        "IPv6Address": ""
    },
    "removedlongid3": {
        "Name": "nginx1",
        "EndpointID": "removed",
        "MacAddress": "02:42:ac:18:00:04",
        "IPv4Address": "172.24.0.4/16",
        "IPv6Address": ""
    }
},

Hope this will be helpful!


Hi,

I'm in the same situation as the author. I can't reach one container from another using telnet on the appropriate port, but I'm able to ping each container. They are on the same network.

Does anyone have a solution?

I use CentOS 8 as the host server and an Ansible script to create the docker containers and network:

- hosts: localhost
  become: yes
  tasks:

  - name: Create network for nginx and registry and other docker container
    docker_network:
      name: nginxnet

  - name: Creates nginx config volume
    docker_volume:
      name: nginx_config

  - name: Pulls nginx image
    docker_image:
      name: "nginx"
      source: pull

  - name: Starts nginx
    docker_container:
      detach: yes
      image: nginx
      name: nginx
      hostname: nginx.nginxnet
      restart_policy: always
      volumes:
        - nginx_config:/etc/nginx
      published_ports:
        - 0.0.0.0:443:443
        - 0.0.0.0:80:80
      networks:
        - name: nginxnet

  - name: Creates gogs data volume
    docker_volume:
      name: gogs_data

  - name: Pulls gogs image
    docker_image:
      name: "gogs/gogs"
      source: pull

  - name: Starts Gogs
    docker_container:
      detach: yes
      hostname: gogs.nginxnet
      image: gogs/gogs
      name: gogs
      restart_policy: always
      published_ports:
        - 0.0.0.0:3000:3000
      volumes:
        - gogs_data:/data
      networks:
        - name: nginxnet

From the nginx container I tried to ping and telnet, as you can see:

root@253bd8d76090:/# ping gogs.nginxnet
PING gogs.nginxnet (172.3.27.5) 56(84) bytes of data.
64 bytes from gogs.nginxnet (172.3.27.5): icmp_seq=1 ttl=64 time=0.102 ms
64 bytes from gogs.nginxnet (172.3.27.5): icmp_seq=2 ttl=64 time=0.089 ms
^C
--- gogs.nginxnet ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 49ms
rtt min/avg/max/mdev = 0.087/0.092/0.102/0.012 ms

root@253bd8d76090:/# telnet gogs.nginxnet 3000
Trying 172.3.27.5...
telnet: Unable to connect to remote host: No route to host

But as you can see above in the Ansible script, I published port 3000 -> 3000 for the gogs container. On my host, if I run telnet localhost 3000 it works, so the gogs container is listening. But I can't find out why the two containers can't communicate with each other, since they are on the same network.

Thanks for reading.

Answering my own question:

The problem here is with CentOS 8, not Docker. In fact it's firewalld that blocks any connection between the containers. Completely disabling firewalld makes the containers communicate again :slight_smile: but stopping firewalld isn't a good idea, security-wise.

So here is what I've done:

  1. Remove all existing rules in firewalld (about ports / interfaces).
  2. Execute the following commands to open everything needed for the containers to work:


firewall-cmd --add-service=http --permanent
firewall-cmd --add-service=https --permanent
firewall-cmd --zone=public --add-masquerade --permanent
firewall-cmd --reload
systemctl restart docker

firewall-cmd --zone=public --add-masquerade --permanent does the trick.
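To verify the change took effect, the active zone settings can be listed, something like:

firewall-cmd --zone=public --list-all
# the output should now contain a "masquerade: yes" line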


This was indeed a firewall problem.

Ping from one container to another container works fine.
Telnet from one container to another container times out.

After disabling the firewall (we are in a VPC with a firewall in front anyway) everything works as expected.

This is my docker-compose.yaml:

services:
  pulsar:
    image: apachepulsar/pulsar:3.3.1
    container_name: pulsar
    hostname: pulsar-host
    environment:
      - PULSAR_MEM=-Xms2g -Xmx2g
    ports:
      - "1883:1883"
    networks:
      - pulsar-network
    volumes:
      - ./settings.txt:/tmp/settings.txt
    command: >
      sh -c "cat /tmp/settings.txt >> /pulsar/conf/broker.conf && bin/pulsar standalone"

  quarkus-app:
    image: openjdk:17-jdk-alpine
    container_name: quarkus-app
    hostname: quarkus-host
    ports:
      - "8081:8080"
    networks:
      - pulsar-network
    environment:
      - MQTT_HOST=pulsar
    depends_on:
      - pulsar
    volumes:
      - ./target/simple-mqtt-1.0.0-SNAPSHOT-runner.jar:/app/application.jar
      - ./target/application.properties:/app/config/application.properties
    command: >
      sh -c "sleep 20 && java -jar /app/application.jar"
    working_dir: /app
  
  mosquitto-client:
    image: alpine:latest
    container_name: mosquitto-client
    hostname: mosquitto-client-host
    networks:
      - pulsar-network
    command: sh -c "apk add --no-cache mosquitto-clients && tail -f /dev/null"

networks:
  pulsar-network:
    external: false
    driver: bridge

The pulsar service refuses any connection, i.e. it refuses connections from every container, but it does not do so from a service on localhost.

So I tried disabling firewalls, with no luck.

Any suggestions?

Maybe your pulsar settings are wrong; the default ports seem to be 6650 and 8080.

ports: is used to publish container ports externally; when connecting within a Docker network the internal ports are used and are automatically available.
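For example, from the mosquitto-client container the broker should be reachable directly by service name on the internal port, something like this (a sketch; it assumes the MQTT listener inside the pulsar container is actually up on 1883):

# mosquitto-clients is installed in that container by its command above
docker exec mosquitto-client mosquitto_pub -h pulsar -p 1883 -t test -m hello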

PS: please don't hijack 4 year old threads.