Host machine is unable to connect to container

  • Issue type
    Networking. The host cannot connect to the container; the connection is refused:
    “curl: (7) Failed to connect to 172.17.0.2 port 8091: Connection refused”

  • OS Version/build
    Debian 10
    Docker 20.10.8

  • App version
    latest, Docker Hub

  • Steps to reproduce
    The container is started with a script that is included with the image.

./run.sh snapshot SNAPSHOTNAME
Then I receive the output: "curl: (7) Failed to connect to 172.17.0.2 port 8091: Connection refused"

In the Dockerfile:

# P2P (seed) port
EXPOSE 2001
# RPC ports
EXPOSE 5000
EXPOSE 8090
EXPOSE 8091
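
As far as I understand, EXPOSE only documents the ports; they are actually published at container start with -p flags. A hypothetical sketch of the kind of run command run.sh presumably issues (the script lives in the image, so these exact flags are an assumption on my part):

# Hypothetical reconstruction of the run command; the real run.sh
# flags may differ. Binding 0.0.0.0 publishes the RPC ports on all
# host interfaces instead of loopback only.
docker run -d --name api \
  -p 2001:2001 -p 5000:5000 \
  -p 0.0.0.0:8090-8091:8090-8091 \
  hive   # hived command and --data-dir flags omitted here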

docker ps

root@localhost:/mnt/hiveAPI/hive-docker# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED              STATUS              PORTS                                                                                NAMES
eca353c4e3dc   hive      "hived --data-dir=/s…"   About a minute ago   Up About a minute   0.0.0.0:2001->2001/tcp, 0.0.0.0:5000->5000/tcp, 127.0.0.1:8090-8091->8090-8091/tcp   api
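
Note that docker ps shows 8090-8091 published only on 127.0.0.1 on the host. That binding only affects access through the published ports; connecting straight to the container IP bypasses the mapping. The two paths, as I understand them (the URL path here is a placeholder for whatever the RPC endpoint expects):

# Through the published mapping, which here is bound to loopback only:
curl http://127.0.0.1:8091/
# Straight to the container's bridge IP, ignoring the mapping entirely:
curl http://172.17.0.2:8091/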

I have not messed with the docker network; the defaults are in place. I have also tried running the container with 0.0.0.0:8090-8091:8090-8091. That didn't allow me to connect to the container from the host machine either.

I have come across some information suggesting that Docker hadn't created the right iptables rules. I am unaware of what the rules should be, so here is what I have. I have not modified them.

root@localhost:/mnt/hiveAPI/hive-docker# iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy DROP)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere
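
A FORWARD policy of DROP is what Docker itself sets by default; container traffic is meant to be allowed by the ACCEPT rules Docker inserts above it. The other thing usually checked alongside this is kernel forwarding (a quick sanity check, not specific to this image):

# Bridged container traffic needs kernel IP forwarding enabled;
# Docker normally turns this on itself when the daemon starts.
sysctl net.ipv4.ip_forward    # expect: net.ipv4.ip_forward = 1
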
root@localhost:/mnt/hiveAPI/hive-docker# iptables -t nat --list
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        anywhere

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere
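
For comparison, while a container with published ports is running, the DOCKER chain in the nat table should also hold one DNAT entry per published port. A way to list it with ports and interfaces shown (a diagnostic sketch; run it while the container is up):

# Verbose, numeric listing of the nat-table DOCKER chain; with the
# container running, expect lines like:
#   DNAT tcp ... dpt:8091 to:172.17.0.2:8091
iptables -t nat -L DOCKER -n -v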

I think my issue is stemming from the NAT table's OUTPUT chain. The destination is !127.0.0.0/8.

Lastly, my ifconfig output:

root@localhost:/mnt/hiveAPI/hive-docker# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:15ff:feaf:aa57  prefixlen 64  scopeid 0x20<link>
        ether 02:42:15:af:aa:57  txqueuelen 0  (Ethernet)
        RX packets 76  bytes 4456 (4.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 85  bytes 8378 (8.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 23.239.XXXXX  netmask 255.255.255.0  broadcast 23.239.XXXX
        inet6 2600:3c01::f03c:XXXX  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::f03c:92ff:XXXX  prefixlen 64  scopeid 0x20<link>
        ether f2:3c:92:19:19:27  txqueuelen 1000  (Ethernet)
        RX packets 4413  bytes 398448 (389.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2141  bytes 221712 (216.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 45.33.XXXX  netmask 255.255.255.0  broadcast 0.0.0.0
        ether f2:3c:92:19:19:27  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 390  bytes 139633 (136.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 390  bytes 139633 (136.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

EDIT
The log outputs below were captured by briefly running the container and then stopping it.
/var/log/messages output when the docker container is run:

Aug 24 16:29:05 localhost kernel: [ 4739.623396] docker0: port 1(veth17b0bc7) entered blocking state
Aug 24 16:29:05 localhost kernel: [ 4739.624739] docker0: port 1(veth17b0bc7) entered disabled state
Aug 24 16:29:05 localhost kernel: [ 4739.625417] device veth17b0bc7 entered promiscuous mode
Aug 24 16:29:06 localhost kernel: [ 4739.876564] eth0: renamed from veth4d2cd15
Aug 24 16:29:06 localhost kernel: [ 4739.900667] IPv6: ADDRCONF(NETDEV_CHANGE): veth17b0bc7: link becomes ready
Aug 24 16:29:06 localhost kernel: [ 4739.901677] docker0: port 1(veth17b0bc7) entered blocking state
Aug 24 16:29:06 localhost kernel: [ 4739.902324] docker0: port 1(veth17b0bc7) entered forwarding state
Aug 24 16:29:18 localhost kernel: [ 4751.672300] docker0: port 1(veth17b0bc7) entered disabled state
Aug 24 16:29:18 localhost kernel: [ 4751.675264] veth4d2cd15: renamed from eth0
Aug 24 16:29:18 localhost kernel: [ 4751.733914] docker0: port 1(veth17b0bc7) entered disabled state
Aug 24 16:29:18 localhost kernel: [ 4751.736564] device veth17b0bc7 left promiscuous mode
Aug 24 16:29:18 localhost kernel: [ 4751.737099] docker0: port 1(veth17b0bc7) entered disabled state

/var/log/syslog:

Aug 24 16:30:59 localhost systemd[1]: var-lib-docker-overlay2-bb66b8a043ac757dceb8d8aaaf98ecd113149bdf4de1437a339a6b9e21f35125\x2dinit-merged.mount: Succeeded.
Aug 24 16:30:59 localhost systemd[1]: var-lib-docker-overlay2-bb66b8a043ac757dceb8d8aaaf98ecd113149bdf4de1437a339a6b9e21f35125-merged.mount: Succeeded.
Aug 24 16:30:59 localhost kernel: [ 4853.409232] docker0: port 1(veth6fdd50a) entered blocking state
Aug 24 16:30:59 localhost kernel: [ 4853.409996] docker0: port 1(veth6fdd50a) entered disabled state
Aug 24 16:30:59 localhost kernel: [ 4853.410763] device veth6fdd50a entered promiscuous mode
Aug 24 16:30:59 localhost containerd[822]: time="2021-08-24T16:30:59.830331918Z" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/07b788b129caf2c6e57b550282c657a0746327788424c6a87f20ce83cd3cbca3 pid=6737
Aug 24 16:30:59 localhost systemd[1]: Started libcontainer container 07b788b129caf2c6e57b550282c657a0746327788424c6a87f20ce83cd3cbca3.
Aug 24 16:30:59 localhost kernel: [ 4853.626576] eth0: renamed from veth49d1c61
Aug 24 16:31:00 localhost kernel: [ 4853.662635] IPv6: ADDRCONF(NETDEV_CHANGE): veth6fdd50a: link becomes ready
Aug 24 16:31:00 localhost kernel: [ 4853.663468] docker0: port 1(veth6fdd50a) entered blocking state
Aug 24 16:31:00 localhost kernel: [ 4853.664110] docker0: port 1(veth6fdd50a) entered forwarding state
Aug 24 16:31:02 localhost systemd[1]: docker-07b788b129caf2c6e57b550282c657a0746327788424c6a87f20ce83cd3cbca3.scope: Succeeded.
Aug 24 16:31:02 localhost dockerd[845]: time="2021-08-24T16:31:02.641108235Z" level=info msg="ignoring event" container=07b788b129caf2c6e57b550282c657a0746327788424c6a87f20ce83cd3cbca3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 24 16:31:02 localhost containerd[822]: time="2021-08-24T16:31:02.641134706Z" level=info msg="shim disconnected" id=07b788b129caf2c6e57b550282c657a0746327788424c6a87f20ce83cd3cbca3
Aug 24 16:31:02 localhost containerd[822]: time="2021-08-24T16:31:02.641268108Z" level=error msg="copy shim log" error="read /proc/self/fd/11: file already closed"
Aug 24 16:31:02 localhost kernel: [ 4856.313865] veth49d1c61: renamed from eth0
Aug 24 16:31:02 localhost kernel: [ 4856.350519] docker0: port 1(veth6fdd50a) entered disabled state
Aug 24 16:31:02 localhost kernel: [ 4856.388256] docker0: port 1(veth6fdd50a) entered disabled state
Aug 24 16:31:02 localhost kernel: [ 4856.391000] device veth6fdd50a left promiscuous mode
Aug 24 16:31:02 localhost kernel: [ 4856.391838] docker0: port 1(veth6fdd50a) entered disabled state
Aug 24 16:31:02 localhost systemd[1]: run-docker-netns-58b953785d64.mount: Succeeded.
Aug 24 16:31:02 localhost systemd[1]: var-lib-docker-overlay2-bb66b8a043ac757dceb8d8aaaf98ecd113149bdf4de1437a339a6b9e21f35125-merged.mount: Succeeded.

This script you're running: are you running it on the host or in the container?

Oops, I should have made that clearer.

I am running the script on the host machine. The host machine is trying to connect to the container, which has the IP 172.17.0.2.

All of the output I gave above is from the host machine as well.
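
In case it is useful, a quick way to confirm the container's bridge IP rather than assuming 172.17.0.2 (api is the container name from docker ps above):

# Print the container's IP on whatever network(s) it is attached to.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' api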

If you connect directly to the container IP, 172.17.0.2 in this case, the port mappings don't matter, so this should work 100%. Are you sure it's started correctly and listening on port 8091?
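
One way to check that from the host (a sketch; ss or netstat may not exist in a minimal image, hence the /proc fallback):

# Look for a listener on port 8091 inside the container.
# In /proc/net/tcp the port appears in hex: 8091 = 0x1F9B.
docker exec api sh -c 'ss -lnt 2>/dev/null || netstat -lnt 2>/dev/null || cat /proc/net/tcp'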

I am waiting to confirm with the person who made the Docker image and the GitHub repo that holds everything to manage it. This is an app to do with a blockchain, and while I am unsure how to verify it, my setup might be in a certain mode that prevents the connection. Some of the terminology is crossed, and two words can mean the same thing.

So, during a certain part of the initial run of this Docker image/container, the function I thought was failing due to incorrect configuration is actually working fine, as expected. The function will work once this initial init process completes. Nothing was broken; it only appeared broken.
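
For anyone else who hits this: until that init process finishes, presumably nothing is listening on the RPC port yet, which is exactly what "Connection refused" means. A small wait loop saves re-running things by hand (sketch only; the IP and port are the ones from this thread):

# Poll until the node starts listening; curl exits non-zero
# (code 7) while the connection is still refused.
until curl -s -o /dev/null http://172.17.0.2:8091/; do
  echo "RPC not up yet; retrying in 30s..."
  sleep 30
done
echo "RPC port 8091 is answering."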