Cannot access container on new machine, port marked as "filtered" externally but not internally

I am attempting to set up a Docker container registry. I have followed the documentation found here quite closely and appear to have a functional registry; however, I can't access the blasted thing. Note:

[root@redacted ~]# nmap 127.0.0.1

Starting Nmap 6.40 ( http://nmap.org ) at 2020-12-15 16:27 CST
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
Nmap scan report for localhost (127.0.0.1)
Host is up (-660s latency).
Not shown: 995 closed ports
PORT    STATE SERVICE
22/tcp  open  ssh
25/tcp  open  smtp
80/tcp  open  http
199/tcp open  smux
443/tcp open  https

Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
[root@redacted ~]# nmap 172.20.52.15

Starting Nmap 6.40 ( http://nmap.org ) at 2020-12-15 16:27 CST
Nmap scan report for redacted (172.20.52.15)
Host is up (0.0000080s latency).
Not shown: 997 closed ports
PORT    STATE    SERVICE
22/tcp  open     ssh
80/tcp  filtered http
443/tcp filtered https

Nmap done: 1 IP address (1 host up) scanned in 6.73 seconds
[root@redacted ~]# 
[root@redacted ~]# docker --version
Docker version 19.03.13, build 4484c46d9d

The registry is on port 443. I tried setting up an nginx server on port 80 and got the same result. I have set up a registry on a different server running Docker 17 without this problem. Did something change in the iptables handling between versions? This is the command I used to spin up the container:

docker run -d \
  -p 443:443 \
  --restart=always \
  --name registry \
  -v /etc/docker/auth:/auth \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v /etc/docker/certs:/certs   \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443   \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/redacted.crt   \
  -e REGISTRY_HTTP_TLS_KEY=/certs/redacted.key   \
  registry:2
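
To confirm the registry itself is serving, I can hit the v2 API endpoint over loopback (a sketch; the username and password are placeholders for whatever is in the htpasswd file):

# -k skips certificate-name verification for the self-signed cert;
# an empty JSON body ({}) with HTTP 200 means the registry is answering.
curl -k -u someuser:somepass https://127.0.0.1:443/v2/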

My iptables rules:
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-N LOGGING
-A INPUT -j LOG
-A INPUT -p udp -m udp --dport 162 -j ACCEPT
-A INPUT -p udp -m udp --dport 161 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -d 224.0.0.1/32 -j DROP
-A INPUT -d 255.255.255.255/32 -j DROP
-A INPUT -d 192.236.39.255/32 -j DROP
-A INPUT -j LOG
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A INPUT -j LOG
-A INPUT -j LOGGING
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A DOCKER -j LOG
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables-Dropped: "
-A LOGGING -j DROP
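
Note that the rules above are only the filter table; the published port also depends on DNAT rules in the nat table, which can be checked separately (the container address below is illustrative):

# Port publishing is implemented as DNAT in the nat table's DOCKER chain:
iptables -t nat -S DOCKER
# Expected to include something roughly like:
# -A DOCKER ! -i docker0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.17.0.2:443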

Any guidance would be grand; I have been searching the internet and reading the Docker and iptables docs for several days with no progress.

Docker will poke holes in iptables, ufw (on Ubuntu) or firewalld (on CentOS) whenever it requires them…

That’s what I would expect, and is indeed what I have experienced on the server running Docker 17. Any ideas as to why that is not what I am experiencing now?

Well, I managed to get somewhere by using --network host, but I’m sure that’s not the recommended default way to do this.

OK, so apparently my containers can't get out either. I ran "ping 8.8.8.8" from inside a container and got no replies.
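
A few generic checks that narrow down broken outbound container traffic (a sketch; none of this is specific to the registry container):

# Bridged container traffic requires IP forwarding (Docker normally enables it):
sysctl net.ipv4.ip_forward
# Outbound container traffic is masqueraded; a rule should exist for the bridge subnet:
iptables -t nat -S POSTROUTING | grep -i masquerade
# Watch the FORWARD chain counters while pinging to see which rule eats the packets:
iptables -L FORWARD -v -n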

With --network host, the container uses the host's namespace for the network interfaces; network-wise, the processes inside the container run as if they were executed directly on the host.
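
A minimal way to see the difference, assuming a throwaway Alpine container:

# With --network host the container sees the host's interfaces directly:
docker run --rm --network host alpine ip addr
# On the default bridge it instead gets its own interface on the docker0 subnet (172.17.0.0/16):
docker run --rm alpine ip addr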

You didn’t mention your OS, your firewall of choice, or whether you tinkered with /etc/docker/daemon.json or the systemd service definition to turn off automatic iptables handling. And then there is always the chance that the installation is just messed up…
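
Concretely, both places can be checked like this (a sketch):

# "iptables": false in daemon.json would stop Docker from managing rules
# (the file may simply not exist):
cat /etc/docker/daemon.json
# An ExecStart override with --iptables=false would do the same:
systemctl cat docker.service | grep -i iptables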

That makes sense and sounds like what I read in the Docker documentation. And of course, when I inspect the container using --network host, it uses the host's IP. For the host I originally posted about, that is fine since it runs a single container, but I have this same problem on a different host as well. Unfortunately that "hack" won't work there, because it hosts 5 containers that need to communicate with each other. Perhaps I could configure multiple NICs to make it work, but port forwarding, as I understand it, should be the preferred route, and it is the route used on my other 6 Docker hosts.
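
For the five-container host, the usual alternative to --network host would be a user-defined bridge network plus published ports (a sketch; network and image names are placeholders):

# Containers on the same user-defined bridge reach each other by name;
# only externally needed ports are published to the host.
docker network create appnet
docker run -d --name registry --network appnet -p 443:443 registry:2
docker run -d --name worker --network appnet some-other-image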

The OS is CentOS 7. I have not tinkered with daemon.json beyond enabling an insecure registry for testing, nor have I altered the systemd service definition (and when I spin up new containers with -p, new iptables rules get inserted, so that looks good). As for the installation, it was done via yum; I'm not sure of a way to validate it.
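
For the record, that insertion can be verified by diffing the DOCKER chain around a test container (the image name is just an example):

iptables -S DOCKER                       # snapshot before
docker run -d --name porttest -p 8080:80 nginx
iptables -S DOCKER                       # a new ACCEPT rule for port 80 should appear
docker rm -f porttest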

Running containers with --network host kind of ruins the magical experience, as every container port is bound to the host's interfaces (if the process binds to 0.0.0.0), so a port collision is very likely to happen. I generally don't recommend using it like that.

Though, what I have used in the past is the long publish syntax on a global swarm service, to bind a single host port to the container. This is useful if you need to bypass the ingress mesh, either to (a) speed things up or (b) get the real client IP in your application.
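
Something along these lines (a sketch of the long publish syntax; the service name is illustrative):

# mode=host binds the port directly on every node running a task of this
# global service, bypassing the ingress routing mesh:
docker service create \
  --name registry \
  --mode global \
  --publish mode=host,target=443,published=443 \
  registry:2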

I am helping out on a project right now where Docker CE 19.03.13 is used on CentOS 7 with SELinux enforcing: port publishing works like a charm.

Just for the sake of testing, I would set SELinux to permissive, then remove the docker-ce packages and reinstall them. I assume you installed the packages from Docker's repositories and not the legacy docker package provided by the CentOS repos.
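
Roughly like this (a sketch; the package set is the standard one from Docker's CE repository for CentOS 7):

# Switch SELinux to permissive for the current boot only:
setenforce 0
# Remove and reinstall the Docker CE packages:
yum remove -y docker-ce docker-ce-cli containerd.io
yum install -y docker-ce docker-ce-cli containerd.io
systemctl start docker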

I agree, and I really wish I could avoid using --network host, but it is the only thing that seems to work. SELinux has been permissive the whole time, and I installed Docker from the official Docker repos. At this point, I give up; I'll ruin the magic for a system that works. Thank you for your assistance nonetheless.

I am running into the same problem. Hoping for a resolution.