Docker Community Forums

Cannot access container on new machine, port marked as "filtered" externally but not internally

I am attempting to set up a Docker container registry. I have followed the documentation found here quite closely and appear to have a functional registry, however I cannot access the blasted thing. Note:

[root@redacted ~]# nmap 127.0.0.1

Starting Nmap 6.40 ( http://nmap.org ) at 2020-12-15 16:27 CST
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
Nmap scan report for localhost (127.0.0.1)
Host is up (-660s latency).
Not shown: 995 closed ports
PORT    STATE SERVICE
22/tcp  open  ssh
25/tcp  open  smtp
80/tcp  open  http
199/tcp open  smux
443/tcp open  https

Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
[root@redacted ~]# nmap 172.20.52.15

Starting Nmap 6.40 ( http://nmap.org ) at 2020-12-15 16:27 CST
Nmap scan report for redacted (172.20.52.15)
Host is up (0.0000080s latency).
Not shown: 997 closed ports
PORT    STATE    SERVICE
22/tcp  open     ssh
80/tcp  filtered http
443/tcp filtered https

Nmap done: 1 IP address (1 host up) scanned in 6.73 seconds
[root@redacted ~]# 
[root@redacted ~]# docker --version
Docker version 19.03.13, build 4484c46d9d

The registry is on port 443. I tried setting up an nginx server on port 80 and got the same result. I have set up a registry on a different server running Docker 17. Was there something that changed in the rules? This is the command I used to spin up the container:

docker run -d \
  -p 443:443 \
  --restart=always \
  --name registry \
  -v /etc/docker/auth:/auth \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v /etc/docker/certs:/certs   \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443   \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/redacted.crt   \
  -e REGISTRY_HTTP_TLS_KEY=/certs/redacted.key   \
  registry:2

My iptables rules:
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-N LOGGING
-A INPUT -j LOG
-A INPUT -p udp -m udp --dport 162 -j ACCEPT
-A INPUT -p udp -m udp --dport 161 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -d 224.0.0.1/32 -j DROP
-A INPUT -d 255.255.255.255/32 -j DROP
-A INPUT -d 192.236.39.255/32 -j DROP
-A INPUT -j LOG
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A INPUT -j LOG
-A INPUT -j LOGGING
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A DOCKER -j LOG
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables-Dropped: "
-A LOGGING -j DROP

Any guidance would be grand. I have been searching the internet and reading docs for Docker and iptables for several days with no progress.

I’m not sure I understand the new port[name].constraint construct. Do you propose to move the static attribute there too? I think it should be done for consistency, because static = xxx is, in fact, a constraint, just like availability = "public":

port "http" {
  constraint {
    static = 1234
    interface {
      availability = "public"
    }
  }
}
Or maybe it would be better not to introduce a new inner block that would always be required?

port "http" {
  static = 1234
  interface {
    availability = "public"
  }
}
Another question is about how to specify metadata for network interfaces. Do I get it right that it would go into the client block of the Nomad configuration, like this?

client {
  network {
    interface "eth0" {
      availability = "public"
    }
    interface "eth1" {
      availability = "private"
    }
    interface "docker0" {
      is_docker_bridge = true
    }
  }
}
If that’s correct, then the client.network_interface option may be deprecated, because you can do the same with the new port constraints: just mark the desired interface with some metadata, e.g. use_for_nomad_tasks = true, and add this constraint to all ports. However, that would add complexity to job definitions, so maybe a special meta key could be introduced instead:

client {
  network {
    interface "eth1" {
      # Special meta key that prohibits fingerprinting this interface
      disable = true
    }
    interface "eth0" {
      availability = "public"
    }
  }
}
And the last one: I think it would be useful to support negative meta constraints, e.g.:

Client config

client {
  network {
    interface "eth0" {
      availability = "public"
    }
    interface "eth1" {
      availability = "private"
    }
    interface "lo" {
      availability = "local"
    }
  }
}

Task config

port "http" {
  static = 1234
  interface {
    # would listen on "eth1" and "lo"
    not {
      availability = "public"
    }
  }
}
Or, maybe, regex constraints?

port "http" {
  static = 1234
  interface {
    availability = "/^(?!public)./"
    # or:
    # availability = "/^(private|local)$/"
  }
}

Ok, you’ve flown over my head, but it sounds like you have correctly identified my issue. It looks like you are referencing Nomad, which is not something I have used before. I’m not yet doing anything fancy, just plain Docker commands at this point. Is there a way to get my interface "public" as you indicated without having to first install (what appears to be) an orchestration platform?

Edit: Also, since I’m still a newbie at this, can you list the names of the files as you talk about them? It gives me something to search for and read up on. Thanks.

Don’t expect a follow-up from lewish95. It’s someone’s stupid "let me google this for you" bot.

Docker will poke holes in iptables, ufw (on Ubuntu) or firewalld (on CentOS) whenever it requires them…
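As a quick sanity check (run as root; this assumes the default iptables-based setup, not firewalld zones), you can look at the chains Docker manages itself. A published `-p 443:443` port should show up as a DNAT rule in the nat table and an ACCEPT rule in the DOCKER chain:

```shell
# Filter table: per-container ACCEPT rules that Docker inserts for -p mappings
iptables -nvL DOCKER --line-numbers

# NAT table: the DNAT rules that actually forward host ports into containers
iptables -t nat -nvL DOCKER --line-numbers
```

If the rules are there but traffic still shows as "filtered", the packet is being dropped before or after these chains, which points at the surrounding INPUT/FORWARD rules rather than at Docker.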

That’s what I would expect, and is indeed what I have experienced on the server running docker 17. Any ideas as to why that is not what I am experiencing now?

Well, I managed to get somewhere by using --network host, but I’m sure that’s not the recommended default way to do this.

Ok, so apparently my containers can’t get out either. I used "ping 8.8.8.8" from within a container and it can’t get out.
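For anyone reproducing this, a minimal egress test from a throwaway container, plus a check of the MASQUERADE rule that Docker normally adds for outbound NAT, would look something like this (busybox is just a convenient small image, not something from the thread):

```shell
# Try to reach the internet from inside a fresh container
docker run --rm busybox ping -c 2 8.8.8.8

# Outbound container traffic depends on this MASQUERADE rule existing
iptables -t nat -nvL POSTROUTING
```

If the MASQUERADE rule for the docker0 subnet (172.17.0.0/16 by default) is missing, container egress will fail even though the bridge itself is up.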

With --network host, the container uses the host’s namespace for the network interfaces; network-wise, the processes inside the container run as if they were executed directly on the host.

You didn’t mention your OS, your firewall of choice, or whether you tinkered around with /etc/docker/daemon.json or the systemd service definition to turn off automatic iptables handling. And then there is always the chance that the installation is just messed up…
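To rule out the daemon.json angle: automatic iptables handling is on by default, so the key is normally absent. A quick check (path per the official docs; on a default install the file may not exist at all):

```shell
# Inspect the daemon configuration, if present
cat /etc/docker/daemon.json

# If it contains the line below, Docker will NOT manage iptables rules itself,
# and published ports will appear filtered exactly as described:
#   "iptables": false
```

Removing that key (or setting it to true) and restarting the daemon restores the default behaviour.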

That makes sense and sounds like what I read in the Docker documents. And of course when I inspect the container using --network host it uses the host’s IP. For the host that I originally posted this for, that is fine as it is running one Docker container, but I have this same problem on a different host as well. Unfortunately that "hack" won’t work on that host because it is hosting 5 containers that need to communicate with each other. Perhaps I could configure multiple NICs to get it to work, but port forwarding, as I understand it, should be the preferred route, and it is the route used on my other 6 Docker hosts.

The OS is CentOS 7. I have not tinkered with daemon.json outside of enabling an insecure registry for testing, nor have I altered the systemd service definition (and when I spin up new containers with -p, new iptables rules get inserted, so that looks good). As to the installation, it was done via yum; I’m not sure of a way to validate it.

Running containers with --network host kind of ruins the magical experience, as every container port is bound to the host’s interfaces (if the process binds to 0.0.0.0), so a port collision is very likely to happen. I generally don’t recommend using it like that.

Though, what I have used in the past is the long publish syntax on a global swarm service to bind a single host port to the container. This is useful if you need to bypass the ingress mesh, a) to speed things up and b) if you require the real client’s IP in your application.
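For reference, a sketch of that long publish syntax applied to the registry from this thread (service name and ports are illustrative; this requires an initialized swarm):

```shell
# mode=host binds the port directly on each node, bypassing the ingress mesh,
# so the application sees the real client IP
docker service create \
  --name registry \
  --mode global \
  --publish published=443,target=443,mode=host \
  registry:2
```

With mode=host the port is only reachable on nodes actually running a task, which is why it pairs naturally with a global service.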

I am helping out in a project right now where Docker CE 19.03.13 is used on CentOS 7 with SELinux enforcing: port publishing works like a charm.

Just for the sake of testing, I would set SELinux to permissive and remove the docker-ce packages and reinstall them. I assume you do install the packages from Docker’s repositories and not the legacy docker package that is provided by the CentOS repos.
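On CentOS 7 that test would look roughly like this (run as root; package and repo names are the standard Docker CE ones, and setenforce only lasts until the next reboot):

```shell
# SELinux to permissive for this boot only
setenforce 0

# Remove and reinstall Docker CE from Docker's own repository
yum remove -y docker-ce docker-ce-cli containerd.io
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl restart docker
```

If publishing works after this, re-enable SELinux with `setenforce 1` and narrow down which change fixed it.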