After upgrading to Ubuntu 22.04 and updating Docker to the newest version, I am unable to reach some containers from the outside world. Two thirds are working as before the upgrade, but one third cannot be reached anymore. I use Nginx Proxy Manager.
I already checked the netplan config.
I already deleted specific containers and recreated them, along with the attached networks.
I already shut down UFW, knowing that Docker bypasses UFW due to its own firewall rules.
I already studied this forum, Reddit and so on. None of the mentioned workarounds worked for me (like update-alternatives --config iptables).
I guess it has something to do with the internal docker firewall rules. How can I check those?
iptables-save can export the rules and you can search for the Docker rules in it, but I don't know how anything could happen to those rules. I would also check the Docker networks and the subnets they use.
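If it helps, a minimal way to look at those rules could be something like this (DOCKER and DOCKER-USER are the chains Docker normally manages, the rest is standard iptables usage):

# dump everything and keep only the Docker-related rules
sudo iptables-save | grep -i docker

# or look at the chains Docker manages directly
sudo iptables -L DOCKER -n -v
sudo iptables -L DOCKER-USER -n -v
sudo iptables -t nat -L DOCKER -n -v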
UFW can still block traffic between containers, and since you are using Nginx Proxy Manager, I guess the issue is not traffic blocked from the outside but between containers. I think I didn't have this issue with UFW when I upgraded to Ubuntu 22.04, but after shutting down ufw, you could also restart Docker if you haven't done that yet.
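Just for reference, shutting down ufw and restarting Docker would be roughly:

sudo ufw disable
sudo systemctl restart docker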
And make sure you install Docker from the official repository provided by Docker, not from Ubuntu's repository and not from Snap. Those unofficial sources can lead to problems too.
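If you are unsure where a package came from, something like this should show the repository it was installed from:

apt-cache policy docker-ce
grep -r download.docker.com /etc/apt/sources.list /etc/apt/sources.list.d/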
I figured out that, for example, the interface br-9c62dd44c333 for one of the containers that is not accessible is created, but not the necessary iptables rules.
Deleting the container and the attached interface and recreating both doesn't rewrite the Docker rules for the recreated container in iptables.
So I guess there is a problem in Docker's iptables rule creation process. I use the version from Docker's own repository, no Snap and no Ubuntu repo version.
Is it possible to recreate the necessary rules by hand? And if so, which ones do I have to recreate? There seem to be quite a lot of them when I look at the rule set of the working containers, and chances are high that I would mess it up.
I wouldn't even attempt to create Docker's iptables rules manually. If the rules for the container are not created, it must have a reason. I have been using Ubuntu 22.04 for a long time now and never had this issue, which is good and also bad, because I don't know what could have happened to your environment. I assume you checked the system logs, right?
journalctl -e
Sometimes dmesg can help too. Watch the logs while you are creating a new container.
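For example, you could follow the logs in one terminal while you create the container in another (docker.service is the default unit name for the daemon):

# kernel messages as they arrive
sudo dmesg -w

# the whole journal, or only the Docker daemon's unit
sudo journalctl -f
sudo journalctl -fu docker.service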
Just to be sure, I have to ask, how did you install Docker CE? Is it from the official APT repository provided by Docker Inc?
As suspected, the firewall rules aren't set when a container is created and the firewall blocks access:
Aug 14 14:29:58 virdoc systemd[1]: var-lib-docker-verlay2-590d8e8f881452a19f46b6c7ccf89150a4e6e970f23a6199ff8c1b3807c3892e\x2dinit-merged.mount: Deactivated successfully.
Aug 14 14:29:58 virdoc kernel: [342861.400359] br-9c62dd44c333: port 1(veth7214014) entered blocking state
Aug 14 14:29:58 virdoc kernel: [342861.400365] br-9c62dd44c333: port 1(veth7214014) entered disabled state
Aug 14 14:29:58 virdoc systemd-udevd[4166518]: Using default interface naming scheme 'v249'.
Aug 14 14:29:58 virdoc kernel: [342861.400868] device veth7214014 entered promiscuous mode
Aug 14 14:29:58 virdoc kernel: [342861.400974] br-9c62dd44c333: port 1(veth7214014) entered blocking state
Aug 14 14:29:58 virdoc kernel: [342861.400977] br-9c62dd44c333: port 1(veth7214014) entered forwarding state
Aug 14 14:29:58 virdoc kernel: [342861.402299] br-9c62dd44c333: port 1(veth7214014) entered disabled state
Aug 14 14:29:58 virdoc systemd-networkd[49757]: veth7214014: Link UP
Aug 14 14:29:58 virdoc networkd-dispatcher[780]: WARNING:Unknown index 432 seen, reloading interface list
Aug 14 14:29:58 virdoc systemd-udevd[4166519]: Using default interface naming scheme 'v249'.
Aug 14 14:29:58 virdoc containerd[803]: time="2023-08-14T14:29:58.835319827+02:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 14 14:29:58 virdoc containerd[803]: time="2023-08-14T14:29:58.835408180+02:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 14 14:29:58 virdoc containerd[803]: time="2023-08-14T14:29:58.835419793+02:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 14 14:29:58 virdoc containerd[803]: time="2023-08-14T14:29:58.836358675+02:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/3c3d7a785389baa324f5f0e1549db74645c8ab538309676f2db59f7d0c157fc3 pid=4166540 runtime=io.containerd.runc.v2
Aug 14 14:29:58 virdoc systemd[1]: Started libcontainer container 3c3d7a785389baa324f5f0e1549db74645c8ab538309676f2db59f7d0c157fc3.
Aug 14 14:29:59 virdoc kernel: [342861.669122] eth0: renamed from vetheecdb08
Aug 14 14:29:59 virdoc systemd-networkd[49757]: veth7214014: Gained carrier
Aug 14 14:29:59 virdoc systemd-networkd[49757]: br-9c62dd44c333: Gained carrier
Aug 14 14:29:59 virdoc kernel: [342861.697290] IPv6: ADDRCONF(NETDEV_CHANGE): veth7214014: link becomes ready
Aug 14 14:29:59 virdoc kernel: [342861.697458] br-9c62dd44c333: port 1(veth7214014) entered blocking state
Aug 14 14:29:59 virdoc kernel: [342861.697464] br-9c62dd44c333: port 1(veth7214014) entered forwarding state
Aug 14 14:29:59 virdoc networkd-dispatcher[780]: WARNING:Unknown index 434 seen, reloading interface list
Aug 14 14:29:59 virdoc systemd-udevd[4166530]: Using default interface naming scheme 'v249'.
Aug 14 14:29:59 virdoc systemd-networkd[49757]: veth87613b7: Link UP
Aug 14 14:29:59 virdoc kernel: [342861.857640] br-9c62dd44c333: port 2(veth87613b7) entered blocking state
Aug 14 14:29:59 virdoc kernel: [342861.857648] br-9c62dd44c333: port 2(veth87613b7) entered disabled state
Aug 14 14:29:59 virdoc kernel: [342861.857847] device veth87613b7 entered promiscuous mode
Aug 14 14:29:59 virdoc kernel: [342861.858630] br-9c62dd44c333: port 2(veth87613b7) entered blocking state
Aug 14 14:29:59 virdoc kernel: [342861.858635] br-9c62dd44c333: port 2(veth87613b7) entered forwarding state
Aug 14 14:29:59 virdoc containerd[803]: time="2023-08-14T14:29:59.304341384+02:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 14 14:29:59 virdoc containerd[803]: time="2023-08-14T14:29:59.304417553+02:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 14 14:29:59 virdoc containerd[803]: time="2023-08-14T14:29:59.304463603+02:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 14 14:29:59 virdoc containerd[803]: time="2023-08-14T14:29:59.304695416+02:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/670d0244c5afd6461a8d3c05653750d740597806e2ffb914617dd87f7ea5c3cc pid=4166664 runtime=io.containerd.runc.v2
Aug 14 14:29:59 virdoc systemd[1]: Started libcontainer container 670d0244c5afd6461a8d3c05653750d740597806e2ffb914617dd87f7ea5c3cc.
Aug 14 14:29:59 virdoc kernel: [342862.117169] eth0: renamed from veth9199a7f
Aug 14 14:29:59 virdoc systemd-networkd[49757]: veth87613b7: Gained carrier
Aug 14 14:29:59 virdoc kernel: [342862.145111] IPv6: ADDRCONF(NETDEV_CHANGE): veth87613b7: link becomes ready
Aug 14 14:30:00 virdoc systemd-networkd[49757]: veth7214014: Gained IPv6LL
Aug 14 14:30:01 virdoc systemd-networkd[49757]: veth87613b7: Gained IPv6LL
Aug 14 14:30:02 virdoc kernel: [342864.801857] [UFW BLOCK] IN=br-bf3b9d3af07a OUT= MAC=02:42:1d:68:4d:94:02:42:ac:12:00:02:08:00 SRC=172.18.0.2 DST=178.254.33.192 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=9892 DF PROTO=TCP SPT=50048 DPT=27896 WINDOW=64240 RES=0x00 SYN URGP=0
Especially this part looks strange:
Aug 14 14:29:58 virdoc systemd-networkd[49757]: veth7214014: Link UP
Aug 14 14:29:58 virdoc networkd-dispatcher[780]: WARNING:Unknown index 432 seen, reloading interface list
Aug 14 14:29:58 virdoc systemd-udevd[4166519]: Using default interface naming scheme 'v249'.
I tried to find anything that mentions "UFW BLOCK" entries not caused by ufw, but I found the opposite on multiple sites.
Are you sure the log messages were not saved before you uninstalled UFW? The answer is probably yes, but it is still strange. I can imagine disabling ufw incorrectly, but hard to imagine uninstalling it incorrectly. Still, can you share how you disabled it and how you uninstalled it?
Also please, share the output of the following commands:
docker info
docker version
snap list docker
dpkg -l | grep docker
I know you told me you installed Docker from the official repository. I just want to be sure that nothing installed a different version from somewhere else.
One thing I can still imagine, although I don't think it is very likely, is that there is another ufw somewhere.
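A few checks that might show a leftover or second ufw, or old ufw rules still loaded in the kernel (as far as I know the [UFW BLOCK] prefix comes from LOG rules, so they could survive an uninstall if they were never flushed):

type -a ufw
dpkg -l | grep -i ufw
snap list 2>/dev/null | grep -i ufw

# look for ufw chains or log rules still present in iptables
sudo iptables-save | grep -i ufw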
snap list docker: error: no matching snaps installed
dpkg -l | grep docker:
ii docker-buildx-plugin 0.11.2-1~ubuntu.22.04~jammy amd64 Docker Buildx cli plugin.
ii docker-ce 5:24.0.5-1~ubuntu.22.04~jammy amd64 Docker: the open-source application container engine
ii docker-ce-cli 5:24.0.5-1~ubuntu.22.04~jammy amd64 Docker CLI: the open-source application container engine
ii docker-ce-rootless-extras 5:24.0.5-1~ubuntu.22.04~jammy amd64 Rootless support for Docker.
ii docker-compose-plugin 2.20.2-1~ubuntu.22.04~jammy amd64 Docker Compose (V2) plugin for the Docker CLI.
I also checked that there is only one instance of UFW running.
Everything looks good. Because I don't have any more ideas about what has happened on the host and why you still see ufw in the logs when ufw is uninstalled, maybe we can identify the differences between the working and non-working containers' networks.
Are the networks all created by Docker Compose? Do you have any network with custom parameters? Maybe some networks created as external networks earlier? You could also check the IP addresses. We saw one in the logs. What about the rest of the IPs? Are they different or are they all like 172.x.x.x?
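To compare them, something like this could show the relevant settings (working-net and broken-net are just placeholder names):

docker network ls

# compare driver, options and IPAM config (subnet, gateway) of the networks
docker network inspect working-net
docker network inspect broken-net

# check that the matching bridge interfaces exist on the host
ip link show type bridge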
All my containers are created via docker-compose and don't have any special custom parameters.
No, some of the containers also use other IP addresses/ranges, but that doesn't matter because I have containers in the 172.x range and also in the other IP ranges that aren't accessible. I already checked that and also changed the networking of non-working containers to IP ranges that are working, or added working networks to them.
There is no difference in the behaviour. Still not reachable.
Since I use Ubuntu 22.04 too, if you can create a compose project to reproduce the issue, I can test it on my machine. Just let me know if you need me to do that. And if you can run new Ubuntu VMs, not just Debian VMs, you can try that too, because I upgraded an Ubuntu 20.04 host a long time ago which doesn't run Docker anymore, only in virtual machines. I didn't have problems with the upgrade then, and everything works in virtual machines installed as Ubuntu 22.04 from the start, so who knows what happened with your environment. It could be a bug in the Ubuntu upgrade, or it is also possible that the current Docker version can't handle OS upgrades that easily.
Thanks for your offer. But right now I would like to know whether Docker does any logging when creating a new container/network/rules, to check if the firewall rules are set correctly or, as I suppose, are missing entirely.
And if there is any logging, do I have to enable it anywhere (e.g. in the Docker config), and are there different logging levels? Any hint on that would be great. Thanks!
I didn't understand your question at first. I realized that and deleted my previous post. So if you are looking for a "verbose" option for logging, you can try the debug mode of dockerd:
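A minimal sketch, assuming the default paths and unit names of a deb-based install: enable debug in the daemon config, restart Docker and follow the daemon logs while you recreate a container.

Add to /etc/docker/daemon.json (create the file if it doesn't exist):

{
  "debug": true
}

Then:

sudo systemctl restart docker
sudo journalctl -fu docker.service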
Aug 21 09:14:33 xx(dot)net networkd-dispatcher[758]: ERROR:Failed to get interface "vethf0039fd" status: Command '['/usr/bin/networkctl', 'status', '--no-pager', '--no-legend', '--', 'vethf0039fd']' returned non-zero exit status 1.
Aug 21 09:14:33 xx(dot)net systemd[1]: networkd-dispatcher.service: Got notification message from PID 3152671, but reception only permitted for main PID 758
Aug 21 09:14:33 xx(dot)net networkctl[3152671]: Interface "vethf0039fd" not found.
Aug 21 09:14:33 xx(dot)net networkd-dispatcher[758]: ERROR:Unknown interface index 151 seen even after reload
Aug 21 09:14:33 xx(dot)net networkd-dispatcher[758]: WARNING:Unknown index 151 seen, reloading interface list
Aug 21 09:14:33 xx(dot)net networkd-dispatcher[758]: ERROR:Failed to get interface "veth021a0c3" status: Command '['/usr/bin/networkctl', 'status', '--no-pager', '--no-legend', '--', 'veth021a0c3']' returned non-zero exit status 1.
Aug 21 09:14:33 xx(dot)net systemd[1]: Cannot find unit for notify message of PID 3152662, ignoring.
It looks like the interfaces are not mapped/found correctly. So maybe the rules aren't attached as a consequence?
It is still just a symptom, not the cause, so you still need to find out why the interface is not created, which must have thrown an error before that.
There is one more thing that came to my mind. I remember that some people had Docker network problems. We couldn't find out why, but stopping the Docker daemon, deleting the network database file and starting Docker again helped. Just to be safe, I wouldn't completely delete that file even though Docker should recreate it. The reason why that could help is that Docker stores network metadata in that file, and if that database becomes corrupt, it can lead to strange behaviors. I still don't understand why you saw UFW BLOCK in the logs, but this is one last thing I can suggest, so (a rough command sketch follows the steps):
If you can, stop and delete all containers and compose projects.
Stop Docker
Copy /var/lib/docker/network/files/local-kv.db to somewhere so you can restore it if you need to.
Remove /var/lib/docker/network/files/local-kv.db
Start Docker
Recreate containers / compose projects if you deleted them.
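Roughly, the commands for the steps above could look like this (the backup path is only an example):

# stop and remove the containers / compose projects first if you can
sudo systemctl stop docker.socket docker.service
sudo cp /var/lib/docker/network/files/local-kv.db /root/local-kv.db.bak
sudo rm /var/lib/docker/network/files/local-kv.db
sudo systemctl start docker
# then recreate the compose projects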
I had a similar problem after upgrading from 20.04 to 22.04, but it's because I was still running the focal version of Docker. I changed /etc/apt/sources.list.d/docker.list to this:
deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu jammy stable
(the only change was replacing focal with jammy)
Then I ran sudo apt upgrade docker-ce.
If you installed the Canonical-provided docker package, you might need something different.
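In case it helps, the edit itself can be done in one line, followed by refreshing the package lists before the upgrade (assuming the standard file path shown above):

sudo sed -i 's/focal/jammy/' /etc/apt/sources.list.d/docker.list
sudo apt update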