Some containers unreachable after upgrade to Ubuntu 22.04

Hey there,

After upgrading to Ubuntu 22.04 and updating Docker to the newest version, I am unable to reach some containers from the outside world. 2/3 are working as before the upgrade, but 1/3 cannot be reached anymore. I use Nginx Proxy Manager.

I already checked the netplan config.
I already deleted specific containers and recreated them, along with the attached networks.
I already shut down UFW, knowing that Docker bypasses UFW due to its own firewall rules.
I already studied the forum, Reddit and so on. None of the mentioned workarounds worked for me (like update-alternatives --config iptables).
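
For anyone following along, before trying the update-alternatives workaround it helps to know which iptables backend (legacy vs. nft) the system currently uses; a minimal sketch, assuming a standard Ubuntu alternatives setup:

```shell
# List the configured alternatives for iptables; the current link is the
# backend that dockerd's rules actually end up in.
update-alternatives --display iptables

# The resolved binary also reports its backend in its version string,
# e.g. "iptables v1.8.7 (nf_tables)" or "(legacy)".
iptables --version
```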

I guess it has something to do with Docker's internal firewall rules. How can I check those?

Or is it just a small thing I have to do?

Thanks in advance for your kind help.

mike

iptables-save can export the rules and you can search for the Docker rules in it, but I don't know how anything could happen to those rules. I would also check the Docker networks and the subnets they use.
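
A minimal sketch of that check (run as root; DOCKER, DOCKER-USER and DOCKER-ISOLATION-* are the chains dockerd normally manages):

```shell
# Export the full ruleset and keep only Docker-related lines.
iptables-save | grep -i docker

# Per-chain view with packet/byte counters, useful to see whether rules
# are matching traffic at all.
iptables -L DOCKER -n -v
iptables -t nat -L DOCKER -n -v
```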

UFW can still block traffic between containers, and since you are using Nginx Proxy Manager, I guess the issue is not traffic blocked from the outside but between containers. I don't think I had this issue with UFW when I upgraded to Ubuntu 22.04, but after shutting down UFW, you could also restart Docker if you haven't done that yet.

And make sure you install Docker from the official repository provided by Docker, not from the Ubuntu repository and not from Snap. Those unofficial channels can lead to problems too.

Thanks for your reply.

I figured out that, for example, the interface br-9c62dd44c333 for one of the containers that is not accessible is created, but the necessary iptables rules are not.

Deleting the container and the attached interface and recreating both doesn't rewrite the Docker rules for the recreated container in iptables.

So I guess there is a problem within Docker's iptables rule creation process. I use the repo version of Docker itself, not the Snap or Ubuntu repo version.

Is it possible to recreate the necessary rules by hand? And if so, which ones do I have to recreate? There seem to be quite a lot of them when I look at the rule set of the working containers, and chances are high that I would mess it up.
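
Rather than writing the rules by hand, note that dockerd rebuilds its iptables chains when the daemon starts, so a restart is usually enough to regenerate them; a hedged sketch (the bridge name is taken from the post above):

```shell
# dockerd recreates its DOCKER / DOCKER-USER / DOCKER-ISOLATION chains
# on startup, so a daemon restart regenerates them without manual edits.
sudo systemctl restart docker

# Afterwards, check whether rules referencing the bridge are back.
sudo iptables-save | grep br-9c62dd44c333
```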

Thx and greets

I wouldn't even attempt to create Docker's iptables rules manually. If the rules for the container are not created, there must be a reason. I have been using Ubuntu 22.04 for a long time now and never had this issue, which is good and also bad, because I don't know what could have happened to your environment. I assume you checked the system logs, right?

journalctl -e

Sometimes dmesg can help too. Watch the logs while you are creating a new container.
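
A sketch of watching the logs live while triggering container creation (the grep filter and the alpine image are just examples):

```shell
# Terminal 1: follow the journal live, filtering for network/Docker events.
sudo journalctl -f | grep -Ei 'docker|veth|br-'

# Terminal 2: trigger the event to observe, e.g. a throwaway container.
docker run --rm alpine true
```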

Just to be sure, I have to ask, how did you install Docker CE? Is it from the official APT repository provided by Docker Inc?

Thanks again.

As suspected, the firewall rules aren't set when a container is created and the firewall blocks access:

Aug 14 14:29:58 virdoc systemd[1]: var-lib-docker-verlay2-590d8e8f881452a19f46b6c7ccf89150a4e6e970f23a6199ff8c1b3807c3892e\x2dinit-merged.mount: Deactivated successfully.
Aug 14 14:29:58 virdoc kernel: [342861.400359] br-9c62dd44c333: port 1(veth7214014) entered blocking state
Aug 14 14:29:58 virdoc kernel: [342861.400365] br-9c62dd44c333: port 1(veth7214014) entered disabled state
Aug 14 14:29:58 virdoc systemd-udevd[4166518]: Using default interface naming scheme 'v249'.
Aug 14 14:29:58 virdoc kernel: [342861.400868] device veth7214014 entered promiscuous mode
Aug 14 14:29:58 virdoc kernel: [342861.400974] br-9c62dd44c333: port 1(veth7214014) entered blocking state
Aug 14 14:29:58 virdoc kernel: [342861.400977] br-9c62dd44c333: port 1(veth7214014) entered forwarding state
Aug 14 14:29:58 virdoc kernel: [342861.402299] br-9c62dd44c333: port 1(veth7214014) entered disabled state
Aug 14 14:29:58 virdoc systemd-networkd[49757]: veth7214014: Link UP
Aug 14 14:29:58 virdoc networkd-dispatcher[780]: WARNING:Unknown index 432 seen, reloading interface list
Aug 14 14:29:58 virdoc systemd-udevd[4166519]: Using default interface naming scheme 'v249'.
Aug 14 14:29:58 virdoc containerd[803]: time="2023-08-14T14:29:58.835319827+02:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 14 14:29:58 virdoc containerd[803]: time="2023-08-14T14:29:58.835408180+02:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 14 14:29:58 virdoc containerd[803]: time="2023-08-14T14:29:58.835419793+02:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 14 14:29:58 virdoc containerd[803]: time="2023-08-14T14:29:58.836358675+02:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/3c3d7a785389baa324f5f0e1549db74645c8ab538309676f2db59f7d0c157fc3 pid=4166540 runtime=io.containerd.runc.v2
Aug 14 14:29:58 virdoc systemd[1]: Started libcontainer container 3c3d7a785389baa324f5f0e1549db74645c8ab538309676f2db59f7d0c157fc3.
Aug 14 14:29:59 virdoc kernel: [342861.669122] eth0: renamed from vetheecdb08
Aug 14 14:29:59 virdoc systemd-networkd[49757]: veth7214014: Gained carrier
Aug 14 14:29:59 virdoc systemd-networkd[49757]: br-9c62dd44c333: Gained carrier
Aug 14 14:29:59 virdoc kernel: [342861.697290] IPv6: ADDRCONF(NETDEV_CHANGE): veth7214014: link becomes ready
Aug 14 14:29:59 virdoc kernel: [342861.697458] br-9c62dd44c333: port 1(veth7214014) entered blocking state
Aug 14 14:29:59 virdoc kernel: [342861.697464] br-9c62dd44c333: port 1(veth7214014) entered forwarding state
Aug 14 14:29:59 virdoc networkd-dispatcher[780]: WARNING:Unknown index 434 seen, reloading interface list
Aug 14 14:29:59 virdoc systemd-udevd[4166530]: Using default interface naming scheme 'v249'.
Aug 14 14:29:59 virdoc systemd-networkd[49757]: veth87613b7: Link UP
Aug 14 14:29:59 virdoc kernel: [342861.857640] br-9c62dd44c333: port 2(veth87613b7) entered blocking state
Aug 14 14:29:59 virdoc kernel: [342861.857648] br-9c62dd44c333: port 2(veth87613b7) entered disabled state
Aug 14 14:29:59 virdoc kernel: [342861.857847] device veth87613b7 entered promiscuous mode
Aug 14 14:29:59 virdoc kernel: [342861.858630] br-9c62dd44c333: port 2(veth87613b7) entered blocking state
Aug 14 14:29:59 virdoc kernel: [342861.858635] br-9c62dd44c333: port 2(veth87613b7) entered forwarding state
Aug 14 14:29:59 virdoc containerd[803]: time="2023-08-14T14:29:59.304341384+02:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 14 14:29:59 virdoc containerd[803]: time="2023-08-14T14:29:59.304417553+02:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 14 14:29:59 virdoc containerd[803]: time="2023-08-14T14:29:59.304463603+02:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 14 14:29:59 virdoc containerd[803]: time="2023-08-14T14:29:59.304695416+02:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/670d0244c5afd6461a8d3c05653750d740597806e2ffb914617dd87f7ea5c3cc pid=4166664 runtime=io.containerd.runc.v2
Aug 14 14:29:59 virdoc systemd[1]: Started libcontainer container 670d0244c5afd6461a8d3c05653750d740597806e2ffb914617dd87f7ea5c3cc.
Aug 14 14:29:59 virdoc kernel: [342862.117169] eth0: renamed from veth9199a7f
Aug 14 14:29:59 virdoc systemd-networkd[49757]: veth87613b7: Gained carrier
Aug 14 14:29:59 virdoc kernel: [342862.145111] IPv6: ADDRCONF(NETDEV_CHANGE): veth87613b7: link becomes ready
Aug 14 14:30:00 virdoc systemd-networkd[49757]: veth7214014: Gained IPv6LL
Aug 14 14:30:01 virdoc systemd-networkd[49757]: veth87613b7: Gained IPv6LL
Aug 14 14:30:02 virdoc kernel: [342864.801857] [UFW BLOCK] IN=br-bf3b9d3af07a OUT= MAC=02:42:1d:68:4d:94:02:42:ac:12:00:02:08:00 SRC=172.18.0.2 DST=178.254.33.192 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=9892 DF PROTO=TCP SPT=50048 DPT=27896 WINDOW=64240 RES=0x00 SYN URGP=0

Especially this part looks strange:

Aug 14 14:29:58 virdoc systemd-networkd[49757]: veth7214014: Link UP
Aug 14 14:29:58 virdoc networkd-dispatcher[780]: WARNING:Unknown index 432 seen, reloading interface list
Aug 14 14:29:58 virdoc systemd-udevd[4166519]: Using default interface naming scheme 'v249'.

How can I resolve that issue?

Greets and Thanks for your kind help, mike

So UFW is still there.

I'm not entirely sure what it means, but I found this:

Please answer this question too from my previous message:

I use the official APT repo by Docker. Sorry for that.

I disabled UFW again, rebooted and recreated a container, but UFW is still in the logs:

Aug 14 19:24:43 virdoc systemd[1]: run-docker-runtime\x2drunc-moby-ee853fdcf64fb31511df6c05a2fc6e91e9bb208f971cd83f26cce60ae0c16d3b-runc.j4qxO7.mount: Deactivated successfully.
Aug 14 19:24:44 virdoc systemd[1]: var-lib-docker-overlay2-9b12d39dd28c1d45ce3bba2c89df0da96f18e2f76ab8c2c9f395f3306e8cf967\x2dinit-merged.mount: Deactivated successfully.
Aug 14 19:24:44 virdoc kernel: [17235.374354] br-9c62dd44c333: port 1(vethda2de73) entered blocking state
Aug 14 19:24:44 virdoc kernel: [17235.374361] br-9c62dd44c333: port 1(vethda2de73) entered disabled state
Aug 14 19:24:44 virdoc kernel: [17235.374536] device vethda2de73 entered promiscuous mode
Aug 14 19:24:44 virdoc networkd-dispatcher[757]: WARNING:Unknown index 94 seen, reloading interface list
Aug 14 19:24:44 virdoc systemd-udevd[213023]: Using default interface naming scheme 'v249'.
Aug 14 19:24:44 virdoc systemd-udevd[213024]: Using default interface naming scheme 'v249'.
Aug 14 19:24:44 virdoc systemd-networkd[729]: vethda2de73: Link UP
Aug 14 19:24:44 virdoc containerd[779]: time="2023-08-14T19:24:44.927858926+02:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 14 19:24:44 virdoc containerd[779]: time="2023-08-14T19:24:44.928111179+02:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 14 19:24:44 virdoc containerd[779]: time="2023-08-14T19:24:44.928132531+02:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 14 19:24:44 virdoc containerd[779]: time="2023-08-14T19:24:44.928870482+02:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/64d24f9dc45ed01e9b2b2e6a91be61c382f1c13c65cb3a7e2284c46e21839adb pid=213052 runtime=io.containerd.runc.v2
Aug 14 19:24:44 virdoc systemd[1]: Started libcontainer container 64d24f9dc45ed01e9b2b2e6a91be61c382f1c13c65cb3a7e2284c46e21839adb.
Aug 14 19:24:45 virdoc kernel: [17235.609427] eth0: renamed from veth7fb476c
Aug 14 19:24:45 virdoc systemd-networkd[729]: vethda2de73: Gained carrier
Aug 14 19:24:45 virdoc systemd-networkd[729]: br-9c62dd44c333: Gained carrier
Aug 14 19:24:45 virdoc kernel: [17235.623757] IPv6: ADDRCONF(NETDEV_CHANGE): vethda2de73: link becomes ready
Aug 14 19:24:45 virdoc kernel: [17235.623884] br-9c62dd44c333: port 1(vethda2de73) entered blocking state
Aug 14 19:24:45 virdoc kernel: [17235.623888] br-9c62dd44c333: port 1(vethda2de73) entered forwarding state
Aug 14 19:24:45 virdoc kernel: [17235.750809] br-9c62dd44c333: port 2(veth560e358) entered blocking state
Aug 14 19:24:45 virdoc kernel: [17235.750816] br-9c62dd44c333: port 2(veth560e358) entered disabled state
Aug 14 19:24:45 virdoc kernel: [17235.750884] device veth560e358 entered promiscuous mode
Aug 14 19:24:45 virdoc systemd-udevd[213033]: Using default interface naming scheme 'v249'.
Aug 14 19:24:45 virdoc networkd-dispatcher[757]: WARNING:Unknown index 96 seen, reloading interface list
Aug 14 19:24:45 virdoc systemd-networkd[729]: veth560e358: Link UP
Aug 14 19:24:45 virdoc kernel: [17235.751306] br-9c62dd44c333: port 2(veth560e358) entered blocking state
Aug 14 19:24:45 virdoc kernel: [17235.751309] br-9c62dd44c333: port 2(veth560e358) entered forwarding state
Aug 14 19:24:45 virdoc containerd[779]: time="2023-08-14T19:24:45.304782367+02:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 14 19:24:45 virdoc containerd[779]: time="2023-08-14T19:24:45.304919565+02:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 14 19:24:45 virdoc containerd[779]: time="2023-08-14T19:24:45.304937480+02:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 14 19:24:45 virdoc containerd[779]: time="2023-08-14T19:24:45.305338603+02:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/054f9adcff8e332b2e237ace9a9f2b94d595787d7881aa7b985e8f184933baea pid=213159 runtime=io.containerd.runc.v2
Aug 14 19:24:45 virdoc systemd[1]: Started libcontainer container 054f9adcff8e332b2e237ace9a9f2b94d595787d7881aa7b985e8f184933baea.
Aug 14 19:24:45 virdoc kernel: [17235.980136] eth0: renamed from vethd3b1d49
Aug 14 19:24:45 virdoc systemd-networkd[729]: veth560e358: Gained carrier
Aug 14 19:24:45 virdoc kernel: [17235.995718] IPv6: ADDRCONF(NETDEV_CHANGE): veth560e358: link becomes ready
Aug 14 19:24:46 virdoc systemd-networkd[729]: vethda2de73: Gained IPv6LL
Aug 14 19:24:47 virdoc systemd-networkd[729]: veth560e358: Gained IPv6LL
Aug 14 19:24:55 virdoc kernel: [17245.834483] [UFW BLOCK] IN=br-bf3b9d3af07a OUT= PHYSIN=veth59f4385 MAC=02:42:f5:85:12:ed:02:42:ac:12:00:02:08:00 SRC=172.18.0.2 DST=178.254.33.192 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=16477 DF PROTO=TCP SPT=59906 DPT=8287 WINDOW=64240 RES=0x00 SYN URGP=0
Aug 14 19:25:06 virdoc systemd[1]: run-docker-runtime\x2drunc-moby-ee853fdcf64fb31511df6c05a2fc6e91e9bb208f971cd83f26cce60ae0c16d3b-runc.wBKqNF.mount: Deactivated successfully.
Aug 14 19:25:08 virdoc systemd[1]: run-docker-runtime\x2drunc-moby-ee853fdcf64fb31511df6c05a2fc6e91e9bb208f971cd83f26cce60ae0c16d3b-runc.I7MF42.mount: Deactivated successfully.
Aug 14 19:25:15 virdoc kernel: [17265.928904] [UFW BLOCK] IN=br-bf3b9d3af07a OUT= PHYSIN=veth59f4385 MAC=02:42:f5:85:12:ed:02:42:ac:12:00:02:08:00 SRC=172.18.0.2 DST=178.254.33.192 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=37794 DF PROTO=TCP SPT=33798 DPT=8999 WINDOW=64240 RES=0x00 SYN URGP=0

Even though the status of UFW says:

ufw status
Status: inactive

Also, when I uninstall UFW, reboot and recreate a container, I still receive a message with UFW in the logs:

Aug 14 19:43:54 virdoc kernel: [ 80.363829] [UFW BLOCK] IN=ens3 OUT= MAC=00:1c:42:5e:40:8a:3c:8c:93:7e:b4:40:08:00 SRC=45.11.57.7 DST=178.254.33.192 LEN=60 TOS=0x00 PREC=0x00 TTL=56 ID=60413 DF PROTO=TCP SPT=60076 DPT=8545 WINDOW=64240 RES=0x00 SYN URGP=0
Aug 14 19:44:00 virdoc kernel: [ 86.966521] [UFW BLOCK] IN=br-bf3b9d3af07a OUT= PHYSIN=veth0d709d1 MAC=02:42:55:ed:64:c7:02:42:ac:12:00:02:08:00 SRC=172.18.0.2 DST=178.254.33.192 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=63844 DF PROTO=TCP SPT=39558 DPT=8999 WINDOW=64240 RES=0x00 SYN URGP=0

So it seems to me that the log entry is not UFW-specific but rather firewall-related in general?
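
One hedged way to check this: "[UFW BLOCK]" is the log prefix attached by ufw's LOG rules, so if those rules are still loaded in the kernel ruleset, the prefix keeps appearing even when the ufw frontend reports inactive or has been removed. A sketch (run as root):

```shell
# Count ufw-managed chains/rules still present in the live ruleset.
# A non-zero count means the rules (and their "[UFW BLOCK]" LOG targets)
# survived disabling or uninstalling the ufw frontend.
iptables-save | grep -ci ufw

# Show the LOG rules that produce the "[UFW BLOCK]" prefix, if any.
iptables-save | grep 'UFW BLOCK'
```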

The thing is that the problem only occurs with some containers. Most containers are accessible.

Thanks again for some more hints (the link you mentioned was unfortunately no help).

Greets, mike

I tried to find anything that mentions "UFW BLOCK" not caused by UFW, but I found the opposite on multiple sites, for example:

Are you sure the log messages were not saved before you uninstalled UFW? The answer is probably yes, but it is still strange. I can imagine disabling UFW incorrectly, but it is hard to imagine uninstalling it incorrectly. Still, can you share how you disabled it and how you uninstalled it?

Also please, share the output of the following commands:

docker info

docker version

snap list docker

dpkg -l | grep docker 

I know you told me you installed Docker from the official repository. I just want to be sure that nothing installed a different version from somewhere else.

One thing I can still imagine, although I don't think it is very likely, is that there is another ufw somewhere.

My attempts and used commands:

Disabling UFW: ufw disable, then reboot / recreating
Uninstalling UFW: apt remove ufw, then checking processes for UFW (none found), then reboot / recreating

docker info:

Client: Docker Engine - Community
 Version:    24.0.5
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.11.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.20.2
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 33
  Running: 33
  Paused: 0
  Stopped: 0
 Images: 32
 Server Version: 24.0.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 8165feabfdfe38c65b599c4993d227328c231fca
 runc version: v1.1.8-0-g82f18fe
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.15.0-79-generic
 Operating System: Ubuntu 22.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.751GiB
 Name: xxx.de
 ID: 4QZS:LT5I:NUFN:IHYU:HCO2:5TQQ:BP3J:NIXE:GRRY:GQG2:LMI6:KQI6
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

docker version:

Client: Docker Engine - Community
 Version:           24.0.5
 API version:       1.43
 Go version:        go1.20.6
 Git commit:        ced0996
 Built:             Fri Jul 21 20:35:18 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.5
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.6
  Git commit:       a61e2b4
  Built:            Fri Jul 21 20:35:18 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.22
  GitCommit:        8165feabfdfe38c65b599c4993d227328c231fca
 runc:
  Version:          1.1.8
  GitCommit:        v1.1.8-0-g82f18fe
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

snap list docker:
error: no matching snaps installed

dpkg -l | grep docker:

ii  docker-buildx-plugin                  0.11.2-1~ubuntu.22.04~jammy             amd64        Docker Buildx cli plugin.
ii  docker-ce                             5:24.0.5-1~ubuntu.22.04~jammy           amd64        Docker: the open-source application container engine
ii  docker-ce-cli                         5:24.0.5-1~ubuntu.22.04~jammy           amd64        Docker CLI: the open-source application container engine
ii  docker-ce-rootless-extras             5:24.0.5-1~ubuntu.22.04~jammy           amd64        Rootless support for Docker.
ii  docker-compose-plugin                 2.20.2-1~ubuntu.22.04~jammy             amd64        Docker Compose (V2) plugin for the Docker CLI.

I also checked that there is only one instance of UFW running.

Thx a lot again, mike

Everything looks good. Since I don't have any more ideas about what has happened on the host and why you still see UFW in the logs when UFW is uninstalled, maybe we can identify the differences between the working and non-working containers' networks.

Are the networks all created by Docker Compose? Do you have any network with custom parameters? Maybe some networks created as external networks earlier? You could also check the IP addresses. We saw one in the logs. What about the rest of the IPs? Are they different or are they all like 172.x.x.x?

All my containers are created via docker-compose and don't have any special custom parameters.

Some of the containers also use other IP addresses/ranges, but that doesn't matter, because I have containers in the 172.xx range and also in other IP ranges that aren't accessible. I already checked that, and also changed the networking of non-working containers to IP ranges that are working, or added working networks to them.

There is no difference in the behaviour. Still not reachable.

Greets

Then unfortunately I am out of ideas.

Thanks rimelek for your support.

I installed a fresh Debian 12 VM with a fresh Docker (official repo) and created some containers. All the necessary firewall rules were created.

In my Ubuntu 22.04 VM those rules aren't created anymore. I think this is the place I have to dig into further.

Greets

Since I use Ubuntu 22.04 too, if you can create a compose project to reproduce the issue, I can test it on my machine. Just let me know if you need me to do that. And if you can run new Ubuntu VMs, not just Debian VMs, you could try that too. I upgraded an Ubuntu 20.04 host a long time ago which doesn't run Docker anymore, only in virtual machines. I didn't have problems with the upgrade then, and everything works in fresh Ubuntu 22.04 virtual machines, so who knows what happened with your environment. It could be a bug in the Ubuntu upgrade, or it is also possible that the current Docker version can't handle OS upgrades so easily.

Thx for your offer. But right now I would like to know if there is any logging by Docker when creating a new container/network/rules, to check whether the firewall rules are set correctly or, as I suppose, are missing entirely.

And if there is any logging, do I have to enable it anywhere (e.g. the Docker config), and are there different logging levels? Any hint on that would be great. Thanks!

Greets

I didn't understand your question at first. I realized that and deleted my previous post. So if you are looking for a "verbose" option for logging, you can try the debug mode of dockerd:

  -D, --debug                                 Enable debug mode

https://docs.docker.com/engine/reference/commandline/dockerd/#description

To run the daemon with debug output, use dockerd --debug or add "debug": true to the daemon.json file.
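
A minimal sketch of enabling it via daemon.json (note this overwrites the file, so merge the key by hand if you already have settings there):

```shell
# Write a daemon.json that turns on dockerd debug logging, then restart
# the daemon so it takes effect.
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "debug": true
}
EOF
sudo systemctl restart docker

# Debug output then appears in the journal:
sudo journalctl -u docker.service -f
```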

Thanks.

When recreating a container I receive:

Aug 21 09:14:33 xx(dot)net networkd-dispatcher[758]: ERROR:Failed to get interface "vethf0039fd" status: Command '['/usr/bin/networkctl', 'status', '--no-pager', '--no-legend', '--', 'vethf0039fd']' returned non-zero exit status 1.
Aug 21 09:14:33 xx(dot)net systemd[1]: networkd-dispatcher.service: Got notification message from PID 3152671, but reception only permitted for main PID 758
Aug 21 09:14:33 xx(dot)net networkctl[3152671]: Interface "vethf0039fd" not found.
Aug 21 09:14:33 xx(dot)net networkd-dispatcher[758]: ERROR:Unknown interface index 151 seen even after reload
Aug 21 09:14:33 xx(dot)net networkd-dispatcher[758]: WARNING:Unknown index 151 seen, reloading interface list
Aug 21 09:14:33 xx(dot)net networkd-dispatcher[758]: ERROR:Failed to get interface "veth021a0c3" status: Command '['/usr/bin/networkctl', 'status', '--no-pager', '--no-legend', '--', 'veth021a0c3']' returned non-zero exit status 1.
Aug 21 09:14:33 xx(dot)net systemd[1]: Cannot find unit for notify message of PID 3152662, ignoring.

It looks like the interfaces are not mapped/found correctly. So maybe the rules aren't attached as a consequence?

Greets

It is still just a symptom, not the cause, so you still need to find out why the interface is not created, which must have thrown an error before that.

There is one more thing that came to my mind. I remember that some people had Docker network problems. We couldn't find out why, but stopping the Docker daemon, deleting the network database file and starting Docker again helped. Just to be safe, I wouldn't completely delete that file, even though Docker should recreate it. The reason why that could help is that Docker stores network metadata in that file, and if the database becomes corrupt, that can lead to strange behaviors. I still don't understand why you saw UFW BLOCK in the logs, but this is one last thing I can suggest, so:

  1. If you can, stop and delete all containers, compose projects.
  2. Stop Docker
  3. Copy /var/lib/docker/network/files/local-kv.db to somewhere so you can restore it if you need to.
  4. Remove /var/lib/docker/network/files/local-kv.db
  5. Start Docker
  6. Recreate containers / compose projects if you deleted them.
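
The steps above can be sketched as follows (paths per a default installation; the socket is stopped too so the daemon does not restart on demand):

```shell
# Steps 1-2: stop Docker (and its activation socket).
sudo systemctl stop docker docker.socket
# Step 3: keep a backup copy of the network database.
sudo cp /var/lib/docker/network/files/local-kv.db /root/local-kv.db.bak
# Step 4: remove the possibly corrupt database; dockerd recreates it on start.
sudo rm /var/lib/docker/network/files/local-kv.db
# Step 5: start Docker again.
sudo systemctl start docker
```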

Thanks rimelek,

unfortunately that doesn't make any difference. Still the same behavior. The necessary firewall rules aren't created.

It seems that I have to reinstall the whole server OS…

Greets

I had a similar problem after upgrading from 20.04 to 22.04, but it's because I was still running the focal version of Docker. I changed /etc/apt/sources.list.d/docker.list to this:

deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu   jammy stable

(just s/focal/jammy/)

Then I ran sudo apt upgrade docker-ce.
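
That change can be sketched as follows (back up the file first; the package names match the dpkg output earlier in the thread):

```shell
# Back up, then switch the Docker APT repo entry from focal to jammy.
sudo cp /etc/apt/sources.list.d/docker.list /etc/apt/sources.list.d/docker.list.bak
sudo sed -i 's/focal/jammy/' /etc/apt/sources.list.d/docker.list

# Refresh the package index and upgrade the Docker packages from the jammy repo.
sudo apt update
sudo apt install --only-upgrade docker-ce docker-ce-cli containerd.io
```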

If you installed the Canonical-provided docker package you might need something different.
