Different container behaviour on Fedora 36 versus Debian 11.5

Hello,

I have Docker running on Fedora 36. Earlier this year, around Q1, some Docker updates broke my ZoneMinder installation: post-upgrade, ZoneMinder wouldn't start. I didn't have much time to look into the issue at the time, so I rolled back to ~containerd.io-1.5.x and docker-ce ~20.10.12 and pinned the versions.
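(I don't recall the exact commands, but the rollback and pin would have been something like the following, using dnf's versionlock plugin; the version strings are approximate:)

sudo dnf install 'dnf-command(versionlock)'            # versionlock plugin
sudo dnf downgrade docker-ce-20.10.12 containerd.io-1.5.*   # versions approximate
sudo dnf versionlock add docker-ce containerd.io       # hold them at these versions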

This week, in preparation for upgrading to Fedora 37, I decided to upgrade to the latest version of Docker again. Post-upgrade, I am seeing the same issue as before: ZoneMinder won't start, and I get errors like:

zoneminder    | [zoneminder-service] 2022-11-16 15:37:02.874961655 INFO Starting ZoneMinder...
zoneminder    | [zoneminder] 2022-11-16 15:37:11.080951085 local1.err: Nov 16 15:37:11 zmdc[480]: FAT [Can't connect to zmdc.pl server process at /zoneminder/run/zmdc.sock: No such file or directory]
zoneminder    | [zoneminder] 2022-11-16 15:37:11.100921567 local1.err: Nov 16 15:37:11 zmpkg[474]: ERR [Unable to run "/usr/bin/zmdc.pl startup", output is "Starting server", status is 255]
zoneminder    | [zoneminder] 2022-11-16 15:37:11.169029600 local1.err: Nov 16 15:37:11 zmpkg[474]: ERR [Unable to run "/usr/bin/zmdc.pl start zmc -m 1", output is "Unable to connect to server using socket at /zoneminder/run/zmdc.sock", status is 255]
zoneminder    | [zoneminder] 2022-11-16 15:37:11.237262391 local1.err: Nov 16 15:37:11 zmpkg[474]: ERR [Unable to run "/usr/bin/zmdc.pl start zmfilter.pl --filter_id=1 --daemon", output is "Unable to connect to server using socket at /zoneminder/run/zmdc.sock", status is 255]
zoneminder    | [zoneminder] 2022-11-16 15:37:11.305231187 local1.err: Nov 16 15:37:11 zmpkg[474]: ERR [Unable to run "/usr/bin/zmdc.pl start zmfilter.pl --filter_id=2 --daemon", output is "Unable to connect to server using socket at /zoneminder/run/zmdc.sock", status is 255]
zoneminder    | [zoneminder] 2022-11-16 15:37:11.373230833 local1.err: Nov 16 15:37:11 zmpkg[474]: ERR [Unable to run "/usr/bin/zmdc.pl start zmwatch.pl", output is "Unable to connect to server using socket at /zoneminder/run/zmdc.sock", status is 255]
zoneminder    | [zoneminder] 2022-11-16 15:37:11.442746570 local1.err: Nov 16 15:37:11 zmpkg[474]: ERR [Unable to run "/usr/bin/zmdc.pl start zmupdate.pl -c", output is "Unable to connect to server using socket at /zoneminder/run/zmdc.sock", status is 255]
zoneminder    | [zoneminder] 2022-11-16 15:37:11.510296252 local1.err: Nov 16 15:37:11 zmpkg[474]: ERR [Unable to run "/usr/bin/zmdc.pl start zmtelemetry.pl", output is "Unable to connect to server using socket at /zoneminder/run/zmdc.sock", status is 255]
zoneminder    | [zoneminder] 2022-11-16 15:37:11.577843514 local1.err: Nov 16 15:37:11 zmpkg[474]: ERR [Unable to run "/usr/bin/zmdc.pl start zmstats.pl", output is "Unable to connect to server using socket at /zoneminder/run/zmdc.sock", status is 255]
...
zoneminder    | [nginx] 172.22.0.1 - - [16/Nov/2022:15:43:07 -0700] "GET /cgi-bin/nph-zms?mode=single&scale=100&monitor=1&rand=724512 HTTP/1.1" 200 33962 "http://localhost:2080/index.php?view=montagereview" "Mozilla/5.0 (X11; Linux x86_64; rv:106.0) Gecko/20100101 Firefox/106.0" "-"
zoneminder    | [zoneminder] 2022-11-16 15:43:08.215788037 local1.err: Nov 16 15:43:08 zms_m1[599]: ERR [zms_m1] [Can't open memory map file /dev/shm/zm.mmap.1: No such file or directory]
zoneminder    | [zoneminder] 2022-11-16 15:43:08.215788528 local1.err: Nov 16 15:43:08 zms_m1[599]: ERR [zms_m1] [Unable to connect to monitor id 1 for streaming]
zoneminder    | [zoneminder] 2022-11-16 15:43:08.215788801 local1.err: Nov 16 15:43:08 zms_m1[599]: ERR [zms_m1] [Monitor shm is not connected]

Connecting inside the container, ipcs -l indicates that shared memory is available, but the ZoneMinder dashboard always shows 0% used. I'm not sure whether that is itself a problem, or whether no shared memory is used simply because the daemon isn't starting.
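(For context, this is roughly how I'm checking from inside the container; the container name zoneminder matches the service in the logs above:)

docker exec -it zoneminder bash
ipcs -l            # kernel IPC limits look sane
df -h /dev/shm     # tmpfs is mounted and has free space
ls /dev/shm        # no zm.mmap.* files, presumably because zmc never starts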

I am using the zoneminder-base container from GitHub Packages, which seems to be the recommended ZoneMinder container.
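(For clarity, the image my compose file pulls is, as far as I can tell from that package page:)

docker pull ghcr.io/zoneminder-containers/zoneminder-base:latest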

I have even tried deleting the container completely, including all data volumes, and rebuilding. I still get the above error(s) on a fresh start, and the shm errors appear again after the first camera is configured.
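(The teardown and rebuild was essentially the usual sequence, roughly:)

docker compose down -v     # remove containers, networks, and the named data volumes
docker compose pull
docker compose up -d
docker compose logs -f zoneminder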

The interesting part, though, is that if I just copy my docker-compose and .env files to a Debian machine and start the same container, it comes up just fine. That was quite a surprise. So it feels like Docker on Fedora isn't behaving the same way as on Debian, and that the change crept in somewhere after the containerd.io 1.5.x releases.

Both the Fedora machine and the Debian machine are now on docker-ce ~20.10.x and containerd ~1.6.9.
Would anyone have any suggestions on how to figure out why the behaviour is different?
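(So far the only idea I have is to compare the basics on the two hosts, something like the commands below; suggestions for anything else worth checking would be welcome:)

docker version
docker info --format 'cgroup driver: {{.CgroupDriver}} v{{.CgroupVersion}}, storage: {{.Driver}}'
containerd --version
getenforce    # SELinux mode (Fedora defaults to enforcing)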

On a whim, I installed Podman on the same Fedora host and used it to podman-compose up the exact same image, and it starts right up. It seems like some strange compatibility issue with Docker on Fedora specifically.
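(Concretely, with the same docker-compose.yml and .env in place, that was just:)

sudo dnf install podman podman-compose
podman-compose up -d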