I have 70 Docker containers all working well, except one: Immich.
The issue I am facing is related to the mount points. I mount a few folders into the container:
/photos/thumbs - on the host /mnt/user/appdata/immich_cache/thumbs
/photos/encoded-video - on the host /mnt/user/appdata/immich_cache/encoded
/photos/profile - on the host /mnt/user/appdata/immich_cache/profile
/photos/backups - on the host /mnt/user/appdata/immich_cache/backups
They are located on the NVMe cache drive (ZFS filesystem).
The problem: they randomly disappear from the Immich container.
Say Immich has been running for 8 hours; usually /photos/encoded-video, /photos/backups, and /photos/profile won't be visible in the container when I run ls -la /photos. Immich also reports in its log that these folders are missing.
A simple container restart fixes it. To be honest, I'm not sure where the issue might be; all permissions are OK, and everything works fine until the mounts are dropped.
Nothing uses these files except Immich, and only certain folders from /immich_cache are unmounted.
I checked the log: on startup Immich verifies that the folders exist and that read/write permissions are correct (it writes and reads a .immich file in each of them). Right after the restart yesterday evening all was good; then suddenly at 12 AM, when it performed some operations, it had already lost access to the folders.
Can you let me know what might be causing that? What should I check?
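For reference, next time it happens I plan to capture the state before restarting, roughly like this (a sketch; "immich" as the container name and the paths above are my setup, adjust as needed):

```shell
# Compare the mount table inside the container with the host's view.
docker exec immich findmnt -R /photos
findmnt -R /mnt/user/appdata/immich_cache

# Compare inodes: if the host path was remounted or recreated, its inode
# will differ from the one the container is still pinned to.
docker exec immich stat -c '%i %n' /photos/encoded-video
stat -c '%i %n' /mnt/user/appdata/immich_cache/encoded
```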
I can't decide whether this is a problem specific to the Unraid Docker package, or a problem with bind mounts in general.
A container, once created, does not allow configuration changes; it cannot drop (or add) mounts or anything else. If Docker on Unraid is able to do so, then it's a modification Unraid made to their Docker package.
Binds have a specific behavior. If you bind host paths into a container that are themselves mountpoints, the container will not be able to see when the mountpoint on the host is unmounted and remounted. In those cases, it is better to bind the parent folder and use bind propagation that permits the container to see changes to the mountpoint.
If you deploy your containers from the CLI, mount propagation can certainly be used, but if you deploy your configuration from the Unraid-specific UI, I have no idea whether it supports it or not.
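From the CLI, that parent-folder bind with slave propagation would look roughly like this (a sketch; the image tag and the idea of binding the whole cache folder to /photos are assumptions, your real setup maps subfolders individually):

```shell
# Bind the parent folder with (recursive) slave propagation, so that
# unmount/remount of submounts on the host becomes visible in the container.
docker run -d --name immich \
  --mount type=bind,source=/mnt/user/appdata/immich_cache,target=/photos,bind-propagation=rslave \
  ghcr.io/immich-app/immich-server:release
```

The default propagation for `--mount type=bind` is `rprivate`, which is exactly the "container keeps the old view" behavior described above.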
Thank you for your reply; I am thinking about how to approach it. Initially the folders inside the container do exist, they just disappear after a few hours. I would blame the Unraid way of mounting "user" shares, but the 70 other containers are working with literally zero issues.
From what you said, once a container is created its mounts cannot be changed, and you are right: I could see in Portainer that the mount still exists and points to the correct locations, yet ls -la in the console showed 3 folders missing, until I restarted the container and they were back.
What I have done: I changed the mount from /mnt/user to the direct /mnt/cache path on these 3 problematic folders only. I will see what happens over the next few hours/days. A shame there is nothing in the logs that would even point to where I should be looking.
What you describe sounds like the mountpoint on the host changes. Like I wrote: mount /mnt instead (I know that exposes too much into the container) and use bind propagation to make the container see updates on submounts. You need to configure the mount propagation as slave, which makes the replica mount (the one inside the container) see changes to submounts in the original folder.
To be more precise: when you bind a host folder into a container folder, the current inode of the host folder is looked up and used to mount that folder onto the container path. If the folder then gets unmounted/remounted on the host, its inode changes, but the container still sees the old inode. If you instead bind the parent folder of the mount, its inode remains stable, and with slave propagation the container will be able to see the changed inodes after a remount.
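The pinned-inode effect can be illustrated without Docker at all. Here, holding an open handle to a directory plays the role of the bind mount (an analogy, not what Docker literally does): once the host-side folder is removed and recreated, the old handle keeps pointing at the dead inode while the path resolves to the new one.

```python
import os
import tempfile

base = tempfile.mkdtemp()
sub = os.path.join(base, "encoded")
os.mkdir(sub)

# Pin the directory by inode, like a bind mount does at container creation.
fd = os.open(sub, os.O_RDONLY | os.O_DIRECTORY)

# Simulate the host-side remove/recreate (what a remount effectively does).
os.rmdir(sub)
os.mkdir(sub)
with open(os.path.join(sub, "marker"), "w") as f:
    f.write("new")

old_listing = os.listdir(fd)   # the pinned handle: a deleted, empty directory
new_listing = os.listdir(sub)  # the path: resolves to the freshly created directory
print(old_listing)             # []
print(new_listing)             # ['marker']

os.close(fd)
```

This mirrors the symptom in the thread: the container's mount still "exists" (Portainer shows it), but it references a directory that no longer has any content on the host.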
A new docker version is not going to change this behavior.
Kind of, lol. I get your point and I know what you want to achieve, but I'm not sure how to do it yet. Like I said, instead of /mnt/user/appdata/immich_cache/encoded-videos I changed the mount to the direct /mnt/cache/appdata/immich_cache/encoded-videos. This bypasses the FUSE layer Unraid uses, which is the only thing that comes to my mind; if that fails, I will try what you suggested.
Hi mate, there is a chance we have found where the issue is.
Immich does a weird thing: it creates these empty folders in /photos/ even though we specify them to be in /immich_cache/. However, if you delete the empty folders, Immich goes crazy; I think they are somehow linked internally.
What I did recently: I changed how the mover moves things, in my case to clean up empty top-level folders. Our theory is that when the mover runs it wipes the empty folders, which breaks the mount, and even though they are recreated later, the link is gone until I recreate the container.
Everything seems to run fine until midnight; my mover runs at 22:30, so that would explain why everything looks OK before then.
I will inform the Immich team of the possible issue; maybe they can patch it somehow on their side. Anyway, it can be sorted out now if I put this on a separate share pool with no cache attached.