I’m trying to troubleshoot empty bind mounts when running containers on a new host VM using AlmaLinux 9. We’re using a script to run the container and it uses the following command:
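Roughly this shape (the image name and paths here are placeholders standing in for our real ones):

docker run --rm -it -v "$(pwd)"/data:/app/data our-app-image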
The folder gets created inside the container, but it's empty. I have tried the following, with no change:
Gave the host folder very loose permissions and made sure it had the expected owner and group.
Added the :z flag, no change, and then the :Z flag, again no change.
Tried disabling SELinux temporarily, but when attempting to do so, saw that it was not running in the first place.
I also tried using the longer syntax to mount the folder, but it did not change the outcome.
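By the longer syntax I mean the --mount form instead of -v, something along these lines (placeholder names again):

docker run --rm -it --mount type=bind,source="$(pwd)"/data,target=/app/data our-app-image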
pwd gives the expected path, and running docker inspect on the container while it is running shows that the mount uses the expected path as well; that path is inside my home folder.
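For what it's worth, I've been checking the mount with something like:

docker inspect --format '{{ json .Mounts }}' <container-name>

and the Source field there matches the host path.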
Docker reports that it’s the following version:
Docker version 28.3.3, build 980b856
It's also not working on my teammate's VM/server, but it worked on our previous Fedora and Amazon Linux VMs/servers. Other apps in our ecosystem that use the same pattern exhibit the same problem, so I suspect the problem is with our new systems' configuration or Docker setup.
Can anyone suggest next steps to try to debug this? I’d appreciate it!
Reviewing the docker inspect output from my original attempts shows that the binding uses the correct path; the folder just appears empty inside the container, while it definitely has contents outside.
I just tried your suggestion, but the docker inspect output shows that the host directory is wrong, because pwd doesn't produce any output with that syntax. Trying it with $(shell pwd) gives me the expected path, but the folder is still empty.
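(For context, we launch this from a Makefile. make expands $(pwd) itself before the shell runs anything, and since there is no make variable named pwd it expands to an empty string, which is why the source path came out wrong. In a recipe, $(shell pwd), $$(pwd), or the built-in $(CURDIR) all give the real directory, e.g. -v "$(CURDIR)"/data:/app/data with the same placeholder names as before.)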
How did you install Docker? Sharing the platform almost answers it, but only almost. A direct link to the guide you followed would be useful.
The following commands can give us some idea and help recognize an incorrectly installed Docker:
docker info
docker version
Review the output before sharing and remove any confidential data that appears (a public IP, for example).
For debugging, you could create a file in the container with a special name like “veryuniquefilename.txt” and run the following command on the host:
find / -name veryuniquefilename.txt
If the file is not in a mounted folder, it will most likely be under the Docker data root, on a container filesystem. A bind mount just makes a folder available at another location, but with a correct bind mount there should be no difference from the container's point of view; it is the same folder.
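You can check where the data root is with:

docker info --format '{{ .DockerRootDir }}'

It is usually /var/lib/docker, so a file that never made it into a bind mount would typically turn up somewhere under /var/lib/docker/overlay2/... in the find output.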
If I remove the mount that isn't working as expected and add a new one pointing at a path that doesn't already exist in the built container, I can cd into it and create files that persist outside the container after exiting. So it seems our current setup doesn't like mapping a bind mount on top of an existing directory, although that seemed to work for us before.
I haven't looked into all the places where we use this pattern, but I think in at least one of them we could remove the folder from the initial build so that the path is clear for the bind mount. I'm not sure that will work for us in all cases, though.
I could not understand your description after the shared docker info output; I couldn't follow what worked and what didn't. Maybe you could share the commands, or a bullet list describing step by step what you did and what the result was.
Created a folder at (project root)/banana on the host.
Ran the container with a mount to the banana folder.
Inside the container, verified the banana folder was there, created two text files in the banana folder.
Ran the find command on the host; the files were in the bound folder ((project root)/banana), as expected.
The problem is that since the container already has (project root), I can't do a bind mount on (project root) directly; I can only bind the banana folder, since that path isn't already populated in the container.
So the files were found on the host (outside containers) in the mount source, but nowhere else?
That shouldn't happen unless the daemon is running in a virtual machine or on a remote machine, which I don't see in your outputs.
I would check the Docker daemon logs or any other system logs that indicate failing mounts, but I don't see how a container can start when a specified folder is not mounted.
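On a systemd host like AlmaLinux that would normally be something like:

journalctl -u docker.service --since "1 hour ago"

assuming Docker runs as the usual docker.service unit.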
Can you reproduce this without a Makefile, just by running the docker commands directly? I assume you already tried that, at least in your last test; I just want to be sure, and sorry if I missed it in your messages.
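Something minimal like this would do, with any small image you have at hand (alpine here is just an example):

mkdir -p ./mounttest && touch ./mounttest/hello.txt
docker run --rm -v "$(pwd)"/mounttest:/mnt/mounttest alpine ls /mnt/mounttest

If hello.txt shows up in the output, the daemon and the mount itself are fine and the problem is in how the full command gets built.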
I could imagine some security software blocking certain system calls or commands on AlmaLinux, but I don't recall mount issues on other RHEL-based distributions either where there was no error message and the container still started.
I would also point out that AlmaLinux is not officially supported, even if it is "binary compatible with RHEL". That only means that if there is any kind of incompatibility with anything, even the exact versions of the libraries, it was probably never tested.
I was able to debug this and found that the expected dockerd process wasn't the one being used: a script was switching to the minikube docker context when it wasn't expected. Fixing that, and making sure our script wrapper switched back to the host VM's docker context, fixed the issue. Thanks for your debugging suggestions.
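In case it helps anyone else, commands like docker context ls, docker context show, and echo $DOCKER_HOST show which daemon the docker CLI is actually talking to, and docker context use default switches back to the local one (the context names on your setup may differ).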