I might have a fundamental misunderstanding of how Docker works, but I thought that the platform Docker runs on locally, i.e., my OS, shouldn’t matter when running images. Shouldn’t all the necessary dependencies be packaged inside the container?
I have a Rust app, and I'm using the same image as both builder and runner: rust:1.73-bullseye. On Fedora it builds fine, but when I try to run it, I get these errors:
/app/maj-fullstack: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /app/maj-fullstack)
/app/maj-fullstack: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /app/maj-fullstack)
/app/maj-fullstack: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /app/maj-fullstack)
So I tried setting up a Debian VM, and there everything builds and runs without any problems.
Somehow Docker seems to be looking for the libc library on Fedora instead of the one inside the container, which I know exists. Or am I misunderstanding this?
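For anyone hitting the same errors, here is a sketch of how to compare what the binary requires with what the image actually provides (the binary path is taken from the error messages above):

```shell
# List the GLIBC symbol versions the binary requires
objdump -T /app/maj-fullstack | grep -o 'GLIBC_[0-9.]*' | sort -Vu

# Check which glibc the runner image ships; Debian bullseye has 2.31,
# so a binary genuinely needing GLIBC_2.32+ could not run on it
docker run --rm rust:1.73-bullseye ldd --version
```

If the highest version from objdump exceeds what ldd reports, the binary really was built against a newer libc; if not, the "not found" error is misleading and something else (as it turned out here) is interfering.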
That is only partially true. Running the app in a container built from the same image improves your chances, but the kernel is still the host's kernel, and that sometimes matters.
So you tried once in a container and once in a virtual machine. What makes you think the container is looking for glibc on the host?
That is actually not possible. The Docker daemon could throw error messages, but that is independent of what is running in the container. You need to install all the dependencies in the Docker image, and everything has to be in the right folder. For example, if you copied a Rust binary into the Docker image and tried to run it in the container, it could search for the dependency in the wrong folder. That doesn't mean it is searching for the dependency on the host; it is just searching for the dependency in the container at the same location where it was on the host.
So you tried once in a container and once in a virtual machine. What makes you think the container is looking for glibc on the host?
No, it was in a container both times.
First I built the Docker container on Fedora, but when I tried to run it I got the GLIBC errors.
Then I just pulled the exact same repo (same Dockerfile and everything) on a Debian VM, built the Docker container, ran the Docker container and everything works fine.
I presume it’s looking for glibc on the host because that’s the only thing that changed.
You need to install all the dependencies in the Docker image, and everything has to be in the right folder. For example, if you copied a Rust binary into the Docker image and tried to run it in the container, it could search for the dependency in the wrong folder.
Didn't try to copy it; I built it with Docker both times. The image should be the same for the same Dockerfile, no? And as I said, the image works on Debian, just not on Fedora, so the Dockerfile seems fine. The app has been deployed a couple of times over the last few days; I would just prefer not to have to launch a VM every time I deploy.
Why would a container be able to access anything on the host freely? That would contradict the idea that containers are self-contained, wouldn't it?
Have you checked whether:

- the file actually exists in the image?
- disabling SELinux, or at least setting it to permissive mode, before building the image and creating a container from it makes any difference?
- it makes a difference if the image is built on a Debian or Ubuntu system (or any other system that doesn't use SELinux) and a container is then run on the Fedora system using that image?

Why am I asking these questions? I suspect SELinux or ACLs play a part in the problem you are experiencing. An image should run on every OS, as long as the kernel of the system supports all required features and modules, and a security feature of the system does not prevent it.
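To test the SELinux angle, something along these lines should do (the image tag is just a placeholder; setenforce requires root and only changes the mode until reboot):

```shell
# Show the current SELinux mode (Enforcing / Permissive / Disabled)
getenforce

# Temporarily switch to permissive mode: denials are logged, not enforced
sudo setenforce 0

# Rebuild and run; if it works now, SELinux was blocking something
docker build -t maj-fullstack .
docker run --rm maj-fullstack

# Switch back to enforcing mode afterwards
sudo setenforce 1
```

Permissive mode is preferable to disabling SELinux outright, because the denials still show up in the audit log and tell you which policy to fix.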
Why would a container be able to access anything on the host freely? That would contradict the idea that containers are self-contained, wouldn't it?
Definitely seemed weird to me.
Regarding your suggestions:
The file did exist in the image.
I just tried copying the image built on Debian to Fedora and everything worked as expected.
I then tried disabling SELinux as suggested, and that did resolve the issue: I managed to build and run the image on Fedora without any errors.
I only experienced problems with SELinux once, right after installing Fedora; it didn't even cross my mind that it could be the cause of my issues. Thanks for your help!
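For reference, copying an image between machines as described above works without a registry via save/load (image and file names here are placeholders):

```shell
# On the Debian VM: export the built image to a tarball
docker save -o maj-fullstack.tar maj-fullstack:latest

# Transfer the tarball (scp, shared folder, ...), then on Fedora:
docker load -i maj-fullstack.tar
docker run --rm maj-fullstack:latest
```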
Glad it's sorted out. It would have been much clearer if the error message had indicated a permission problem instead of claiming the file doesn't exist…
If you figure out how to make everything work with SELinux enabled, we would appreciate it if you kept us posted.
As I don't use RHEL-based systems, I never had to deal with this problem myself. Though, years ago, I was in a project where they had to write SELinux policies and make sure, when running containers, that bind volumes used a trailing :z or :Z (I don't recall which one exactly) after -v /host/path:/container/path. Of course this also applies to the CI/CD runner/agent you use to build your images, if it runs as a container.
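For completeness: the two labels do different things. A lowercase :z applies a shared SELinux label, so several containers may use the same bind mount; an uppercase :Z applies a private label for that one container only. A hypothetical example (host paths and image name are made up):

```shell
# Shared label: multiple containers may mount this host directory
docker run --rm -v /srv/data:/data:z myimage

# Private label: relabeled exclusively for this single container
docker run --rm -v /srv/secrets:/secrets:Z myimage
```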