Libc incompatibilities: when will they emerge?

Hi there,

I understand that there are two potential sources of incompatibility between a Docker image and its host:

  • The Docker version of the host system (presumably problems could arise at some point if very new images need to run on very old hosts)
  • Any library that makes system calls, in particular libc

I’d like to understand when the latter will emerge: when the image is run and the first application in the image loads the libc contained in the image (so the loader would complain about a mismatch), or sometime later, when a system function is called that is not available in the kernel?

BTW: I have tried to provoke a failure by running an Ubuntu 16.04 image on an Ubuntu 12.04.5 host, but have failed to find any problems with simple toy applications, even though 12.04 uses a far older kernel than 16.04 would, and the libc contained in the image is far newer (albeit with the same major number, 6).

Thanks and Best Regards,
Beet

In practice, this isn’t something that I have seen come up terribly often (neither here nor on the IRC channel). It is certainly something to keep in mind for extreme circumstances. Normally, it is considered best practice to run as new a kernel as possible.

I have seen at least one case where someone was trying to run very, very old software inside a Docker container: Help get JDK 1.0 working in Docker?

I don’t know if the trouble that they ran into was due to a kernel syscall incompatibility or another version mismatch issue. The fact that they got it working at all seems to suggest that modern kernels should be able to run older userspaces just fine.

Granted, my evidence is anecdotal. Someone more knowledgeable about the Linux kernel might be able to shed more light on this.

Jeff – could you elaborate a bit on why we can run 16.04 images on a 12.04 host?

Our understanding is the containers have their own filesystem but “share” the kernel with the host – so how does that work with 16.04 images? Do they use the 12.04 kernel as-is?

@ruebe @dreadpirateshawn, to jump in and clarify a few things here:

A container image, in aggregate, is simply a root filesystem (snapshot) for a given process. This snapshot only encapsulates the userspace pieces (specifically, the filesystem).

It is correct that containers use the kernel of the host where they are running, just like normal processes running “bare” on the system would. They do not share libraries with the host, and they do not get a different kernel just because the image is different: each Docker container has its own set of libraries, since each container has its own, unique root filesystem, but the kernel is always shared with the host.

Those libraries, including glibc, are in userspace. And the first rule of kernel maintenance is “DO NOT BREAK USERSPACE”.

The kernel is generally quite good about keeping its interfaces (e.g. syscalls) backwards compatible across subsequent versions. When it doesn’t, that’s a (high-priority) bug.

So, running an ubuntu:12.04 container on a more modern kernel is generally very safe, if the original and target kernels are both vanilla. You ask, however, about the opposite case of running an ubuntu:16.04 application on an Ubuntu 12.04 host (you don’t say precisely which kernel version; I’d guess something like 3.13.11). That’s unsafe, but it doesn’t mean the applications you run will necessarily fail out of the gate. Many applications (nginx, $YOUR_WEBAPP, etc.) don’t rely on recent kernel features. If you look at the list of Linux system calls and when they were added, very few were added after 3.13. So that might explain why things seem to be working, but it doesn’t mean there aren’t nasty runtime bugs and errors lurking around the corner somewhere.

At any rate, you probably shouldn’t try to run applications targeted for future versions of the kernel on older ones, but I hope that helps explain more why they may seem to work.
