Docker container for cross-compilation

Hello everyone,

I am wondering whether I can use Docker to cross-compile an SDK with my toolchain. The build involves kernel files such as linux-header--generic.

I ran the cross-compilation with all its dependencies (the toolchain), but as soon as kernel modules are involved, the build fails with "Module.symvers not found!!".

I came to know that the Ubuntu Docker image does not come with these kernel headers, but that they can be installed by running apt-get install linux-headers-$(uname -r) -y in the Dockerfile. But still I am getting the same "Module.symvers not found!!" error, even though the file is clearly present in some header directories inside the /usr/src/ folder.

Please note that I performed the same compilation, with the same modules, on an Ubuntu host of the same version and with the same make version, and it completed without any hindrance; in the Docker container it fails with the message above. Please help.

I would say it mostly depends on the image you use and on which modules and tools are installed. Which image did you use?

Hi,
Thanks for the reply.

I am using the ubuntu:18.04 image. All the dependencies and the toolchain are copied into the Docker image. I run a shell script, entrypoint.sh, to source the environment variables, since each RUN command opens a new shell instance and any environment variables set there are lost. The environment file contains some export statements, if-else blocks, and a for loop. After sourcing that file in entrypoint.sh via ENTRYPOINT, I run the actual make compilation through CMD. I don't know what I am missing.

Each RUN instruction runs in a separate container; the result is an image layer that persists filesystem changes. Neither processes nor variables survive after a RUN instruction finishes.
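To illustrate with a minimal, hypothetical Dockerfile fragment: a variable set with export inside a RUN instruction is gone by the next instruction, while ENV is recorded in the image metadata and is visible to all later instructions and to containers started from the image:

FROM ubuntu:18.04
# Gone as soon as this RUN instruction finishes:
RUN export SDK_ROOT=/sdkapp && echo "inside RUN: $SDK_ROOT"
# Persisted in the image, visible to later RUNs and at runtime:
ENV SDK_ROOT=/sdkapp
RUN echo "later RUN still sees: $SDK_ROOT"

That is why sourcing the environment file in an ENTRYPOINT script is the usual workaround for anything more complex than plain variables (loops, conditionals, and so on).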

This is the right approach. The ENTRYPOINT instruction defines the script/binary to start, and CMD will be the argument to it. Please share your Dockerfile and entrypoint script.


Hi,
Thanks for the reply.

Here is entrypoint.sh

#!/bin/sh
# Read in the file of environment settings
. /sdkapp/sdk-all-6.5.12/environment-setup-ppce5500-fsl-linux
# Then run the CMD
exec "$@"

And here is Dockerfile

FROM ubuntu:18.04
RUN apt-get update && apt-get install build-essential -y && apt install linux-generic-hwe-18.04 -y && apt-get install linux-headers-$(uname -r) -y
ADD opt/ /opt/
ADD linux-4.1.8-rt8-hopk /sdkapp/linux-4.1.8-rt8-hopk/
ADD sdk-all-6.5.12 /sdkapp/sdk-all-6.5.12/
WORKDIR /sdkapp/sdk-all-6.5.12/systems/linux/user/gto-4_1
ENTRYPOINT ["/sdkapp/sdk-all-6.5.12/entrypoint.sh"]
CMD ["make"]

This can't be right. Even if you install the files that make up a kernel, that kernel won't be used. Also, linux-headers-$(uname -r) refers to the host's kernel, not to the container's kernel.
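You can see this directly: uname -r inside a container prints the host's kernel release, because the container shares the host's kernel. A quick sketch (the exact value depends on your host):

# On the host:
uname -r
# In a container — prints the same kernel release as the host:
docker run --rm ubuntu:18.04 uname -r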

But I need those files for SDK to run properly. Is there any way to ensure those kernel files are present within the docker container?

There is no such thing as a kernel in the container, since a container is not a separate system. It doesn't need a kernel; it already has one: the kernel of the host. It is just that the communication between the container and the host is limited. Sometimes processes in the container want to send system calls to the kernel; then you need to allow the required kernel capabilities or run the container in privileged mode. This wasn't always possible during the build process, and I never needed it, but there are some flags for buildx, which is the default builder now.

https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/reference.md#run—securityinsecure
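As a rough sketch (assuming the labs Dockerfile syntax and a builder configured to allow the entitlement), an insecure RUN during build looks like this:

# syntax=docker/dockerfile:1-labs
FROM ubuntu:18.04
# Runs without the default sandbox restrictions:
RUN --security=insecure ls /proc/self/status

built with something like docker buildx build --allow security.insecure .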

In your case the problem could be that something just checks whether the module is available. The kernel header files are not the modules; as far as I know, they are just some API interfaces for communicating with the kernel. So if you need the modules, you need to mount /lib/modules/${kernel_version} from the host.
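For example (my-sdk-image and the mounted paths are placeholders; adjust them to your setup), a runtime bind mount of the host's modules and kernel sources would look roughly like:

docker run --rm \
  -v /lib/modules:/lib/modules:ro \
  -v /usr/src:/usr/src:ro \
  my-sdk-image make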

I did it only once, when I needed to run virt-builder (to build VM images) in a Docker container because of some compatibility issues.

I also remember that once I had to install the header files, but I don't remember why, or whether the package name had to include the OS version; I guess it didn't, because I didn't use the HWE kernel. In the case of a container, I'm not sure which version you would need to specify: that of the container's distro or that of the host.

You mean mounting in a volume, right?

There is a difference between a volume and a bind mount. I meant bind-mounting the host folder, which you could do in a Dockerfile too.
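With BuildKit you can also bind-mount during a single RUN instruction; note that the source is taken from the build context (or a named stage or image), not from an arbitrary host path. A sketch, assuming a hypothetical modules/ directory in the build context:

FROM ubuntu:18.04
# Available at /lib/modules only while this RUN instruction executes:
RUN --mount=type=bind,source=modules,target=/lib/modules \
    ls /lib/modules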

I forgot to add to my previous post that it is also possible that what you want is simply not meant to run in a container.

I could mount the kernel modules because I just ran the container on a server and mounted the folder at runtime. Building an image should never depend on a specific host, because you could move the image to another host and it would simply not work. So you can try it, and you should, just to see whether it works, but I wouldn't recommend it if you want to share the image.


Sure. I will post my findings. Thanks, all, for your time.