Using docker for applications with kernel module constraints, specifically CUDA?

I’ve been trying to apply Docker to my research, which involves a lot of scientific computing using CUDA. I was hoping I could set up an image with all the software installed, and then pass the GPU through using the device option of `docker run`. However, CUDA is picky about kernel modules, and, as far as I can tell, these are not controllable by Docker. As an example, one of the install methods for CUDA involves disabling/blacklisting nouveau and rebooting prior to installation. Does all this mean Docker isn’t appropriate for what I’m trying to do? Thanks!

It may very well be possible to get CUDA working, but it might require you to do some of the setup ahead of time on the host. Generally speaking, the host is where you would need to blacklist modules. It is probably possible to drive the host’s udev/module blacklisting from a container image, but it would be ugly.
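For reference, the host-side blacklisting step usually looks something like this. This is only a sketch: the config file name is conventional, and whether you need `update-initramfs` or `dracut` depends on your distro.

```shell
# On the HOST (not in a container): blacklist the nouveau driver so the
# proprietary NVIDIA driver that CUDA needs can bind to the GPU.
cat <<'EOF' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
EOF

# Rebuild the initramfs so the blacklist takes effect at boot,
# then reboot before installing the NVIDIA driver.
sudo update-initramfs -u   # Debian/Ubuntu; use `sudo dracut --force` on RHEL/Fedora
sudo reboot
```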

This blog post discusses installing kernel modules from a container:

Additionally, there are plenty of images on the Docker registry with CUDA baked in:
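For example, NVIDIA publishes official CUDA base images on Docker Hub under `nvidia/cuda`. The tag below is illustrative; browse the repository for one that matches your host driver version.

```shell
# Pull a CUDA development image and check that the CUDA compiler is inside it.
# The tag is an example; newer/older tags exist for other CUDA and distro versions.
docker pull nvidia/cuda:12.4.1-devel-ubuntu22.04
docker run --rm nvidia/cuda:12.4.1-devel-ubuntu22.04 nvcc --version
```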

They have likely faced some of the same problems you are facing now.

I would consider the kernel module setup a prerequisite step that must be done on the host ahead of time. The container should assume the host has the right kernel modules loaded and only worry about running your CUDA application.
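Concretely, once the host driver is installed and loaded, passing the GPU through looks something like this. The image and app names are placeholders, and the device nodes only exist if the host’s NVIDIA driver created them; on a current Docker setup with the NVIDIA Container Toolkit you would more likely use `--gpus all` instead of listing devices by hand.

```shell
# Run a CUDA container by passing the NVIDIA device nodes through.
# /dev/nvidia0, /dev/nvidiactl, and /dev/nvidia-uvm are created on the
# host by the NVIDIA kernel driver; my-cuda-image and ./my_cuda_app are
# placeholders for your own image and binary.
docker run --rm \
  --device /dev/nvidia0 \
  --device /dev/nvidiactl \
  --device /dev/nvidia-uvm \
  my-cuda-image ./my_cuda_app
```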


Thanks! I’ll give this a go.