
Using docker for applications with kernel module constraints, specifically CUDA?


(Skiguy0123) #1

I’ve been trying to apply Docker to my research, which involves a lot of scientific computing using CUDA. I was hoping I could set up an image with all the software installed and then pass the GPU through using the device option of docker run. However, CUDA is picky about kernel modules, and, as far as I can tell, these are not controllable by Docker. For example, one of the install methods for CUDA involves disabling/blacklisting nouveau and rebooting prior to installation. Does all this mean Docker isn’t appropriate for what I’m trying to do? Thanks!
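For reference, this is roughly what I had in mind for passing the GPU through (assuming the host already has the NVIDIA driver loaded and the /dev/nvidia* device nodes exist; the image name and binary are just placeholders):

    # pass the NVIDIA device nodes from the host into the container
    docker run --rm \
        --device=/dev/nvidiactl \
        --device=/dev/nvidia-uvm \
        --device=/dev/nvidia0 \
        my-cuda-image ./my_simulation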


(Jeff Anderson) #2

It may very well be possible to get CUDA working. It might require you to do some of the setup ahead of time on the host. Generally speaking, the host is where you would need to blacklist modules. It is probably possible to do all the steps to configure the host’s udev/module blacklisting from a container image, but it would be ugly.
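For example, blacklisting nouveau on the host usually comes down to a small modprobe config file like the one below (the file name is just a common convention; you then regenerate the initramfs and reboot):

    # /etc/modprobe.d/blacklist-nouveau.conf  (on the host, not in the container)
    blacklist nouveau
    options nouveau modeset=0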

This blog post discusses installing kernel modules from a container: http://dummdida.tumblr.com/post/117157045170/modprobe-in-a-docker-container
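The general pattern there is a privileged container with the host’s module tree mounted in, something along these lines (a rough sketch, not the exact commands from that post, and it assumes the image has modprobe available and the matching module exists under the host’s /lib/modules):

    docker run --rm --privileged \
        -v /lib/modules:/lib/modules:ro \
        ubuntu:14.04 modprobe nvidia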

Additionally, there are plenty of images on the docker registry with cuda baked in: https://registry.hub.docker.com/search?q=cuda&searchfield=

They have likely faced some of the same problems you are facing now.

I would consider the kernel module setup to be a prerequisite step that must be done on the host ahead of time. The container should assume the host has the right kernel modules and only worry about running your CUDA application.
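In other words, the image only carries the user-space pieces. Something like this hypothetical Dockerfile (the base image and application name are placeholders, e.g. one of the CUDA images from the registry search above):

    # Hypothetical Dockerfile: user-space CUDA libraries and the application only;
    # the NVIDIA kernel module and any nouveau blacklisting stay on the host.
    FROM some-cuda-base-image
    COPY my_simulation /usr/local/bin/my_simulation
    CMD ["my_simulation"]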

/Jeff


(Skiguy0123) #3

Jeff,
Thanks! I’ll give this a go.
Best
Steve