Accessing NVIDIA drivers at build time?

Hello,
I'm working on Windows 11 with Docker Desktop.
I have everything set up and working to run Docker images with CUDA; I can run a container that launches nvidia-smi successfully, so the NVIDIA drivers are available from a running container.
For example, this command works and returns all the CUDA info correctly:

docker run --gpus all nvidia/cuda:12.6.3-cudnn-devel-ubuntu24.04 nvidia-smi
=> NVIDIA-SMI 565.77.01 Driver Version: 566.36 CUDA Version: 12.7

But any BUILD step that requires nvidia-smi fails.
For example, putting this in the Dockerfile:

RUN nvidia-smi
=>
[9/9] RUN nvidia-smi:
0.479 /bin/sh: 1: nvidia-smi: not found

I need the nvidia-smi command at build time to build an executable with the CUDA feature enabled:

RUN cargo build --release --features cuda

The problem is that nvidia-smi is available at runtime (through the --gpus all option), but not at build time.
Is there any way to fix that?

Any help welcome 🙂
Thanks
Cedric

You can comment on this issue:

Since buildx doesn't run actual Docker containers and it is the default builder, GPU support would have to be implemented in buildx. Some people have shared workarounds, but I have no idea whether they would work with Docker Desktop, or at all.

Thanks for the suggestion.
I ended up building an intermediate image that I launched with --gpus all, so I could compile against the NVIDIA driver inside the running container, and finally committed that container into a new image containing everything.
It's a bit more work and not elegant, but it's simple and it works.
I really hope that one day Docker will be able to build with a --gpus all option.
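Roughly, the steps look like this (the image, container, and target names are just placeholders; the Dockerfile is assumed to copy the sources and toolchain in but skip the CUDA build step):

```shell
# 1. Build an intermediate image containing sources and toolchain,
#    but WITHOUT the cargo build step that needs the GPU driver.
docker build -t myapp-build .

# 2. Run it with GPU access, so the NVIDIA driver (and nvidia-smi)
#    is mounted inside, and do the compile there.
docker run --gpus all --name myapp-builder myapp-build \
    cargo build --release --features cuda

# 3. Snapshot the stopped container, now holding the compiled
#    binary, into the final image.
docker commit myapp-builder myapp:latest

# 4. Remove the intermediate container.
docker rm myapp-builder
```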
Cheers
Cedric
