nvidia-smi showing CUDA: N/A since latest NVIDIA driver update on host

Without knowing exactly what you did, how you installed Docker and the NVIDIA driver, and how you start the container, we can't tell what you are doing wrong.

Normally, you would install Docker from the official repository, install the correct NVIDIA driver, and also install the NVIDIA Container Toolkit:

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
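As a rough sketch of those steps on a Debian/Ubuntu host (the convenience script and `ubuntu-drivers` are just one option each; follow the linked install guide for your distribution and for adding NVIDIA's package repository):

```
# 1. Install Docker Engine (Docker's convenience script, or set up the apt repo per docs.docker.com)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# 2. Install the recommended NVIDIA driver from your distribution's packages
sudo ubuntu-drivers autoinstall

# 3. Install the NVIDIA Container Toolkit (after adding NVIDIA's apt repository
#    as described in the install guide) and register it with Docker
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```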

These steps are mentioned in the Docker documentation in the GPU section:

https://docs.docker.com/engine/containers/resource_constraints/#gpu

which also shows a command to test the installation:

```
docker run -it --rm --gpus all ubuntu nvidia-smi
```

It also mentions environment variables you can set, or you can use the CUDA base image instead.
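For example, a test based on a CUDA base image rather than plain ubuntu could look like this (the image tag is only an example, pick one that matches your driver):

```
# Run nvidia-smi from an official CUDA base image (example tag, adjust to your setup)
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Or restrict the driver capabilities via the --gpus flag, as shown in the Docker docs
docker run --rm --gpus 'all,capabilities=utility' ubuntu nvidia-smi
```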

Update:

I missed the fact that it worked before and that the driver update broke it, so I'm not sure why that happened.

Update 2:

You can check this compatibility documentation:

https://docs.nvidia.com/deploy/cuda-compatibility/#cuda-11-and-later-defaults-to-minor-version-compatibility
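To see whether the updated driver and the CUDA version used inside the container still match, it may help to compare the two (again, the image tag below is only an example):

```
# On the host: the "CUDA Version" field in the nvidia-smi header is the highest
# CUDA version the installed driver supports
nvidia-smi

# Inside a CUDA-based container: the CUDA userspace there must not be newer than
# what the host driver supports (minor-version compatibility aside, per the doc above)
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```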