I’m trying to get PyTorch to recognize my CUDA driver. I installed PyTorch via pip with CUDA 12.1 support, and my NVIDIA setup also uses CUDA 12.1. I verified that CUDA works under WSL2 on Windows 10, and I am using the Docker image nvidia/cuda:12.1-base-ubuntu22.04. However, even though running the nvidia-smi command inside the container recognizes my RTX 3060, PyTorch cannot use it for tasks like those in Luma3.
What I've tried so far (see also the driver probe sketch just below this list):

- Starting the container with GPU access: `docker run --gpus all -it image`
- Configuring the NVIDIA runtime for Docker: `nvidia-ctk runtime configure --runtime=docker`
- Restarting WSL2 and Docker Desktop
- Installing the CUDA toolkit inside the container: `apt install nvidia-cuda-toolkit`
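To separate a driver passthrough problem from a PyTorch problem, here is a minimal probe I'm thinking of running inside the container. It is only a sketch, and it assumes the NVIDIA container runtime injects libcuda.so.1 into the container; if cuInit succeeds here while torch.cuda.is_available() is still False, the driver mapping itself is presumably fine and the issue is on the PyTorch/library side.

```python
import ctypes

def probe_cuda_driver():
    # Load the CUDA driver API directly, bypassing PyTorch entirely.
    # Assumption: the NVIDIA container runtime has mounted libcuda.so.1.
    try:
        libcuda = ctypes.CDLL("libcuda.so.1")
    except OSError as e:
        print("Could not load libcuda.so.1:", e)
        return
    rc = libcuda.cuInit(0)  # 0 means CUDA_SUCCESS
    print("cuInit returned:", rc)
    version = ctypes.c_int(0)
    libcuda.cuDriverGetVersion(ctypes.byref(version))
    print("Driver CUDA version:", version.value)  # e.g. 12010 for CUDA 12.1

if __name__ == "__main__":
    probe_cuda_driver()
```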
Any advice on what I might be missing or how to get Docker to properly use my GPU with PyTorch?
Computer: ASUS B550M, Ryzen 5800X, EVGA RTX 3060 12 GB, with WSL2 CUDA installed.
Docker Desktop version: 4.30.0 (149282)
OS Name: Microsoft Windows 10 Pro
OS Version: 10.0.19045 N/A Build 19045
Inside Docker, nvidia-smi verifies the RTX 3060 is present.
Inside the container, I run:
```python
import torch

def check_pytorch_cuda():
    print("PyTorch Version: ", torch.__version__)
    if torch.cuda.is_available():
        print("CUDA is available. Device count: ", torch.cuda.device_count())
        for i in range(torch.cuda.device_count()):
            print(f"Device {i}: {torch.cuda.get_device_name(i)}")
    else:
        print("CUDA is not available.")

if __name__ == "__main__":
    check_pytorch_cuda()
```
Output:
```
PyTorch Version: 2.3.1+cu121
/usr/local/lib/python3.10/dist-packages/torch/cuda/__init__.py:118: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 500: named symbol not found (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
  return torch._C._cuda_getDeviceCount() > 0
CUDA is not available.
```
If I run the same script on Windows (conda environment):

```
PyTorch Version: 2.2.2
CUDA is available. Device count: 1
Device 0: NVIDIA GeForce RTX 3060
```
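Since the same check passes under conda on Windows, the failure looks container-specific. Below is a diagnostic sketch I could run inside the container to see which CUDA build the wheel expects and which libcuda.so.1 the loader actually resolves. The idea that a stray libcuda copy (for example, one pulled in by the apt nvidia-cuda-toolkit install above) might shadow the library the NVIDIA runtime injects is my assumption, not something I have confirmed.

```python
import os
import subprocess
import torch

# CUDA version the pip wheel was built against (should print 12.1 here).
print("torch.version.cuda:", torch.version.cuda)
print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH"))

# List every libcuda the dynamic linker can see. If more than one shows up,
# the copy resolved first may not be the driver library injected by the
# NVIDIA container runtime, which could explain "named symbol not found".
out = subprocess.run(["ldconfig", "-p"], capture_output=True, text=True).stdout
for line in out.splitlines():
    if "libcuda" in line:
        print(line.strip())
```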
I know this is a very deep question, but if anyone knows a tutorial or workaround, please let me know.