I’m trying to make CUDA available to PyTorch, i.e., I would like the following line of Python to return True: torch.cuda.is_available(). It currently returns the following:
/usr/lib/python3.13/site-packages/torch/cuda/__init__.py:182: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 12090). Please update your GPU driver by downloading and installing a new version from the URL: Download The Latest Official NVIDIA Drivers Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at /build/python-pytorch/src/pytorch-cuda/c10/cuda/CUDAFunctions.cpp:119.)
return torch._C._cuda_getDeviceCount() > 0
False
Torch is installed via the python-pytorch-cuda package, which requires CUDA 13:
import torch
print(torch.version.cuda)
13.0
I’m running the nvidia 575 drivers, for which nvidia-smi reports CUDA 12.9 support.
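As a side note on the warning itself: the "found version 12090" in the message is the driver's CUDA version as an integer. Assuming the usual CUDA encoding of major*1000 + minor*10 (the format returned by cudaDriverGetVersion()), a quick sketch shows it decodes to 12.9, matching what nvidia-smi reports, and below the 13.0 this torch build wants:

```python
# Sketch: decode the integer driver version from the PyTorch warning
# ("found version 12090") into the familiar CUDA x.y form.
# Assumes the CUDA convention of major*1000 + minor*10, as used by
# cudaDriverGetVersion() in the CUDA runtime API.

def decode_cuda_driver_version(v: int) -> str:
    major = v // 1000
    minor = (v % 1000) // 10
    return f"{major}.{minor}"

print(decode_cuda_driver_version(12090))  # driver from the warning -> 12.9
print(decode_cuda_driver_version(13000))  # what this torch build wants -> 13.0
```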
If I upgrade the nvidia drivers to 580, CUDA is available. However, per the current upgrade notice, the system is otherwise completely unusable, with constant screen flicker and failed window refreshes.
Any suggestions as to how I can make this work without doing a custom install of PyTorch or other hackery?
While information from *-fetch type apps might be fine for someone wishing to buy your computer, for support purposes it’s better to ask your system directly:
Output of the inxi command (with appropriate parameters, and formatted according to forum guidelines) will provide information useful to those wishing to help.
I suppose you can wait for 580 to become usable, or perhaps use a different GPU. (I’m on 580 and haven’t noticed any issues, but I’m on unstable and haven’t updated in 3 weeks, so I’ve still got a working version: 580.105.08-3.)
You’re probably best off looking into a custom install anyway; a lot of AI stuff tends to move more slowly than a rolling release does.
There’s python-pytorch-cuda12.9 in the AUR. Presumably you’d use it with a venv, but you haven’t given us any details, so we can’t be more specific.
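If you do go the venv route, a minimal sketch might look like the following. Note the cu129 wheel index URL is an assumption on my part; check the install selector on pytorch.org for the exact command matching your driver's CUDA version:

```shell
# Sketch: install a CUDA 12.9 build of PyTorch into an isolated venv,
# leaving the system python-pytorch-cuda package untouched.
# The cu129 index URL below is an assumption -- verify it on pytorch.org.
python -m venv ~/torch-cu129
source ~/torch-cu129/bin/activate
pip install torch --index-url https://download.pytorch.org/whl/cu129
python -c "import torch; print(torch.cuda.is_available())"
```

This sidesteps the packaged build entirely, so the system driver only has to satisfy the wheel you chose, not whatever CUDA the rolling-release package was built against.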