The PyTorch binaries include the CUDA and cuDNN libraries.


In the days of yore, one had to go through the agonizing process of installing the NVIDIA (GPU) drivers, CUDA, the cuDNN libraries, and PyTorch.

Ok, those days are somewhat over. If you are using the PyTorch binaries, they come with CUDA and cuDNN built in; you still need the NVIDIA driver installed, but not a separate CUDA toolkit or cuDNN install. I have not found great documentation for this, only a thread in the PyTorch discussion forums from two years ago.
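If you want to see which CUDA version a particular binary was built against, here is a minimal sketch (assuming a reasonably recent PyTorch install) -- it works even on a machine without a GPU:

import torch

# CUDA version the PyTorch wheel was built with (None for CPU-only builds)
print("Built with CUDA:", torch.version.cuda)

# cuDNN version bundled with the binary (None if cuDNN is unavailable)
print("Bundled cuDNN:", torch.backends.cudnn.version())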

If you build from source, yes, you will need to install those libraries yourself.

Ok, how to verify? Here goes, assuming you also have fastai installed (lazy at the moment):

import torch
from fastai.vision import *
from fastai.metrics import error_rate

# Can PyTorch see the GPU?
print("Is cuda available?", torch.cuda.is_available())

# Which cuDNN version ships with this binary?
print("Is cuDNN version:", torch.backends.cudnn.version())

# Is the cuDNN backend enabled?
print("cuDNN enabled? ", torch.backends.cudnn.enabled)

# Sanity check: create and print a random tensor
x = torch.rand(5, 3)
print(x)

Here’s the result –

fastai-user@atabb-Precision-T7610:~/testing$ python3 test2.py 
Is cuda available? True
Is cuDNN version: 7603
cuDNN enabled?  True
tensor([[0.7559, 0.9504, 0.9759],
        [0.7765, 0.6080, 0.1925],
        [0.7885, 0.9641, 0.9562],
        [0.4040, 0.7394, 0.5701],
        [0.4912, 0.2765, 0.4441]])
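To go one step beyond "is CUDA available," you can push a small computation onto the GPU and check where the result lives. A minimal sketch, assuming at least one CUDA device is present:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)

if device.type == "cuda":
    # Name of the first GPU PyTorch can see
    print(torch.cuda.get_device_name(0))

# Move a tensor to the selected device and do a computation there
x = torch.rand(5, 3).to(device)
y = x @ x.t()
print(y.device)  # prints cuda:0 if the matrix product ran on the GPU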
