# Cuda
{:.no_toc}

* TOC
{:toc}

## The goal
Convince PyTorch and Nvidia's GPUs to work together. I assume you have installed the PyTorch and/or TensorFlow version for CUDA (see the Python installation instructions on this site).
Questions to David Rotermund
## Windows

- Download and install the CUDA driver
- Download and install the cuDNN toolkit (you will need to create an account :-( ); a quick version check is sketched below
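As an extra sanity check (not part of the original steps), you can ask PyTorch which CUDA version its build was compiled against and how many GPUs it can see; `torch.version.cuda` and `torch.cuda.device_count()` are standard PyTorch calls, and the printed values are examples only.

```python
import torch

# CUDA version this PyTorch build was compiled against
# (None for a CPU-only build)
print(torch.version.cuda)

# Number of CUDA devices PyTorch can see
print(torch.cuda.device_count())
```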
## Test PyTorch

```python
import torch
torch.cuda.is_available()
```

Expected output:

```
True
```
```python
torch.backends.cuda.is_built()
```

Expected output:

```
True
```
```python
torch.backends.cudnn.version()
```

Expected output (the number encodes the installed cuDNN version and may differ; 8904 corresponds to cuDNN 8.9.4):

```
8904
```
```python
torch.backends.cudnn.enabled
```

Expected output:

```
True
```
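Before hard-coding `cuda:0` as in the next step, a common idiom (my addition, not from the original page) is to fall back to the CPU when no GPU is available:

```python
import torch

# Use the first GPU if one is available, otherwise stay on the CPU
my_device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(my_device)
```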
```python
my_cuda_device = torch.device('cuda:0')
print(torch.cuda.get_device_properties(my_cuda_device))
```

Expected output (values depend on the GPU generation):

```
_CudaDeviceProperties(name='NVIDIA GeForce RTX 3060', major=8, minor=6, total_memory=12011MB, multi_processor_count=28)
```
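If all of the checks above pass, a minimal sketch like the following (my own example, the matrix sizes are arbitrary) confirms that computations actually run on the GPU:

```python
import torch

my_cuda_device = torch.device("cuda:0")

# Create two random matrices directly on the GPU and multiply them
a = torch.rand((1000, 1000), device=my_cuda_device)
b = torch.rand((1000, 1000), device=my_cuda_device)
c = a @ b

# The result lives on the GPU as well
print(c.device)  # expected: cuda:0
```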