PyTorch is officially supported by AMD for ROCm, and should be plug-and-play once set up correctly.
Learn more about installing PyTorch with ROCm in the official PyTorch installation documentation.
AMD GPU devices are configured and accessed the exact same way as NVIDIA GPU devices. This means that any workflow that sets the PyTorch device the following way will work out-of-the-box, assuming PyTorch can detect your GPUs:
torch.device("cuda")
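As a minimal sketch of this, the standard device-selection idiom works unchanged on ROCm builds, because ROCm devices are exposed through the "cuda" device type:

```python
import torch

# Select the GPU if PyTorch can see one, otherwise fall back to the CPU.
# This is identical on CUDA and ROCm builds of PyTorch.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors and models are moved to the device the same way on both backends.
x = torch.ones(3, device=device)
print(x.device.type)  # "cuda" on a GPU machine, "cpu" otherwise
```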
Debugging
To test whether your system is configured to use PyTorch with GPU acceleration, start by creating a new file (for example, debug.py) to run a couple of debugging commands.
The following code will return a boolean indicating whether your GPUs are being detected by PyTorch:
import torch
print(torch.cuda.is_available())
Now, go ahead and run your file using:
python3 debug.py
If this does not return True, there are a couple of things to check.
PyTorch Setup
One reason the check above may fail is that the wrong build of PyTorch is installed. To verify, add the following line to your debugging file:
print(torch.__version__)
You should get an output similar to:
[torch_version]a0+git[hash]
Or:
[torch_version].dev[date]+rocm[rocm_version]
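You can also check for a ROCm build programmatically rather than eyeballing the version string. ROCm wheels of PyTorch report a HIP version, while CUDA wheels leave it unset:

```python
import torch

# ROCm builds populate torch.version.hip; CUDA builds leave it as None.
print(torch.__version__)
print("ROCm build" if torch.version.hip is not None else "not a ROCm build")
```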
If this output is not a ROCm-enabled PyTorch build, you must reinstall PyTorch with the correct version. One way to do this would be:
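One possible approach is to remove the existing build and install the ROCm wheels from PyTorch's package index. The index URL below is a sketch; the ROCm version in it (rocm6.0) is an assumption, so substitute the one matching your installed ROCm release per the selector on pytorch.org:

```shell
# Uninstall the existing (non-ROCm) wheels first so pip does not keep them around.
pip3 uninstall -y torch torchvision torchaudio

# Install the ROCm wheels. Replace rocm6.0 with the index matching your
# installed ROCm version (see the pytorch.org "Get Started" selector).
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0
```

Afterwards, rerun debug.py and confirm that torch.cuda.is_available() now returns True.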