# PyTorch Quickstart

***

### Using ROCm Devices

PyTorch is officially supported by AMD for ROCm, and should be plug-and-play once set up correctly.

{% hint style="info" %}
Learn more about installing PyTorch with ROCm [here](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/3rd-party/pytorch-install.html)
{% endhint %}

AMD GPU devices are configured and accessed in *exactly* the same way as NVIDIA GPU devices. This means any workflow that sets the PyTorch device as follows will work out-of-the-box, provided PyTorch can detect your GPUs:

```python
torch.device("cuda")
```
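In practice, it's common to fall back to the CPU when no GPU is visible. A minimal sketch:

```python
import torch

# "cuda" covers both NVIDIA and ROCm builds of PyTorch;
# fall back to the CPU when no GPU is detected.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move a small tensor to the selected device to confirm it works.
x = torch.ones(3, device=device)
print(x.device.type)
```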

***

### Debugging

To test whether your system is configured to use PyTorch with GPU acceleration, create a new file to hold a couple of debugging commands:

<pre class="language-bash"><code class="lang-bash"><strong>mkdir pytorch-hello-world
</strong><strong>cd pytorch-hello-world
</strong><strong>nano debug.py
</strong></code></pre>

The following code prints a boolean indicating whether PyTorch detects your GPUs:

<pre class="language-python"><code class="lang-python"><strong>import torch
</strong><strong>print(torch.cuda.is_available())
</strong></code></pre>

Now, go ahead and run your file using:

```bash
python3 debug.py
```

If this does *not* print `True`, there are a couple of things to check.

#### PyTorch Setup

One reason the check above may fail is that a non-ROCm build of PyTorch is installed. To verify, add the following line to your debugging file:

```python
print(torch.__version__)
```

You should get an output similar to:

```
[torch_version]a0+git[hash]
```

Or:

```
[torch_version].dev[date]+rocm[rocm_version]
```
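If you'd rather check for a ROCm wheel programmatically, the version string is usually enough: ROCm wheels embed `rocm` in `torch.__version__`, while CUDA wheels use a `+cu…` suffix. A small helper (the function name is ours, not part of PyTorch; note that source builds with a `+git` suffix, like the first format above, won't carry the tag, so this heuristic only covers wheel installs):

```python
def is_rocm_build(version: str) -> bool:
    """Return True when a PyTorch version string looks like a ROCm wheel."""
    # ROCm wheels embed "rocm" in the local version segment,
    # e.g. "2.4.0.dev20240612+rocm6.1"; CUDA wheels use e.g. "+cu121".
    return "rocm" in version

print(is_rocm_build("2.4.0.dev20240612+rocm6.1"))  # True
print(is_rocm_build("2.3.1+cu121"))                # False
```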

If the output does not indicate a ROCm-enabled PyTorch build, you must reinstall PyTorch with the correct version. One way to do this is to install a nightly ROCm wheel:

```bash
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.1/
```

#### Checking ROCm Setup

To ensure ROCm is properly configured, run the following command:

```bash
rocm-smi
```

The output should be similar to (depending on your number of devices):

```
========================================= ROCm System Management Interface =========================================
=================================================== Concise Info ===================================================
Device  [Model : Revision]    Temp        Power     Partitions      SCLK    MCLK    Fan  Perf  PwrCap  VRAM%  GPU%  
        Name (20 chars)       (Junction)  (Socket)  (Mem, Compute)                                                  
====================================================================================================================
0       [0x74a1 : 0x00]       45.0°C      142.0W    NPS1, SPX       132Mhz  900Mhz  0%   auto  750.0W    0%   0%    
        AMD Instinct MI300X                                                                                         
1       [0x74a1 : 0x00]       42.0°C      135.0W    NPS1, SPX       132Mhz  900Mhz  0%   auto  750.0W    0%   0%    
        AMD Instinct MI300X                                                                                         
2       [0x74a1 : 0x00]       42.0°C      137.0W    NPS1, SPX       132Mhz  900Mhz  0%   auto  750.0W    0%   0%    
        AMD Instinct MI300X                                                                                         
3       [0x74a1 : 0x00]       48.0°C      141.0W    NPS1, SPX       138Mhz  900Mhz  0%   auto  750.0W    0%   0%    
        AMD Instinct MI300X                                                                                         
4       [0x74a1 : 0x00]       46.0°C      142.0W    NPS1, SPX       132Mhz  900Mhz  0%   auto  750.0W    0%   0%    
        AMD Instinct MI300X                                                                                         
5       [0x74a1 : 0x00]       40.0°C      137.0W    NPS1, SPX       132Mhz  900Mhz  0%   auto  750.0W    0%   0%    
        AMD Instinct MI300X                                                                                         
6       [0x74a1 : 0x00]       47.0°C      142.0W    NPS1, SPX       132Mhz  900Mhz  0%   auto  750.0W    0%   0%    
        AMD Instinct MI300X                                                                                         
7       [0x74a1 : 0x00]       42.0°C      132.0W    NPS1, SPX       132Mhz  900Mhz  0%   auto  750.0W    0%   0%    
        AMD Instinct MI300X                                                                                         
====================================================================================================================
=============================================== End of ROCm SMI Log ================================================
```

If you do not see output like this, ROCm is not properly installed. More commonly, however, problems surface when running the following command:

```bash
rocminfo
```

The output should be of the format:

<pre data-full-width="false"><code>ROCk module version 6.7.0 is loaded
<strong>=====================    
</strong>HSA System Attributes    
=====================    
<strong>....
</strong></code></pre>

If this command errors, the most likely causes are that your devices are not properly mounted, or that your user is not a member of the `render` group.
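A quick way to check group membership is a sketch like the following (assumes a typical Linux setup; group names can vary by distribution):

```shell
# List the current user's groups and look for "render".
if id -nG "$USER" | tr ' ' '\n' | grep -qx render; then
  echo "user is in the render group"
else
  # Adding the group requires sudo and takes effect on the next login.
  echo "missing: run  sudo usermod -aG render $USER  and log back in"
fi
```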

***

### Teardown

Navigate back to your base directory and remove your `pytorch-hello-world` folder:

```bash
cd ~
rm -rf pytorch-hello-world/
```

