# Docker Quickstart

***

### Pulling a Docker Image

Your node comes with Docker Engine installed, so all Docker functionality should be available to you on your first connection. Begin by pulling the desired image:

```bash
docker pull tensorwavehq/hello_world:latest
```

You can verify that your image was properly pulled by running the following command and checking for your desired image:

```bash
docker images
```

If the pull was successful, your output should look similar to this:

```
REPOSITORY                      TAG            IMAGE ID       CREATED          SIZE
tensorwavehq/hello_world        latest         359e600f7aac   2 minutes ago   61.2GB
```
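If many images are present, `docker images` also accepts a repository name as a positional filter, which narrows the listing to just the image you pulled:

```bash
# Show only images from the tensorwavehq/hello_world repository
docker images tensorwavehq/hello_world
```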

{% hint style="info" %}
TensorWave's officially supported images can be found [here](https://hub.docker.com/u/tensorwavehq).
{% endhint %}

***

### Running a Docker Container

To run your Docker containers with GPU acceleration, you must mount the GPU device files into the container. For certain applications, you must also add the container to the `video` group to use the GPUs.

#### Using the docker run Command

Here's an example command to mount the devices and configure the correct permissions:

```bash
docker run --device /dev/kfd --device /dev/dri --group-add video tensorwavehq/hello_world:latest
```

The usage of each option is as follows:

* `--device /dev/kfd`
  * Mounts the AMD Kernel Fusion Driver (KFD), the main ROCm compute interface, into your container.
* `--device /dev/dri`
  * Mounts the Direct Rendering Infrastructure (DRI) device files for your GPUs. To restrict the container to a single GPU, mount `/dev/dri/renderD<node>` instead, where `<node>` is the render node ID of the GPU you want to expose.
* `--group-add video` (optional)
  * Adds the container's user to the host's `video` group, which is required by certain applications (including PyTorch).

#### Using docker-compose

The following `docker-compose.yml` is equivalent to the command above:

```yaml
version: '3'
services:
  hello_world:
    image: tensorwavehq/hello_world:latest
    devices:
      - /dev/kfd
      - /dev/dri
    group_add:
      - video
```

To use it, save the file as `docker-compose.yml` in a directory of your choice and, from that directory, run:

```bash
docker compose up
```
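To run the container in the background and inspect its output afterwards, the standard Compose lifecycle commands apply:

```bash
# Start the service in detached mode
docker compose up -d

# Follow the container's log output
docker compose logs -f hello_world

# Stop and remove the container when finished
docker compose down
```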

#### Verifying Setup

If everything is configured correctly, the output of either the `docker run` or `docker compose` command should be similar to:

```
CUDA available: True
Number of GPUs: 8
GPU 0: AMD Instinct MI300X
GPU 1: AMD Instinct MI300X
GPU 2: AMD Instinct MI300X
GPU 3: AMD Instinct MI300X
GPU 4: AMD Instinct MI300X
GPU 5: AMD Instinct MI300X
GPU 6: AMD Instinct MI300X
GPU 7: AMD Instinct MI300X
```

For other containers, verify that your Docker container has access to your GPUs by running **both** `rocm-smi` and `rocminfo` inside the container. These commands report information about the GPUs mounted into it.
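For example, a one-off check can run `rocm-smi` directly in a throwaway container. A sketch, where `rocm/dev-ubuntu-22.04` stands in for any image that has the ROCm tools installed:

```bash
# Run rocm-smi inside a temporary container to confirm the GPUs are visible;
# --rm removes the container once the command exits
docker run --rm --device /dev/kfd --device /dev/dri --group-add video \
  rocm/dev-ubuntu-22.04 rocm-smi
```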

{% hint style="warning" %}
If one or both of these commands fail to execute successfully, double-check the `docker run` or `docker compose` configuration described above.
{% endhint %}

***

