Your node comes with Docker Engine installed, so all Docker functionality should be available to you on your first connection. Begin by pulling the desired image:
docker pull tensorwavehq/hello_world:latest
You can verify that your image was properly pulled by running the following command and checking for your desired image:
docker images
If the pull was successful, your output should look similar to this:
REPOSITORY                 TAG      IMAGE ID       CREATED         SIZE
tensorwavehq/hello_world   latest   359e600f7aac   2 minutes ago   61.2GB
TensorWave's officially supported images can be found here.
Running a Docker Container
In order to run your Docker containers with GPU acceleration, you must mount the devices. For certain applications, you must also add the container to a group to utilize your GPUs.
Using the docker run Command
Here's an example command to mount the devices and configure the correct permissions:
docker run --device /dev/kfd --device /dev/dri --group-add video tensorwavehq/hello_world:latest
--device /dev/kfd
This flag mounts the main compute interface (the AMD Kernel Fusion Driver) to your container.
--device /dev/dri
This flag mounts the Direct Rendering Interface for your GPUs. To restrict access to a single GPU, append /renderD<node>, where <node> is the ID of the render node you want to mount.
--group-add video (optional)
This flag adds your container to the server's video group, which certain applications (including PyTorch) require in order to use the GPUs.
Using docker-compose
The following is an equivalent docker-compose to the command above:
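A minimal sketch of such a docker-compose.yml, mirroring the device and group flags described above (the service name hello-world is illustrative, not required):

```yaml
services:
  hello-world:
    image: tensorwavehq/hello_world:latest
    # Mount the compute interface and the Direct Rendering Interface,
    # matching --device /dev/kfd --device /dev/dri on the docker run command line.
    devices:
      - /dev/kfd
      - /dev/dri
    # Equivalent to --group-add video; needed by applications such as PyTorch.
    group_add:
      - video
```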
To use it, create a docker-compose.yml file in any subdirectory, and within that subdirectory, run:
docker-compose up
Verifying Setup
If everything is set up properly, the run or compose command should complete successfully and print the image's expected output.
To verify that any other container has access to your GPUs, run both rocm-smi and rocminfo inside it. These commands report information about the GPUs mounted to your container.
If either command fails to execute successfully, double-check the device and group flags in your run command or compose file.