This guide will help you get up and running integrating your containers with a single (local) GPU directly on a Linux workstation.
_Note: It does not cover the steps needed to connect to multiple GPUs over a network, nor will it help users of Windows or macOS (the toolkit mentioned is incompatible with those operating systems)._
Assuming everything is set up correctly, calling nvidia-smi from within your container will print output similar to the familiar nvidia-smi summary table (driver version, CUDA version, and GPU status) in your terminal window.
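A quick way to run that check is shown below. This is a sketch, not the only valid invocation: the image tag is one of the official `nvidia/cuda` base tags and may differ from whatever image you are using, and the `--gpus all` flag assumes Docker with the NVIDIA Container Toolkit installed (Podman users with CDI configured would use `--device nvidia.com/gpu=all` instead).

```shell
# Verify GPU access from inside a container.
# Assumes Docker plus the NVIDIA Container Toolkit; the image tag below
# is an example official CUDA base image -- substitute your own if needed.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If the command prints the nvidia-smi table rather than an error about drivers or devices, the toolkit is wired up correctly.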
If the image you are testing has a default command that prevents you from getting to bash immediately when you run it, you can override that command with an additional argument placed after the image name. The full path to the command is typically /usr/bin/nvidia-smi.
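As a sketch of that override, the hypothetical image name `my-cuda-image` below stands in for whatever image you are testing; everything after the image name replaces its default command.

```shell
# Override the image's default command with an explicit one.
# "my-cuda-image" is a placeholder -- use your own image name.
docker run --rm --gpus all my-cuda-image /usr/bin/nvidia-smi

# Or drop straight into an interactive shell instead:
docker run --rm --gpus all -it my-cuda-image /bin/bash
```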
#cuda #linux #podman #nvidia #docker