Running CUDA on Windows Subsystem for Linux (WSL)

Running CUDA on the Windows Subsystem for Linux (WSL) is not a trivial task, since it requires the GPU to be available inside a virtual machine. In this article I go into more detail on how you can get your GPU working in WSL through something NVIDIA calls "Paravirtualization".

šŸ’” Utilizing our graphics card is quite straightforward, but at the time of writing, many of the dependencies do not work out of the box for everyone.

Prerequisites
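Before installing anything inside WSL, it is worth verifying that the Windows-side NVIDIA driver actually exposes the GPU to the distribution. A minimal sketch, assuming WSL 2 with a recent CUDA-capable Windows driver (the `nvidia-smi` binary is mounted into WSL by that driver, so no Linux driver should be installed):

```shell
# Verify the GPU is visible inside WSL. nvidia-smi is provided by the
# Windows driver; if it is missing, update the driver on the Windows side.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi
else
    echo "nvidia-smi not found - install or update the NVIDIA driver on Windows"
fi
```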

Tool Installation

Installing Docker on WSL

Once our WSL environment is set up, we can get started with Docker. Installing Docker is quite trivial, but we should make sure not to use the apt packages; use the installer provided by Docker instead.

curl -fsSL https://get.docker.com | sh
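The convenience script does not start the daemon for us; classic WSL distributions lack systemd, so we start it manually and confirm the client responds. A sketch that degrades gracefully when Docker is not yet installed:

```shell
# Start the Docker daemon manually (WSL distributions traditionally lack systemd)
sudo service docker start 2>/dev/null || echo "docker service not available yet"

# The client version prints even without a running daemon
docker --version 2>/dev/null || echo "docker client not installed yet"
```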

Installing NVIDIA for Docker

Once Docker is installed, we need to install NVIDIA's container toolkit for it. Do this by running the commands below:

# Enable Nvidia for Docker
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
curl -s -L https://nvidia.github.io/libnvidia-container/experimental/$distribution/libnvidia-container-experimental.list | sudo tee /etc/apt/sources.list.d/libnvidia-container-experimental.list
sudo apt update
sudo apt install -y nvidia-docker2
sudo apt install -y nvidia-container-toolkit

# Finally restart the docker service
sudo service docker restart
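The `distribution` variable in the snippet above selects the correct repository path by concatenating the distro's ID and version from `/etc/os-release`. The expansion can be checked directly; on Ubuntu 20.04 it yields `ubuntu20.04`:

```shell
# /etc/os-release defines ID (e.g. "ubuntu") and VERSION_ID (e.g. "20.04")
# for the running distribution; sourcing it in a subshell keeps our
# environment clean.
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
echo "$distribution"
```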

Testing

If the above was done correctly, we should now be able to test our graphics card by starting a benchmarking container:

sudo docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -benchmark

Which should print the nbody benchmark results: the simulation parameters, followed by the measured interactions per second and GFLOP/s of the GPU.
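A lighter-weight check is to run `nvidia-smi` inside a plain CUDA container instead of the benchmark. A sketch; the image tag is an example and may need adjusting to a currently published one:

```shell
# If Docker and the GPU runtime are set up, this prints the same
# nvidia-smi table as on the host; otherwise it reports what is missing.
if command -v docker >/dev/null 2>&1; then
    sudo docker run --rm --gpus=all nvidia/cuda:11.4.0-base-ubuntu20.04 nvidia-smi \
        || echo "container failed to start - check the driver and runtime setup"
else
    echo "docker is not installed"
fi
```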

Conclusion

It's quite trivial to run CUDA workloads in WSL through NVIDIA's paravirtualization support, as long as we make sure the prerequisites are met.