Install NVIDIA Container Toolkit with Docker 20.10 on Fedora 33
This guide shows how to install NVIDIA Container Toolkit with Docker >= 20.10 on Fedora 33. The video version also shows how to install the latest Docker Engine 20.10 (docker-ce) on Fedora 33. The same method works with Podman, but it causes strange SELinux problems even with a custom generated policy installed, so the package still requires Docker 20.10 or newer. If you want to run a Podman version without the Docker dependencies, let me know and I can build a different version of the nvidia-docker2 package.
What you need before installation:
- Linux Kernel >= 5.9
- Latest NVIDIA Drivers >= 455.45.01
- Docker >= 20.10 (see Docker's official Fedora install guide on docs.docker.com)
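If you want to verify the requirements above before starting, a quick sanity check can be scripted. This is a minimal sketch, assuming `nvidia-smi` and `docker` are already installed; it compares versions using GNU `sort -V`:

```shell
#!/bin/sh
# Check prerequisites: kernel >= 5.9, NVIDIA driver >= 455.45.01, Docker >= 20.10.
# version_ge A B: succeeds if version A >= version B (sort -V handles dotted versions).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kernel=$(uname -r | cut -d- -f1)
version_ge "$kernel" 5.9 && echo "kernel $kernel OK" || echo "kernel $kernel too old"

driver=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)
version_ge "$driver" 455.45.01 && echo "driver $driver OK" || echo "driver $driver too old"

dockerver=$(docker version --format '{{.Server.Version}}')
version_ge "$dockerver" 20.10 && echo "docker $dockerver OK" || echo "docker $dockerver too old"
```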
Install NVIDIA Container Toolkit with Docker on Fedora 33⌗
1. Change to root user⌗
su -
# OR #
sudo -i
2. Install inttf.repo⌗
wget -O /etc/yum.repos.d/inttf.repo https://rpms.if-not-true-then-false.com/inttf.repo
3. Install nvidia-docker2 from inttf repo⌗
dnf install nvidia-docker2
4. Update /etc/nvidia-container-runtime/config.toml config file⌗
Enable the following settings:
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/var/log/nvidia-container-runtime.log"
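If you prefer to script this edit, a `sed` one-liner can toggle both settings. This sketch assumes the stock config.toml ships with `#no-cgroups = false` and a commented-out debug line, which may differ between package versions, so verify the file afterwards:

```shell
# Assumes the stock file contains '#no-cgroups = false' and a commented
# '#debug = "/var/log/nvidia-container-runtime.log"' line; adjust the
# patterns if your package version differs.
sed -i \
    -e 's|^#no-cgroups = false|no-cgroups = true|' \
    -e 's|^#debug = "/var/log/nvidia-container-runtime.log"|debug = "/var/log/nvidia-container-runtime.log"|' \
    /etc/nvidia-container-runtime/config.toml

# Show the resulting active settings for verification.
grep -E '^(no-cgroups|debug) = ' /etc/nvidia-container-runtime/config.toml
```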
5. Restart Docker⌗
systemctl restart docker
Now change back to your normal user and run the following commands as that normal user!
6. Check nvidia-container-cli info⌗
nvidia-container-cli info
Output:
NVRM version: 455.45.01
CUDA version: 11.1
Device Index: 0
Device Minor: 0
Model: GeForce RTX 2060
Brand: GeForce
GPU UUID: GPU-864dc54d-b2e0-92fa-9612-f24aa710d12c
Bus Location: 00000000:01:00.0
Architecture: 7.5
7. Test the NVIDIA Container Toolkit with Docker on Fedora 33⌗
docker run --privileged --gpus all --rm nvidia/cuda:11.1-base nvidia-smi
Output:
Thu Dec 10 18:03:13 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.45.01    Driver Version: 455.45.01    CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 2060    Off  | 00000000:01:00.0  On |                  N/A |
|  0%   49C    P8     6W / 160W |    611MiB /  5926MiB |      2%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
8. Test NVIDIA Container Toolkit with NVIDIA CUDA Sample nbody⌗
docker run --privileged --gpus all --rm nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -benchmark -numbodies=512000
Output:
...
> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
MapSMtoCores for SM 7.5 is undefined. Default to use 64 Cores/SM
GPU Device 0: "GeForce RTX 2060" with compute capability 7.5
> Compute 7.5 CUDA device: [GeForce RTX 2060]
number of bodies = 512000
512000 bodies, total time for 10 iterations: 10104.104 ms
= 259.443 billion interactions per second
= 5188.862 single-precision GFLOP/s at 20 flops per interaction