This repository provides a library and a simple CLI utility to automatically configure GNU/Linux containers leveraging NVIDIA hardware.
The implementation relies on kernel primitives and is designed to be agnostic of the container runtime.
Refer to the repository configuration for your Linux distribution.
With Docker:

```sh
# Generate packages for a given distribution in the dist/ directory
make docker-ubuntu:16.04 TAG=rc.2
```
Without Docker:

```sh
make install

# Alternatively, to customize the installation paths
DESTDIR=/path/to/root make install prefix=/usr
```
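`DESTDIR` and `prefix` compose in the usual GNU staged-install fashion: files are written under `$DESTDIR$prefix`, so the staged tree mirrors the final layout. A minimal sketch with a stand-in artifact (the file name below is illustrative, not part of this project's actual install list):

```shell
# Illustration only: how DESTDIR and prefix compose during a staged install.
# libfoo.so.1 is a stand-in artifact, not this project's real install output.
DESTDIR=$(mktemp -d)
prefix=/usr
echo 'stand-in' > libfoo.so.1
install -D -m 644 libfoo.so.1 "$DESTDIR$prefix/lib/libfoo.so.1"
ls "$DESTDIR/usr/lib"   # the staged tree mirrors the eventual /usr layout
```

Packagers typically point `DESTDIR` at a scratch directory and archive its contents, leaving the build host untouched.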
Refer to the nvidia-container-runtime project.
```sh
# Set up new mount and PID namespaces
cd $(mktemp -d) && mkdir rootfs
sudo unshare --mount --pid --fork

# Set up a rootfs based on Ubuntu 16.04 inside the new namespaces
curl http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04-core-amd64.tar.gz | tar -C rootfs -xz
useradd -R $(realpath rootfs) -U -u 1000 -s /bin/bash nvidia
mount --bind rootfs rootfs
mount --make-private rootfs
cd rootfs
```
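The `curl | tar` pipeline streams the base image straight into `rootfs/`: `tar -C` changes into the target directory before extracting, so nothing is unpacked into the working directory itself. A toy archive shows the same behavior without the download (file names here are illustrative):

```shell
# Toy demonstration of tar -C: archive contents land in rootfs/,
# not in the current directory. File names are illustrative.
cd "$(mktemp -d)" && mkdir rootfs
echo 'hello' > etc-motd
tar -czf base.tar.gz etc-motd
cat base.tar.gz | tar -C rootfs -xz   # same shape as the curl | tar pipeline
ls rootfs                             # prints: etc-motd
```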
```sh
# Mount standard filesystems
mount -t proc none proc
mount -t sysfs none sys
mount -t tmpfs none tmp
mount -t tmpfs none run
```
```sh
# Isolate the first GPU device along with basic utilities
nvidia-container-cli --load-kmods configure --no-cgroups --utility --device 0 .
```
```sh
# Change into the new rootfs
pivot_root . mnt
umount -l mnt
exec chroot --userspec 1000:1000 . env -i bash

# Run nvidia-smi from within the container
nvidia-smi -L
```
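The final `exec chroot --userspec 1000:1000 . env -i bash` line drops to the unprivileged `nvidia` user (UID/GID 1000) and starts bash with a scrubbed environment. The scrubbing comes from `env -i`, which can be illustrated on its own (the variable name below is an arbitrary example):

```shell
# env -i starts the child with an empty environment,
# so variables exported here do not leak into it.
export HOST_SECRET=do-not-leak   # HOST_SECRET is an arbitrary example name
env -i sh -c 'echo "HOST_SECRET=[$HOST_SECRET]"'   # prints: HOST_SECRET=[]
```

This keeps host environment variables from leaking into the container's shell.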
This project is released under the BSD 3-clause license.
Additionally, this project can be dynamically linked with libelf from the elfutils package (https://sourceware.org/elfutils), in which case additional terms apply.
Refer to NOTICE for more information.
A signed copy of the Contributor License Agreement needs to be provided to [email protected] before any change can be accepted.
- Please report bugs and feature requests by filing a new issue
- You can contribute by opening a pull request