feat: nvidia driver extension #476

Open · wants to merge 1 commit into base: main

Conversation

@jfroy (Contributor) commented Sep 23, 2024

This patch deprecates the NVIDIA toolkit extension and introduces a new nvidia-driver extension (in production/lts versions and open source/proprietary flavors). The NVIDIA container toolkit must be installed independently: via a future Talos extension, via the NVIDIA GPU Operator, or by the cluster administrator.

The extension depends on the new glibc extension (#473) and participates in its filesystem subroot by installing all the NVIDIA components in it.

Finally, the extension runs a service that will bind mount this glibc subroot at /run/nvidia/driver and run the nvidia-persistenced daemon.

This careful setup allows the NVIDIA GPU Operator to utilize this extension as if it were a traditional NVIDIA driver container.
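
Roughly, the service spec looks like the sketch below. This is illustrative only: field names follow the Talos extension service format as I understand it, and the glibc subroot path and daemon flags are placeholders rather than the exact contents of this PR.

```yaml
# Illustrative sketch of the extension's service definition (not the exact
# file in this PR; the glibc subroot path below is an assumed placeholder).
name: nvidia-persistenced
container:
  entrypoint: /usr/bin/nvidia-persistenced
  args:
    - --verbose
  mounts:
    # Expose the glibc subroot (where the NVIDIA userspace components are
    # installed) at the path the GPU Operator expects from a driver container.
    - source: /usr/local/glibc        # assumed subroot location
      destination: /run/nvidia/driver
      type: bind
      options:
        - rbind
        - rshared
        - rw
depends:
  - service: cri
restart: always
```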

--

I've tested this extension on my homelab cluster with the current release of the NVIDIA GPU Operator, letting the operator install and configure the NVIDIA Container Toolkit (with my Go wrapper patch, NVIDIA/nvidia-container-toolkit#700).

This is a more Talos-like way of managing NVIDIA drivers, as opposed to letting the GPU Operator load and unload drivers based on its ClusterPolicy or NVIDIADriver custom resources, as discussed in siderolabs/talos#9339 and #473.

This configuration only works in CDI mode, as the "legacy" runtime hook requires additional libraries that this PR removes.

--

One other requirement on the cluster is to configure the containerd runtime classes. The GPU Operator and the container toolkit installer (which is part of the toolkit and is used by the operator to install the toolkit) have logic to install the runtime classes and patch the containerd config, but this does not work on Talos because the containerd config is synthesized from files that reside on the read-only system partition.
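
For context, the Talos-native way to extend the CRI containerd config is a machine config file dropped into the CRI conf.d directory. A sketch is below; the plugin key and runtime binary path are assumptions that depend on the containerd and toolkit versions in use.

```yaml
# Sketch of a Talos machine config patch that registers an "nvidia" containerd
# runtime handler. The plugin key and BinaryName are assumptions; they depend
# on the containerd version and on where the toolkit installs its runtime.
machine:
  files:
    - path: /etc/cri/conf.d/20-customization.part
      op: create
      content: |
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
          runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
          BinaryName = "/usr/local/nvidia/toolkit/nvidia-container-runtime"
```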

The operator can be installed with its containerd configuration step bypassed/disabled; the cluster administrator is then on the hook to create the runtime class and containerd config themselves (sketched below).
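
Concretely, that means installing the operator with the driver disabled (it comes from this extension) and creating the runtime class by hand. A sketch, with chart value names taken from the gpu-operator Helm chart as I understand them:

```yaml
# gpu-operator Helm values (sketch; value names assumed from the chart):
#   driver:
#     enabled: false   # the driver is provided by this extension at /run/nvidia/driver
#   cdi:
#     enabled: true
#     default: true
---
# RuntimeClass created by the cluster administrator, since the operator's
# containerd patching does not work on Talos. The handler must match the
# runtime registered in the containerd config (see the machine config sketch above).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia
```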

--

There could be a Talos extension for the NVIDIA Container Toolkit. It probably would look a lot like the existing one and maybe even include all the userspace libraries needed for the legacy runtime (basically for nvidia-container-cli). For CDI mode support, a service could invoke nvidia-ctk to generate the CDI spec for the devices present on each node (this is a Go binary that only requires glibc and the driver libraries). However, there is some amount of logic in the GPU Operator to configure the toolkit to work with all the other components that the operator may install and manage on the cluster, so a Talos extension for the toolkit would provide a less integrated, possibly less functional experience.
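
For reference, the kind of CDI spec that nvidia-ctk generates looks roughly like this (heavily trimmed; the device names and library paths are illustrative for this /run/nvidia/driver layout):

```yaml
# Trimmed, illustrative CDI spec of the kind `nvidia-ctk cdi generate` emits
# (typically dropped into /var/run/cdi/ or /etc/cdi/). Real specs enumerate
# many more device nodes, libraries, and hooks.
cdiVersion: "0.5.0"
kind: nvidia.com/gpu
devices:
  - name: "0"
    containerEdits:
      deviceNodes:
        - path: /dev/nvidia0
containerEdits:
  deviceNodes:
    - path: /dev/nvidiactl
    - path: /dev/nvidia-uvm
  mounts:
    - hostPath: /run/nvidia/driver/usr/lib/libcuda.so.1   # driver-root path assumed
      containerPath: /usr/lib/libcuda.so.1
      options: ["ro", "nosuid", "nodev", "bind"]
```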

@jfroy (Contributor, Author) commented Sep 24, 2024

Quick fun update: with NVIDIA/gpu-operator#1007 I can launch pods with an NVIDIA GPU without using the custom NVIDIA runtimes, just the default runc.

There are caveats with bypassing/not using the NVIDIA runtime wrapper, but for customers that don't depend on those behaviors, it's a nice setup/maintenance simplification.

  • The runtime.nvidia.com CDI vendor will not work. This is a vendor that triggers a CDI spec generation on the fly and is implemented by the NVIDIA runtime wrapper.
  • Container image environment variables (see https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/docker-specialized.html) will not work, as runc will not act on them in any way. This will break deployments that rely only on a container image with those environment variables and a runtime class invoking the NVIDIA runtime wrapper (either explicitly or because it is the cluster default). Arguably it is more in the spirit of Kubernetes to require those deployments to list/request a GPU resource (see the example below).
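
For example, instead of relying on NVIDIA_VISIBLE_DEVICES in the image plus an NVIDIA runtime class, a workload would request the GPU explicitly and run under plain runc (illustrative manifest; the image tag is just an example):

```yaml
# The GPU is requested as a resource; the device plugin (in CDI mode) injects
# the device, so no custom runtime class is needed.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04   # example image
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
```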

@jfroy (Contributor, Author) commented Sep 24, 2024

As another note, the current NVIDIA GPU Operator supports more than an "LTS" and "production" version of the driver stack. Per https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/platform-support.html, there are 4 driver versions supported. With the open-vs-proprietary kernel module choice, that would mean 8 distinct NVIDIA driver system extensions per Talos release, if those numbers don't change. Maybe the extension can be templated to reduce duplicated code, but it does show the appeal of potentially taking a different approach to accelerator drivers, at least the complex ones like GPUs and FPGAs (and maybe also smart NICs / DPUs).

@frezbo (Member) commented Oct 22, 2024

@jfroy the glibc changes are good, the tests passed, I think we can continue iterating
