Fedora CoreOS official support in all components #696
Comments
Out-of-the-box support for FCOS would be great. At this time, even having invested some time in deploying via helm with various components disabled and/or flagged as host-installed, I have been unable to get this working. As a result I've had to fall back to a model where I am running …
Hi all, I also tried to get the GPU operator working by using some other Docker images, and I think I ended up with the same problem. Looking at the logs from dfateyev, I see the same error: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli.real: ldcache error: open failed: /run/nvidia/driver/sbin/ldconfig.real: no such file or directory. I tried to symlink ldconfig to ldconfig.real as in the docs, but since the OS is immutable it doesn't work. Maybe I'm barking up the wrong tree here, but maybe not. :)
Here is a similar problem and symlinking solved it: |
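For what it's worth, one workaround in that direction is pointing the container CLI at the ldconfig path that actually exists on Fedora (there is no ldconfig.real there). This is only a sketch under assumptions: it presumes an nvidia-container-toolkit version that reads /etc/nvidia-container-runtime/config.toml on the node, and that the operator's toolkit container does not immediately overwrite the edit:

```bash
# Sketch of the ldconfig-path workaround (assumption: the node's
# nvidia-container-toolkit reads /etc/nvidia-container-runtime/config.toml).
# Fedora ships /sbin/ldconfig rather than ldconfig.real; the leading "@"
# marks the value as a host-side path.
sudo sed -i 's|ldconfig = "@/sbin/ldconfig.real"|ldconfig = "@/sbin/ldconfig"|' \
    /etc/nvidia-container-runtime/config.toml
```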
Hi, I managed to get the GPU operator working; the driver is not working yet. It was related to the ldconfig.real binary. To fix it, you simply need to run a different image for the toolkit pod: helm install --wait --generate-name … I can successfully build a driver container, but it fails to install the driver due to missing kernel headers. The ones that are needed in my case just don't exist. Weird. But I guess you guys somehow managed to build a container with the drivers, so it must be doable. And once that is working, the GPU operator is fully operational for Fedora CoreOS.
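The helm command in the comment above is truncated, so the exact toolkit image used is not shown. Purely as an illustration of the idea, overriding the toolkit image through the chart values would look roughly like this; the repository/image/version keys exist in the gpu-operator chart, but the tag below is a placeholder, not the one from the comment:

```bash
# Hypothetical sketch: point the toolkit pod at a different container-toolkit
# image. The tag is a placeholder; check the chart's values.yaml for the exact
# keys in your chart version and pick a tag that matches your cluster.
helm install --wait --generate-name \
    -n gpu-operator --create-namespace \
    nvidia/gpu-operator \
    --set toolkit.repository=nvcr.io/nvidia/k8s \
    --set toolkit.image=container-toolkit \
    --set toolkit.version=v1.14.6-ubuntu20.04
```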
As noted in the official documentation, there is currently no support for recent Fedora CoreOS-based workers in Kubernetes. There are no official GPU driver images published, and there are no official recommendations on how to deploy the GPU operator to Kubernetes with Fedora CoreOS hosts.
We currently run Kubernetes solutions on OpenStack (featuring Fedora CoreOS and containerd). In order to use the GPU operator functionality, we have to resort to various hacks and workarounds, along with a custom GPU driver image: running the GPU driver image and the toolkit on the nodes separately, outside of Kubernetes scope, and then deploying the GPU operator in Kubernetes with the already-present components disabled. This deployment approach is pretty cumbersome.
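For context, that workaround roughly corresponds to the chart switches for skipping operator-managed components. A minimal sketch, assuming the driver and the container toolkit are already installed on the FCOS hosts outside of Kubernetes:

```bash
# Sketch: deploy the operator while skipping the driver and toolkit pods,
# because both are already provisioned on the hosts outside of Kubernetes.
helm install --wait --generate-name \
    -n gpu-operator --create-namespace \
    nvidia/gpu-operator \
    --set driver.enabled=false \
    --set toolkit.enabled=false
```

Even with these switches, getting the host-side pieces in place on an immutable OS is the cumbersome part described above.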
We are interested in official Fedora CoreOS support, both in the operator and in the GPU driver.
In the ideal scenario, we would like to install the GPU operator and have all the components work in Kubernetes with containerd out of the box. We understand that we might need a custom GPU driver image, but without even initial CoreOS native support it is hard to prepare one.
There have been several requests for better Fedora CoreOS driver image support, e.g. #34 and #8, and we would like to extend this request to better support across all GPU operator components.
We understand that "support in all components out-of-box" is a pretty broad subject — but we could start at least from something, gradually improving and testing the functionality.
1. Quick Debug Information
2. Issue or feature description
We have prepared a custom (unofficial) GPU driver image to use the operator functionality: the Fedora image from the repo does not work out of the box, but can be started with workarounds. However, nvidia-operator-validator cannot finish the deployment validation anyway.

3. Steps to reproduce the issue
- Run helm install --wait --generate-name -n gpu-operator --create-namespace nvidia/gpu-operator --set driver.usePrecompiled=true --set driver.version="550.54.15" --set driver.repository="docker.io/dfateyev", where "dfateyev/driver" is a custom GPU driver image for this Kubernetes cluster;
- nvidia-smi cannot address the GPU device files: we need to prepare them explicitly like this (see the sketch after this list);
- nvidia-operator-validator fails to start properly (see the attached logs below).
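The device-node preparation referenced above usually follows the pattern below. This is only an illustrative sketch: the link behind "like this" is not preserved here, the major numbers are conventions, and the exact set of nodes depends on the driver version and GPU count.

```bash
# Sketch: create the NVIDIA device nodes manually when nothing else does.
# /dev/nvidiactl and the per-GPU nodes conventionally use major number 195;
# nvidia-uvm gets a dynamically assigned major, so read it from /proc/devices.
sudo mknod -m 666 /dev/nvidiactl c 195 255
sudo mknod -m 666 /dev/nvidia0   c 195 0   # repeat per GPU: nvidia1, nvidia2, ...

UVM_MAJOR=$(awk '/nvidia-uvm/ {print $1; exit}' /proc/devices)
sudo mknod -m 666 /dev/nvidia-uvm       c "$UVM_MAJOR" 0
sudo mknod -m 666 /dev/nvidia-uvm-tools c "$UVM_MAJOR" 1
```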
4. Information to attach

Attached logs: issue-696-logs.zip