Roop not using Google Colab GPUs #400
Comments
@C0untFloyd got any idea why this is happening?
Google Colab has updated its CUDA version to 12. ONNX Runtime does not support this version yet, but support should arrive soon.
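A quick way to confirm which CUDA version the Colab VM actually ships (standard Colab/NVIDIA tooling, not roop-specific; assumes the default Colab image where these binaries are on the PATH):

```
# Check the CUDA toolkit version bundled with the Colab image
!nvcc --version
# Check the driver version and which GPU is attached
!nvidia-smi
```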
For a temporary fix, go to the command palette (Ctrl+Shift+P) and select "Use fallback runtime version".
I was having problems on my local PC as well: suddenly CUDA wasn't recognized and "Azure" appeared among the execution providers. Going back to onnxruntime-gpu v1.16.2 seemed to solve it. Weird...
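To see which execution providers the installed onnxruntime build actually exposes, the standard onnxruntime API can be queried directly (this is a general diagnostic, not part of roop):

```
import onnxruntime as ort

# CUDAExecutionProvider should appear here if the GPU build loaded correctly.
# "AzureExecutionProvider" showing up is normal for recent builds and does
# not by itself mean CUDA is broken.
print(ort.get_available_providers())
print(ort.get_device())  # "GPU" if the CUDA provider is usable
```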
The previous runtime version isn't available anymore. Any solutions?
Install the latest official onnxruntime-gpu.
Hi. Could you tell me how to do this, or is there a link you used? Thx.
pip install --upgrade onnxruntime-gpu
Hi Marat. This may sound pretty stupid, but where do I have to put this line of code? I use the roop-unleashed Colab version. Thx.
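For reference (not an official roop-unleashed instruction): in a Colab notebook, shell commands run from a code cell prefixed with `!`. A minimal sketch would be a new cell near the top of the notebook, run before the cell that starts roop:

```
# New code cell: upgrade the package, then restart the runtime
# so the new onnxruntime-gpu build is picked up.
!pip install --upgrade onnxruntime-gpu
```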
Tried this and several versions like 1.16.2, 1.17, and 1.17.1, with no success.
Setup (after cloning):

```
import os

!pip install pyyaml -r requirements.txt

# Swap in the CUDA 12 build of onnxruntime-gpu when an NVIDIA GPU is present
if os.path.isfile("/usr/bin/nvidia-smi") or os.path.isfile("/opt/bin/nvidia-smi"):
    !pip uninstall -y onnxruntime onnxruntime-gpu
    !pip install onnxruntime-gpu==1.17.0 --extra-index-url https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple
```

Run:

```
import os
import yaml

roop_config = {
    'clear_output': True,
    'live_cam_start_active': False,
    'max_threads': 32,
    'output_image_format': 'png',
    'output_video_codec': 'libx264',
    'output_video_format': 'mp4',
    'video_quality': 14,
    'frame_buffer_size': 6,
    'selected_theme': 'Default',
    'server_name': '',
    'server_port': 0,
    'server_share': True
}

# Select the execution provider based on whether an NVIDIA GPU is visible
if os.path.isfile("/usr/bin/nvidia-smi") or os.path.isfile("/opt/bin/nvidia-smi"):
    roop_config["provider"] = "cuda"
    print("I: Using CUDA")
else:
    roop_config["provider"] = "cpu"

with open("./config.yaml", "w") as config_file:
    config_file.write(yaml.dump(roop_config))

!python run.py
```
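One way to sanity-check the install before launching (a minimal sketch using standard onnxruntime calls; not part of the original notebook code):

```
import onnxruntime as ort

# Fail fast if the CUDA provider did not load; otherwise run.py
# silently falls back to CPU and you see no GPU utilization.
assert "CUDAExecutionProvider" in ort.get_available_providers(), \
    "CUDA provider missing - check the onnxruntime-gpu install above"
```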
What about on Kaggle?
Is there no way to avoid the message about disallowed code that gets the session disconnected?
@michidaeron @htmlcodepreview @Marv761125 - I updated and tested the new setup code. Please test.
It works for me now.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity. |
Describe the bug
I have Colab Premium. Copied the notebook and ran it on V100 hardware. No GPU utilization. Tried it on an A100, same result.
Help?
Got this error in the console:
[E:onnxruntime:Default, provider_bridge_ort.cc:1480 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1193 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcufft.so.10: cannot open shared object file: No such file or directory
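This error indicates the installed onnxruntime CUDA provider was built against CUDA 11 (which provides libcufft.so.10), while the VM ships CUDA 12 (whose cuFFT uses a newer soname). A quick check with standard Linux tooling, run as a notebook cell, shows which cuFFT libraries the dynamic linker can actually find:

```
# List the cuFFT libraries visible to the dynamic linker; a CUDA 11
# onnxruntime-gpu build needs libcufft.so.10 to be present here.
!ldconfig -p | grep libcufft
```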