GPU not working #329
Comments
Same here on Linux, using v3.3.4 with an Nvidia card, driver 545.29.02, and CUDA installed. The roop startup output looks correct, but once the process is started only the CPU is used, with very poor performance because the graphics card basically remains idle. I hadn't used the app in a while and updated via `git pull` without touching any settings, but I did check that "CUDA" was selected properly in the settings, and the startup message also confirms it being active. No errors seem to be thrown while processing; it just never uses CUDA and opts for the CPU instead.
Curious to know if you see the same performance with CUDA 11.8? I believe the original roop was tested with that (https://github.com/s0md3v/roop/wiki/2.-Acceleration).
You are only using 2 threads.
That is a correct observation and, with CUDA enabled, also the optimal amount for my ageing card. Since CUDA (for whatever reason) doesn't enable, it indeed runs two threads on the already much slower CPU instead. This brings me back to the issue of CUDA not being used, hence my post in this thread.
That is strange. Did you check whether it was indeed saved to the config.yaml? If it had even tried CUDA without success, there would be an error message and a "fallback to CPU execution provider" or something like that.
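For reference, the setting in question lives in roop-unleashed's config.yaml; a sketch of what to look for (the exact key names are an assumption here and may differ between versions):

```yaml
# config.yaml (sketch; key names may differ between versions)
provider: cuda     # should read "cuda", not "cpu", for GPU execution
max_threads: 2     # the thread count discussed above
```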
After several experiments, I found the main problem was that the core count was set to 2. On my machine (Ryzen 5 + RTX 4080 with 16 GB VRAM) the optimal setting is 6 cores. This seems to refer to the physical cores of the CPU. Just an assumption.
Good points indeed. I checked the config.yaml and saw
Finally I could try with a fresh installation, in an equally fresh venv, and I run into the same problem. It indeed seems like my newer CUDA 12.0 should be changed to 11.8 to be compatible, since I now receive
errors. Update: since this isn't a roop-unleashed related problem but one of the ONNX Runtime, I have to look for updates on that end or downgrade CUDA.
I'm running roop-unleashed with CUDA 12.2, no problems.
The CUDA installation throws errors when Visual Studio isn't installed properly, for example.
You can also try `pip uninstall onnxruntime onnxruntime-gpu`, followed by `pip install onnxruntime-gpu==1.16.1`. As for the install sequence, this has always been my working order:
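Spelling out the uninstall/reinstall step from this comment (this restates the two commands above; it is not the poster's full install order, which was not preserved):

```shell
# Remove any mixed CPU/GPU installs first; having both wheels
# present at once is a common cause of silent CPU fallback.
pip uninstall -y onnxruntime onnxruntime-gpu

# Then install only the GPU wheel, at the version suggested above.
pip install onnxruntime-gpu==1.16.1
```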
Thanks for trying to help. Good points to check indeed; I will get back once I've tried them. Mind you, I'm on Linux. As for my assumption regarding the ONNX Runtime 1.16.2: my understanding is that it's not compatible with CUDA 12+, but ends at 11.8. Anyhow, I will try some more things later and report back. Forgot to add:
lol sorry about that, I was looking at the issue creator's post for the specs. Here's the equivalent solution for Linux regarding "adding to path", actually with the exact same error message. I'd try this before anything else.
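On Linux, the "adding to path" fix usually means making the CUDA shared libraries visible to the dynamic linker; a sketch, assuming a default CUDA install under /usr/local/cuda (adjust the path to your system):

```shell
# Make the CUDA shared libraries (libcublas, libcudnn, ...) visible
# to onnxruntime-gpu. Adjust the path to your actual CUDA install.
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# Persist it for future shells:
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
```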
For anyone looking into this issue at a later point: I wanted to report back, so here I am (albeit very much later). Source: microsoft/onnxruntime#19292 (comment). Happy to say that this then enables the GPU-bound processing with roop again, with all the speed benefits. The fact that this is a nightly build hasn't, so far, led to problems. You basically download the file which suits your Python version (the filename contains "cp311" for Python 3.11, for example) and then install it in the venv where roop is running (`pip install [path/to/file]`). To check the installed version, run this in Python:
and hopefully receive 1.17.0 as output. After that, run roop again and see your GPU at work. :-) Note (see EDIT below!): one can check the latest ONNX Runtime releases here: https://github.com/microsoft/onnxruntime/releases EDIT: For some reason, even the now-final ONNX Runtime 1.17 causes the error and only the nightly release mentioned does not. So if you still encounter it, try the nightly version to make the issue go away.
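A version check of this kind (a sketch, not necessarily the exact snippet the poster used) can be done without crashing even when no wheel is installed:

```python
# Check which onnxruntime wheel is installed in the active venv.
from importlib.metadata import version, PackageNotFoundError

def installed_ort_versions():
    """Return {package: version} for any onnxruntime wheels found."""
    found = {}
    # Nightly wheels may register under a different distribution name.
    for pkg in ("onnxruntime", "onnxruntime-gpu", "ort-nightly-gpu"):
        try:
            found[pkg] = version(pkg)
        except PackageNotFoundError:
            pass
    return found

print(installed_ort_versions())
```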
See https://onnxruntime.ai/docs/install/#requirements for installing onnxruntime-gpu against CUDA 12.*.
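At the time of this thread, that page documented a separate package feed for the CUDA 12 builds of onnxruntime-gpu; a sketch of the install command (verify the exact URL against the current docs, as it may have changed):

```shell
# Install the CUDA 12 build of onnxruntime-gpu from the
# dedicated feed documented on onnxruntime.ai at the time.
pip install onnxruntime-gpu \
  --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```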
Much appreciated info on using the final 1.17. Thanks for that. :-) |
Why does it always use the CPU when I actually chose to use the GPU? It's like it's not working at all.
Specifications of the computer I use.
Ryzen 5 5600x
Nvidia RTX 4080
32GB DDR4
```
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'cudnn_conv_algo_search': 'EXHAUSTIVE', 'device_id': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'has_user_compute_stream': '0', 'gpu_external_alloc': '0', 'enable_cuda_graph': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'tunable_op_tuning_enable': '0', 'tunable_op_max_tuning_duration_ms': '0', 'enable_skip_layer_norm_strict_mode': '0'}, 'CPUExecutionProvider': {}}
inswapper-shape: [1, 3, 128, 128]
```
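That "Applied providers" line shows CUDA is at least registered for the session; to confirm that the installed onnxruntime build itself exposes the CUDA provider, a quick check (a sketch; prints an empty list if onnxruntime isn't importable):

```python
# List the execution providers compiled into the installed onnxruntime.
# If CUDAExecutionProvider is missing here, sessions fall back to CPU.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    providers = []  # onnxruntime not installed in this environment

print(providers)
```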
Details
- What OS are you using?
- Are you using a GPU?
- roop 3.3.4