Onnxruntime LoadLibrary failed with error 126 #21501
Follow the installation instructions for CUDA and cuDNN very precisely. Make sure the libraries from both are available in the PATH that is set when the load failure occurs. You MUST use the instructions for the specific version of CUDA and cuDNN; for example, some versions of cuDNN require zlibwapi.dll to be installed.
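A minimal stdlib-only sketch of the PATH check suggested above. The DLL names in the example are assumptions (they depend on your cuDNN version; `cudnn64_8.dll` is typical for cuDNN 8.x, and `zlibwapi.dll` is the library mentioned above):

```python
import os

def find_dll_on_path(dll_name):
    """Return the first PATH directory containing dll_name, or None."""
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        if directory and os.path.isfile(os.path.join(directory, dll_name)):
            return directory
    return None

# Names below are examples only; check the install guide for your versions.
for name in ("cudnn64_8.dll", "zlibwapi.dll"):
    location = find_dll_on_path(name)
    print(f"{name}: {'found in ' + location if location else 'NOT found on PATH'}")
```

Run this from the same environment (e.g. the same `.bat` file) that launches the app, since that is the PATH the load failure happens under.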
Thank you for your quick answer; I'm going to reinstall them, being very careful. About the PATH: is it related to the parameters in the .bat file that launches the app, or shall I just check whether the libraries are present in the folder indicated in the error message?
Here is some feedback: I followed the instructions to install the compatible CUDA and cuDNN, and even TensorRT, since some people seemed to say it was required and I didn't have it. But the problem remains, and I now get this message: D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported. So I guess I installed the wrong versions (which I thought were the right ones, according to the requirements doc...). How can I be sure which versions go with my program?
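One way to cross-check the hint in that message is to verify that the `bin` directory under `CUDA_PATH` is also on PATH, since setting `CUDA_PATH` alone is not enough for the provider DLLs to load. A stdlib-only sketch (the environment variable names come from the error text; everything else is illustrative):

```python
import os

def cuda_bin_on_path():
    """Report whether CUDA_PATH\\bin is also present on PATH."""
    cuda_path = os.environ.get("CUDA_PATH")
    if not cuda_path:
        return False, "CUDA_PATH is not set"
    bin_dir = os.path.normcase(os.path.join(cuda_path, "bin"))
    entries = [os.path.normcase(p)
               for p in os.environ.get("PATH", "").split(os.pathsep)]
    if bin_dir in entries:
        return True, bin_dir + " is on PATH"
    return False, bin_dir + " is missing from PATH"

ok, detail = cuda_bin_on_path()
print(("OK: " if ok else "Problem: ") + detail)
```

If this reports the directory as missing, adding it to PATH (in the launcher `.bat` or system settings) is worth trying before reinstalling anything.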
I'm also facing the same issue: #21527
I encountered the same issue, but strangely, the same environment works fine on a Linux system. Is there any way to force it to use the GPU for inference? This way, I can determine which DLL is missing.
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
Describe the issue
When I'm using the Stable Diffusion Automatic1111 WebUI with the ControlNet IP-Adapter and ip-adapter-faceid-plus v2 (created by h94 on Hugging Face), I keep getting the following error message:
[E:onnxruntime:Default, provider_bridge_ort.cc:1745 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1426 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\sd.webui\system\python\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
PS: I'm not a dev and not a native English speaker. I followed tutorials and the solutions proposed here, but may have made mistakes.
To reproduce
Launch the Stable Diffusion Automatic1111 WebUI and generate an 832x1216px picture using the JuggernautX model and the DPM++ 2M Karras sampler, with 4.5 CFG and 30 steps.
Activate ControlNet IP-Adapter and load any preprocessor and ip-adapter-faceid-plusv2_sdxl.
Start the generation.
Urgency
The issue is not that urgent, since there is no client involved, but it blocks my project's development.
Platform
Windows
OS Version
10
ONNX Runtime Installation
Other / Unknown
ONNX Runtime Version or Commit ID
CUDA 12
ONNX Runtime API
Python
Architecture
Other / Unknown
Execution Provider
CUDA
Execution Provider Library Version
No response