onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. #21218
Labels
ep:CUDA
issues related to the CUDA execution provider
model:transformer
issues related to a transformer model: BERT, GPT2, Hugging Face, Longformer, T5, etc.
platform:windows
issues related to the Windows platform
Describe the issue
Getting the runtime error `CUDA_PATH is set but CUDA wasnt able to be loaded` when ONNX Runtime tries to create the CUDA execution provider.
To reproduce
Installed the dependencies for the following project:
https://github.com/fudan-generative-vision/hallo/tree/v1.0.0
On starting inference, the error above is raised.
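As a first diagnostic step, here is a minimal sketch (assuming the `onnxruntime-gpu` package is what was installed) that prints the `CUDA_PATH` environment variable alongside the providers ONNX Runtime can actually load. If `CUDAExecutionProvider` is absent from the list, the CUDA/cuDNN DLLs could not be found or are version-mismatched with the ORT build:

```python
import os

# CUDA_PATH is the variable the error message refers to; it should point
# at the installed CUDA toolkit (e.g. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.x).
print("CUDA_PATH:", os.environ.get("CUDA_PATH"))

try:
    import onnxruntime as ort

    print("onnxruntime version:", ort.__version__)
    # Lists only the providers whose native libraries loaded successfully.
    print("available providers:", ort.get_available_providers())
except ImportError:
    print("onnxruntime is not installed in this environment")
```

If `CUDAExecutionProvider` is listed but session creation still fails, the mismatch is usually between the CUDA/cuDNN versions on `PATH` and the versions the ONNX Runtime build was compiled against.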
Urgency
Low
Platform
Windows
OS Version
11
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.18.0
ONNX Runtime API
Python
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
cuda_12.5.r12.5/compiler.34177558_0