[CUDA does not work] onnxruntime-gpu==1.19.0 #21769
Comments
Maybe #20916 has resurfaced?
@mszhanyi are there any special instructions for cu11, given ORT 1.19 now defaults to cu12?
CUDA 11.x, cuDNN 8.9, onnxruntime 1.19 or 1.18.1. This setting is compatible with PyTorch 2.3.1 (do not use torch 2.4 or later, since that requires cuDNN 9.x).

or

CUDA 12.x, cuDNN 9.x, onnxruntime 1.19 or 1.18.1. This setting is compatible with PyTorch 2.4 for CUDA 12.1 or 12.4 (do not use torch 2.3.1 or older, since that requires cuDNN 8.x).

@prathikr, @MaanavD, please upload 1.19 for cuda 11. It is not available right now. It shall be the following feed:
@prathikr, @MaanavD, please upload 1.19 for cuda 12 to the onnxruntime-cuda-12 feed as well.
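For anyone unsure which of the two combinations their environment matches, a minimal check in Python (a sketch assuming PyTorch and onnxruntime-gpu are installed; these are standard torch/onnxruntime calls):

```python
import torch
import onnxruntime as ort

# CUDA toolkit version the installed PyTorch was built against, e.g. "11.8" or "12.1"
print("torch CUDA:", torch.version.cuda)
# cuDNN version as an integer, e.g. 8902 for a cuDNN 8.9 build
print("torch cuDNN:", torch.backends.cudnn.version())
# onnxruntime version and the execution providers compiled into this build
print("onnxruntime:", ort.__version__)
print("providers:", ort.get_available_providers())
```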
Still don't know how to use onnxruntime 1.19.2 + CUDA 11 with C++; I get the same error this issue reports. Any instructions?
@biggiantpigeon, the default GPU C/C++ package of onnxruntime 1.19.2 is for CUDA 12.x and cuDNN 9.x. I mean the zip file onnxruntime-win-x64-gpu-1.19.2.zip in the release notes is for CUDA 12.x. If you want to use CUDA 11.8 and cuDNN 8.9 with C++, you can try 1.18.0. Note that we do have a Python package of 1.19.2 for CUDA 11.8 and cuDNN 8.9. You can install it like this:
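A sketch of that install, assuming the onnxruntime-cuda-11 Azure DevOps feed described in the ONNX Runtime install docs (verify the exact URL there):

```
pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-11/pypi/simple/
```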
If you want to use the C API with CUDA 11.8, a workaround is to copy the DLLs from the above Python package for CUDA 11.x over the DLLs in onnxruntime-win-x64-gpu-1.19.2.zip. However, this workaround is not tested.
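A rough sketch of that untested workaround, assuming the CUDA 11 Python package from the feed above is installed and the zip has been extracted; both paths are examples to adjust:

```python
import glob
import shutil
from pathlib import Path

import onnxruntime as ort

# DLLs shipped inside the installed Python package (onnxruntime\capi\*.dll),
# built for CUDA 11.8 / cuDNN 8.9 when installed from the cuda-11 feed
capi_dir = Path(ort.__file__).parent / "capi"

# lib\ folder of the extracted onnxruntime-win-x64-gpu-1.19.2.zip (example path)
target_dir = Path(r"C:\onnxruntime-win-x64-gpu-1.19.2\lib")

# Overwrite the CUDA 12 DLLs from the zip with the CUDA 11 ones (untested, as noted above)
for dll in glob.glob(str(capi_dir / "*.dll")):
    shutil.copy2(dll, target_dir)
    print("copied", Path(dll).name)
```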
When releasing the pip onnxruntime-gpu package, can a CUDA 11.x variant also be added, for example:
When installing in this way, you would only need to:
or cuda 12.x and cudnn 9.x
@tianleiwu Requesting this implementation.
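As a purely hypothetical illustration of what such a scheme could look like (these +cuXXX wheels do not exist for onnxruntime-gpu; PyTorch uses a similar local-version convention):

```
pip install onnxruntime-gpu==1.19.0+cu118   # hypothetical CUDA 11.8 / cuDNN 8.x wheel
pip install onnxruntime-gpu==1.19.0+cu121   # hypothetical CUDA 12.x / cuDNN 9.x wheel
```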
Describe the issue
I used onnxruntime-gpu==1.18.1 before; it could load ONNX models and run them on CUDA (11.7).
Today, after upgrading to onnxruntime-gpu==1.19.0, it reports that CUDA does not work.
I used the following code to check, and CUDA itself is available.
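A minimal sketch of such a check, assuming it uses torch.cuda.is_available() and onnxruntime's provider list (the exact check may differ):

```python
import torch
import onnxruntime as ort

print(torch.cuda.is_available())      # True, so the CUDA driver and device are visible
print(ort.get_device())               # "GPU" for an onnxruntime-gpu build
print(ort.get_available_providers())  # includes 'CUDAExecutionProvider'
```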
To reproduce
(1) Upgrade: pip install onnxruntime-gpu==1.19.0
(2) Use providers = ["CUDAExecutionProvider"] and load any ONNX model (see the sketch below).
(3) You can see the red error in the output:
LoadLibrary failed with xxxxxxx capi\onnxruntime_providers_cuda.dll
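A minimal sketch of steps (2)-(3); model.onnx is a placeholder, since the report says any ONNX model triggers the error:

```python
import onnxruntime as ort

# Placeholder model path; any ONNX model reproduces the failure per the report
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider"],
)
# With 1.19.0 on CUDA 11.7/11.8, onnxruntime logs the LoadLibrary error above
# and falls back to CPUExecutionProvider
print(session.get_providers())
```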
Urgency
Hopefully it will remain compatible with CUDA 11.7 / 11.8, like 1.18.1, since the user base is large. Forcing an upgrade to cuDNN 9.x and CUDA 12.x is obviously not a good idea.
I don't know why cuDNN 9.x and CUDA 12.x are required. Are there any new features that need them?
Platform
Windows
OS Version
11
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.19.0
ONNX Runtime API
Python
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
CUDA 11.7 / 11.8