
[CUDA does not work] onnxruntime-gpu==1.19.0 #21769

Closed
juntaosun opened this issue Aug 16, 2024 · 7 comments
Labels
ep:CUDA issues related to the CUDA execution provider

Comments

@juntaosun

Describe the issue

I was using onnxruntime-gpu==1.18.1 before; it could load ONNX models and run on CUDA (11.7).

Today, after upgrading to onnxruntime-gpu==1.19.0, it reports that CUDA does not work:

2024-08-16 15:41:33.7983729 [E:onnxruntime:Default, provider_bridge_ort.cc:1992 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1637 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\sunny\anaconda3\envs\pytorch\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

2024-08-16 15:41:33.8037106 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:965 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*, and the latest MSVC runtime. Please install all dependencies as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

Checking with the following code shows that CUDA appears to be available:

import onnxruntime

providers = onnxruntime.get_available_providers()
print(providers)
# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']

To reproduce

(1) Upgrade with pip install onnxruntime-gpu==1.19.0
(2) Use providers = ["CUDAExecutionProvider"] and load any ONNX model (see the sketch below).
(3) You can see the red error in the output:
LoadLibrary failed with xxxxxxx capi\onnxruntime_providers_cuda.dll"
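
A minimal sketch of the repro (the model path is just a placeholder). The build advertises CUDA, but after the LoadLibrary error the session typically falls back to the CPU provider, which session.get_providers() confirms:

import onnxruntime

# Any ONNX model; "model.onnx" is a placeholder path.
session = onnxruntime.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider"],
)

# The build advertises CUDA ...
print(onnxruntime.get_available_providers())
# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']

# ... but after the LoadLibrary error the session falls back to CPU.
print(session.get_providers())
# ['CPUExecutionProvider']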

Urgency

Hopefully it will remain compatible with CUDA 11.7 / 11.8 like 1.18.1 did, as the user base is large. Forcing an upgrade to cuDNN 9.* and CUDA 12.* is obviously not a good idea.

I don't know why cuDNN 9.* and CUDA 12.* are required; are there new features that depend on them?

Platform

Windows

OS Version

11

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.19.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA 11.7 / 11.8

@github-actions github-actions bot added the ep:CUDA label (issues related to the CUDA execution provider) on Aug 16, 2024
@jacobilsoe

Maybe #20916 has resurfaced?

@prathikr
Contributor

@mszhanyi are there any special instructions for cu11 given ORT 1.19 now defaults to cu12?

@juntaosun
Author

@MaanavD

@tianleiwu
Contributor

tianleiwu commented Aug 18, 2024

CUDA 11.x, cuDNN 8.9, onnxruntime 1.19 or 1.18.1

This setting is compatible with PyTorch 2.3.1 (do not use torch 2.4 or later, since that requires cuDNN 9.x).

pip install onnxruntime-gpu==1.18.1

@prathikr, @MaanavD, please upload 1.19 for CUDA 11; it is not available right now. It should come from the following feed:

pip install onnxruntime-gpu==1.19.0 --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-11/pypi/simple/

CUDA 12.x, cuDNN 9.x, onnxruntime 1.19 or 1.18.1

This setting is compatible with PyTorch 2.4 for CUDA 12.1 or 12.4 (do not use torch 2.3.1 or older, since that requires cuDNN 8.x).

pip install onnxruntime-gpu==1.19.0

or

pip install onnxruntime-gpu==1.18.1 --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

@prathikr, @MaanavD, please upload 1.19 for CUDA 12 to the onnxruntime-cuda-12 feed as well.
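
As a rough sanity check before picking a feed (assuming PyTorch is installed in the same environment), something like the following can show which CUDA/cuDNN line the environment is on:

import torch
import onnxruntime

print(onnxruntime.__version__)         # e.g. 1.19.0
print(torch.version.cuda)              # CUDA version torch was built with, e.g. '11.8' or '12.1'
print(torch.backends.cudnn.version())  # cuDNN version torch loads, e.g. 8907 (8.9.x) or 90100 (9.1.x)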

@biggiantpigeon

biggiantpigeon commented Sep 18, 2024

I still don't know how to use onnxruntime 1.19.2 + CUDA 11 with C++; I get the same error as reported in this issue. Any instructions?

@tianleiwu
Contributor

tianleiwu commented Sep 19, 2024

@biggiantpigeon, the default GPU C/C++ package of onnxruntime 1.19.2 is for CUDA 12.x and cuDNN 9.x, i.e. the zip file onnxruntime-win-x64-gpu-1.19.2.zip in the release notes is built for CUDA 12.x. If you want to use CUDA 11.8 and cuDNN 8.9 with C++, you can try 1.18.0.

Note that we do have a Python package of 1.19.2 for CUDA 11.8 and cuDNN 8.9. You can install it like this:

pip install onnxruntime-gpu==1.19.2 --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-11/pypi/simple/

If you want to use the C API with CUDA 11.8, a workaround is to copy the DLLs from the above Python package for CUDA 11.x over the DLLs in onnxruntime-win-x64-gpu-1.19.2.zip. However, this workaround is not tested.
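
For example, an untested sketch of that copy step (both paths below are hypothetical and depend on where the CUDA 11 wheel is installed and where the zip was extracted):

import shutil
from pathlib import Path

# Hypothetical paths: the installed CUDA 11 Python package's capi folder,
# and the lib folder of the extracted onnxruntime-win-x64-gpu-1.19.2.zip.
wheel_capi = Path(r"C:\Users\sunny\anaconda3\envs\pytorch\lib\site-packages\onnxruntime\capi")
cpp_lib = Path(r"C:\onnxruntime-win-x64-gpu-1.19.2\lib")

# Overwrite the CUDA 12 provider DLLs from the zip with the CUDA 11 ones
# from the Python package (untested workaround, as noted above).
for dll in wheel_capi.glob("onnxruntime_providers*.dll"):
    shutil.copy2(dll, cpp_lib / dll.name)
    print("copied", dll.name)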

@juntaosun
Author

juntaosun commented Sep 20, 2024

@biggiantpigeon, the default GPU package is CUDA 12.x and cuDNN 9.x for onnxruntime 1.19.2. If you want to use CUDA 11.8 and cuDNN 8.9 for C++, you can try 1.18.0.

When releasing the onnxruntime-gpu package on pip, could CUDA 11.x builds also be published, for example:

1.19.2+cu118, 1.19.2, 1.19.0+cu118, 1.19.0

When installing this way, you would only need:

pip install onnxruntime-gpu==1.19.2+cu118

or, for CUDA 12.x and cuDNN 9.x:

pip install onnxruntime-gpu==1.19.2

@tianleiwu Requesting this be implemented.
