
[Build] My CUDA is 12; which version should I install? torch.__version__ '2.2.0+cu121' #19602

Open
LucasLOOT opened this issue Feb 22, 2024 · 3 comments
Labels: build (build issues; typically submitted using template), ep:CUDA (issues related to the CUDA execution provider)

Comments

@LucasLOOT

Describe the issue

My CUDA is 12.2; which version should I install?

Urgency

No response

Target platform

Ubuntu 22.04

Build script

none

Error / output

2024-02-22 17:02:52.455386347 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1193 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

2024-02-22 17:02:52.455408227 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:747 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
2024-02-22 17:02:52.936640939 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1193 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

2024-02-22 17:02:52.936663879 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:747 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
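
The missing library, libcublasLt.so.11, is the cuBLASLt shared object shipped with CUDA 11, which suggests the installed onnxruntime-gpu wheel was built against CUDA 11 rather than the CUDA 12 toolkit present on the machine. A minimal diagnostic sketch (not part of the original report) to check what is actually installed:

```python
# Diagnostic sketch: inspect the installed onnxruntime build and whether the
# CUDA execution provider is visible to it.
import onnxruntime as ort

print(ort.__version__)                # version of the installed onnxruntime / onnxruntime-gpu
print(ort.get_available_providers())  # 'CUDAExecutionProvider' should appear in this list
print(ort.get_device())               # 'GPU' when a GPU-enabled build is installed
```

If the CUDA provider is listed but still fails to load, the wheel and the local CUDA/cuDNN versions are mismatched; see the install guidance in the answer below.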

Visual Studio Version

No response

GCC / Compiler Version

No response

@LucasLOOT added the label build (build issues; typically submitted using template) on Feb 22, 2024
@LucasLOOT changed the title from "[Build] my cuda is 12.2 which version shold i install" to "[Build] my cuda is 12. which version shold i install , torch.__version__ '2.2.0+cu121'" on Feb 22, 2024
@snnn (Member) commented Feb 22, 2024

@jchen351, would you please help?

@jchen351 (Contributor) commented

We are using CUDA 12.2 as well. PyTorch hadn't supported CUDA 12 yet, the last time I checked.

@tianleiwu (Contributor) commented Feb 22, 2024

@Allfather9,
See https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-12x.
The ONNX Runtime (ORT) build for CUDA 12.x should be compatible with torch 2.2.0+cu121, CUDA 12.x, and cuDNN 8.x.
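
A short verification sketch, assuming the CUDA 12.x build of onnxruntime-gpu has been installed following the install guide linked above; "model.onnx" is a placeholder path, not a file from this issue:

```python
# Verification sketch: create a session and confirm the CUDA execution
# provider actually loads instead of silently falling back to CPU.
import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# If libcublasLt.so.12 and cuDNN 8.x resolve correctly, CUDAExecutionProvider
# is listed first; otherwise onnxruntime logs the load failure (as in the
# error output above) and runs on CPU only.
print(sess.get_providers())
```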

@sophies927 added the label ep:CUDA (issues related to the CUDA execution provider) on Feb 22, 2024