[Error] [ONNXRuntimeError] : 1 : FAIL : CUDA failure 3: initialization error #21368
Comments
I could not reproduce the issue. Here is what I tried:
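Presumably something along these lines, starting the image reported in the issue (the `--gpus all` flag assumes the NVIDIA container toolkit is installed on the host):

```bash
# Start the image from the issue report with GPU access and an interactive shell.
docker run --rm -it --gpus all pytorch/pytorch:2.1.0-cuda11.8-cudnn8-devel /bin/bash
```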
Then, I ran the following command line inside the docker container:
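For example, installing the onnxruntime-gpu version reported in the issue:

```bash
# Install the GPU build of ONNX Runtime at the version from the issue report.
pip install onnxruntime-gpu==1.15
```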
Finally, I tested an ONNX model using Python:
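A minimal sketch of such a test; the model path `model.onnx` and the single-input assumption are placeholders, not taken from the thread:

```python
# Load an ONNX model on the CUDA execution provider and run one dummy inference.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # CUDAExecutionProvider should be listed first

# Build a dummy input matching the model's first input (dynamic dims set to 1).
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)
print(sess.run(None, {inp.name: dummy})[0].shape)
```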
Everything seems good.
Okay, I will check it again. Thank you.
Hi @tianleiwu, after debugging many times on my local machine and in the docker container, running the same code in both environments, I saw the following.
Here is what I found:
Where do you think this issue comes from? Do you have any suggestions?
I spent more time and tried other serving frameworks. After that, I found that uvicorn works well.
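CUDA failure 3 (initialization error) typically shows up when CUDA is initialized in a parent process and the server then forks worker processes (for example, a prefork server that preloads the app), since a CUDA context does not survive a fork. A minimal sketch of serving the model with uvicorn, assuming a FastAPI app and a placeholder model path (neither is taken from the thread):

```python
# app.py (assumed name): create the ONNX Runtime session inside the serving
# process at startup, so CUDA is initialized after any fork has happened.
import numpy as np
import onnxruntime as ort
from fastapi import FastAPI

app = FastAPI()
session = None


@app.on_event("startup")
def load_model():
    global session
    session = ort.InferenceSession(
        "model.onnx",  # placeholder path
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )


@app.post("/predict")
def predict(values: list[float]):
    inp = session.get_inputs()[0]
    x = np.asarray(values, dtype=np.float32).reshape(1, -1)  # assumes a 2-D float input
    out = session.run(None, {inp.name: x})[0]
    return {"output": out.tolist()}
```

Started with, for example, `uvicorn app:app --host 0.0.0.0 --port 8000`, the session (and therefore CUDA) is created inside the serving process itself rather than in a parent that later forks.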
Describe the issue
I tested my model successfully on my local machine with the same torch version. However, I get an error when I load the model in the docker image.
To reproduce
Docker image: pytorch/pytorch:2.1.0-cuda11.8-cudnn8-devel
onnxruntime-gpu: onnxruntime-gpu==1.15
Following the docs provided, I think I set everything up correctly.
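One quick way to confirm that the GPU package and the CUDA execution provider are visible inside the container (a sanity check sketch, not from the thread):

```python
# Check that the GPU build of onnxruntime is installed and the CUDA EP is registered.
import onnxruntime as ort

print(ort.__version__)                # expect 1.15.x
print(ort.get_device())               # expect "GPU" for the onnxruntime-gpu package
print(ort.get_available_providers())  # expect "CUDAExecutionProvider" in the list
```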
Urgency
No response
Platform
Linux
OS Version
20.04
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.15
ONNX Runtime API
Python
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
Cuda 11.8