ONNX runtime crashes randomly in C++ when running a model #19160
Comments
I assume the same model works with the Python bindings. Did you try enabling the logs to get more information about the crash?
OK, I figured out the origin of the issue. It turned out that using a vector<char*> for the input and output names was the problem. I temporarily changed it to arrays of const char* to make it work: array<const char*, 1> input_names = { "input:0" }; Now I'm thinking about a way to provide the names more dynamically.
You should be able to find the information you need here: https://onnxruntime.ai/docs/api/c/struct_ort_1_1detail_1_1_const_session_impl.html.
Thanks, but that does not really solve anything. The problem is that the char* you receive from session.GetInputNameAllocated(i, allocator).get() is deallocated after a short period of time, so storing it in the inputNames vector is not OK because it will end up pointing to a freed memory block. This problem is also pointed out in thread #14157, but that thread does not really provide a satisfying solution either, since it involves moving the unique_ptr returned by GetInputNameAllocated into a shared_ptr, which does not work in my case. Simply copying the char* does not work either, because it is deallocated before you can even do that. I'm out of solutions here.
OK, so it turns out you HAVE to store the shared pointer outside of the loop to keep access to the memory block it points to: std::shared_ptr<char> outputName; Doing this instead of creating a temporary outputName variable works, but it is not really satisfactory, since it involves transforming a unique_ptr into a shared_ptr that you have to keep around, while its only purpose is to keep alive the memory pointed to by the entries of the outputNames vector... If anybody has a better solution, I am open to it.
Describe the issue
I exported a TensorFlow model (frozen, in .pb format) to an ONNX model, which works perfectly fine in Python. Trouble arises when I try to do the same with the ONNX Runtime in C++. When I run the session it randomly crashes, sometimes on the first execution, sometimes on the third or fourth... I use OpenCV to read an image to feed the model, but the problem doesn't seem to come from OpenCV, since the same thing occurs with tensors fed with random values, even when the program is not linked against the OpenCV libraries.
The ONNX model I use is too big to be uploaded here, so here is a download link:
https://evolucare-my.sharepoint.com/:u:/p/a_ducournau/Ee-QIfcU6nJKv5Mpu1oR7v4BfprDsxvoelBZ3cKVt6KnaQ?e=Y3mWGJ
To reproduce
Urgency
No response
Platform
Windows
OS Version
Server 2016
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.16.3
ONNX Runtime API
C++
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response