Inference fails with libc++abi terminating error #17616
Comments
I made a mistake in the original issue. The platform is macOS.
The error comes from libiomp5, but we don't use OpenMP at all.
Could you please build ONNX Runtime from source in debug mode and get me a stack trace? Here are the instructions: https://onnxruntime.ai/docs/build/inferencing.html After that, just install the newly built package and run lldb -- python test.py. It will give us everything.
I built the latest ONNX Runtime from source and attempted this again. I am not getting the error on the latest build, but it still occurs with v1.15.1. I can attempt to build v1.15.1 from source and get the stack trace for that; I was having a bit of trouble with that build.
We just published 1.16.0. Can you try that one instead?
We have not shipped with OpenMP for some time.
The model works fine with 1.16.0. I am getting no errors. What originally caused the issue in 1.15.1?
Sorry, I don't know. Without a stack trace I cannot tell what the issue is. And even if we knew, we could not go back to patch it; we only provide bug fixes to the latest release.
Describe the issue
Inference with an ONNX model results in a libc++abi terminating message and my shell terminating the process. Screenshot: [not captured in this extraction]
When attempting this on a different machine we get the following error: [screenshot not captured]
And upon setting the environment variable we get the following error: [screenshot not captured]
The ONNX model used was originally a TensorFlow model that was converted to ONNX using the tensorflow-onnx model converter (tf2onnx). The original model (TensorFlow) was created using the nnsmith model generator. The TensorFlow model was converted as follows:
python -m tf2onnx.convert --saved-model <path to tfnet> --output ./model.onnx --opset 16
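Not part of the original report, but a quick way to sanity-check the converted graph before running inference is ONNX's own checker. A minimal sketch, assuming the onnx package is installed and that model.onnx is the output of the command above:

```python
# Sketch: verify the tf2onnx output loads and is structurally valid.
# Assumes `pip install onnx`; model.onnx is the converter's output path.
import onnx

model = onnx.load("./model.onnx")
onnx.checker.check_model(model)  # raises if the graph is invalid
print("checker passed; opset:", model.opset_import[0].version)
```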
To reproduce
Install Dependencies:
pip install onnxruntime
pip install torch
Steps:
Test file:
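The original test.py was not preserved in this extraction. A minimal sketch of what the repro likely looks like, assuming the converted model.onnx from above; the random float32 input and shape handling are placeholders, not the reporter's actual code:

```python
# Hypothetical repro sketch -- the original test.py was not preserved.
# Assumes model.onnx from the tf2onnx conversion above; input dtype/shape
# below are guesses, so query the session for the real ones.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("./model.onnx", providers=["CPUExecutionProvider"])

# Build a random input matching the model's declared input shape,
# substituting 1 for any symbolic (non-integer) dimensions.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
x = np.random.rand(*shape).astype(np.float32)

# On 1.15.1 this run reportedly aborted with "libc++abi terminating".
outputs = session.run(None, {inp.name: x})
print([o.shape for o in outputs])
```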
Urgency
No response
Platform
macOS
OS Version
13.5.2 (22G91)
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.15.1
ONNX Runtime API
Python
Architecture
ARM64
Execution Provider
Default CPU
Execution Provider Library Version
No response