chaihahaha changed the title from "RunAsync output gibberish when multiple RunAsync infering in parallel." to "RunAsync output gibberish when multiple RunAsync infer in parallel." on Nov 21, 2023
It seems that adding a std::this_thread::sleep_for to sleep for 50 ms after calling RunAsync mitigates the problem. It is still mysterious that input_tensors sometimes appears to be corrupted when read from the callback.
This is a lifetime problem with the Ort::Value objects. Keeping a global object list so the memory is not released while a run is in flight, and ensuring that multiple threads never access the same Ort::Value, solves it.
Describe the issue
Sometimes the following error occurs:
[E:onnxruntime:, sequential_executor.cc:514 onnxruntime::ExecuteKernel] Non-zero status code returned while running BeamSearch node
and the output tensor IDs are negative, which cannot correspond to valid tokens.
To reproduce
Use any mT5 onnx model and the following c++ code:
Test with the following command:
Urgency
No response
Platform
Windows
OS Version
Win10
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.16.2
ONNX Runtime API
C++
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response