Describe the issue

I have generated an ONNX model from the YOLO-NAS-Pose nano model, which can be found here. To generate the ONNX file I added the following code cell.
Besides this, I have merged this model with the following (simple) postprocessing model, which just takes the first of the 120 predictions the model produces for each image in the batch. I will paste here the TensorFlow code used to generate its ONNX file.
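The TensorFlow code from the issue isn't shown above. For reference, the selection that postprocessing step performs is equivalent to this NumPy slice; the `[batch, 120, features]` layout and the dimension sizes are my assumption, not taken from the issue:

```python
import numpy as np

# Assumed output layout: [batch, num_predictions, features];
# the real model's layout may differ.
batch, num_predictions, features = 2, 120, 3
preds = np.arange(batch * num_predictions * features, dtype=np.float32)
preds = preds.reshape(batch, num_predictions, features)

# Keep only the first prediction for each image in the batch.
first = preds[:, 0, :]  # shape: (batch, features)
```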
When I run this model with onnxruntime on its own, it works correctly. The problem arises when I merge it with the previously created model. The inputs to the model have exactly the same shape and data type. Moreover, the error occurs in one of yolo_nas_pose's ONNX nodes. The error I get is the following:
```
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Concat node. Name:'posegraph2_/Concat' Status Message: /Users/runner/work/1/s/onnxruntime/core/framework/op_kernel.cc:83 virtual OrtValue *onnxruntime::OpKernelContext::OutputMLValue(int, const onnxruntime::TensorShape &) status.IsOK() was false. Shape mismatch attempting to re-use buffer. {4,3} != {6,3}. Validate usage of dim_value (values should be > 0) and dim_param (all values with the same string should equate to the same size) in shapes in the model.
```
Does anyone know what might be happening? I have tried shape inference, but the error still occurs.
To reproduce
To reproduce, generate the two models described above (the necessary code is in the Describe the issue section) and merge them using this code snippet. Then concatenate the result with a previous model that can be merged with it (inputs/outputs correspond).
Hi @nachoogriis, I'm not an expert on onnx.compose.merge_models(), but since you mentioned that merging the models causes the ORT exception inside the first model, I would suggest checking whether the merge ever changes the graph of that model, especially the Concat part.
A workaround for your case would be to merge the models in the original framework and export them as a single ONNX model.
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
Urgency
No response
Platform
Mac
OS Version
Sonoma 14.1
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.18.1
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response