[Fix] C++ API SetOutputShape for register custom op. #21366
Conversation
@jywu-msft please take a look when you are available. Thanks.
Thanks for contributing! Please refer to my comment regarding the new element type argument. Due to the potential complexity of implementing that change correctly, I would recommend splitting it out into a separate pull request, so that this pull request includes only the two bug fixes.
What do you think?
@adrianlizarraga This PR has been updated and now includes only the two bug fixes.
@adrianlizarraga Is there any other concern with this PR? @jywu-msft The related PR aligns with your suggestion (#21280 (comment)) in the other PR submitted to the VitisAI EP: the custom op implementation should live entirely within the EP rather than in the public API.
/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux OpenVINO CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, Linux QNN CI Pipeline
/azp run Windows CPU CI Pipeline, Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows ARM64 QNN CI Pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed, Windows x64 QNN CI Pipeline, Linux MIGraphX CI Pipeline, Big Models
/azp run ONNX Runtime React Native CI Pipeline, orttraining-amd-gpu-ci-pipeline, Linux Android Emulator QNN CI Pipeline
Azure Pipelines successfully started running 9 pipeline(s).
Azure Pipelines successfully started running 3 pipeline(s).
Azure Pipelines successfully started running 10 pipeline(s).
@mingyueliuh I'm running CI to make sure all tests pass.
Description
Bug fix for the SetOutputShape method in custom op shape inference.
Motivation and Context
Bug a: an obvious bug that causes all output dimensions to be 1.
https://github.com/microsoft/onnxruntime/blob/main/include/onnxruntime/core/session/onnxruntime_cxx_inline.h#L2014
`integer_dims.push_back(dim.IsInt());` -> `integer_dims.push_back(dim.AsInt());`
Bug b: vector out-of-range error.
The op's input may be a scalar, whose shape is empty.
https://github.com/microsoft/onnxruntime/blob/main/include/onnxruntime/core/session/onnxruntime_cxx_inline.h#L1985