
[Fix] C++ API SetOutputShape for register custom op. #21366

Merged

Conversation

@mingyueliuh (Contributor) commented Jul 16, 2024

Description

Bug fix for the SetOutputShape method in custom op shape inference.
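For context, below is a minimal sketch of how a custom op's shape-inference callback typically exercises SetOutputShape, which is the method this PR fixes. It is an illustration only, assuming the Ort::ShapeInferContext API declared in onnxruntime_cxx_api.h; exact signatures may differ between releases and are not taken from this PR's diff.

```cpp
// Minimal sketch (assumption: based on the Ort::ShapeInferContext C++ API in
// onnxruntime_cxx_api.h; names/signatures may vary across ONNX Runtime versions).
// A custom op shape-inference callback that propagates input 0's shape to
// output 0 via SetOutputShape.
#include <onnxruntime_cxx_api.h>

static Ort::Status InferMyCustomOpOutputShape(Ort::ShapeInferContext& ctx) {
  // Shape is a vector of symbolic integers: concrete dims and named (symbolic) dims.
  Ort::ShapeInferContext::Shape out_shape = ctx.GetInputShape(0);

  // Output 0 gets the same shape as input 0.
  return ctx.SetOutputShape(0, out_shape);
}
```

The fix itself lives in onnxruntime_cxx_inline.h (the file discussed in the review thread below); the sketch only shows how a registered custom op would call into the API.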

Motivation and Context

@mingyueliuh mingyueliuh marked this pull request as draft July 16, 2024 08:19
@mingyueliuh mingyueliuh changed the title Fix custom op set output shape [Fix] C++ API SetOutputShape for register custom op. Jul 16, 2024
@mingyueliuh (Contributor, Author) commented:

@jywu-msft please take a look when you are available. Thanks.

@mingyueliuh mingyueliuh marked this pull request as ready for review July 16, 2024 10:43
@adrianlizarraga (Contributor) left a comment

Thanks for contributing! Please refer to my comment regarding the new element type argument. Due to the potential complexity of implementing the change correctly, I would recommend separating that out into a separate pull request. Perhaps this pull request could only include the two bug fixes instead.

What do you think?

include/onnxruntime/core/session/onnxruntime_cxx_inline.h (review thread, outdated, resolved)
@mingyueliuh (Contributor, Author) commented Jul 17, 2024

@adrianlizarraga This PR has been updated and now includes only the two bug fixes.
I have raised a new PR, #21387, to add support for the tensor element type.

@mingyueliuh (Contributor, Author) commented:

@adrianlizarraga Is there any other concern with this PR?

@jywu-msft These related PRs align with your suggestion (#21280 (comment)) in the other PR submitted to the VitisAI EP: the custom op implementation should be included within the EP rather than in the public API, so custom ops can live entirely within the VitisAI EP implementation.

@adrianlizarraga (Contributor) commented:

/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux OpenVINO CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, Linux QNN CI Pipeline

@adrianlizarraga (Contributor) commented:

/azp run Windows CPU CI Pipeline, Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows ARM64 QNN CI Pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed, Windows x64 QNN CI Pipeline, Linux MIGraphX CI Pipeline, Big Models

@adrianlizarraga (Contributor) commented:

/azp run ONNX Runtime React Native CI Pipeline, orttraining-amd-gpu-ci-pipeline, Linux Android Emulator QNN CI Pipeline


Azure Pipelines successfully started running 9 pipeline(s).


Azure Pipelines successfully started running 3 pipeline(s).


Azure Pipelines successfully started running 10 pipeline(s).

@adrianlizarraga (Contributor) commented:

@mingyueliuh I'm running CI to make sure all tests pass.

@adrianlizarraga adrianlizarraga merged commit 86cedc6 into microsoft:main Jul 23, 2024
82 of 85 checks passed
@mingyueliuh mingyueliuh deleted the fix-custom-op-set-output-shape branch July 24, 2024 01:35
2 participants