fix shape inference bug #19848
Conversation
@microsoft-github-policy-service agree |
@tianleiwu Can you please review it? |
/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline |
/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-binary-size-checks-ci-pipeline,Big Models,Android CI Pipeline |
/azp run iOS CI Pipeline,ONNX Runtime React Native CI Pipeline |
Azure Pipelines successfully started running 2 pipeline(s). |
Azure Pipelines successfully started running 10 pipeline(s). |
Yes, there is a failing case. Where can I upload the ONNX model? It's about 400 MB, and it's a commonly known model. |
We usually add a tiny model for testing, e.g., a model with only one MatMul node that triggers the code path. 400 MB is too large. You can create an issue, add a reproduce script, and link to the model in that issue. |
Hi, the issue is raised: #19870 |
See my comments in the issue. I can run shape inference with the --auto_merge flag successfully without this change. Could you double-check? |
Hi, the uploaded model was incorrect; could you please re-download it? I have checked it myself. |
@tianleiwu Can you please review it? |
/azp run Windows GPU CI Pipeline,ONNX Runtime Web CI Pipeline,Linux GPU CI Pipeline,orttraining-amd-gpu-ci-pipeline,Android CI Pipeline,iOS CI Pipeline,ONNX Runtime React Native CI Pipeline |
Azure Pipelines successfully started running 7 pipeline(s). |
Hi @tianleiwu, how can I fix the failing cases? The error is not clear to me. |
Let me re-run the failed pipeline. |
@tianleiwu Hi, can this be merged? I have many more commits to push. |
@tianleiwu Since shape inference is very important for model optimization, can I keep this code in my repo? I would prefer to maintain my own version of the onnxruntime shape inference, and I will commit my code gradually; there are lots of bugs. |
One required pipeline, "Windows GPU CI Pipeline", failed, so this cannot be merged. Let me try re-running it one last time. If it fails again, please try rebasing onto the latest main. |
It failed again, and it seems to be a problem with CMake. |
/azp run Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux GPU CI Pipeline,Linux OpenVINO CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-ortmodule-distributed,Big Models |
Azure Pipelines successfully started running 10 pipeline(s). |
@tianleiwu The failed case shows "Failed to download Python from the GitHub Actions python registry (https://github.com/actions/python-versions)"; it may be a network problem. |
/azp run Windows x64 QNN CI Pipeline,Windows GPU CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline |
/azp run onnxruntime-binary-size-checks-ci-pipeline,Android CI Pipeline,iOS CI Pipeline,ONNX Runtime React Native CI Pipeline |
Azure Pipelines successfully started running 4 pipeline(s). |
Azure Pipelines successfully started running 10 pipeline(s). |
Description
For nodes like Add, the input shapes should be merged dynamically.
Motivation and Context
During shape inference, for nodes like Add, the inputs used in _onnx_infer_single_node are currently taken directly from the previous node's output, but they should be merged first.
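As a rough illustration of the merging described above, the sketch below merges two (possibly symbolic) input shapes for a broadcast op such as Add. The function name and the string-for-symbolic-dim representation are assumptions for this example, not onnxruntime's actual API:

```python
def merge_broadcast_shapes(shape_a, shape_b):
    """Merge two shapes under NumPy-style broadcasting rules.

    Dims are ints or strings (symbolic). A symbolic dim merged with a
    concrete dim other than 1 resolves to the concrete dim.
    """
    result = []
    # Right-align the shapes, as broadcasting does; missing dims act as 1.
    for i in range(max(len(shape_a), len(shape_b))):
        a = shape_a[-1 - i] if i < len(shape_a) else 1
        b = shape_b[-1 - i] if i < len(shape_b) else 1
        if a == b:
            result.append(a)
        elif a == 1:
            result.append(b)
        elif b == 1:
            result.append(a)
        elif isinstance(a, str):   # symbolic dim: adopt the other side
            result.append(b)
        elif isinstance(b, str):
            result.append(a)
        else:
            raise ValueError(f"incompatible dims {a} and {b}")
    return result[::-1]
```

For example, merging `["batch", 3, 1]` with `[1, 3, 4]` yields `["batch", 3, 4]`, which is the kind of unification the fix performs before inferring the node's output shape.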