Update DML EP to accept broadcasted tensor of size 1 to match CPU #19081
Conversation
onnxruntime/core/providers/dml/DmlExecutionProvider/src/Operators/DmlOperatorElementWise.cpp
I'm okay with it, even if it's technically incorrect according to the spec, because it's harmless, and evidently com.microsoft.DequantizeLinear differs from ai.onnx.DequantizeLinear.
onnxruntime/core/providers/dml/DmlExecutionProvider/src/Operators/DmlOperatorElementWise.cpp
…ors/DmlOperatorElementWise.cpp Fix indent of first parameter.
onnxruntime/core/providers/dml/DmlExecutionProvider/src/Operators/DmlOperatorElementWise.cpp
…ors/DmlOperatorElementWise.cpp Fix curious duplicate lines.
Description
With QDQ enabled for the DML EP, we are seeing some models fail to optimize constant nodes because the scale and zero-point tensors have size [1], which does not match the input size. The CPU EP accepts this parameter shape, so this change updates the DML EP to match the CPU behavior.
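A minimal sketch of the behavior in question, using NumPy rather than the actual DML/CPU EP code: in per-tensor dequantization, a scale or zero-point given as a size-[1] tensor broadcasts against the input exactly like a rank-0 scalar, so accepting both shapes yields identical results. The `dequantize_linear` helper below is hypothetical, written only to illustrate the broadcast; it is not the onnxruntime implementation.

```python
import numpy as np

def dequantize_linear(x, scale, zero_point):
    # Sketch of per-tensor DequantizeLinear: y = (x - zero_point) * scale.
    # NumPy broadcasting treats a shape-[1] tensor like a scalar here,
    # which is the CPU EP behavior this PR makes the DML EP match.
    return (x.astype(np.int32) - np.asarray(zero_point, dtype=np.int32)) \
        * np.asarray(scale, dtype=np.float32)

x = np.array([0, 128, 255], dtype=np.uint8)

# Rank-0 (scalar) scale/zero-point, as the ai.onnx spec describes.
a = dequantize_linear(x, np.float32(0.5), np.uint8(128))

# Size-[1] tensors broadcast to the same result.
b = dequantize_linear(x,
                      np.array([0.5], dtype=np.float32),
                      np.array([128], dtype=np.uint8))

assert np.array_equal(a, b)  # both give [-64.0, 0.0, 63.5]
```

The point of the change is only shape acceptance: rejecting the [1] form forces a fallback that blocks constant-node optimization, even though the numeric result is identical either way.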
Motivation and Context
Want to match CPU EP behavior.