
Update DML EP to accept broadcasted tensor of size 1 to match CPU #19081

Merged · 5 commits · Jan 11, 2024

Conversation

chrilaMSFT
Contributor

Description

With QDQ enabled for the DML EP, we are seeing some models fail to optimize constant nodes because the scale[1] and zero_point[1] tensors have size 1, which does not match the input size. The CPU EP accepts this parameter shape, so this change updates the DML EP to match the CPU behavior.

Motivation and Context

Want to match CPU EP behavior.
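As a rough illustration of the shape question at issue (the function and values below are hypothetical, not code from this PR), a size-1 tensor for scale/zero_point broadcasts over the input exactly like a true scalar does, so accepting shape [1] alongside shape [] is behaviorally harmless:

```python
import numpy as np

# Hypothetical sketch of DequantizeLinear semantics: a 1-element
# scale/zero_point tensor (shape [1]) broadcasts over the whole input,
# yielding the same result as the per-tensor scalar case the CPU EP accepts.
def dequantize_linear(x, scale, zero_point):
    # Widen to int32 before subtracting to avoid uint8 wraparound.
    return (x.astype(np.int32) - zero_point.astype(np.int32)) * scale

x = np.array([[0, 128], [255, 64]], dtype=np.uint8)

# Scalar (shape []) scale and zero_point.
scalar_out = dequantize_linear(x, np.float32(0.5), np.uint8(128))

# Size-1 tensor (shape [1]) scale and zero_point.
size1_out = dequantize_linear(x,
                              np.array([0.5], dtype=np.float32),
                              np.array([128], dtype=np.uint8))

# NumPy broadcasting makes the two indistinguishable.
assert np.array_equal(scalar_out, size1_out)
```

This is only a sketch of the broadcasting argument; the actual change lives in the DML EP's element-wise operator registration, not in NumPy.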

Contributor

@fdwr fdwr left a comment


I'm okay with it, even if it's technically incorrect according to the spec, because it's harmless, and evidently com.microsoft.DequantizeLinear differs from ai.onnx.DequantizeLinear.

@chrilaMSFT
Contributor Author

chrilaMSFT commented Jan 11, 2024

> I'm okay with it, even if it's technically incorrect according to the spec, because it's harmless, and evidently com.microsoft.DequantizeLinear differs from ai.onnx.DequantizeLinear.

#Resolved

In reply to: 1814391221

Commits (file path truncated in source):

- …ors/DmlOperatorElementWise.cpp
- Fix indent of first parameter. (…ors/DmlOperatorElementWise.cpp)
- Fix curious duplicate lines.
@fdwr fdwr changed the title Update Dml Ep to accept broadcast tensor of 1 to match CPU Update DML EP to accept broadcasted tensor of size 1 to match CPU Jan 11, 2024
@chrilaMSFT chrilaMSFT merged commit 8a0a972 into main Jan 11, 2024
90 of 92 checks passed
@chrilaMSFT chrilaMSFT deleted the user/chrila/AddBroadcastScalerForTensor branch January 11, 2024 23:15
3 participants