
[Performance] Failed to run Whisper inference after optimization with Dml EP #21156

Open
XciciciX opened this issue Jun 25, 2024 · 2 comments
Labels
- ep:DML: issues related to the DirectML execution provider
- performance: issues related to performance regressions
- platform:windows: issues related to the Windows platform
- quantization: issues related to quantization
- stale: issues that have not been addressed in a while; categorized by a bot

Comments

@XciciciX

Describe the issue

I exported my medium Whisper model correctly, and it ran inference and produced correct output. I then optimized the model with:

    python -m onnxruntime.transformers.optimizer --input ./whisper-medium-onnx/decoder_with_past_model.onnx --output ./whisper-medium-onnx-test/decoder_with_past_model.onnx --float16 --model_type bart --num_heads 16 --hidden_size 1024 --use_multi_head_attention

I exported and optimized all the related models the same way. When I ran inference again, it worked with the CPU EP but failed with the DML EP, with the following error:

[error screenshot; not captured in this text]

After some debugging, I found the problem is the --use_multi_head_attention flag on one model: without the MultiHeadAttention fusion the model runs, and with it the error occurs.
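Not part of the original report: a minimal sketch of how inference could be set up so that ORT falls back to the CPU EP when the DML EP is unavailable. The `provider_order` helper is hypothetical, and the model path is taken from the optimizer command line above; this assumes the `onnxruntime-directml` build on Windows.

```python
def provider_order(available, prefer=("DmlExecutionProvider", "CPUExecutionProvider")):
    """Keep only the preferred execution providers that are actually available,
    in priority order (ORT tries them left to right)."""
    return [p for p in prefer if p in available]

# Usage with onnxruntime (requires the onnxruntime-directml build on Windows):
#
#   import onnxruntime as ort
#   providers = provider_order(ort.get_available_providers())
#   sess = ort.InferenceSession(
#       "whisper-medium-onnx-test/decoder_with_past_model.onnx",
#       providers=providers,
#   )
```

Note this only works around session creation failures; it does not fix the MultiHeadAttention fusion error itself, which occurs even when the DML EP is available.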

To reproduce

Run the optimizer command line shown above, then run inference with the DML EP.

Urgency

Yes

Platform

Windows

OS Version

11

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.17.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

DirectML

Execution Provider Library Version

No response

Model File

No response

Is this a quantized model?

Yes

@github-actions github-actions bot added ep:DML issues related to the DirectML execution provider platform:windows issues related to the Windows platform quantization issues related to quantization labels Jun 25, 2024
@sophies927 sophies927 added the performance issues related to performance regressions label Jun 27, 2024
@WA225

WA225 commented Jul 2, 2024

I am having a similar issue to the one described in microsoft/Olive#1221. Any idea how to fix it?


github-actions bot commented Aug 2, 2024

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@github-actions github-actions bot added the stale issues that have not been addressed in a while; categorized by a bot label Aug 2, 2024