[Performance] Failed to run Whisper inference after optimization with Dml EP #21156
Labels
ep:DML, performance, platform:windows, quantization, stale
Describe the issue
I exported my medium Whisper model and verified that inference produced correct output. After that, I optimized the model with the following command:

```
python -m onnxruntime.transformers.optimizer --input ./whisper-medium-onnx/decoder_with_past_model.onnx --output ./whisper-medium-onnx-test/decoder_with_past_model.onnx --float16 --model_type bart --num_heads 16 --hidden_size 1024 --use_multi_head_attention
```

I exported and optimized all the related models the same way. When I ran inference again, it worked with the CPU EP but failed with the DML EP.
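For context, a minimal sketch of how the two sessions are created (the model path is a placeholder; this is not the full inference script):

```python
import onnxruntime as ort

model_path = "./whisper-medium-onnx-test/decoder_with_past_model.onnx"  # placeholder

# Runs successfully on the CPU execution provider
cpu_session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

# Fails on the DirectML execution provider after MHA fusion
dml_session = ort.InferenceSession(model_path, providers=["DmlExecutionProvider"])
```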
It failed with the following information:
I have debugged a little and found that the problem is the --use_multi_head_attention flag on one of the models. If I do not fuse MHA, inference runs; once MHA is fused, the error occurs. The working variant of the command is shown below.
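For comparison, the same optimization command without MHA fusion produces a model that runs under the DML EP (only --use_multi_head_attention is dropped):

```
python -m onnxruntime.transformers.optimizer --input ./whisper-medium-onnx/decoder_with_past_model.onnx --output ./whisper-medium-onnx-test/decoder_with_past_model.onnx --float16 --model_type bart --num_heads 16 --hidden_size 1024
```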
To reproduce
Run the optimizer command above with --use_multi_head_attention, then run inference on the resulting model with the DML EP.
Urgency
Yes
Platform
Windows
OS Version
11
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.17.0
ONNX Runtime API
Python
Architecture
X64
Execution Provider
DirectML
Execution Provider Library Version
No response
Model File
No response
Is this a quantized model?
Yes