
Keep original name during fusion #20097

Merged: 3 commits merged into main on Mar 28, 2024
Conversation

@pengwa (Contributor) commented on Mar 27, 2024

Keep original name during fusion

Keeping the original name makes it easy to tell where a fused node came from, which is very useful when debugging execution-order issues between different transformer layers (the naming scheme is sketched after the examples below).

For example:

- A node named `/_original_module/model/layers.1/self_attn/MatMul/MatmulTransposeFusion//MatMulScaleFusion/` went through two fusion passes in the 1st transformer layer, namely `MatmulTransposeFusion` and `MatMulScaleFusion`.

- The node `/_original_module/model/layers.2/post_attention_layernorm/Mul_1/SimplifiedLayerNormFusion/` was produced by `SimplifiedLayerNormFusion`.
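
The names above suggest that each fusion appends its own name to the name of the node it replaces, so repeated fusions accumulate as path-like suffixes. Below is a minimal C++ sketch of that idea; it is not the actual onnxruntime implementation, and `MakeFusedNodeName` is a hypothetical helper used only for illustration:

```cpp
#include <string>

// Hypothetical helper illustrating the naming scheme described above:
// the fused node keeps the original node's name and appends the name of
// the fusion that produced it, ending with a trailing slash. Applying a
// second fusion to an already-fused node then yields the double slash
// seen in names such as ".../MatMul/MatmulTransposeFusion//MatMulScaleFusion/".
std::string MakeFusedNodeName(const std::string& original_name,
                              const std::string& fusion_name) {
  return original_name + "/" + fusion_name + "/";
}

// Example:
//   MakeFusedNodeName("/_original_module/model/layers.1/self_attn/MatMul",
//                     "MatmulTransposeFusion")
//     -> "/_original_module/model/layers.1/self_attn/MatMul/MatmulTransposeFusion/"
//   Applying MatMulScaleFusion to that result gives
//     ".../MatMul/MatmulTransposeFusion//MatMulScaleFusion/"
```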

Motivation and Context

@pengwa (Contributor, Author) commented on Mar 28, 2024

Thank you, @frank-dong-ms!

@pengwa merged commit 55f63a4 into main on Mar 28, 2024 (95 checks passed).
@pengwa deleted the pengwa/naming branch on March 28, 2024 at 00:40.
TedThemistokleous pushed a commit to TedThemistokleous/onnxruntime that referenced this pull request on May 7, 2024.