[Performance] Improve the flash attention performance on bottom-up optimization pipeline #2177
Comments
@chengjunlu what about #950? That is probably needed to reduce register pressure. Can we track it here?
Forgot that one. Yes, let's track it here too.
On agama 996 with latest main (commit 6f89dbe), Triton performance is 40% of XeTLA.
Triton/XeTLA improves from 40% to 43% after removing all environment variables on agama 996.
This issue tracks the new design required for flash attention on the bottom-up optimization pipeline.
Status
Most of the optimization passes have been finished and checked into the llvm-target branch, and all the tasks in the old issue #878 have been completed. The GEMM Triton kernel with block pointer syntax achieves 90% of the performance of the XeTLA version. Flash attention with block pointers also shows promising performance after adding simple changes to the RewriteBlockPointer pass.
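For reference, here is a minimal sketch of what "block pointer syntax" means in this context: a GEMM tile loop whose loads and stores go through tl.make_block_ptr / tl.advance, so each tt.load carries the full 2D shape, stride, and offset information the backend needs to emit block IO. The kernel and argument names are illustrative and assume fp16 inputs/outputs; this is not the exact kernel used for the numbers above.

```python
import triton
import triton.language as tl

@triton.jit
def gemm_block_ptr_kernel(a_ptr, b_ptr, c_ptr, M, N, K,
                          stride_am, stride_ak, stride_bk, stride_bn,
                          stride_cm, stride_cn,
                          BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr,
                          BLOCK_K: tl.constexpr):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    # A block pointer bundles base, shape, strides, offsets and block shape,
    # which is the information a 2D block IO instruction needs.
    a_bp = tl.make_block_ptr(base=a_ptr, shape=(M, K), strides=(stride_am, stride_ak),
                             offsets=(pid_m * BLOCK_M, 0),
                             block_shape=(BLOCK_M, BLOCK_K), order=(1, 0))
    b_bp = tl.make_block_ptr(base=b_ptr, shape=(K, N), strides=(stride_bk, stride_bn),
                             offsets=(0, pid_n * BLOCK_N),
                             block_shape=(BLOCK_K, BLOCK_N), order=(1, 0))
    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    for _ in range(0, K, BLOCK_K):
        a = tl.load(a_bp, boundary_check=(0, 1))  # tt.load on a block pointer
        b = tl.load(b_bp, boundary_check=(0, 1))
        acc += tl.dot(a, b)
        a_bp = tl.advance(a_bp, (0, BLOCK_K))
        b_bp = tl.advance(b_bp, (BLOCK_K, 0))
    c_bp = tl.make_block_ptr(base=c_ptr, shape=(M, N), strides=(stride_cm, stride_cn),
                             offsets=(pid_m * BLOCK_M, pid_n * BLOCK_N),
                             block_shape=(BLOCK_M, BLOCK_N), order=(1, 0))
    # Assumes c_ptr points to an fp16 output buffer.
    tl.store(c_bp, acc.to(tl.float16), boundary_check=(0, 1))
```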
New problems
Two new problems were found while developing the bottom-up optimization pipeline:
- tt.load needs to support FP8 for flash attention.
- The RewriteTensorPointer pass. #1766
Plan
To achieve both the performance and functionality goals in the bottom-up phase, we need a different implementation than originally planned:
- Support the tt.load operation with the block pointer as the memory pointer, as sketched below. (Optionally support a fallback to Intel 1D block IO.)
This design can also benefit new features such as the TMA descriptor in the future.
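To make the plan item concrete, below is a hedged sketch of the flash-attention inner loop whose block-pointer loads the new implementation would handle, ideally lowering them to Intel 2D block IO and optionally falling back to 1D block IO when the layout does not allow it. Names, shapes, and the softmax are simplified for illustration; this is not the benchmarked kernel.

```python
import triton
import triton.language as tl

@triton.jit
def attn_inner_loop(q, k_ptr, v_ptr, N_CTX,
                    stride_kk, stride_kn, stride_vn, stride_vk,
                    BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr,
                    HEAD_DIM: tl.constexpr):
    # K is laid out (HEAD_DIM, N_CTX) so tl.dot(q, k) yields the score tile directly.
    k_bp = tl.make_block_ptr(base=k_ptr, shape=(HEAD_DIM, N_CTX),
                             strides=(stride_kk, stride_kn), offsets=(0, 0),
                             block_shape=(HEAD_DIM, BLOCK_N), order=(0, 1))
    v_bp = tl.make_block_ptr(base=v_ptr, shape=(N_CTX, HEAD_DIM),
                             strides=(stride_vn, stride_vk), offsets=(0, 0),
                             block_shape=(BLOCK_N, HEAD_DIM), order=(1, 0))
    acc = tl.zeros((BLOCK_M, HEAD_DIM), dtype=tl.float32)
    for _ in range(0, N_CTX, BLOCK_N):
        k = tl.load(k_bp)        # tt.load with a block pointer as the memory ptr
        v = tl.load(v_bp)
        qk = tl.dot(q, k)        # attention scores for this K block
        # Simplified softmax for illustration only; a real flash-attention kernel
        # keeps running max/sum statistics and rescales acc across iterations.
        p = tl.math.exp2(qk - tl.max(qk, 1)[:, None])
        acc += tl.dot(p.to(v.dtype), v)
        k_bp = tl.advance(k_bp, (0, BLOCK_N))
        v_bp = tl.advance(v_bp, (BLOCK_N, 0))
    return acc
```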