0.1.5 (2024-08-13)
- Fix PagedPrefill python api and some typos (#441) (3fff008)
- fix prefill kernels' lse result for empty kv-cache (#440) (6ac28f4)
We thank the community for their contributions and feedback: @comaniac, @hnyls2002, @jianfei-wangg, @Yard1.
0.1.4 (2024-08-09)
- append attention kernels for fp8 kv-cache (#420) (906c2f5)
- support min_p sampling (#422) (d52f2da) (see the sketch after this list)
- deterministic sampling (#417) (0dd801d)
- more sampling operator options (#431) (68df9c4)
- support fused add rmsnorm (#419) (b781513)
- support fused silu mul (#427) (ea0ba9a)
- fix dispatching of the fp16 type when fp8 is enabled (#430) (daa5566)
- improve numerical stability of sampling kernels (#429) (898d8ea)
We thank the community for their contributions and feedback: @comaniac, @esmeetu, @LiuXiaoxuanPKU, @peng1999, @xslingcn, @Yard1, @zhyncs.
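A note on the min_p sampling entry above (#422): the sketch below is a plain PyTorch reference of min-p sampling semantics (keep tokens whose probability is at least `min_p` times the row maximum, renormalize, then sample). It only illustrates the operation; it is not the FlashInfer sampling API, and the function name here is hypothetical.

```python
import torch

def min_p_sample_reference(probs: torch.Tensor, min_p: float) -> torch.Tensor:
    """Reference (non-fused) min-p sampling.

    probs: (batch, vocab) rows summing to 1. Tokens with probability below
    min_p * max(probs) are discarded before renormalizing and sampling.
    """
    max_prob = probs.max(dim=-1, keepdim=True).values
    keep = probs >= min_p * max_prob                       # the argmax always survives
    filtered = torch.where(keep, probs, torch.zeros_like(probs))
    filtered = filtered / filtered.sum(dim=-1, keepdim=True)
    return torch.multinomial(filtered, num_samples=1).squeeze(-1)

# usage: ids = min_p_sample_reference(torch.softmax(logits, dim=-1), min_p=0.05)
```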
0.1.3 (2024-07-31)
- bugfix: Fix cudagraph mode of BatchPrefillWithRaggedKVCacheWrapper (#412) (9907bc)
- fix cu118 cub usage for sampling kernels (#410) (58d359)
- enhance allocator error info and add shape check for prefill begin forward functions (#413) (5e36c5)
0.1.2 (2024-07-29)
- add llama 3.1 style rope (#401) (4c89dec)
- non-inplace rope operators (#405) (74ffba1)
- sliding window attention (#406) (28cffd3) (see the sketch after this list)
- support non-contiguous (packed) input for prefill kernels (#404) (68c3719)
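On the sliding window attention entry above (#406): the sketch below is a plain PyTorch reference of the masking rule such a kernel applies (each query attends only to the most recent `window_size` keys up to and including its own position). Boundary conventions differ between implementations, so this shows the general idea rather than FlashInfer's exact semantics.

```python
import torch

def sliding_window_causal_mask(q_len: int, kv_len: int, window_size: int) -> torch.Tensor:
    """Boolean (q_len, kv_len) mask: query i attends to key j iff j <= pos(i)
    and pos(i) - j < window_size, where the q_len queries are aligned to the
    last q_len positions of the kv sequence (append/decode layout)."""
    q_pos = torch.arange(q_len).unsqueeze(1) + (kv_len - q_len)   # absolute query positions
    k_pos = torch.arange(kv_len).unsqueeze(0)
    return (k_pos <= q_pos) & (q_pos - k_pos < window_size)

# usage: scores.masked_fill_(~sliding_window_causal_mask(Lq, Lkv, 4096), float("-inf"))
```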
0.1.1 (2024-07-20)
- fix the invalid kernel configuration for architectures with small shared memory size (#385) (cdac57)
0.1.0 (2024-07-17)
- Add mask to `merge_state_in_place` (#372) (e14fa81) (see the sketch after this list)
- expose pytorch api for block sparse attention (#375) (4bba6fa)
- Fused GPU sampling kernel for joint top-k & top-p sampling (#374) (6e028eb)
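Regarding the `merge_state_in_place` entry above: merging two partial attention results uses the standard log-sum-exp recombination from split-KV / chunked attention. The sketch below is a plain PyTorch reference of that math (out of place, natural-log LSE); it is not FlashInfer's API, and shape and log-base conventions may differ.

```python
import torch

def merge_attention_states(v_a, lse_a, v_b, lse_b):
    """Combine partial attention outputs computed over two disjoint KV chunks.

    v_a, v_b: (num_heads, head_dim) partial outputs; lse_a, lse_b: (num_heads,)
    natural-log sum-exp of the attention scores within each chunk.
    Returns the merged output and the merged log-sum-exp.
    """
    lse = torch.logaddexp(lse_a, lse_b)            # combined normalizer in log space
    w_a = torch.exp(lse_a - lse).unsqueeze(-1)     # weight of chunk A
    w_b = torch.exp(lse_b - lse).unsqueeze(-1)     # weight of chunk B
    return v_a * w_a + v_b * w_b, lse
```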
0.0.9 (2024-07-12)
- fix decode kernels' output for empty kv cache (#363) (ac72b1)
- check gpu id in PyTorch APIs and use input tensor's gpu default stream (#361) (1b84fa)
- accelerate alibi (#365) (4f0a9f9)
- accelerate gqa performance (#356) (e56ddad)
- Optimize tensor conversions in C++ code to avoid unnecessary copies (#366) (1116237)
We thank @Yard1, @Ying1123 and @zhyncs for their contributions.
0.0.8 (2024-07-03)
- fix prefill/append kernel behavior for empty kv-cache (#353) (7adc8c)
- fix decode attention kernel with logits cap (#350) (f5f7a2)
0.0.7 (2024-06-28)
- `batch_decode_with_padded_kv_cache` was removed; we encourage users to use `BatchDecodeWithPagedKVCacheWrapper` instead (see the migration sketch after this list). (#343)
- fix the `forward_return_lse` function in the `BatchPrefillWithRaggedKVCache` class (#337)
- fix the scheduler behavior of large page size (#333)
- change minimal `kv_chunk_size` back to 128 (#329) (f237f5f)
- more options for kv tile size (#336) (bf2a6c7)
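As mentioned in the removal note above, batched decode now goes through the paged-KV wrapper. The following is a minimal migration sketch assuming the `begin_forward`/`forward`/`end_forward` interface documented around this release; tensor shapes, argument order, and defaults are approximations and may differ between versions, so treat it as illustrative rather than as the exact API.

```python
import torch
import flashinfer

# Illustrative sizes; the "NHD" paged KV layout assumed here is
# (max_num_pages, 2, page_size, num_kv_heads, head_dim).
batch_size, num_qo_heads, num_kv_heads = 2, 32, 8
head_dim, page_size, max_num_pages = 128, 16, 8

workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda")
wrapper = flashinfer.BatchDecodeWithPagedKVCacheWrapper(workspace, "NHD")

# Page-table metadata: request i owns pages kv_indices[kv_indptr[i]:kv_indptr[i+1]]
# and its last page holds kv_last_page_len[i] valid tokens.
kv_indptr = torch.tensor([0, 3, 8], dtype=torch.int32, device="cuda")
kv_indices = torch.arange(max_num_pages, dtype=torch.int32, device="cuda")
kv_last_page_len = torch.tensor([5, 12], dtype=torch.int32, device="cuda")
kv_cache = torch.randn(max_num_pages, 2, page_size, num_kv_heads, head_dim,
                       dtype=torch.float16, device="cuda")
q = torch.randn(batch_size, num_qo_heads, head_dim, dtype=torch.float16, device="cuda")

# begin_forward/forward/end_forward was the interface of this era (approximate signature).
wrapper.begin_forward(kv_indptr, kv_indices, kv_last_page_len,
                      num_qo_heads, num_kv_heads, head_dim, page_size)
out = wrapper.forward(q, kv_cache)   # (batch_size, num_qo_heads, head_dim)
wrapper.end_forward()
```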
0.0.6 (2024-06-21)
Fix some bugs in v0.0.5 that might lead to crashes and unstable performance.
0.0.5 (2024-06-20)
- Support any GQA group size for tensor-cores kernels.
- Support any page size for tensor-cores kernels.
- Support CUDA-Graph for prefill/decode APIs.
- Add an option to accelerate decode kernels with Tensor Cores.
- Support custom attention mask. (https://docs.flashinfer.ai/tutorials/kv_layout.html#mask-layout-2d-ragged-tensor)
- Support logits cap in Grok-1 models (see the sketch at the end of this release's notes).
- Fused GPU-sampling kernels: top-p, top-k, speculative verification. (https://docs.flashinfer.ai/api/python/sampling.html)
- PyTorch wrapper of group-gemm cutlass kernels. (https://docs.flashinfer.ai/api/python/group_gemm.html)
We thank @ibsidorenko, @LiuXiaoxuanPKU, @Yard1, @AgrawalAmey, @xuzhenqi, @mgerstgrasser, @esmeetu, @yz-tang, @HSQ79815, @Qubitium, @shreygupta2809, @sighingnow, @vinx13, @tqchen, @merrymercy, @comaniac and many others for their contributions and helpful discussions for the 0.0.5 release.
- support any GQA group size for tensor-cores kernels (#301) (c111ca)
- support any page size for tensor-cores kernels (#306) (82fd8c)
- add `use_tensor_cores` option to decode kernels to accelerate GQA (#317) (3b50dd5)
- add group gemm operators (#282) (e08ba42)
- initial support of distributed operators (#289) (03553da)
- initial support of logits hook (#298) (ab1e2ad)
- Separate Q and KV dtypes for decode (#286) (5602659)
- support cuda graph for batched multi-query (prefill/append) attention (#275) (83ceb67)
- support cuda graph for batched multi-query (prefill/append) attention (#277) (24cc583)
- support custom attention mask in prefill/append attention kernels (#266) (7304282)
- fused speculative sampling kernels (#259) (cea2bb)
- expose sampling APIs in pytorch (#238) (092902)
- initial cuda graph support (#256) (7e9cc7f)
- split kv-cache for prefill/append kernels (#310) (f0bb0a3)
- use packed bit array for attention mask (#308) (3d43dc9)
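On the logits cap item in the 0.0.5 summary above: this refers to soft-capping the pre-softmax attention scores with a tanh, the mechanism used by Grok-1. The snippet below is a plain PyTorch reference of that transform; the cap value is illustrative and this is not FlashInfer's kernel API.

```python
import torch

def soft_cap_scores(scores: torch.Tensor, cap: float = 30.0) -> torch.Tensor:
    """Soft-cap pre-softmax attention scores into (-cap, cap) with a tanh.
    `scores` is the scaled q @ k^T; the default cap value here is illustrative."""
    return cap * torch.tanh(scores / cap)

# usage: attn = torch.softmax(soft_cap_scores(q @ k.transpose(-1, -2) * scale), dim=-1)
```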
0.0.4 (2024-05-01)
- pytorch 2.3 support
- gpu sampling kernels (top-p, top-k)
- more gqa group sizes
- add mma instructions for fp8 (#179) (d305798)
- mma rowsum for fp8 (#180) (5af935c)
- support any num_heads for get_alibi_slope (#200) (b217a6f)
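The `get_alibi_slope` change above extends slope generation to arbitrary head counts. For reference, the standard ALiBi slope schedule for a non-power-of-two number of heads (following the recipe from the ALiBi paper's released code) looks like the sketch below; it is shown for illustration and is not FlashInfer's device-side implementation.

```python
import math

def alibi_slopes(num_heads: int) -> list[float]:
    """Standard ALiBi slopes: a geometric sequence starting at 2^(-8/n) for the
    closest power of two n <= num_heads, with interleaved extra slopes when
    num_heads is not a power of two."""
    n = 2 ** math.floor(math.log2(num_heads))        # closest power of two <= num_heads
    base = 2.0 ** (-8.0 / n)
    slopes = [base ** (i + 1) for i in range(n)]
    if n < num_heads:                                # interleave slopes from the 2n schedule
        extra_base = 2.0 ** (-4.0 / n)
        slopes += [extra_base ** (2 * i + 1) for i in range(num_heads - n)]
    return slopes

# e.g. alibi_slopes(12) yields the 8 power-of-two slopes plus 4 interleaved ones
```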
0.0.3 (2024-03-08)
- adding `sm_scale` field for all attention APIs (#145) (85d4018)
- enable `head_dim=256` for attention kernels (#132) (0372acc)
- pytorch api of fp8 kv-cache (#156) (66ee066)
- support ALiBi (#146) (383518b)
- bugfix to pr 135 (#136) (3d55c71)
- fix bugs introduced in #132 (#135) (9b7b0b9)
- fix FindThrust.cmake (#161) (30fa584)