[Inference/Kernel] Optimize paged attention: Refactor key cache layout (#5643)

* Optimize FlashDecodingAttention: refactor the kernel to use a different key cache layout (from [num_blocks, num_kv_heads, block_size, head_size] to [num_blocks, num_kv_heads, head_size/x, block_size, x])

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Author: Steve Luo
Date: 2024-04-25 14:24:02 +08:00
Committed by: GitHub
Commit: a8fd3b0342 (parent 90cd5227a3)
8 changed files with 152 additions and 49 deletions


@@ -62,7 +62,7 @@ void flash_decoding_attention(
     torch::Tensor& out,           // [num_tokens, num_heads, head_size]
     torch::Tensor& query,         // [num_tokens, num_heads, head_size]
     torch::Tensor&
-        key_cache,                // [num_blocks, num_kv_heads, block_size, head_size]
+        key_cache,                // [num_blocks, num_kv_heads, head_size/x, block_size, x]
     torch::Tensor&
         value_cache,              // [num_blocks, num_kv_heads, block_size, head_size]
     torch::Tensor& context_lens,  // [num_tokens]
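
For context, here is a minimal sketch (not part of this patch) of how the flat offset of a single key element changes between the old and new cache layouts. The function and variable names are illustrative, and x is assumed to be the number of elements packed into the innermost dimension (e.g. 16 / sizeof(scalar_t) for vectorized loads).

// Sketch only: compares element offsets under the two key cache layouts.
#include <cstddef>
#include <cstdio>

// Old layout: [num_blocks, num_kv_heads, block_size, head_size]
size_t key_offset_old(size_t block, size_t kv_head, size_t slot, size_t dim,
                      size_t num_kv_heads, size_t block_size, size_t head_size) {
  return ((block * num_kv_heads + kv_head) * block_size + slot) * head_size + dim;
}

// New layout: [num_blocks, num_kv_heads, head_size/x, block_size, x]
// The head dimension is split into head_size/x chunks of width x, and the
// token-slot (block_size) axis is moved inside the chunk axis, so the x values
// of one chunk are contiguous and consecutive slots of the same chunk sit only
// x elements apart instead of head_size apart.
size_t key_offset_new(size_t block, size_t kv_head, size_t slot, size_t dim,
                      size_t num_kv_heads, size_t block_size, size_t head_size,
                      size_t x) {
  size_t chunk = dim / x;  // which x-wide chunk of the head dimension
  size_t inner = dim % x;  // position within that chunk
  return (((block * num_kv_heads + kv_head) * (head_size / x) + chunk) * block_size + slot) * x + inner;
}

int main() {
  const size_t num_kv_heads = 8, block_size = 16, head_size = 128, x = 8;
  // The same logical element (block 0, kv head 1, slot 3, dim 20) maps to
  // different physical offsets under the two layouts.
  printf("old: %zu\n", key_offset_old(0, 1, 3, 20, num_kv_heads, block_size, head_size));
  printf("new: %zu\n", key_offset_new(0, 1, 3, 20, num_kv_heads, block_size, head_size, x));
  return 0;
}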