[Inference/Kernel] Optimize paged attention: Refactor key cache layout (#5643)

* optimize flash decoding attention: refactor the kernel to use a different key cache layout (from [num_blocks, num_kv_heads, block_size, head_size] to [num_blocks, num_kv_heads, head_size/x, block_size, x]); a layout sketch follows the commit notes below

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci
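
For illustration, here is a minimal PyTorch sketch of the layout change described above. The shapes and the choice x = 8 (16 bytes of fp16 per vectorized load) are assumptions for the example, not values taken from the kernel:

import torch

num_blocks, num_kv_heads, block_size, head_size = 128, 8, 16, 128
x = 8  # assumed: 16-byte vector width / 2 bytes per fp16 element

# Old key cache layout: [num_blocks, num_kv_heads, block_size, head_size]
k_cache_old = torch.randn(num_blocks, num_kv_heads, block_size, head_size, dtype=torch.float16)

# New layout: [num_blocks, num_kv_heads, head_size/x, block_size, x]
# Split head_size into (head_size // x) groups of x elements, then move block_size inward.
k_cache_new = (
    k_cache_old
    .view(num_blocks, num_kv_heads, block_size, head_size // x, x)
    .permute(0, 1, 3, 2, 4)
    .contiguous()
)
assert k_cache_new.shape == (num_blocks, num_kv_heads, head_size // x, block_size, x)

# Round trip back to the old layout to verify nothing is lost.
k_cache_back = k_cache_new.permute(0, 1, 3, 2, 4).reshape(k_cache_old.shape)
assert torch.equal(k_cache_back, k_cache_old)

With this layout the x innermost elements of each head-dimension group sit contiguously for every block slot, which is the usual motivation for this kind of split (vectorized, coalesced key loads in the CUDA kernel).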

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Author: Steve Luo
Date: 2024-04-25 14:24:02 +08:00
Committed by: GitHub
Parent: 90cd5227a3
Commit: a8fd3b0342
8 changed files with 152 additions and 49 deletions


@@ -593,7 +593,7 @@ class NopadLlamaAttention(ParallelModule, LlamaAttention):
                     high_precision,
                 )
                 # inference_ops.flash_decoding_attention(
-                #     attn_output,
+                #     output_tensor,
                 #     query_states,
                 #     k_cache,
                 #     v_cache,
@@ -605,6 +605,7 @@ class NopadLlamaAttention(ParallelModule, LlamaAttention):
                 #     fd_inter_tensor.mid_output_lse,
                 #     sm_scale,
                 # )
+                # attn_output = output_tensor
             else:
                 if is_verifier:
                     rotary_embedding(query_states, key_states, cos_sin[0], cos_sin[1])
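
For context on how a single token's key is addressed under the new layout, a hedged pure-PyTorch reference follows; gather_key, block_tables, and all shapes are illustrative assumptions, not the repository's API:

import torch

def gather_key(k_cache: torch.Tensor, block_tables: torch.Tensor,
               seq_idx: int, token_pos: int, head_idx: int) -> torch.Tensor:
    """Read one token's key vector from a [num_blocks, num_kv_heads, head_size//x, block_size, x] cache."""
    _, _, head_size_div_x, block_size, x = k_cache.shape
    block_id = block_tables[seq_idx, token_pos // block_size]
    block_offset = token_pos % block_size
    # For a fixed (block, head, block_offset), the key is spread over (head_size // x) slices of x elements.
    return k_cache[block_id, head_idx, :, block_offset, :].reshape(head_size_div_x * x)

A kernel reading this layout can fetch each trailing run of x elements with a single vectorized access, which is the usual rationale for packing x elements contiguously behind the block_size dimension.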