[Inference] Support FP16/BF16 Flash Attention 2 And Add high_precision Flag To Rotary Embedding (#5461)

* Support FP16/BF16 Flash Attention 2

* fix bugs in test_kv_cache_memcpy.py

* add context_kv_cache_memcpy_kernel.cu

* rm typename MT

* add tail process

* add high_precision (sketched below, after the commit metadata)

* add high_precision to config.py

* rm unused code

* change the comment for the high_precision parameter

* update test_rotary_embdding_unpad.py

* fix vector_copy_utils.h

* add comment for self.high_precision when using float32
Author: yuehuayingxueluo
Date: 2024-03-25 13:40:34 +08:00
Committer: GitHub
Parent commit: 7ff42cc06d
Commit: 87079cffe8
15 changed files with 550 additions and 138 deletions
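The headline change pairs the FP16/BF16 attention path with a new high_precision flag on the rotary embedding: when activations are FP16/BF16, the rotary math can be upcast to FP32 and cast back afterward, and the flag is a no-op when the model already runs in float32 (the self.high_precision comment added above). Below is a minimal sketch of that idea, assuming a rotate-half layout; the function signature and names are illustrative, not the actual ColossalAI API.

    import torch

    def apply_rotary_embedding(
        x: torch.Tensor,       # (..., seq_len, head_dim), FP16/BF16 or FP32
        cos: torch.Tensor,     # (seq_len, head_dim // 2)
        sin: torch.Tensor,     # (seq_len, head_dim // 2)
        high_precision: bool = True,
    ) -> torch.Tensor:
        """Rotate channel pairs; optionally do the math in FP32."""
        orig_dtype = x.dtype
        # Upcasting only matters for half-precision inputs; for FP32
        # inputs the flag changes nothing.
        compute_dtype = torch.float32 if high_precision else orig_dtype
        x = x.to(compute_dtype)
        cos = cos.to(compute_dtype)
        sin = sin.to(compute_dtype)
        x1, x2 = x.chunk(2, dim=-1)
        out = torch.cat((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
        # Cast back so downstream FP16/BF16 kernels see the original dtype.
        return out.to(orig_dtype)

The upcast trades a little extra arithmetic for accuracy: half-precision cos/sin products accumulate rounding error at long sequence positions, while an FP32 rotation stays numerically stable and still hands FP16/BF16 activations to the attention kernel.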


@@ -136,7 +136,8 @@ def benchmark_inference(args):
     data = data_gen(mbsz, args.seq_len)
-    data = data.tolist()
+    if args.mode == "colossalai" or args.mode == "vllm":
+        data = data.tolist()
     generation_config = GenerationConfig(
         pad_token_id=tokenizer.pad_token_id,