傅剑寒 | 7ebdf48ac5 | add cast and op_functor for CUDA built-in types (#5546) | 2024-04-08 11:38:05 +08:00
yuehuayingxueluo | 87079cffe8 | [Inference] Support FP16/BF16 Flash Attention 2 And Add high_precision Flag To Rotary Embedding (#5461) | 2024-03-25 13:40:34 +08:00
* Support FP16/BF16 Flash Attention 2
* fix bugs in test_kv_cache_memcpy.py
* add context_kv_cache_memcpy_kernel.cu
* rm typename MT
* add tail process
* add high_precision
* add high_precision to config.py
* rm unused code
* change the comment for the high_precision parameter
* update test_rotary_embdding_unpad.py
* fix vector_copy_utils.h
* add comment for self.high_precision when using float32
傅剑寒 | 7ff42cc06d | add vec_type_trait implementation (#5473) | 2024-03-19 18:36:40 +08:00
xs_courtesy | 48c4f29b27 | refactor vector utils | 2024-03-19 11:32:01 +08:00
xs_courtesy | 388e043930 | add implementation for GetGPULaunchConfig1D | 2024-03-14 11:13:40 +08:00
xs_courtesy | a46598ac59 | add reusable utils for cuda | 2024-03-08 14:53:29 +08:00