Mirror of https://github.com/hpcaitech/ColossalAI.git (synced 2025-08-19 08:27:23 +00:00)
* [inference] Dynamic Batching for Single and Multiple GPUs (#4831)
* finish batch manager
* 1
* first
* fix
* fix dynamic batching
* llama infer
* finish test
* support different lengths generating
* del prints
* del prints
* fix
* fix bug

---------

Co-authored-by: CjhHa1 <cjh18671720497outlook.com>

* [inference] Async dynamic batching (#4894)
* finish input and output logic
* add generate
* test forward
* 1
* [inference]Re push async dynamic batching (#4901)
* adapt to ray server
* finish async
* finish test
* del test

---------

Co-authored-by: yuehuayingxueluo <867460659@qq.com>

* Revert "[inference]Re push async dynamic batching (#4901)" (#4905)
  This reverts commit fbf3c09e67.
* Revert "[inference] Async dynamic batching (#4894)"
  This reverts commit fced140250.
* Revert "[inference] Async dynamic batching (#4894)" (#4909)
  This reverts commit fced140250.
* Add Ray Distributed Environment Init Scripts
* support DynamicBatchManager base function
* revert _set_tokenizer version
* add driver async generate
* add async test
* fix bugs in test_ray_dist.py
* add get_tokenizer.py
* fix code style
* fix bugs about No module named 'pydantic' in ci test
* fix bugs in ci test
* fix bugs in ci test
* fix bugs in ci test
* [infer]Add Ray Distributed Environment Init Scripts (#4911)
* Revert "[inference] Async dynamic batching (#4894)"
  This reverts commit fced140250.
* Add Ray Distributed Environment Init Scripts
* support DynamicBatchManager base function
* revert _set_tokenizer version
* add driver async generate
* add async test
* fix bugs in test_ray_dist.py
* add get_tokenizer.py
* fix code style
* fix bugs about No module named 'pydantic' in ci test
* fix bugs in ci test
* fix bugs in ci test
* fix bugs in ci test
* support dynamic batch for bloom model and is_running function
* [Inference]Test for new Async engine (#4935)
* infer engine
* infer engine
* test engine
* test engine
* new manager
* change step
* add
* test
* fix
* fix
* finish test
* finish test
* finish test
* finish test
* add license

---------

Co-authored-by: yuehuayingxueluo <867460659@qq.com>

* add assertion for config (#4947)
* [Inference] Finish dynamic batching offline test (#4948)
* test
* fix test
* fix quant
* add default
* fix
* fix some bugs
* fix some bugs
* fix
* fix bug
* fix bugs
* reset param

---------

Co-authored-by: yuehuayingxueluo <867460659@qq.com>
Co-authored-by: Cuiqing Li <lixx3527@gmail.com>
Co-authored-by: CjhHa1 <cjh18671720497outlook.com>
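The commits above track the evolution of dynamic (continuous) batching for inference. As a rough sketch of the general idea only — every name below (Request, DynamicBatcher, forward_step) is hypothetical, and none of this is ColossalAI's actual implementation — a dynamic batch manager admits waiting requests into the running batch as soon as slots free up, rather than waiting for the whole batch to drain:

# Minimal sketch of dynamic batching for token-by-token generation.
# All names here are illustrative assumptions, not ColossalAI's API.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Request:
    prompt: list              # input token ids
    max_new_tokens: int
    output: list = field(default_factory=list)


class DynamicBatcher:
    def __init__(self, max_batch_size: int):
        self.max_batch_size = max_batch_size
        self.waiting = deque()
        self.running = []

    def add(self, req: Request) -> None:
        self.waiting.append(req)

    def step(self, forward_step) -> list:
        # Admit waiting requests as soon as slots free up, instead of
        # waiting for the whole batch to finish (continuous batching).
        while self.waiting and len(self.running) < self.max_batch_size:
            self.running.append(self.waiting.popleft())
        if not self.running:
            return []
        # One decode step for the entire running batch; forward_step is
        # assumed to return one new token id per running request.
        next_tokens = forward_step(self.running)
        still_running, finished = [], []
        for req, tok in zip(self.running, next_tokens):
            req.output.append(tok)
            if len(req.output) >= req.max_new_tokens:
                finished.append(req)
            else:
                still_running.append(req)
        self.running = still_running
        return finished


# Illustrative driver: a dummy forward_step that always emits token 0.
batcher = DynamicBatcher(max_batch_size=2)
for _ in range(3):
    batcher.add(Request(prompt=[1, 2, 3], max_new_tokens=4))
while batcher.waiting or batcher.running:
    batcher.step(lambda batch: [0] * len(batch))

The payoff over static batching is that sequences finishing early immediately free capacity for queued requests, which keeps GPU utilization high under mixed-length generation.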
32 lines
1.1 KiB
Python
try:
    import triton

    HAS_TRITON = True
except ImportError:
    HAS_TRITON = False
    print("Triton is not installed. Please install Triton to use Triton kernels.")

# An import error may still occur even if Triton itself is installed.
if HAS_TRITON:
    from .context_attention import bloom_context_attn_fwd, llama_context_attn_fwd
    from .copy_kv_cache_dest import copy_kv_cache_to_dest
    from .fused_layernorm import layer_norm
    from .gptq_triton import gptq_fused_linear_triton
    from .int8_rotary_embedding_kernel import int8_rotary_embedding_fwd
    from .smooth_attention import smooth_llama_context_attn_fwd, smooth_token_attention_fwd
    from .softmax import softmax
    from .token_attention_kernel import token_attention_fwd

    __all__ = [
        "llama_context_attn_fwd",
        "bloom_context_attn_fwd",
        "softmax",
        "layer_norm",
        "copy_kv_cache_to_dest",
        "token_attention_fwd",
        "gptq_fused_linear_triton",
        "int8_rotary_embedding_fwd",
        "smooth_llama_context_attn_fwd",
        "smooth_token_attention_fwd",
    ]
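Downstream code can use the HAS_TRITON flag to fall back gracefully when Triton is unavailable. A minimal usage sketch, assuming this file is importable as colossalai.kernel.triton; the one-argument softmax signature and the torch fallback are illustrative assumptions, not the repo's documented API:

# Hedged usage sketch of the optional-dependency flag defined above.
import torch

from colossalai.kernel.triton import HAS_TRITON


def softmax_any(x: torch.Tensor) -> torch.Tensor:
    if HAS_TRITON:
        # Only exported when the `import triton` above succeeded;
        # exact kernel signature assumed for illustration.
        from colossalai.kernel.triton import softmax
        return softmax(x)
    # Pure-PyTorch fallback when Triton is unavailable.
    return torch.softmax(x, dim=-1)


print(softmax_any(torch.randn(4, 8)).sum(dim=-1))  # each row sums to 1

Importing the kernel lazily inside the guard matters here: the Triton-backed names are only exported when HAS_TRITON is True, so a top-level import would raise ImportError on machines without Triton.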