mirror of https://github.com/hpcaitech/ColossalAI.git synced 2025-05-08 16:38:15 +00:00
ColossalAI/colossalai
Yuanheng Zhao b21aac5bae
[Inference] Optimize and Refactor Inference Batching/Scheduling ()
* add kvcache manager funcs for batching
* add batch bucket for batching
* revise RunningList struct in handler
* add kvcache/batch funcs for compatibility
* use new batching methods
* fix indexing bugs
* revise abort logic
* use cpu seq lengths/block tables
* rm unused attr in Sequence
* fix type conversion/default arg
* add and revise pytests
* revise pytests, rm unused tests
* rm unused statements
* fix pop finished indexing issue
* fix: use index in batch when retrieving inputs/update seqs
* use dict instead of odict in batch struct
* arg type hinting
* fix make compress
* refine comments
* fix: pop_n_seqs to pop the first n seqs
* add check in request handler
* remove redundant conversion
* fix test for request handler
* fix pop method in batch bucket
* fix prefill adding
2024-02-19 17:18:20 +08:00
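The commit notes above mention switching the batch struct from an OrderedDict to a plain dict and fixing `pop_n_seqs` to pop the *first* n sequences. A minimal sketch of that idea, with hypothetical names (this is not the actual ColossalAI `BatchBucket` API):

```python
class BatchBucket:
    """Illustrative batch bucket: tracks queued sequences for batching."""

    def __init__(self):
        # A plain dict preserves insertion order in Python 3.7+,
        # so an OrderedDict is unnecessary here.
        self._seqs = {}  # seq_id -> sequence payload

    def add_seq(self, seq_id, seq):
        self._seqs[seq_id] = seq

    def pop_n_seqs(self, n):
        """Pop the first n sequences in arrival order."""
        ids = list(self._seqs)[:n]
        return {i: self._seqs.pop(i) for i in ids}


bucket = BatchBucket()
for i, s in enumerate(["a", "b", "c"]):
    bucket.add_seq(i, s)

popped = bucket.pop_n_seqs(2)
# popped holds the two earliest sequences; "c" remains in the bucket
```

Relying on dict insertion order keeps the FIFO pop semantics without the extra overhead of `collections.OrderedDict`.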
_analyzer [misc] update pre-commit and run all files () 2023-09-19 14:20:26 +08:00
_C [setup] support pre-build and jit-build of cuda kernels () 2023-01-06 20:50:26 +08:00
accelerator [accelerator] fixed npu api 2024-01-29 14:27:52 +08:00
amp [npu] change device to accelerator api () 2024-01-09 10:20:05 +08:00
auto_parallel [npu] change device to accelerator api () 2024-01-09 10:20:05 +08:00
autochunk [misc] update pre-commit and run all files () 2023-09-19 14:20:26 +08:00
booster Merge branch 'main' into sync/npu 2024-01-18 12:05:21 +08:00
checkpoint_io [pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading weight not in weight_map when strict=False, fix llama flash attention forward, add flop estimation by megatron in llama benchmark () 2023-11-16 20:15:59 +08:00
cli [bug] Fix the version check bug in colossalai run when generating the cmd. () 2023-09-22 10:50:47 +08:00
cluster fix-test () 2024-01-03 14:26:13 +08:00
context [moe] merge moe into main () 2023-11-02 02:21:24 +00:00
device [npu] add npu support for hybrid plugin and llama () 2023-11-22 19:23:21 +08:00
fx [misc] update pre-commit and run all files () 2023-09-19 14:20:26 +08:00
inference [Inference] Optimize and Refactor Inference Batching/Scheduling () 2024-02-19 17:18:20 +08:00
interface [lazy] support from_pretrained () 2023-09-26 11:04:11 +08:00
kernel [Inference/opt] Fused KVCache Memcopy () 2024-02-07 17:15:42 +08:00
lazy [doc] add lazy init docs () 2023-09-27 10:24:04 +08:00
legacy merge commit 2024-01-31 10:41:47 +08:00
logging [misc] update pre-commit and run all files () 2023-09-19 14:20:26 +08:00
moe Merge pull request from hpcaitech/feature/npu 2024-01-29 13:49:39 +08:00
nn [feat] refactored extension module () 2024-01-25 17:01:48 +08:00
pipeline [feat] refactored extension module () 2024-01-25 17:01:48 +08:00
shardformer fix typo change dosen't to doesn't () 2024-01-30 09:57:38 +08:00
tensor fix some typo () 2024-01-25 13:56:27 +08:00
testing [npu] change device to accelerator api () 2024-01-09 10:20:05 +08:00
utils Merge pull request from hpcaitech/feature/npu 2024-01-29 13:49:39 +08:00
zero Merge pull request from hpcaitech/feature/npu 2024-01-29 13:49:39 +08:00
__init__.py [accelerator] init the accelerator module () 2023-11-30 13:25:17 +08:00
initialize.py [npu] change device to accelerator api () 2024-01-09 10:20:05 +08:00