ColossalAI/colossalai
Xu Kai fd6482ad8c
[inference] Refactor inference architecture (#5057)
* [inference] support only TP (#4998)

* support only tp

* enable tp
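The "support only TP" work above refers to tensor parallelism, where a layer's weight matrix is sharded across devices and each rank computes a partial result. As a generic illustration only (this is not ColossalAI's API; the helper names and the single-process "ranks" are hypothetical), column-wise TP for a linear layer can be sketched like this:

```python
def matmul(x, w):
    """Naive (n, k) x (k, m) matrix multiply on nested lists."""
    return [[sum(xi[t] * w[t][j] for t in range(len(w)))
             for j in range(len(w[0]))] for xi in x]

def column_shards(w, parts):
    """Split the weight's columns into `parts` shards, one per hypothetical rank."""
    m = len(w[0]) // parts
    return [[row[p * m:(p + 1) * m] for row in w] for p in range(parts)]

x = [[1.0, 2.0], [3.0, 4.0]]   # activations, replicated on every rank
w = [[1.0, 0.0, 2.0, 1.0],     # full weight (k=2, m=4)
     [0.0, 1.0, 1.0, 2.0]]

# Each "rank" multiplies against only its own column shard ...
partials = [matmul(x, shard) for shard in column_shards(w, parts=2)]

# ... and an all-gather-style concatenation of the column slices
# recovers the full output.
tp_out = [sum((p[i] for p in partials), []) for i in range(len(x))]

assert tp_out == matmul(x, w)
print(tp_out)
```

In a real multi-GPU setup the concatenation step is a collective communication (an all-gather) rather than a local list merge; the sketch only shows why sharding the columns leaves the final result unchanged.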

* add support for bloom (#5008)

* [refactor] refactor gptq and smoothquant llama (#5012)

* refactor gptq and smoothquant llama

* fix import error

* fix linear import torch-int

* fix smoothquant llama import error

* fix import accelerate error

* fix bug

* fix import smooth cuda

* fix smoothcuda

* [Inference Refactor] Merge chatglm2 with pp and tp (#5023)

merge chatglm with pp and tp

* [Refactor] remove useless inference code (#5022)

* remove useless code

* fix quant model

* fix test import bug

* move original inference code to legacy

* fix chatglm2

* [Refactor] refactor policy search and quant type controlling in inference (#5035)

* [Refactor] refactor policy search and quant type controlling in inference

* [inference] update readme (#5051)

* update readme

* update readme

* fix architecture

* fix table

* fix table

* [inference] update example (#5053)

* update example

* fix run.sh

* fix rebase bug

* fix some errors

* update readme

* add some features

* update interface

* update readme

* update benchmark

* add requirements-infer

---------

Co-authored-by: Bin Jia <45593998+FoolPlayer@users.noreply.github.com>
Co-authored-by: Zhongkai Zhao <kanezz620@gmail.com>
2023-11-19 21:05:05 +08:00
_analyzer [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
_C [setup] support pre-build and jit-build of cuda kernels (#2374) 2023-01-06 20:50:26 +08:00
amp [feature] Add clip_grad_norm for hybrid_parallel_plugin (#4837) 2023-10-12 11:32:37 +08:00
auto_parallel [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
autochunk [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
booster [gemini] gemini support extra-dp (#5043) 2023-11-16 21:03:04 +08:00
checkpoint_io [pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading weight not in weight_map when strict=False, fix llama flash attention forward, add flop estimation by megatron in llama benchmark (#5017) 2023-11-16 20:15:59 +08:00
cli [bug] Fix the version check bug in colossalai run when generating the cmd. (#4713) 2023-09-22 10:50:47 +08:00
cluster [gemini] gemini support tensor parallelism. (#4942) 2023-11-10 10:15:16 +08:00
context [moe] merge moe into main (#4978) 2023-11-02 02:21:24 +00:00
device [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
fx [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
inference [inference] Refactor inference architecture (#5057) 2023-11-19 21:05:05 +08:00
interface [lazy] support from_pretrained (#4801) 2023-09-26 11:04:11 +08:00
kernel [Kernels]Update triton kernels into 2.1.0 (#5046) 2023-11-16 16:43:15 +08:00
lazy [doc] add lazy init docs (#4808) 2023-09-27 10:24:04 +08:00
legacy [inference] Refactor inference architecture (#5057) 2023-11-19 21:05:05 +08:00
logging [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
moe [hotfix]: modify create_ep_hierarchical_group and add test (#5032) 2023-11-17 10:53:00 +08:00
nn [moe] merge moe into main (#4978) 2023-11-02 02:21:24 +00:00
pipeline [inference] Refactor inference architecture (#5057) 2023-11-19 21:05:05 +08:00
shardformer [inference] Refactor inference architecture (#5057) 2023-11-19 21:05:05 +08:00
tensor [hotfix]: modify create_ep_hierarchical_group and add test (#5032) 2023-11-17 10:53:00 +08:00
testing [test] merge old components to test to model zoo (#4945) 2023-10-20 10:35:08 +08:00
utils [moe] merge moe into main (#4978) 2023-11-02 02:21:24 +00:00
zero [gemini] gemini support extra-dp (#5043) 2023-11-16 21:03:04 +08:00
__init__.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
initialize.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00