mirror of https://github.com/hpcaitech/ColossalAI.git synced 2025-05-05 06:58:09 +00:00
Commit Graph

2860 Commits

Author SHA1 Message Date
Xuanlei Zhao
dc003c304c
[moe] merge moe into main ()
* update moe module
* support openmoe
2023-11-02 02:21:24 +00:00
Hongxin Liu
8993c8a817
[release] update version ()
* [release] update version

* [hotfix] fix ci
2023-11-01 13:41:22 +08:00
Bin Jia
b6696beb04
[Pipeline Inference] Merge pp with tp ()
* refactor pipeline into new CaiInferEngine

* update llama modeling forward

* merge tp with pp

* update docstring

* optimize test workflow and example

* fix typo

* add assert and todo
2023-11-01 12:46:21 +08:00
ppt0011
335cb105e2
[doc] add supported feature diagram for hybrid parallel plugin () 2023-10-31 19:56:42 +08:00
Baizhou Zhang
c040d70aa0
[hotfix] fix the bug of repeatedly storing param group () 2023-10-31 14:48:01 +08:00
littsk
be82b5d4ca
[hotfix] Fix the bug where process groups were not being properly released. ()
* Fix the bug where process groups were not being properly released.

* test

* Revert "test"

This reverts commit 479900c139.
2023-10-31 14:47:30 +08:00
Cuiqing Li (李崔卿)
4f0234f236
[doc] Update doc for colossal-inference ()
* update doc

* Update README.md

---------

Co-authored-by: cuiqing.li <lixx336@gmail.com>
2023-10-31 10:48:07 +08:00
Yuanchen
abe071b663
fix ColossalEval ()
Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
2023-10-31 10:30:03 +08:00
Cuiqing Li
459a88c806
[Kernels] Updated Triton kernels to 2.1.0 and added flash-decoding for llama token attention ()
* adding flash-decoding

* clean

* adding kernel

* adding flash-decoding

* add integration

* add

* adding kernel

* adding kernel

* adding triton 2.1.0 features for inference

* update bloom triton kernel

* remove useless vllm kernels

* clean codes

* fix

* adding files

* fix readme

* update llama flash-decoding

---------

Co-authored-by: cuiqing.li <lixx336@gmail.com>
2023-10-30 14:04:37 +08:00
Jianghai
cf579ff46d
[Inference] Dynamic Batching Inference, online and offline ()
* [inference] Dynamic Batching for Single and Multiple GPUs ()

* finish batch manager

* 1

* first

* fix

* fix dynamic batching

* llama infer

* finish test

* support generating with different lengths

* del prints

* del prints

* fix

* fix bug

---------

Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>

* [inference] Async dynamic batching  ()

* finish input and output logic

* add generate

* test forward

* 1

* [inference]Re push async dynamic batching ()

* adapt to ray server

* finish async

* finish test

* del test

---------

Co-authored-by: yuehuayingxueluo <867460659@qq.com>

* Revert "[inference]Re push async dynamic batching ()" ()

This reverts commit fbf3c09e67.

* Revert "[inference] Async dynamic batching  ()"

This reverts commit fced140250.

* Revert "[inference] Async dynamic batching  ()" ()

This reverts commit fced140250.

* Add Ray Distributed Environment Init Scripts

* support DynamicBatchManager base function

* revert _set_tokenizer version

* add driver async generate

* add async test

* fix bugs in test_ray_dist.py

* add get_tokenizer.py

* fix code style

* fix bugs about No module named 'pydantic' in ci test

* fix bugs in ci test

* fix bugs in ci test

* fix bugs in ci test

* [infer]Add Ray Distributed Environment Init Scripts ()

* Revert "[inference] Async dynamic batching  ()"

This reverts commit fced140250.

* Add Ray Distributed Environment Init Scripts

* support DynamicBatchManager base function

* revert _set_tokenizer version

* add driver async generate

* add async test

* fix bugs in test_ray_dist.py

* add get_tokenizer.py

* fix code style

* fix bugs about No module named 'pydantic' in ci test

* fix bugs in ci test

* fix bugs in ci test

* fix bugs in ci test

* support dynamic batch for bloom model and is_running function

* [Inference]Test for new Async engine ()

* infer engine

* infer engine

* test engine

* test engine

* new manager

* change step

* add

* test

* fix

* fix

* finish test

* finish test

* finish test

* finish test

* add license

---------

Co-authored-by: yuehuayingxueluo <867460659@qq.com>

* add assertion for config ()

* [Inference] Finish dynamic batching offline test ()

* test

* fix test

* fix quant

* add default

* fix

* fix some bugs

* fix some bugs

* fix

* fix bug

* fix bugs

* reset param

---------

Co-authored-by: yuehuayingxueluo <867460659@qq.com>
Co-authored-by: Cuiqing Li <lixx3527@gmail.com>
Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
2023-10-30 10:52:19 +08:00
アマデウス
4e4a10c97d
updated C++17 compiler flags () 2023-10-27 18:19:56 +08:00
Bin Jia
1db6727678
[Pipeline inference] Combine kvcache with pipeline inference ()
* merge kvcache with pipeline inference and refactor the code structure

* support ppsize > 2

* refactor pipeline code

* do pre-commit

* modify benchmark

* fix benchmark

* polish code

* add docstring and update readme

* refactor the code

* fix some logic bugs in ppinfer

* polish readme

* fix typo

* skip infer test
2023-10-27 16:19:54 +08:00
Jianghai
c6cd629e7a
[Inference] Add Bench Chatglm2 script ()
* add bench chatglm

* fix bug and make utils

---------

Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
2023-10-24 13:11:15 +08:00
Xu Kai
785802e809
[inference] add reference and fix some bugs ()
* add reference and fix some bugs

* update gptq init

---------

Co-authored-by: Xu Kai <xukai16@foxamil.com>
2023-10-20 13:39:34 +08:00
Hongxin Liu
b8e770c832
[test] merge old components to test to model zoo ()
* [test] add custom models in model zoo

* [test] update legacy test

* [test] update model zoo

* [test] update gemini test

* [test] remove components to test
2023-10-20 10:35:08 +08:00
Cuiqing Li
3a41e8304e
[Refactor] Integrated some lightllm kernels into token-attention ()
* add some req for inference

* clean codes

* add codes

* add some lightllm deps

* clean codes

* hello

* delete rms files

* add some comments

* add comments

* add doc

* add lightllm deps

* add lightllm chatglm2 kernels

* add lightllm chatglm2 kernels

* replace rotary embedding with lightllm kernel

* add some comments

* add some comments

* add some comments

* add

* replace fwd kernel att1

* fix an arg

* add

* add

* fix token attention

* add some comments

* clean codes

* modify comments

* fix readme

* fix bug

* fix bug

---------

Co-authored-by: cuiqing.li <lixx336@gmail.com>
Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
2023-10-19 22:22:47 +08:00
digger yu
11009103be
[nfc] fix some typo with colossalai/ docs/ etc. () 2023-10-18 15:44:04 +08:00
github-actions[bot]
486d06a2d5
[format] applied code formatting on changed files in pull request 4820 ()
Co-authored-by: github-actions <github-actions@github.com>
2023-10-18 11:46:37 +08:00
Zhongkai Zhao
c7aa319ba0
[test] add no master test for low level zero plugin () 2023-10-18 11:41:23 +08:00
Hongxin Liu
1f5d2e8062
[hotfix] fix torch 2.0 compatibility ()
* [hotfix] fix launch

* [test] fix test gemini optim

* [shardformer] fix vit
2023-10-18 11:05:25 +08:00
Baizhou Zhang
21ba89cab6
[gemini] support gradient accumulation ()
* add test

* fix no_sync bug in low level zero plugin

* fix test

* add argument for grad accum

* add grad accum in backward hook for gemini

* finish implementation, rewrite tests

* fix test

* skip stuck model in low level zero test

* update doc

* optimize communication & fix gradient checkpoint

* modify doc

* cleaning codes

* update cpu adam fp16 case
2023-10-17 14:07:21 +08:00
github-actions[bot]
a41cf88e9b
[format] applied code formatting on changed files in pull request 4908 ()
Co-authored-by: github-actions <github-actions@github.com>
2023-10-17 10:48:24 +08:00
Hongxin Liu
4f68b3f10c
[kernel] support pure fp16 for cpu adam and update gemini optim tests ()
* [kernel] support pure fp16 for cpu adam ()

* [kernel] fix cpu adam kernel for pure fp16 and update tests ()

* [kernel] fix cpu adam

* [test] update gemini optim test
2023-10-16 21:56:53 +08:00
Zian(Andy) Zheng
7768afbad0 Update flash_attention_patch.py
To be compatible with a new change in the Transformers library, where a new argument 'padding_mask' was added to the forward function of the attention layer.
https://github.com/huggingface/transformers/pull/25598
2023-10-16 14:00:45 +08:00
Xu Kai
611a5a80ca
[inference] Add smoothquant for llama ()
* [inference] add int8 rotary embedding kernel for smoothquant ()

* [inference] add smoothquant llama attention ()

* add smoothquant llama attention

* remove useless code

* remove useless code

* fix import error

* rename file name

* [inference] add silu linear fusion for smoothquant llama mlp  ()

* add silu linear

* update skip condition

* catch smoothquant cuda lib exception

* process exception for tests

* [inference] add llama mlp for smoothquant ()

* add llama mlp for smoothquant

* fix down out scale

* remove duplicate lines

* add llama mlp check

* delete useless code

* [inference] add smoothquant llama ()

* add smoothquant llama

* fix attention accuracy

* fix accuracy

* add kv cache and save pretrained

* refactor example

* delete smooth

* refactor code

* [inference] add smooth function and delete useless code for smoothquant ()

* add smooth function and delete useless code

* update datasets

* remove duplicate import

* delete useless file

* refactor codes ()

* refactor code

* add license

* add torch-int and smoothquant license
2023-10-16 11:28:44 +08:00
Zhongkai Zhao
a0684e7bd6
[feature] support no master weights option for low level zero plugin ()
* [feature] support no master weights for low level zero plugin

* [feature] support no master weights for low level zero plugin, remove data copy when no master weights

* remove data copy and typecasting when no master weights

* not load weights to cpu when using no master weights

* fix grad: use fp16 grad when no master weights

* only do not update working param when no master weights

* fix: only do not update working param when no master weights

* fix: passing params in dict format in hybrid plugin

* fix: remove extra params (tp_process_group) in hybrid_parallel_plugin
2023-10-13 07:57:45 +00:00
Xu Kai
77a9328304
[inference] add llama2 support ()
* add llama2 support

* fix multi group bug
2023-10-13 13:09:23 +08:00
Baizhou Zhang
39f2582e98
[hotfix] fix lr scheduler bug in torch 2.0 () 2023-10-12 14:04:24 +08:00
littsk
83b52c56cd
[feature] Add clip_grad_norm for hybrid_parallel_plugin ()
* Add clip_grad_norm for hybrid_parallel_plugin

* polish code

* add unittests

* Move tp to a higher-level optimizer interface.

* bug fix

* polish code
2023-10-12 11:32:37 +08:00
Hongxin Liu
df63564184
[gemini] support amp o3 for gemini ()
* [gemini] support no reuse fp16 chunk

* [gemini] support no master weight for optim

* [gemini] support no master weight for gemini ddp

* [test] update gemini tests

* [test] update gemini tests

* [plugin] update gemini plugin

* [test] fix gemini checkpointio test

* [test] fix gemini checkpoint io
2023-10-12 10:39:08 +08:00
ppt0011
c1fab951e7
Merge pull request from ppt0011/main
[doc] add reminder for issue encountered with hybrid adam
2023-10-12 10:27:10 +08:00
littsk
ffd9a3cbc9
[hotfix] fix bug in sequence parallel test () 2023-10-11 19:30:41 +08:00
ppt0011
1dcaf249bd [doc] add reminder for issue encountered with hybrid adam 2023-10-11 17:51:14 +08:00
Xu Kai
fdec650bb4
fix test llama () 2023-10-11 17:43:01 +08:00
Bin Jia
08a9f76b2f
[Pipeline Inference] Sync pipeline inference branch to main ()
* [pipeline inference] pipeline inference ()

* add pp stage manager as circle stage

* fix a bug when create process group

* add ppinfer basic framework

* add micro batch manager and support kvcache-pp gpt2 fwd

* add generate schedule

* use mb size to control mb number

* support generate with kv cache

* add output, remove unused code

* add test

* reuse shardformer to build model

* refactor some code and use the same attribute name of hf

* fix review and add test for generation

* remove unused file

* fix CI

* add cache clear

* fix code error

* fix typo

* [Pipeline inference] Modify to tieweight ()

* add pp stage manager as circle stage

* fix a bug when create process group

* add ppinfer basic framework

* add micro batch manager and support kvcache-pp gpt2 fwd

* add generate schedule

* use mb size to control mb number

* support generate with kv cache

* add output, remove unused code

* add test

* reuse shardformer to build model

* refactor some code and use the same attribute name of hf

* fix review and add test for generation

* remove unused file

* modify the way of saving newtokens

* modify to tieweight

* modify test

* remove unused file

* solve review

* add docstring

* [Pipeline inference] support llama pipeline inference ()

* support llama pipeline inference

* remove tie weight operation

* [pipeline inference] Fix the blocking of communication when ppsize is 2 ()

* add benchmark verbose

* fix export tokens

* fix benchmark verbose

* add P2POp style to do p2p communication

* modify schedule as p2p type when ppsize is 2

* remove unused code and add docstring

* [Pipeline inference] Refactor code, add docsting, fix bug ()

* add benchmark script

* update argparse

* fix fp16 load

* refactor code style

* add docstring

* polish code

* fix test bug

* [Pipeline inference] Add pipeline inference docs ()

* add readme doc

* add an icon

* Add performance

* update table of contents

* refactor code ()
2023-10-11 11:40:06 +08:00
Camille Zhong
652adc2215 Update README.md 2023-10-10 23:19:34 +08:00
Camille Zhong
afe10a85fd Update README.md 2023-10-10 23:19:34 +08:00
Camille Zhong
d6c4b9b370 Update main README.md
add modelscope model link
2023-10-10 23:19:34 +08:00
Camille Zhong
3043d5d676 Update modelscope link in README.md
add modelscope link
2023-10-10 23:19:34 +08:00
flybird11111
6a21f96a87
[doc] update advanced tutorials, training gpt with hybrid parallelism ()
* [doc] update advanced tutorials, training gpt with hybrid parallelism

* [doc] update advanced tutorials, training gpt with hybrid parallelism

* update vit tutorials

* update vit tutorials

* update vit tutorials

* update vit tutorials

* update en/train_vit_with_hybrid_parallel.py

* fix

* resolve comments

* fix
2023-10-10 08:18:55 +00:00
Blagoy Simandoff
8aed02b957
[nfc] fix minor typo in README () 2023-10-07 17:51:11 +08:00
Camille Zhong
cd6a962e66 [NFC] polish code style () 2023-10-07 13:36:52 +08:00
Michelle
07ed155e86 [NFC] polish colossalai/inference/quant/gptq/cai_gptq/__init__.py code style () 2023-10-07 13:36:52 +08:00
littsk
eef96e0877 polish code for gptq () 2023-10-07 13:36:52 +08:00
Hongxin Liu
cb3a25a062
[checkpointio] hotfix torch 2.0 compatibility () 2023-10-07 10:45:52 +08:00
ppt0011
ad23460cf8
Merge pull request from KKZ20/test/model_support_for_low_level_zero
[test] remove the redundant code of model output transformation in torchrec
2023-10-06 09:32:33 +08:00
ppt0011
81ee91f2ca
Merge pull request from Shawlleyw/main
[doc]: typo in document of booster low_level_zero plugin
2023-10-06 09:27:54 +08:00
shaoyuw
c97a3523db fix: typo in comment of low_level_zero plugin 2023-10-05 16:30:34 +00:00
Zhongkai Zhao
db40e086c8 [test] modify model supporting part of low_level_zero plugin (including correspoding docs) 2023-10-05 15:10:31 +08:00
Xu Kai
d1fcc0fa4d
[infer] fix test bug ()
* fix test bug

* delete useless code

* fix typo
2023-10-04 10:01:03 +08:00