Commit Graph

2161 Commits

Author SHA1 Message Date
Guangyao Zhang
d9d5e7ea1f [shardformer] Support the T5ForTokenClassification model (#5816)
* t5 token, still pytest fail

* Resolve T5 Pytest Failure

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix typos

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-06-27 16:40:38 +08:00
Hongxin Liu
5dfbcd7746 [zero] use bucket during allgather (#5860)
* [zero] use bucket during allgather

* [zero] rename api
2024-06-27 16:34:44 +08:00
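
The bucketed all-gather in this commit amortizes collective-launch overhead by packing many small parameter shards into one flat buffer per collective instead of issuing one all-gather per shard. A minimal sketch of the idea (illustrative helper, not ColossalAI's actual ZeRO code):

```python
import torch
import torch.distributed as dist

def bucketed_all_gather(shards: list[torch.Tensor], bucket_cap_mb: float = 25.0):
    """All-gather local parameter shards in flat buckets.

    Assumes torch.distributed is already initialized; returns one gathered
    tensor per bucket, laid out as [world_size, bucket_numel].
    """
    world_size = dist.get_world_size()
    cap = int(bucket_cap_mb * 1024 * 1024)
    gathered, bucket, size = [], [], 0

    def flush():
        if not bucket:
            return
        flat = torch.cat([s.reshape(-1) for s in bucket])   # pack shards into one buffer
        out = flat.new_empty(world_size * flat.numel())     # room for every rank's shard
        dist.all_gather_into_tensor(out, flat)              # one collective per bucket
        gathered.append(out.view(world_size, -1))
        bucket.clear()

    for s in shards:
        bucket.append(s)
        size += s.numel() * s.element_size()
        if size >= cap:
            flush()
            size = 0
    flush()
    return gathered
```
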
botbw
8e718a1421 [gemini] fixes for benchmarking (#5847)
* [gemini] fix missing return

* [gemini] fix missing arg pass

* [gemini] use gather tensor instead of list

* [test] enable flash attention for benchmark by default

* [test] enable flash attention for benchmark by default

---------

Co-authored-by: genghaozhe <939857490@qq.com>
2024-06-26 15:52:09 +08:00
Edenzzzz
2a25a2aff7 [Feature] optimize PP overlap (#5735)
* update to fully overlap, still debugging

* improve interface

* fixed deadlock bug

* debug NaN loss

* (experimental) use one comm group for send_fw_recv_fw to fix NaN

* cleaned up interfaces; use one batch p2p for all

* clean up; removed the double p2p batch case

* p2p test passed

* improve overlap: send fwd before backward

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* tentatively use 2 p2p batches

* remove two p2p batches

* fix typos

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* remove pp.sh

---------

Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: root <root@notebook-c55824c0-7742-45e8-9591-c855bb77ad29-0.notebook-c55824c0-7742-45e8-9591-c855bb77ad29.colossal-ai.svc.cluster.local>
2024-06-26 14:48:02 +08:00
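
The "one batch p2p for all" and "one comm group for send_fw_recv_fw" items boil down to posting the forward send and forward receive as a single batched p2p call so the two transfers can overlap. A rough sketch with plain torch.distributed (hypothetical helper, not the actual pipeline schedule code):

```python
import torch
import torch.distributed as dist

def send_fw_recv_fw(output: torch.Tensor, recv_shape, prev_rank: int, next_rank: int,
                    group=None) -> torch.Tensor:
    """Send this stage's forward output and receive the next input in one p2p batch."""
    recv_buf = torch.empty(recv_shape, dtype=output.dtype, device=output.device)
    ops = [
        dist.P2POp(dist.isend, output, next_rank, group=group),
        dist.P2POp(dist.irecv, recv_buf, prev_rank, group=group),
    ]
    for req in dist.batch_isend_irecv(ops):   # one batch instead of two separate p2p calls
        req.wait()
    return recv_buf
```
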
botbw
8a5c86439a [gemini] fix missing return (#5845) 2024-06-21 11:38:40 +08:00
Yuanheng Zhao
7b249c76e5 [Fix] Fix spec-dec Glide LlamaModel for compatibility with transformers (#5837)
* fix glide llama model

* revise
2024-06-19 15:37:53 +08:00
Kai Lv
0adca5b688 [launch] Support IPv4 host initialization in launch (#5822) 2024-06-18 19:18:29 +08:00
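
Under the hood, IPv4 host initialization amounts to a TCP rendezvous on an explicit host:port. A minimal sketch using plain torch.distributed (the helper name is illustrative, not the launcher's real code):

```python
import torch.distributed as dist

def init_from_ipv4(host: str, port: int, rank: int, world_size: int, backend: str = "nccl"):
    """Rendezvous on an explicit IPv4 host:port, e.g. host='192.168.1.10', port=29500."""
    dist.init_process_group(
        backend=backend,
        init_method=f"tcp://{host}:{port}",
        rank=rank,
        world_size=world_size,
    )
```
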
GuangyaoZhang
d84d68601a change 'xxx if xxx else None' to 'xxx or None' 2024-06-18 03:32:42 +00:00
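
This refactor relies on `x if x else None` and `x or None` returning the same result for every value of `x` (both yield `None` when `x` is falsy), for example:

```python
def pick(value):
    return value if value else None     # original form

def pick_short(value):
    return value or None                # refactored form

assert pick("abc") == pick_short("abc") == "abc"
assert pick("") is None and pick_short("") is None
assert pick(0) is None and pick_short(0) is None
```
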
GuangyaoZhang
a83a2336e8 rebase master llama change 2024-06-18 02:56:47 +00:00
GuangyaoZhang
363cde6957 merge model and attention forward 2024-06-18 02:32:41 +00:00
GuangyaoZhang
7a2b08646f Remove CohereLayerNorm and use existing layernorm 2024-06-18 02:32:41 +00:00
GuangyaoZhang
fe2e74c03a fix precommit 2024-06-18 02:31:33 +00:00
GuangyaoZhang
f656d61778 change command 2024-06-18 02:31:33 +00:00
GuangyaoZhang
0b81163bc0 Copy llama to command 2024-06-18 02:31:33 +00:00
Edenzzzz
8795bb2e80 Support 4d parallel + flash attention (#5789)
* support tp + sp + pp

* remove comments

---------

Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-06-17 17:40:47 +08:00
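
Combining tp + sp + pp with the data-parallel dimension gives the 4D setup in the title. A sketch of what the user-facing configuration could look like with HybridParallelPlugin (argument names and sequence-parallel modes vary across ColossalAI versions, so treat this as illustrative):

```python
from colossalai.booster import Booster
from colossalai.booster.plugin import HybridParallelPlugin

# Sketch only: check the current HybridParallelPlugin signature before copying.
plugin = HybridParallelPlugin(
    tp_size=2,                              # tensor parallelism
    pp_size=2,                              # pipeline parallelism
    sp_size=2,                              # sequence parallelism
    enable_sequence_parallelism=True,
    sequence_parallelism_mode="all_to_all",
    enable_flash_attention=True,            # pair 4D parallelism with flash attention
    zero_stage=1,                           # remaining ranks form the data-parallel dim
)
booster = Booster(plugin=plugin)
# model, optimizer, criterion, dataloader, _ = booster.boost(model, optimizer, criterion, dataloader)
```
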
flybird11111
2ddf624a86 [shardformer] upgrade transformers to 4.39.3 (#5815)
* [shardformer]upgrade transformers for gpt2/gptj/whisper (#5807)

* [shardformer] fix modeling of gpt2 and gptj

* [shardformer] fix whisper modeling

* [misc] update requirements

---------

Co-authored-by: ver217 <lhx0217@gmail.com>

* [shardformer]upgrade transformers for mistral (#5808)

* upgrade transformers for mistral

* fix

* fix

* [shardformer]upgrade transformers for llama (#5809)

* update transformers

fix

* fix

* fix

* [inference] upgrade transformers (#5810)

* update transformers

fix

* fix

* fix

* fix

* fix

* [gemini] update transformers for gemini (#5814)

---------

Co-authored-by: ver217 <lhx0217@gmail.com>
2024-06-14 10:59:33 +08:00
botbw
3bcbba9262 [gemini] quick fix on possible async operation (#5803)
* [gemini] quick fix on possible async operation

* [gemini] quick fix on possible async operation
2024-06-13 10:35:17 +08:00
Haze188
d9dddf574f [Gemini] Use async stream to prefetch and h2d data moving (#5781)
* use async stream to prefetch and h2d data moving

* Remove redundant code
2024-06-12 15:48:52 +08:00
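
The prefetch pattern this commit describes copies the next chunk host-to-device on a side CUDA stream while the default stream keeps computing, then synchronizes just before the data is used. A minimal sketch (not Gemini's actual chunk manager):

```python
import torch

prefetch_stream = torch.cuda.Stream()        # side stream dedicated to h2d prefetch

def prefetch_h2d(cpu_chunk: torch.Tensor, device: torch.device) -> torch.Tensor:
    """Start copying the next chunk to the GPU without blocking the default stream."""
    pinned = cpu_chunk.pin_memory()          # pinned memory makes the copy truly async
    with torch.cuda.stream(prefetch_stream):
        return pinned.to(device, non_blocking=True)

def wait_prefetch(gpu_chunk: torch.Tensor) -> torch.Tensor:
    """Block the default stream until the prefetch copy has landed, then use the chunk."""
    torch.cuda.current_stream().wait_stream(prefetch_stream)
    gpu_chunk.record_stream(torch.cuda.current_stream())  # tie lifetime to the consumer stream
    return gpu_chunk
```
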
Li Xingjian
8554585a5f [Inference] Fix flash-attn import and add model test (#5794)
* Fix torch int32 dtype

Signed-off-by: char-1ee <xingjianli59@gmail.com>

* Fix flash-attn import

Signed-off-by: char-1ee <xingjianli59@gmail.com>

* Add generalized model test

Signed-off-by: char-1ee <xingjianli59@gmail.com>

* Remove exposed path to model

Signed-off-by: char-1ee <xingjianli59@gmail.com>

* Add default value for use_flash_attn

Signed-off-by: char-1ee <xingjianli59@gmail.com>

* Rename model test

Signed-off-by: char-1ee <xingjianli59@gmail.com>

---------

Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-12 14:13:50 +08:00
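
A common way to make the flash-attn dependency optional, in the spirit of the import fix and the `use_flash_attn` default added here, is to guard the import and fall back to PyTorch's SDPA (generic pattern, not the repository's exact code):

```python
import torch.nn.functional as F

try:
    from flash_attn import flash_attn_func
    HAS_FLASH_ATTN = True
except ImportError:
    flash_attn_func = None
    HAS_FLASH_ATTN = False

def attention(q, k, v, use_flash_attn: bool = False):
    """q, k, v: [batch, seq_len, num_heads, head_dim]."""
    if use_flash_attn and HAS_FLASH_ATTN:
        return flash_attn_func(q, k, v, causal=True)
    # fallback: torch SDPA expects [batch, num_heads, seq_len, head_dim]
    out = F.scaled_dot_product_attention(
        q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2), is_causal=True
    )
    return out.transpose(1, 2)
```
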
Hongxin Liu
aa125bcc91 [shardformer] fix modeling of bloom and falcon (#5796) 2024-06-11 17:43:50 +08:00
Runyu Lu
c0948aff97 [Inference]refactor baichuan (#5791)
* refactor baichuan

* remove unused code and add TODO for lazyinit
2024-06-11 10:52:01 +08:00
char-1ee
f5981e808e Remove flash attention backend
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 10:02:19 +00:00
char-1ee
ceba662d22 Clean up
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 09:09:29 +00:00
char-1ee
5f398fc000 Pass inference model shard configs for module init
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 08:33:52 +00:00
char-1ee
eec77e5702 Fix tests and naming
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 08:33:47 +00:00
char-1ee
04386d9eff Refactor modeling by adding attention backend
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 08:33:47 +00:00
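
The backend refactor hides the choice of attention kernel behind a small interface so model code stays kernel-agnostic. A rough sketch with illustrative class names (not the actual inference backend classes):

```python
from abc import ABC, abstractmethod

import torch
import torch.nn.functional as F

class AttentionBackend(ABC):
    @abstractmethod
    def attend(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
               causal: bool = False) -> torch.Tensor:
        ...

class SDPABackend(AttentionBackend):
    """Default backend: torch SDPA, [batch, heads, seq, dim] layout."""
    def attend(self, q, k, v, causal=False):
        return F.scaled_dot_product_attention(q, k, v, is_causal=causal)

class FlashAttentionBackend(AttentionBackend):
    """Optional backend: flash-attn kernels, which use a [batch, seq, heads, dim] layout."""
    def attend(self, q, k, v, causal=False):
        from flash_attn import flash_attn_func  # optional dependency
        return flash_attn_func(q, k, v, causal=causal)

def get_attention_backend(use_flash_attn: bool) -> AttentionBackend:
    return FlashAttentionBackend() if use_flash_attn else SDPABackend()
```
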
Hongxin Liu
73e88a5553 [shardformer] fix import (#5788) 2024-06-06 19:09:50 +08:00
Hongxin Liu
b9d646fe9e [misc] fix dist logger (#5782) 2024-06-05 15:04:22 +08:00
botbw
3f7e3131d9 [gemini] optimize reduce scatter d2h copy (#5760)
* [gemini] optimize reduce scatter d2h copy

* [fix] fix missing reduce variable

* [refactor] remove legacy async reduce scatter code

* [gemini] missing sync

* Revert "[refactor] remove legacy async reduce scatter code"

This reverts commit 58ad76d466.

* [gemini] further optimize with async all reduce

* [fix] pass flag from manager to chunk
2024-06-05 14:23:13 +08:00
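
The optimization here is to let the device-to-host copy of the reduce-scatter result proceed without blocking, synchronizing only when the host data is needed; the "[gemini] missing sync" item is exactly that final synchronization. A minimal sketch (illustrative helper, not the Gemini chunk code):

```python
import torch
import torch.distributed as dist

def reduce_scatter_then_d2h(grad_chunk: torch.Tensor, cpu_out: torch.Tensor) -> torch.Tensor:
    """Reduce-scatter a gradient chunk, then start its d2h copy without blocking.

    Assumes grad_chunk.numel() is divisible by the world size and cpu_out is pinned.
    """
    world_size = dist.get_world_size()
    shard = torch.empty(grad_chunk.numel() // world_size,
                        dtype=grad_chunk.dtype, device=grad_chunk.device)
    dist.reduce_scatter_tensor(shard, grad_chunk)
    cpu_out.copy_(shard, non_blocking=True)   # overlaps with later GPU work
    # NOTE: the "missing sync" -- synchronize the stream (or an event) before the
    # host actually reads cpu_out, otherwise it may see stale data.
    return shard
```
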
Edenzzzz
79f7a7b211 [misc] Accelerate CI for zero and dist optim (#5758)
* remove fp16 from lamb

* remove d2h copy in checking states

---------

Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-06-05 11:25:19 +08:00
flybird11111
50b4c8e8cf [hotfix] fix llama flash attention forward (#5777) 2024-06-05 10:56:47 +08:00
yuehuayingxueluo
b45000f839 [Inference]Add Streaming LLM (#5745)
* Add Streaming LLM

* add some parameters to llama_generation.py

* verify streamingllm config

* add test_streamingllm.py

* modified according to the opinions of review

* add Citation

* change _block_tables tolist
2024-06-05 10:51:19 +08:00
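
StreamingLLM keeps a handful of "attention sink" tokens at the start of the KV cache plus a sliding window of recent tokens, evicting everything in between so generation length is unbounded at roughly constant memory. A minimal sketch of that eviction policy (not the repository's cache manager):

```python
import torch

def evict_kv_cache(k: torch.Tensor, v: torch.Tensor,
                   num_sink: int = 4, window: int = 1024):
    """k, v: [batch, heads, seq_len, head_dim]; keep sink tokens + recent window."""
    seq_len = k.size(2)
    if seq_len <= num_sink + window:
        return k, v
    keep = torch.cat([
        torch.arange(num_sink, device=k.device),                   # attention sinks
        torch.arange(seq_len - window, seq_len, device=k.device),  # recent tokens
    ])
    return k[:, :, keep], v[:, :, keep]
```
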
Yuanheng Zhao
406443200f [Hotfix] Add missing init file in inference.executor (#5774) 2024-06-03 22:29:39 +08:00
duanjunwen
1b76564e16 [test] Fix/fix testcase (#5770)
* [fix] branch for fix testcase;

* [fix] fix test_analyzer & test_auto_parallel;

* [fix] remove local change about moe;

* [fix] rm local change moe;
2024-06-03 15:26:01 +08:00
flybird11111
3f2be80530 fix (#5765) 2024-06-03 11:25:18 +08:00
botbw
023ea13cb5 Merge pull request #5749 from hpcaitech/prefetch
[Gemini] Prefetch next chunk before each op
2024-05-29 15:35:54 +08:00
hxwang
8547562884 [chore] remove unnecessary assert since compute list might not be recorded 2024-05-28 05:16:02 +00:00
hxwang
e5e3320948 [bug] continue fix 2024-05-28 02:41:23 +00:00
hxwang
936dd96dbb [bug] workaround for idx fix 2024-05-28 02:33:12 +00:00
Edenzzzz
5f8c0a0ac3 [Feature] auto-cast optimizers to distributed version (#5746)
* auto-cast optimizers to distributed

* fix galore casting

* logger

---------

Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-05-24 17:24:16 +08:00
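
The auto-cast feature maps a plain torch optimizer onto its distributed counterpart when a parallel plugin is in use. A loose sketch of the lookup-and-rebuild idea; the class names and registry here are placeholders, not ColossalAI's actual optimizer classes:

```python
import torch

# Placeholder mapping: real distributed optimizer classes may have different names.
_DIST_OPTIM_NAMES = {
    "Lamb": "DistributedLamb",
    "Adam": "DistributedAdam",
}

def cast_to_distributed(optimizer: torch.optim.Optimizer, registry: dict):
    """Rebuild `optimizer` as its distributed counterpart, keeping its param groups."""
    dist_name = _DIST_OPTIM_NAMES.get(type(optimizer).__name__)
    if dist_name is None or dist_name not in registry:
        return optimizer                        # no distributed variant: keep as-is
    dist_cls = registry[dist_name]
    # param_groups already carry per-group hyperparameters; defaults fill the rest
    return dist_cls(optimizer.param_groups, **optimizer.defaults)
```
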
hxwang
ff507b755e Merge branch 'main' of github.com:hpcaitech/ColossalAI into prefetch 2024-05-24 04:05:07 +00:00
botbw
2fc85abf43 [gemini] async grad chunk reduce (all-reduce&reduce-scatter) (#5713)
* [gemini] async grad chunk reduce (all-reduce&reduce-scatter)

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [gemini] add test

* [gemini] rename func

* [gemini] update llama benchmark

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [gemini] use tensor counter

* [gemini] change default config in GeminiPlugin and GeminiDDP

* [chore] typo

* [gemini] fix sync issue & add test cases

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-05-24 10:31:16 +08:00
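
Asynchronous chunk reduction fires the collective (all-reduce or reduce-scatter) for each gradient chunk as soon as the chunk is full, keeps the work handle, and waits on all handles only before the optimizer step, so communication overlaps with the rest of backward. A sketch of that bookkeeping (illustrative, not the Gemini chunk manager):

```python
import torch
import torch.distributed as dist

class AsyncChunkReducer:
    """Fire one collective per gradient chunk and defer all waits to just before step()."""

    def __init__(self, reduce_scatter: bool = False):
        self.reduce_scatter = reduce_scatter
        self.pending = []

    def reduce_chunk(self, chunk: torch.Tensor) -> torch.Tensor:
        if self.reduce_scatter:
            ws = dist.get_world_size()
            shard = torch.empty(chunk.numel() // ws, dtype=chunk.dtype, device=chunk.device)
            work = dist.reduce_scatter_tensor(shard, chunk, async_op=True)
        else:
            shard = chunk
            work = dist.all_reduce(chunk, async_op=True)
        self.pending.append(work)   # don't wait here; keep overlapping with backward
        return shard

    def wait_all(self):
        for work in self.pending:   # call right before optimizer.step()
            work.wait()
        self.pending.clear()
```
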
Jianghai
85946d4236 [Inference]Fix readme and example for API server (#5742)
* fix chatapi readme and example

* updating doc

* add an api and change the doc

* remove

* add credits and del 'API' heading

* readme

* readme
2024-05-24 10:03:05 +08:00
hxwang
15d21a077a Merge remote-tracking branch 'origin/main' into prefetch 2024-05-23 15:49:33 +00:00
binmakeswell
4647ec28c8 [inference] release (#5747)
* [inference] release

* [inference] release

* [inference] release

* [inference] release

* [inference] release

* [inference] release

* [inference] release
2024-05-23 17:44:06 +08:00
Yuanheng Zhao
df6747603f [Colossal-Inference] (v0.1.0) Merge pull request #5739 from hpcaitech/feature/colossal-infer
[Inference] Merge feature/colossal-infer
2024-05-22 14:31:09 +08:00
Yuanheng Zhao
bd38fe6b91 [NFC] Fix code factors on inference triton kernels (#5743) 2024-05-21 22:12:15 +08:00
botbw
13c06d36a3 [bug] fix early return (#5740)
* [bug] fix silly bug

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [chore] add test for prefetch

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-05-21 14:21:58 +08:00
Haze188
22ce873c3f [Shardformer] Add parallel output for shardformer models(bloom, falcon) (#5702)
* [pre-commit.ci] auto fixes from pre-commit.com hooks

* add parallel cross entropy output for falcon model & fix some typos in bloom.py

* fix module name error, self.model -> self.transformers in bloom, falcon model

* Fix the overflow bug of distributed cross entropy loss function when training with fp16

* add dtype to parallel cross entropy loss function

* fix dtype-related typos and prettify loss.py

* fix grad dtype and update dtype mismatch error

* fix typo bugs
2024-05-21 11:07:13 +08:00
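
The fp16 overflow fix and the added dtype argument amount to doing the softmax/cross-entropy math in fp32 and casting the loss back afterwards. A minimal single-GPU sketch of the idea (the real loss.py additionally splits the vocabulary across tensor-parallel ranks):

```python
from typing import Optional

import torch
import torch.nn.functional as F

def cross_entropy_fp32(logits: torch.Tensor, labels: torch.Tensor,
                       dtype: Optional[torch.dtype] = None) -> torch.Tensor:
    """Compute cross entropy in fp32 to avoid fp16 overflow, then cast back."""
    out_dtype = dtype if dtype is not None else logits.dtype
    loss = F.cross_entropy(logits.float(), labels)   # exp / log-sum-exp done in fp32
    return loss.to(out_dtype)
```
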
pre-commit-ci[bot]
b3c0e6d871 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-05-21 02:09:15 +00:00