101 Commits

Author SHA1 Message Date
Hongxin Liu
fae6c92ead Merge branch 'main' into feature/shardformer 2023-09-05 21:54:08 +08:00
Hongxin Liu
ac178ca5c1 [legacy] move builder and registry to legacy (#4603) 2023-09-05 21:53:10 +08:00
Hongxin Liu
89fe027787 [legacy] move trainer to legacy (#4545)
* [legacy] move trainer to legacy

* [doc] update docs related to trainer

* [test] ignore legacy test
2023-09-05 21:53:10 +08:00
flybird11111
ec0866804c [shardformer] update shardformer readme (#4617) 2023-09-05 13:14:41 +08:00
Hongxin Liu
a39a5c66fe Merge branch 'main' into feature/shardformer 2023-09-04 23:43:13 +08:00
flybird11111
0a94fcd351 [shardformer] update bert finetune example with HybridParallelPlugin (#4584)
* [shardformer] fix opt test hanging

* fix

* test

* fix test

* remove print

* add fix

* [shardformer] add bert finetune example

* [shardformer] fix epoch change

* [shardformer] broadcast add pp group

* [shardformer] fix opt test hanging

* fix

* test

* [shardformer] zero1+pp and the corresponding tests (#4517)

* pause

* finish pp+zero1

* Update test_shard_vit.py

* [shardformer/fix overlap bug] fix overlap bug, add overlap as an option in shardco… (#4516)

* fix overlap bug and support bert, add overlap as an option in shardconfig

* support overlap for chatglm and bloom

* [shardformer] fix emerged bugs after updating transformers (#4526)

* test

* fix test

* remove print

* add fix

* [shardformer] add bert finetune example

* [shardformer] Add overlap support for gpt2 (#4535)

* add overlap support for gpt2

* remove unused code

* [shardformer] support pp+tp+zero1 tests (#4531)

* [shardformer] fix opt test hanging

* fix

* test

* fix test

* remove print

* add fix

* [shardformer] pp+tp+zero1

* [shardformer] fix submodule replacement bug when enabling pp (#4544)

* [shardformer] support sharded optimizer checkpointIO of HybridParallelPlugin (#4540)

* implement sharded optimizer saving

* add more param info

* finish implementation of sharded optimizer saving

* fix bugs in optimizer sharded saving

* add pp+zero test

* param group loading

* greedy loading of optimizer

* fix bug when loading

* implement optimizer sharded saving

* add optimizer test & arrange checkpointIO utils

* fix gemini sharding state_dict

* add verbose option

* add loading of master params

* fix typehint

* fix master/working mapping in fp16 amp

* [shardformer] add bert finetune example

* [shardformer] fix epoch change

* [shardformer] broadcast add pp group

* rebase feature/shardformer

* update pipeline

* [shardformer] fix

* [shardformer] bert finetune fix

* [shardformer] add all_reduce operation to loss

* [shardformer] make compatible with pytree.

* [shardformer] disable tp

* [shardformer] add 3d plugin to ci test

* [shardformer] update num_microbatches to None

* [shardformer] update microbatchsize

* [shardformer] update assert

* update scheduler

---------

Co-authored-by: Jianghai <72591262+CjhHa1@users.noreply.github.com>
Co-authored-by: Bin Jia <45593998+FoolPlayer@users.noreply.github.com>
Co-authored-by: Baizhou Zhang <eddiezhang@pku.edu.cn>
2023-09-04 21:46:29 +08:00
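For orientation, the finetune and pp+tp+zero1 work in the commit above all funnels through one flow: build a `HybridParallelPlugin` and hand the training objects to the `Booster`. A minimal sketch under the plugin API of this period; the sizes (`tp_size`, `pp_size`, `zero_stage`, `microbatch_size`) and the toy model are illustrative assumptions, not the example's own values:

```python
import colossalai
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from colossalai.booster import Booster
from colossalai.booster.plugin import HybridParallelPlugin

# Needs a distributed launch, e.g. torchrun --nproc_per_node=4 finetune_sketch.py
colossalai.launch_from_torch(config={})

model = nn.Sequential(nn.Linear(32, 32), nn.GELU(), nn.Linear(32, 2))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=8)

# tp + pp + zero1: the combination the tests in the commit above cover.
plugin = HybridParallelPlugin(tp_size=2, pp_size=2, zero_stage=1, microbatch_size=2)
booster = Booster(plugin=plugin)
model, optimizer, criterion, dataloader, _ = booster.boost(
    model, optimizer, criterion, dataloader
)
```

With `pp_size > 1` the training step then runs through the booster's pipeline schedule over microbatches rather than a plain forward/backward, which is what the microbatch and scheduler fixes in the commit body touch.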
binmakeswell
8d7b02290f [doc] add llama2 benchmark (#4604)
* [doc] add llama2 benchmark
2023-09-04 13:49:33 +08:00
Hongxin Liu
0b00def881 [example] add llama2 example (#4527)
* [example] transfer llama-1 example

* [example] fit llama-2

* [example] refactor scripts folder

* [example] fit new gemini plugin

* [cli] fix multinode runner

* [example] fit gemini optim checkpoint

* [example] refactor scripts

* [example] update requirements

* [example] rename llama to llama2

* [example] update readme and pretrain script

* [example] refactor scripts
2023-08-28 17:59:11 +08:00
Hongxin Liu
27061426f7 [gemini] improve compatibility and add static placement policy (#4479)
* [gemini] remove distributed-related part from colotensor (#4379)

* [gemini] remove process group dependency

* [gemini] remove tp part from colo tensor

* [gemini] patch inplace op

* [gemini] fix param op hook and update tests

* [test] remove useless tests

* [misc] fix requirements

* [test] fix model zoo

* [misc] update requirements

* [gemini] refactor gemini optimizer and gemini ddp (#4398)

* [gemini] update optimizer interface

* [gemini] renaming gemini optimizer

* [gemini] refactor gemini ddp class

* [example] update gemini related example

* [plugin] fix gemini plugin args

* [test] update gemini ckpt tests

* [gemini] fix checkpoint io

* [example] fix opt example requirements

* [example] fix opt example

* [gemini] add static placement policy (#4443)

* [gemini] add static placement policy

* [gemini] fix param offload

* [test] update gemini tests

* [plugin] update gemini plugin

* [plugin] update gemini plugin docstr

* [misc] fix flash attn requirement

* [test] fix gemini checkpoint io test

* [example] update resnet example result (#4457)

* [example] update bert example result (#4458)

* [doc] update gemini doc (#4468)

* [example] update gemini related examples (#4473)

* [example] update gpt example

* [example] update dreambooth example

* [example] update vit

* [example] update opt

* [example] update palm

* [example] update vit and opt benchmark

* [hotfix] fix bert in model zoo (#4480)

* [hotfix] fix bert in model zoo

* [test] remove chatglm gemini test

* [test] remove sam gemini test

* [test] remove vit gemini test

* [hotfix] fix opt tutorial example (#4497)

* [hotfix] fix opt tutorial example
2023-08-24 09:29:25 +08:00
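The static placement policy added in #4443 pins parameter and optimizer-state placement up front instead of letting the runtime "auto" policy migrate chunks between devices. A minimal sketch, assuming the `GeminiPlugin` keywords of this period; the offload fractions are illustrative:

```python
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin

plugin = GeminiPlugin(
    placement_policy="static",  # fixed placement, vs. the runtime "auto" policy
    offload_optim_frac=0.5,     # keep half of the optimizer states on CPU
    offload_param_frac=0.0,     # keep all parameters on GPU
)
booster = Booster(plugin=plugin)
```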
binmakeswell
ef4b99ebcd add llama example CI 2023-07-26 14:12:57 +08:00
binmakeswell
7ff11b5537 [example] add llama pretraining (#4257) 2023-07-17 21:07:44 +08:00
digger yu
2d40759a53 fix #3852 path error (#4058) 2023-06-28 15:29:44 +08:00
Baizhou Zhang
4da324cd60 [hotfix]fix argument naming in docs and examples (#4083) 2023-06-26 23:50:04 +08:00
LuGY
160c64c645 [example] fix bucket size in example of gpt gemini (#4028) 2023-06-19 11:22:42 +08:00
Baizhou Zhang
b3ab7fbabf [example] update ViT example using booster api (#3940) 2023-06-12 15:02:27 +08:00
digger yu
33eef714db fix typo examples and docs (#3932) 2023-06-08 16:09:32 +08:00
Baizhou Zhang
e417dd004e [example] update opt example using booster api (#3918) 2023-06-08 11:27:05 +08:00
Liu Ziming
b306cecf28 [example] Modify palm example with the new booster API (#3913)
* Modify torch version requirement to adapt to torch 2.0

* modify palm example using new booster API

* roll back

* fix port

* polish
2023-06-07 16:05:00 +08:00
wukong1992
a55fb00c18 [booster] update bert example, using booster api (#3885) 2023-06-07 15:51:00 +08:00
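The ViT, opt, palm, and bert migrations above all converge on the same booster pattern: pick a plugin, boost the raw training objects, and route backward through the booster. A sketch of that pattern; `TorchDDPPlugin` stands in here for whichever plugin a given example actually uses:

```python
import colossalai
import torch
import torch.nn as nn
from colossalai.booster import Booster
from colossalai.booster.plugin import TorchDDPPlugin

colossalai.launch_from_torch(config={})

model = nn.Linear(8, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

booster = Booster(plugin=TorchDDPPlugin())
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)

x, y = torch.randn(4, 8).cuda(), torch.randint(0, 2, (4,)).cuda()
loss = criterion(model(x), y)
booster.backward(loss, optimizer)  # replaces loss.backward() so plugins can hook in
optimizer.step()
```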
jiangmingyan
5f79008c4a [example] update gemini examples (#3868)
* [example] update gemini examples
2023-05-30 18:41:41 +08:00
digger yu
518b31c059 [docs] change placememt_policy to placement_policy (#3829)
* fix typo colossalai/autochunk auto_parallel amp

* fix typo colossalai/auto_parallel nn utils etc.

* fix typo colossalai/auto_parallel autochunk fx/passes etc.

* fix typo docs/

* change placememt_policy to placement_policy in docs/ and examples/
2023-05-24 14:51:49 +08:00
binmakeswell
15024e40d9 [auto] fix install cmd (#3772) 2023-05-18 13:33:01 +08:00
digger-yu
b9a8dff7e5 [doc] Fix typo under colossalai and doc (#3618)
* Fixed several spelling errors under colossalai

* Fix the spelling error in colossalai and docs directory

* Cautiously changed the spelling errors under the example folder

* Update runtime_preparation_pass.py

revert autograft to autograd

* Update search_chunk.py

utile to until

* Update check_installation.py

change misteach to mismatch in line 91

* Update 1D_tensor_parallel.md

revert to perceptron

* Update 2D_tensor_parallel.md

revert to perceptron in line 73

* Update 2p5D_tensor_parallel.md

revert to perceptron in line 71

* Update 3D_tensor_parallel.md

revert to perceptron in line 80

* Update README.md

revert to resnet in line 42

* Update reorder_graph.py

revert to indice in line 7

* Update p2p.py

revert to megatron in line 94

* Update initialize.py

revert to torchrun in line 198

* Update routers.py

change to detailed in line 63

* Update routers.py

change to detailed in line 146

* Update README.md

revert random number in line 402
2023-04-26 11:38:43 +08:00
binmakeswell
f1b3d60cae [example] reorganize for community examples (#3557) 2023-04-14 16:27:48 +08:00
mandoxzhang
8f2c55f9c9 [example] remove redundant texts & update roberta (#3493)
* update roberta example

* modify conflict & update roberta
2023-04-07 11:33:32 +08:00
mandoxzhang
ab5fd127e3 [example] update roberta with newer ColossalAI (#3472)
* update roberta example
2023-04-07 10:34:51 +08:00
Frank Lee
80eba05b0a [test] refactor tests with spawn (#3452)
* [test] added spawn decorator

* polish code
2023-04-06 14:51:35 +08:00
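The spawn refactor replaces hand-rolled `torch.multiprocessing` boilerplate in the test suite. A sketch of the resulting pattern, assuming the `colossalai.testing` helpers of this period:

```python
import colossalai
from colossalai.testing import rerun_if_address_is_in_use, spawn

def run_dist(rank, world_size, port):
    # Each spawned worker joins the process group, then runs the real checks.
    colossalai.launch(config={}, rank=rank, world_size=world_size,
                      host="localhost", port=port, backend="nccl")
    # ... actual test body runs inside each worker ...

@rerun_if_address_is_in_use()
def test_something_distributed():
    spawn(run_dist, nprocs=2)  # forks 2 workers and injects world_size/port
```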
ver217
573af84184 [example] update examples related to zero/gemini (#3431)
* [zero] update legacy import

* [zero] update examples

* [example] fix opt tutorial

* [example] fix import
2023-04-04 17:32:51 +08:00
ver217
26b7aac0be [zero] reorganize zero/gemini folder structure (#3424)
* [zero] refactor low-level zero folder structure

* [zero] fix legacy zero import path

* [zero] remove useless import

* [zero] refactor gemini folder structure

* [zero] refactor legacy zero import path

* [zero] refactor gemini folder structure

* [zero] refactor legacy zero import path

* [zero] fix test import path

* [zero] fix test

* [zero] fix circular import

* [zero] update import
2023-04-04 13:48:16 +08:00
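After this reorganization the zero code lives under `colossalai/zero/low_level` and `colossalai/zero/gemini`, with the old implementation parked under a legacy path. Illustrative imports under that layout; the exported symbol names are assumptions, since the commit bodies above only establish the folder structure:

```python
# Assumed post-reorg import paths; check colossalai/zero/__init__.py for exact exports.
from colossalai.zero.low_level import LowLevelZeroOptimizer  # low-level (stage 1/2) zero
from colossalai.zero.gemini import GeminiDDP                 # chunk-based gemini wrapper
```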
Yan Fang
189347963a [auto] fix requirements typo for issue #3125 (#3209) 2023-03-23 10:22:08 +08:00
Zihao
18dbe76cae [auto-parallel] add auto-offload feature (#3154)
* add auto-offload feature

* polish code

* fix sync offload runtime pass bug

* add offload example

* fix offload testing bug

* fix example testing bug
2023-03-21 14:17:41 +08:00
binmakeswell
360674283d [example] fix redundant note (#3065) 2023-03-09 10:59:28 +08:00
Tomek
af3888481d [example] fixed opt model downloading from huggingface 2023-03-09 10:47:41 +08:00
ramos
2ef855c798 support shardinit option to avoid OPT OOM initialization problem (#3037)
Co-authored-by: poe <poe@nemoramo>
2023-03-08 13:45:15 +08:00
Ziyue Jiang
400f63012e [pipeline] Add Simplified Alpa DP Partition (#2507)
* add alpa dp split

* use fwd+bwd instead of fwd only

---------

Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2023-03-07 10:34:31 +08:00
github-actions[bot]
da056285f2 [format] applied code formatting on changed files in pull request 2922 (#2923)
Co-authored-by: github-actions <github-actions@github.com>
2023-02-27 19:29:06 +08:00
binmakeswell
12bafe057f [doc] update installation for GPT (#2922) 2023-02-27 18:28:34 +08:00
Alex_996
a4fc125c34 Fix typos (#2863)
Fix typos, `6.7 -> 6.7b`
2023-02-22 10:59:48 +08:00
dawei-wang
55424a16a5 [doc] fix GPT tutorial (#2860)
Fix hpcaitech/ColossalAI#2851
2023-02-22 10:58:52 +08:00
Jiarui Fang
bf0204604f [example] add bert and albert (#2824) 2023-02-20 10:35:55 +08:00
cloudhuang
43dffdaba5 [doc] fixed a typo in GPT readme (#2736) 2023-02-15 22:24:45 +08:00
Jiatong (Julius) Han
a255a38f7f [example] Polish README.md (#2658)
* [tutorial] polish readme.md

* [example] Update README.md
2023-02-09 20:43:55 +08:00
HELSON
6e0faa70e0 [gemini] add profiler in the demo (#2534) 2023-01-31 14:21:22 +08:00
HELSON
66dfcf5281 [gemini] update the gpt example (#2527) 2023-01-30 17:58:05 +08:00
HELSON
707b11d4a0 [gemini] update ddp strict mode (#2518)
* [zero] add strict ddp mode for chunk init

* [gemini] update gpt example
2023-01-28 14:35:25 +08:00
HELSON
2d1a7dfe5f [zero] add strict ddp mode (#2508)
* [zero] add strict ddp mode

* [polish] add comments for strict ddp mode

* [zero] fix test error
2023-01-20 14:04:38 +08:00
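Strict DDP mode forces chunk initialization to shard every parameter across the full data-parallel world, matching plain DDP semantics. A sketch, assuming the wrapper and keyword of this period (`colossalai.nn.parallel.GeminiDDP` with `strict_ddp_mode`):

```python
import torch
import torch.nn as nn
from colossalai.nn.parallel import GeminiDDP

model = nn.Linear(16, 16)
model = GeminiDDP(
    model,
    device=torch.device("cuda"),
    placement_policy="cpu",
    strict_ddp_mode=True,  # init chunks over the whole DP world, like plain DDP
)
```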
Jiarui Fang
e327e95144 [hotfix] gpt example titans bug #2493 (#2494) 2023-01-18 12:04:18 +08:00
binmakeswell
fcc6d61d92 [example] fix requirements (#2488) 2023-01-17 13:07:25 +08:00
Jiarui Fang
3a21485ead [example] titans for gpt (#2484) 2023-01-16 15:55:41 +08:00
Jiarui Fang
7c31706227 [CI] add test_ci.sh for palm, opt and gpt (#2475) 2023-01-16 14:44:29 +08:00