Commit Graph

860 Commits

Author SHA1 Message Date
Sze-qq
ce8a3eae5b
update GPT-2 experiment result (#666) 2022-04-04 13:47:43 +08:00
HELSON
17e73e62cc
[hotfix] fix bugs for unsharded parameters when restore data (#664) 2022-04-03 22:02:11 +08:00
Jiarui Fang
0aab52301e
[hotfix] fix a bug in model data stats tracing (#655) 2022-04-03 21:48:06 +08:00
YuliangLiu0306
ade05a5d83
[refactor] pipeline, put runtime schedule into engine. (#627) 2022-04-03 20:46:45 +08:00
HELSON
e5d615aeee
[hotfix] fix bugs in testing (#659)
* remove hybrid adam in test_moe_zero_optim

* fix activation checkpointing and its unit test
2022-04-02 21:58:47 +08:00
Jiarui Fang
036404ca8a
Revert "[zero] polish init context (#645)" (#657) 2022-04-02 18:30:06 +08:00
HELSON
b31daed4cf
fix bugs in CPU adam (#633)
* add cpu adam counter for all cpu adam

* fixed updating error in adam kernel
2022-04-02 17:04:05 +08:00
LuGY
1e2557e801
[zero] fixed the activation offload (#647)
* fixed the activation offload

* polish
2022-04-02 16:21:32 +08:00
Liang Bowen
828e465622
[hotfix] Raise messages for indivisible batch sizes with tensor parallelism (#622) 2022-04-02 16:12:04 +08:00
binmakeswell
e0f875a8e2
[GitHub] Add prefix and label in issue template (#652) 2022-04-02 16:09:25 +08:00
Jiarui Fang
67b4928244
[zero] polish init context (#645) 2022-04-02 15:52:04 +08:00
ver217
f5d3a9c2b0
polish checkpoint docstring (#637) 2022-04-02 13:34:33 +08:00
HELSON
055fbf5be6
[zero] adapt zero for unsharded parameters (Optimizer part) (#601) 2022-04-01 20:10:47 +08:00
KAIYUAN GAN
229382c844
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/cuda_util.cu code style (#625) 2022-04-01 17:45:53 +08:00
アマデウス
354b7954d1
[model checkpoint] added unit tests for checkpoint save/load (#599) 2022-04-01 16:53:32 +08:00
アマデウス
28b515d610
[model checkpoint] updated checkpoint hook (#598) 2022-04-01 16:53:03 +08:00
アマデウス
77ad24bf94
[model checkpoint] updated saving/loading for 3d layers (#597) 2022-04-01 16:52:47 +08:00
アマデウス
93089ed708
[model checkpoint] updated saving/loading for 2.5d layers (#596) 2022-04-01 16:52:33 +08:00
アマデウス
6302069c0e
[model checkpoint] updated communication ops for cpu tensors (#590) 2022-04-01 16:52:20 +08:00
アマデウス
c50bfb807b
[model checkpoint] updated saving/loading for 1d layers (#594) 2022-04-01 16:51:52 +08:00
アマデウス
7636d518e1
[model checkpoint] updated saving/loading for 2d layers (#595) 2022-04-01 16:50:34 +08:00
アマデウス
cd13b63832
[model checkpoint] reworked unified layers for ease of save/load states (#593) 2022-04-01 16:49:56 +08:00
アマデウス
acae68eb04
[model checkpoint] updated checkpoint save/load utils (#592) 2022-04-01 16:49:21 +08:00
Ziyue Jiang
1c40ee8749
[TP] add assert for tp1d (#621) 2022-04-01 16:44:23 +08:00
ver217
369a288bf3
polish utils docstring (#620) 2022-04-01 16:36:47 +08:00
ver217
e619a651fb
polish optimizer docstring (#619) 2022-04-01 16:27:03 +08:00
ver217
8432dc7080
polish moe docstring (#618) 2022-04-01 16:15:36 +08:00
ver217
c5b488edf8
polish amp docstring (#616) 2022-04-01 16:09:39 +08:00
ver217
f69507dd22
update rst (#615) 2022-04-01 15:46:38 +08:00
FredHuang99
93f14d2a33
[zero] test zero tensor utils (#609) 2022-04-01 15:16:59 +08:00
ver217
0ef8819c67
polish docstring of zero (#612) 2022-04-01 14:50:56 +08:00
LuGY
02b187c14f
[zero] add sampling time for memstats collector (#610) 2022-04-01 14:03:00 +08:00
ver217
9bee119104
[hotfix] fix sharded optim zero grad (#604)
* fix sharded optim zero grad

* polish comments
2022-04-01 12:41:20 +08:00
アマデウス
297b8baae2
[model checkpoint] add gloo groups for cpu tensor communication (#589) 2022-04-01 10:15:52 +08:00
アマデウス
54e688b623
moved ensure_path_exists to utils.common (#591) 2022-04-01 09:46:33 +08:00
Jiarui Fang
e956d93ac2
[refactor] memory utils (#577) 2022-04-01 09:22:33 +08:00
ver217
104cbbb313
[hotfix] add hybrid adam to __init__ (#584) 2022-03-31 19:08:34 +08:00
HELSON
e6d50ec107
[zero] adapt zero for unsharded parameters (#561)
* support existing sharded and unsharded parameters in zero

* add unit test for moe-zero model init

* polish moe gradient handler
2022-03-31 18:34:11 +08:00
LuGY
13ed4b6441
[model zoo] add activation offload for gpt model (#582) 2022-03-31 17:42:20 +08:00
Wesley
46c9ba33da
update code format 2022-03-31 17:15:08 +08:00
Wesley
666cfd094a
fix parallel_input flag for Linear1D_Col gather_output 2022-03-31 17:15:08 +08:00
BoxiangW
a9f778f1b1
[tool] create .clang-format for pre-commit (#578)
Change the clang-format style to google style
2022-03-31 16:34:00 +08:00
ver217
7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param (#571) 2022-03-31 16:26:54 +08:00
Jiarui Fang
7675366fce
[polish] rename col_attr -> colo_attr (#558) 2022-03-31 12:25:45 +08:00
Liang Bowen
2c45efc398
html refactor (#555) 2022-03-31 11:36:56 +08:00
Jiarui Fang
d1211148a7
[utils] update colo tensor moving APIs (#553) 2022-03-30 23:13:24 +08:00
LuGY
c44d797072
[docs] updated docs of hybrid adam and cpu adam (#552) 2022-03-30 18:14:59 +08:00
ver217
014bac0c49
[zero] hijack p.grad in sharded model (#554)
* hijack p.grad in sharded model

* polish comments

* polish comments
2022-03-30 18:14:50 +08:00
Jiarui Fang
f552b11294
[zero] label state for param fp16 and grad (#551) 2022-03-30 15:57:46 +08:00
github-actions[bot]
92f4224867
Automated submodule synchronization (#501) 2022-03-30 14:06:23 +08:00