Commit Graph

1699 Commits

Author SHA1 Message Date
ver217
a9ecb4b244 [zero] polish sharded optimizer v2 (#490) 2022-03-22 15:53:48 +08:00
ver217
62b0a8d644 [zero] sharded optim support hybrid cpu adam (#486)
* sharded optim support hybrid cpu adam

* update unit test

* polish docstring
2022-03-22 14:56:59 +08:00
Jiarui Fang
b334822163 [zero] polish sharded param name (#484)
* [zero] polish sharded param name

* polish code

* polish

* polish code

* polish

* polish

* polish
2022-03-22 14:36:16 +08:00
HELSON
d7ea63992b [MOE] add FP32LinearGate for MOE in NaiveAMP context (#480) 2022-03-22 10:50:20 +08:00
Jiarui Fang
65c0f380c2 [format] polish name format for MOE (#481) 2022-03-21 23:19:47 +08:00
ver217
8d3250d74b [zero] ZeRO supports pipeline parallel (#477) 2022-03-21 16:55:37 +08:00
Frank Lee
83a847d058 [test] added rerun on exception for testing (#475)
* [test] added rerun on exception function

* polish code
2022-03-21 15:51:57 +08:00
HELSON
7544347145 [MOE] add unit tests for MOE experts layout, gradient handler and kernel (#469) 2022-03-21 13:35:04 +08:00
ver217
3cb3fc275e zero init ctx receives a dp process group (#471) 2022-03-21 11:18:55 +08:00
HELSON
aff9d354f7 [MOE] polish moe_env (#467) 2022-03-19 15:36:25 +08:00
HELSON
bccbc15861 [MOE] changed parallelmode to dist process group (#460) 2022-03-19 13:46:29 +08:00
ver217
fc8e6db005 [doc] Update docstring for ZeRO (#459)
* polish sharded model docstr

* polish sharded optim docstr

* polish zero docstr

* polish shard strategy docstr
2022-03-18 16:48:20 +08:00
HELSON
84fd7c1d4d add moe context, moe utilities and refactor gradient handler (#455) 2022-03-18 16:38:32 +08:00
ver217
a241f61b34 [zero] Update initialize for ZeRO (#458)
* polish code

* shard strategy receive pg in shard() / gather()

* update zero engine

* polish code
2022-03-18 16:18:31 +08:00
ver217
642846d6f9 update sharded optim and fix zero init ctx (#457) 2022-03-18 15:44:47 +08:00
Jiarui Fang
e2e9f82588 Revert "[zero] update sharded optim and fix zero init ctx" (#456)
* Revert "polish code"

This reverts commit 8cf7ff08cf.

* Revert "rename variables"

This reverts commit e99af94ab8.

* Revert "remove surplus imports"

This reverts commit 46add4a5c5.

* Revert "update sharded optim and fix zero init ctx"

This reverts commit 57567ee768.
2022-03-18 15:22:43 +08:00
ver217
e99af94ab8 rename variables 2022-03-18 14:25:25 +08:00
ver217
57567ee768 update sharded optim and fix zero init ctx 2022-03-18 14:25:25 +08:00
Jiarui Fang
0fcfb1e00d [test] make zero engine test really work (#447) 2022-03-17 17:24:25 +08:00
Jiarui Fang
237d08e7ee [zero] hybrid cpu adam (#445) 2022-03-17 15:05:41 +08:00
Frank Lee
b72b8445c6 optimized context test time consumption (#446) 2022-03-17 14:40:52 +08:00
Jiarui Fang
496cbb0760 [hotfix] fix initialize bug with zero (#442) 2022-03-17 13:16:22 +08:00
Jiarui Fang
640a6cd304 [refactor] refactor the initialize method for the new zero design (#431) 2022-03-16 19:29:37 +08:00
Frank Lee
bffd85bf34 added testing module (#435) 2022-03-16 17:20:05 +08:00
HELSON
dbdc9a7783 added Multiply Jitter and capacity factor eval for MOE (#434) 2022-03-16 16:47:44 +08:00
Frank Lee
b03b3ae99c fixed mem monitor device (#433) 2022-03-16 15:25:02 +08:00
Frank Lee
14a7094243 fixed fp16 optimizer None grad bug (#432) 2022-03-16 14:35:46 +08:00
ver217
fce9432f08 sync before creating empty grad 2022-03-16 14:24:09 +08:00
ver217
ea6905a898 free param.grad 2022-03-16 14:24:09 +08:00
ver217
9506a8beb2 use double buffer to handle grad 2022-03-16 14:24:09 +08:00
Jiarui Fang
54229cd33e [log] better logging display with rich (#426)
* better logger using rich

* remove deepspeed in zero requirements
2022-03-16 09:51:15 +08:00
HELSON
3f70a2b12f removed noisy function during evaluation of MoE router (#419) 2022-03-15 12:06:09 +08:00
Jiarui Fang
adebb3e041 [zero] cuda margin space for OS (#418) 2022-03-15 12:02:19 +08:00
Jiarui Fang
56bb412e72 [polish] use GLOBAL_MODEL_DATA_TRACER (#417) 2022-03-15 11:29:46 +08:00
Jiarui Fang
23ba3fc450 [zero] refactor ShardedOptimV2 init method (#416) 2022-03-15 10:45:55 +08:00
Frank Lee
e79ea44247 [fp16] refactored fp16 optimizer (#392) 2022-03-15 10:05:38 +08:00
Jiarui Fang
21dc54e019 [zero] memtracer to record cuda memory usage of model data and overall system (#395) 2022-03-14 22:05:30 +08:00
Jiarui Fang
370f567e7d [zero] new interface for ShardedOptimV2 (#406) 2022-03-14 20:48:41 +08:00
LuGY
a9c27be42e Added tensor detector (#393)
* Added tensor detector

* Added the - states

* Allowed changing include_cpu when calling detect()
2022-03-14 18:01:46 +08:00
1SAA
907ac4a2dc fixed error when there is no collective communication in CommProfiler 2022-03-14 17:21:00 +08:00
Frank Lee
2fe68b359a Merge pull request #403 from ver217/feature/shard-strategy
[zero] Add bucket tensor shard strategy
2022-03-14 16:29:28 +08:00
HELSON
dfd0363f68 polished output format for communication profiler and pcie profiler (#404)
fixed typing error
2022-03-14 16:07:45 +08:00
ver217
63469c0f91 polish code 2022-03-14 15:48:55 +08:00
ver217
88804aee49 add bucket tensor shard strategy 2022-03-14 14:48:32 +08:00
HELSON
7c079d9c33 [hotfix] fixed bugs in ShardStrategy and PcieProfiler (#394) 2022-03-11 18:12:46 +08:00
Frank Lee
1e4bf85cdb fixed bug in activation checkpointing test (#387) 2022-03-11 15:50:28 +08:00
Jiarui Fang
3af13a2c3e [zero] polish ShardedOptimV2 unittest (#385)
* place params on cpu after zero init context

* polish code

* bucketed cpu-gpu tensor transfer

* found a bug in sharded optim unittest

* add offload unittest for ShardedOptimV2.

* polish code and make it more robust
2022-03-11 15:50:28 +08:00
Jiang Zhuo
5a4a3b77d9 fix format (#376) 2022-03-11 15:50:28 +08:00
LuGY
de46450461 Added activation offload (#331)
* Added activation offload

* Fixed the import bug, used pytest
2022-03-11 15:50:28 +08:00
Jiarui Fang
272ebfb57d [bug] shard param while initializing ShardedModelV2 (#381) 2022-03-11 15:50:28 +08:00