Commit Graph

2276 Commits

Author | SHA1 | Message | Date

Frank Lee | bffd85bf34 | added testing module (#435) | 2022-03-16 17:20:05 +08:00
HELSON | dbdc9a7783 | added Multiply Jitter and capacity factor eval for MOE (#434) | 2022-03-16 16:47:44 +08:00
Frank Lee | b03b3ae99c | fixed mem monitor device (#433) | 2022-03-16 15:25:02 +08:00
Frank Lee | 14a7094243 | fixed fp16 optimizer none grad bug (#432) | 2022-03-16 14:35:46 +08:00
ver217 | fce9432f08 | sync before creating empty grad | 2022-03-16 14:24:09 +08:00
ver217 | ea6905a898 | free param.grad | 2022-03-16 14:24:09 +08:00
ver217 | 9506a8beb2 | use double buffer to handle grad | 2022-03-16 14:24:09 +08:00
Jiarui Fang | 54229cd33e | [log] better logging display with rich (#426) | 2022-03-16 09:51:15 +08:00
    * better logger using rich
    * remove deepspeed in zero requirements
HELSON | 3f70a2b12f | removed noisy function during evaluation of MoE router (#419) | 2022-03-15 12:06:09 +08:00
Jiarui Fang | adebb3e041 | [zero] cuda margin space for OS (#418) | 2022-03-15 12:02:19 +08:00
Jiarui Fang | 56bb412e72 | [polish] use GLOBAL_MODEL_DATA_TRACER (#417) | 2022-03-15 11:29:46 +08:00
Jiarui Fang | 23ba3fc450 | [zero] refactory ShardedOptimV2 init method (#416) | 2022-03-15 10:45:55 +08:00
Frank Lee | e79ea44247 | [fp16] refactored fp16 optimizer (#392) | 2022-03-15 10:05:38 +08:00
Jiarui Fang | 21dc54e019 | [zero] memtracer to record cuda memory usage of model data and overall system (#395) | 2022-03-14 22:05:30 +08:00
Jiarui Fang | 370f567e7d | [zero] new interface for ShardedOptimv2 (#406) | 2022-03-14 20:48:41 +08:00
LuGY | a9c27be42e | Added tensor detector (#393) | 2022-03-14 18:01:46 +08:00
    * Added tensor detector
    * Added the - states
    * Allowed change include_cpu when detect()
1SAA | 907ac4a2dc | fixed error when no collective communication in CommProfiler | 2022-03-14 17:21:00 +08:00
Frank Lee | 2fe68b359a | Merge pull request #403 from ver217/feature/shard-strategy | 2022-03-14 16:29:28 +08:00
    [zero] Add bucket tensor shard strategy
HELSON | dfd0363f68 | polished output format for communication profiler and pcie profiler (#404) | 2022-03-14 16:07:45 +08:00
    fixed typing error
ver217 | 63469c0f91 | polish code | 2022-03-14 15:48:55 +08:00
ver217 | 88804aee49 | add bucket tensor shard strategy | 2022-03-14 14:48:32 +08:00
HELSON | 7c079d9c33 | [hotfix] fixed bugs in ShardStrategy and PcieProfiler (#394) | 2022-03-11 18:12:46 +08:00
Frank Lee | 1e4bf85cdb | fixed bug in activation checkpointing test (#387) | 2022-03-11 15:50:28 +08:00
Jiarui Fang | 3af13a2c3e | [zero] polish ShardedOptimV2 unittest (#385) | 2022-03-11 15:50:28 +08:00
    * place params on cpu after zero init context
    * polish code
    * bucketzed cpu gpu tensor transter
    * find a bug in sharded optim unittest
    * add offload unittest for ShardedOptimV2.
    * polish code and make it more robust
Jiang Zhuo | 5a4a3b77d9 | fix format (#376) | 2022-03-11 15:50:28 +08:00
LuGY | de46450461 | Added activation offload (#331) | 2022-03-11 15:50:28 +08:00
    * Added activation offload
    * Fixed the import bug, used the pytest
Jiarui Fang | 272ebfb57d | [bug] shard param during initializing the ShardedModelV2 (#381) | 2022-03-11 15:50:28 +08:00
HELSON | 8c18eb0998 | [profiler] Fixed bugs in CommProfiler and PcieProfiler (#377) | 2022-03-11 15:50:28 +08:00
Jiarui Fang | b5f43acee3 | [zero] find miss code (#378) | 2022-03-11 15:50:28 +08:00
Jiarui Fang | 6b6002962a | [zero] zero init context collect numel of model (#375) | 2022-03-11 15:50:28 +08:00
HELSON | 1ed7c24c02 | Added PCIE profiler to dectect data transmission (#373) | 2022-03-11 15:50:28 +08:00
jiaruifang | d9217e1960 | Revert "[zero] bucketized tensor cpu gpu copy (#368)" | 2022-03-11 15:50:28 +08:00
    This reverts commit bef05489b6.
RichardoLuo | 8539898ec6 | flake8 style change (#363) | 2022-03-11 15:50:28 +08:00
Kai Wang (Victor Kai) | 53bb3bcc0a | fix format (#362) | 2022-03-11 15:50:28 +08:00
ziyu huang | a77d73f22b | fix format parallel_context.py (#359) | 2022-03-11 15:50:28 +08:00
    Co-authored-by: huangziyu <202476410arsmart@gmail.com>
Zangwei | c695369af0 | fix format constants.py (#358) | 2022-03-11 15:50:28 +08:00
Yuer867 | 4a0f8c2c50 | fix format parallel_2p5d (#357) | 2022-03-11 15:50:28 +08:00
Liang Bowen | 7eb87f516d | flake8 style (#352) | 2022-03-11 15:50:28 +08:00
Xu Kai | 54ee8d1254 | Fix/format colossalai/engine/paramhooks/ (#350) | 2022-03-11 15:50:28 +08:00
Maruyama_Aya | e83970e3dc | fix format ColossalAI\colossalai\context\process_group_initializer | 2022-03-11 15:50:28 +08:00
yuxuan-lou | 3b88eb2259 | Flake8 code restyle | 2022-03-11 15:50:28 +08:00
xuqifan897 | 148207048e | Qifan formated file ColossalAI\colossalai\nn\layer\parallel_1d\layers.py (#342) | 2022-03-11 15:50:28 +08:00
Cautiousss | 3a51d909af | fix format (#332) | 2022-03-11 15:50:28 +08:00
    Co-authored-by: 何晓昕 <cautious@r-205-106-25-172.comp.nus.edu.sg>
DouJS | cbb6436ff0 | fix format for dir-[parallel_3d] (#333) | 2022-03-11 15:50:28 +08:00
ExtremeViscent | eaac03ae1d | [formart] format fixed for kernel\cuda_native codes (#335) | 2022-03-11 15:50:28 +08:00
Jiarui Fang | 00670c870e | [zero] bucketized tensor cpu gpu copy (#368) | 2022-03-11 15:50:28 +08:00
Jiarui Fang | 44e4891f57 | [zero] able to place params on cpu after zero init context (#365) | 2022-03-11 15:50:28 +08:00
    * place params on cpu after zero init context
    * polish code
ver217 | 253e54d98a | fix grad shape | 2022-03-11 15:50:28 +08:00
Jiarui Fang | ea2872073f | [zero] global model data memory tracer (#360) | 2022-03-11 15:50:28 +08:00
Jiarui Fang | cb34cd384d | [test] polish zero related unitest (#351) | 2022-03-11 15:50:28 +08:00