Frank Lee
c35fdbfe5d
set criterion as optional in colossalai initialize ( #336 )
2022-03-09 11:51:22 +08:00
Jie Zhu
345d32c182
[profiler] add adaptive sampling to memory profiler ( #330 )
...
* fix merge conflict
modify unit test
remove unnecessary log info
reformat file
* remove unused module
* remove unnecessary sync function
* change doc string style from Google to Sphinx
2022-03-09 11:07:10 +08:00
ver217
ce5a7dcab0
[zero] Update sharded model v2 using sharded param v2 ( #323 )
2022-03-08 18:18:06 +08:00
jiaruifang
4d07bffd77
using pytest parametrize
2022-03-08 15:10:21 +08:00
jiaruifang
da6bfb1427
show pytest parametrize
2022-03-08 15:10:21 +08:00
Jiarui Fang
cec05b25c9
[zero] update zero context init with the updated test utils ( #327 )
2022-03-08 14:45:01 +08:00
Frank Lee
6afc4f9e11
[test] refactored testing components ( #324 )
2022-03-08 10:19:18 +08:00
HELSON
b0bbf17fa6
fixed strings in profiler outputs ( #325 )
2022-03-07 17:08:56 +08:00
Jiarui Fang
d6abd933f2
[zero] zero init context ( #321 )
...
* add zero init context
* add more flags for zero init context
fix bug of repeatedly converting param to ShardedParamV2
* polish code
2022-03-07 16:14:40 +08:00
1SAA
d63d20165d
Added profiler communication operations
...
Fixed bug in learning rate scheduler
2022-03-07 15:17:06 +08:00
binmakeswell
b38ed3934a
add badge and contributor list
2022-03-06 16:45:49 +08:00
LuGY
b73a048ad8
[zero] cpu adam kernel ( #288 )
...
* Added CPU Adam
* finished the cpu adam
* updated the license
* deleted useless parameters, removed resnet
* modified the method of the cpu adam unittest
* deleted some useless code
* removed useless code
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-03-04 16:05:15 +08:00
Jiarui Fang
29521cba0a
[zero] yet an improved sharded param ( #311 )
2022-03-04 15:49:23 +08:00
Jiarui Fang
2f6295bf78
[zero] polish shard strategy ( #310 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
* add shard strategy
* move shard and gather logic to shard strategy from shard tensor.
* polish code
2022-03-04 15:35:07 +08:00
ver217
b95f9b4670
polish code
2022-03-04 15:27:39 +08:00
ver217
2aa440358d
fix sharded param hook and unit test
2022-03-04 15:27:39 +08:00
ver217
8c2327b93c
impl shard optim v2 and add unit test
2022-03-04 15:27:39 +08:00
Jiarui Fang
88496b5b31
[zero] a shard strategy in granularity of tensor ( #307 )
2022-03-04 11:59:35 +08:00
Jiarui Fang
408cba655b
[zero] sharded tensor ( #305 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
2022-03-04 10:46:13 +08:00
Jie Zhu
ce5d94a604
[profiler] primary memory tracer
2022-03-04 09:35:23 +08:00
FrankLeeeee
fac5d05a8d
update unit testing CI rules
2022-03-03 17:45:09 +08:00
FrankLeeeee
0cd67a8dc0
added compatibility CI and options for release CI
2022-03-03 17:45:09 +08:00
FrankLeeeee
725d81ad21
added PyPI publication CI and removed formatting CI
2022-03-03 17:45:09 +08:00
ver217
5cc84d94dc
rename sharded adam to sharded optim v2
2022-03-03 16:20:34 +08:00
ver217
df34bd0c7f
fix master params dtype
2022-03-03 16:20:34 +08:00
ver217
6c290dbb08
add fp32 master params in sharded adam
2022-03-03 16:20:34 +08:00
ver217
6185b9772d
add sharded adam
2022-03-03 16:20:34 +08:00
Jiarui Fang
de11a91007
polish license ( #300 )
...
* init shard param from shape tuple
* add more unit tests for shard param
2022-03-03 14:11:45 +08:00
Jiarui Fang
6c78946fdd
Polish sharded parameter ( #297 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add more unit tests for sharded param
2022-03-03 12:42:57 +08:00
ver217
9b07ac81d4
[zero] add sharded grad and refactor grad hooks for ShardedModel ( #287 )
2022-03-02 18:28:29 +08:00
Frank Lee
4fbb8db586
fixed typo in ShardParam ( #294 )
2022-03-02 17:26:23 +08:00
Frank Lee
a463980aab
added unit test for sharded optimizer ( #293 )
...
* added unit test for sharded optimizer
* refactor for elegance
2022-03-02 17:15:54 +08:00
Frank Lee
193af3a8b7
added buffer sync to naive amp model wrapper ( #291 )
2022-03-02 16:47:17 +08:00
Jiarui Fang
6f22fb1906
add a common util for hooks registered on parameter. ( #292 )
2022-03-02 14:38:22 +08:00
Jie Zhu
3b64dcc439
bug fix: pass hook_list to engine ( #273 )
...
* bug fix: pass hook_list to engine
* change parameter name
2022-03-02 14:25:52 +08:00
Jiarui Fang
3280869358
Feature/zero ( #279 )
...
* add zero1 (#209)
* add zero1
* add test zero1
* update zero stage 1 develop (#212)
* Implement naive zero3 (#240)
* naive zero3 works well
* add zero3 param manager
* add TODOs in comments
* add gather full param ctx
* fix sub module streams
* add offload
* fix bugs of hook and add unit tests
* fix bugs of hook and add unit tests (#252)
* add gather full param ctx
* fix sub module streams
* add offload
* fix bugs of hook and add unit tests
* polish code and add state dict hook
* fix bug
* update unit test
* refactor reconstructed zero code
* add zero3 support to clip_grad and add unit test
* add unit test for Zero3ParameterManager
* [WIP] initialize the shard param class
* [WIP] Yet another sharded model implementation (#274)
* [WIP] initialize the shard param class
* [WIP] Yet another implementation of ShardedModel, using a better hook method.
* torch.concat -> torch.cat
* fix test_zero_level_1.py::test_zero_level_1 unittest
* remove deepspeed implementation and refactor for the reconstructed zero module
* polish zero dp unittests
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2022-03-01 18:17:01 +08:00
binmakeswell
f84b15719f
add community group and update issue template ( #271 )
2022-03-01 14:39:41 +08:00
Sze-qq
ec49c6833a
update experimental visualization ( #253 )
2022-03-01 14:39:41 +08:00
binmakeswell
8836140725
add Chinese README
2022-03-01 14:39:41 +08:00
1SAA
adf7e50325
Added TPExpert for special situations
2022-02-28 13:55:35 +08:00
HELSON
b5b612a26c
Fixed parameter initialization in FFNExpert ( #251 )
2022-02-27 14:01:25 +08:00
アマデウス
726a4abb66
fixed CI dataset directory; fixed import error of 2.5d accuracy ( #255 )
2022-02-24 14:33:45 +08:00
1SAA
e7a1136436
Optimized MoE layer and fixed some bugs;
...
Reduced MoE tests;
Added FFNExperts and ViTMoE model
2022-02-22 10:27:12 +08:00
zbian
218c061e2d
fixed padding index issue for vocab parallel embedding layers; updated 3D linear to be compatible with examples in the tutorial
2022-02-22 02:05:43 +00:00
ver217
34d7c0401a
update setup info ( #233 )
2022-02-22 02:05:43 +00:00
github-actions
b9f8521f8c
Automated submodule synchronization
2022-02-15 11:35:37 +08:00
Frank Lee
f5ca88ec97
fixed apex import ( #227 )
2022-02-15 11:31:13 +08:00
Frank Lee
eb3fda4c28
updated readme and change log ( #224 )
2022-02-15 11:31:13 +08:00
ver217
578ea0583b
update setup and workflow ( #222 )
2022-02-15 11:31:13 +08:00
Frank Lee
3a1a9820b0
fixed mkdir conflict and aligned yapf config with flake8 ( #220 )
2022-02-15 11:31:13 +08:00