Commit Graph

147 Commits

Author        SHA1        Message  Date
Jiarui Fang   7650177713  [zero] global model data memory tracer (#360)  2022-03-10 11:20:04 +08:00
Jiarui Fang   56f3d80961  [test] polish zero-related unit tests (#351)  2022-03-10 09:57:26 +08:00
HELSON        4ac58ac898  Fixed import bug for no-tensorboard environment (#354)  2022-03-09 19:48:04 +08:00
HELSON        6310fb5aae  [profile] added example for ProfilerContext (#349)  2022-03-09 17:35:28 +08:00
ver217        0fb59c5c57  add test for sharded optim with cpu adam (#347)  2022-03-09 17:30:02 +08:00
Jiarui Fang   1b82989431  move async memory to an individual directory (#345)  2022-03-09 16:31:25 +08:00
HELSON        1dea827e88  Added Profiler Context to manage all profilers (#340)  2022-03-09 16:12:41 +08:00
ver217        0aab577f19  [zero] update sharded optim v2 (#334)  2022-03-09 16:09:36 +08:00
ver217        af2cfed447  skip bert in test engine  2022-03-09 14:49:55 +08:00
ver217        159bb62c6d  install transformers in CI  2022-03-09 14:49:55 +08:00
ver217        212e672d23  fix bert unit test  2022-03-09 14:49:55 +08:00
jiaruifang    107e0e766c  polish code  2022-03-09 13:30:38 +08:00
jiaruifang    8bae7559ab  polish engine unit test  2022-03-09 13:30:38 +08:00
jiaruifang    73a3e8c574  polish code  2022-03-09 13:30:38 +08:00
jiaruifang    02c12f128b  adapt bert unit test interface  2022-03-09 13:30:38 +08:00
jiaruifang    d984dff88c  add bert for unit testing; sharded model is not able to pass the bert case  2022-03-09 13:30:38 +08:00
Frank Lee     e35f66b159  refactored grad scaler (#338)  2022-03-09 11:52:43 +08:00
Frank Lee     c35fdbfe5d  set criterion as optional in colossalai initialize (#336)  2022-03-09 11:51:22 +08:00
Jie Zhu       345d32c182  [profiler] add adaptive sampling to memory profiler (#330)  2022-03-09 11:07:10 +08:00
    * fix merge conflict: modify unit test, remove unnecessary log info, reformat file
    * remove unused module
    * remove unnecessary sync function
    * change doc string style from Google to Sphinx
ver217        ce5a7dcab0  [zero] Update sharded model v2 using sharded param v2 (#323)  2022-03-08 18:18:06 +08:00
jiaruifang    4d07bffd77  using pytest parametrize  2022-03-08 15:10:21 +08:00
jiaruifang    da6bfb1427  show pytest parametrize  2022-03-08 15:10:21 +08:00
Jiarui Fang   cec05b25c9  [zero] update zero context init with the updated test utils (#327)  2022-03-08 14:45:01 +08:00
Frank Lee     6afc4f9e11  [test] refactored testing components (#324)  2022-03-08 10:19:18 +08:00
HELSON        b0bbf17fa6  fixed strings in profiler outputs (#325)  2022-03-07 17:08:56 +08:00
Jiarui Fang   d6abd933f2  [zero] zero init context (#321)  2022-03-07 16:14:40 +08:00
    * add zero init context
    * add more flags for zero init context; fix bug of repeatedly converting param to ShardedParamV2
    * polish code
1SAA          d63d20165d  Added profiler communication operations; fixed bug for learning rate scheduler  2022-03-07 15:17:06 +08:00
binmakeswell  b38ed3934a  add badge and contributor list  2022-03-06 16:45:49 +08:00
LuGY          b73a048ad8  [zero] cpu adam kernel (#288)  2022-03-04 16:05:15 +08:00
    * Added CPU Adam
    * finished the cpu adam
    * updated the license
    * deleted useless parameters, removed resnet
    * modified the method of cpu adam unittest
    * deleted some useless code
    * removed useless code
    Co-authored-by: ver217 <lhx0217@gmail.com>
    Co-authored-by: Frank Lee <somerlee.9@gmail.com>
    Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
Jiarui Fang   29521cba0a  [zero] yet another improved sharded param (#311)  2022-03-04 15:49:23 +08:00
Jiarui Fang   2f6295bf78  [zero] polish shard strategy (#310)  2022-03-04 15:35:07 +08:00
    * init shard param from shape tuple
    * add more unit tests for shard param
    * add set_payload method for ShardedParam
    * [zero] add sharded tensor class
    * polish code
    * add shard strategy
    * move shard and gather logic from shard tensor to shard strategy
    * polish code
ver217        b95f9b4670  polish code  2022-03-04 15:27:39 +08:00
ver217        2aa440358d  fix sharded param hook and unit test  2022-03-04 15:27:39 +08:00
ver217        8c2327b93c  impl sharded optim v2 and add unit test  2022-03-04 15:27:39 +08:00
Jiarui Fang   88496b5b31  [zero] a shard strategy in granularity of tensor (#307)  2022-03-04 11:59:35 +08:00
Jiarui Fang   408cba655b  [zero] sharded tensor (#305)  2022-03-04 10:46:13 +08:00
    * init shard param from shape tuple
    * add more unit tests for shard param
    * add set_payload method for ShardedParam
    * [zero] add sharded tensor class
    * polish code
Jie Zhu       ce5d94a604  [profiler] primary memory tracer  2022-03-04 09:35:23 +08:00
FrankLeeeee   fac5d05a8d  update unit testing CI rules  2022-03-03 17:45:09 +08:00
FrankLeeeee   0cd67a8dc0  added compatibility CI and options for release CI  2022-03-03 17:45:09 +08:00
FrankLeeeee   725d81ad21  added pypi publication CI and removed formatting CI  2022-03-03 17:45:09 +08:00
ver217        5cc84d94dc  rename sharded adam to sharded optim v2  2022-03-03 16:20:34 +08:00
ver217        df34bd0c7f  fix master params dtype  2022-03-03 16:20:34 +08:00
ver217        6c290dbb08  add fp32 master params in sharded adam  2022-03-03 16:20:34 +08:00
ver217        6185b9772d  add sharded adam  2022-03-03 16:20:34 +08:00
Jiarui Fang   de11a91007  polish license (#300)  2022-03-03 14:11:45 +08:00
    * init shard param from shape tuple
    * add more unit tests for shard param
Jiarui Fang   6c78946fdd  Polish sharded parameter (#297)  2022-03-03 12:42:57 +08:00
    * init shard param from shape tuple
    * add more unit tests for shard param
    * add more unit tests for sharded param
ver217        9b07ac81d4  [zero] add sharded grad and refactor grad hooks for ShardedModel (#287)  2022-03-02 18:28:29 +08:00
Frank Lee     4fbb8db586  fixed typo in ShardParam (#294)  2022-03-02 17:26:23 +08:00
Frank Lee     a463980aab  added unit test for sharded optimizer (#293)  2022-03-02 17:15:54 +08:00
    * added unit test for sharded optimizer
    * refactor for elegance
Frank Lee     193af3a8b7  added buffer sync to naive amp model wrapper (#291)  2022-03-02 16:47:17 +08:00