Commit Graph

780 commits in total; the most recent are listed below. Each entry gives the author, the abbreviated commit SHA, the commit message (with its PR number), and the commit date.
Jiarui Fang
a98319f023 [tensor] torch function return colotensor (#1229) 2022-07-07 18:09:18 +08:00
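
The commit above makes ops routed through the `__torch_function__` protocol hand back a ColoTensor instead of a plain `torch.Tensor`. A minimal sketch of that protocol with a hypothetical `MyTensor` subclass (illustration only, not the ColoTensor implementation):

```python
import torch

class MyTensor(torch.Tensor):
    """Hypothetical tensor subclass; shows the mechanism, not ColoTensor itself."""

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # Run the original op, then make sure tensor results stay in the
        # subclass so downstream torch functions keep dispatching here.
        out = super().__torch_function__(func, types, args, kwargs)
        if isinstance(out, torch.Tensor) and not isinstance(out, cls):
            out = out.as_subclass(cls)
        return out

x = torch.randn(4, 4).as_subclass(MyTensor)
y = torch.matmul(x, x)          # a torch function, not a method
assert isinstance(y, MyTensor)  # the result is still the subclass
```
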
Frank Lee
5581170890 [fx] fixed huggingface OPT and T5 results misalignment (#1227) 2022-07-07 16:29:58 +08:00
YuliangLiu0306
2b7dca44b5 [fx]get communication size between partitions (#1224)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* [fx]get communication size between partitions.

* polish
2022-07-07 16:22:00 +08:00
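
For context on the entry above: the communication cost of a partitioning is the total size of tensors flowing across the cut. A rough sketch with stock `torch.fx` and shape propagation, using a made-up two-way split by node order rather than the project's partitioner:

```python
import torch
import torch.fx
from torch.fx.passes.shape_prop import ShapeProp

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(16, 32)
        self.fc2 = torch.nn.Linear(32, 8)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

gm = torch.fx.symbolic_trace(Net())
ShapeProp(gm).propagate(torch.randn(4, 16))   # fills node.meta['tensor_meta']

# Hypothetical partition assignment: first half of the nodes -> 0, rest -> 1.
nodes = list(gm.graph.nodes)
part = {n: (0 if i < len(nodes) // 2 else 1) for i, n in enumerate(nodes)}

# Sum the element counts of every producer->consumer edge that crosses the cut
# (counted once per consumer edge).
comm_numel = 0
for node in nodes:
    for user in node.users:
        if part[node] != part[user] and 'tensor_meta' in node.meta:
            comm_numel += node.meta['tensor_meta'].shape.numel()

print(f"elements crossing the cut: {comm_numel}")
```
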
Frank Lee
84f2298a96 [fx] added patches for tracing swin transformer (#1228) 2022-07-07 15:20:13 +08:00
Frank Lee
37fcf96b7f [fx] fixed timm tracing result misalignment (#1225) 2022-07-07 14:45:15 +08:00
Frank Lee
b6cb5a47ad [fx] added timm model tracing testing (#1221) 2022-07-07 14:02:17 +08:00
Jiarui Fang
15d988f954 [tensor] sharded global process group (#1219) 2022-07-07 13:38:48 +08:00
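
The sharded process-group commit above replaces one global group with per-dimension subgroups. In plain `torch.distributed` that is done with `new_group`; a sketch (assuming `init_process_group` has already run and the world size is divisible by `tp_size`):

```python
import torch.distributed as dist

def build_groups(tp_size: int = 2):
    """Sketch: carve WORLD into tensor-parallel and data-parallel subgroups."""
    world = dist.get_world_size()
    rank = dist.get_rank()

    tp_group = dp_group = None
    # Tensor-parallel groups: consecutive ranks, e.g. [0,1] and [2,3] when world=4.
    for start in range(0, world, tp_size):
        ranks = list(range(start, start + tp_size))
        g = dist.new_group(ranks=ranks)   # new_group must be called by every rank
        if rank in ranks:
            tp_group = g
    # Data-parallel groups: same offset across TP groups, e.g. [0,2] and [1,3].
    for offset in range(tp_size):
        ranks = list(range(offset, world, tp_size))
        g = dist.new_group(ranks=ranks)
        if rank in ranks:
            dp_group = g
    return tp_group, dp_group
```
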
Frank Lee
11973d892d [fx] added torchvision model tracing testing (#1216)
* [fx] added torchvision model tracing testing

* remove unused imports
2022-07-06 21:37:56 +08:00
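
The torchvision tracing test above boils down to: trace the model with `torch.fx` and check that the graph still computes the same outputs. Roughly, assuming torchvision is installed:

```python
import torch
import torch.fx
import torchvision.models as models

model = models.resnet18().eval()
gm = torch.fx.symbolic_trace(model)          # build the FX GraphModule
x = torch.randn(1, 3, 224, 224)
# The trace is only useful if the graph reproduces the original forward pass.
torch.testing.assert_close(gm(x), model(x))
```
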
Jiarui Fang
52736205d9 [checkpoint] make unit tests faster (#1217) 2022-07-06 17:39:46 +08:00
Jiarui Fang
f38006ea83 [checkpoint] checkpoint for ColoTensor Model (#1196) 2022-07-06 17:22:03 +08:00
Jiarui Fang
ae7d3f4927 [refactor] move process group from _DistSpec to ColoTensor. (#1203) 2022-07-06 16:15:16 +08:00
Frank Lee
5da87ce35d [fx] added testing for all albert variants (#1211) 2022-07-06 15:11:08 +08:00
Frank Lee
2d13a45a3b [fx] added testing for all gpt variants (#1210)
* [fx] added testing for all gpt variants

* polish code

* polish code
2022-07-06 14:03:13 +08:00
YuliangLiu0306
189946c5c4 [fx]add uniform policy (#1208)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* [fx]add uniform policy
2022-07-06 13:48:11 +08:00
Frank Lee
426a279ce7 [fx] added testing for all bert variants (#1207)
* [fx] added testing for all bert variants

* polish code
2022-07-06 10:50:49 +08:00
Frank Lee
f7878f465c [fx] supported model tracing for huggingface bert (#1201)
* [fx] supported model tracing for huggingface bert

* polish test
2022-07-05 13:19:57 +08:00
Jiarui Fang
060b917daf [refactor] remove gpc dependency in colotensor's _ops (#1189) 2022-07-04 18:54:37 +08:00
Frank Lee
abf6a262dc [fx] added module patch for pooling layers (#1197) 2022-07-04 15:21:26 +08:00
YuliangLiu0306
63d2a93878 [context]support arbitrary module materialization. (#1193)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* [context]support arbitrary module materialization.

* [test]add numerical check for lazy init context.
2022-07-04 10:12:02 +08:00
YuliangLiu0306
2053e138a2 [context]use meta tensor to init model lazily. (#1187)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* [context]use meta tensor to init model lazily.

* polish

* make module with device kwargs bypass the normal init.

* change unit test to adapt updated context.
2022-06-29 21:02:30 +08:00
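
The lazy-init context above relies on the meta device: modules are created with shapes but no storage, then materialized later, which is what the "device kwargs bypass the normal init" bullet refers to. A minimal version with only public PyTorch APIs (not the actual LazyInitContext):

```python
import torch
import torch.nn as nn

# 1. Build the module on the meta device: shapes and dtypes only, no storage,
#    and no time spent running the usual random initializers.
lazy = nn.Linear(1024, 1024, device='meta')
assert lazy.weight.is_meta

# 2. Materialize later, on whatever device we actually want.
lazy = lazy.to_empty(device='cpu')   # allocate uninitialized storage
lazy.reset_parameters()              # run the real initialization once

x = torch.randn(2, 1024)
print(lazy(x).shape)                 # torch.Size([2, 1024])
```
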
Frank Lee
2c8c05675d [fx] patched conv and normalization (#1188) 2022-06-29 18:58:38 +08:00
Frank Lee
6d86f1bc91 [fx] supported data-dependent control flow in model tracing (#1185)
* [fx] supported data-dependent control flow in model tracing

* polish code
2022-06-29 15:05:25 +08:00
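
Symbolic tracing cannot evaluate an `if` whose condition depends on tensor values, which is what the commit above works around inside the Colossal-AI tracer. With stock `torch.fx`, one standard escape hatch is to mark the data-dependent code as a leaf via `torch.fx.wrap`, sketched below (the helper function is made up for illustration):

```python
import torch
import torch.fx

@torch.fx.wrap   # record calls to this function as opaque graph nodes
def clamp_if_negative_mean(x):
    # Data-dependent control flow: the branch taken depends on the values in x.
    if x.mean() < 0:
        return x.clamp(min=0)
    return x

class Net(torch.nn.Module):
    def forward(self, x):
        return clamp_if_negative_mean(x) * 2

gm = torch.fx.symbolic_trace(Net())   # traces fine; the `if` stays inside the leaf
print(gm.graph)
```
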
Jiarui Fang
c463f8adf9 [tensor] remove gpc in tensor tests (#1186) 2022-06-29 14:08:40 +08:00
Jiarui Fang
372f791444 [refactor] move chunk and chunkmgr to directory gemini (#1182) 2022-06-29 13:31:02 +08:00
ver217
6b2f2ab9bb [ddp] ColoDDP uses bucket all-reduce (#1177)
* add reducer

* update colo ddp with reducer

* polish unit test

* polish unit test
2022-06-29 10:34:13 +08:00
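
Bucketed all-reduce packs many small gradients into a few large buffers so the cost of each collective call is amortized. A hand-rolled sketch of the idea (not the ColoDDP Reducer), assuming the default process group is initialized:

```python
import torch
import torch.distributed as dist

def allreduce_gradients_bucketed(params, bucket_cap_mb: float = 25.0):
    """Flatten grads into ~bucket_cap_mb buckets and all-reduce each bucket once."""
    cap = int(bucket_cap_mb * 1024 * 1024)

    def flush(bucket):
        if not bucket:
            return
        grads = [p.grad for p in bucket]
        flat = torch.cat([g.reshape(-1) for g in grads])   # pack into one buffer
        dist.all_reduce(flat, op=dist.ReduceOp.SUM)        # one collective call
        flat /= dist.get_world_size()                      # average across ranks
        offset = 0
        for g in grads:                                    # unpack back into grads
            g.copy_(flat[offset:offset + g.numel()].view_as(g))
            offset += g.numel()

    bucket, bucket_bytes = [], 0
    for p in params:
        if p.grad is None:
            continue
        bucket.append(p)
        bucket_bytes += p.grad.numel() * p.grad.element_size()
        if bucket_bytes >= cap:
            flush(bucket)
            bucket, bucket_bytes = [], 0
    flush(bucket)
```
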
Jiarui Fang
7487215b95 [ColoTensor] add independent process group (#1179) 2022-06-29 10:03:09 +08:00
Jiarui Fang
1b657f9ce1 [tensor] revert local view back (#1178) 2022-06-27 18:38:34 +08:00
Jiarui Fang
0dd4e2bbfb [Tensor] rename some APIs in TensorSpec and Polish view unittest (#1176) 2022-06-27 15:56:11 +08:00
Jiarui Fang
aa7bef73d4 [Tensor] distributed view supports inter-process hybrid parallel (#1169) 2022-06-27 09:45:26 +08:00
ver217
9e1daa63d2 [zero] sharded optim supports loading local state dict (#1170)
* sharded optim supports loading local state dict

* polish code

* add unit test
2022-06-24 18:05:16 +08:00
ver217
561e90493f [zero] zero optim supports loading local state dict (#1171)
* zero optim supports loading local state dict

* polish code

* add unit test
2022-06-24 17:25:57 +08:00
Jiarui Fang
4b9bba8116 [ColoTensor] rename APIs and add output_replicate to ComputeSpec (#1168) 2022-06-24 13:08:54 +08:00
Jiarui Fang
f4ef224358 [Tensor] remove ParallelAction, use ComputeSpec instead (#1166) 2022-06-23 17:34:59 +08:00
Jiarui Fang
177c374401 remove gather out in parallel action (#1163) 2022-06-23 16:35:05 +08:00
Jiarui Fang
07f9c781f9 [graph] improve the graph building. (#1157) 2022-06-22 16:47:20 +08:00
ver217
22717a856f [tensor] add embedding bag op (#1156) 2022-06-22 15:54:03 +08:00
ver217
ae86151968 [tensor] add more element-wise ops (#1155)
* add more element-wise ops

* update test_op

* polish unit test
2022-06-22 15:16:47 +08:00
ver217
ffa025e120 [tensor] dist spec s2s uses all-to-all (#1136)
* dist spec s2s uses all-to-all

* update unit test

* add sanity check

* polish unit tests with titans

* add sanity check for DistMgr

* add sanity check

Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-06-22 11:32:38 +08:00
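
Re-distributing a tensor from row shards to column shards can be done with a single all-to-all instead of an all-gather followed by a local split, which is the change described above. A sketch of the communication pattern with plain `torch.distributed` (the backend must support `all_to_all`, e.g. NCCL):

```python
import torch
import torch.distributed as dist

def row_shard_to_col_shard(local_rows: torch.Tensor) -> torch.Tensor:
    """local_rows: this rank's (N/W, M) row shard -> returns its (N, M/W) column shard."""
    world = dist.get_world_size()
    # Cut my row shard into W column blocks; block j belongs to rank j's column shard.
    send = [chunk.contiguous() for chunk in local_rows.chunk(world, dim=1)]
    recv = [torch.empty_like(send[0]) for _ in range(world)]
    # One all-to-all replaces all-gather + local split.
    dist.all_to_all(recv, send)
    # Blocks arrive ordered by source rank, i.e. by row offset: stack them back up.
    return torch.cat(recv, dim=0)
```
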
Jiarui Fang
ff644ee5e4 polish unit tests with titans (#1152) 2022-06-22 09:58:02 +08:00
Jiarui Fang
8cdce0399c [ColoTensor] improves init functions. (#1150) 2022-06-21 18:28:38 +08:00
ver217
8106d7b8c7 [ddp] refactor ColoDDP and ZeroDDP (#1146)
* ColoDDP supports overwriting default process group

* rename ColoDDPV2 to ZeroDDP

* add docstr for ZeroDDP

* polish docstr
2022-06-21 16:35:23 +08:00
ver217
d26902645e [ddp] add save/load state dict for ColoDDP (#1127)
* add save/load state dict for ColoDDP

* add unit test

* refactor unit test folder

* polish unit test

* rename unit test
2022-06-20 10:51:47 +08:00
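
Checkpointing through a DDP-style wrapper mostly means saving the inner module so the file carries no wrapper prefix and can be loaded anywhere. Sketched here with vanilla `DistributedDataParallel` standing in for ColoDDP:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# assumes dist.init_process_group(...) has run and each rank has its own GPU
model = DDP(torch.nn.Linear(8, 8).cuda())

# Save: only rank 0 writes, and it saves the *inner* module so the keys carry
# no "module." prefix and the checkpoint also loads into an unwrapped model.
if dist.get_rank() == 0:
    torch.save(model.module.state_dict(), "ckpt.pt")
dist.barrier()

# Load: every rank reads the same file into its own replica.
state = torch.load("ckpt.pt", map_location="cuda")
model.module.load_state_dict(state)
```
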
ver217
789cad301b [hotfix] fix param op hook (#1131)
* fix param op hook

* update zero tp test

* fix bugs
2022-06-17 16:12:05 +08:00
ver217
f0a954f16d [ddp] add set_params_to_ignore for ColoDDP (#1122)
* add set_params_to_ignore for ColoDDP

* polish code

* fix zero hook v2

* add unit test

* polish docstr
2022-06-16 12:54:46 +08:00
YuliangLiu0306
fcf55777dd [fx]add autoparallel passes (#1121)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* feature/add autoparallel passes
2022-06-15 16:36:46 +08:00
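
An FX pass, auto-parallel or otherwise, is a function that edits a `GraphModule`'s graph and recompiles it. A toy pass in that shape (trivial rewrite rule, purely to illustrate the mechanics):

```python
import torch
import torch.fx

def swap_relu_for_gelu(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    """A toy graph pass: same structure as a real one, trivial rewrite rule."""
    for node in gm.graph.nodes:
        if node.op == 'call_function' and node.target is torch.relu:
            node.target = torch.nn.functional.gelu
    gm.graph.lint()     # sanity-check the edited graph
    gm.recompile()      # regenerate the Python code for forward()
    return gm

class Net(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

gm = swap_relu_for_gelu(torch.fx.symbolic_trace(Net()))
print(gm.code)          # forward now calls gelu instead of relu
```
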
Frank Lee
16302a5359 [fx] added unit test for coloproxy (#1119)
* [fx] added unit test for coloproxy

* polish code

* polish code
2022-06-15 15:27:51 +08:00
ver217
7d14b473f0 [gemini] gemini mgr supports "cpu" placement policy (#1118)
* update gemini mgr

* update chunk

* add docstr

* polish placement policy

* update test chunk

* update test zero

* polish unit test

* remove useless unit test
2022-06-15 15:05:19 +08:00
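
A placement policy decides where managed data lives: "cpu" keeps it in host memory, "cuda" keeps it on the device, and an auto-style policy looks at free GPU memory first. A deliberately simplified sketch of that decision (hypothetical function, not the GeminiManager API):

```python
import torch

def choose_device(policy: str, tensor_bytes: int) -> torch.device:
    """Toy placement decision: 'cpu', 'cuda', or 'auto' based on free GPU memory."""
    if policy == 'cpu':
        return torch.device('cpu')      # always keep data in host memory
    if policy == 'cuda':
        return torch.device('cuda')     # always keep data on the GPU
    if policy == 'auto':
        free_bytes, _total = torch.cuda.mem_get_info()
        # Leave headroom for activations; otherwise spill to CPU.
        return torch.device('cuda') if tensor_bytes < 0.5 * free_bytes else torch.device('cpu')
    raise ValueError(f"unknown placement policy: {policy}")
```
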
Frank Lee
53297330c0 [test] fixed hybrid parallel test case on 8 GPUs (#1106) 2022-06-14 10:30:54 +08:00
ver217
1f894e033f [gemini] zero supports gemini (#1093)
* add placement policy

* add gemini mgr

* update mem stats collector

* update zero

* update zero optim

* fix bugs

* zero optim monitor os

* polish unit test

* polish unit test

* add assert
2022-06-10 14:48:28 +08:00
Frank Lee
2b2dc1c86b [pipeline] refactor the pipeline module (#1087)
* [pipeline] refactor the pipeline module

* polish code
2022-06-10 11:27:38 +08:00