Commit Graph

188 Commits

Author SHA1 Message Date
Jiarui Fang
a98319f023 [tensor] torch function return colotensor (#1229) 2022-07-07 18:09:18 +08:00
HELSON
280a81243d [tensor] improve robustness of class 'ProcessGroup' (#1223) 2022-07-07 13:55:24 +08:00
Jiarui Fang
15d988f954 [tensor] sharded global process group (#1219) 2022-07-07 13:38:48 +08:00
Jiarui Fang
ae7d3f4927 [refactor] move process group from _DistSpec to ColoTensor. (#1203) 2022-07-06 16:15:16 +08:00
Jiarui Fang
b5f25eb32a [Tensor] add cpu group to ddp (#1200) 2022-07-05 14:58:28 +08:00
Jiarui Fang
060b917daf [refactor] remove gpc dependency in colotensor's _ops (#1189) 2022-07-04 18:54:37 +08:00
Jiarui Fang
c463f8adf9 [tensor] remove gpc in tensor tests (#1186) 2022-06-29 14:08:40 +08:00
Jiarui Fang
372f791444 [refactor] move chunk and chunkmgr to directory gemini (#1182) 2022-06-29 13:31:02 +08:00
Jiarui Fang
7487215b95 [ColoTensor] add independent process group (#1179) 2022-06-29 10:03:09 +08:00
Jiarui Fang
1b657f9ce1 [tensor] revert local view back (#1178) 2022-06-27 18:38:34 +08:00
Jiarui Fang
0dd4e2bbfb [Tensor] rename some APIs in TensorSpec and Polish view unittest (#1176) 2022-06-27 15:56:11 +08:00
Ziyue Jiang
dd0420909f [Tensor] rename parallel_action (#1174)
* rename parallel_action

* polish
2022-06-27 10:04:45 +08:00
Jiarui Fang
aa7bef73d4 [Tensor] distributed view supports inter-process hybrid parallel (#1169) 2022-06-27 09:45:26 +08:00
Jiarui Fang
4b9bba8116 [ColoTensor] rename APIs and add output_replicate to ComputeSpec (#1168) 2022-06-24 13:08:54 +08:00
Jiarui Fang
f4ef224358 [Tensor] remove ParallelAction, use ComputeSpec instead (#1166) 2022-06-23 17:34:59 +08:00
Jiarui Fang
177c374401 remove gather out in parallel action (#1163) 2022-06-23 16:35:05 +08:00
ver217
634eecb98e mark sanity_check of dist_spec_mgr as staticmethod (#1161) 2022-06-23 11:35:25 +08:00
ver217
4e67b2a890 fix chunk move device (#1158) 2022-06-22 18:07:10 +08:00
Jiarui Fang
07f9c781f9 [graph] improve the graph building. (#1157) 2022-06-22 16:47:20 +08:00
ver217
ffa025e120 [tensor] dist spec s2s uses all-to-all (#1136)
* dist spec s2s uses all-to-all

* update unit test

* add sanity check

* polish unit test with titans

* add sanity check for DistMgr

* add sanity check

Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-06-22 11:32:38 +08:00
Jiarui Fang
8cdce0399c [ColoTensor] improves init functions. (#1150) 2022-06-21 18:28:38 +08:00
Frank Lee
0e4e62d30d [tensor] added __repr__ to spec (#1147) 2022-06-21 15:38:05 +08:00
ver217
789cad301b [hotfix] fix param op hook (#1131)
* fix param op hook

* update zero tp test

* fix bugs
2022-06-17 16:12:05 +08:00
ver217
7d14b473f0 [gemini] gemini mgr supports "cpu" placement policy (#1118)
* update gemini mgr

* update chunk

* add docstr

* polish placement policy

* update test chunk

* update test zero

* polish unit test

* remove useless unit test
2022-06-15 15:05:19 +08:00
ver217
f99f56dff4 fix colo parameter torch function (#1117) 2022-06-15 14:23:27 +08:00
ver217
895c1c5ee7 [tensor] refactor param op hook (#1097)
* refactor param op hook

* add docstr

* fix bug
2022-06-13 16:11:53 +08:00
Frank Lee
cb18922c47 [doc] added documentation to chunk and chunk manager (#1094)
* [doc] added documentation to chunk and chunk manager

* polish code

* polish code

* polish code
2022-06-10 15:33:06 +08:00
ver217
1f894e033f [gemini] zero supports gemini (#1093)
* add placement policy

* add gemini mgr

* update mem stats collector

* update zero

* update zero optim

* fix bugs

* zero optim monitor os

* polish unit test

* polish unit test

* add assert
2022-06-10 14:48:28 +08:00
ver217
be01db37c8 [tensor] refactor chunk mgr and impl MemStatsCollectorV2 (#1077)
* polish chunk manager

* polish unit test

* impl add_extern_static_tensor for chunk mgr

* add mem stats collector v2

* polish code

* polish unit test

* polish code

* polish get chunks
2022-06-09 20:56:34 +08:00
ver217
1b17859328 [tensor] chunk manager monitor mem usage (#1076) 2022-06-07 15:00:00 +08:00
ver217
98cdbf49c6 [hotfix] fix chunk comm src rank (#1072) 2022-06-07 11:54:56 +08:00
ver217
c5cd3b0f35 [zero] zero optim copy chunk rather than copy tensor (#1070) 2022-06-07 10:30:46 +08:00
Jiarui Fang
a00644079e reorganize colotensor directory (#1062)
* reorganize colotensor directory

* polish code
2022-06-03 18:04:22 +08:00
Ziyue Jiang
df9dcbbff6 [Tensor] add hybrid device demo and fix bugs (#1059) 2022-06-03 12:09:49 +08:00
ver217
e1922ea4f6 [zero] add chunk size search for chunk manager (#1052) 2022-06-02 13:20:20 +08:00
ver217
51b9a49655 [zero] add zero optimizer for ColoTensor (#1046)
* add zero optimizer

* torch ok

* unit test ok

* polish code

* fix bugs

* polish unit test

* polish zero optim

* polish colo ddp v2

* refactor folder structure

* add comment

* polish unit test

* polish zero optim

* polish unit test
2022-06-02 12:13:15 +08:00
ver217
7faef93326 fix dist spec mgr (#1045) 2022-05-31 12:14:39 +08:00
ver217
9492a561c3 [tensor] ColoTensor supports ZeRo (#1015)
* impl chunk manager

* impl param op hook

* add reduce_chunk

* add zero hook v2

* add zero dp

* fix TensorInfo

* impl load balancing when using zero without chunk

* fix zero hook

* polish chunk

* fix bugs

* ddp ok

* zero ok

* polish code

* fix bugs about load balancing

* polish code

* polish code

* add end-to-end test

* polish code

* polish code

* polish code

* fix typo

* add test_chunk

* fix bugs

* fix bugs

* polish code
2022-05-31 12:00:12 +08:00
Ziyue Jiang
7c530b9de2 [Tensor] add Parameter inheritance for ColoParameter (#1041)
* add Parameter inheritance for ColoParameter

* remove tricks

* remove tricks

* polish

* polish
2022-05-30 17:23:44 +08:00
Ziyue Jiang
6c5996a56e [Tensor] add module check and bert test (#1031)
* add Embedding

* Add bert test

* polish

* add check module test

* polish

* polish

* polish

* polish
2022-05-26 18:15:42 +08:00
Ziyue Jiang
32291dd73f [Tensor] add module handler for linear (#1021)
* add module spec for linear

* polish

* polish

* polish
2022-05-26 11:50:44 +08:00
ver217
a3b66f6def [tensor] refactor parallel action (#1007)
* refactor parallel action

* polish unit tests
2022-05-20 20:19:58 +08:00
ver217
ad536e308e [tensor] refactor colo-tensor (#992)
* refactor colo-tensor and update linear op

* polish code

* polish code

* update ops and unit tests

* update unit tests

* polish code

* rename dist_spec module

* polish code

* polish code

* remove unneeded import

* fix pipelinable
2022-05-19 12:44:59 +08:00
Jiarui Fang
802ac297cc [Tensor] remove useless import in tensor dir (#997) 2022-05-18 14:54:51 +08:00
ver217
c2fdc6a011 [tensor] derive compute pattern from dist spec (#971)
* derive compute pattern from dist spec

* polish code
2022-05-16 14:58:08 +08:00
Ziyue Jiang
797a9dc5a9 add DistSpec for loss and test_model (#947) 2022-05-13 20:29:50 +08:00
ver217
67c33f57eb [tensor] design DistSpec and DistSpecManager for ColoTensor (#934)
* add dist spec

* update linear op

* polish code

* polish code

* update embedding op

* polish unit tests

* polish unit tests

* polish comments

* polish code

* add test_dist_spec_mgr

* polish code

* refactor folder structure

* polish unit tests

* add get_process_group() for TensorSpec

* polish code
2022-05-13 15:13:52 +08:00
ver217
4ca732349e [tensor] colo tensor overrides mul (#927)
* colo tensor overrides mul

* polish code
2022-05-10 16:04:08 +08:00
ver217
45b9124df4 [tensor] hijack addmm for colo tensor (#923)
* hijack addmm for colo tensor

* fix bugs

* polish unit test

* polish comments
2022-05-09 18:55:49 +08:00
Ziyue Jiang
c195d2814c [Tensor] add from_pretrained support and bert pretrained test (#921)
* add from_pretrained support and test

* polish

* polish

* polish

* polish
2022-05-09 16:11:47 +08:00