930 Commits

Author SHA1 Message Date
HELSON
2458659919 [zero] fix error for BEiT models (#2169)
* [zero] fix error for BEiT models

* [ColoParameter] add unpack operation for tuple arguments

* fix bugs

* fix chunkv2 unit testing

* add assertion for gradient state
2022-12-26 15:03:54 +08:00
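As a rough sketch of the tuple-unpacking fix above (all names below are hypothetical, not ColossalAI's actual code): arguments that arrive as tuples of tensors must be flattened element by element, otherwise wrapped parameters nested inside a tuple slip through the conversion.

```python
import torch

def _convert(arg):
    # Hypothetical per-argument conversion, standing in for whatever
    # ColoParameter-aware rewriting the real dispatch performs.
    return arg.data if isinstance(arg, torch.nn.Parameter) else arg

def unpack_args(*args):
    # Unpack tuple arguments so tensors nested inside tuples are
    # converted too, instead of passing through untouched.
    converted = []
    for arg in args:
        if isinstance(arg, tuple):
            converted.append(tuple(_convert(a) for a in arg))
        else:
            converted.append(_convert(arg))
    return tuple(converted)
```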
Jiarui Fang
355ffb386e [builder] unified cpu_optim fused_optim interface (#2190) 2022-12-23 20:57:41 +08:00
Jiarui Fang
9587b080ba [builder] use runtime builder for fused_optim (#2189) 2022-12-23 17:07:03 +08:00
Jiarui Fang
bc0e271e71 [builder] use builder() for cpu adam and fused optim in setup.py (#2187) 2022-12-23 16:05:13 +08:00
Jiarui Fang
d42afd30f8 [builder] runtime adam and fused_optim builder (#2184) 2022-12-23 14:14:21 +08:00
YuliangLiu0306
550f8f8905 [autoparallel] integrate_gpt_related_tests (#2134)
* [autoparallel] integrate_gpt_related_tests

* polish code

* polish code

* add GPT2Model into runtime test
2022-12-23 12:36:59 +08:00
Jiarui Fang
27327a4c90 [example] add PaLM PyTorch version (#2172) 2022-12-22 10:15:34 +08:00
Jiarui Fang
b87496a66b [hotfix] fix auto policy of test_sharded_optim_v2 (#2157) 2022-12-20 23:03:18 +08:00
YuliangLiu0306
16335cb537 [hotfix] fix aten default bug (#2158) 2022-12-20 22:40:46 +08:00
Jiarui Fang
2827f41898 [Gemini] convert GeminiDDP to PyTorch Module (#2151) 2022-12-20 10:19:36 +08:00
アマデウス
077a66dd81 updated attention kernel (#2133) 2022-12-16 10:54:03 +08:00
YuliangLiu0306
536560ccc0 [autoparallel] implement softmax handler (#2132) 2022-12-14 16:09:53 +08:00
Jiarui Fang
c89c66a858 [Gemini] update API of the ChunkMemStatsCollector (#2129) 2022-12-14 00:47:06 +08:00
Jiarui Fang
2938edf446 [Gemini] update the non-model data recording method in runtime memory tracer (#2128) 2022-12-13 17:11:31 +08:00
Jiarui Fang
deee317b0f [Gemini] test step-tensor mapping using repeated_computed_layers.py (#2127) 2022-12-13 16:34:10 +08:00
Jiarui Fang
8fac837679 [Gemini] update non-model data calculation method (#2126) 2022-12-13 15:44:07 +08:00
Jiarui Fang
5efda69735 [Gemini] hotfix the unittest bugs (#2125) 2022-12-13 14:14:55 +08:00
Jiarui Fang
05bb28aacf [Gemini] mapping of pre-op timestep and param (#2124) 2022-12-13 12:50:24 +08:00
YuliangLiu0306
cd0af9f7f6 [autoparallel] gpt2lp runtime test (#2113) 2022-12-12 18:06:40 +08:00
Jiarui Fang
9214d1fe28 [Gemini] chunk init using runtime visited param order (#2115) 2022-12-12 18:06:16 +08:00
HELSON
e7d3afc9cc [optimizer] add div_scale for optimizers (#2117)
* [optimizer] add div_scale for optimizers

* [zero] use div_scale in zero optimizer

* fix testing error
2022-12-12 17:58:57 +08:00
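For context, a minimal sketch of the div_scale idea (the class and signature below are assumptions, not ColossalAI's API): the optimizer divides gradients by a scale factor inside step(), the way mixed-precision loss scaling is undone before the parameter update.

```python
import torch

class SGDWithDivScale(torch.optim.SGD):
    # Hypothetical wrapper: fold the gradient un-scaling (grad / div_scale)
    # into the optimizer step, as a fused kernel would.
    @torch.no_grad()
    def step(self, closure=None, div_scale: float = 1.0):
        if div_scale != 1.0:
            for group in self.param_groups:
                for p in group["params"]:
                    if p.grad is not None:
                        p.grad.div_(div_scale)
        return super().step(closure)
```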
Jiarui Fang
e5aa8333e4 [NFC] update chunk manager API (#2119) 2022-12-12 16:57:22 +08:00
Jiarui Fang
e99edfcb51 [NFC] polish comments for Chunk class (#2116) 2022-12-12 15:39:31 +08:00
Ziyue Jiang
09d69e1c25 [PP Middleware] Add bwd and step for PP middleware (#2111)
* add bwd and step for PP middleware

* pre-commit

Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2022-12-12 12:40:03 +08:00
HELSON
63fbba3c19 [zero] add L2 gradient clipping for ZeRO (#2112)
* [zero] add L2 gradient clipping

* [testing] add MlpModel

* [zero] add unit test for grad clipping

* fix atol
2022-12-09 18:09:17 +08:00
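A minimal sketch of L2 gradient clipping under ZeRO (the helper below is an assumption, not the actual API): each rank holds only a shard of the gradients, so per-shard squared norms are all-reduced before the global norm is taken.

```python
import torch
import torch.distributed as dist

def clip_grads_l2(local_grads, max_norm, process_group=None, eps=1e-6):
    # Sum of squares over this rank's gradient shard.
    norm_sq = sum(g.float().pow(2).sum() for g in local_grads)
    if dist.is_available() and dist.is_initialized():
        # Combine shards across ranks into the global squared norm.
        dist.all_reduce(norm_sq, group=process_group)
    total_norm = norm_sq.sqrt()
    clip_coef = max_norm / (total_norm + eps)
    if clip_coef < 1.0:
        for g in local_grads:
            g.mul_(clip_coef)
    return total_norm
```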
Jiarui Fang
70a8556946 [gemini] get the param visited order during runtime (#2108) 2022-12-09 16:13:03 +08:00
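The visited-order idea can be sketched with forward pre-hooks (a hypothetical helper, not the real implementation): parameters are recorded the first time the forward pass touches them, and that order is what chunk initialization later consumes (see #2115).

```python
import torch.nn as nn

def trace_param_visit_order(model: nn.Module):
    # Append each parameter the first time a module's forward uses it.
    visited, seen = [], set()

    def pre_hook(module, inputs):
        for p in module.parameters(recurse=False):
            if id(p) not in seen:
                seen.add(id(p))
                visited.append(p)

    for m in model.modules():
        m.register_forward_pre_hook(pre_hook)
    return visited  # populated during the next forward pass
```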
YuliangLiu0306
d87baa85d9 [autoparallel] support linear function bias addition (#2104) 2022-12-09 10:31:36 +08:00
YuliangLiu0306
0fecbb9e20 [autoparallel] support addbmm computation (#2102) 2022-12-08 21:15:11 +08:00
YuliangLiu0306
d3d4630495 [autoparallel] add sum handler (#2101) 2022-12-08 17:02:54 +08:00
Ziyue Jiang
e4705ba4e2 [Pipeline Middleware] fix data race in Pipeline Scheduler for DAG (#2087)
* add DAG test case

* fix data race by adjusting the position of the lock

* polish code

* fix pytest for middleware

* remove test

Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2022-12-08 13:32:27 +08:00
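The data-race fix above comes down to widening a lock's critical section; a generic sketch of the pattern (not the scheduler's actual code):

```python
import threading

class ExchangeBuffer:
    # Shared buffer between pipeline stages: the whole check-and-pop must
    # sit under one lock, or a concurrent put() can interleave mid-read.
    def __init__(self):
        self._lock = threading.Lock()
        self._ready = {}

    def put(self, key, value):
        with self._lock:
            self._ready[key] = value

    def pop(self, key):
        with self._lock:  # lock covers both the lookup and the removal
            return self._ready.pop(key, None)
```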
YuliangLiu0306
b175e6d58e [autoparallel] add bias addition function class (#2098)
* [autoparallel] add bias addition function class

* polish code

* polish
2022-12-08 11:31:51 +08:00
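As a rough illustration of the bias-addition decomposition (a hypothetical function, not the real class): F.linear with a bias is split into a matmul node and a separate add node, so the solver can pick a sharding strategy for each independently.

```python
import torch

def linear_split_bias(x, weight, bias):
    # Decompose F.linear(x, weight, bias) into two traceable ops:
    out = torch.matmul(x, weight.t())  # the matmul, sharded on its own
    return out + bias                  # the bias addition as its own node
```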
YuliangLiu0306
3af7e65dea [autoparallel] complete gpt-related module search (#2097) 2022-12-08 10:04:09 +08:00
Jiarui Fang
85efb7ac2e [Gemini] gemini use the runtime memory tracer (RMT) (#2099) 2022-12-07 23:04:02 +08:00
Jiarui Fang
978242326a [Gemini] remove eval in gemini unittests! (#2092) 2022-12-07 11:58:37 +08:00
YuliangLiu0306
7f72eb0510 [autoparallel] add embedding handler (#2089)
* [autoparallel] add embedding handler

* fix bugs
2022-12-07 09:41:46 +08:00
Jiarui Fang
1fca5d79ea [Gemini] remove GLOBAL_MODEL_DATA_TRACER (#2091) 2022-12-06 22:30:16 +08:00
Jiarui Fang
25abae6d7f [Gemini] use MemStats in Runtime Memory tracer (#2088) 2022-12-06 19:48:20 +08:00
Jiarui Fang
33f4412102 [Gemini] use MemStats to store the tracing data. Separate it from Collector. (#2084) 2022-12-06 16:43:06 +08:00
Jiarui Fang
1f99205827 [Gemini] remove static tracer (#2083) 2022-12-06 12:53:58 +08:00
YuliangLiu0306
0e9db368ef [autoparallel] add tensor constructor handler (#2082) 2022-12-06 10:20:10 +08:00
YuliangLiu0306
cdf537a648 [autoparallel] add non_split linear strategy (#2078)
* [autoparallel] add non_split linear strategy

* polish
2022-12-06 10:19:33 +08:00
Boyuan Yao
cf0268da93 [autoparallel] Add F.conv metainfo (#2069)
* [fx] metainfo class for auto parallel

* [fx] add unit test for linear metainfo

* [fx] fix bwd param for linear

* [fx] modify unit test

* [fx] modify unit test

* [fx] modify import

* [fx] modify import

* [fx] modify import

* [fx] move meta profiler to auto parallel

* [fx] add conv metainfo class

* [fx] restore profiler

* [fx] restore meta profiler

* [autoparallel] modify unit test

* [fx] modify unit test

* [autoparallel] add batchnorm metainfo class

* [autoparallel] fix batchnorm unit test function declaration

* [fx] restore profiler

* [fx] add relu metainfo class

* [fx] restore profiler

* [autoparallel] modify metainfo input

* [autoparallel] add pooling metainfo

* [autoparallel] add F.linear metainfo generator

* [autoparallel] add binary elementwise metainfo

* [fx] recover profiler

* [autoparallel] fix forward memory calculation

* [autoparallel] modify constants.py

* [autoparallel] remove redundant print

* [autoparallel] add F.conv metainfo

* [autoparallel] linear fix
2022-12-06 10:17:57 +08:00
YuliangLiu0306
f123476666 [autoparallel] complete gpt block searching (#2065)
* [autoparallel] complete gpt block searching

* fix test
2022-12-06 10:17:10 +08:00
Ziyue Jiang
597cdd3006 [Pipeline Middleware] Adapt scheduler for Topo (#2066)
* adapt scheduler for Topo

* remove comment

* fix set input

Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2022-12-05 20:23:41 +08:00
Jiarui Fang
4f21c9e8d9 [Gemini] polish runtime tracer tests (#2077) 2022-12-05 16:22:49 +08:00
Jiarui Fang
a7adad9ccb [Gemini] rename hooks related to runtime mem tracer (#2076) 2022-12-05 15:00:03 +08:00
Jiarui Fang
40b7d55bf3 [Gemini] add albert in test models. (#2075) 2022-12-05 14:09:34 +08:00
Jiarui Fang
616ed91ecd [test] bert test in non-distributed way (#2074) 2022-12-05 13:32:16 +08:00
Jiarui Fang
223332ff7e [Gemini] rename ParamTracerWrapper -> RuntimeMemTracer (#2073) 2022-12-05 12:45:11 +08:00
Jiarui Fang
9f828ef36f [Gemini] remove unused MemtracerWrapper (#2072) 2022-12-05 11:57:59 +08:00