Commit Graph

1098 Commits

Author SHA1 Message Date
YuliangLiu0306
f027ef7913 [hotfix] fix fp16 optimizer bug (#2273) 2023-01-03 16:53:43 +08:00
YuliangLiu0306
fb87322773 [autoparallel] fix spelling error (#2270) 2023-01-03 16:13:00 +08:00
Jiarui Fang
af32022f74 [Gemini] fix the convert_to_torch_module bug (#2269) 2023-01-03 15:55:35 +08:00
YuliangLiu0306
4b29112ab2 [autoparallel] gpt2 autoparallel examples (#2267)
* [autoparallel] gpt2 autoparallel examples

* polish code

* polish code
2023-01-03 14:23:33 +08:00
Ziyue Jiang
8b045b3c1f [Pipeline Middleware] Reduce comm redundancy by getting accurate output (#2232)
* move to cpu to avoid deadlock

* get output by offsets

Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2023-01-03 13:43:57 +08:00
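
The two bullets above describe a host-staging pattern for pipeline communication. A minimal sketch of the idea in plain PyTorch (function names here are illustrative, not the actual middleware code):

```python
import torch

def prepare_output_for_transfer(output: torch.Tensor) -> torch.Tensor:
    # "move to cpu to avoid dead lock": detach and stage on the host so the
    # transfer does not block on a CUDA stream the peer is also waiting on.
    return output.detach().cpu()

def select_outputs_by_offsets(outputs: list, offsets: list) -> list:
    # "get output by offsets": send only the tensors the downstream stage
    # actually consumes instead of the whole output list, cutting redundancy.
    return [outputs[i] for i in offsets]

if __name__ == "__main__":
    outs = [torch.randn(4, 4) for _ in range(3)]
    payload = [prepare_output_for_transfer(t)
               for t in select_outputs_by_offsets(outs, offsets=[0, 2])]
    print([p.device.type for p in payload])  # ['cpu', 'cpu']
```
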
Boyuan Yao
c8c79102f0 [autoparallel] patch torch.flatten metainfo for autoparallel (#2247)
* [autoparallel] patch torch.flatten
2023-01-02 15:51:03 +08:00
YuliangLiu0306
8897b8f753 [autoparallel] autoparallel initialize (#2238) 2022-12-31 01:02:14 +08:00
xcnick
85178a397a [hotfix] fix error for torch 2.0 (#2243) 2022-12-30 23:11:55 +08:00
Super Daniel
b7d0990c61 [autoparallel] fix construct meta info. (#2245) 2022-12-30 19:56:44 +08:00
Ziyue Jiang
57929a6210 fix type of num_worker_threads (#2237)
Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2022-12-30 11:04:01 +08:00
Jiarui Fang
db4cbdc7fb [builder] builder for scaled_upper_triang_masked_softmax (#2234) 2022-12-30 09:58:00 +08:00
Super Daniel
78483a9fdd [logger] hotfix, missing _FORMAT (#2231) 2022-12-29 22:59:39 +08:00
Jiarui Fang
54de05da5d [builder] polish builder with better base class (#2216)
* [builder] polish builder

* remove print
2022-12-28 19:45:49 +08:00
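
The builder series of commits (#2184, #2187, #2189, #2190, #2216; the earlier ones appear further down this page) moves kernel compilation from install time to first use. A minimal sketch of that base-class shape using PyTorch's torch.utils.cpp_extension.load; class names and source paths below are illustrative, not ColossalAI's exact ones:

```python
from torch.utils.cpp_extension import load

class ExtensionBuilder:
    """Base class: JIT-compile a C++/CUDA extension on first use, then cache it."""

    def __init__(self, name: str, sources: list):
        self.name = name
        self.sources = sources
        self._module = None

    def build(self):
        # Compile lazily; repeated calls reuse the already-built module.
        if self._module is None:
            self._module = load(name=self.name, sources=self.sources, verbose=True)
        return self._module

class FusedOptimBuilder(ExtensionBuilder):
    def __init__(self):
        # Placeholder source paths; the real ones live in the repo's csrc tree.
        super().__init__(name="fused_optim",
                         sources=["csrc/fused_optim.cpp",
                                  "csrc/fused_optim_kernel.cu"])
```
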
YuliangLiu0306
3b1b91eaf4 [autoparallel] record parameter attribute in colotracer (#2217)
* [autoparallel] record parameter attribute in colotracer

* [autoparallel] fix construct_meta_info bug
2022-12-28 19:29:08 +08:00
Jiarui Fang
7675792100 [builder] raise Error when CUDA_HOME is not set (#2213) 2022-12-28 16:07:08 +08:00
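
A fail-fast guard like the one this commit adds can be sketched as follows (the message text and function name are illustrative). torch.utils.cpp_extension exposes the detected toolkit path as CUDA_HOME, which is None when no toolkit is found:

```python
import os
from torch.utils.cpp_extension import CUDA_HOME  # None if no CUDA toolkit detected

def check_cuda_home():
    # Raise early with an actionable message instead of letting a later nvcc
    # invocation fail cryptically mid-build.
    if CUDA_HOME is None and os.environ.get("CUDA_HOME") is None:
        raise RuntimeError(
            "CUDA_HOME is not set. Install the CUDA toolkit and export "
            "CUDA_HOME (e.g. /usr/local/cuda) before building fused kernels."
        )
```
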
Jiarui Fang
d5e3e3ec01 [example] update gpt example for larger model scale (#2211) 2022-12-28 13:54:08 +08:00
Boyuan Yao
24246f7aa5 [autoparallel] Attach input, buffer and output tensor to MetaInfo class (#2162)
* [fx] metainfo class for auto parallel

* [fx] add unit test for linear metainfo

* [fx] fix bwd param for linear

* [fx] modify unit test

* [fx] modify unit test

* [fx] modify import

* [fx] modify import

* [fx] modify import

* [fx] move meta profiler to auto parallel

* [fx] add conv metainfo class

* [fx] restore profiler

* [fx] restore meta profiler

* [autoparallel] modify unit test

* [fx] modify unit test

* [autoparallel] add batchnorm metainfo class

* [autoparallel] fix batchnorm unit test function declaration

* [fx] restore profiler

* [fx] add relu metainfo class

* [fx] restore profiler

* [autoparallel] modify metainfo input

* [autoparallel] add pooling metainfo

* [autoparallel] add F.linear metainfo generator

* [autoparallel] add binary elementwise metainfo

* [fx] recover profiler

* [autoparallel] fix forward memory calculation

* [autoparallel] modify constants.py

* [autoparallel] remove redundant print

* [autoparallel] add F.conv metainfo

* [autoparallel] linear fix

* [autoparallel] memory estimation for communication actions

* [autoparallel] fix docstring

* [autoparallel] fix variable names

* [autoparallel] attach tensor to metainfo class

* [autoparallel] fix dangerous try except

* [autoparallel] attach memory cost to shape consistency node

* [autoparallel] attach shape consistency node's metainfo to the node

* [autoparallel] remove todo in shape consistency memory estimation

* [autoparallel] fix the annotation
2022-12-28 13:37:40 +08:00
Boyuan Yao
d0bc5a1b34 [autoparallel] new metainfoprop based on metainfo class (#2179)
* [autoparallel] new metainfoprop to combine SPMD solver and checkpoint solver

* [autoparallel] new metainfoprop to combine SPMD solver and checkpoint solver

* [autoparallel] modify placeholder handler

* [autoparallel] modify metainfoprop

* [autoparallel] fix function typo

* [autoparallel] fix placeholder handler
2022-12-28 13:35:08 +08:00
YuliangLiu0306
78509124d3 [autoparallel] update getitem handler (#2207) 2022-12-27 19:58:32 +08:00
Jiarui Fang
1cb532ffec [builder] multihead attn runtime building (#2203)
* [hotfix] correct cpu_optim runtime compilation

* [builder] multihead attn

* fix bug

* fix a bug
2022-12-27 16:06:09 +08:00
Tongping Liu
8e22c38b89 [hotfix] Fixing the bug related to IPv6 support
Co-authored-by: ByteDance <tongping.liu@bytedance.com>
2022-12-27 12:42:46 +08:00
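
One common IPv6 pitfall in distributed launchers is URL formation: a raw host:port join is ambiguous for IPv6 literals, which must be bracketed. A minimal sketch (the helper name is illustrative, and this may not be the exact bug the commit fixes):

```python
import ipaddress

def make_init_method(host: str, port: int) -> str:
    try:
        if ipaddress.ip_address(host).version == 6:
            return f"tcp://[{host}]:{port}"  # IPv6 literals need brackets
    except ValueError:
        pass  # a hostname, not a literal address
    return f"tcp://{host}:{port}"

assert make_init_method("127.0.0.1", 29500) == "tcp://127.0.0.1:29500"
assert make_init_method("::1", 29500) == "tcp://[::1]:29500"
```
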
YuliangLiu0306
4851f2d607 [autoparallel] update_getattr_handler (#2193) 2022-12-26 21:57:39 +08:00
Jiarui Fang
5682e6d346 [hotfix] correct cpu_optim runtime compilation (#2197) 2022-12-26 16:45:14 +08:00
HELSON
2458659919 [zero] fix error for BEiT models (#2169)
* [zero] fix error for BEiT models

* [ColoParameter] add unpack operation for tuple arguments

* fix bugs

* fix chunkv2 unit testing

* add assertion for gradient state
2022-12-26 15:03:54 +08:00
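
The "unpack operation for tuple arguments" bullet hints at argument flattening: models like BEiT pass nested tuples through forward, so any hook that rewrites tensor arguments has to recurse into them. A minimal illustrative sketch (not the ColoParameter code itself):

```python
import torch

def flatten_args(args):
    # Recursively unpack tuples so every tensor argument is visited exactly once.
    flat = []
    for a in args:
        if isinstance(a, tuple):
            flat.extend(flatten_args(a))
        else:
            flat.append(a)
    return flat

nested = (torch.zeros(1), (torch.zeros(2), torch.zeros(3)))
print(len(flatten_args(nested)))  # 3
```
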
Jiarui Fang
355ffb386e [builder] unified cpu_optim fused_optim interface (#2190) 2022-12-23 20:57:41 +08:00
Jiarui Fang
9587b080ba [builder] use runtime builder for fused_optim (#2189) 2022-12-23 17:07:03 +08:00
Jiarui Fang
bc0e271e71 [builder] use builder() for cpu adam and fused optim in setup.py (#2187) 2022-12-23 16:05:13 +08:00
Jiarui Fang
d42afd30f8 [builder] runtime adam and fused_optim builder (#2184) 2022-12-23 14:14:21 +08:00
YuliangLiu0306
550f8f8905 [autoparallel] integrate_gpt_related_tests (#2134)
* [autoparallel] integrate_gpt_related_tests

* polish code

* polish code

* add GPT2Model into runtime test
2022-12-23 12:36:59 +08:00
Ziyue Jiang
59e343328d [Pipeline Middleware] Fix deadlock when num_microbatch=num_stage (#2156)
* add splitter

* polish code

* remove comment

* fix async NaN by moving to cpu first

Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2022-12-23 11:38:43 +08:00
Tongping Liu
ab54fed292 [hotfix] add kwargs for colo_addmm (#2171) 2022-12-22 13:25:30 +08:00
アマデウス
622f863291 [hotfix] Jit type hint #2161 (#2164) 2022-12-22 10:17:03 +08:00
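
For context on JIT type-hint hotfixes generally: torch.jit.script assumes every unannotated argument is a Tensor, so non-tensor arguments need explicit hints. A minimal illustration (not the code this commit touched):

```python
import torch

@torch.jit.script
def scaled_dropout(x: torch.Tensor, p: float, training: bool) -> torch.Tensor:
    # Without the float/bool hints, TorchScript would treat p and training
    # as Tensors and fail to compile the call below.
    return torch.nn.functional.dropout(x, p=p, training=training)

print(scaled_dropout(torch.randn(2, 2), 0.1, True).shape)  # torch.Size([2, 2])
```
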
Zihao
12e7bcd720 register meta func for rnn (#2159) 2022-12-21 23:06:18 +08:00
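
Meta functions compute output shapes and dtypes without touching data, which is what makes tracing large models cheap; ops that lack one (as RNNs did before this commit) fail on the meta device. A quick illustration with an op that does have meta support:

```python
import torch
import torch.nn.functional as F

# No memory is allocated and no kernel runs: only shape/dtype are inferred.
x = torch.empty(8, 16, device="meta")
w = torch.empty(32, 16, device="meta")
y = F.linear(x, w)
print(y.shape, y.device)  # torch.Size([8, 32]) meta
```
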
Boyuan Yao
cfe2a9bd90 [autoparallel] memory estimation for shape consistency (#2144)
* [fx] metainfo class for auto parallel

* [fx] add unit test for linear metainfo

* [fx] fix bwd param for linear

* [fx] modify unit test

* [fx] modify unit test

* [fx] modify import

* [fx] modify import

* [fx] modify import

* [fx] move meta profiler to auto parallel

* [fx] add conv metainfo class

* [fx] restore profiler

* [fx] restore meta profiler

* [autoparallel] modify unit test

* [fx] modify unit test

* [autoparallel] add batchnorm metainfo class

* [autoparallel] fix batchnorm unit test function declaration

* [fx] restore profiler

* [fx] add relu metainfo class

* [fx] restore profiler

* [autoparallel] modify metainfo input

* [autoparallel] add pooling metainfo

* [autoparallel] add F.linear metainfo generator

* [autoparallel] add binary elementwise metainfo

* [fx] recover profiler

* [autoparallel] fix forward memory calculation

* [autoparallel] modify constants.py

* [autoparallel] remove redundant print

* [autoparallel] add F.conv metainfo

* [autoparallel] linear fix

* [autoparallel] memory estimation for communication actions

* [autoparallel] fix docstring

* [autoparallel] fix variable names
2022-12-21 10:39:37 +08:00
Jiarui Fang
b87496a66b [hotfix] fix auto policy of test_sharded_optim_v2 (#2157) 2022-12-20 23:03:18 +08:00
YuliangLiu0306
16335cb537 [hotfix] fix aten default bug (#2158) 2022-12-20 22:40:46 +08:00
HELSON
a7d95b7024 [example] add zero1, zero2 example in GPT examples (#2146)
* [example] add zero1 and zero2 for GPT

* update readme in gpt example

* polish code

* change init value

* update readme
2022-12-20 14:30:27 +08:00
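
ZeRO stage 1 shards optimizer states across data-parallel ranks (stage 2 additionally shards gradients). As a rough stand-in for the example's own zero implementation, stock PyTorch's ZeroRedundancyOptimizer gives the stage-1 flavor:

```python
import torch
from torch.distributed.optim import ZeroRedundancyOptimizer

def build_zero1_optimizer(model: torch.nn.Module) -> ZeroRedundancyOptimizer:
    # Each rank stores only its shard of the Adam moments instead of a full
    # replica; requires torch.distributed to be initialized first.
    return ZeroRedundancyOptimizer(
        model.parameters(),
        optimizer_class=torch.optim.Adam,
        lr=1e-3,
    )
```
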
YuliangLiu0306
1cce6e36ca [autoparallel] use metainfo in handler (#2149) 2022-12-20 10:31:22 +08:00
Jiarui Fang
2827f41898 [Gemini] GeminiDDP convert to PyTorch Module. (#2151) 2022-12-20 10:19:36 +08:00
Jiarui Fang
bdef9dfdbe [NFC] remove useless graph node code (#2150) 2022-12-20 00:33:58 +08:00
BlueRum
b3f73ce1c8 [Gemini] Update coloinit_ctx to support meta_tensor (#2147) 2022-12-19 22:37:07 +08:00
Zihao
a128eec9d5 register aten._convolution.default (#2137) 2022-12-18 19:27:01 +08:00
Jiarui Fang
ee287620f0 [Gemini] revert ZeROInitCtx related tracer (#2138) 2022-12-16 12:37:06 +08:00
アマデウス
077a66dd81 updated attention kernel (#2133) 2022-12-16 10:54:03 +08:00
YuliangLiu0306
a3c6924deb [autoparallel] process size nodes in runtime pass (#2130)
* [autoparallel] process size nodes in runtime pass

* polish code
2022-12-14 16:10:50 +08:00
YuliangLiu0306
536560ccc0 [autoparallel] implement softmax handler (#2132) 2022-12-14 16:09:53 +08:00
Jiarui Fang
c89c66a858 [Gemini] update API of the chunkmemstatscollector. (#2129) 2022-12-14 00:47:06 +08:00
Jiarui Fang
2938edf446 [Gemini] update the non model data record method in runtime memory tracer (#2128) 2022-12-13 17:11:31 +08:00
Jiarui Fang
8fac837679 [Gemini] update non model data calculation method (#2126) 2022-12-13 15:44:07 +08:00
Jiarui Fang
5efda69735 [Gemini] hotfix the unittest bugs (#2125) 2022-12-13 14:14:55 +08:00