Commit Graph

1781 Commits

Author · SHA1 · Message · Date
Boyuan Yao
6cd784ffee [autoparallel] Add metainfo support for F.linear (#1987)
* [fx] metainfo class for auto parallel

* [fx] add unit test for linear metainfo

* [fx] fix bwd param for linear

* [fx] modify unit test

* [fx] modify unit test

* [fx] modify import

* [fx] modify import

* [fx] modify import

* [fx] move meta profiler to auto parallel

* [fx] add conv metainfo class

* [fx] restore profiler

* [fx] restore meta profiler

* [autoparallel] modify unit test

* [fx] modify unit test

* [autoparallel] add batchnorm metainfo class

* [autoparallel] fix batchnorm unit test function declaration

* [fx] restore profiler

* [fx] add relu metainfo class

* [fx] restore profiler

* [autoparallel] modify metainfo input

* [autoparallel] add pooling metainfo

* [autoparallel] add F.linear metainfo generator
2022-11-23 14:12:34 +08:00
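
The metainfo generators in this commit series estimate an op's compute and memory cost without executing it. Below is a minimal sketch of the arithmetic such a generator performs for F.linear, using standard FLOP accounting; the function and field names are illustrative, not ColossalAI's actual MetaInfo API:

    # Cost model for y = x @ W^T + b, with x of shape (B, in) and W of (out, in).
    def linear_metainfo(batch, in_features, out_features, dtype_bytes=4, bias=True):
        fwd_flops = 2 * batch * in_features * out_features   # one MAC = 2 FLOPs
        bwd_flops = 2 * fwd_flops                            # grad_input + grad_weight matmuls
        saved_activation_bytes = batch * in_features * dtype_bytes  # input saved for backward
        param_bytes = (in_features * out_features
                       + (out_features if bias else 0)) * dtype_bytes
        return dict(fwd_flops=fwd_flops, bwd_flops=bwd_flops,
                    saved_activation_bytes=saved_activation_bytes,
                    param_bytes=param_bytes)

    print(linear_metainfo(batch=32, in_features=1024, out_features=4096))
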
Super Daniel
2edbef13cc [fx] add more meta_registry for MetaTensor execution. (#2000)
* [sc] add examples for auto checkpoint.

* merge upstream

* [fx] add more meta_registry for MetaTensor execution.
2022-11-23 10:55:46 +08:00
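
MetaTensor execution runs ops on PyTorch's "meta" device, which propagates shapes and dtypes without allocating storage; ops that lack a meta kernel need a registry entry, which is what this commit adds more of. A self-contained illustration of the idea with plain PyTorch:

    import torch

    x = torch.empty(8, 128, device="meta")   # no real memory is allocated
    w = torch.empty(256, 128, device="meta")
    y = torch.nn.functional.linear(x, w)     # shape/dtype inference only
    print(y.shape, y.device)                 # torch.Size([8, 256]) meta
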
Jiarui Fang
a2d3266648 [hotfix] make Gemini work for conv DNN (#1998) 2022-11-22 14:52:36 +08:00
YuliangLiu0306
155891113e [autoparallel] use pytree map style to process data (#1989) 2022-11-21 10:44:22 +08:00
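
The "pytree map style" replaces hand-written branching over lists, tuples, and dicts with a single function applied to every leaf of a nested structure. A sketch using PyTorch's (internal but long-standing) pytree utility:

    import torch
    from torch.utils._pytree import tree_map

    batch = {"ids": torch.ones(2, 3), "extra": [torch.zeros(4), 5]}
    # Move every tensor leaf to the meta device; non-tensor leaves pass through.
    on_meta = tree_map(lambda v: v.to("meta") if isinstance(v, torch.Tensor) else v, batch)
    print(on_meta["ids"].device)   # meta
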
YuliangLiu0306
35e6b9ec82 [autoparallel] adapt handlers with attention block (#1990)
* [autoparallel] adapt handlers with attention block

* polish
2022-11-21 10:44:11 +08:00
YuliangLiu0306
05020e50d0 [autoparallel] support more flexible data type (#1967) 2022-11-18 17:01:06 +08:00
Boyuan Yao
c26f21d365 [autoparallel] add pooling metainfo (#1968)
* [fx] metainfo class for auto parallel

* [fx] add unit test for linear metainfo

* [fx] fix bwd param for linear

* [fx] modify unit test

* [fx] modify unit test

* [fx] modify import

* [fx] modify import

* [fx] modify import

* [fx] move meta profiler to auto parallel

* [fx] add conv metainfo class

* [fx] restore profiler

* [fx] restore meta profiler

* [autoparallel] modify unit test

* [fx] modify unit test

* [autoparallel] add batchnorm metainfo class

* [autoparallel] fix batchnorm unit test function declaration

* [fx] restore profiler

* [fx] add relu metainfo class

* [fx] restore profiler

* [autoparallel] modify metainfo input

* [autoparallel] add pooling metainfo
2022-11-18 15:13:03 +08:00
Jiarui Fang
3712ac7f90 [Gemini] add bert for MemtracerWrapper unit tests (#1982) 2022-11-18 14:58:28 +08:00
Jiarui Fang
e481489aa6 [Gemini] MemtracerWrapper unittests (#1981) 2022-11-18 14:19:40 +08:00
Jiarui Fang
31922110ad [Gemini] memory trace hook (#1978) 2022-11-18 11:52:55 +08:00
Jiarui Fang
0529fcde06 [Gemini] independent runtime tracer (#1974) 2022-11-18 10:53:42 +08:00
YuliangLiu0306
0da1d00399 [autoparallel] support distributed dataloader option (#1906)
* [autoparallel] support distributed dataloader option

* update output handler to support ddp dataloader

* polish code
2022-11-17 20:11:53 +08:00
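
A distributed dataloader gives each rank a disjoint shard of the dataset so no sample is processed twice per epoch. What the option boils down to, sketched with vanilla PyTorch (ColossalAI's option wires up the equivalent automatically):

    import torch
    from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

    dataset = TensorDataset(torch.randn(1000, 16))
    # rank/num_replicas are passed explicitly here; in real use they come
    # from the initialized process group.
    sampler = DistributedSampler(dataset, num_replicas=4, rank=0, shuffle=True)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)
    # each of the 4 ranks iterates ~250 distinct samples per epoch
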
Genghan Zhang
6630d45546 [autoparallel] Add alpha beta (#1973)
* Add alpha beta

* Fix test

* Fix test
2022-11-17 16:01:14 +08:00
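
"Alpha beta" refers to the standard latency-bandwidth model of communication cost used when ranking parallelization strategies: sending n bytes costs T(n) = alpha + n * beta, where alpha is the per-message latency and beta the inverse bandwidth. A tiny sketch (the constants below are placeholders, not profiled values):

    def comm_time(n_bytes: float, alpha: float = 1e-5, beta: float = 1 / 10e9) -> float:
        # T(n) = alpha + n * beta: fixed latency plus bytes over bandwidth
        return alpha + n_bytes * beta

    print(f"{comm_time(64 * 2**20):.4f} s")   # a 64 MiB message on a ~10 GB/s link
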
Jiarui Fang
cc0ed7cf33 [Gemini] ZeROHookV2 -> GeminiZeROHook (#1972) 2022-11-17 14:43:49 +08:00
ver217
f8a7148dec [kernel] move all symlinks of kernel to colossalai._C (#1971) 2022-11-17 13:42:33 +08:00
Jiarui Fang
7e24b9b9ee [Gemini] clean no used MemTraceOp (#1970) 2022-11-17 13:41:54 +08:00
Boyuan Yao
7c7921f71b [autoparallel] add torch.nn.ReLU metainfo (#1868)
* [fx] metainfo class for auto parallel

* [fx] add unit test for linear metainfo

* [fx] fix bwd param for linear

* [fx] modify unit test

* [fx] modify unit test

* [fx] modify import

* [fx] modify import

* [fx] modify import

* [fx] move meta profiler to auto parallel

* [fx] add conv metainfo class

* [fx] restore profiler

* [fx] restore meta profiler

* [autoparallel] modify unit test

* [fx] modify unit test

* [autoparallel] add batchnorm metainfo class

* [autoparallel] fix batchnorm unit test function declaration

* [fx] restore profiler

* [fx] add relu metainfo class

* [fx] restore profiler

* [autoparallel] modify metainfo input
2022-11-16 23:12:31 +08:00
Jiarui Fang
8c66a1d0aa [polish] remove useless file _mem_tracer_hook.py (#1963) 2022-11-16 15:55:10 +08:00
Jiarui Fang
c4739a725a [Gemini] polish memstats collector (#1962) 2022-11-16 15:45:57 +08:00
YuliangLiu0306
fea3cb661c [autoparallel] support addmm in tracer and solver (#1961)
* [fx] patch addmm

* [autoparallel] support addmm in tracer and solver
2022-11-16 14:59:18 +08:00
Jiarui Fang
f7e276fa71 [Gemini] add GeminiAdamOptimizer (#1960) 2022-11-16 14:44:28 +08:00
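
A hedged usage sketch for the new optimizer: GeminiAdamOptimizer bundles Adam with Gemini's heterogeneous GPU/CPU memory management in a single object. The import path and constructor signature below are assumptions based on ColossalAI examples of this era; consult the release for the exact API:

    import torch
    from colossalai.nn.optimizer.gemini_optimizer import GeminiAdamOptimizer  # assumed path

    model = torch.nn.Linear(1024, 1024).cuda()
    optimizer = GeminiAdamOptimizer(model, lr=1e-3)   # wraps the model and Adam together
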
HELSON
7066dfbf82 [zero] fix memory leak for zero2 (#1955) 2022-11-16 11:43:24 +08:00
Jiarui Fang
52c6ad26e0 [ColoTensor] reconfig ColoInitContext, decouple default_pg and default_dist_spec. (#1953) 2022-11-15 16:24:16 +08:00
zbian
598d456d0e fixed logger 2022-11-15 16:00:07 +08:00
zbian
6877121377 updated flash attention api 2022-11-15 15:25:39 +08:00
YuliangLiu0306
36c0f3ea5b [autoparallel] remove redundant comm node (#1893) 2022-11-15 10:53:41 +08:00
アマデウス
e52f9d9109 [tensorparallel] fixed tp layers (#1938) 2022-11-14 17:34:03 +08:00
Jiarui Fang
9f4fb3f28a [ColoTensor] ColoInitContext initialize parameters in shard mode. (#1937) 2022-11-14 16:05:09 +08:00
Boyuan Yao
d5c5bc219e [SC] add GPT example for auto checkpoint (#1889)
* [sc] SC tutorial for auto checkpoint

* [sc] polish examples

* [sc] polish readme

* [sc] polish readme and help information

* [sc] polish readme and help information
2022-11-11 23:17:25 +08:00
Junming Wu
14a0b18305 [NFC] polish colossalai/amp/naive_amp/__init__.py code style (#1905) 2022-11-11 17:49:18 +08:00
HELSON
6e51d296f0 [zero] migrate zero1&2 (#1878)
* add zero1&2 optimizer

* rename test directory

* rename test files

* change tolerance in test
2022-11-11 09:26:40 +08:00
Super Daniel
cc55ff0aa4 [autoparallel] user-friendly API for CheckpointSolver. (#1879)
Merge for SC tutorial
2022-11-10 20:59:28 +08:00
Super Daniel
448248b27c [fx] metainfo_trace as an API. (#1873)
* [fx] metainfo_trace as an API.

* [fx] add return.
2022-11-10 20:58:37 +08:00
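
metainfo_trace promotes the meta-information tracer to a one-call entry point: trace a module symbolically and annotate each node with shape and cost metadata. A sketch of the intended call shape; the import location and signature are assumptions, since the commit message only records that it became an API:

    import torch
    from colossalai.fx import metainfo_trace   # assumed import location

    model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU())
    gm = metainfo_trace(model, torch.randn(4, 16))   # annotated fx.GraphModule
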
Jiarui Fang
986f8cbaa7 [inference] overlap comm and compute in Linear1D_Row when stream_chunk_num > 1 (#1876) 2022-11-10 17:36:42 +08:00
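
With stream_chunk_num > 1, the row-parallel linear splits its matmul into chunks so each chunk's all-reduce can run while the next chunk is still computing. A simplified sketch of the overlap pattern (assumes an initialized process group; not the actual ColossalAI kernel):

    import torch
    import torch.distributed as dist

    def row_parallel_linear_overlapped(x, weight_shard, chunks=4):
        outs, handles = [], []
        for w in weight_shard.chunk(chunks, dim=0):            # split output features
            y = torch.nn.functional.linear(x, w)               # partial sum on this rank
            handles.append(dist.all_reduce(y, async_op=True))  # reduce while next chunk computes
            outs.append(y)
        for h in handles:
            h.wait()
        return torch.cat(outs, dim=-1)
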
YuliangLiu0306
1b494ad73c [autoparallel] fix linear logical convert issue (#1857) 2022-11-10 17:19:22 +08:00
Jiarui Fang
c2947dadf1 [inference] streaming Linear 1D Row inference (#1874) 2022-11-10 17:03:21 +08:00
Frank Lee
e6ec99d389 [utils] fixed lazy init context (#1867) 2022-11-10 15:17:20 +08:00
zbian
653b0a620e added skip_bias_add for non-tp linear 2022-11-09 15:41:08 +08:00
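
skip_bias_add makes the linear return its output and bias separately instead of fusing the add, so a downstream kernel (e.g. a fused bias + activation) can apply the bias more cheaply. A minimal sketch of the contract, not ColossalAI's actual layer:

    import torch

    class LinearSkipBias(torch.nn.Linear):
        def forward(self, x):
            out = torch.nn.functional.linear(x, self.weight)   # bias deliberately not added
            return out, self.bias                              # caller fuses it later

    layer = LinearSkipBias(16, 32)
    y, b = layer(torch.randn(4, 16))
    y = torch.nn.functional.gelu(y + b)   # bias folded into the activation step
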
LuGY
94329fc139 [NFC] polish colossalai/amp/apex_amp/__init__.py code style (#1853) 2022-11-09 14:49:42 +08:00
zbian
1559a09fb7 [NFC] polish amp.naive_amp.grad_scaler code style 2022-11-09 13:38:15 +08:00
HELSON
72c9448920 [NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/operator_handler.py code style (#1845) 2022-11-09 12:08:47 +08:00
Genghan Zhang
b25030cc07 [NFC] polish ./colossalai/amp/torch_amp/__init__.py code style (#1836) 2022-11-09 12:08:47 +08:00
Sze-qq
95ac4f88ea [NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/conv_handler.py code style (#1829)
Co-authored-by: siqi <siqi@siqis-MacBook-Pro.local>
2022-11-09 12:08:47 +08:00
Ziyue Jiang
5da03c936d [NFC] polish colossalai/amp/torch_amp/_grad_scaler.py code style (#1823)
Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2022-11-09 12:08:47 +08:00
Fazzie-Maqianli
399f84d8f6 [NFC] polish colossalai/amp/naive_amp/_fp16_optimizer.py code style (#1819) 2022-11-09 12:08:47 +08:00
CsRic
9623ec1b02 [NFC] polish colossalai/amp/naive_amp/_utils.py code style (#1816)
* [NFC] polish colossalai/nn/metric/accuracy_2p5d.py code style (#1714)

* [NFC] polish colossalai/zero/sharded_param/__init__.py code style

* [NFC] polish colossalai/amp/naive_amp/_utils.py code style

Co-authored-by: shenggan <csg19971016@gmail.com>
Co-authored-by: ric <mkkt_bkkt@mail.ustc.edu.cn>
2022-11-09 12:08:47 +08:00
binmakeswell
3c3714fc2a [NFC] polish strategies_constructor.py code style (#1806) 2022-11-09 12:08:47 +08:00
Jiarui Fang
3ce4463fe6 [utils] remove lazy_memory_allocate from ColoInitContext (#1844) 2022-11-09 11:50:33 +08:00
Jiarui Fang
fba34efb5a version to 0.1.11rc2 (#1832) 2022-11-08 17:25:15 +08:00
YuliangLiu0306
49216d7ab1 [autoparallel] fix bugs caused by negative dim key (#1808)
* [autoparallel] fix bugs caused by negative dim key

* fix import error

* fix matmul test issue

* fix unit test issue
2022-11-08 17:03:50 +08:00