188 Commits

Jiarui Fang
d16671da75 [Tensor] initialize the ColoOptimizer (#898)
* [Tensor] activation is an attr of ColoTensor

* [Tensor] add optimizer

* only detach parameters in context

* polish code
2022-04-28 15:23:40 +08:00
Jiarui Fang
e76f76c08b [Tensor] test parameters() as member function (#896) 2022-04-28 10:57:14 +08:00
Ziyue Jiang
cb182da7c5 [tensor] refine linear and add gather for layernorm (#893)
* refine linear and add function to ColoTensor

* add gather for layernorm

* polish

* polish
2022-04-28 10:55:40 +08:00
Jiarui Fang
26c49639d8 [Tensor] overriding parameters() for Module using ColoTensor (#889) 2022-04-27 15:28:59 +08:00
Ziyue Jiang
1d0aba4153 [tensor] add ColoTensor 1Dcol (#888) 2022-04-27 14:13:55 +08:00
Jiarui Fang
a0e5971692 [Tensor] test model check results for a simple net (#887) 2022-04-27 12:00:18 +08:00
Jiarui Fang
72cdc06875 [Tensor] make ColoTensor more robust for getattr (#886)
* [Tensor] make ColoTensor more robust for getattr

* polish

* polish
2022-04-27 10:57:49 +08:00
Ziyue Jiang
9bc5a77c31 [tensor] wrap function in the torch_tensor to ColoTensor (#881) 2022-04-26 20:13:56 +08:00
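
The wrapping in #881 relies on intercepting torch.* calls made on the custom tensor type. A minimal sketch of that general mechanism, assuming PyTorch's `__torch_function__` protocol; the class below is illustrative, not ColossalAI's actual ColoTensor:

```python
# Illustrative only: a tensor-like wrapper that intercepts torch.* calls
# via the __torch_function__ protocol and re-wraps plain-tensor results.
import torch

class WrappedTensor:
    def __init__(self, data: torch.Tensor):
        self.data = data  # the underlying plain torch.Tensor

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # Unwrap WrappedTensor arguments before calling the real op.
        unwrap = lambda x: x.data if isinstance(x, cls) else x
        args = tuple(unwrap(a) for a in args)
        kwargs = {k: unwrap(v) for k, v in kwargs.items()}
        out = func(*args, **kwargs)
        # Re-wrap tensor outputs so subsequent ops stay intercepted.
        return cls(out) if isinstance(out, torch.Tensor) else out

x = WrappedTensor(torch.randn(2, 3))
y = torch.mean(x)          # dispatched through __torch_function__
print(type(y).__name__)    # WrappedTensor
```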
Jiarui Fang
7f76517a85 [Tensor] make a simple net work with 1D row TP (#879) 2022-04-26 18:11:47 +08:00
ver217
c4d903e64a [gemini] accelerate adjust_layout() (#878)
* add lru cache

* polish code

* update unit test

* fix sharded optim
2022-04-26 18:08:31 +08:00
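
The first bullet of #878 credits the speedup to an LRU cache. A hedged sketch of how a layout decision can be memoized with `functools.lru_cache`; function names here are hypothetical, and the real gemini `adjust_layout()` is more involved:

```python
# Hypothetical sketch: memoize an expensive layout decision, bucketing
# the volatile "free memory" input so cache hits actually occur.
from functools import lru_cache

GRANULARITY = 32 * 1024 * 1024  # bucket free memory into 32 MiB steps

@lru_cache(maxsize=128)
def _evict_volume(free_cuda_bucket: int, hold_size: int) -> int:
    # Stand-in for an expensive search over which tensors to evict.
    return max(0, hold_size - free_cuda_bucket)

def adjust_layout(free_cuda_mem: int, hold_size: int) -> int:
    # Quantize the input so repeated calls hit the cache.
    bucketed = free_cuda_mem // GRANULARITY * GRANULARITY
    return _evict_volume(bucketed, hold_size)
```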
Jiarui Fang
909211453b [Tensor] Add some attributes to ColoTensor (#877)
* [Tensor] add some function to ColoTensor

* torch.allclose

* rm torch.add
2022-04-26 15:10:47 +08:00
Jiarui Fang
e43f83aa5c [Tensor] get named parameters for a model using ColoTensors (#874) 2022-04-26 14:08:01 +08:00
Jiarui Fang
96211c2cc8 [tensor] customized op returns ColoTensor (#875)
* [tensor] customized op returns ColoTensor

* polish

* polish code
2022-04-26 13:23:59 +08:00
Ziyue Jiang
26d4ab8b03 [Tensor] Add function to spec and update linear 1Drow and unit tests (#869) 2022-04-26 10:15:26 +08:00
Jiarui Fang
1190b2c4a4 [tensor] add cross_entropy_loss (#868) 2022-04-25 16:01:52 +08:00
HELSON
3107817172 [gemini] add stateful tensor container (#867) 2022-04-25 14:58:16 +08:00
Jiarui Fang
d01d3b8cb0 colo init context: add device attr. (#866) 2022-04-25 14:24:26 +08:00
Jiarui Fang
126ba573a8 [Tensor] add layer norm Op (#852) 2022-04-25 11:49:20 +08:00
Frank Lee
1258af71cc [ci] cache cuda extension (#860) 2022-04-25 10:03:47 +08:00
Ziyue Jiang
bcc8655021 [Tensor] Add 1Drow weight reshard by spec (#854) 2022-04-24 18:30:20 +08:00
Jiarui Fang
62f059251b [Tensor] init a tp network training unittest (#849) 2022-04-24 16:43:44 +08:00
Ziyue Jiang
2a0a427e04 [tensor] add assert for colo_tensor 1Drow (#846) 2022-04-24 14:12:45 +08:00
Ziyue Jiang
05023ecfee [Tensor] TP Linear 1D row (#843) 2022-04-24 13:43:12 +08:00
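
For context on what the 1D-row TP linear of #843 computes: the weight is sharded along the input dimension, each rank produces a partial output, and the partials are summed (an all-reduce in the real distributed setting). A single-process sketch with hypothetical names, not the actual ColossalAI kernel:

```python
# Schematic 1D-row tensor parallelism for a linear layer, run in one
# process: torch.stack(...).sum(0) stands in for all_reduce(SUM).
import torch

def linear_1d_row(x: torch.Tensor, weight: torch.Tensor,
                  bias: torch.Tensor, world_size: int) -> torch.Tensor:
    # weight: (out_features, in_features); shard along in_features.
    w_shards = weight.chunk(world_size, dim=1)
    x_shards = x.chunk(world_size, dim=-1)  # input sharded the same way
    partials = [xs @ ws.t() for xs, ws in zip(x_shards, w_shards)]
    out = torch.stack(partials).sum(dim=0)  # all-reduce(sum) stand-in
    return out + bias                       # bias added once, after reduce

x = torch.randn(4, 8)
w, b = torch.randn(6, 8), torch.randn(6)
assert torch.allclose(linear_1d_row(x, w, b, world_size=2),
                      torch.nn.functional.linear(x, w, b), atol=1e-5)
```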
HELSON
e5ea3fdeef [gemini] add GeminiMemoryManager (#832)
* refactor StatefulTensor, tensor utilities

* add unit test for GeminiMemoryManager
2022-04-24 13:08:48 +08:00
YuliangLiu0306
35ea6e1023 [pipelinable] use pipelinable context to initialize non-pipeline model (#816)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* [pipeline] add module lazy init feature to support large model initialization.

* [pipeline] add to_layer_list and partition method to support arbitrary non-pp models

* refactor the module structure

* polish

* [pipelinable] add unit test for pipelinable

* polish

* polish

* Fix CodeFactor issues.
2022-04-24 13:03:12 +08:00
Jiarui Fang
ea0a2ed25f [hotfix] fix the numel() bug in ColoTensor (#845) 2022-04-24 12:32:10 +08:00
Jiarui Fang
8789850eea Init Context supports lazy model memory allocation (#842) 2022-04-22 18:03:35 +08:00
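
Lazy allocation as in #842 generally means constructing the model without backing storage and materializing memory only when needed. A sketch of the general idea using PyTorch's meta device on a recent PyTorch; the actual ColossalAI init context uses its own hooks rather than this exact recipe:

```python
# Illustrative sketch: parameters created on the 'meta' device carry
# shape and dtype but no storage; memory is allocated at materialization.
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024, device='meta')  # no parameter memory yet
assert model.weight.is_meta

# Materialize later: allocate storage on a real device, then initialize.
model = model.to_empty(device='cpu')
with torch.no_grad():
    nn.init.kaiming_uniform_(model.weight)
    nn.init.zeros_(model.bias)
```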
Frank Lee
943982d29a [unittest] refactored unit tests for change in dependency (#838) 2022-04-22 15:39:07 +08:00
Frank Lee
01e9f834f5 [dependency] removed torchvision (#833)
* [dependency] removed torchvision

* fixed transforms
2022-04-22 15:24:35 +08:00
Jiarui Fang
cb5a4778e1 Revert "[WIP] Applying ColoTensor on TP-1D-row Linear. (#831)" (#835)
This reverts commit ac88de6dfc.
2022-04-22 14:45:57 +08:00
Jiarui Fang
ac88de6dfc [WIP] Applying ColoTensor on TP-1D-row Linear. (#831)
* revert zero tensors back

* [tensor] init row 1d linear
2022-04-22 14:03:26 +08:00
Jiarui Fang
294a6060d0 [tensor] ZeRO uses ColoTensor as the base class. (#828)
* [refactor] moving InsertPostInitMethodToModuleSubClasses to utils.

* [tensor] ZeRO uses ColoTensor as the base class.

* polish
2022-04-22 12:00:48 +08:00
Ziyue Jiang
8e6fdb4f29 [tensor] fix test_linear (#826) 2022-04-21 17:18:56 +08:00
Ziyue Jiang
1a9e2c2dff [tensor] fix kwargs in colo_tensor torch_function (#825) 2022-04-21 16:47:35 +08:00
Jiarui Fang
2ecc3d7a55 [tensor] lazy init (#823) 2022-04-21 15:40:23 +08:00
Jiarui Fang
660d2d1f1b [Tensor] apply ColoTensor on Torch functions (#821)
* Revert "[zero] add ZeroTensorShardStrategy (#793)"

This reverts commit 88759e289e.

* [gemini] set cpu memory capacity

* [log] local throughput collecting

* polish

* polish

* polish

* polish code

* polish

* polish code

* add a new tensor structure and override linear for it

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* [tensor] renaming and reorganize directory structure.

* rm useless dir

* polish

* polish

* [tensor] handle functions that are not wrapped
2022-04-21 14:21:10 +08:00
Jiarui Fang
0ce8924ceb [tensor] reorganize files (#820) 2022-04-21 14:15:48 +08:00
Jiarui Fang
ab962b9735 [gemini] a new tensor structure (#818)
* Revert "[zero] add ZeroTensorShardStrategy (#793)"

This reverts commit 88759e289e.

* [gemini] set cpu memory capacity

* [log] local throughput collecting

* polish

* polish

* polish

* polish code

* polish

* polish code

* add a new tensor structure and override linear for it

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish
2022-04-21 11:42:37 +08:00
Jiarui Fang
e761ad2cd7 Revert "[zero] add ZeroTensorShardStrategy (#793)" (#806) 2022-04-19 14:40:02 +08:00
HELSON
88759e289e [zero] add ZeroTensorShardStrategy (#793) 2022-04-19 14:32:45 +08:00
Jiarui Fang
681addb512 [refactor] moving grad acc logic to engine (#804) 2022-04-19 14:03:21 +08:00
Jiarui Fang
4d9332b4c5 [refactor] moving memtracer to gemini (#801) 2022-04-19 10:13:08 +08:00
HELSON
4c4388c46e [hotfix] fix memory leak in zero (#781) 2022-04-18 13:57:03 +08:00
Frank Lee
5a1a095b92 [test] refactored with the new rerun decorator (#763)
* [test] refactored with the new rerun decorator

* polish test case
2022-04-15 00:33:04 +08:00
Jiarui Fang
10ef8afdd2 [gemini] init individual gemini directory (#754) 2022-04-14 16:40:26 +08:00
ver217
dcca614eee [hotfix] fix test_stateful_tensor_mgr (#762) 2022-04-14 15:50:09 +08:00
ver217
a93a7d7364 [hotfix] fix reuse_fp16_shard of sharded model (#756)
* fix reuse_fp16_shard

* disable test stm

* polish code
2022-04-14 14:56:46 +08:00
HELSON
84c6700b2a [zero] refactor memstats_collector (#746) 2022-04-14 12:01:12 +08:00
ver217
e396bb71f2 [zero] add tensor placement policies (#743)
* add tensor placement policies

* polish comments

* polish comments

* update moe unit tests
2022-04-13 15:00:48 +08:00
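
The placement policies of #743 decide where each tensor should live (CPU, CUDA, or chosen automatically). A hedged sketch of such a policy hierarchy; class and method names here are illustrative, not ColossalAI's actual API:

```python
# Hypothetical placement-policy hierarchy: a policy maps a tensor to the
# device it should reside on. The "auto" heuristic below is illustrative.
from abc import ABC, abstractmethod
import torch

class PlacementPolicy(ABC):
    @abstractmethod
    def device_for(self, tensor: torch.Tensor) -> torch.device: ...

class CPUPolicy(PlacementPolicy):
    def device_for(self, tensor: torch.Tensor) -> torch.device:
        return torch.device('cpu')   # always keep tensors in host memory

class CUDAPolicy(PlacementPolicy):
    def device_for(self, tensor: torch.Tensor) -> torch.device:
        return torch.device('cuda')  # always keep tensors on the GPU

class AutoPolicy(PlacementPolicy):
    """Fall back to CPU once a CUDA memory budget is exhausted."""
    def __init__(self, cuda_budget_bytes: int):
        self.cuda_budget_bytes = cuda_budget_bytes
        self.used = 0

    def device_for(self, tensor: torch.Tensor) -> torch.device:
        nbytes = tensor.element_size() * tensor.numel()
        if self.used + nbytes <= self.cuda_budget_bytes:
            self.used += nbytes
            return torch.device('cuda')
        return torch.device('cpu')
```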
HELSON
22c4b88d56 [zero] refactor ShardedParamV2 for convenience (#742) 2022-04-13 14:54:26 +08:00