mirror of https://github.com/hpcaitech/ColossalAI.git synced 2025-05-10 01:17:45 +00:00
Commit Graph

2881 Commits

Author SHA1 Message Date
アマデウス
0fedef4f3c
Layer integration ()
* integrated parallel layers for ease of building models

* integrated 2.5d layers

* cleaned codes and unit tests

* added log metric by step hook; updated imagenet benchmark; fixed some bugs

* reworked initialization; cleaned codes

Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-27 15:04:32 +08:00
shenggan
5c3843dc98
add colossalai kernel module () 2021-12-21 12:19:52 +08:00
Xin Zhang
648f806315
add example of self-supervised SimCLR training - V2 ()
* add example of self-supervised SimCLR training

* simclr v2, replace nvidia dali dataloader

* updated

* sync to latest code writing style

* sync to latest code writing style and modify README

* detail README & standardize dataset path
2021-12-21 08:07:18 +08:00
ver217
8f02a88db2
add interleaved pipeline, fix naive amp and update pipeline model initializer () 2021-12-20 23:26:19 +08:00
Frank Lee
91c327cb44
fixed zero level 3 dtype bug () 2021-12-20 17:00:53 +08:00
HELSON
632e622de8
overlap computation and communication in 2d operations () 2021-12-16 16:05:15 +08:00
Frank Lee
cd9c28e055
added CI for unit testing () 2021-12-16 10:32:08 +08:00
Frank Lee
45355a62f7
Update issue templates () 2021-12-14 12:01:46 +08:00
Frank Lee
35813ed3c4
update examples and sphinx docs for the new api () 2021-12-13 22:07:01 +08:00
ver217
7d3711058f
fix zero3 fp16 and add zero3 model context () 2021-12-10 17:48:50 +08:00
Frank Lee
9a0466534c
update markdown docs (english) () 2021-12-10 14:37:33 +08:00
Frank Lee
da01c234e1
Develop/experiments ()
* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapt torch amp with tensor parallel ()

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule ()

Co-authored-by: 1SAA <c2h214748@gmail.com>

* Split conv2d, class token, positional embedding in 2d; fix random number in ddp; fix convergence in cifar10, Imagenet1000

* Integrate 1d tensor parallel in Colossal-AI ()

* fixed 1D and 2D convergence ()

* optimized 2D operations

* fixed 1D ViT convergence problem

* Feature/ddp ()

* remove redundant func in setup () ()

* use env to control the language of doc () ()

* Support TP-compatible Torch AMP and Update trainer API ()

* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapt torch amp with tensor parallel ()

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule ()

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>

* add an example of ViT-B/16 and remove w_norm clipping in LAMB ()

* add explanation for ViT example () ()

* support torch ddp

* fix loss accumulation

* add log for ddp

* change seed

* modify timing hook

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>

* Feature/pipeline ()

* remove redundant func in setup () ()

* use env to control the language of doc () ()

* Support TP-compatible Torch AMP and Update trainer API ()

* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapt torch amp with tensor parallel ()

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule ()

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>

* add an example of ViT-B/16 and remove w_norm clipping in LAMB ()

* add explanation for ViT example () ()

* optimize communication of pipeline parallel

* fix grad clip for pipeline

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>

* optimized 3d layer to fix slow computation; tested imagenet performance with 3d; reworked lr_scheduler config definition; fixed launch args; fixed some printing issues; simplified apis of 3d layers ()

* Update 2.5d layer code to get similar accuracy on the imagenet-1k dataset

* update api for better usability ()

update api for better usability

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-09 15:08:29 +08:00
ver217
eb2f8b1f6b
add how to build tfrecord dataset () 2021-12-02 16:31:23 +08:00
ver217
4da256a584
add some details in vit-b16 example () 2021-12-02 09:29:27 +08:00
ver217
e67dab92a9
add some details in vit-b16 example () () 2021-12-02 08:55:11 +08:00
binmakeswell
2528adc62f
add explanation for ViT example () () 2021-11-29 10:25:38 +08:00
ver217
dbe62c67b8
add an example of ViT-B/16 and remove w_norm clipping in LAMB () 2021-11-18 23:45:09 +08:00
Frank Lee
3defa32aee
Support TP-compatible Torch AMP and Update trainer API ()
* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapt torch amp with tensor parallel ()

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule ()

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
2021-11-18 19:45:06 +08:00
ver217
2b05de4c64
use env to control the language of doc () () 2021-11-15 16:53:56 +08:00
ver217
9942fd5bfa
remove redundant func in setup () () 2021-11-15 16:43:28 +08:00
ver217
0aa07e600c
Merge pull request from hpcaitech/feature/zhdoc
made some modifications to the documents
2021-11-04 14:26:28 +08:00
binmakeswell
05e7069a5b fixed some typos in the documents, added blog link and paper author information in README 2021-11-03 17:18:43 +08:00
Frank Lee
ccb44882e1
Merge pull request from hpcaitech/feature/zhdoc
added Chinese documents and fixed some typos in English documents
2021-11-03 11:38:06 +08:00
Fan Cui
18ba66e012 added Chinese documents and fixed some typos in English documents 2021-11-02 23:28:44 +08:00
Frank Lee
ccbc918c11
Merge pull request from hpcaitech/hotfix/doc
reorder parallelization methods in parallelization documentation
2021-11-02 14:35:06 +08:00
ver217
50982c0b7d reoder parallelization methods in parallelization documentation 2021-11-01 14:31:55 +08:00
ver217
3c7604ba30 update documentation 2021-10-29 09:29:20 +08:00
アマデウス
3245a69fc2
cleaned test scripts 2021-10-29 00:48:14 +08:00
アマデウス
da2042f5c1
updated readme 2021-10-29 00:39:21 +08:00
zbian
404ecbdcc6 Migrated project 2021-10-28 18:21:23 +02:00
アマデウス
2ebaefc542
Initial commit 2021-10-29 00:19:45 +08:00