mirror of https://github.com/hpcaitech/ColossalAI.git synced 2025-05-03 05:58:09 +00:00
Commit Graph

6 Commits

Author | SHA1 | Message | Date
Jiarui Fang | aa7bef73d4 | [Tensor] distributed view supports inter-process hybrid parallel () | 2022-06-27 09:45:26 +08:00
Jiarui Fang | a445e118cf | [polish] polish singleton and global context () | 2022-03-23 18:03:39 +08:00
HELSON | aff9d354f7 | [MOE] polish moe_env () | 2022-03-19 15:36:25 +08:00
HELSON | 84fd7c1d4d | add moe context, moe utilities and refactor gradient handler () | 2022-03-18 16:38:32 +08:00
Frank Lee | da01c234e1 | Develop/experiments () | 2021-12-09 15:08:29 +08:00

Full commit message:
* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel ()

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule ()

Co-authored-by: 1SAA <c2h214748@gmail.com>

* Split conv2d, class token, positional embedding in 2d; fix random number in DDP; fix convergence on CIFAR-10 and ImageNet-1000

* Integrate 1d tensor parallel in Colossal-AI ()

* fixed 1D and 2D convergence ()

* optimized 2D operations

* fixed 1D ViT convergence problem

* Feature/ddp ()

* remove redundancy func in setup () ()

* use env to control the language of doc () ()

* Support TP-compatible Torch AMP and Update trainer API ()

* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel ()

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule ()

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>

* add an example of ViT-B/16 and remove w_norm clipping in LAMB ()

* add explanation for ViT example () ()

* support torch ddp

* fix loss accumulation

* add log for ddp

* change seed

* modify timing hook

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>

* Feature/pipeline ()

* remove redundancy func in setup () ()

* use env to control the language of doc () ()

* Support TP-compatible Torch AMP and Update trainer API ()

* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel ()

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule ()

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>

* add an example of ViT-B/16 and remove w_norm clipping in LAMB ()

* add explanation for ViT example () ()

* optimize communication of pipeline parallel

* fix grad clip for pipeline

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>

* optimized 3d layer to fix slow computation; tested imagenet performance with 3d; reworked lr_scheduler config definition; fixed launch args; fixed some printing issues; simplified apis of 3d layers ()

* Update 2.5d layer code to get a similar accuracy on imagenet-1k dataset

* update api for better usability ()

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
zbian | 404ecbdcc6 | Migrated project | 2021-10-28 18:21:23 +02:00