mirror of https://github.com/hpcaitech/ColossalAI.git synced 2025-05-06 23:48:26 +00:00
Commit Graph

49 Commits

Author SHA1 Message Date
digger-yu
b7141c36dd
[CI] fix some spelling errors ()
* fix spelling error with examples/comminity/

* fix spelling error with tests/

* fix some spelling error with tests/ colossalai/ etc.
2023-05-10 17:12:03 +08:00
digger-yu
b9a8dff7e5
[doc] Fix typo under colossalai and doc ()
* Fixed several spelling errors under colossalai

* Fix the spelling error in colossalai and docs directory

* Cautiously changed the spelling error under the example folder

* Update runtime_preparation_pass.py

revert autograft to autograd

* Update search_chunk.py

utile to until

* Update check_installation.py

change misteach to mismatch in line 91

* Update 1D_tensor_parallel.md

revert to perceptron

* Update 2D_tensor_parallel.md

revert to perceptron in line 73

* Update 2p5D_tensor_parallel.md

revert to perceptron in line 71

* Update 3D_tensor_parallel.md

revert to perceptron in line 80

* Update README.md

revert to resnet in line 42

* Update reorder_graph.py

revert to indice in line 7

* Update p2p.py

revert to megatron in line 94

* Update initialize.py

revert to torchrun in line 198

* Update routers.py

change to detailed in line 63

* Update routers.py

change to detailed in line 146

* Update README.md

revert random number in line 402
2023-04-26 11:38:43 +08:00
yuxuan-lou
198a74b9fd
[NFC] polish colossalai/context/random/__init__.py code style () 2023-03-30 14:19:26 +08:00
RichardoLuo
1ce9d0c531 [NFC] polish initializer_data.py code style () 2023-03-29 15:22:21 +08:00
Kai Wang (Victor Kai)
964a28678f [NFC] polish initializer_3d.py code style () 2023-03-29 15:22:21 +08:00
Arsmart1
8af977f223 [NFC] polish colossalai/context/parallel_context.py code style () 2023-03-29 15:22:21 +08:00
Zirui Zhu
c9e3ee389e
[NFC] polish colossalai/context/process_group_initializer/initializer_2d.py code style () 2023-02-15 22:27:13 +08:00
Ziyue Jiang
4603538ddd
[NFC] polish colossalai/context/process_group_initializer/initializer_sequence.py code style ()
Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2023-02-15 10:53:38 +08:00
アマデウス
534f68c83c
[NFC] polish pipeline process group code style () 2023-02-14 18:12:01 +08:00
LuGY
56ff1921e9
[NFC] polish colossalai/context/moe_context.py code style () 2023-02-14 18:02:45 +08:00
アマデウス
99d9713b02 Revert "Update parallel_context.py ()"
This reverts commit 7d5640b9db.
2023-01-19 12:27:48 +08:00
Haofan Wang
7d5640b9db
Update parallel_context.py () 2023-01-10 11:27:23 +08:00
Tongping Liu
8e22c38b89
[hotfix] Fixing the bug related to ipv6 support
Co-authored-by: ByteDance <tongping.liu@bytedance.com>
2022-12-27 12:42:46 +08:00
kurisusnowdeng
0b8161fab8 updated tp layers 2022-11-02 12:19:38 +08:00
HELSON
1468e4bcfc
[zero] add constant placement policy ()
* fixes memory leak when parameter is in fp16 in ZeroDDP init.
* bans chunk release in CUDA; a chunk may be released only when it is about to be offloaded.
* adds a constant placement policy. With it, users can allocate a reserved caching memory space for parameters.
2022-10-14 17:53:16 +08:00
HELSON
95c35f73bd
[moe] initialize MoE groups by ProcessGroup () 2022-09-23 17:20:41 +08:00
Frank Lee
27fe8af60c
[autoparallel] refactored shape consistency to remove redundancy ()
* [autoparallel] refactored shape consistency to remove redundancy

* polish code

* polish code

* polish code
2022-09-13 18:30:18 +08:00
ver217
d068af81a3
[doc] update rst and docstring ()
* update rst

* add zero docstr

* fix docstr

* remove fx.tracer.meta_patch

* fix docstr

* fix docstr

* update fx rst

* fix fx docstr

* remove useless rst
2022-07-21 15:54:53 +08:00
Frank Lee
2238758c2e
[usability] improved error messages in the context module () 2022-04-25 13:42:31 +08:00
Frank Lee
920fe31526
[compatibility] used backward-compatible API for global process group () 2022-04-14 17:20:35 +08:00
Frank Lee
04ff5ea546
[utils] support detection of number of processes on current node () 2022-04-12 09:28:19 +08:00
Cautiousss
055d0270c8 [NFC] polish colossalai/context/process_group_initializer/initializer_sequence.py colossalai/context/process_group_initializer/initializer_tensor.py code style ()
Co-authored-by: 何晓昕 <cautious@r-236-100-25-172.comp.nus.edu.sg>
2022-04-06 11:40:59 +08:00
Jiang Zhuo
0a96338b13 [NFC] polish <colossalai/context/process_group_initializer/initializer_data.py> code style ()
Co-authored-by: 姜卓 <jiangzhuo@jiangzhuodeMacBook-Pro.local>
2022-04-06 11:40:59 +08:00
ziyu huang
701bad439b [NFC] polish colossalai/context/process_group_initializer/process_group_initializer.py code style ()
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
2022-04-06 11:40:59 +08:00
アマデウス
297b8baae2
[model checkpoint] add gloo groups for cpu tensor communication () 2022-04-01 10:15:52 +08:00
Liang Bowen
2c45efc398
html refactor () 2022-03-31 11:36:56 +08:00
Liang Bowen
ec5086c49c Refactored docstring to google style 2022-03-29 17:17:47 +08:00
Jiarui Fang
a445e118cf
[polish] polish singleton and global context () 2022-03-23 18:03:39 +08:00
HELSON
f24b5ed201
[MOE] remove old MoE legacy () 2022-03-22 17:37:16 +08:00
Jiarui Fang
65c0f380c2
[format] polish name format for MOE () 2022-03-21 23:19:47 +08:00
HELSON
7544347145
[MOE] add unit test for MOE experts layout, gradient handler and kernel () 2022-03-21 13:35:04 +08:00
HELSON
84fd7c1d4d
add moe context, moe utilities and refactor gradient handler () 2022-03-18 16:38:32 +08:00
Frank Lee
b72b8445c6
optimized context test time consumption () 2022-03-17 14:40:52 +08:00
Frank Lee
1e4bf85cdb fixed bug in activation checkpointing test () 2022-03-11 15:50:28 +08:00
RichardoLuo
8539898ec6 flake8 style change () 2022-03-11 15:50:28 +08:00
ziyu huang
a77d73f22b fix format parallel_context.py ()
Co-authored-by: huangziyu <202476410arsmart@gmail.com>
2022-03-11 15:50:28 +08:00
Maruyama_Aya
e83970e3dc fix format ColossalAI\colossalai\context\process_group_initializer 2022-03-11 15:50:28 +08:00
アマデウス
9ee197d0e9 moved env variables to global variables; ()
added branch context;
added vocab parallel layers;
moved split_batch from load_batch to tensor parallel embedding layers;
updated gpt model;
updated unit test cases;
fixed few collective communicator bugs
2022-02-15 11:31:13 +08:00
HELSON
0f8c7f9804
Fixed docstring in colossalai () 2022-01-21 10:44:30 +08:00
Frank Lee
e2089c5c15
adapted for sequence parallel () 2022-01-20 13:44:51 +08:00
HELSON
dceae85195
Added MoE parallel () 2022-01-07 15:08:36 +08:00
ver217
a951bc6089
update default logger () () 2022-01-04 20:03:26 +08:00
ver217
96780e6ee4
Optimize pipeline schedule ()
* add pipeline shared module wrapper and update load batch

* added model parallel process group for amp and clip grad ()

* added model parallel process group for amp and clip grad

* update amp and clip with model parallel process group

* remove pipeline_prev/next group ()

* micro batch offload

* optimize pipeline gpu memory usage

* pipeline can receive tensor shape ()

* optimize pipeline gpu memory usage

* fix grad accumulation step counter

* rename classes and functions

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2021-12-30 15:56:46 +08:00
アマデウス
01a80cd86d
Hotfix/Colossalai layers ()
* optimized 1d layer apis; reorganized nn.layer modules; fixed tests

* fixed 2.5d runtime issue

* reworked split batch, now called in trainer.schedule.load_batch

Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-29 23:32:10 +08:00
アマデウス
0fedef4f3c
Layer integration ()
* integrated parallel layers for ease of building models

* integrated 2.5d layers

* cleaned codes and unit tests

* added log metric by step hook; updated imagenet benchmark; fixed some bugs

* reworked initialization; cleaned codes

Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-27 15:04:32 +08:00
ver217
8f02a88db2
add interleaved pipeline, fix naive amp and update pipeline model initializer () 2021-12-20 23:26:19 +08:00
Frank Lee
35813ed3c4
update examples and sphinx docs for the new api () 2021-12-13 22:07:01 +08:00
Frank Lee
da01c234e1
Develop/experiments ()
* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel ()

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule ()

Co-authored-by: 1SAA <c2h214748@gmail.com>

* Split conv2d, class token, positional embedding in 2d, Fix random number in ddp
Fix convergence in cifar10, Imagenet1000

* Integrate 1d tensor parallel in Colossal-AI ()

* fixed 1D and 2D convergence ()

* optimized 2D operations

* fixed 1D ViT convergence problem

* Feature/ddp ()

* remove redundant func in setup () ()

* use env to control the language of doc () ()

* Support TP-compatible Torch AMP and Update trainer API ()

* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel ()

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule ()

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>

* add an example of ViT-B/16 and remove w_norm clipping in LAMB ()

* add explanation for ViT example () ()

* support torch ddp

* fix loss accumulation

* add log for ddp

* change seed

* modify timing hook

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>

* Feature/pipeline ()

* remove redundant func in setup () ()

* use env to control the language of doc () ()

* Support TP-compatible Torch AMP and Update trainer API ()

* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel ()

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule ()

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>

* add an example of ViT-B/16 and remove w_norm clipping in LAMB ()

* add explanation for ViT example () ()

* optimize communication of pipeline parallel

* fix grad clip for pipeline

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>

* optimized 3d layer to fix slow computation; tested imagenet performance with 3d; reworked lr_scheduler config definition; fixed launch args; fixed some printing issues; simplified apis of 3d layers ()

* Update 2.5d layer code to get a similar accuracy on imagenet-1k dataset

* update api for better usability ()

update api for better usability

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-09 15:08:29 +08:00
zbian
404ecbdcc6 Migrated project 2021-10-28 18:21:23 +02:00