[legacy] clean up legacy code (#4743)

* [legacy] remove outdated codes of pipeline (#4692)

* [legacy] remove cli of benchmark and update optim (#4690)

* [legacy] remove cli of benchmark and update optim

* [doc] fix cli doc test

* [legacy] fix engine clip grad norm

* [legacy] remove outdated colo tensor (#4694)

* [legacy] remove outdated colo tensor

* [test] fix test import

* [legacy] move outdated zero to legacy (#4696)

* [legacy] clean up utils (#4700)

* [legacy] clean up utils

* [example] update examples

* [legacy] clean up amp

* [legacy] fix amp module

* [legacy] clean up gpc (#4742)

* [legacy] clean up context

* [legacy] clean core, constants and global vars

* [legacy] refactor initialize

* [example] fix examples ci

* [example] fix examples ci

* [legacy] fix tests

* [example] fix gpt example

* [example] fix examples ci

* [devops] fix ci installation

* [example] fix examples ci
Hongxin Liu
2023-09-18 16:31:06 +08:00
committed by GitHub
parent 32e7f99416
commit b5f9e37c70
342 changed files with 2919 additions and 4182 deletions


@@ -31,7 +31,7 @@ global context for users to easily manage their process groups. If you wish to a
define a new class and set it in your configuration file. To define your own way of creating process groups, you can
follow the steps below to create a new distributed initialization.
-1. Add your parallel mode in `colossalai.context.parallel_mode.ParallelMode`.
+1. Add your parallel mode in `colossalai.legacy.context.parallel_mode.ParallelMode`.
```python
class ParallelMode(Enum):
    GLOBAL = 'global'
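
The step quoted in this hunk tells users to extend the `ParallelMode` enum, whose documented path moves under the `colossalai.legacy.context` namespace in this commit. Below is a minimal sketch of what that step could look like; the `DATA` and `NEW_MODE` members are illustrative assumptions and are not part of the diff.

```python
from enum import Enum

# Sketch of step 1 from the quoted guide: add a custom parallel mode to the enum
# that now lives at colossalai.legacy.context.parallel_mode.ParallelMode.
class ParallelMode(Enum):
    GLOBAL = 'global'      # shown in the snippet above
    DATA = 'data'          # assumed existing mode, for illustration only
    NEW_MODE = 'new_mode'  # hypothetical custom mode a user would add

# The new member can then be referenced when defining a custom process group.
print(ParallelMode.NEW_MODE.value)  # 'new_mode'
```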


@@ -37,7 +37,7 @@ import torch.nn as nn
from colossalai import nn as col_nn
from colossalai.amp import AMP_TYPE
from colossalai.legacy.builder.pipeline import partition_uniform
-from colossalai.context.parallel_mode import ParallelMode
+from colossalai.legacy.context.parallel_mode import ParallelMode
from colossalai.core import global_context as gpc
from colossalai.legacy.engine.schedule import (InterleavedPipelineSchedule,
                                               PipelineSchedule)
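
Scripts that imported `ParallelMode` from the old location need the one-line update shown in this hunk. A minimal before/after sketch follows; the usage line is an assumption added for illustration, not taken from the diff.

```python
# Old import path, removed by this commit:
#   from colossalai.context.parallel_mode import ParallelMode
# New import path under the legacy namespace:
from colossalai.legacy.context.parallel_mode import ParallelMode

# Illustrative usage only: the enum member is typically passed to
# process-group lookups in training scripts.
print(ParallelMode.GLOBAL)
```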


@@ -30,24 +30,4 @@ This command will inform you information regarding the version compatibility and
To launch distributed jobs on single or multiple nodes, the command `colossalai run` can be used for process launching.
You may refer to [Launch Colossal-AI](./launch_colossalai.md) for more details.
-## Tensor Parallel Micro-Benchmarking
-As Colossal-AI provides an array of tensor parallelism methods, it is not intuitive to choose one for your hardware and
-model. Therefore, we provide a simple benchmarking to evaluate the performance of various tensor parallelisms on your system.
-This benchmarking is run on a simple MLP model where the input data is of the shape `(batch_size, seq_length, hidden_size)`.
-Based on the number of GPUs, the CLI will look for all possible tensor parallel configurations and display the benchmarking results.
-You can customize the benchmarking configurations by checking out `colossalai benchmark --help`.
-```shell
-# run on 4 GPUs
-colossalai benchmark --gpus 4
-# run on 8 GPUs
-colossalai benchmark --gpus 8
-```
-:::caution
-Only single-node benchmarking is supported currently.
-:::
<!-- doc-test-command: echo -->