mirror of
https://github.com/hpcaitech/ColossalAI.git
synced 2025-09-01 17:17:05 +00:00
[example] make gpt example directory more clear (#2353)
@@ -57,12 +57,6 @@ The `train_gpt_demo.py` provides three distributed plans, you can choose the pla
- Pytorch ZeRO
### Pipeline Parallel
```bash
bash run_pp.sh
```
## Performance
Testbed: a cluster of 8x A100 (80 GB) GPUs and one AMD EPYC 7543 32-core processor (512 GB RAM); GPUs are connected via PCIe.
@@ -119,3 +113,9 @@ Touch the bar of model scale and batch size.
| model | #GPU | policy | TP | batch per DP | Tflops |
| ---------- | --------- |--------- |--------- |--------- |--------- |
| gpt2_20b | 8 | cpu | 2 | 8 | 46.895 |
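The Tflops figure in the table can be sanity-checked with the rough throughput estimate commonly used for GPT-style training: about 8 FLOPs per parameter per token (2 for the forward pass, 4 for the backward pass, and roughly 2 more for recomputation when activation checkpointing is enabled). This is a minimal sketch under that assumption; the function name and the step time used in the example are illustrative, not taken from the example scripts.

```python
# Rough per-GPU TFLOPS estimate for GPT-style training.
# Assumes ~8 FLOPs per parameter per token (forward + backward +
# recomputation with activation checkpointing). Illustrative only.

def estimate_tflops(num_params: float, global_batch: int, seq_len: int,
                    step_time_s: float, num_gpus: int) -> float:
    flops_per_step = 8 * num_params * global_batch * seq_len
    return flops_per_step / step_time_s / num_gpus / 1e12

# e.g. a 20B-parameter model, global batch 64 (8 per DP rank x 8 ranks),
# sequence length 1024, on 8 GPUs, with a hypothetical 30 s step time:
print(round(estimate_tflops(20e9, 64, 1024, 30.0, 8), 3))  # -> 43.691
```

Plugging in the measured step time for your own run gives a number directly comparable to the Tflops column above.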
### Experimental Features
#### [Pipeline Parallel](./experiments/pipeline_parallel/)
#### [Auto Parallel](./experiments/auto_parallel_with_gpt/)