[tutorial] added synthetic data for sequence parallel (#1927)

* [tutorial] added synthetic data for sequence parallel

* polish code
Frank Lee
2022-11-13 03:24:02 +08:00
committed by GitHub
parent abf4c27f6a
commit 807cbdb87d
4 changed files with 74 additions and 47 deletions
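The diff shown below only covers the launch-command change in the tutorial; the synthetic data referred to in the commit title is not visible in this excerpt. As a rough, illustrative sketch of what a synthetic dataset for the sequence parallel BERT tutorial might look like (the class and parameter names `SyntheticBertDataset`, `seq_length`, and `vocab_size` are assumptions for illustration, not taken from this commit):

```python
# Hypothetical sketch only: generates random token ids so the tutorial can be
# smoke-tested without downloading a real corpus. Names are illustrative and
# do not come from the actual commit.
import torch
from torch.utils.data import Dataset, DataLoader


class SyntheticBertDataset(Dataset):
    def __init__(self, num_samples: int = 1024, seq_length: int = 512, vocab_size: int = 30522):
        self.num_samples = num_samples
        self.seq_length = seq_length
        self.vocab_size = vocab_size

    def __len__(self) -> int:
        return self.num_samples

    def __getitem__(self, idx: int):
        # Random token ids with a full attention mask and trivial labels,
        # sufficient for measuring throughput rather than model quality.
        input_ids = torch.randint(0, self.vocab_size, (self.seq_length,), dtype=torch.long)
        return {
            "input_ids": input_ids,
            "attention_mask": torch.ones(self.seq_length, dtype=torch.long),
            "token_type_ids": torch.zeros(self.seq_length, dtype=torch.long),
            "labels": input_ids.clone(),
        }


if __name__ == "__main__":
    loader = DataLoader(SyntheticBertDataset(), batch_size=8, shuffle=True)
    batch = next(iter(loader))
    print({k: v.shape for k, v in batch.items()})
```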


@@ -133,7 +133,7 @@ machine setting.
 start your script. A sample command is like below:
 ```bash
-python -m torch.distributed.launch --nproc_per_node <num_gpus_on_this_machine> --master_addr localhost --master_port 29500 train.py
+colossalai run --nproc_per_node <num_gpus_on_this_machine> --master_addr localhost --master_port 29500 train.py
 ```
 - If you are using multiple machines with multiple GPUs, we suggest that you refer to `colossalai