[tutorial] added synthetic data for sequence parallel (#1927)
* [tutorial] added synthetic data for sequence parallel
* polish code
@@ -133,7 +133,7 @@ machine setting.
 start your script. A sample command is like below:

 ```bash
-python -m torch.distributed.launch --nproc_per_node <num_gpus_on_this_machine> --master_addr localhost --master_port 29500 train.py
+colossalai run --nproc_per_node <num_gpus_on_this_machine> --master_addr localhost --master_port 29500 train.py
 ```

 - If you are using multiple machines with multiple GPUs, we suggest that you refer to `colossalai
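For context, the new `colossalai run` launcher also covers the multi-machine case referenced at the end of the hunk. Below is a minimal multi-node sketch; the `--hostfile` flag, the hostfile contents, and the GPU counts are illustrative assumptions and are not part of this commit:

```bash
# hostfile: one reachable hostname per line (contents assumed for illustration)
#   node01
#   node02

# Launch train.py across 2 machines with 8 GPUs each.
# Flag names follow the colossalai CLI documentation; verify against your installed version.
colossalai run --nproc_per_node 8 \
    --hostfile ./hostfile \
    --master_addr node01 \
    --master_port 29500 \
    train.py
```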