[doc]fix
@@ -128,7 +128,7 @@ for idx, (img, label) in enumerate(train_dataloader):
### Step 6. Invoke Training Scripts
To verify gradient accumulation, we can simply check how the parameter values change. When gradient accumulation is enabled, the parameters are only updated in the last step. You can run the script using this command:
```shell
-colossalai run --nproc_per_node 1 train.py --config config.py
+colossalai run --nproc_per_node 1 train.py
```
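
For intuition, here is a minimal plain-PyTorch sketch of the behaviour this check relies on (hypothetical standalone code, not the repository's `train.py`): gradients are accumulated over several micro-batches and `optimizer.step()` is only called on the last one, so the parameters stay unchanged until then.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: accumulate gradients over ACCUM_STEPS micro-batches
# and verify that the weights only change when optimizer.step() finally runs.
ACCUM_STEPS = 4

model = nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# A few random micro-batches standing in for a real dataloader.
batches = [(torch.randn(4, 8), torch.randint(0, 2, (4,))) for _ in range(ACCUM_STEPS)]

weight_before = model.weight.detach().clone()
for idx, (img, label) in enumerate(batches):
    loss = criterion(model(img), label) / ACCUM_STEPS  # scale loss so accumulated grads average out
    loss.backward()                                     # gradients keep accumulating in .grad
    if (idx + 1) % ACCUM_STEPS == 0:                    # update only on the last micro-batch
        optimizer.step()
        optimizer.zero_grad()
    changed = not torch.equal(weight_before, model.weight.detach())
    print(f"micro-step {idx}: weight changed = {changed}")
# Expected: weight changed = False for the first three micro-steps, True on the last.
```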
You will see output similar to the text below. This shows that the gradient is indeed accumulated, as the parameter is not updated until the last step.
@@ -136,7 +136,7 @@ for idx, (img, label) in enumerate(train_dataloader):
You can run the script using this command:
```shell
-colossalai run --nproc_per_node 1 train.py --config config/config.py
+colossalai run --nproc_per_node 1 train.py
```
<!-- doc-test-command: torchrun --standalone --nproc_per_node=1 gradient_clipping_with_booster.py -->