Mirror of https://github.com/hpcaitech/ColossalAI.git (synced 2026-01-19 17:16:39 +00:00)
* Add gradient accumulation, fix lr scheduler
* Fix FP16 optimizer and adapt torch amp to tensor parallelism (#18)
* Fixed compatibility bugs between torch amp and tensor parallelism, plus some minor fixes
* fixed trainer
* Revert "fixed trainer"
This reverts commit 2e0b0b7699.
* Improve consistency between trainer, engine and schedule (#23)
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
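The commits above touch gradient accumulation, the FP16 optimizer, and torch amp under tensor parallelism. For reference, here is a minimal, generic PyTorch sketch of gradient accumulation combined with torch.cuda.amp mixed precision. It is not ColossalAI's engine/trainer API; all names in it (model, loader, ACCUM_STEPS) are illustrative assumptions, and a CUDA device is assumed.

    import torch

    ACCUM_STEPS = 4  # micro-batches accumulated per optimizer step (assumed value)

    model = torch.nn.Linear(32, 10).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100)
    scaler = torch.cuda.amp.GradScaler()
    loss_fn = torch.nn.CrossEntropyLoss()

    # Dummy data standing in for a real DataLoader.
    loader = [(torch.randn(8, 32).cuda(), torch.randint(0, 10, (8,)).cuda())
              for _ in range(8)]

    for step, (x, y) in enumerate(loader):
        with torch.cuda.amp.autocast():
            # Scale the loss down so accumulated gradients match one big batch.
            loss = loss_fn(model(x), y) / ACCUM_STEPS
        scaler.scale(loss).backward()  # gradients accumulate across micro-batches
        if (step + 1) % ACCUM_STEPS == 0:
            scaler.step(optimizer)     # unscales grads; skips the step on inf/nan
            scaler.update()
            optimizer.zero_grad()
            scheduler.step()           # advance the lr schedule once per real step

The key design point, and the likely interaction the commits fix, is that the lr scheduler and GradScaler must tick once per accumulated optimizer step, not once per micro-batch.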
7 lines | 67 B | Plaintext
torch>=1.8
torchvision>=0.9
numpy
tqdm
psutil
tensorboard
packaging
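For illustration only, a short sketch (not part of the repository) of checking the two version-pinned requirements above at runtime, using the packaging library, which is itself one of the listed dependencies. It assumes Python 3.8+ for importlib.metadata.

    from importlib.metadata import version
    from packaging.version import Version

    # Verify the pinned constraints from requirements.txt at runtime.
    assert Version(version("torch")) >= Version("1.8"), "torch>=1.8 required"
    assert Version(version("torchvision")) >= Version("0.9"), "torchvision>=0.9 required"

packaging.version.Version handles PEP 440 strings such as "1.13.1+cu117", so local version suffixes on CUDA builds of torch compare correctly.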