Mirror of https://github.com/hpcaitech/ColossalAI.git (synced 2025-05-30 19:05:26 +00:00)
* Add gradient accumulation, fix lr scheduler
* Fix FP16 optimizer and adapt torch amp to work with tensor parallel (#18)
* Fix compatibility bugs between torch amp and tensor parallel, plus minor fixes
* Fix trainer
* Revert "Fix trainer"
  This reverts commit 2e0b0b7699.
* Improve consistency between trainer, engine and schedule (#23)
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
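
The commits above reference gradient accumulation, torch amp, and an lr scheduler together. Below is a minimal sketch of how those pieces typically combine in plain PyTorch; every name in it (model, loader, accumulation_steps) is illustrative and it is not ColossalAI's API. It assumes a CUDA device is available.

    import torch

    device = torch.device('cuda')
    model = torch.nn.Linear(32, 32).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
    scaler = torch.cuda.amp.GradScaler()
    accumulation_steps = 4  # micro-batches accumulated per optimizer step

    # Dummy data standing in for a real dataloader.
    loader = [(torch.randn(8, 32, device=device),
               torch.randn(8, 32, device=device)) for _ in range(16)]

    for step, (inputs, targets) in enumerate(loader):
        with torch.cuda.amp.autocast():
            loss = torch.nn.functional.mse_loss(model(inputs), targets)
        # Divide so the accumulated gradient averages over the micro-batches.
        scaler.scale(loss / accumulation_steps).backward()
        if (step + 1) % accumulation_steps == 0:
            scaler.step(optimizer)  # unscales grads; skips the step on inf/nan
            scaler.update()
            optimizer.zero_grad()
            scheduler.step()        # one scheduler tick per effective batch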
6 lines · 123 B · ReStructuredText
colossalai.utils.checkpointing
==============================

.. automodule:: colossalai.utils.checkpointing
   :members:
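
A hedged usage sketch for the module documented above. The helpers ``save_checkpoint`` and ``load_checkpoint`` and their signatures are assumptions based on ColossalAI versions of this era, not confirmed by the stub above; check the generated API reference for the exact interface.

.. code-block:: python

    import torch
    from colossalai.utils.checkpointing import save_checkpoint, load_checkpoint

    model = torch.nn.Linear(16, 16)
    optimizer = torch.optim.Adam(model.parameters())

    # Persist model/optimizer state tagged with an epoch number
    # (the path and keyword names here are assumptions).
    save_checkpoint('ckpt_epoch_1.pt', epoch=1, model=model,
                    optimizer=optimizer)

    # Restore the saved state into freshly built objects later.
    load_checkpoint('ckpt_epoch_1.pt', model=model, optimizer=optimizer)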