ColossalAI/colossalai/pipeline (mirror of https://github.com/hpcaitech/ColossalAI.git, synced 2025-05-06 07:28:12 +00:00)
Wenhao Chen e614aa34f3
[shardformer, pipeline] add gradient_checkpointing_ratio and heterogeneous shard policy for llama ()
* feat: add `GradientCheckpointConfig` and `PipelineGradientCheckpointConfig` (see the sketch after this list)

* feat: apply `GradientCheckpointConfig` to policy and llama_forward

* feat: move `distribute_layer` and `get_stage_index` to PipelineStageManager

* fix: add optional args for `distribute_layer` and `get_stage_index`

* fix: fix changed API calls

* test: update llama tests

* style: polish `GradientCheckpointConfig`

* fix: fix pipeline utils tests
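
The bullets above name the new API surface: a `PipelineGradientCheckpointConfig` carrying a `gradient_checkpointing_ratio`, plus `distribute_layer` and `get_stage_index` helpers relocated onto `PipelineStageManager`. As a rough illustration of how these pieces fit together, here is a minimal, self-contained sketch; the names mirror the commit messages, but every signature and body below is an assumption made for illustration, not ColossalAI's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class PipelineGradientCheckpointConfig:
    # Hypothetical stand-in for the config added here: a single ratio that
    # controls what fraction of a stage's layers use activation checkpointing.
    gradient_checkpointing_ratio: float = 1.0

    def num_ckpt_layers(self, num_layers_in_stage: int) -> int:
        # Map the ratio to a whole number of checkpointed layers for one stage.
        return int(round(self.gradient_checkpointing_ratio * num_layers_in_stage))


def distribute_layer(num_layers: int, num_stages: int) -> List[int]:
    # Evenly split model layers across pipeline stages; earlier stages absorb
    # the remainder, taking one extra layer each.
    quotient, remainder = divmod(num_layers, num_stages)
    return [quotient + 1 if stage < remainder else quotient for stage in range(num_stages)]


def get_stage_index(layers_per_stage: List[int], stage: int) -> Tuple[int, int]:
    # Half-open [start, end) range of global layer indices owned by `stage`.
    start = sum(layers_per_stage[:stage])
    return start, start + layers_per_stage[stage]


if __name__ == "__main__":
    cfg = PipelineGradientCheckpointConfig(gradient_checkpointing_ratio=0.5)
    layers_per_stage = distribute_layer(num_layers=30, num_stages=4)  # [8, 8, 7, 7]
    for stage in range(4):
        start, end = get_stage_index(layers_per_stage, stage)
        n_ckpt = cfg.num_ckpt_layers(layers_per_stage[stage])
        print(f"stage {stage}: layers [{start}, {end}), checkpointed: {n_ckpt}")
```

Splitting the responsibilities this way lets a heterogeneous shard policy ask the stage manager which layers each stage owns and then checkpoint only a fraction of them, trading recompute time for activation memory on a per-stage basis.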
2024-04-01 11:34:58 +08:00
schedule [hotfix] set return_outputs=False in examples and polish code () 2024-03-25 12:31:09 +08:00
__init__.py [misc] update pre-commit and run all files () 2023-09-19 14:20:26 +08:00
p2p.py [pipeline] A more general _communicate in p2p () 2024-01-08 15:37:27 +08:00
stage_manager.py [shardformer, pipeline] add gradient_checkpointing_ratio and heterogeneous shard policy for llama () 2024-04-01 11:34:58 +08:00