ColossalAI/applications/ColossalMoE/colossal_moe
Wenhao Chen e614aa34f3
[shardformer, pipeline] add gradient_checkpointing_ratio and heterogeneous shard policy for llama ()
* feat: add `GradientCheckpointConfig` and `PipelineGradientCheckpointConfig`
* feat: apply `GradientCheckpointConfig` to policy and llama_forward
* feat: move `distribute_layer` and `get_stage_index` to PipelineStageManager
* fix: add optional args for `distribute_layer` and `get_stage_index`
* fix: fix changed API calls
* test: update llama tests
* style: polish `GradientCheckpointConfig`
* fix: fix pipeline utils tests
2024-04-01 11:34:58 +08:00
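
The gradient_checkpointing_ratio added by this commit controls what fraction of a pipeline stage's layers are activation-checkpointed, trading recomputation for memory. Below is a minimal sketch of that idea in plain PyTorch; the class name `StageModule` and the field `num_ckpt_layers` are illustrative assumptions, not ColossalAI's actual `GradientCheckpointConfig` API.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class StageModule(nn.Module):
    """One pipeline stage that checkpoints a fraction of its layers (hypothetical sketch)."""

    def __init__(self, layers: nn.ModuleList, gradient_checkpointing_ratio: float):
        super().__init__()
        self.layers = layers
        # Convert the ratio into a whole number of checkpointed layers for this stage.
        self.num_ckpt_layers = int(len(layers) * gradient_checkpointing_ratio)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        for idx, layer in enumerate(self.layers):
            if self.training and idx < self.num_ckpt_layers:
                # Discard this layer's activations; recompute them during backward.
                hidden = checkpoint(layer, hidden, use_reentrant=False)
            else:
                hidden = layer(hidden)
        return hidden


# Checkpoint half of an 8-layer stage.
stage = StageModule(nn.ModuleList(nn.Linear(64, 64) for _ in range(8)), 0.5)
out = stage(torch.randn(2, 64, requires_grad=True))
out.sum().backward()
```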
models [shardformer, pipeline] add gradient_checkpointing_ratio and heterogeneous shard policy for llama () 2024-04-01 11:34:58 +08:00
__init__.py [moe] init mixtral impl 2024-02-07 19:21:02 +08:00
utils.py [moe] support mixtral () 2024-02-07 19:21:02 +08:00
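
The same commit moves `distribute_layer` and `get_stage_index` into PipelineStageManager, which is what enables the heterogeneous shard policy: stages may own unequal numbers of layers. The sketch below illustrates the concept under that assumption; the function bodies and the optional `layers_per_stage` argument are guesses for illustration, not the repository's implementation.

```python
from typing import List, Optional, Tuple


def distribute_layers(num_layers: int, num_stages: int,
                      layers_per_stage: Optional[List[int]] = None) -> List[int]:
    """Decide how many layers each pipeline stage owns; uneven splits allowed."""
    if layers_per_stage is not None:
        assert sum(layers_per_stage) == num_layers
        return layers_per_stage
    # Default policy: split as evenly as possible; earlier stages take the remainder.
    base, rem = divmod(num_layers, num_stages)
    return [base + (1 if stage < rem else 0) for stage in range(num_stages)]


def get_stage_index(layers_per_stage: List[int], stage: int) -> Tuple[int, int]:
    """Return the [start, end) layer indices owned by `stage`."""
    start = sum(layers_per_stage[:stage])
    return start, start + layers_per_stage[stage]


# A heterogeneous split of 32 llama layers over 4 stages.
split = distribute_layers(32, 4, layers_per_stage=[10, 9, 7, 6])
print(split)                      # [10, 9, 7, 6]
print(get_stage_index(split, 2))  # (19, 26)
```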