[doc]update moe chinese document. (#3890)
* [doc]update-moe
* [doc]update-moe
* [doc]update-moe
* [doc]update-moe
* [doc]update-moe
@@ -137,3 +137,4 @@ criterion = MoeLoss(
Finally, just use the trainer or engine in `colossalai` to do your training.
Otherwise, you should take care of the gradients by yourself.
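As a rough sketch of what such an engine-based training loop could look like (the toy model, optimizer and data below are illustrative stand-ins rather than part of the tutorial, and the legacy `colossalai.initialize`/engine API is assumed):

```python
# Minimal sketch of an engine-based training loop; model, optimizer, criterion
# and dataloader are toy stand-ins for the MoE components defined earlier.
import colossalai
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

colossalai.launch_from_torch(config={})  # assumes the script is started with torchrun

model = nn.Linear(32, 10)                  # stand-in for your MoE model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()          # stand-in for the MoeLoss criterion above
dataloader = DataLoader(
    TensorDataset(torch.randn(64, 32), torch.randint(0, 10, (64,))), batch_size=8
)

# colossalai.initialize wraps everything into an engine that manages the gradients
engine, dataloader, _, _ = colossalai.initialize(model, optimizer, criterion, dataloader)

engine.train()
for img, label in dataloader:
    img, label = img.cuda(), label.cuda()
    engine.zero_grad()
    output = engine(img)
    loss = engine.criterion(output, label)
    engine.backward(loss)
    engine.step()
```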
<!-- doc-test-command: torchrun --standalone --nproc_per_node=1 integrate_mixture_of_experts_into_your_model.py -->
@@ -19,7 +19,7 @@ We aim to make Colossal-AI easy to use and non-intrusive to user code. There is
1. Prepare a configuration file that specifies the features you want to use and your parameters.
2. Initialize the distributed backend with `colossalai.launch`.
-3. Inject the training features into your training components (e.g. model, optimizer) with `colossalai.initialize`.
+3. Inject the training features into your training components (e.g. model, optimizer) with `colossalai.booster`.
4. Run training and testing
We will cover the whole workflow in the `basic tutorials` section.
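As a rough illustration of steps 2–4 above with the newer booster API, a sketch like the following could be used; the plugin choice, the toy model and the data are assumptions for illustration, not part of this overview:

```python
# Sketch of the launch -> boost -> train workflow (illustrative components only).
import colossalai
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from colossalai.booster import Booster
from colossalai.booster.plugin import TorchDDPPlugin

# step 2: initialize the distributed backend (assumes the script is started with torchrun)
colossalai.launch_from_torch(config={})

# toy stand-ins for your real model, optimizer, loss and data
model = nn.Linear(32, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
dataloader = DataLoader(
    TensorDataset(torch.randn(64, 32), torch.randint(0, 10, (64,))), batch_size=8
)

# step 3: inject the training features with the booster
booster = Booster(plugin=TorchDDPPlugin())
model, optimizer, criterion, dataloader, _ = booster.boost(model, optimizer, criterion, dataloader)

# step 4: run training
model.train()
for img, label in dataloader:
    img, label = img.cuda(), label.cuda()
    optimizer.zero_grad()
    loss = criterion(model(img), label)
    booster.backward(loss, optimizer)
    optimizer.step()
```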
@@ -34,3 +34,5 @@ The Colossal-AI system will be expanded to include more training skills, these n
4. expansion of existing parallelism methods
We welcome ideas and contributions from the community, and you can post your ideas for future development in our forum.
<!-- doc-test-command: echo "colossalai_overview.md does not need test" -->