[doc]update moe chinese document. (#3890)

* [doc]update-moe

jiangmingyan
2023-06-05 15:57:54 +08:00
committed by GitHub
parent 8065cc5fba
commit 07cb21142f
4 changed files with 33 additions and 64 deletions

integrate_mixture_of_experts_into_your_model.md

@@ -137,3 +137,4 @@ criterion = MoeLoss(
Finally, just use the trainer or engine in `colossalai` to run your training.
Otherwise, you should handle the gradients yourself.
+<!-- doc-test-command: torchrun --standalone --nproc_per_node=1 integrate_mixture_of_experts_into_your_model.py -->
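
For context, the hunk above closes a snippet that builds the criterion with `MoeLoss(`. As an illustration only, here is a minimal hand-rolled sketch in plain PyTorch of a criterion that folds the MoE routers' auxiliary load-balancing loss into the task loss; `SimpleMoeLoss`, its `aux_weight` argument, and the scalar `aux` stand-in are hypothetical and do not reflect Colossal-AI's actual `MoeLoss` signature.

```python
import torch
import torch.nn as nn


class SimpleMoeLoss(nn.Module):
    """Hand-rolled sketch of an MoE-aware criterion; NOT Colossal-AI's MoeLoss.

    Adds a weighted auxiliary (load-balancing) term, assumed to be collected
    from the gating networks during the forward pass, onto the task loss.
    """

    def __init__(self, base_criterion: nn.Module, aux_weight: float = 0.01):
        super().__init__()
        self.base_criterion = base_criterion
        self.aux_weight = aux_weight

    def forward(self, logits, targets, aux_loss):
        return self.base_criterion(logits, targets) + self.aux_weight * aux_loss


# Manual path, i.e. "handle the gradients yourself":
criterion = SimpleMoeLoss(nn.CrossEntropyLoss(), aux_weight=0.01)
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
aux = torch.tensor(0.1)  # stand-in for the routers' balancing loss
loss = criterion(logits, targets, aux)
loss.backward()  # without the trainer/engine, you call this yourself
```

The explicit `loss.backward()` at the end is exactly the bookkeeping the text above refers to when you bypass the trainer/engine.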

colossalai_overview.md

@@ -19,7 +19,7 @@ We aim to make Colossal-AI easy to use and non-intrusive to user code. There is
1. Prepare a configuration file that specifies the features you want to use and your parameters.
2. Initialize the distributed backend with `colossalai.launch`
-3. Inject the training features into your training components (e.g. model, optimizer) with `colossalai.initialize`.
+3. Inject the training features into your training components (e.g. model, optimizer) with `colossalai.booster`.
4. Run training and testing
We will cover the whole workflow in the `basic tutorials` section.
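
As a concrete illustration of these four steps using the post-change `colossalai.booster` path, here is a minimal sketch, assuming a single-process launch via `torchrun --standalone --nproc_per_node=1 train.py`; the `TorchDDPPlugin`, toy model, optimizer, and data below are placeholder assumptions, not anything this commit prescribes.

```python
# Sketch of the four-step workflow with the booster-based API.
# Assumes a torchrun launch; plugin, model, and hyperparameters are placeholders.
import colossalai
import torch
import torch.nn as nn
from colossalai.booster import Booster
from colossalai.booster.plugin import TorchDDPPlugin

# Steps 1-2: pass the (here empty) configuration and initialize the
# distributed backend from torchrun's environment variables.
colossalai.launch_from_torch(config={})

model = nn.Linear(32, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Step 3: inject the training features with the booster.
booster = Booster(plugin=TorchDDPPlugin())
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)

# Step 4: an ordinary training step, using booster-aware backward.
inputs = torch.randn(4, 32).cuda()
labels = torch.randint(0, 2, (4,)).cuda()
loss = criterion(model(inputs), labels)
booster.backward(loss, optimizer)
optimizer.step()
optimizer.zero_grad()
```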
@@ -34,3 +34,5 @@ The Colossal-AI system will be expanded to include more training skills, these n
4. expansion of existing parallelism methods
We welcome ideas and contributions from the community, and you can post your ideas for future development in our forum.
+<!-- doc-test-command: echo "colossalai_overview.md does not need test" -->