[doc] migrate the markdown files (#2652)
docs/source/en/get_started/reading_roadmap.md (new file, +19 lines)

@@ -0,0 +1,19 @@

# Reading Roadmap

Colossal-AI provides a collection of parallel training components for you. We aim to make developing distributed deep learning models as easy as writing single-GPU models, and Colossal-AI offers easy-to-use APIs to help you kickstart your training process. To better understand how Colossal-AI works, we recommend reading this documentation in the following order.

- If you are not familiar with distributed systems or have never used Colossal-AI, you should first jump into the `Concepts` section to get a sense of what we are trying to achieve. This section also provides some background knowledge on distributed training.
- Next, you can follow the `basics` tutorials. This section covers the details of how to use Colossal-AI.
- Afterwards, you can try out the features provided by Colossal-AI by reading the `features` section. We provide a codebase for each tutorial, and these tutorials cover the basic usage of Colossal-AI for simple features such as data parallelism and mixed-precision training (see the sketch after this list).
- Lastly, if you wish to apply more complicated techniques, such as running hybrid parallelism on GPT-3, the `advanced tutorials` section is the place to go!
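
The snippet below is a minimal sketch of what that basic usage looks like, assuming the classic `colossalai.launch_from_torch` and `colossalai.initialize` entry points; the config path and the toy model are illustrative only, and the `basics` tutorials document the exact API.

```python
import colossalai
import torch
import torch.nn as nn

# Launch the distributed environment from the env vars set by torchrun.
# `./config.py` is an illustrative path; enabling PyTorch AMP inside it
# would look like:
#   from colossalai.amp import AMP_TYPE
#   fp16 = dict(mode=AMP_TYPE.TORCH)
colossalai.launch_from_torch(config='./config.py')

# A toy model, optimizer, and loss used purely for illustration.
model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# colossalai.initialize wraps the model and optimizer according to the
# config, e.g. data parallelism across ranks and mixed precision when
# `fp16` is set.
engine, *_ = colossalai.initialize(model, optimizer, criterion)
```

Launched with a command such as `torchrun --nproc_per_node 2 train.py` (the script name is hypothetical), each process becomes one rank of the data-parallel group.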

**We always welcome suggestions and discussions from the community, and we would be more than happy to help if you encounter any issues. You can raise an [issue](https://github.com/hpcaitech/ColossalAI/issues) or start a discussion topic in the [forum](https://github.com/hpcaitech/ColossalAI/discussions).**