fixed some typos in the documents, added blog link and paper author information in README

binmakeswell
2021-11-03 16:07:28 +08:00
committed by Fan Cui
parent ccb44882e1
commit 05e7069a5b
21 changed files with 86 additions and 119 deletions

@@ -1,8 +1,10 @@
-# ColossalAI
+# Colossal-AI
 An integrated large-scale model training system with efficient parallelization techniques.
-arXiv: [Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training](https://arxiv.org/abs/2110.14883)
+Paper: [Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training](https://arxiv.org/abs/2110.14883)
+Blog: [Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training](https://www.hpcaitech.com/blog)
 ## Installation
@@ -91,16 +93,25 @@ class MLP_2D(nn.Module):
 ## Features
-ColossalAI provides a collection of parallel training components for you. We aim to support you to write your
+Colossal-AI provides a collection of parallel training components for you. We aim to support you to write your
 distributed deep learning models just like how you write your single-GPU model. We provide friendly tools to kickstart
 distributed training in a few lines.
 - [Data Parallelism](./docs/parallelization.md)
 - [Pipeline Parallelism](./docs/parallelization.md)
 - [1D, 2D, 2.5D, 3D and sequence parallelism](./docs/parallelization.md)
-- [friendly trainer and engine](./docs/trainer_engine.md)
+- [Friendly trainer and engine](./docs/trainer_engine.md)
 - [Extensible for new parallelism](./docs/add_your_parallel.md)
 - [Mixed Precision Training](./docs/amp.md)
 - [Zero Redundancy Optimizer (ZeRO)](./docs/zero.md)
+## Cite Us
+```
+@article{bian2021colossal,
+    title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
+    author={Bian, Zhengda and Liu, Hongxin and Wang, Boxiang and Huang, Haichen and Li, Yongbin and Wang, Chuanrui and Cui, Fan and You, Yang},
+    journal={arXiv preprint arXiv:2110.14883},
+    year={2021}
+}
+```
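
The README text in this diff promises distributed training kick-started "in a few lines." For context, here is a minimal sketch of that workflow built around Colossal-AI's engine API; `launch_from_torch` and `initialize` are taken from the project's public documentation, and their exact signatures may differ in the version this commit targets:

```python
# Hypothetical minimal Colossal-AI training loop (a sketch, not the
# exact API at this commit; entry points follow the public docs).
import torch
import torch.nn as nn

import colossalai


def main():
    # Pick up rank/world size from the environment variables that the
    # distributed launcher sets; an empty config uses plain defaults.
    colossalai.launch_from_torch(config={})

    model = nn.Linear(1024, 1024)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.MSELoss()

    # Wrap the plain PyTorch objects; the returned engine applies the
    # parallelism / AMP / ZeRO settings declared in the config.
    engine, *_ = colossalai.initialize(model=model,
                                       optimizer=optimizer,
                                       criterion=criterion)

    engine.train()
    for _ in range(10):
        # Synthetic batch, just to exercise the loop.
        data = torch.randn(8, 1024, device=torch.cuda.current_device())
        target = torch.randn(8, 1024, device=torch.cuda.current_device())

        engine.zero_grad()
        output = engine(data)
        loss = engine.criterion(output, target)
        engine.backward(loss)
        engine.step()


if __name__ == "__main__":
    main()
```

The script would be run under a distributed launcher, e.g. `torchrun --nproc_per_node=2 train.py`, so that the rank and world size `launch_from_torch` expects are present in the environment.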