# Colossal-AI
- 14x larger batch size, and 5x faster training with Tensor Parallelism = 64
### GPT-3
- Save 50% of GPU resources, or 10.7% acceleration
### GPT-2
- 11x lower GPU memory consumption, or superlinear scaling
### BERT
- 2x faster training, or 50% longer sequence length
Please visit our [documentation and tutorials](https://www.colossalai.org/) for more details.
## Installation
### PyPI
```bash
pip install colossalai
```
This command will install the CUDA extensions if you have CUDA, NVCC, and PyTorch installed.
If you don't want to install the CUDA extensions, add `--global-option="--no_cuda_ext"`:
```bash
pip install colossalai --global-option="--no_cuda_ext"
```
If you want to use `ZeRO`, you can run:
```bash
pip install colossalai[zero]
```
### Install From Source
> The version of Colossal-AI will be in line with the main branch of the repository. Feel free to raise an issue if you encounter any problem. :)
```shell
git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI
# install dependency
pip install -r requirements/requirements.txt
# install colossalai
pip install .
```
If you don't want to install and enable CUDA kernel fusion (note that it is required when using a fused optimizer):
```shell
pip install --global-option="--no_cuda_ext" .
```
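After installing from either PyPI or source, you can verify that the package is importable. This is a generic Python check rather than a Colossal-AI API, and the `is_installed` helper below is a hypothetical name introduced for illustration:

```python
import importlib.util


def is_installed(package: str) -> bool:
    """Return True if `package` can be found by the current interpreter."""
    return importlib.util.find_spec(package) is not None


if is_installed("colossalai"):
    print("Colossal-AI is installed")
else:
    print("Colossal-AI is NOT installed")
```

If the check fails inside a virtual environment, make sure you activated the same environment you installed into.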
## Use Docker
Run the following command to build a Docker image from the provided Dockerfile.
```bash
cd ColossalAI
docker build -t colossalai ./docker
```
Run the following command to start the docker container in interactive mode.
```bash
docker run -ti --gpus all --rm --ipc=host colossalai bash
```
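If you also want your local files available inside the container, you can mount the current directory with Docker's standard `-v` and `-w` flags; the `/workspace` mount point below is an arbitrary choice, not something the image requires:

```shell
# Mount the current directory at /workspace and start there;
# everything else matches the interactive command above.
docker run -ti --gpus all --rm --ipc=host \
    -v "$PWD":/workspace -w /workspace \
    colossalai bash
```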
## Community
Join the Colossal-AI community on [Forum](https://github.com/hpcaitech/ColossalAI/discussions),
[Slack](https://join.slack.com/t/colossalaiworkspace/shared_invite/zt-z7b26eeb-CBp7jouvu~r0~lcFzX832w),
and [WeChat](https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/WeChat.png "qrcode") to share your suggestions, advice, and questions with our engineering team.
## Contributing
If you wish to contribute to this project, please follow the guidelines in [Contributing](./CONTRIBUTING.md).
Thanks so much to all of our amazing contributors!