Mirror of https://github.com/hpcaitech/ColossalAI.git
[doc] moved doc test command to bottom (#3075)
@@ -1,4 +1,3 @@
-<!-- doc-test-command: torchrun --standalone --nproc_per_node=1 nvme_offload.py -->
 # NVMe offload
 
 Author: Hongxin Liu
@@ -259,3 +258,6 @@ NVME offload saves about 294 MB memory. Note that enabling `pin_memory` of Gemin
 {{ autodoc:colossalai.nn.optimizer.HybridAdam }}
 
 {{ autodoc:colossalai.nn.optimizer.CPUAdam }}
+
+
+<!-- doc-test-command: torchrun --standalone --nproc_per_node=1 nvme_offload.py -->
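As context for the hunks above: the NVMe offload page they edit documents `colossalai.nn.optimizer.HybridAdam` and `CPUAdam`, whose optimizer states can be spilled to NVMe storage. Below is a minimal sketch of how that is typically enabled, assuming the `nvme_offload_fraction` / `nvme_offload_dir` keyword arguments covered by those autodoc entries; it is a hedged illustration, not code taken from this commit.

```python
# Hedged sketch (not from the diff above): enabling NVMe offload of optimizer
# states via HybridAdam, assuming the nvme_offload_fraction / nvme_offload_dir
# arguments described by the autodoc entries on the NVMe offload page.
import torch
from colossalai.nn.optimizer import HybridAdam

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = HybridAdam(
    model.parameters(),
    lr=1e-3,
    nvme_offload_fraction=1.0,     # assumed arg: fraction of optimizer states kept on NVMe
    nvme_offload_dir="./offload",  # assumed arg: directory on an NVMe-backed filesystem
)

loss = model(torch.randn(4, 1024, device="cuda")).sum()
loss.backward()
optimizer.step()  # optimizer states are read from / written back to NVMe as needed
```

The `doc-test-command` comment that this commit moves to the bottom of the page is what runs the page's snippets, via `torchrun --standalone --nproc_per_node=1 nvme_offload.py`.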
@@ -1,12 +1,10 @@
-<!-- doc-test-command: echo "installation.md does not need test" -->
-
 # Setup
 
 Requirements:
 - PyTorch >= 1.11 (PyTorch 2.x in progress)
 - Python >= 3.7
 - CUDA >= 11.0
 
 
 If you encounter any problem about installation, you may want to raise an [issue](https://github.com/hpcaitech/ColossalAI/issues/new/choose) in this repository.
 
@@ -47,3 +45,6 @@ If you don't want to install and enable CUDA kernel fusion (compulsory installat
 ```shell
 CUDA_EXT=1 pip install .
 ```
+
+
+<!-- doc-test-command: echo "installation.md does not need test" -->
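The installation page touched by the second diff lists PyTorch >= 1.11, Python >= 3.7, and CUDA >= 11.0 as prerequisites for the `CUDA_EXT=1 pip install .` build step. A hypothetical pre-flight check along those lines is shown below; it is illustrative only and not part of the repository.

```python
# Hypothetical pre-flight check for the requirements listed on the installation
# page (Python >= 3.7, PyTorch >= 1.11, CUDA >= 11.0); not part of ColossalAI.
import sys

import torch


def version_tuple(version: str) -> tuple:
    """Parse the leading 'major.minor' of a version string such as '1.13.1+cu117'."""
    return tuple(int(part) for part in version.split("+")[0].split(".")[:2])


assert sys.version_info >= (3, 7), "Python >= 3.7 is required"
assert version_tuple(torch.__version__) >= (1, 11), "PyTorch >= 1.11 is required"
assert torch.version.cuda and version_tuple(torch.version.cuda) >= (11, 0), \
    "A CUDA 11.0+ build of PyTorch is needed to compile kernels with CUDA_EXT=1"

print("Environment looks compatible with `CUDA_EXT=1 pip install .`")
```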