# Benchmarks

## Benchmark OPT with LoRA on dummy prompt data

We provide various OPT models (the string in parentheses is the corresponding model name used in this script; see the configuration sketch after the list):

- OPT-125M (125m)
- OPT-350M (350m)
- OPT-700M (700m)
- OPT-1.3B (1.3b)
- OPT-2.7B (2.7b)
- OPT-3.5B (3.5b)
- OPT-5.5B (5.5b)
- OPT-6.7B (6.7b)
- OPT-10B (10b)
- OPT-13B (13b)
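
To make the mapping concrete, below is a minimal sketch of how these name strings could be resolved to Hugging Face `OPTConfig` objects. The `get_opt_config` helper and the hub-name table are illustrative assumptions, not the script's actual implementation; the sizes without published OPT checkpoints (700m, 3.5b, 5.5b, 10b) would presumably need custom configurations.

```python
from transformers import OPTConfig

# Hypothetical name-to-checkpoint table: only the sizes with published
# OPT checkpoints can be loaded from the Hugging Face Hub; the remaining
# sizes (700m, 3.5b, 5.5b, 10b) would need custom OPTConfig definitions.
_HUB_CHECKPOINTS = {
    "125m": "facebook/opt-125m",
    "350m": "facebook/opt-350m",
    "1.3b": "facebook/opt-1.3b",
    "2.7b": "facebook/opt-2.7b",
    "6.7b": "facebook/opt-6.7b",
    "13b": "facebook/opt-13b",
}

def get_opt_config(name: str) -> OPTConfig:
    """Resolve a benchmark model name (e.g. "1.3b") to an OPTConfig."""
    if name not in _HUB_CHECKPOINTS:
        raise ValueError(f"No published checkpoint for '{name}'; define a custom OPTConfig.")
    return OPTConfig.from_pretrained(_HUB_CHECKPOINTS[name])
```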
We also provide various training strategies (see the plugin sketch after the list):

- gemini: ColossalAI GeminiPlugin with `placement_policy="cuda"`, similar to ZeRO-3
- gemini_auto: ColossalAI GeminiPlugin with `placement_policy="cpu"`, similar to ZeRO-3 with CPU offloading
- zero2: ColossalAI ZeRO-2
- zero2_cpu: ColossalAI ZeRO-2 with CPU offloading
- 3d: ColossalAI HybridParallelPlugin with TP and DP support
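
As a rough illustration of what these strategy names correspond to in code, the sketch below builds the matching ColossalAI booster plugin. The mapping and constructor arguments are assumptions based on the public `colossalai.booster` API, not the benchmark script itself, and may differ across ColossalAI releases.

```python
import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import (
    GeminiPlugin,
    HybridParallelPlugin,
    LowLevelZeroPlugin,
)

def build_plugin(strategy: str):
    # Assumed mapping from strategy names to booster plugins; argument
    # names/values (e.g. placement_policy) vary between ColossalAI versions.
    if strategy == "gemini":
        return GeminiPlugin(placement_policy="cuda")         # ZeRO-3-like
    if strategy == "gemini_auto":
        return GeminiPlugin(placement_policy="cpu")          # ZeRO-3 with offload
    if strategy == "zero2":
        return LowLevelZeroPlugin(stage=2)
    if strategy == "zero2_cpu":
        return LowLevelZeroPlugin(stage=2, cpu_offload=True)
    if strategy == "3d":
        return HybridParallelPlugin(tp_size=2, pp_size=1)    # TP + DP
    raise ValueError(f"Unknown strategy: {strategy}")

if __name__ == "__main__":
    # Run under a distributed launcher such as `colossalai run` or `torchrun`.
    colossalai.launch_from_torch(config={})  # `config={}` is required on older releases
    booster = Booster(plugin=build_plugin("zero2"))
    # model, optimizer, *_ = booster.boost(model, optimizer)
```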
## How to Run

```bash
cd ../tests

# Prepare data for benchmark
SFT_DATASET=/path/to/sft/data/ \
PROMPT_DATASET=/path/to/prompt/data/ \
PRETRAIN_DATASET=/path/to/ptx/data/ \
PREFERENCE_DATASET=/path/to/preference/data \
./test_data_preparation.sh

# Start benchmark
./benchmark_ppo.sh
```