:warning: **This content may be outdated since the major update of Colossal Chat. We will update this content soon.**

# Add PEFT support for SFT and prompts model training

The original implementation simply adopts loralib and merges the LoRA layers into the final model. The Hugging Face peft library is a better LoRA implementation and can be trained and distributed more easily.
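
The peft workflow wraps a base model with LoRA adapters instead of patching layers by hand. The sketch below illustrates that pattern in general terms; the model name and LoRA hyperparameters are placeholders, not the values used by the training scripts in this folder.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative base model; the actual actor/critic classes live in easy_models.py
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

# Example LoRA configuration (rank, scaling and dropout are placeholders)
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
)

# Wrap the model so that only the LoRA adapter weights are trainable
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```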

Since the reward model is relatively small, I just keep it as the original implementation. I suggest training the full model to get a proper reward/critic model.

# Preliminary installation

Since the current PyPI peft package (0.2) has some bugs, please install the peft package from source.

```
git clone https://github.com/huggingface/peft
cd peft
pip install .
```

# Usage

For SFT training, just call train_peft_sft.py.

Its arguments are almost identical to those of train_sft.py, except that it adds a new eval_dataset argument for the case where you have an evaluation dataset file. The data file is just a plain text file; please check the expected format in easy_dataset.py.

For stage-3 RLHF training, call train_peft_prompts.py.

Its arguments are almost identical to those of train_prompts.py. The only difference is that text files are used to specify the prompt and pretraining data files. The models are included in easy_models.py. Currently only BLOOM models have been tested, but technically GPT-2/OPT/LLaMA should also be supported.
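
Since only the LoRA adapter weights are trainable in a peft model, a checkpoint is typically just the small adapter that is re-attached to the base model at load time. The snippet below is a generic peft sketch of that save/reload pattern, not the exact logic used by the scripts here; the model name and paths are placeholders.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, PeftModel, get_peft_model

# Placeholder base model and LoRA config; the real models are built in easy_models.py
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
model = get_peft_model(base, LoraConfig(task_type=TaskType.CAUSAL_LM, r=8))

# After training, save only the (small) LoRA adapter weights
model.save_pretrained("outputs/bloom_lora_adapter")

# Later, reload the base model and re-attach the saved adapter for inference
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
model = PeftModel.from_pretrained(base, "outputs/bloom_lora_adapter")
```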

# Data format

Please refer to the formats in test_sft.txt, test_prompts.txt, and test_pretrained.txt.