Commit Graph

9 Commits

Author SHA1 Message Date
Hongxin Liu
641b1ee71a
[devops] remove post commit ci (#5566)
* [devops] remove post commit ci

* [misc] run pre-commit on all files

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-08 15:09:40 +08:00
Camille Zhong
743e7fad2f
[colossal-llama2] add stream chat example for chat version model (#5428)
* add stream chat for chat version

* remove os.system clear

* modify function name
2024-03-07 14:58:56 +08:00
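The stream-chat commit above can be illustrated with a minimal, self-contained sketch (hypothetical names; the real example streams tokens from the Colossal-LLaMA-2 chat model's generation API and, per the follow-up commit, re-renders output instead of calling `os.system("clear")`):

```python
# Minimal sketch of token-streaming chat output (hypothetical; the real
# example streams tokens from the chat-version model during generation).
from typing import Iterator, List


def stream_reply(tokens: List[str]) -> Iterator[str]:
    """Yield the growing reply one token at a time, as a streaming UI would."""
    partial = ""
    for tok in tokens:
        partial += tok
        yield partial  # the UI re-renders this in place instead of clearing


def demo() -> str:
    reply = ""
    for partial in stream_reply(["Hello", ", ", "world", "!"]):
        reply = partial  # last yielded value is the full reply
    return reply
```

The generator pattern keeps the model loop and the display loop decoupled, which is the usual shape of a streaming chat example.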
Camille Zhong
44ca61a22b
[llama] fix neftune & pbar with start_step (#5364) 2024-02-05 18:04:23 +08:00
Hongxin Liu
a4cec1715b
[llama] add flash attn patch for npu (#5362) 2024-02-05 16:48:34 +08:00
Desperado-Jia
ddf879e2db
fix bug for mixture (#5299) 2024-01-22 22:17:54 +08:00
Yuanchen
b397104438
[Colossal-Llama-2] Add finetuning Colossal-Llama-2 example (#4878)
* Add finetuning Colossal-Llama-2 example

* Add finetuning Colossal-Llama-2 example 2

* Add finetuning Colossal-Llama-2 example and support NEFTuning

* Add inference example and refine neftune

* Modify readme file

* update the imports

---------

Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
Co-authored-by: Camille Zhong <44392324+Camille7777@users.noreply.github.com>
2023-12-07 14:02:03 +08:00
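The NEFTune support added in the commit above refers to a published finetuning trick: uniform noise is added to the token embeddings during training, with magnitude scaled by alpha / sqrt(seq_len * dim). A minimal sketch, using plain lists as a stand-in for tensors (function and parameter names here are illustrative, not the repo's actual API):

```python
import math
import random


def neftune_noise(embeddings, alpha: float = 5.0, rng=random):
    """Sketch of NEFTune noise injection for finetuning.

    Adds uniform noise in [-scale, scale] to every embedding value,
    where scale = alpha / sqrt(seq_len * dim). `embeddings` is a
    seq_len x dim list of lists standing in for an embedding tensor.
    """
    seq_len, dim = len(embeddings), len(embeddings[0])
    scale = alpha / math.sqrt(seq_len * dim)
    return [[x + rng.uniform(-scale, scale) for x in row] for row in embeddings]
```

In practice this is applied only during training (not inference), typically via a hook on the embedding layer's forward pass.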
github-actions[bot]
a41cf88e9b
[format] applied code formatting on changed files in pull request 4908 (#4918)
Co-authored-by: github-actions <github-actions@github.com>
2023-10-17 10:48:24 +08:00
Zian(Andy) Zheng
7768afbad0
Update flash_attention_patch.py
To be compatible with a recent change in the Transformers library, where a new argument 'padding_mask' was added to the forward function of the attention layer.
https://github.com/huggingface/transformers/pull/25598
2023-10-16 14:00:45 +08:00
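The compatibility fix above is a common monkey-patching pattern: when the upstream library (here Transformers, via huggingface/transformers#25598) adds a new keyword argument to the function being patched, the patch must accept it or callers raise `TypeError`. A hedged sketch (the function body is a placeholder, not the repo's actual flash-attention kernel):

```python
# Sketch of keeping a monkey-patched attention forward compatible when the
# upstream library adds a new keyword argument (here `padding_mask`).
def patched_attention_forward(hidden_states,
                              attention_mask=None,
                              padding_mask=None,  # newly added upstream argument
                              **kwargs):
    """Accept (and here simply pass through) `padding_mask` so callers built
    against the newer Transformers API do not raise TypeError."""
    # ...a real patch would run the flash-attention computation here...
    return hidden_states
```

Accepting `**kwargs` as well makes the patch tolerant of further signature additions, at the cost of silently ignoring arguments the patch does not handle.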
Tong Li
74aa7d964a
initial commit: add colossal llama 2 (#4784) 2023-09-24 23:12:26 +08:00