Mirror of https://github.com/hpcaitech/ColossalAI.git, synced 2025-04-27 19:36:13 +00:00
* add SimPO
* fix dataloader
* remove debug code
* add orpo
* fix style
* fix colossalai, transformers version
* fix colossalai, transformers version
* fix colossalai, transformers version
* fix torch colossalai version
* update transformers version
* [shardformer] DeepseekMoE support (#5871)
* [Feature] deepseek moe expert parallel implement
* [misc] fix typo, remove redundant file (#5867)
* [misc] fix typo
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Feature] deepseek support & unit test
* [misc] remove debug code & useless print
* [misc] fix typos (#5872)
* [Feature] remove modeling file, use auto config. (#5884)
* [misc] fix typos
* [Feature] deepseek support via auto model, remove modeling file
* [misc] delete useless file
* [misc] fix typos
* [Deepseek] remove redundant code (#5888)
* [misc] fix typos
* [Feature] deepseek support via auto model, remove modeling file
* [misc] delete useless file
* [misc] fix typos
* [misc] remove redundant code
* [Feature/deepseek] resolve comment. (#5889)
* [misc] fix typos
* [Feature] deepseek support via auto model, remove modeling file
* [misc] delete useless file
* [misc] fix typos
* [misc] remove redundant code
* [misc] mv module replacement into if branch
* [misc] add some warning message and modify some code in unit test
* [misc] fix typos
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Hotfix] Fix CUDA_DEVICE_MAX_CONNECTIONS for comm overlap
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [Feat] Diffusion Model(PixArtAlpha/StableDiffusion3) Support (#5838)
* Diffusion Model Inference support
* Stable Diffusion 3 Support
* pixartalpha support
* [HotFix] CI, import, requirements-test for #5838 (#5892)
* [HotFix] CI, import, requirements-test
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Feature] Enable PP + SP for llama (#5868)
* fix cross-PP-stage position id length diff bug
* fix typo
* fix typo
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* use one cross entropy func for all shardformer models
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [ShardFormer] Add Ulysses Sequence Parallelism support for Command-R, Qwen2 and ChatGLM (#5897)
* add benchmark for sft, dpo, simpo, orpo. Add benchmarking result. Support lora with gradient checkpoint
* fix style
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix eval
* hotfix citation
* [zero] support all-gather overlap (#5898)
* [zero] support all-gather overlap
* [zero] add overlap all-gather flag
* [misc] fix typo
* [zero] update api
* fix orpo cross entropy loss
* [Auto Parallel]: Speed up intra-op plan generation by 44% (#5446)
* Remove unnecessary calls to deepcopy
* Build DimSpec's difference dict only once
This change considerably speeds up the construction of DimSpec objects. The difference_dict is the same for each DimSpec object, so a single copy of it is enough.
* Fix documentation of DimSpec's difference method
* [ShardFormer] fix qwen2 sp (#5903)
* [compatibility] support torch 2.2 (#5875)
* Support Pytorch 2.2.2
* keep build_on_pr file and update .compatibility
* fix object_to_tensor usage when torch>=2.3.0 (#5820)
* [misc] support torch2.3 (#5893)
* [misc] support torch2.3
* [devops] update compatibility ci
* [devops] update compatibility ci
* [devops] add debug
* [devops] add debug
* [devops] add debug
* [devops] add debug
* [devops] remove debug
* [devops] remove debug
* [release] update version (#5912)
* [plugin] support all-gather overlap for hybrid parallel (#5919)
* [plugin] fixed all-gather overlap support for hybrid parallel
* add kto
* fix style, add kto data sample
* [Examples] Add lazy init to OPT and GPT examples (#5924)
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [ColossalChat] Hotfix for ColossalChat (#5910)
* add ignore and tiny llama
* fix path issue
* run style
* fix issue
* update bash
* add ignore and tiny llama
* fix path issue
* run style
* fix issue
* update bash
* fix ddp issue
* add Qwen 1.5 32B
* refactor tokenization
* [FIX BUG] UnboundLocalError: cannot access local variable 'default_conversation' where it is not associated with a value (#5931)
* cannot access local variable 'default_conversation' where it is not associated with a value
set default value for 'default_conversation'
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* fix test data
* refactor evaluation
* remove real data path
* remove real data path
* Add n_fused as an input from native_module (#5894)
* [FIX BUG] convert env param to int in (#5934)
* [Hotfix] Fix ZeRO typo #5936
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [Feature] Add a switch to control whether the model checkpoint needs to be saved after each epoch ends (#5941)
* Add a switch to control whether the model checkpoint needs to be saved after each epoch ends
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* fix style
* fix style
* fix style
* [shardformer] hotfix attn mask (#5945)
* [shardformer] hotfix attn mask (#5947)
* [Feat] Distrifusion Acceleration Support for Diffusion Inference (#5895)
* Distrifusion Support source
* comp comm overlap optimization
* sd3 benchmark
* pixart distrifusion bug fix
* sd3 bug fix and benchmark
* generation bug fix
* naming fix
* add docstring, fix counter and shape error
* add reference
* readme and requirement
* [zero] hotfix update master params (#5951)
* [release] update version (#5952)
* [Chat] Fix lora (#5946)
* fix merging
* remove filepath
* fix style
* Update README.md (#5958)
* [hotfix] Remove unused plan section (#5957)
* remove readme
* fix readme
* update
* [test] add mixtral for sequence classification
* [test] add mixtral transformer test
* [moe] fix plugin
* [test] mixtral pp shard test
* [chore] handle non member group
* [zero] solve hang
* [test] pass mixtral shardformer test
* [moe] implement transition between non-moe tp and ep
* [zero] solve hang
* [misc] solve booster hang by renaming the variable
* solve hang when parallel mode = pp + dp
* [moe] implement submesh initialization
* [moe] add mixtral dp grad scaling when not all experts are activated
* [chore] manually revert unintended commit
* [chore] trivial fix
* [chore] arg pass & remove drop token
* [test] add mixtral modelling test
* [moe] implement tp
* [moe] test deepseek
* [moe] clean legacy code
* [Feature] MoE Ulysses Support (#5918)
* moe sp support
* moe sp bug solve
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [chore] minor fix
* [moe] init moe plugin comm setting with sp
* moe sp + ep bug fix
* [moe] finalize test (no pp)
* [moe] full test for deepseek and mixtral (pp + sp to fix)
* [chore] minor fix after rebase
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [chore] solve moe ckpt test failure and some other arg pass failure
* [moe] remove ops
* [test] fix test: test_zero1_2
* [bug] fix: somehow logger hangs the program
* [moe] deepseek moe sp support
* [test] add check
* [deepseek] replace attn (a workaround for bug in transformers)
* [misc] skip redundant test
* [misc] remove debug/print code
* [moe] refactor mesh assignment
* Revert "[moe] implement submesh initialization"
This reverts commit 2f9bce6686.
* [chore] change moe_pg_mesh to private
* [misc] remove incompatible test config
* [misc] fix ci failure: change default value to false in moe plugin
* [misc] remove useless condition
* [chore] docstring
* [moe] remove force_overlap_comm flag and add warning instead
* [doc] add MoeHybridParallelPlugin docstring
* [moe] solve dp axis issue
* [chore] remove redundant test case, print string & reduce test tokens
* [feat] Dist Loader for Eval (#5950)
* support auto distributed data loader
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* support auto distributed data loader
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix tp error
* remove unused parameters
* remove unused
* update inference
* update docs
* update inference
---------
Co-authored-by: Michelle <qianranma8@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [lora] lora support hybrid parallel plugin (#5956)
* lora support hybrid plugin
* fix
* fix
* fix
* fix
* Support overall loss, update KTO logging
* [Docs] clarify launch port
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [Hotfix] README link (#5966)
* update ignore
* update readme
* run style
* update readme
* [Hotfix] Avoid fused RMSnorm import error without apex (#5985)
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [Chat] fix readme (#5989)
* fix readme
* fix readme, tokenization fully tested
* fix readme, tokenization fully tested
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: root <root@notebook-8f919155-6035-47b4-9c6f-1be133b9e2c9-0.notebook-8f919155-6035-47b4-9c6f-1be133b9e2c9.colossal-ai.svc.cluster.local>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* fix sync condition (#6000)
* [plugin] add cast inputs option for zero (#6003)
* [pre-commit.ci] pre-commit autoupdate (#5995)
updates:
- [github.com/psf/black-pre-commit-mirror: 24.4.2 → 24.8.0](https://github.com/psf/black-pre-commit-mirror/compare/24.4.2...24.8.0)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [misc] Bypass the huggingface bug to solve the mask mismatch problem (#5991)
* [Feature] Zigzag Ring attention (#5905)
* halfway
* fix cross-PP-stage position id length diff bug
* fix typo
* fix typo
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* unified cross entropy func for all shardformer models
* remove redundant lines
* add basic ring attn; debug cross entropy
* fwd bwd logic complete
* fwd bwd logic complete; add experimental triton rescale
* precision tests passed
* precision tests passed
* fix typos and remove misc files
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* add sp_mode to benchmark; fix varlen interface
* update softmax_lse shape by new interface
* change tester name
* remove buffer clone; support packed seq layout
* add varlen tests
* fix typo
* all tests passed
* add dkv_group; fix mask
* remove debug statements
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [misc] update compatibility (#6008)
* [misc] update compatibility
* [misc] update requirements
* [devops] disable requirements cache
* [test] fix torch ddp test
* [test] fix rerun on address in use
* [test] fix lazy init
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix the merge
* fix the merge
* overlap kv comm with output rescale (#6017)
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* fix the merge
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix the merge
* fix
* fix
* fix the merge
* fix
* [misc] Use dist logger in plugins (#6011)
* use dist logger in plugins
* remove trash
* print on rank 0
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* fix
* fix
* fix
* fix
* fix the merge
* fix
* fix
* fix
* fix
---------
Co-authored-by: YeAnbang <anbangy2@outlook.com>
Co-authored-by: Haze188 <haze188@qq.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Edenzzzz <wenxuan.tan@wisc.edu>
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: Runyu Lu <77330637+LRY89757@users.noreply.github.com>
Co-authored-by: Guangyao Zhang <xjtu521@qq.com>
Co-authored-by: YeAnbang <44796419+YeAnbang@users.noreply.github.com>
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
Co-authored-by: Stephan Kö <stephankoe@users.noreply.github.com>
Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
Co-authored-by: Tong Li <tong.li352711588@gmail.com>
Co-authored-by: zhurunhua <1281592874@qq.com>
Co-authored-by: Insu Jang <insujang@umich.edu>
Co-authored-by: Gao, Ruiyuan <905370712@qq.com>
Co-authored-by: hxwang <wang1570@e.ntu.edu.sg>
Co-authored-by: Michelle <qianranma8@gmail.com>
Co-authored-by: root <root@notebook-8f919155-6035-47b4-9c6f-1be133b9e2c9-0.notebook-8f919155-6035-47b4-9c6f-1be133b9e2c9.colossal-ai.svc.cluster.local>
396 lines · 16 KiB · Python · Executable File
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Tokenization utils for constructing datasets for PPO, DPO, SFT, and RM training.
"""

import warnings
from copy import deepcopy
from typing import Any, Dict, List, Union

from coati.dataset.conversation import Conversation
from coati.dataset.utils import split_templated_prompt_into_chunks, tokenize_and_concatenate
from datasets import dataset_dict
from torch.utils.data import ConcatDataset, Dataset
from transformers import PreTrainedTokenizer

from colossalai.logging import get_dist_logger

logger = get_dist_logger()

# Matches the default ignore_index of torch.nn.CrossEntropyLoss, so masked
# positions contribute no loss.
IGNORE_INDEX = -100

DSType = Union[Dataset, ConcatDataset, dataset_dict.Dataset]

def tokenize_sft(
    data_point: Dict[str, str],
    tokenizer: PreTrainedTokenizer,
    conversation_template: Conversation = None,
    max_length: int = 4096,
) -> Dict[str, Union[int, str, List[int]]]:
    """
    A tokenization function that tokenizes an original pretraining data point as follows,
    and calculates the corresponding labels for SFT training:
    "Something here can be system message[user_line_start]User line[User line end][Assistant line start]Assistant line[Assistant line end]...[Assistant line end]Something here"
                                        ^
                                        end_of_system_line_position

    Args:
        data_point: the data point of the following format
            {"messages": [{"from": "user", "content": "xxx"}, {"from": "assistant", "content": "xxx"}]}
        tokenizer: the tokenizer to use
        conversation_template: the conversation template to apply
        max_length: the maximum context length
    """

    ignore_index = IGNORE_INDEX

    messages = data_point["messages"]
    template = deepcopy(conversation_template)

    if messages[0]["from"] == "system":
        template.system_message = str(messages[0]["content"])
        messages.pop(0)
    template.messages = []
    for idx, mess in enumerate(messages):
        if mess["from"] != template.roles[idx % 2]:
            raise ValueError(
                f"Messages should alternate between user and assistant and start with a "
                f"line from the user. Got the following data:\n{messages}"
            )
        template.append_message(mess["from"], mess["content"])

    if len(template.messages) % 2 != 0:
        # Force the conversation to end with an assistant response
        template.messages = template.messages[0:-1]

    # Tokenize and compute masked labels: positions corresponding to non-assistant lines get -100
    prompt = template.get_prompt()
    chunks, require_loss = split_templated_prompt_into_chunks(
        template.messages, prompt, conversation_template.end_of_assistant
    )
    tokenized, starts, ends = tokenize_and_concatenate(tokenizer, chunks, require_loss, max_length=max_length)
    if tokenized is None:
        return dict(
            input_ids=None,
            labels=None,
            inputs_decode=None,
            labels_decode=None,
            seq_length=None,
            seq_category=None,
        )

    labels = [ignore_index] * len(tokenized)
    for start, end in zip(starts, ends):
        labels[start:end] = tokenized[start:end]

    if tokenizer.bos_token_id is not None:
        # Force a bos token at the beginning of the sequence if the input ids don't already
        # start with one; some chat templates already include the bos token
        if tokenized[0] != tokenizer.bos_token_id:
            tokenized = [tokenizer.bos_token_id] + tokenized
            labels = [-100] + labels

    # Decode inputs and labels for debugging
    inputs_decode = tokenizer.decode(tokenized)
    start = 0
    end = 0
    label_decode = []
    for i in range(len(labels)):
        if labels[i] == ignore_index:
            if start != end:
                label_decode.append(tokenizer.decode(labels[start + 1 : i], skip_special_tokens=False))
            start = i
            end = i
        else:
            end = i
            if i == len(labels) - 1:
                label_decode.append(tokenizer.decode(labels[start + 1 :], skip_special_tokens=False))

    # Check whether all labels are ignored; this may happen when the tokenized length is too long
    if labels.count(ignore_index) == len(labels):
        return dict(
            input_ids=None,
            labels=None,
            inputs_decode=None,
            labels_decode=None,
            seq_length=None,
            seq_category=None,
        )

    return dict(
        input_ids=tokenized,
        labels=labels,
        inputs_decode=inputs_decode,
        labels_decode=label_decode,
        seq_length=len(tokenized),
        seq_category=data_point["category"] if "category" in data_point else "None",
    )

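# --- Added usage sketch (not part of the original file) ---
# A minimal illustration of the label-masking scheme used by tokenize_sft:
# positions outside the assistant spans keep IGNORE_INDEX and contribute no
# loss. The token ids, starts, and ends below are made-up values.
def _example_sft_label_masking():
    tokenized = [101, 7, 8, 9, 42, 43, 44, 102]  # hypothetical token ids
    starts, ends = [4], [7]  # pretend the assistant reply spans positions 4..6
    labels = [IGNORE_INDEX] * len(tokenized)
    for start, end in zip(starts, ends):
        labels[start:end] = tokenized[start:end]
    # Only the assistant span is kept as supervision targets
    assert labels == [-100, -100, -100, -100, 42, 43, 44, -100]
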
def tokenize_prompt(
    data_point: Dict[str, str],
    tokenizer: PreTrainedTokenizer,
    conversation_template: Conversation = None,
    max_length: int = 4096,
) -> Dict[str, Union[int, str, List[int]]]:
    """
    A tokenization function that tokenizes an original pretraining data point as follows for PPO training:
    "Something here can be system message[user_line_start]User line[User line end][Assistant line start]Assistant line[Assistant line end]...[Assistant line start]"

    Args:
        data_point: the data point of the following format
            {"messages": [{"from": "user", "content": "xxx"}, {"from": "assistant", "content": "xxx"}]}
        tokenizer: the tokenizer to use
        conversation_template: the conversation template to apply
        max_length: the maximum context length
    """

    messages = data_point["messages"]
    template = deepcopy(conversation_template)
    template.messages = []

    if messages[0]["from"] == "system":
        template.system_message = str(messages[0]["content"])
        messages.pop(0)

    for idx, mess in enumerate(messages):
        if mess["from"] != template.roles[idx % 2]:
            raise ValueError(
                f"Messages should alternate between user and assistant and start with a "
                f"line from the user. Got the following data:\n{messages}"
            )
        template.append_message(mess["from"], mess["content"])

    if len(template.messages) % 2 != 1:
        # Exclude the answer if provided; keep only the prompt
        template.messages = template.messages[:-1]

    # Prepare data
    prompt = template.get_prompt(length=len(template.messages), add_generation_prompt=True)
    tokenized = tokenizer([prompt], add_special_tokens=False)["input_ids"][0]

    if tokenizer.bos_token_id is not None:
        if tokenized[0] != tokenizer.bos_token_id:
            tokenized = [tokenizer.bos_token_id] + tokenized

    if len(tokenized) > max_length:
        return dict(
            input_ids=None,
            inputs_decode=None,
            seq_length=None,
            seq_category=None,
        )

    # `inputs_decode` can be used to check whether the tokenization is correct.
    return dict(
        input_ids=tokenized,
        inputs_decode=prompt,
        seq_length=len(tokenized),
        seq_category=data_point["category"] if "category" in data_point else "None",
    )

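# --- Added usage sketch (not part of the original file) ---
# A well-formed `data_point` for tokenize_prompt; a trailing assistant answer,
# if present, is dropped so that only the prompt (plus the generation prompt)
# is tokenized. The contents below are made up.
_EXAMPLE_PROMPT_DATA_POINT = {
    "messages": [
        {"from": "user", "content": "What is 2 + 2?"},
        {"from": "assistant", "content": "4"},  # removed by tokenize_prompt
    ]
}
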
def apply_rlhf_data_format(template: Conversation, tokenizer: Any):
    target_turn = len(template.messages) // 2
    prompt = template.get_prompt(target_turn * 2)
    chunks, require_loss = split_templated_prompt_into_chunks(
        template.messages[: 2 * target_turn], prompt, template.end_of_assistant
    )
    # No truncation applied
    tokenized, starts, ends = tokenize_and_concatenate(tokenizer, chunks, require_loss, max_length=None)

    loss_mask = [0] * len(tokenized)
    label_decode = []
    # Only the last round (chosen/rejected) is used to calculate the loss
    for i in range(starts[-1], ends[-1]):
        loss_mask[i] = 1
    label_decode.append(tokenizer.decode(tokenized[starts[-1] : ends[-1]], skip_special_tokens=False))
    if tokenizer.bos_token_id is not None:
        if tokenized[0] != tokenizer.bos_token_id:
            tokenized = [tokenizer.bos_token_id] + tokenized
            loss_mask = [0] + loss_mask

    return {"input_ids": tokenized, "loss_mask": loss_mask, "label_decode": label_decode}

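# --- Added usage sketch (not part of the original file) ---
# Toy reconstruction of the loss-mask rule above: with two assistant spans,
# only the final (chosen/rejected) span is scored. Span positions are made up.
def _example_rlhf_loss_mask():
    starts, ends = [2, 6], [4, 9]  # hypothetical spans; only the last one counts
    loss_mask = [0] * 10
    for i in range(starts[-1], ends[-1]):
        loss_mask[i] = 1
    assert loss_mask == [0, 0, 0, 0, 0, 0, 1, 1, 1, 0]
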
def tokenize_rlhf(
    data_point: Dict[str, str],
    tokenizer: PreTrainedTokenizer,
    conversation_template: Conversation = None,
    max_length: int = 4096,
) -> Dict[str, Union[int, str, List[int]]]:
    """
    A tokenization function that tokenizes an original pretraining data point of the following format:
    {"context": [{"from": "user", "content": "xxx"}, {"from": "assistant", "content": "xxx"}],
     "chosen": [{"from": "assistant", "content": "xxx"}, ...],
     "rejected": [{"from": "assistant", "content": "xxx"}, ...]}
    """

    context = data_point["context"]
    template = deepcopy(conversation_template)
    template.clear()

    if context[0]["from"] == "system":
        template.system_message = str(context[0]["content"])
        context.pop(0)

    for idx, mess in enumerate(context):
        if mess["from"] != template.roles[idx % 2]:
            raise ValueError(
                f"Messages should alternate between user and assistant and start with a "
                f"line from the user. Got the following data:\n{context}"
            )
        template.append_message(mess["from"], mess["content"])

    if len(template.messages) % 2 != 1:
        warnings.warn(
            "Please make sure the leading context starts and ends with a line from the user\nLeading context: "
            + str(template.messages)
        )
        return dict(
            chosen_input_ids=None,
            chosen_loss_mask=None,
            chosen_label_decode=None,
            rejected_input_ids=None,
            rejected_loss_mask=None,
            rejected_label_decode=None,
        )

    assert context[-1]["from"].lower() == template.roles[0], "The last message in the context should be from the user."
    chosen = deepcopy(template)
    rejected = deepcopy(template)
    chosen_continuation = data_point["chosen"]
    rejected_continuation = data_point["rejected"]
    for turn in range(len(chosen_continuation)):
        if chosen_continuation[turn]["from"] != template.roles[(turn + 1) % 2]:
            raise ValueError(
                f"Messages should alternate between user and assistant and start with a "
                f"line from the user. Got the following data:\n{chosen_continuation}"
            )
        chosen.append_message(chosen_continuation[turn]["from"], chosen_continuation[turn]["content"])

    for turn in range(len(rejected_continuation)):
        if rejected_continuation[turn]["from"] != template.roles[(turn + 1) % 2]:
            raise ValueError(
                f"Messages should alternate between user and assistant and start with a "
                f"line from the user. Got the following data:\n{rejected_continuation}"
            )
        rejected.append_message(rejected_continuation[turn]["from"], rejected_continuation[turn]["content"])

    chosen_data_packed = apply_rlhf_data_format(chosen, tokenizer)
    chosen_input_ids, chosen_loss_mask, chosen_label_decode = (
        chosen_data_packed["input_ids"],
        chosen_data_packed["loss_mask"],
        chosen_data_packed["label_decode"],
    )

    rejected_data_packed = apply_rlhf_data_format(rejected, tokenizer)
    rejected_input_ids, rejected_loss_mask, rejected_label_decode = (
        rejected_data_packed["input_ids"],
        rejected_data_packed["loss_mask"],
        rejected_data_packed["label_decode"],
    )

    if len(chosen_input_ids) > max_length or len(rejected_input_ids) > max_length:
        return dict(
            chosen_input_ids=None,
            chosen_loss_mask=None,
            chosen_label_decode=None,
            rejected_input_ids=None,
            rejected_loss_mask=None,
            rejected_label_decode=None,
        )
    # Check whether the loss mask is all zeros (no loss); this may happen when the tokenized length is too long
    if chosen_loss_mask.count(1) == 0 or rejected_loss_mask.count(1) == 0:
        return dict(
            chosen_input_ids=None,
            chosen_loss_mask=None,
            chosen_label_decode=None,
            rejected_input_ids=None,
            rejected_loss_mask=None,
            rejected_label_decode=None,
        )

    return {
        "chosen_input_ids": chosen_input_ids,
        "chosen_loss_mask": chosen_loss_mask,
        "chosen_label_decode": chosen_label_decode,
        "rejected_input_ids": rejected_input_ids,
        "rejected_loss_mask": rejected_loss_mask,
        "rejected_label_decode": rejected_label_decode,
    }

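# --- Added usage sketch (not part of the original file) ---
# A well-formed preference `data_point` for tokenize_rlhf: `context` ends with
# a user line, while `chosen` and `rejected` are lists of continuation
# messages starting with the assistant. The contents below are made up.
_EXAMPLE_RLHF_DATA_POINT = {
    "context": [{"from": "user", "content": "Summarize the plot of Hamlet."}],
    "chosen": [{"from": "assistant", "content": "A Danish prince avenges his father's murder."}],
    "rejected": [{"from": "assistant", "content": "It is a play."}],
}
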
def tokenize_kto(
    data_point: Dict[str, str],
    tokenizer: PreTrainedTokenizer,
    conversation_template: Conversation = None,
    max_length: int = 4096,
) -> Dict[str, Union[int, str, List[int]]]:
    """
    Tokenize a data point for KTO training.
    The raw input data is a conversation with the following format:
    {
        "prompt": [{"from": "user", "content": "xxx"}, ...],
        "completion": {"from": "assistant", "content": "xxx"},
        "label": true/false
    }
    It returns three main fields:
    the prompt, which contains the query and the assistant start,
    the completion, which contains only the assistant's answer,
    and a binary label, which indicates whether the sample is preferred or not.
    """
    prompt = data_point["prompt"]
    completion = data_point["completion"]
    template = deepcopy(conversation_template)
    template.clear()

    if prompt[0]["from"] == "system":
        template.system_message = str(prompt[0]["content"])
        prompt.pop(0)

    if prompt[0].get("from", None) != "user":
        raise ValueError("The conversation should start with a user message")
    if completion.get("from", None) != "assistant":
        raise ValueError("The conversation should end with an assistant message")

    for mess in prompt:
        if mess.get("from", None) == "user":
            template.append_message("user", mess["content"])
        elif mess.get("from", None) == "assistant":
            template.append_message("assistant", mess["content"])
        else:
            raise ValueError(f"Unsupported role {mess.get('from', None)}")

    generation_prompt = template.get_prompt(len(prompt), add_generation_prompt=True)
    template.append_message("assistant", completion["content"])
    full_prompt = template.get_prompt(len(prompt) + 1, add_generation_prompt=False)
    tokenized_full_prompt = tokenizer(full_prompt, add_special_tokens=False)["input_ids"]
    if len(tokenized_full_prompt) + 1 > max_length:
        return dict(prompt=None, completion=None, label=None, input_id_decode=None, completion_decode=None)
    tokenized_generation_prompt = tokenizer(generation_prompt, add_special_tokens=False)["input_ids"]
    tokenized_completion = tokenized_full_prompt[len(tokenized_generation_prompt) :]
    tokenized_completion = deepcopy(tokenized_completion)
    if tokenizer.bos_token_id is not None and tokenized_generation_prompt[0] != tokenizer.bos_token_id:
        tokenized_generation_prompt = [tokenizer.bos_token_id] + tokenized_generation_prompt
    decoded_full_prompt = tokenizer.decode(tokenized_full_prompt, skip_special_tokens=False)
    decoded_completion = tokenizer.decode(tokenized_completion, skip_special_tokens=False)

    return {
        "prompt": tokenized_generation_prompt,
        "completion": tokenized_completion,
        "label": data_point["label"],
        "input_id_decode": decoded_full_prompt,
        "completion_decode": decoded_completion,
    }

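# --- Added usage sketch (not part of the original file) ---
# Toy illustration of how tokenize_kto derives the completion ids: the full
# prompt is tokenized once, then the generation-prompt prefix is sliced off,
# so prompt and completion come from a single consistent tokenization pass.
# The token ids below are made up.
def _example_kto_completion_split():
    tokenized_generation_prompt = [1, 5, 6, 7]       # hypothetical prompt ids
    tokenized_full_prompt = [1, 5, 6, 7, 42, 43, 2]  # prompt + completion ids
    tokenized_completion = tokenized_full_prompt[len(tokenized_generation_prompt) :]
    assert tokenized_completion == [42, 43, 2]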