[ColossalEval] support for vllm (#6056)

* support vllm

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* modify vllm and update readme

* run pre-commit

* remove duplicated lines and refine code

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update param name

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* refine code

* update readme

* refine code

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
This commit is contained in:
Camille Zhong
2024-09-18 17:09:45 +08:00
committed by GitHub
parent 4fa6b9509c
commit f9546ba0be
19 changed files with 576 additions and 35 deletions


@@ -28,7 +28,7 @@ class ChatGLMModel(HuggingFaceModel):
     @torch.no_grad()
     def get_loss(
-        self, batch_prompt: List[str], batch_target: List[List[str]], pretrain: bool = False
+        self, batch_prompt: List[str], batch_target: List[List[str]], calculate_overall_loss: bool = False
     ) -> List[List[float]]:
         """
         Calculate loss only on target tokens.
@@ -225,7 +225,7 @@ class ChatGLM2Model(ChatGLMModel):
     @torch.no_grad()
     def get_loss(
-        self, batch_prompt: List[str], batch_target: List[List[str]], pretrain: bool = False
+        self, batch_prompt: List[str], batch_target: List[List[str]], calculate_overall_loss: bool = False
     ) -> List[List[float]]:
         """
         Calculate loss only on target tokens.