mirror of
https://github.com/hwchase17/langchain.git
synced 2025-08-05 11:12:47 +00:00
community: Add token_usage and model_name metadata to ChatZhipuAI stream() and astream() response (#27677)
- **Description:** Add token_usage and model_name metadata to ChatZhipuAI stream() and astream() responses
- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** None

Co-authored-by: jianfehuang <jianfehuang@tencent.com>
This commit is contained in:
parent
8a5807a6b4
commit
18cfb4c067
@@ -591,13 +591,19 @@ class ChatZhipuAI(BaseChatModel):
             if len(chunk["choices"]) == 0:
                 continue
             choice = chunk["choices"][0]
+            usage = chunk.get("usage", None)
+            model_name = chunk.get("model", "")
             chunk = _convert_delta_to_message_chunk(
                 choice["delta"], default_chunk_class
             )
             finish_reason = choice.get("finish_reason", None)

             generation_info = (
-                {"finish_reason": finish_reason}
+                {
+                    "finish_reason": finish_reason,
+                    "token_usage": usage,
+                    "model_name": model_name,
+                }
                 if finish_reason is not None
                 else None
             )
@@ -678,13 +684,19 @@ class ChatZhipuAI(BaseChatModel):
             if len(chunk["choices"]) == 0:
                 continue
             choice = chunk["choices"][0]
+            usage = chunk.get("usage", None)
+            model_name = chunk.get("model", "")
             chunk = _convert_delta_to_message_chunk(
                 choice["delta"], default_chunk_class
             )
             finish_reason = choice.get("finish_reason", None)

             generation_info = (
-                {"finish_reason": finish_reason}
+                {
+                    "finish_reason": finish_reason,
+                    "token_usage": usage,
+                    "model_name": model_name,
+                }
                 if finish_reason is not None
                 else None
             )
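The change above is identical in both `_stream` and `_astream`: the final chunk of a streamed response (the one carrying a `finish_reason`) now also reports the token usage and model name returned by the API. A minimal sketch of that extraction logic, pulled out as a standalone helper for illustration (the function name `extract_generation_info` and the sample chunk dicts are hypothetical; the field names follow the diff):

```python
from typing import Optional


def extract_generation_info(chunk: dict) -> Optional[dict]:
    """Mirror of the generation_info construction added in this commit.

    `chunk` mimics one streamed payload from the ZhipuAI API: a dict with
    a "choices" list, and "usage"/"model" keys on the final chunk.
    """
    if len(chunk["choices"]) == 0:
        return None
    choice = chunk["choices"][0]
    usage = chunk.get("usage", None)
    model_name = chunk.get("model", "")
    finish_reason = choice.get("finish_reason", None)
    # Only the terminating chunk (finish_reason set) carries metadata;
    # intermediate delta chunks yield no generation_info.
    return (
        {
            "finish_reason": finish_reason,
            "token_usage": usage,
            "model_name": model_name,
        }
        if finish_reason is not None
        else None
    )


# Intermediate chunk: no finish_reason, so no generation_info.
print(extract_generation_info({"choices": [{"delta": {"content": "Hi"}}]}))

# Final chunk: finish_reason present, so token_usage and model_name
# are surfaced alongside it.
print(
    extract_generation_info(
        {
            "choices": [{"delta": {}, "finish_reason": "stop"}],
            "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12},
            "model": "glm-4",
        }
    )
)
```

Because the metadata rides on `generation_info`, callers of `stream()`/`astream()` can read it from the last `ChatGenerationChunk` without any change to their iteration code, keeping the change backwards compatible.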