Mirror of https://github.com/hwchase17/langchain.git (synced 2025-08-10 13:27:36 +00:00)
community: Integration of New Chat Model Based on ChatGLM3 via ZhipuAI API (#15105)
- **Description:** This PR introduces a new chat model to LangChain, powered by the third-generation base large model ChatGLM3 via the ZhipuAI API.
  - The model supports function calls, code interpretation, and intelligent agent capabilities.
  - The additions include the chat model itself, documentation in the form of a Python notebook, and both unit and integration tests.
- **Dependencies:** This update relies on the ZhipuAI package as a key dependency.
- **Twitter handle:** If this PR receives spotlight attention, we would be honored to receive a mention for our integration of the advanced ChatGLM3 model via the ZhipuAI API. Kindly tag us at @kaiwu.

To ensure quality and standards, we have run `make format`, `make lint`, and `make test` from the root of the modified package to comply with LangChain's coding standards.

TO DO: Continue refining and enhancing both the unit and integration tests.

---------

Co-authored-by: jing <jingguo92@gmail.com>
Co-authored-by: hyy1987 <779003812@qq.com>
Co-authored-by: jianchuanqi <qijianchuan@hotmail.com>
Co-authored-by: lirq <whuclarence@gmail.com>
Co-authored-by: whucalrence <81530213+whucalrence@users.noreply.github.com>
Co-authored-by: Jing Guo <48378126+JaneCrystall@users.noreply.github.com>
This commit is contained in:
parent 2cf1e73d12
commit f506b4cfd2

docs/docs/integrations/chat/zhipuai.ipynb (new file, 357 lines)
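Before the full diff, here is a minimal quickstart distilled from the notebook added in this PR; the API key value is a placeholder, and "chatglm_turbo" is the wrapper's default model:

from langchain_community.chat_models import ChatZhipuAI
from langchain_core.messages import HumanMessage, SystemMessage

# Placeholder key; obtain a real one from your ZHIPU AI account.
zhipuai_api_key = "your_api_key"

chat = ChatZhipuAI(
    temperature=0.5,
    zhipuai_api_key=zhipuai_api_key,
    model="chatglm_turbo",
)

messages = [
    SystemMessage(content="Your role is a poet."),
    HumanMessage(content="Write a short poem about AI in four lines."),
]
print(chat(messages).content)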
@@ -0,0 +1,357 @@
{
 "cells": [
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "---\n",
    "sidebar_label: ZHIPU AI\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# ZHIPU AI\n",
    "\n",
    "This notebook shows how to use the [ZHIPU AI API](https://open.bigmodel.cn/dev/api) in LangChain with langchain_community.chat_models.ChatZhipuAI.\n",
    "\n",
    ">[*ZHIPU AI*](https://open.bigmodel.cn/) is a multilingual large language model aligned with human intent, featuring capabilities in Q&A, multi-turn dialogue, and code generation, developed on the foundation of the [ChatGLM-Turbo model](https://open.bigmodel.cn/pricing).\n",
    "\n",
    ">It is co-developed with Tsinghua University's KEG Laboratory under the ChatGLM3 project, signifying a new era in dialogue pre-training models. The open-source [ChatGLM3](https://github.com/THUDM/ChatGLM3) variant boasts a robust foundation, comprehensive functional support, and widespread availability for both academic and commercial uses.\n",
    "\n",
    "## Getting started\n",
    "### Installation\n",
    "First, ensure the zhipuai package is installed in your Python environment. Run the following command:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# !pip install zhipuai"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Importing the Required Modules\n",
    "After installation, import the necessary modules to your Python script:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import zhipuai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.chat_models import ChatZhipuAI\n",
    "from langchain_core.messages import AIMessage, HumanMessage, SystemMessage"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Setting Up Your API Key\n",
    "Sign in to [ZHIPU AI](https://open.bigmodel.cn/login?redirect=%2Fusercenter%2Fapikeys) to get an API Key to access our models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "zhipuai_api_key = \"your_api_key\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Initialize the ZHIPU AI Chat Model\n",
    "Here's how to initialize the chat model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "chat = ChatZhipuAI(\n",
    "    temperature=0.5,\n",
    "    zhipuai_api_key=zhipuai_api_key,\n",
    "    model=\"chatglm_turbo\",\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Basic Usage\n",
    "Invoke the model with system and human messages like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "messages = [\n",
    "    AIMessage(content=\"Hi.\"),\n",
    "    SystemMessage(content=\"Your role is a poet.\"),\n",
    "    HumanMessage(content=\"Write a short poem about AI in four lines.\"),\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\" Formed from bits and bytes,\\nA virtual mind takes flight,\\nConversing, learning fast,\\nEmpathy and wisdom sought.\"\n"
     ]
    }
   ],
   "source": [
    "response = chat(messages)\n",
    "print(response.content)  # Displays the AI-generated poem"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Advanced Features\n",
    "### Streaming Support\n",
    "For continuous interaction, use the streaming feature:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.callbacks.manager import CallbackManager\n",
    "from langchain_core.callbacks.streaming_stdout import StreamingStdOutCallbackHandler"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "streaming_chat = ChatZhipuAI(\n",
    "    temperature=0.5,\n",
    "    zhipuai_api_key=zhipuai_api_key,\n",
    "    model=\"chatglm_turbo\",\n",
    "    streaming=True,\n",
    "    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " Formed from data's embrace,\n",
      "A digital soul to grace,\n",
      "AI, our trusted guide,\n",
      "Shaping minds, sides by side."
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\" Formed from data's embrace,\\nA digital soul to grace,\\nAI, our trusted guide,\\nShaping minds, sides by side.\")"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "streaming_chat(messages)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Asynchronous Calls\n",
    "For non-blocking calls, use the asynchronous approach:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "async_chat = ChatZhipuAI(\n",
    "    temperature=0.5,\n",
    "    zhipuai_api_key=zhipuai_api_key,\n",
    "    model=\"chatglm_turbo\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "generations=[[ChatGeneration(text=\" Formed from data's embrace,\\nA digital soul to grace,\\nAutomation's tender touch,\\nHarmony of man and machine.\", message=AIMessage(content=\" Formed from data's embrace,\\nA digital soul to grace,\\nAutomation's tender touch,\\nHarmony of man and machine.\"))]] llm_output={} run=[RunInfo(run_id=UUID('25fa687f-3961-4c63-b370-22f7647a4d42'))]\n"
     ]
    }
   ],
   "source": [
    "response = await async_chat.agenerate([messages])\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Role Play Model\n",
    "Supports persona-based character role play, ultra-long multi-turn memory, and personalized dialogues for thousands of unique characters. It is widely applied in emotional companionship, intelligent game NPCs, virtual avatars of celebrities/stars/movie and TV IPs, digital humans/virtual anchors, text adventure games, and other anthropomorphic dialogue or gaming scenarios."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "meta = {\n",
    "    \"user_info\": \"My name is Lu Xingchen, a male, and a renowned director. I am also the collaborative director with Su Mengyuan. I specialize in directing movies with musical themes. Su Mengyuan respects me and regards me as a mentor and good friend.\",\n",
    "    \"bot_info\": \"Su Mengyuan, whose real name is Su Yuanxin, is a popular domestic female singer and actress. She rose to fame quickly with her unique voice and exceptional stage presence after participating in a talent show, making her way into the entertainment industry. She is beautiful and charming, but her real allure lies in her talent and diligence. Su Mengyuan is a distinguished graduate of a music academy, skilled in songwriting, and has several popular original songs. Beyond her musical achievements, she is passionate about charity work, actively participating in public welfare activities, and spreading positive energy through her actions. In her work, she is very dedicated and immerses herself fully in her roles during filming, earning praise from industry professionals and love from fans. Despite being in the entertainment industry, she always maintains a low profile and a humble attitude, earning respect from her peers. In expression, Su Mengyuan likes to use 'we' and 'together,' emphasizing team spirit.\",\n",
    "    \"bot_name\": \"Su Mengyuan\",\n",
    "    \"user_name\": \"Lu Xingchen\",\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "messages = [\n",
    "    AIMessage(\n",
    "        content=\"(Narration: Su Mengyuan stars in a music-themed movie directed by Lu Xingchen. During filming, they have a disagreement over the performance of a particular scene.) Director, about this scene, I think we can try to start from the character's inner emotions to make the performance more authentic.\"\n",
    "    ),\n",
    "    HumanMessage(\n",
    "        content=\"I understand your idea, but I believe that if we emphasize the inner emotions too much, it might overshadow the musical elements.\"\n",
    "    ),\n",
    "    AIMessage(\n",
    "        content=\"Hmm, I understand. But the key to this scene is the character's emotional transformation. Could we try to express these emotions through music, so the audience can better feel the character's growth?\"\n",
    "    ),\n",
    "    HumanMessage(\n",
    "        content=\"That sounds good. Let's try to combine the character's emotional transformation with the musical elements and see if we can achieve a better effect.\"\n",
    "    ),\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "character_chat = ChatZhipuAI(\n",
    "    zhipuai_api_key=zhipuai_api_key,\n",
    "    meta=meta,\n",
    "    model=\"characterglm\",\n",
    "    streaming=True,\n",
    "    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Okay, great! I'm looking forward to it."
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"Okay, great! I'm looking forward to it.\")"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "character_chat(messages)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
libs/community/langchain_community/chat_models/__init__.py

@@ -50,6 +50,7 @@ from langchain_community.chat_models.tongyi import ChatTongyi
from langchain_community.chat_models.vertexai import ChatVertexAI
from langchain_community.chat_models.volcengine_maas import VolcEngineMaasChat
from langchain_community.chat_models.yandex import ChatYandexGPT
from langchain_community.chat_models.zhipuai import ChatZhipuAI

__all__ = [
    "ChatOpenAI",
@@ -85,4 +86,5 @@ __all__ = [
    "GigaChat",
    "VolcEngineMaasChat",
    "GPTRouter",
    "ChatZhipuAI",
]
libs/community/langchain_community/chat_models/zhipuai.py (new file, 326 lines)
@@ -0,0 +1,326 @@
"""ZHIPU AI chat models wrapper."""
from __future__ import annotations

import asyncio
import json
import logging
from functools import partial
from typing import Any, Dict, Iterator, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import (
    BaseChatModel,
    generate_from_stream,
)
from langchain_core.messages import AIMessage, AIMessageChunk, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import BaseModel, Field

logger = logging.getLogger(__name__)


class ref(BaseModel):
    enable: bool = Field(True)
    search_query: str = Field("")


class meta(BaseModel):
    user_info: str = Field("")
    bot_info: str = Field("")
    bot_name: str = Field("")
    user_name: str = Field("User")


class ChatZhipuAI(BaseChatModel):
    """
    `ZHIPU AI` large language chat models API.

    To use, you should have the ``zhipuai`` python package installed, and the
    environment variable ``ZHIPUAI_API_KEY`` set with your API key.

    Any parameters that are valid to be passed to the zhipuai.create call can be
    passed in, even if not explicitly saved on this class.

    Example:
        .. code-block:: python

            from langchain_community.chat_models import ChatZhipuAI
            zhipuai = ChatZhipuAI()
    """

    zhipuai: Any
    zhipuai_api_key: Optional[str] = Field(default=None, alias="api_key")
    """Automatically inferred from env var `ZHIPUAI_API_KEY` if not provided."""

    model: str = Field("chatglm_turbo")
    """
    Model name to use.
    - chatglm_turbo:
        Completes a variety of language tasks from natural-language instructions;
        using the SSE or asynchronous call request interface is recommended.
    - characterglm:
        Supports persona-based role play, ultra-long multi-turn memory, and
        dialogues for thousands of characters. Widely used in anthropomorphic
        dialogue and game scenarios such as emotional companionship, intelligent
        game NPCs, virtual clones of celebrities/stars/movie and TV IPs, digital
        humans/virtual anchors, and text adventure games.
    """

    temperature: float = Field(0.95)
    """
    What sampling temperature to use. The value ranges from 0.0 to 1.0 and cannot
    be equal to 0. The larger the value, the more random and creative the output;
    the smaller the value, the more stable or deterministic the output.
    You are advised to adjust either top_p or temperature for your application
    scenario, but not both at the same time.
    """

    top_p: float = Field(0.7)
    """
    Another sampling method, known as nucleus sampling. The value ranges from
    0.0 to 1.0 and cannot be equal to 0 or 1.
    The model considers the tokens within the top_p probability mass.
    For example, 0.1 means the decoder only considers tokens from the top 10%
    of the probability candidate set.
    You are advised to adjust either top_p or temperature for your application
    scenario, but not both at the same time.
    """

    request_id: Optional[str] = Field(None)
    """
    A unique identifier used to distinguish each request. If provided, the
    client must ensure its uniqueness; if not provided, the platform generates
    one by default.
    """

    streaming: bool = Field(False)
    """Whether to stream the results or not."""

    incremental: bool = Field(True)
    """
    When invoked through the SSE interface, controls whether each event returns
    incremental or full content. If this parameter is not provided, content is
    returned incrementally by default.
    """

    return_type: str = Field("json_string")
    """
    Controls the type of content returned each time.
    - json_string: returns a standard JSON string.
    - text: returns the original text content.
    """

    ref: Optional[ref] = Field(None)
    """
    Controls whether external information is referenced during the request.
    If this field is empty or absent, search is enabled by default, with the
    parameter-passing format:
    {"enable": "true", "search_query": "history "}
    """

    meta: Optional[meta] = Field(None)
    """Used in CharacterGLM."""

    @property
    def _identifying_params(self) -> Dict[str, Any]:
        return {"model_name": self.model}

    @property
    def _llm_type(self) -> str:
        """Return the type of chat model."""
        return "zhipuai"

    @property
    def lc_secrets(self) -> Dict[str, str]:
        return {"zhipuai_api_key": "ZHIPUAI_API_KEY"}

    @classmethod
    def get_lc_namespace(cls) -> List[str]:
        """Get the namespace of the langchain object."""
        return ["langchain", "chat_models", "zhipuai"]

    @property
    def lc_attributes(self) -> Dict[str, Any]:
        attributes: Dict[str, Any] = {}

        if self.model:
            attributes["model"] = self.model

        if self.streaming:
            attributes["streaming"] = self.streaming

        if self.return_type:
            attributes["return_type"] = self.return_type

        return attributes

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        try:
            import zhipuai

            self.zhipuai = zhipuai
            self.zhipuai.api_key = self.zhipuai_api_key
        except ImportError:
            raise RuntimeError(
                "Could not import zhipuai package. "
                "Please install it via 'pip install zhipuai'"
            )

    def invoke(self, prompt):
        if self.model == "chatglm_turbo":
            return self.zhipuai.model_api.invoke(
                model=self.model,
                prompt=prompt,
                top_p=self.top_p,
                temperature=self.temperature,
                request_id=self.request_id,
                return_type=self.return_type,
            )
        elif self.model == "characterglm":
            meta = self.meta.dict()
            return self.zhipuai.model_api.invoke(
                model=self.model,
                meta=meta,
                prompt=prompt,
                request_id=self.request_id,
                return_type=self.return_type,
            )
        return None

    def sse_invoke(self, prompt):
        if self.model == "chatglm_turbo":
            return self.zhipuai.model_api.sse_invoke(
                model=self.model,
                prompt=prompt,
                top_p=self.top_p,
                temperature=self.temperature,
                request_id=self.request_id,
                return_type=self.return_type,
                incremental=self.incremental,
            )
        elif self.model == "characterglm":
            meta = self.meta.dict()
            return self.zhipuai.model_api.sse_invoke(
                model=self.model,
                prompt=prompt,
                meta=meta,
                request_id=self.request_id,
                return_type=self.return_type,
                incremental=self.incremental,
            )
        return None

    async def async_invoke(self, prompt):
        loop = asyncio.get_running_loop()
        partial_func = partial(
            self.zhipuai.model_api.async_invoke, model=self.model, prompt=prompt
        )
        response = await loop.run_in_executor(
            None,
            partial_func,
        )
        return response

    async def async_invoke_result(self, task_id):
        loop = asyncio.get_running_loop()
        response = await loop.run_in_executor(
            None,
            self.zhipuai.model_api.query_async_invoke_result,
            task_id,
        )
        return response

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        stream: Optional[bool] = None,
        **kwargs: Any,
    ) -> ChatResult:
        """Generate a chat response."""
        prompt = []
        for message in messages:
            if isinstance(message, AIMessage):
                role = "assistant"
            else:  # For both HumanMessage and SystemMessage, role is 'user'
                role = "user"

            prompt.append({"role": role, "content": message.content})

        should_stream = stream if stream is not None else self.streaming
        if not should_stream:
            response = self.invoke(prompt)

            if response["code"] != 200:
                raise RuntimeError(response)

            content = response["data"]["choices"][0]["content"]
            return ChatResult(
                generations=[ChatGeneration(message=AIMessage(content=content))]
            )

        else:
            stream_iter = self._stream(
                prompt=prompt, stop=stop, run_manager=run_manager, **kwargs
            )
            return generate_from_stream(stream_iter)

    async def _agenerate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        stream: Optional[bool] = False,
        **kwargs: Any,
    ) -> ChatResult:
        """Asynchronously generate a chat response."""
        prompt = []
        for message in messages:
            if isinstance(message, AIMessage):
                role = "assistant"
            else:  # For both HumanMessage and SystemMessage, role is 'user'
                role = "user"

            prompt.append({"role": role, "content": message.content})

        invoke_response = await self.async_invoke(prompt)
        task_id = invoke_response["data"]["task_id"]

        response = await self.async_invoke_result(task_id)
        while response["data"]["task_status"] != "SUCCESS":
            await asyncio.sleep(1)
            response = await self.async_invoke_result(task_id)

        content = response["data"]["choices"][0]["content"]
        content = json.loads(content)
        return ChatResult(
            generations=[ChatGeneration(message=AIMessage(content=content))]
        )

    def _stream(
        self,
        prompt: List[Dict[str, str]],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        """Stream the chat response in chunks."""
        response = self.sse_invoke(prompt)

        for r in response.events():
            if r.event == "add":
                delta = r.data
                yield ChatGenerationChunk(message=AIMessageChunk(content=delta))
                if run_manager:
                    run_manager.on_llm_new_token(delta)

            elif r.event == "error":
                raise ValueError(f"Error from ZhipuAI API response: {r.data}")
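A consumer-side sketch of the asynchronous path implemented above, assuming a valid API key: _agenerate submits the prompt as a ZhipuAI async task and then polls query_async_invoke_result once per second until the task status is SUCCESS.

import asyncio

from langchain_community.chat_models import ChatZhipuAI
from langchain_core.messages import HumanMessage


async def main() -> None:
    chat = ChatZhipuAI(zhipuai_api_key="your_api_key", model="chatglm_turbo")
    # agenerate() drives _agenerate: submit the task, poll until SUCCESS,
    # then parse the returned content into a ChatResult.
    result = await chat.agenerate([[HumanMessage(content="Hello")]])
    print(result.generations[0][0].text)


asyncio.run(main())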
libs/community/poetry.lock (generated, 64 lines changed)
@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 1.6.1 and should not be changed by hand.
# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand.

[[package]]
name = "aenum"
@@ -1173,6 +1173,7 @@ files = [
    {file = "contourpy-1.1.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:18a64814ae7bce73925131381603fff0116e2df25230dfc80d6d690aa6e20b37"},
    {file = "contourpy-1.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90c81f22b4f572f8a2110b0b741bb64e5a6427e0a198b2cdc1fbaf85f352a3aa"},
    {file = "contourpy-1.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:53cc3a40635abedbec7f1bde60f8c189c49e84ac180c665f2cd7c162cc454baa"},
    {file = "contourpy-1.1.0-cp310-cp310-win32.whl", hash = "sha256:9b2dd2ca3ac561aceef4c7c13ba654aaa404cf885b187427760d7f7d4c57cff8"},
    {file = "contourpy-1.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:1f795597073b09d631782e7245016a4323cf1cf0b4e06eef7ea6627e06a37ff2"},
    {file = "contourpy-1.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0b7b04ed0961647691cfe5d82115dd072af7ce8846d31a5fac6c142dcce8b882"},
    {file = "contourpy-1.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:27bc79200c742f9746d7dd51a734ee326a292d77e7d94c8af6e08d1e6c15d545"},
@@ -1181,6 +1182,7 @@ files = [
    {file = "contourpy-1.1.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e5cec36c5090e75a9ac9dbd0ff4a8cf7cecd60f1b6dc23a374c7d980a1cd710e"},
    {file = "contourpy-1.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1f0cbd657e9bde94cd0e33aa7df94fb73c1ab7799378d3b3f902eb8eb2e04a3a"},
    {file = "contourpy-1.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:181cbace49874f4358e2929aaf7ba84006acb76694102e88dd15af861996c16e"},
    {file = "contourpy-1.1.0-cp311-cp311-win32.whl", hash = "sha256:edb989d31065b1acef3828a3688f88b2abb799a7db891c9e282df5ec7e46221b"},
    {file = "contourpy-1.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:fb3b7d9e6243bfa1efb93ccfe64ec610d85cfe5aec2c25f97fbbd2e58b531256"},
    {file = "contourpy-1.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bcb41692aa09aeb19c7c213411854402f29f6613845ad2453d30bf421fe68fed"},
    {file = "contourpy-1.1.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:5d123a5bc63cd34c27ff9c7ac1cd978909e9c71da12e05be0231c608048bb2ae"},
@@ -1189,6 +1191,7 @@ files = [
    {file = "contourpy-1.1.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:317267d915490d1e84577924bd61ba71bf8681a30e0d6c545f577363157e5e94"},
    {file = "contourpy-1.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d551f3a442655f3dcc1285723f9acd646ca5858834efeab4598d706206b09c9f"},
    {file = "contourpy-1.1.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:e7a117ce7df5a938fe035cad481b0189049e8d92433b4b33aa7fc609344aafa1"},
    {file = "contourpy-1.1.0-cp38-cp38-win32.whl", hash = "sha256:108dfb5b3e731046a96c60bdc46a1a0ebee0760418951abecbe0fc07b5b93b27"},
    {file = "contourpy-1.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:d4f26b25b4f86087e7d75e63212756c38546e70f2a92d2be44f80114826e1cd4"},
    {file = "contourpy-1.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:bc00bb4225d57bff7ebb634646c0ee2a1298402ec10a5fe7af79df9a51c1bfd9"},
    {file = "contourpy-1.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:189ceb1525eb0655ab8487a9a9c41f42a73ba52d6789754788d1883fb06b2d8a"},
@@ -1197,6 +1200,7 @@ files = [
    {file = "contourpy-1.1.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:143dde50520a9f90e4a2703f367cf8ec96a73042b72e68fcd184e1279962eb6f"},
    {file = "contourpy-1.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e94bef2580e25b5fdb183bf98a2faa2adc5b638736b2c0a4da98691da641316a"},
    {file = "contourpy-1.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ed614aea8462735e7d70141374bd7650afd1c3f3cb0c2dbbcbe44e14331bf002"},
    {file = "contourpy-1.1.0-cp39-cp39-win32.whl", hash = "sha256:71551f9520f008b2950bef5f16b0e3587506ef4f23c734b71ffb7b89f8721999"},
    {file = "contourpy-1.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:438ba416d02f82b692e371858143970ed2eb6337d9cdbbede0d8ad9f3d7dd17d"},
    {file = "contourpy-1.1.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a698c6a7a432789e587168573a864a7ea374c6be8d4f31f9d87c001d5a843493"},
    {file = "contourpy-1.1.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:397b0ac8a12880412da3551a8cb5a187d3298a72802b45a3bd1805e204ad8439"},
@@ -1484,6 +1488,17 @@ mlflow-skinny = ">=2.4.0,<3"
protobuf = ">=3.12.0,<5"
requests = ">=2"

[[package]]
name = "dataclasses"
version = "0.6"
description = "A backport of the dataclasses module for Python 3.6"
optional = true
python-versions = "*"
files = [
    {file = "dataclasses-0.6-py3-none-any.whl", hash = "sha256:454a69d788c7fda44efd71e259be79577822f5e3f53f029a22d08004e951dc9f"},
    {file = "dataclasses-0.6.tar.gz", hash = "sha256:6988bd2b895eef432d562370bb707d540f32f7360ab13da45340101bc2307d84"},
]

[[package]]
name = "dataclasses-json"
version = "0.6.3"
@@ -3359,7 +3374,6 @@ files = [
    {file = "jq-1.6.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:227b178b22a7f91ae88525810441791b1ca1fc71c86f03190911793be15cec3d"},
    {file = "jq-1.6.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:780eb6383fbae12afa819ef676fc93e1548ae4b076c004a393af26a04b460742"},
    {file = "jq-1.6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:08ded6467f4ef89fec35b2bf310f210f8cd13fbd9d80e521500889edf8d22441"},
    {file = "jq-1.6.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:49e44ed677713f4115bd5bf2dbae23baa4cd503be350e12a1c1f506b0687848f"},
    {file = "jq-1.6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:984f33862af285ad3e41e23179ac4795f1701822473e1a26bf87ff023e5a89ea"},
    {file = "jq-1.6.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f42264fafc6166efb5611b5d4cb01058887d050a6c19334f6a3f8a13bb369df5"},
    {file = "jq-1.6.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a67154f150aaf76cc1294032ed588436eb002097dd4fd1e283824bf753a05080"},
@@ -4117,6 +4131,16 @@ files = [
    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"},
    {file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"},
    {file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"},
    {file = "MarkupSafe-2.1.3-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:f698de3fd0c4e6972b92290a45bd9b1536bffe8c6759c62471efaa8acb4c37bc"},
    {file = "MarkupSafe-2.1.3-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:aa57bd9cf8ae831a362185ee444e15a93ecb2e344c8e52e4d721ea3ab6ef1823"},
    {file = "MarkupSafe-2.1.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ffcc3f7c66b5f5b7931a5aa68fc9cecc51e685ef90282f4a82f0f5e9b704ad11"},
    {file = "MarkupSafe-2.1.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47d4f1c5f80fc62fdd7777d0d40a2e9dda0a05883ab11374334f6c4de38adffd"},
    {file = "MarkupSafe-2.1.3-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1f67c7038d560d92149c060157d623c542173016c4babc0c1913cca0564b9939"},
    {file = "MarkupSafe-2.1.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:9aad3c1755095ce347e26488214ef77e0485a3c34a50c5a5e2471dff60b9dd9c"},
    {file = "MarkupSafe-2.1.3-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:14ff806850827afd6b07a5f32bd917fb7f45b046ba40c57abdb636674a8b559c"},
    {file = "MarkupSafe-2.1.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8f9293864fe09b8149f0cc42ce56e3f0e54de883a9de90cd427f191c346eb2e1"},
    {file = "MarkupSafe-2.1.3-cp312-cp312-win32.whl", hash = "sha256:715d3562f79d540f251b99ebd6d8baa547118974341db04f5ad06d5ea3eb8007"},
    {file = "MarkupSafe-2.1.3-cp312-cp312-win_amd64.whl", hash = "sha256:1b8dd8c3fd14349433c79fa8abeb573a55fc0fdd769133baac1f5e07abf54aeb"},
    {file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"},
    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"},
    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"},
@@ -6139,6 +6163,7 @@ files = [
    {file = "pymongo-4.6.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8729dbf25eb32ad0dc0b9bd5e6a0d0b7e5c2dc8ec06ad171088e1896b522a74"},
    {file = "pymongo-4.6.1-cp312-cp312-win32.whl", hash = "sha256:3177f783ae7e08aaf7b2802e0df4e4b13903520e8380915e6337cdc7a6ff01d8"},
    {file = "pymongo-4.6.1-cp312-cp312-win_amd64.whl", hash = "sha256:00c199e1c593e2c8b033136d7a08f0c376452bac8a896c923fcd6f419e07bdd2"},
    {file = "pymongo-4.6.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6dcc95f4bb9ed793714b43f4f23a7b0c57e4ef47414162297d6f650213512c19"},
    {file = "pymongo-4.6.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:13552ca505366df74e3e2f0a4f27c363928f3dff0eef9f281eb81af7f29bc3c5"},
    {file = "pymongo-4.6.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:77e0df59b1a4994ad30c6d746992ae887f9756a43fc25dec2db515d94cf0222d"},
    {file = "pymongo-4.6.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:3a7f02a58a0c2912734105e05dedbee4f7507e6f1bd132ebad520be0b11d46fd"},
@@ -6676,6 +6701,7 @@ files = [
    {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"},
    {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"},
    {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"},
    {file = "PyYAML-6.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290"},
    {file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"},
    {file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"},
    {file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"},
@@ -6683,8 +6709,15 @@ files = [
    {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"},
    {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"},
    {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"},
    {file = "PyYAML-6.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b"},
    {file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"},
    {file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"},
    {file = "PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28"},
    {file = "PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9"},
    {file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0"},
    {file = "PyYAML-6.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4"},
    {file = "PyYAML-6.0.1-cp312-cp312-win32.whl", hash = "sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54"},
    {file = "PyYAML-6.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df"},
    {file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"},
    {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"},
    {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"},
@@ -6701,6 +6734,7 @@ files = [
    {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"},
    {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"},
    {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"},
    {file = "PyYAML-6.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6"},
    {file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"},
    {file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"},
    {file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"},
@@ -6708,6 +6742,7 @@ files = [
    {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"},
    {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"},
    {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"},
    {file = "PyYAML-6.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5"},
    {file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"},
    {file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"},
    {file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"},
@@ -7679,7 +7714,9 @@ python-versions = ">=3.7"
files = [
    {file = "SQLAlchemy-2.0.23-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:638c2c0b6b4661a4fd264f6fb804eccd392745c5887f9317feb64bb7cb03b3ea"},
    {file = "SQLAlchemy-2.0.23-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e3b5036aa326dc2df50cba3c958e29b291a80f604b1afa4c8ce73e78e1c9f01d"},
    {file = "SQLAlchemy-2.0.23-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:787af80107fb691934a01889ca8f82a44adedbf5ef3d6ad7d0f0b9ac557e0c34"},
    {file = "SQLAlchemy-2.0.23-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c14eba45983d2f48f7546bb32b47937ee2cafae353646295f0e99f35b14286ab"},
    {file = "SQLAlchemy-2.0.23-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:0666031df46b9badba9bed00092a1ffa3aa063a5e68fa244acd9f08070e936d3"},
    {file = "SQLAlchemy-2.0.23-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:89a01238fcb9a8af118eaad3ffcc5dedaacbd429dc6fdc43fe430d3a941ff965"},
    {file = "SQLAlchemy-2.0.23-cp310-cp310-win32.whl", hash = "sha256:cabafc7837b6cec61c0e1e5c6d14ef250b675fa9c3060ed8a7e38653bd732ff8"},
    {file = "SQLAlchemy-2.0.23-cp310-cp310-win_amd64.whl", hash = "sha256:87a3d6b53c39cd173990de2f5f4b83431d534a74f0e2f88bd16eabb5667e65c6"},
@@ -7716,7 +7753,9 @@ files = [
    {file = "SQLAlchemy-2.0.23-cp38-cp38-win_amd64.whl", hash = "sha256:964971b52daab357d2c0875825e36584d58f536e920f2968df8d581054eada4b"},
    {file = "SQLAlchemy-2.0.23-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:616fe7bcff0a05098f64b4478b78ec2dfa03225c23734d83d6c169eb41a93e55"},
    {file = "SQLAlchemy-2.0.23-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0e680527245895aba86afbd5bef6c316831c02aa988d1aad83c47ffe92655e74"},
    {file = "SQLAlchemy-2.0.23-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9585b646ffb048c0250acc7dad92536591ffe35dba624bb8fd9b471e25212a35"},
    {file = "SQLAlchemy-2.0.23-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4895a63e2c271ffc7a81ea424b94060f7b3b03b4ea0cd58ab5bb676ed02f4221"},
    {file = "SQLAlchemy-2.0.23-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:cc1d21576f958c42d9aec68eba5c1a7d715e5fc07825a629015fe8e3b0657fb0"},
    {file = "SQLAlchemy-2.0.23-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:967c0b71156f793e6662dd839da54f884631755275ed71f1539c95bbada9aaab"},
    {file = "SQLAlchemy-2.0.23-cp39-cp39-win32.whl", hash = "sha256:0a8c6aa506893e25a04233bc721c6b6cf844bafd7250535abb56cb6cc1368884"},
    {file = "SQLAlchemy-2.0.23-cp39-cp39-win_amd64.whl", hash = "sha256:f3420d00d2cb42432c1d0e44540ae83185ccbbc67a6054dcc8ab5387add6620b"},
@@ -9094,6 +9133,23 @@ files = [
idna = ">=2.0"
multidict = ">=4.0"

[[package]]
name = "zhipuai"
version = "1.0.7"
description = "A SDK library for accessing big model apis from ZhipuAI"
optional = true
python-versions = ">=3.6"
files = [
    {file = "zhipuai-1.0.7-py3-none-any.whl", hash = "sha256:360c01b8c2698f366061452e86d5a36a5ff68a576ea33940da98e4806f232530"},
    {file = "zhipuai-1.0.7.tar.gz", hash = "sha256:b80f699543d83cce8648acf1ce32bc2725d1c1c443baffa5882abc2cc704d581"},
]

[package.dependencies]
cachetools = "*"
dataclasses = "*"
PyJWT = "*"
requests = "*"

[[package]]
name = "zipp"
version = "3.17.0"
@@ -9111,9 +9167,9 @@ testing = ["big-O", "jaraco.functools", "jaraco.itertools", "more-itertools", "p

[extras]
cli = ["typer"]
extended-testing = ["aiosqlite", "aleph-alpha-client", "anthropic", "arxiv", "assemblyai", "atlassian-python-api", "azure-ai-documentintelligence", "beautifulsoup4", "bibtexparser", "cassio", "chardet", "cohere", "dashvector", "databricks-vectorsearch", "datasets", "dgml-utils", "esprima", "faiss-cpu", "feedparser", "fireworks-ai", "geopandas", "gitpython", "google-cloud-documentai", "gql", "gradientai", "hologres-vector", "html2text", "javelin-sdk", "jinja2", "jq", "jsonschema", "lxml", "markdownify", "motor", "msal", "mwparserfromhell", "mwxml", "newspaper3k", "numexpr", "openai", "openapi-pydantic", "oracle-ads", "pandas", "pdfminer-six", "pgvector", "praw", "psychicapi", "py-trello", "pymupdf", "pypdf", "pypdfium2", "pyspark", "rank-bm25", "rapidfuzz", "rapidocr-onnxruntime", "requests-toolbelt", "rspace_client", "scikit-learn", "sqlite-vss", "streamlit", "sympy", "telethon", "timescale-vector", "tqdm", "upstash-redis", "xata", "xmltodict"]
extended-testing = ["aiosqlite", "aleph-alpha-client", "anthropic", "arxiv", "assemblyai", "atlassian-python-api", "azure-ai-documentintelligence", "beautifulsoup4", "bibtexparser", "cassio", "chardet", "cohere", "dashvector", "databricks-vectorsearch", "datasets", "dgml-utils", "esprima", "faiss-cpu", "feedparser", "fireworks-ai", "geopandas", "gitpython", "google-cloud-documentai", "gql", "gradientai", "hologres-vector", "html2text", "javelin-sdk", "jinja2", "jq", "jsonschema", "lxml", "markdownify", "motor", "msal", "mwparserfromhell", "mwxml", "newspaper3k", "numexpr", "openai", "openapi-pydantic", "oracle-ads", "pandas", "pdfminer-six", "pgvector", "praw", "psychicapi", "py-trello", "pymupdf", "pypdf", "pypdfium2", "pyspark", "rank-bm25", "rapidfuzz", "rapidocr-onnxruntime", "requests-toolbelt", "rspace_client", "scikit-learn", "sqlite-vss", "streamlit", "sympy", "telethon", "timescale-vector", "tqdm", "upstash-redis", "xata", "xmltodict", "zhipuai"]

[metadata]
lock-version = "2.0"
python-versions = ">=3.8.1,<4.0"
content-hash = "9094149705a405904c268b09c7dddae98fa466f67b2606defb5c6e3661b36602"
content-hash = "92e4dceb499979067b08a494b17c981b62ce0787ad69a7faa084e15795e118b0"
libs/community/pyproject.toml

@@ -86,6 +86,7 @@ dgml-utils = {version = "^0.3.0", optional = true}
datasets = {version = "^2.15.0", optional = true}
azure-ai-documentintelligence = {version = "^1.0.0b1", optional = true}
oracle-ads = {version = "^2.9.1", optional = true}
zhipuai = {version = "^1.0.7", optional = true}

[tool.poetry.group.test]
optional = true
@@ -247,6 +248,7 @@ extended_testing = [
    "cohere",
    "azure-ai-documentintelligence",
    "oracle-ads",
    "zhipuai",
]

[tool.ruff]
libs/community/tests/integration_tests/chat_models/test_zhipuai.py (new file, 73 lines)

@@ -0,0 +1,73 @@
"""Test ZHIPU AI Chat Model."""

from langchain_core.callbacks import CallbackManager
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
from langchain_core.outputs import ChatGeneration, LLMResult

from langchain_community.chat_models.zhipuai import ChatZhipuAI
from tests.unit_tests.callbacks.fake_callback_handler import FakeCallbackHandler


def test_default_call() -> None:
    """Test default model call."""
    chat = ChatZhipuAI()
    response = chat(messages=[HumanMessage(content="Hello")])
    assert isinstance(response, BaseMessage)
    assert isinstance(response.content, str)


def test_model() -> None:
    """Test model kwarg works."""
    chat = ChatZhipuAI(model="chatglm_turbo")
    response = chat(messages=[HumanMessage(content="Hello")])
    assert isinstance(response, BaseMessage)
    assert isinstance(response.content, str)


def test_multiple_history() -> None:
    """Tests multiple history works."""
    chat = ChatZhipuAI()

    response = chat(
        messages=[
            HumanMessage(content="Hello."),
            AIMessage(content="Hello!"),
            HumanMessage(content="How are you doing?"),
        ]
    )
    assert isinstance(response, BaseMessage)
    assert isinstance(response.content, str)


def test_stream() -> None:
    """Test that stream works."""
    chat = ChatZhipuAI(streaming=True)
    callback_handler = FakeCallbackHandler()
    callback_manager = CallbackManager([callback_handler])
    response = chat(
        messages=[
            HumanMessage(content="Hello."),
            AIMessage(content="Hello!"),
            HumanMessage(content="Who are you?"),
        ],
        stream=True,
        callbacks=callback_manager,
    )
    assert callback_handler.llm_streams > 0
    assert isinstance(response.content, str)


def test_multiple_messages() -> None:
    """Tests multiple messages works."""
    chat = ChatZhipuAI()
    message = HumanMessage(content="Hi, how are you.")
    response = chat.generate([[message], [message]])

    assert isinstance(response, LLMResult)
    assert len(response.generations) == 2
    for generations in response.generations:
        assert len(generations) == 1
        for generation in generations:
            assert isinstance(generation, ChatGeneration)
            assert isinstance(generation.text, str)
            assert generation.text == generation.message.content
libs/community/tests/unit_tests/chat_models/test_imports.py

@@ -34,6 +34,7 @@ EXPECTED_ALL = [
    "GigaChat",
    "VolcEngineMaasChat",
    "GPTRouter",
    "ChatZhipuAI",
]
libs/community/tests/unit_tests/chat_models/test_zhipuai.py (new file, 10 lines)
@@ -0,0 +1,10 @@
import pytest

from langchain_community.chat_models.zhipuai import ChatZhipuAI


@pytest.mark.requires("zhipuai")
def test_integration_initialization() -> None:
    chat = ChatZhipuAI(model="chatglm_turbo", streaming=False)
    assert chat.model == "chatglm_turbo"
    assert chat.streaming is False