mirror of
https://github.com/hwchase17/langchain.git
synced 2026-02-13 06:16:26 +00:00
Compare commits
22 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 403fae8eec | |
| | d6b50ad3f6 | |
| | 10a9c24dae | |
| | 8fc7a723b9 | |
| | f4863f82e2 | |
| | ae4b6380d9 | |
| | ffbc64c72a | |
| | 6b0b317cb5 | |
| | 21962e2201 | |
| | 1eb0bdadfa | |
| | 7ecdac5240 | |
| | faef3e5d50 | |
| | d4fc734250 | |
| | 4bc70766b5 | |
| | e4877e5ef1 | |
| | 8c5ae108dd | |
| | eedda164c6 | |
| | 4be55f7c89 | |
| | 577cb53a00 | |
| | a7c1bccd6a | |
| | 25d77aa8b4 | |
| | 59fd4cb4c0 | |
@@ -172,7 +172,7 @@ Indexing is the process of keeping your vectorstore in-sync with the underlying
 
 ### Tools
 
-LangChain [Tools](/docs/concepts/tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-buit tools.
+LangChain [Tools](/docs/concepts/tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-built tools.
 
 - [How to: create tools](/docs/how_to/custom_tools)
 - [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin)
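As an aside, the "description plus implementation" shape of a tool that the corrected sentence above describes can be illustrated with a tiny, framework-free sketch (hypothetical `SimpleTool` class, not the LangChain API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SimpleTool:
    """Minimal stand-in for the tool concept: a description the model reads
    to decide when to call the tool, plus the function that actually runs."""
    name: str
    description: str
    func: Callable[[str], str]

    def run(self, tool_input: str) -> str:
        return self.func(tool_input)

# A toy tool: the description is what a model would see; run() is the implementation.
word_count = SimpleTool(
    name="word_count",
    description="Counts the words in the input string.",
    func=lambda s: str(len(s.split())),
)

print(word_count.run("LangChain tools pair a description with a function"))  # → 8
```

Real LangChain tools add argument schemas and async support on top of this, but the two-part contract is the same.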
@@ -17,21 +17,21 @@
 "source": [
 "# ChatClovaX\n",
 "\n",
-"This notebook provides a quick overview for getting started with Naver’s HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/chat_models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html).\n",
+"This notebook provides a quick overview for getting started with Naver’s HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/chat_models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain).\n",
 "\n",
-"[CLOVA Studio](http://clovastudio.ncloud.com/) has several chat models. You can find information about latest models and their costs, context windows, and supported input types in the CLOVA Studio API Guide [documentation](https://api.ncloud-docs.com/docs/clovastudio-chatcompletions).\n",
+"[CLOVA Studio](http://clovastudio.ncloud.com/) has several chat models. You can find information about latest models and their costs, context windows, and supported input types in the CLOVA Studio Guide [documentation](https://guide.ncloud-docs.com/docs/clovastudio-model).\n",
 "\n",
 "## Overview\n",
 "### Integration details\n",
 "\n",
 "| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
 "| :--- | :--- |:-----:| :---: |:------------------------------------------------------------------------:| :---: | :---: |\n",
-"| [ChatClovaX](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ |  |  |\n",
+"| [ChatClovaX](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain#HyperCLOVAX%EB%AA%A8%EB%8D%B8%EC%9D%B4%EC%9A%A9) | [langchain-naver](https://pypi.org/project/langchain-naver/) | ❌ | ❌ | ❌ |  |  |\n",
 "\n",
 "### Model features\n",
 "| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
 "|:------------------------------------------:| :---: | :---: | :---: | :---: | :---: |:-----------------------------------------------------:| :---: |:------------------------------------------------------:|:----------------------------------:|\n",
-"|❌| ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
+"|✅| ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
 "\n",
 "## Setup\n",
 "\n",
@@ -39,26 +39,23 @@
 "\n",
 "1. Creating [NAVER Cloud Platform](https://www.ncloud.com/) account\n",
 "2. Apply to use [CLOVA Studio](https://www.ncloud.com/product/aiService/clovaStudio)\n",
-"3. Create a CLOVA Studio Test App or Service App of a model to use (See [here](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#테스트앱생성).)\n",
+"3. Create a CLOVA Studio Test App or Service App of a model to use (See [here](https://guide.ncloud-docs.com/docs/clovastudio-playground-testapp).)\n",
 "4. Issue a Test or Service API key (See [here](https://api.ncloud-docs.com/docs/ai-naver-clovastudio-summary#API%ED%82%A4).)\n",
 "\n",
 "### Credentials\n",
 "\n",
-"Set the `NCP_CLOVASTUDIO_API_KEY` environment variable with your API key.\n",
-" - Note that if you are using a legacy API Key (that doesn't start with `nv-*` prefix), you might need to get an additional API Key by clicking `App Request Status` > `Service App, Test App List` > `‘Details’ button for each app` in [CLOVA Studio](https://clovastudio.ncloud.com/studio-application/service-app) and set it as `NCP_APIGW_API_KEY`.\n",
+"Set the `CLOVASTUDIO_API_KEY` environment variable with your API key.\n",
 "\n",
 "You can add them to your environment variables as below:\n",
 "\n",
 "``` bash\n",
-"export NCP_CLOVASTUDIO_API_KEY=\"your-api-key-here\"\n",
-"# Uncomment below to use a legacy API key\n",
-"# export NCP_APIGW_API_KEY=\"your-api-key-here\"\n",
+"export CLOVASTUDIO_API_KEY=\"your-api-key-here\"\n",
 "```"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": null,
+"execution_count": 2,
 "id": "2def81b5-b023-4f40-a97b-b2c5ca59d6a9",
 "metadata": {},
 "outputs": [],
@@ -66,22 +63,19 @@
 "import getpass\n",
 "import os\n",
 "\n",
-"if not os.getenv(\"NCP_CLOVASTUDIO_API_KEY\"):\n",
-"    os.environ[\"NCP_CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
-"        \"Enter your NCP CLOVA Studio API Key: \"\n",
-"    )\n",
-"# Uncomment below to use a legacy API key\n",
-"# if not os.getenv(\"NCP_APIGW_API_KEY\"):\n",
-"#     os.environ[\"NCP_APIGW_API_KEY\"] = getpass.getpass(\n",
-"#         \"Enter your NCP API Gateway API key: \"\n",
-"#     )"
+"if not os.getenv(\"CLOVASTUDIO_API_KEY\"):\n",
+"    os.environ[\"CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
+"        \"Enter your CLOVA Studio API Key: \"\n",
+"    )"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "7c695442",
 "metadata": {},
-"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
+"source": [
+"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
+]
 },
 {
 "cell_type": "code",
@@ -101,7 +95,7 @@
 "source": [
 "### Installation\n",
 "\n",
-"The LangChain Naver integration lives in the `langchain-community` package:"
+"The LangChain Naver integration lives in the `langchain-naver` package:"
 ]
 },
 {
@@ -112,7 +106,7 @@
 "outputs": [],
 "source": [
 "# install package\n",
-"!pip install -qU langchain-community"
+"%pip install -qU langchain-naver"
 ]
 },
 {
@@ -127,21 +121,19 @@
 },
 {
 "cell_type": "code",
-"execution_count": 2,
+"execution_count": 3,
 "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
 "metadata": {},
 "outputs": [],
 "source": [
-"from langchain_community.chat_models import ChatClovaX\n",
+"from langchain_naver import ChatClovaX\n",
 "\n",
 "chat = ChatClovaX(\n",
-"    model=\"HCX-003\",\n",
-"    max_tokens=100,\n",
+"    model=\"HCX-005\",\n",
 "    temperature=0.5,\n",
-"    # clovastudio_api_key=\"...\"  # set if you prefer to pass api key directly instead of using environment variables\n",
-"    # task_id=\"...\"  # set if you want to use fine-tuned model\n",
-"    # service_app=False  # set True if using Service App. Default value is False (means using Test App)\n",
-"    # include_ai_filters=False  # set True if you want to detect inappropriate content. Default value is False\n",
+"    max_tokens=None,\n",
+"    timeout=None,\n",
+"    max_retries=2,\n",
 "    # other params...\n",
 ")"
 ]
@@ -153,12 +145,12 @@
 "source": [
 "## Invocation\n",
 "\n",
-"In addition to invoke, we also support batch and stream functionalities."
+"In addition to invoke, `ChatClovaX` also supports batch and stream functionalities."
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 3,
+"execution_count": 4,
 "id": "62e0dbc3",
 "metadata": {
 "tags": []
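The invoke/batch/stream trio mentioned in the hunk above is a general chat-model calling surface; a framework-free sketch of the semantics (toy `EchoModel`, not the real ChatClovaX API):

```python
from typing import Iterator, List

class EchoModel:
    """Toy stand-in for a chat model exposing the three call styles:
    one-shot invoke, many-at-once batch, and incremental stream."""

    def invoke(self, prompt: str) -> str:
        # One prompt in, one complete response out.
        return prompt.upper()

    def batch(self, prompts: List[str]) -> List[str]:
        # Many prompts processed as a unit; one response per prompt.
        return [self.invoke(p) for p in prompts]

    def stream(self, prompt: str) -> Iterator[str]:
        # The same response, yielded incrementally as chunks.
        for token in self.invoke(prompt).split():
            yield token + " "

model = EchoModel()
print(model.invoke("hello world"))           # → HELLO WORLD
print(model.batch(["a b", "c"]))             # → ['A B', 'C']
print("".join(model.stream("hello world")))  # chunks rejoin into "HELLO WORLD "
```

In the real integration, `stream` is what powers token-level streaming, which the feature table above marks as supported.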
@@ -167,10 +159,10 @@
 {
 "data": {
 "text/plain": [
-"AIMessage(content='저는 네이버 AI를 사용하는 것이 좋아요.', additional_kwargs={}, response_metadata={'stop_reason': 'stop_before', 'input_length': 25, 'output_length': 14, 'seed': 1112164354, 'ai_filter': None}, id='run-b57bc356-1148-4007-837d-cc409dbd57cc-0', usage_metadata={'input_tokens': 25, 'output_tokens': 14, 'total_tokens': 39})"
+"AIMessage(content='네이버 인공지능을 사용하는 것을 정말 좋아합니다.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 28, 'total_tokens': 39, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'HCX-005', 'system_fingerprint': None, 'id': 'b70c26671cd247a0864115bacfb5fc12', 'finish_reason': 'stop', 'logprobs': None}, id='run-3faf6a8d-d5da-49ad-9fbb-7b56ed23b484-0', usage_metadata={'input_tokens': 28, 'output_tokens': 11, 'total_tokens': 39, 'input_token_details': {}, 'output_token_details': {}})"
 ]
 },
-"execution_count": 3,
+"execution_count": 4,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -190,7 +182,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": 5,
 "id": "24e7377f",
 "metadata": {},
 "outputs": [
@@ -198,7 +190,7 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"저는 네이버 AI를 사용하는 것이 좋아요.\n"
+"네이버 인공지능을 사용하는 것을 정말 좋아합니다.\n"
 ]
 }
 ],
@@ -218,17 +210,17 @@
 },
 {
 "cell_type": "code",
-"execution_count": 5,
+"execution_count": 6,
 "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
 "metadata": {},
 "outputs": [
 {
 "data": {
 "text/plain": [
-"AIMessage(content='저는 네이버 AI를 사용하는 것이 좋아요.', additional_kwargs={}, response_metadata={'stop_reason': 'stop_before', 'input_length': 25, 'output_length': 14, 'seed': 2575184681, 'ai_filter': None}, id='run-7014b330-eba3-4701-bb62-df73ce39b854-0', usage_metadata={'input_tokens': 25, 'output_tokens': 14, 'total_tokens': 39})"
+"AIMessage(content='저는 네이버 인공지능을 사용하는 것을 좋아합니다.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 28, 'total_tokens': 38, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'HCX-005', 'system_fingerprint': None, 'id': 'b7a826d17fcf4fee8386fca2ebc63284', 'finish_reason': 'stop', 'logprobs': None}, id='run-35957816-3325-4d9c-9441-e40704912be6-0', usage_metadata={'input_tokens': 28, 'output_tokens': 10, 'total_tokens': 38, 'input_token_details': {}, 'output_token_details': {}})"
 ]
 },
-"execution_count": 5,
+"execution_count": 6,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -266,7 +258,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": 7,
 "id": "2c07af21-dda5-4514-b4de-1f214c2cebcd",
 "metadata": {},
 "outputs": [
@@ -274,7 +266,7 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"Certainly! In Korean, \"Hi\" is pronounced as \"안녕\" (annyeong). The first syllable, \"안,\" sounds like the \"ahh\" sound in \"apple,\" while the second syllable, \"녕,\" sounds like the \"yuh\" sound in \"you.\" So when you put them together, it's like saying \"ahhyuh-nyuhng.\" Remember to pronounce each syllable clearly and separately for accurate pronunciation."
+"In Korean, the informal way of saying 'hi' is \"안녕\" (annyeong). If you're addressing someone older or showing more respect, you would use \"안녕하세요\" (annjeonghaseyo). Both phrases are used as greetings similar to 'hello'. Remember, pronunciation is key so make sure to pronounce each syllable clearly: 안-녀-엉 (an-nyeo-eong) and 안-녕-하-세-요 (an-nyeong-ha-se-yo)."
 ]
 }
 ],
@@ -298,115 +290,37 @@
 "\n",
 "### Using fine-tuned models\n",
 "\n",
-"You can call fine-tuned models by passing in your corresponding `task_id` parameter. (You don’t need to specify the `model_name` parameter when calling fine-tuned model.)\n",
+"You can call fine-tuned models by passing the `task_id` to the `model` parameter as: `ft:{task_id}`.\n",
 "\n",
 "You can check `task_id` from corresponding Test App or Service App details."
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 7,
+"execution_count": null,
 "id": "cb436788",
 "metadata": {},
 "outputs": [
 {
 "data": {
 "text/plain": [
-"AIMessage(content='저는 네이버 AI를 사용하는 것이 너무 좋아요.', additional_kwargs={}, response_metadata={'stop_reason': 'stop_before', 'input_length': 25, 'output_length': 15, 'seed': 52559061, 'ai_filter': None}, id='run-5bea8d4a-48f3-4c34-ae70-66e60dca5344-0', usage_metadata={'input_tokens': 25, 'output_tokens': 15, 'total_tokens': 40})"
+"AIMessage(content='네이버 인공지능을 사용하는 것을 정말 좋아합니다.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 28, 'total_tokens': 39, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'HCX-005', 'system_fingerprint': None, 'id': '2222d6d411a948c883aac1e03ca6cebe', 'finish_reason': 'stop', 'logprobs': None}, id='run-9696d7e2-7afa-4bb4-9c03-b95fcf678ab8-0', usage_metadata={'input_tokens': 28, 'output_tokens': 11, 'total_tokens': 39, 'input_token_details': {}, 'output_token_details': {}})"
 ]
 },
-"execution_count": 7,
+"execution_count": 10,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
 "source": [
 "fine_tuned_model = ChatClovaX(\n",
-"    task_id=\"5s8egt3a\",  # set if you want to use fine-tuned model\n",
+"    model=\"ft:a1b2c3d4\",  # set as `ft:{task_id}` with your fine-tuned model's task id\n",
 "    # other params...\n",
 ")\n",
 "\n",
 "fine_tuned_model.invoke(messages)"
 ]
 },
-{
-"cell_type": "markdown",
-"id": "f428deaf",
-"metadata": {},
-"source": [
-"### Service App\n",
-"\n",
-"When going live with production-level application using CLOVA Studio, you should apply for and use Service App. (See [here](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#서비스앱신청).)\n",
-"\n",
-"For a Service App, you should use a corresponding Service API key and can only be called with it."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"id": "dcf566df",
-"metadata": {},
-"outputs": [],
-"source": [
-"# Update environment variables\n",
-"\n",
-"os.environ[\"NCP_CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
-"    \"Enter NCP CLOVA Studio Service API Key: \"\n",
-")"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 9,
-"id": "cebe27ae",
-"metadata": {},
-"outputs": [],
-"source": [
-"chat = ChatClovaX(\n",
-"    service_app=True,  # True if you want to use your service app, default value is False.\n",
-"    # clovastudio_api_key=\"...\"  # if you prefer to pass api key in directly instead of using env vars\n",
-"    model=\"HCX-003\",\n",
-"    # other params...\n",
-")\n",
-"ai_msg = chat.invoke(messages)"
-]
-},
-{
-"cell_type": "markdown",
-"id": "d73e7140",
-"metadata": {},
-"source": [
-"### AI Filter\n",
-"\n",
-"AI Filter detects inappropriate output such as profanity from the test app (or service app included) created in Playground and informs the user. See [here](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#AIFilter) for details."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"id": "32bfbc93",
-"metadata": {},
-"outputs": [],
-"source": [
-"chat = ChatClovaX(\n",
-"    model=\"HCX-003\",\n",
-"    include_ai_filters=True,  # True if you want to enable ai filter\n",
-"    # other params...\n",
-")\n",
-"\n",
-"ai_msg = chat.invoke(messages)"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"id": "7bd9e179",
-"metadata": {},
-"outputs": [],
-"source": [
-"print(ai_msg.response_metadata[\"ai_filter\"])"
-]
-},
 {
 "cell_type": "markdown",
 "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
@@ -414,13 +328,13 @@
 "source": [
 "## API reference\n",
 "\n",
-"For detailed documentation of all ChatNaver features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html"
+"For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain)"
 ]
 }
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3 (ipykernel)",
+"display_name": "Python 3",
 "language": "python",
 "name": "python3"
 },
@@ -434,7 +348,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.12.3"
+"version": "3.12.8"
 }
 },
 "nbformat": 4,
@@ -90,7 +90,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 3,
+"execution_count": null,
 "id": "d285fd7f",
 "metadata": {},
 "outputs": [],
@@ -99,7 +99,7 @@
 "\n",
 "# Initialize a Fireworks model\n",
 "llm = Fireworks(\n",
-"    model=\"accounts/fireworks/models/mixtral-8x7b-instruct\",\n",
+"    model=\"accounts/fireworks/models/llama-v3p1-8b-instruct\",\n",
 "    base_url=\"https://api.fireworks.ai/inference/v1/completions\",\n",
 ")"
 ]
@@ -176,7 +176,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": null,
 "id": "b801c20d",
 "metadata": {},
 "outputs": [
@@ -192,7 +192,7 @@
 "source": [
 "# Setting additional parameters: temperature, max_tokens, top_p\n",
 "llm = Fireworks(\n",
-"    model=\"accounts/fireworks/models/mixtral-8x7b-instruct\",\n",
+"    model=\"accounts/fireworks/models/llama-v3p1-8b-instruct\",\n",
 "    temperature=0.7,\n",
 "    max_tokens=15,\n",
 "    top_p=1.0,\n",
@@ -218,7 +218,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 7,
+"execution_count": null,
 "id": "fd2c6bc1",
 "metadata": {},
 "outputs": [
@@ -235,7 +235,7 @@
 "from langchain_fireworks import Fireworks\n",
 "\n",
 "llm = Fireworks(\n",
-"    model=\"accounts/fireworks/models/mixtral-8x7b-instruct\",\n",
+"    model=\"accounts/fireworks/models/llama-v3p1-8b-instruct\",\n",
 "    temperature=0.7,\n",
 "    max_tokens=15,\n",
 "    top_p=1.0,\n",
File diff suppressed because it is too large
@@ -10,19 +10,23 @@ Please refer to [NCP User Guide](https://guide.ncloud-docs.com/docs/clovastudio-
 
 ## Installation and Setup
 
-- Get a CLOVA Studio API Key by [issuing it](https://api.ncloud-docs.com/docs/ai-naver-clovastudio-summary#API%ED%82%A4) and set it as an environment variable (`NCP_CLOVASTUDIO_API_KEY`).
-- If you are using a legacy API Key (that doesn't start with `nv-*` prefix), you might need to get an additional API Key by [creating your app](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#create-test-app) and set it as `NCP_APIGW_API_KEY`.
+- Get a CLOVA Studio API Key by [issuing it](https://api.ncloud-docs.com/docs/ai-naver-clovastudio-summary#API%ED%82%A4) and set it as an environment variable (`CLOVASTUDIO_API_KEY`).
 
 Naver integrations live in two packages:
 
-- `langchain-naver-community`: a dedicated integration package for Naver. It is a community-maintained package and is not officially maintained by Naver or LangChain.
-- `langchain-community`: a collection of [third-party integrations](https://python.langchain.com/docs/concepts/architecture/#langchain-community),
-including Naver. **New features should be implemented in the dedicated `langchain-naver-community` package**.
+- `langchain-naver`: a dedicated integration package for Naver.
+- `langchain-naver-community`: a community-maintained package and is not officially maintained by Naver or LangChain.
 
 ```bash
-pip install -U langchain-community langchain-naver-community
+pip install -U langchain-naver
+# pip install -U langchain-naver-community // Install to use Naver Search tool.
 ```
 
+> **(Note)** Naver integration via `langchain-community`, a collection of [third-party integrations](https://python.langchain.com/docs/concepts/architecture/#langchain-community), is outdated.
+> - **Use `langchain-naver` instead as new features should only be implemented via this package**.
+> - If you are using `langchain-community` (outdated) and got a legacy API Key (that doesn't start with `nv-*` prefix), you should set it as `NCP_CLOVASTUDIO_API_KEY`, and might need to get an additional API Gateway API Key by [creating your app](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#create-test-app) and set it as `NCP_APIGW_API_KEY`.
 
 ## Chat models
 
 ### ChatClovaX
@@ -30,7 +34,7 @@ pip install -U langchain-naver
 
 See a [usage example](/docs/integrations/chat/naver).
 
 ```python
-from langchain_community.chat_models import ChatClovaX
+from langchain_naver import ChatClovaX
 ```
 
 ## Embedding models
@@ -40,7 +44,7 @@ from langchain_naver import ChatClovaX
 
 See a [usage example](/docs/integrations/text_embedding/naver).
 
 ```python
-from langchain_community.embeddings import ClovaXEmbeddings
+from langchain_naver import ClovaXEmbeddings
 ```
 
 ## Tools
@@ -17,14 +17,14 @@
 "source": [
 "# ClovaXEmbeddings\n",
 "\n",
-"This notebook covers how to get started with embedding models provided by CLOVA Studio. For detailed documentation on `ClovaXEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.naver.ClovaXEmbeddings.html).\n",
+"This notebook covers how to get started with embedding models provided by CLOVA Studio. For detailed documentation on `ClovaXEmbeddings` features and configuration options, please refer to the [API reference](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain#%EC%9E%84%EB%B2%A0%EB%94%A9%EB%8F%84%EA%B5%AC%EC%9D%B4%EC%9A%A9).\n",
 "\n",
 "## Overview\n",
 "### Integration details\n",
 "\n",
 "| Provider | Package |\n",
 "|:--------:|:-------:|\n",
-"| [Naver](/docs/integrations/providers/naver.mdx) | [langchain-community](https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.naver.ClovaXEmbeddings.html) |\n",
+"| [Naver](/docs/integrations/providers/naver.mdx) | [langchain-naver](https://pypi.org/project/langchain-naver/) |\n",
 "\n",
 "## Setup\n",
 "\n",
@@ -33,12 +33,11 @@
 "1. Creating [NAVER Cloud Platform](https://www.ncloud.com/) account \n",
 "2. Apply to use [CLOVA Studio](https://www.ncloud.com/product/aiService/clovaStudio)\n",
 "3. Create a CLOVA Studio Test App or Service App of a model to use (See [here](https://guide.ncloud-docs.com/docs/clovastudio-explorer03#%ED%85%8C%EC%8A%A4%ED%8A%B8%EC%95%B1%EC%83%9D%EC%84%B1).)\n",
-"4. Issue a Test or Service API key (See [here](https://api.ncloud-docs.com/docs/ai-naver-clovastudio-summary#API%ED%82%A4).)\n",
+"4. Issue a Test or Service API key (See [here](https://guide.ncloud-docs.com/docs/clovastudio-explorer-testapp).)\n",
 "\n",
 "### Credentials\n",
 "\n",
-"Set the `NCP_CLOVASTUDIO_API_KEY` environment variable with your API key.\n",
-" - Note that if you are using a legacy API Key (that doesn't start with `nv-*` prefix), you might need two additional keys to be set as environment variables (`NCP_APIGW_API_KEY` and `NCP_CLOVASTUDIO_APP_ID`. They could be found by clicking `App Request Status` > `Service App, Test App List` > `Details` button for each app in [CLOVA Studio](https://clovastudio.ncloud.com/studio-application/service-app)."
+"Set the `CLOVASTUDIO_API_KEY` environment variable with your API key."
 ]
 },
 {
@@ -51,30 +50,8 @@
 "import getpass\n",
 "import os\n",
 "\n",
-"if not os.getenv(\"NCP_CLOVASTUDIO_API_KEY\"):\n",
-"    os.environ[\"NCP_CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
-"        \"Enter NCP CLOVA Studio API Key: \"\n",
-"    )"
-]
-},
-{
-"cell_type": "markdown",
-"id": "b31fc062",
-"metadata": {},
-"source": [
-"Uncomment below to use a legacy API key:"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"id": "83520d8e-ecf8-4e47-b3bc-1ac205b3a2ab",
-"metadata": {},
-"outputs": [],
-"source": [
-"# if not os.getenv(\"NCP_APIGW_API_KEY\"):\n",
-"#     os.environ[\"NCP_APIGW_API_KEY\"] = getpass.getpass(\"Enter NCP API Gateway API Key: \")\n",
-"#     os.environ[\"NCP_CLOVASTUDIO_APP_ID\"] = input(\"Enter NCP CLOVA Studio App ID: \")"
+"if not os.getenv(\"CLOVASTUDIO_API_KEY\"):\n",
+"    os.environ[\"CLOVASTUDIO_API_KEY\"] = getpass.getpass(\"Enter CLOVA Studio API Key: \")"
 ]
 },
 {
@@ -84,7 +61,7 @@
 "source": [
 "### Installation\n",
 "\n",
-"ClovaXEmbeddings integration lives in the `langchain_community` package:"
+"ClovaXEmbeddings integration lives in the `langchain_naver` package:"
 ]
 },
 {
@@ -95,7 +72,7 @@
 "outputs": [],
 "source": [
 "# install package\n",
-"!pip install -U langchain-community"
+"%pip install -qU langchain-naver"
 ]
 },
 {
@@ -113,7 +90,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 7,
+"execution_count": null,
 "id": "62e0dbc3",
 "metadata": {
 "scrolled": true,
@@ -121,10 +98,10 @@
 },
 "outputs": [],
 "source": [
-"from langchain_community.embeddings import ClovaXEmbeddings\n",
+"from langchain_naver import ClovaXEmbeddings\n",
 "\n",
 "embeddings = ClovaXEmbeddings(\n",
-"    model=\"clir-emb-dolphin\"  # set with the model name of corresponding app id. Default is `clir-emb-dolphin`\n",
+"    model=\"clir-emb-dolphin\"  # set with the model name of corresponding test/service app. Default is `clir-emb-dolphin`\n",
 ")"
 ]
 },
@@ -225,7 +202,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 10,
+"execution_count": null,
 "id": "1f2e6104",
 "metadata": {},
 "outputs": [
@@ -239,55 +216,12 @@
 }
 ],
 "source": [
-"text2 = \"LangChain is the framework for building context-aware reasoning applications\"\n",
+"text2 = \"LangChain is a framework for building context-aware reasoning applications\"\n",
 "two_vectors = embeddings.embed_documents([text, text2])\n",
 "for vector in two_vectors:\n",
 "    print(str(vector)[:100])  # Show the first 100 characters of the vector"
 ]
 },
-{
-"cell_type": "markdown",
-"id": "eee40d32367cc5c4",
-"metadata": {},
-"source": [
-"## Additional functionalities\n",
-"\n",
-"### Service App\n",
-"\n",
-"When going live with production-level application using CLOVA Studio, you should apply for and use Service App. (See [here](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#서비스앱신청).)\n",
-"\n",
-"For a Service App, you should use a corresponding Service API key and can only be called with it."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"id": "08f9f44e-c6a4-4163-8caf-27a0cda345b7",
-"metadata": {},
-"outputs": [],
-"source": [
-"# Update environment variables\n",
-"\n",
-"os.environ[\"NCP_CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
-"    \"Enter NCP CLOVA Studio API Key for Service App: \"\n",
-")\n",
-"# Uncomment below to use a legacy API key:\n",
-"os.environ[\"NCP_CLOVASTUDIO_APP_ID\"] = input(\"Enter NCP CLOVA Studio Service App ID: \")"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"id": "86f59698-b3f4-4b19-a9d4-4facfcea304b",
-"metadata": {},
-"outputs": [],
-"source": [
-"embeddings = ClovaXEmbeddings(\n",
-"    service_app=True,\n",
-"    model=\"clir-emb-dolphin\",  # set with the model name of corresponding app id of your Service App\n",
-")"
-]
-},
 {
 "cell_type": "markdown",
 "id": "1ddeaee9",
@@ -295,7 +229,7 @@
 "source": [
 "## API Reference\n",
 "\n",
-"For detailed documentation on `ClovaXEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/latest/api_reference/community/embeddings/langchain_community.embeddings.naver.ClovaXEmbeddings.html)."
+"For detailed documentation on `ClovaXEmbeddings` features and configuration options, please refer to the [API reference](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain#%EC%9E%84%EB%B2%A0%EB%94%A9%EB%8F%84%EA%B5%AC%EC%9D%B4%EC%9A%A9)."
 ]
 }
 ],
@@ -135,6 +135,13 @@ ${llmVarName} = AzureChatOpenAI(
     apiKeyName: "AZURE_OPENAI_API_KEY",
     packageName: "langchain[openai]",
   },
+  {
+    value: "google_genai",
+    label: "Google Gemini",
+    model: "gemini-2.0-flash",
+    apiKeyName: "GOOGLE_API_KEY",
+    packageName: "langchain[google-genai]",
+  },
   {
     value: "google_vertexai",
     label: "Google Vertex",
1 docs/static/js/google_analytics.js vendored
@@ -3,3 +3,4 @@ function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());

gtag('config', 'G-9B66JQQH2F');
+gtag('config', 'G-47WX3HKKY2');
@@ -185,7 +185,7 @@ class SitemapLoader(WebBaseLoader):

            els.append(
                {
-                    tag: prop.text
+                    tag: prop.text.strip()
                    for tag in ["loc", "lastmod", "changefreq", "priority"]
                    if (prop := url.find(tag))
                }
@@ -480,6 +480,8 @@ class OpenSearchVectorSearch(VectorStore):
        bulk_size = bulk_size if bulk_size is not None else self.bulk_size
        _validate_embeddings_and_bulk_size(len(embeddings), bulk_size)
        index_name = kwargs.get("index_name", self.index_name)
+        if self.index_name is None:
+            raise ValueError("index_name must be provided.")
        text_field = kwargs.get("text_field", "text")
        dim = len(embeddings[0])
        engine = kwargs.get("engine", self.engine)
@@ -522,6 +524,8 @@ class OpenSearchVectorSearch(VectorStore):
        bulk_size = bulk_size if bulk_size is not None else self.bulk_size
        _validate_embeddings_and_bulk_size(len(embeddings), bulk_size)
        index_name = kwargs.get("index_name", self.index_name)
+        if self.index_name is None:
+            raise ValueError("index_name must be provided.")
        text_field = kwargs.get("text_field", "text")
        dim = len(embeddings[0])
        engine = kwargs.get("engine", self.engine)
@@ -735,12 +739,14 @@ class OpenSearchVectorSearch(VectorStore):
            raise ImportError(IMPORT_OPENSEARCH_PY_ERROR)

        body = []

+        index_name = kwargs.get("index_name", self.index_name)
+        if self.index_name is None:
+            raise ValueError("index_name must be provided.")
        if ids is None:
            raise ValueError("ids must be provided.")

        for _id in ids:
-            body.append({"_op_type": "delete", "_index": self.index_name, "_id": _id})
+            body.append({"_op_type": "delete", "_index": index_name, "_id": _id})

        if len(body) > 0:
            try:
@@ -766,8 +772,10 @@ class OpenSearchVectorSearch(VectorStore):
        """
        if ids is None:
            raise ValueError("No ids provided to delete.")

-        actions = [{"delete": {"_index": self.index_name, "_id": id_}} for id_ in ids]
+        index_name = kwargs.get("index_name", self.index_name)
+        if self.index_name is None:
+            raise ValueError("index_name must be provided.")
+        actions = [{"delete": {"_index": index_name, "_id": id_}} for id_ in ids]
        response = await self.async_client.bulk(body=actions, **kwargs)
        return not any(
            item.get("delete", {}).get("error") for item in response["items"]
@@ -1096,6 +1104,8 @@ class OpenSearchVectorSearch(VectorStore):
        search_type = kwargs.get("search_type", "approximate_search")
        vector_field = kwargs.get("vector_field", "vector_field")
        index_name = kwargs.get("index_name", self.index_name)
+        if self.index_name is None:
+            raise ValueError("index_name must be provided.")
        filter = kwargs.get("filter", {})

        if (
@@ -7,8 +7,8 @@ authors = []
license = { text = "MIT" }
requires-python = "<4.0,>=3.9"
dependencies = [
-    "langchain-core<1.0.0,>=0.3.51",
-    "langchain<1.0.0,>=0.3.23",
+    "langchain-core<1.0.0,>=0.3.55",
+    "langchain<1.0.0,>=0.3.24",
    "SQLAlchemy<3,>=1.4",
    "requests<3,>=2",
    "PyYAML>=5.3",
@@ -22,7 +22,7 @@ dependencies = [
    "numpy>=2.1.0; python_version>='3.13'",
]
name = "langchain-community"
-version = "0.3.21"
+version = "0.3.22"
description = "Community contributed LangChain integrations."
readme = "README.md"

@@ -1,155 +0,0 @@
import importlib
import inspect
import pkgutil
from types import ModuleType

from langchain_core.load.mapping import SERIALIZABLE_MAPPING


def import_all_modules(package_name: str) -> dict:
    package = importlib.import_module(package_name)
    classes: dict = {}

    def _handle_module(module: ModuleType) -> None:
        # Iterate over all members of the module

        names = dir(module)

        if hasattr(module, "__all__"):
            names += list(module.__all__)

        names = sorted(set(names))

        for name in names:
            # Check if it's a class or function
            attr = getattr(module, name)

            if not inspect.isclass(attr):
                continue

            if not hasattr(attr, "is_lc_serializable") or not isinstance(attr, type):
                continue

            if (
                isinstance(attr.is_lc_serializable(), bool)
                and attr.is_lc_serializable()
            ):
                key = tuple(attr.lc_id())
                value = tuple(attr.__module__.split(".") + [attr.__name__])
                if key in classes and classes[key] != value:
                    raise ValueError
                classes[key] = value

    _handle_module(package)

    for importer, modname, ispkg in pkgutil.walk_packages(
        package.__path__, package.__name__ + "."
    ):
        try:
            module = importlib.import_module(modname)
        except ModuleNotFoundError:
            continue
        _handle_module(module)

    return classes


def test_import_all_modules() -> None:
    """Test import all modules works as expected"""
    all_modules = import_all_modules("langchain")
    filtered_modules = [
        k
        for k in all_modules
        if len(k) == 4 and tuple(k[:2]) == ("langchain", "chat_models")
    ]
    # This test will need to be updated if new serializable classes are added
    # to community
    assert sorted(filtered_modules) == sorted(
        [
            ("langchain", "chat_models", "azure_openai", "AzureChatOpenAI"),
            ("langchain", "chat_models", "bedrock", "BedrockChat"),
            ("langchain", "chat_models", "anthropic", "ChatAnthropic"),
            ("langchain", "chat_models", "fireworks", "ChatFireworks"),
            ("langchain", "chat_models", "google_palm", "ChatGooglePalm"),
            ("langchain", "chat_models", "openai", "ChatOpenAI"),
            ("langchain", "chat_models", "vertexai", "ChatVertexAI"),
        ]
    )


def test_serializable_mapping() -> None:
    to_skip = {
        # This should have had a different namespace, as it was never
        # exported from the langchain module, but we keep for whoever has
        # already serialized it.
        ("langchain", "prompts", "image", "ImagePromptTemplate"): (
            "langchain_core",
            "prompts",
            "image",
            "ImagePromptTemplate",
        ),
        # This is not exported from langchain, only langchain_core
        ("langchain_core", "prompts", "structured", "StructuredPrompt"): (
            "langchain_core",
            "prompts",
            "structured",
            "StructuredPrompt",
        ),
        # This is not exported from langchain, only langchain_core
        ("langchain", "schema", "messages", "RemoveMessage"): (
            "langchain_core",
            "messages",
            "modifier",
            "RemoveMessage",
        ),
        ("langchain", "chat_models", "mistralai", "ChatMistralAI"): (
            "langchain_mistralai",
            "chat_models",
            "ChatMistralAI",
        ),
        ("langchain_groq", "chat_models", "ChatGroq"): (
            "langchain_groq",
            "chat_models",
            "ChatGroq",
        ),
        ("langchain_sambanova", "chat_models", "ChatSambaNovaCloud"): (
            "langchain_sambanova",
            "chat_models",
            "ChatSambaNovaCloud",
        ),
        ("langchain_sambanova", "chat_models", "ChatSambaStudio"): (
            "langchain_sambanova",
            "chat_models",
            "ChatSambaStudio",
        ),
        # TODO(0.3): For now we're skipping the below two tests. Need to fix
        # so that it only runs when langchain-aws, langchain-google-genai
        # are installed.
        ("langchain", "chat_models", "bedrock", "ChatBedrock"): (
            "langchain_aws",
            "chat_models",
            "bedrock",
            "ChatBedrock",
        ),
        ("langchain_google_genai", "chat_models", "ChatGoogleGenerativeAI"): (
            "langchain_google_genai",
            "chat_models",
            "ChatGoogleGenerativeAI",
        ),
    }
    serializable_modules = import_all_modules("langchain")

    missing = set(SERIALIZABLE_MAPPING).difference(
        set(serializable_modules).union(to_skip)
    )
    assert missing == set()
    extra = set(serializable_modules).difference(SERIALIZABLE_MAPPING)
    assert extra == set()

    for k, import_path in serializable_modules.items():
        import_dir, import_obj = import_path[:-1], import_path[-1]
        # Import module
        mod = importlib.import_module(".".join(import_dir))
        # Import class
        cls = getattr(mod, import_obj)
        assert list(k) == cls.lc_id()
12 libs/community/uv.lock generated
@@ -1,5 +1,4 @@
version = 1
-revision = 1
requires-python = ">=3.9, <4.0"
resolution-markers = [
    "python_full_version >= '3.13' and platform_python_implementation == 'PyPy'",
@@ -1498,7 +1497,7 @@ wheels = [

[[package]]
name = "langchain"
-version = "0.3.23"
+version = "0.3.24"
source = { editable = "../langchain" }
dependencies = [
    { name = "async-timeout", marker = "python_full_version < '3.11'" },
@@ -1539,7 +1538,6 @@ requires-dist = [
    { name = "requests", specifier = ">=2,<3" },
    { name = "sqlalchemy", specifier = ">=1.4,<3" },
]
-provides-extras = ["community", "anthropic", "openai", "azure-ai", "cohere", "google-vertexai", "google-genai", "fireworks", "ollama", "together", "mistralai", "huggingface", "groq", "aws", "deepseek", "xai", "perplexity"]

[package.metadata.requires-dev]
codespell = [{ name = "codespell", specifier = ">=2.2.0,<3.0.0" }]
@@ -1596,7 +1594,7 @@ test-integration = [
typing = [
    { name = "langchain-core", editable = "../core" },
    { name = "langchain-text-splitters", editable = "../text-splitters" },
-    { name = "mypy", specifier = ">=1.10,<2.0" },
+    { name = "mypy", specifier = ">=1.15,<2.0" },
    { name = "mypy-protobuf", specifier = ">=3.0.0,<4.0.0" },
    { name = "numpy", marker = "python_full_version < '3.13'", specifier = ">=1.26.4" },
    { name = "numpy", marker = "python_full_version >= '3.13'", specifier = ">=2.1.0" },
@@ -1610,7 +1608,7 @@ typing = [

[[package]]
name = "langchain-community"
-version = "0.3.21"
+version = "0.3.22"
source = { editable = "." }
dependencies = [
    { name = "aiohttp" },
@@ -1757,7 +1755,7 @@ typing = [

[[package]]
name = "langchain-core"
-version = "0.3.51"
+version = "0.3.55"
source = { editable = "../core" }
dependencies = [
    { name = "jsonpatch" },
@@ -1816,7 +1814,7 @@ typing = [

[[package]]
name = "langchain-tests"
-version = "0.3.17"
+version = "0.3.19"
source = { editable = "../standard-tests" }
dependencies = [
    { name = "httpx" },
139 libs/core/langchain_core/language_models/_utils.py Normal file
@@ -0,0 +1,139 @@
import re
from collections.abc import Sequence
from typing import Optional

from langchain_core.messages import BaseMessage


def _is_openai_data_block(block: dict) -> bool:
    """Check if the block contains multimodal data in OpenAI Chat Completions format."""
    if block.get("type") == "image_url":
        if (
            (set(block.keys()) <= {"type", "image_url", "detail"})
            and (image_url := block.get("image_url"))
            and isinstance(image_url, dict)
        ):
            url = image_url.get("url")
            if isinstance(url, str):
                return True

    elif block.get("type") == "file":
        if (file := block.get("file")) and isinstance(file, dict):
            file_data = file.get("file_data")
            if isinstance(file_data, str):
                return True

    elif block.get("type") == "input_audio":  # noqa: SIM102
        if (input_audio := block.get("input_audio")) and isinstance(input_audio, dict):
            audio_data = input_audio.get("data")
            audio_format = input_audio.get("format")
            if isinstance(audio_data, str) and isinstance(audio_format, str):
                return True

    else:
        return False

    return False


def _parse_data_uri(uri: str) -> Optional[dict]:
    """Parse a data URI into its components. If parsing fails, return None.

    Example:

        .. code-block:: python

            data_uri = "data:image/jpeg;base64,/9j/4AAQSkZJRg..."
            parsed = _parse_data_uri(data_uri)

            assert parsed == {
                "source_type": "base64",
                "mime_type": "image/jpeg",
                "data": "/9j/4AAQSkZJRg...",
            }
    """
    regex = r"^data:(?P<mime_type>[^;]+);base64,(?P<data>.+)$"
    match = re.match(regex, uri)
    if match is None:
        return None
    return {
        "source_type": "base64",
        "data": match.group("data"),
        "mime_type": match.group("mime_type"),
    }


def _convert_openai_format_to_data_block(block: dict) -> dict:
    """Convert OpenAI image content block to standard data content block.

    If parsing fails, pass-through.

    Args:
        block: The OpenAI image content block to convert.

    Returns:
        The converted standard data content block.
    """
    if block["type"] == "image_url":
        parsed = _parse_data_uri(block["image_url"]["url"])
        if parsed is not None:
            parsed["type"] = "image"
            return parsed
        return block

    if block["type"] == "file":
        parsed = _parse_data_uri(block["file"]["file_data"])
        if parsed is not None:
            parsed["type"] = "file"
            if filename := block["file"].get("filename"):
                parsed["filename"] = filename
            return parsed
        return block

    if block["type"] == "input_audio":
        data = block["input_audio"].get("data")
        format = block["input_audio"].get("format")
        if data and format:
            return {
                "type": "audio",
                "source_type": "base64",
                "data": data,
                "mime_type": f"audio/{format}",
            }
        return block

    return block


def _normalize_messages(messages: Sequence[BaseMessage]) -> list[BaseMessage]:
    """Extend support for message formats.

    Chat models implement support for images in OpenAI Chat Completions format, as well
    as other multimodal data as standard data blocks. This function extends support to
    audio and file data in OpenAI Chat Completions format by converting them to standard
    data blocks.
    """
    formatted_messages = []
    for message in messages:
        formatted_message = message
        if isinstance(message.content, list):
            for idx, block in enumerate(message.content):
                if (
                    isinstance(block, dict)
                    # Subset to (PDF) files and audio, as most relevant chat models
                    # support images in OAI format (and some may not yet support the
                    # standard data block format)
                    and block.get("type") in ("file", "input_audio")
                    and _is_openai_data_block(block)
                ):
                    if formatted_message is message:
                        formatted_message = message.model_copy()
                        # Also shallow-copy content
                        formatted_message.content = list(formatted_message.content)

                    formatted_message.content[idx] = (  # type: ignore[index]  # mypy confused by .model_copy
                        _convert_openai_format_to_data_block(block)
                    )
        formatted_messages.append(formatted_message)

    return formatted_messages
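As a rough standalone sketch of what the `_parse_data_uri` helper above does — independent of langchain-core, with a hypothetical function name — the parsing boils down to one anchored regex over the `data:<mime_type>;base64,<data>` shape:

```python
import re

def parse_data_uri(uri):
    """Split a base64 data URI into mime type and payload; None if malformed."""
    # Same pattern as the helper above: data:<mime_type>;base64,<data>
    match = re.match(r"^data:(?P<mime_type>[^;]+);base64,(?P<data>.+)$", uri)
    if match is None:
        return None
    return {
        "source_type": "base64",
        "mime_type": match.group("mime_type"),
        "data": match.group("data"),
    }

print(parse_data_uri("data:application/pdf;base64,JVBERi0xLjQ="))
print(parse_data_uri("not a data uri"))  # falls through to None
```

Returning `None` instead of raising lets callers such as `_convert_openai_format_to_data_block` pass unparseable blocks through unchanged.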
@@ -40,6 +40,7 @@ from langchain_core.callbacks import (
    Callbacks,
)
from langchain_core.globals import get_llm_cache
+from langchain_core.language_models._utils import _normalize_messages
from langchain_core.language_models.base import (
    BaseLanguageModel,
    LangSmithParams,
@@ -489,7 +490,8 @@ class BaseChatModel(BaseLanguageModel[BaseMessage], ABC):
            self.rate_limiter.acquire(blocking=True)

        try:
-            for chunk in self._stream(messages, stop=stop, **kwargs):
+            input_messages = _normalize_messages(messages)
+            for chunk in self._stream(input_messages, stop=stop, **kwargs):
                if chunk.message.id is None:
                    chunk.message.id = f"run-{run_manager.run_id}"
                chunk.message.response_metadata = _gen_info_and_msg_metadata(chunk)
@@ -574,8 +576,9 @@ class BaseChatModel(BaseLanguageModel[BaseMessage], ABC):

        generation: Optional[ChatGenerationChunk] = None
        try:
+            input_messages = _normalize_messages(messages)
            async for chunk in self._astream(
-                messages,
+                input_messages,
                stop=stop,
                **kwargs,
            ):
@@ -753,7 +756,10 @@ class BaseChatModel(BaseLanguageModel[BaseMessage], ABC):
            batch_size=len(messages),
        )
        results = []
-        for i, m in enumerate(messages):
+        input_messages = [
+            _normalize_messages(message_list) for message_list in messages
+        ]
+        for i, m in enumerate(input_messages):
            try:
                results.append(
                    self._generate_with_cache(
@@ -865,6 +871,9 @@ class BaseChatModel(BaseLanguageModel[BaseMessage], ABC):
                run_id=run_id,
            )

+        input_messages = [
+            _normalize_messages(message_list) for message_list in messages
+        ]
        results = await asyncio.gather(
            *[
                self._agenerate_with_cache(
@@ -873,7 +882,7 @@ class BaseChatModel(BaseLanguageModel[BaseMessage], ABC):
                    run_manager=run_managers[i] if run_managers else None,
                    **kwargs,
                )
-                for i, m in enumerate(messages)
+                for i, m in enumerate(input_messages)
            ],
            return_exceptions=True,
        )
@@ -540,6 +540,12 @@ SERIALIZABLE_MAPPING: dict[tuple[str, ...], tuple[str, ...]] = {
        "chat_models",
        "ChatSambaStudio",
    ),
+    ("langchain_core", "prompts", "message", "_DictMessagePromptTemplate"): (
+        "langchain_core",
+        "prompts",
+        "dict",
+        "DictPromptTemplate",
+    ),
}

# Needed for backwards compatibility for old versions of LangChain where things
@@ -33,6 +33,7 @@ if TYPE_CHECKING:
    )
    from langchain_core.messages.chat import ChatMessage, ChatMessageChunk
    from langchain_core.messages.content_blocks import (
+        convert_to_openai_data_block,
        convert_to_openai_image_block,
        is_data_content_block,
    )
@@ -83,6 +84,7 @@ __all__ = (
    "ToolMessageChunk",
    "RemoveMessage",
    "_message_from_dict",
+    "convert_to_openai_data_block",
    "convert_to_openai_image_block",
    "convert_to_messages",
    "get_buffer_string",
@@ -124,6 +126,7 @@ _dynamic_imports = {
    "MessageLikeRepresentation": "utils",
    "_message_from_dict": "utils",
    "convert_to_messages": "utils",
+    "convert_to_openai_data_block": "content_blocks",
    "convert_to_openai_image_block": "content_blocks",
    "convert_to_openai_messages": "utils",
    "filter_messages": "utils",
@@ -1,5 +1,6 @@
"""Types for content blocks."""

+import warnings
from typing import Any, Literal, Union

from pydantic import TypeAdapter, ValidationError
@@ -108,3 +109,47 @@ def convert_to_openai_image_block(content_block: dict[str, Any]) -> dict:
        }
    error_message = "Unsupported source type. Only 'url' and 'base64' are supported."
    raise ValueError(error_message)


def convert_to_openai_data_block(block: dict) -> dict:
    """Format standard data content block to format expected by OpenAI."""
    if block["type"] == "image":
        formatted_block = convert_to_openai_image_block(block)

    elif block["type"] == "file":
        if block["source_type"] == "base64":
            file = {"file_data": f"data:{block['mime_type']};base64,{block['data']}"}
            if filename := block.get("filename"):
                file["filename"] = filename
            elif (metadata := block.get("metadata")) and ("filename" in metadata):
                file["filename"] = metadata["filename"]
            else:
                warnings.warn(
                    "OpenAI may require a filename for file inputs. Specify a filename "
                    "in the content block: {'type': 'file', 'source_type': 'base64', "
                    "'mime_type': 'application/pdf', 'data': '...', "
                    "'filename': 'my-pdf'}",
                    stacklevel=1,
                )
            formatted_block = {"type": "file", "file": file}
        elif block["source_type"] == "id":
            formatted_block = {"type": "file", "file": {"file_id": block["id"]}}
        else:
            error_msg = "source_type base64 or id is required for file blocks."
            raise ValueError(error_msg)

    elif block["type"] == "audio":
        if block["source_type"] == "base64":
            format = block["mime_type"].split("/")[-1]
            formatted_block = {
                "type": "input_audio",
                "input_audio": {"data": block["data"], "format": format},
            }
        else:
            error_msg = "source_type base64 is required for audio blocks."
            raise ValueError(error_msg)
    else:
        error_msg = f"Block of type {block['type']} is not supported."
        raise ValueError(error_msg)

    return formatted_block
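To see the audio branch of that conversion in isolation — a minimal sketch with a hypothetical helper name, not the library's public API — the mime subtype simply becomes OpenAI's `format` field:

```python
def audio_block_to_openai(block):
    """Sketch of the audio branch: standard base64 audio block -> input_audio shape."""
    if block["type"] != "audio" or block["source_type"] != "base64":
        raise ValueError("only base64 audio blocks are handled in this sketch")
    audio_format = block["mime_type"].split("/")[-1]  # "audio/wav" -> "wav"
    return {
        "type": "input_audio",
        "input_audio": {"data": block["data"], "format": audio_format},
    }

result = audio_block_to_openai(
    {"type": "audio", "source_type": "base64", "mime_type": "audio/wav", "data": "UklGRg=="}
)
print(result)
```

This is the inverse of the `input_audio` case in `_convert_openai_format_to_data_block`, which reconstructs `mime_type` as `f"audio/{format}"`.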
@@ -12,6 +12,7 @@ from __future__ import annotations

import base64
import inspect
import json
+import logging
import math
from collections.abc import Iterable, Sequence
from functools import partial
@@ -30,6 +31,7 @@ from typing import (
from pydantic import Discriminator, Field, Tag

from langchain_core.exceptions import ErrorCode, create_message
+from langchain_core.messages import convert_to_openai_data_block, is_data_content_block
from langchain_core.messages.ai import AIMessage, AIMessageChunk
from langchain_core.messages.base import BaseMessage, BaseMessageChunk
from langchain_core.messages.chat import ChatMessage, ChatMessageChunk
@@ -46,6 +48,8 @@ if TYPE_CHECKING:
    from langchain_core.prompt_values import PromptValue
    from langchain_core.runnables.base import Runnable

+logger = logging.getLogger(__name__)
+

def _get_type(v: Any) -> str:
    """Get the type associated with the object for serialization purposes."""
@@ -1067,6 +1071,17 @@ def convert_to_openai_messages(
                        "image_url": block["image_url"],
                    }
                )
+            # Standard multi-modal content block
+            elif is_data_content_block(block):
+                formatted_block = convert_to_openai_data_block(block)
+                if (
+                    formatted_block.get("type") == "file"
+                    and "file" in formatted_block
+                    and "filename" not in formatted_block["file"]
+                ):
+                    logger.info("Generating a fallback filename.")
+                    formatted_block["file"]["filename"] = "LC_AUTOGENERATED"
+                content.append(formatted_block)
            # Anthropic and Bedrock converse format
            elif (block.get("type") == "image") or "image" in block:
                # Anthropic
@@ -44,6 +44,7 @@ if TYPE_CHECKING:
        MessagesPlaceholder,
        SystemMessagePromptTemplate,
    )
+    from langchain_core.prompts.dict import DictPromptTemplate
    from langchain_core.prompts.few_shot import (
        FewShotChatMessagePromptTemplate,
        FewShotPromptTemplate,
@@ -68,6 +69,7 @@ __all__ = (
    "BasePromptTemplate",
    "ChatMessagePromptTemplate",
    "ChatPromptTemplate",
+    "DictPromptTemplate",
    "FewShotPromptTemplate",
    "FewShotPromptWithTemplates",
    "FewShotChatMessagePromptTemplate",
@@ -94,6 +96,7 @@ _dynamic_imports = {
    "BaseChatPromptTemplate": "chat",
    "ChatMessagePromptTemplate": "chat",
    "ChatPromptTemplate": "chat",
+    "DictPromptTemplate": "dict",
    "HumanMessagePromptTemplate": "chat",
    "MessagesPlaceholder": "chat",
    "SystemMessagePromptTemplate": "chat",
@@ -37,10 +37,10 @@ from langchain_core.messages import (
from langchain_core.messages.base import get_msg_title_repr
from langchain_core.prompt_values import ChatPromptValue, ImageURL, PromptValue
from langchain_core.prompts.base import BasePromptTemplate
+from langchain_core.prompts.dict import DictPromptTemplate
from langchain_core.prompts.image import ImagePromptTemplate
from langchain_core.prompts.message import (
    BaseMessagePromptTemplate,
-    _DictMessagePromptTemplate,
)
from langchain_core.prompts.prompt import PromptTemplate
from langchain_core.prompts.string import (
@@ -396,9 +396,7 @@ class _StringImageMessagePromptTemplate(BaseMessagePromptTemplate):

    prompt: Union[
        StringPromptTemplate,
-        list[
-            Union[StringPromptTemplate, ImagePromptTemplate, _DictMessagePromptTemplate]
-        ],
+        list[Union[StringPromptTemplate, ImagePromptTemplate, DictPromptTemplate]],
    ]
    """Prompt template."""
    additional_kwargs: dict = Field(default_factory=dict)
@@ -447,7 +445,12 @@ class _StringImageMessagePromptTemplate(BaseMessagePromptTemplate):
            raise ValueError(msg)
        prompt = []
        for tmpl in template:
-            if isinstance(tmpl, str) or isinstance(tmpl, dict) and "text" in tmpl:
+            if (
+                isinstance(tmpl, str)
+                or isinstance(tmpl, dict)
+                and "text" in tmpl
+                and set(tmpl.keys()) <= {"type", "text"}
+            ):
                if isinstance(tmpl, str):
                    text: str = tmpl
                else:
@@ -457,7 +460,15 @@ class _StringImageMessagePromptTemplate(BaseMessagePromptTemplate):
                        text, template_format=template_format
                    )
                )
-            elif isinstance(tmpl, dict) and "image_url" in tmpl:
+            elif (
+                isinstance(tmpl, dict)
+                and "image_url" in tmpl
+                and set(tmpl.keys())
+                <= {
+                    "type",
+                    "image_url",
+                }
+            ):
                img_template = cast("_ImageTemplateParam", tmpl)["image_url"]
                input_variables = []
                if isinstance(img_template, str):
@@ -503,7 +514,7 @@ class _StringImageMessagePromptTemplate(BaseMessagePromptTemplate):
                        "format."
                    )
                    raise ValueError(msg)
-                data_template_obj = _DictMessagePromptTemplate(
+                data_template_obj = DictPromptTemplate(
                    template=cast("dict[str, Any]", tmpl),
                    template_format=template_format,
                )
@@ -592,7 +603,7 @@ class _StringImageMessagePromptTemplate(BaseMessagePromptTemplate):
            elif isinstance(prompt, ImagePromptTemplate):
                formatted = prompt.format(**inputs)
                content.append({"type": "image_url", "image_url": formatted})
-            elif isinstance(prompt, _DictMessagePromptTemplate):
+            elif isinstance(prompt, DictPromptTemplate):
                formatted = prompt.format(**inputs)
                content.append(formatted)
        return self._msg_class(
@@ -624,7 +635,7 @@ class _StringImageMessagePromptTemplate(BaseMessagePromptTemplate):
            elif isinstance(prompt, ImagePromptTemplate):
                formatted = await prompt.aformat(**inputs)
                content.append({"type": "image_url", "image_url": formatted})
-            elif isinstance(prompt, _DictMessagePromptTemplate):
+            elif isinstance(prompt, DictPromptTemplate):
                formatted = prompt.format(**inputs)
                content.append(formatted)
        return self._msg_class(
137
libs/core/langchain_core/prompts/dict.py
Normal file
137
libs/core/langchain_core/prompts/dict.py
Normal file
@@ -0,0 +1,137 @@
|
||||
"""Dict prompt template."""
|
||||
|
||||
import warnings
|
||||
from functools import cached_property
|
||||
from typing import Any, Literal, Optional
|
||||
|
||||
from langchain_core.load import dumpd
|
||||
from langchain_core.prompts.string import (
|
||||
DEFAULT_FORMATTER_MAPPING,
|
||||
get_template_variables,
|
||||
)
|
||||
from langchain_core.runnables import RunnableConfig, RunnableSerializable
|
||||
from langchain_core.runnables.config import ensure_config
|
||||
|
||||
|
||||
class DictPromptTemplate(RunnableSerializable[dict, dict]):
|
||||
"""Template represented by a dict.
|
||||
|
||||
Recognizes variables in f-string or mustache formatted string dict values. Does NOT
|
||||
recognize variables in dict keys. Applies recursively.
|
||||
"""
|
||||
|
||||
template: dict[str, Any]
|
||||
template_format: Literal["f-string", "mustache"]
|
||||
|
||||
@property
|
||||
def input_variables(self) -> list[str]:
|
||||
"""Template input variables."""
|
||||
return _get_input_variables(self.template, self.template_format)
|
||||
|
||||
def format(self, **kwargs: Any) -> dict[str, Any]:
|
||||
"""Format the prompt with the inputs."""
|
||||
return _insert_input_variables(self.template, kwargs, self.template_format)
|
||||
|
||||
    async def aformat(self, **kwargs: Any) -> dict[str, Any]:
        """Format the prompt with the inputs."""
        return self.format(**kwargs)

    def invoke(
        self, input: dict, config: Optional[RunnableConfig] = None, **kwargs: Any
    ) -> dict:
        """Invoke the prompt."""
        return self._call_with_config(
            lambda x: self.format(**x),
            input,
            ensure_config(config),
            run_type="prompt",
            serialized=self._serialized,
            **kwargs,
        )

    @property
    def _prompt_type(self) -> str:
        return "dict-prompt"

    @cached_property
    def _serialized(self) -> dict[str, Any]:
        return dumpd(self)

    @classmethod
    def is_lc_serializable(cls) -> bool:
        """Return whether or not the class is serializable.

        Returns: True.
        """
        return True

    @classmethod
    def get_lc_namespace(cls) -> list[str]:
        """Serialization namespace."""
        return ["langchain_core", "prompts", "dict"]

    def pretty_repr(self, *, html: bool = False) -> str:
        """Human-readable representation.

        Args:
            html: Whether to format as HTML. Defaults to False.

        Returns:
            Human-readable representation.
        """
        raise NotImplementedError


def _get_input_variables(
    template: dict, template_format: Literal["f-string", "mustache"]
) -> list[str]:
    input_variables = []
    for v in template.values():
        if isinstance(v, str):
            input_variables += get_template_variables(v, template_format)
        elif isinstance(v, dict):
            input_variables += _get_input_variables(v, template_format)
        elif isinstance(v, (list, tuple)):
            for x in v:
                if isinstance(x, str):
                    input_variables += get_template_variables(x, template_format)
                elif isinstance(x, dict):
                    input_variables += _get_input_variables(x, template_format)
    return list(set(input_variables))


def _insert_input_variables(
    template: dict[str, Any],
    inputs: dict[str, Any],
    template_format: Literal["f-string", "mustache"],
) -> dict[str, Any]:
    formatted = {}
    formatter = DEFAULT_FORMATTER_MAPPING[template_format]
    for k, v in template.items():
        if isinstance(v, str):
            formatted[k] = formatter(v, **inputs)
        elif isinstance(v, dict):
            if k == "image_url" and "path" in v:
                msg = (
                    "Specifying image inputs via file path in environments with "
                    "user-input paths is a security vulnerability. Out of an abundance "
                    "of caution, the utility has been removed to prevent possible "
                    "misuse."
                )
                warnings.warn(msg, stacklevel=2)
            formatted[k] = _insert_input_variables(v, inputs, template_format)
        elif isinstance(v, (list, tuple)):
            formatted_v = []
            for x in v:
                if isinstance(x, str):
                    formatted_v.append(formatter(x, **inputs))
                elif isinstance(x, dict):
                    formatted_v.append(
                        _insert_input_variables(x, inputs, template_format)
                    )
            formatted[k] = type(v)(formatted_v)
        else:
            formatted[k] = v
    return formatted
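The recursive walk that `_insert_input_variables` performs can be sketched in isolation. Below is a minimal, hypothetical reimplementation that substitutes `str.format` for the library's `DEFAULT_FORMATTER_MAPPING` lookup; the function name `fill_template` is illustrative, not part of the langchain-core API:

```python
from typing import Any


def fill_template(template: dict[str, Any], inputs: dict[str, Any]) -> dict[str, Any]:
    """Recursively substitute input variables into the string values of a dict."""
    filled: dict[str, Any] = {}
    for key, value in template.items():
        if isinstance(value, str):
            filled[key] = value.format(**inputs)
        elif isinstance(value, dict):
            filled[key] = fill_template(value, inputs)
        elif isinstance(value, (list, tuple)):
            # Preserve the container type; format strings and dicts inside it.
            filled[key] = type(value)(
                x.format(**inputs) if isinstance(x, str)
                else fill_template(x, inputs) if isinstance(x, dict)
                else x
                for x in value
            )
        else:
            filled[key] = value
    return filled


result = fill_template(
    {"type": "text", "text": "{msg}", "cache_control": {"type": "{cache}"}},
    {"msg": "hello", "cache": "ephemeral"},
)
```

Non-string leaf values pass through untouched, which is why templates can mix literal fields (e.g. `"type": "text"`) with templated ones.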
@@ -3,14 +3,10 @@
from __future__ import annotations

from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Any, Literal
from typing import TYPE_CHECKING, Any

from langchain_core.load import Serializable
from langchain_core.messages import BaseMessage, convert_to_messages
from langchain_core.prompts.string import (
    DEFAULT_FORMATTER_MAPPING,
    get_template_variables,
)
from langchain_core.messages import BaseMessage
from langchain_core.utils.interactive_env import is_interactive_env

if TYPE_CHECKING:
@@ -98,89 +94,3 @@ class BaseMessagePromptTemplate(Serializable, ABC):

        prompt = ChatPromptTemplate(messages=[self])
        return prompt + other


class _DictMessagePromptTemplate(BaseMessagePromptTemplate):
    """Template represented by a dict that recursively fills input vars in string vals.

    Special handling of image_url dicts to load local paths. These look like:
    ``{"type": "image_url", "image_url": {"path": "..."}}``
    """

    template: dict[str, Any]
    template_format: Literal["f-string", "mustache"]

    def format_messages(self, **kwargs: Any) -> list[BaseMessage]:
        msg_dict = _insert_input_variables(self.template, kwargs, self.template_format)
        return convert_to_messages([msg_dict])

    @property
    def input_variables(self) -> list[str]:
        return _get_input_variables(self.template, self.template_format)

    @property
    def _prompt_type(self) -> str:
        return "message-dict-prompt"

    @classmethod
    def get_lc_namespace(cls) -> list[str]:
        return ["langchain_core", "prompts", "message"]

    def format(
        self,
        **kwargs: Any,
    ) -> dict[str, Any]:
        """Format the prompt with the inputs."""
        return _insert_input_variables(self.template, kwargs, self.template_format)


def _get_input_variables(
    template: dict, template_format: Literal["f-string", "mustache"]
) -> list[str]:
    input_variables = []
    for v in template.values():
        if isinstance(v, str):
            input_variables += get_template_variables(v, template_format)
        elif isinstance(v, dict):
            input_variables += _get_input_variables(v, template_format)
        elif isinstance(v, (list, tuple)):
            for x in v:
                if isinstance(x, str):
                    input_variables += get_template_variables(x, template_format)
                elif isinstance(x, dict):
                    input_variables += _get_input_variables(x, template_format)
    return list(set(input_variables))


def _insert_input_variables(
    template: dict[str, Any],
    inputs: dict[str, Any],
    template_format: Literal["f-string", "mustache"],
) -> dict[str, Any]:
    formatted = {}
    formatter = DEFAULT_FORMATTER_MAPPING[template_format]
    for k, v in template.items():
        if isinstance(v, str):
            formatted[k] = formatter(v, **inputs)
        elif isinstance(v, dict):
            # No longer support loading local images.
            if k == "image_url" and "path" in v:
                msg = (
                    "Specifying image inputs via file path in environments with "
                    "user-input paths is a security vulnerability. Out of an abundance "
                    "of caution, the utility has been removed to prevent possible "
                    "misuse."
                )
                raise ValueError(msg)
            formatted[k] = _insert_input_variables(v, inputs, template_format)
        elif isinstance(v, (list, tuple)):
            formatted_v = []
            for x in v:
                if isinstance(x, str):
                    formatted_v.append(formatter(x, **inputs))
                elif isinstance(x, dict):
                    formatted_v.append(
                        _insert_input_variables(x, inputs, template_format)
                    )
            formatted[k] = type(v)(formatted_v)
    return formatted
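The companion `_get_input_variables` helper above collects variable names the same recursive way. For the f-string case, the standard library's `string.Formatter` can stand in for `get_template_variables`; this sketch (the name `extract_variables` is hypothetical) shows the idea:

```python
import string
from typing import Any


def _field_names(text: str) -> set[str]:
    """Collect f-string field names like '{text1}' from a template string."""
    return {
        field
        for _, field, _, _ in string.Formatter().parse(text)
        if field  # skip literal-only segments and auto-numbered fields
    }


def extract_variables(template: dict[str, Any]) -> list[str]:
    """Recursively collect f-string variable names from a dict template."""
    names: set[str] = set()
    for value in template.values():
        if isinstance(value, str):
            names |= _field_names(value)
        elif isinstance(value, dict):
            names |= set(extract_variables(value))
        elif isinstance(value, (list, tuple)):
            for item in value:
                if isinstance(item, str):
                    names |= _field_names(item)
                elif isinstance(item, dict):
                    names |= set(extract_variables(item))
    return sorted(names)
```

For example, `extract_variables({"text": "{text1}", "cache_control": {"type": "{cache_type}"}})` yields `["cache_type", "text1"]`, mirroring how `input_variables` is derived for the dict templates in this diff.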
@@ -1,3 +1,3 @@
"""langchain-core version information and utilities."""

VERSION = "0.3.55"
VERSION = "0.3.56"
@@ -17,7 +17,7 @@ dependencies = [
    "pydantic<3.0.0,>=2.7.4; python_full_version >= \"3.12.4\"",
]
name = "langchain-core"
version = "0.3.55"
version = "0.3.56"
description = "Building applications with LLMs through composability"
readme = "README.md"
@@ -13,6 +13,7 @@ from langchain_core.language_models import (
    FakeListChatModel,
    ParrotFakeChatModel,
)
from langchain_core.language_models._utils import _normalize_messages
from langchain_core.language_models.fake_chat_models import FakeListChatModelError
from langchain_core.messages import (
    AIMessage,
@@ -455,3 +456,143 @@ def test_trace_images_in_openai_format() -> None:
            "url": "https://example.com/image.png",
        }
    ]


def test_extend_support_to_openai_multimodal_formats() -> None:
    """Test that chat models normalize OpenAI file and audio inputs."""
    llm = ParrotFakeChatModel()
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Hello"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/image.png"},
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."},
                },
                {
                    "type": "file",
                    "file": {
                        "filename": "draconomicon.pdf",
                        "file_data": "data:application/pdf;base64,<base64 string>",
                    },
                },
                {
                    "type": "file",
                    "file": {
                        "file_data": "data:application/pdf;base64,<base64 string>",
                    },
                },
                {
                    "type": "file",
                    "file": {"file_id": "<file id>"},
                },
                {
                    "type": "input_audio",
                    "input_audio": {"data": "<base64 data>", "format": "wav"},
                },
            ],
        },
    ]
    expected_content = [
        {"type": "text", "text": "Hello"},
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/image.png"},
        },
        {
            "type": "image_url",
            "image_url": {"url": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."},
        },
        {
            "type": "file",
            "source_type": "base64",
            "data": "<base64 string>",
            "mime_type": "application/pdf",
            "filename": "draconomicon.pdf",
        },
        {
            "type": "file",
            "source_type": "base64",
            "data": "<base64 string>",
            "mime_type": "application/pdf",
        },
        {
            "type": "file",
            "file": {"file_id": "<file id>"},
        },
        {
            "type": "audio",
            "source_type": "base64",
            "data": "<base64 data>",
            "mime_type": "audio/wav",
        },
    ]
    response = llm.invoke(messages)
    assert response.content == expected_content

    # Test no mutation
    assert messages[0]["content"] == [
        {"type": "text", "text": "Hello"},
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/image.png"},
        },
        {
            "type": "image_url",
            "image_url": {"url": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."},
        },
        {
            "type": "file",
            "file": {
                "filename": "draconomicon.pdf",
                "file_data": "data:application/pdf;base64,<base64 string>",
            },
        },
        {
            "type": "file",
            "file": {
                "file_data": "data:application/pdf;base64,<base64 string>",
            },
        },
        {
            "type": "file",
            "file": {"file_id": "<file id>"},
        },
        {
            "type": "input_audio",
            "input_audio": {"data": "<base64 data>", "format": "wav"},
        },
    ]


def test_normalize_messages_edge_cases() -> None:
    # Test some blocks that should pass through
    messages = [
        HumanMessage(
            content=[
                {
                    "type": "file",
                    "file": "uri",
                },
                {
                    "type": "input_file",
                    "file_data": "uri",
                    "filename": "file-name",
                },
                {
                    "type": "input_audio",
                    "input_audio": "uri",
                },
                {
                    "type": "input_image",
                    "image_url": "uri",
                },
            ]
        )
    ]
    assert messages == _normalize_messages(messages)
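The normalization the test above asserts maps OpenAI-style `file` and `input_audio` parts onto LangChain's standard `source_type: "base64"` blocks, leaving `file_id` references and image blocks untouched. A minimal sketch of that mapping, assuming a single-block helper (`normalize_block` is an illustrative name, not the `_normalize_messages` internals):

```python
from typing import Any


def normalize_block(block: dict[str, Any]) -> dict[str, Any]:
    """Map OpenAI-style file/audio content blocks to standard base64 blocks."""
    if block.get("type") == "file" and "file_data" in block.get("file", {}):
        # Split the data URL: "data:<mime>;base64,<data>"
        header, _, data = block["file"]["file_data"].partition(";base64,")
        out: dict[str, Any] = {
            "type": "file",
            "source_type": "base64",
            "data": data,
            "mime_type": header.removeprefix("data:"),
        }
        if "filename" in block["file"]:
            out["filename"] = block["file"]["filename"]
        return out
    if block.get("type") == "input_audio":
        audio = block["input_audio"]
        return {
            "type": "audio",
            "source_type": "base64",
            "data": audio["data"],
            "mime_type": f"audio/{audio['format']}",
        }
    # file_id references and other block types pass through unchanged.
    return block


pdf = {
    "type": "file",
    "file": {
        "filename": "draconomicon.pdf",
        "file_data": "data:application/pdf;base64,<base64 string>",
    },
}
normalized = normalize_block(pdf)
```

Note that the real `_normalize_messages` also copies messages before rewriting them, which is what the "Test no mutation" section verifies.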
@@ -33,6 +33,7 @@ EXPECTED_ALL = [
    "filter_messages",
    "merge_message_runs",
    "trim_messages",
    "convert_to_openai_data_block",
    "convert_to_openai_image_block",
    "convert_to_openai_messages",
]
@@ -1186,6 +1186,76 @@ def test_convert_to_openai_messages_developer() -> None:
    assert result == [{"role": "developer", "content": "a"}] * 2


def test_convert_to_openai_messages_multimodal() -> None:
    messages = [
        HumanMessage(
            content=[
                {"type": "text", "text": "Text message"},
                {
                    "type": "image",
                    "source_type": "url",
                    "url": "https://example.com/test.png",
                },
                {
                    "type": "image",
                    "source_type": "base64",
                    "data": "<base64 string>",
                    "mime_type": "image/png",
                },
                {
                    "type": "file",
                    "source_type": "base64",
                    "data": "<base64 string>",
                    "mime_type": "application/pdf",
                    "filename": "test.pdf",
                },
                {
                    "type": "file",
                    "source_type": "id",
                    "id": "file-abc123",
                },
                {
                    "type": "audio",
                    "source_type": "base64",
                    "data": "<base64 string>",
                    "mime_type": "audio/wav",
                },
            ]
        )
    ]
    result = convert_to_openai_messages(messages, text_format="block")
    assert len(result) == 1
    message = result[0]
    assert len(message["content"]) == 6

    # Test adding filename
    messages = [
        HumanMessage(
            content=[
                {
                    "type": "file",
                    "source_type": "base64",
                    "data": "<base64 string>",
                    "mime_type": "application/pdf",
                },
            ]
        )
    ]
    with pytest.warns(match="filename"):
        result = convert_to_openai_messages(messages, text_format="block")
    assert len(result) == 1
    message = result[0]
    assert len(message["content"]) == 1
    block = message["content"][0]
    assert block == {
        "type": "file",
        "file": {
            "file_data": "data:application/pdf;base64,<base64 string>",
            "filename": "LC_AUTOGENERATED",
        },
    }


def test_count_tokens_approximately_empty_messages() -> None:
    # Test with empty message list
    assert count_tokens_approximately([]) == 0
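Going the other direction, the test above shows `convert_to_openai_messages` rebuilding a data URL from a base64 file block and injecting a placeholder filename (`LC_AUTOGENERATED`) when none was provided, with a warning. A hedged sketch of that rendering step (`to_openai_file_block` is an illustrative name for internal behavior, not a public API):

```python
from typing import Any


def to_openai_file_block(block: dict[str, Any]) -> dict[str, Any]:
    """Render a normalized base64 file block in OpenAI Chat Completions format."""
    return {
        "type": "file",
        "file": {
            # Reassemble the data URL from mime type and base64 payload.
            "file_data": f"data:{block['mime_type']};base64,{block['data']}",
            # OpenAI's file part expects a filename; fall back to a placeholder.
            "filename": block.get("filename", "LC_AUTOGENERATED"),
        },
    }


block = {
    "type": "file",
    "source_type": "base64",
    "data": "<base64 string>",
    "mime_type": "application/pdf",
}
rendered = to_openai_file_block(block)
```

The real implementation additionally emits the `pytest.warns(match="filename")` warning that the test pins down before substituting the placeholder.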
@@ -3135,6 +3135,27 @@
      'name': 'PromptTemplate',
      'type': 'constructor',
    }),
    dict({
      'id': list([
        'langchain_core',
        'prompts',
        'dict',
        'DictPromptTemplate',
      ]),
      'kwargs': dict({
        'template': dict({
          'cache_control': dict({
            'type': '{foo}',
          }),
          'text': "What's in this image?",
          'type': 'text',
        }),
        'template_format': 'f-string',
      }),
      'lc': 1,
      'name': 'DictPromptTemplate',
      'type': 'constructor',
    }),
    dict({
      'id': list([
        'langchain',
@@ -973,6 +973,11 @@ def test_chat_tmpl_serdes(snapshot: SnapshotAssertion) -> None:
        "hello",
        {"text": "What's in this image?"},
        {"type": "text", "text": "What's in this image?"},
        {
            "type": "text",
            "text": "What's in this image?",
            "cache_control": {"type": "{foo}"},
        },
        {
            "type": "image_url",
            "image_url": "data:image/jpeg;base64,{my_image}",
@@ -1012,7 +1017,7 @@ def test_chat_tmpl_serdes(snapshot: SnapshotAssertion) -> None:
@pytest.mark.xfail(
    reason=(
        "In a breaking release, we can update `_convert_to_message_template` to use "
        "_DictMessagePromptTemplate for all `dict` inputs, allowing for templatization "
        "DictPromptTemplate for all `dict` inputs, allowing for templatization "
        "of message attributes outside content blocks. That would enable the below "
        "test to pass."
    )
)
libs/core/tests/unit_tests/prompts/test_dict.py (new file, 34 lines)
@@ -0,0 +1,34 @@
from langchain_core.load import load
from langchain_core.prompts.dict import DictPromptTemplate


def test__dict_message_prompt_template_fstring() -> None:
    template = {
        "type": "text",
        "text": "{text1}",
        "cache_control": {"type": "{cache_type}"},
    }
    prompt = DictPromptTemplate(template=template, template_format="f-string")
    expected = {
        "type": "text",
        "text": "important message",
        "cache_control": {"type": "ephemeral"},
    }
    actual = prompt.format(text1="important message", cache_type="ephemeral")
    assert actual == expected


def test_deserialize_legacy() -> None:
    ser = {
        "type": "constructor",
        "lc": 1,
        "id": ["langchain_core", "prompts", "message", "_DictMessagePromptTemplate"],
        "kwargs": {
            "template_format": "f-string",
            "template": {"type": "audio", "audio": "{audio_data}"},
        },
    }
    expected = DictPromptTemplate(
        template={"type": "audio", "audio": "{audio_data}"}, template_format="f-string"
    )
    assert load(ser) == expected
@@ -6,6 +6,7 @@ EXPECTED_ALL = [
    "BasePromptTemplate",
    "ChatMessagePromptTemplate",
    "ChatPromptTemplate",
    "DictPromptTemplate",
    "FewShotPromptTemplate",
    "FewShotPromptWithTemplates",
    "FewShotChatMessagePromptTemplate",
@@ -1,61 +0,0 @@
|
||||
from pathlib import Path
|
||||
|
||||
from langchain_core.messages import AIMessage, BaseMessage, ToolMessage
|
||||
from langchain_core.prompts.message import _DictMessagePromptTemplate
|
||||
|
||||
CUR_DIR = Path(__file__).parent.absolute().resolve()
|
||||
|
||||
|
||||
def test__dict_message_prompt_template_fstring() -> None:
|
||||
template = {
|
||||
"role": "assistant",
|
||||
"content": [
|
||||
{"type": "text", "text": "{text1}", "cache_control": {"type": "ephemeral"}},
|
||||
],
|
||||
"name": "{name1}",
|
||||
"tool_calls": [
|
||||
{
|
||||
"name": "{tool_name1}",
|
||||
"args": {"arg1": "{tool_arg1}"},
|
||||
"id": "1",
|
||||
"type": "tool_call",
|
||||
}
|
||||
],
|
||||
}
|
||||
prompt = _DictMessagePromptTemplate(template=template, template_format="f-string")
|
||||
expected: BaseMessage = AIMessage(
|
||||
[
|
||||
{
|
||||
"type": "text",
|
||||
"text": "important message",
|
||||
"cache_control": {"type": "ephemeral"},
|
||||
},
|
||||
],
|
||||
name="foo",
|
||||
tool_calls=[
|
||||
{
|
||||
"name": "do_stuff",
|
||||
"args": {"arg1": "important arg1"},
|
||||
"id": "1",
|
||||
"type": "tool_call",
|
||||
}
|
||||
],
|
||||
)
|
||||
actual = prompt.format_messages(
|
||||
text1="important message",
|
||||
name1="foo",
|
||||
tool_arg1="important arg1",
|
||||
tool_name1="do_stuff",
|
||||
)[0]
|
||||
assert actual == expected
|
||||
|
||||
template = {
|
||||
"role": "tool",
|
||||
"content": "{content1}",
|
||||
"tool_call_id": "1",
|
||||
"name": "{name1}",
|
||||
}
|
||||
prompt = _DictMessagePromptTemplate(template=template, template_format="f-string")
|
||||
expected = ToolMessage("foo", name="bar", tool_call_id="1")
|
||||
actual = prompt.format_messages(content1="foo", name1="bar")[0]
|
||||
assert actual == expected
|
||||
libs/core/uv.lock (generated, 4 lines changed)
@@ -937,7 +937,7 @@ wheels = [

[[package]]
name = "langchain-core"
version = "0.3.55"
version = "0.3.56"
source = { editable = "." }
dependencies = [
    { name = "jsonpatch" },
@@ -1104,7 +1104,7 @@ test-integration = [
]
typing = [
    { name = "lxml-stubs", specifier = ">=0.5.1,<1.0.0" },
    { name = "mypy", specifier = ">=1.10,<2.0" },
    { name = "mypy", specifier = ">=1.15,<2.0" },
    { name = "tiktoken", specifier = ">=0.8.0,<1.0.0" },
    { name = "types-requests", specifier = ">=2.31.0.20240218,<3.0.0.0" },
]
@@ -5,615 +5,641 @@ packages:
- name: langchain-core
  path: libs/core
  repo: langchain-ai/langchain
  downloads: 34037607
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 51178135
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-text-splitters
  path: libs/text-splitters
  repo: langchain-ai/langchain
  downloads: 15929924
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 18371499
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain
  path: libs/langchain
  repo: langchain-ai/langchain
  downloads: 57432421
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 68611637
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-community
  path: libs/community
  repo: langchain-ai/langchain
  downloads: 18667783
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 20961009
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-experimental
  path: libs/experimental
  repo: langchain-ai/langchain-experimental
  downloads: 1898303
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 1651817
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-cli
  path: libs/cli
  repo: langchain-ai/langchain
  downloads: 52317
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 55074
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-ai21
  path: libs/ai21
  repo: langchain-ai/langchain-ai21
  downloads: 4634
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 4684
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-anthropic
  path: libs/partners/anthropic
  repo: langchain-ai/langchain
  js: '@langchain/anthropic'
  downloads: 2206405
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 2205980
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-chroma
  path: libs/partners/chroma
  repo: langchain-ai/langchain
  downloads: 653121
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 934777
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-exa
  path: libs/partners/exa
  repo: langchain-ai/langchain
  provider_page: exa_search
  js: '@langchain/exa'
  downloads: 5577
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 5949
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-fireworks
  path: libs/partners/fireworks
  repo: langchain-ai/langchain
  downloads: 252470
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 253744
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-groq
  path: libs/partners/groq
  repo: langchain-ai/langchain
  js: '@langchain/groq'
  downloads: 623776
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 713166
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-huggingface
  path: libs/partners/huggingface
  repo: langchain-ai/langchain
  downloads: 520031
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 565389
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-ibm
  path: libs/ibm
  repo: langchain-ai/langchain-ibm
  js: '@langchain/ibm'
  downloads: 138680
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 193195
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-localai
  path: libs/localai
  repo: mkhludnev/langchain-localai
  downloads: 551
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 811
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-milvus
  path: libs/milvus
  repo: langchain-ai/langchain-milvus
  downloads: 212461
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  disabled: true
  downloads: 207750
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-mistralai
  path: libs/partners/mistralai
  repo: langchain-ai/langchain
  js: '@langchain/mistralai'
  downloads: 347559
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 333887
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-mongodb
  path: libs/langchain-mongodb
  repo: langchain-ai/langchain-mongodb
  provider_page: mongodb_atlas
  js: '@langchain/mongodb'
  downloads: 206335
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 229323
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-nomic
  path: libs/partners/nomic
  repo: langchain-ai/langchain
  js: '@langchain/nomic'
  downloads: 13770
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 13453
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-openai
  path: libs/partners/openai
  repo: langchain-ai/langchain
  js: '@langchain/openai'
  downloads: 11970288
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 12632953
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-pinecone
  path: libs/pinecone
  repo: langchain-ai/langchain-pinecone
  js: '@langchain/pinecone'
  downloads: 460930
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 731139
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-prompty
  path: libs/partners/prompty
  repo: langchain-ai/langchain
  provider_page: microsoft
  downloads: 1434
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 2215
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-qdrant
  path: libs/partners/qdrant
  repo: langchain-ai/langchain
  js: '@langchain/qdrant'
  downloads: 157906
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 156264
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-scrapegraph
  path: .
  repo: ScrapeGraphAI/langchain-scrapegraph
  downloads: 1248
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 1338
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-sema4
  path: libs/sema4
  repo: langchain-ai/langchain-sema4
  provider_page: robocorp
  downloads: 1661
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 1864
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-together
  path: libs/together
  repo: langchain-ai/langchain-together
  downloads: 87068
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 84925
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-upstage
  path: libs/upstage
  repo: langchain-ai/langchain-upstage
  downloads: 19096
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 20074
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-voyageai
  path: libs/partners/voyageai
  repo: langchain-ai/langchain
  downloads: 32072
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 31164
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-aws
  name_title: AWS
  path: libs/aws
  repo: langchain-ai/langchain-aws
  js: '@langchain/aws'
  downloads: 2406627
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 2756214
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-astradb
  path: libs/astradb
  repo: langchain-ai/langchain-datastax
  downloads: 93059
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 100973
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-google-genai
  name_title: Google Generative AI
  path: libs/genai
  repo: langchain-ai/langchain-google
  provider_page: google
  js: '@langchain/google-genai'
  downloads: 1436931
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 1860492
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-google-vertexai
  path: libs/vertexai
  repo: langchain-ai/langchain-google
  provider_page: google
  js: '@langchain/google-vertexai'
  downloads: 12451626
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 14375847
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-google-community
  path: libs/community
  repo: langchain-ai/langchain-google
  provider_page: google
  downloads: 4685976
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 4565784
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-weaviate
  path: libs/weaviate
  repo: langchain-ai/langchain-weaviate
  js: '@langchain/weaviate'
  downloads: 51226
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 42280
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-cohere
  path: libs/cohere
  repo: langchain-ai/langchain-cohere
  js: '@langchain/cohere'
  downloads: 824573
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 816207
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-elasticsearch
  path: libs/elasticsearch
  repo: langchain-ai/langchain-elastic
  downloads: 172813
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 182874
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-nvidia-ai-endpoints
  path: libs/ai-endpoints
  repo: langchain-ai/langchain-nvidia
  provider_page: nvidia
  downloads: 190677
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 178772
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-postgres
  path: .
  repo: langchain-ai/langchain-postgres
  provider_page: pgvector
  downloads: 464832
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 751590
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-redis
  path: libs/redis
  repo: langchain-ai/langchain-redis
  js: '@langchain/redis'
  downloads: 34437
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 43514
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-unstructured
  path: libs/unstructured
  repo: langchain-ai/langchain-unstructured
  downloads: 160903
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 152489
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-azure-ai
  path: libs/azure-ai
  repo: langchain-ai/langchain-azure
  provider_page: azure_ai
  js: '@langchain/openai'
  downloads: 25508
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 29862
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-azure-dynamic-sessions
  path: libs/azure-dynamic-sessions
  repo: langchain-ai/langchain-azure
  provider_page: microsoft
  js: '@langchain/azure-dynamic-sessions'
  downloads: 10158
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 9328
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-sqlserver
  path: libs/sqlserver
  repo: langchain-ai/langchain-azure
  provider_page: microsoft
  downloads: 2337
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 2519
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-cerebras
  path: libs/cerebras
  repo: langchain-ai/langchain-cerebras
  downloads: 57330
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 66301
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-snowflake
  path: libs/snowflake
  repo: langchain-ai/langchain-snowflake
  downloads: 1906
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 2235
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: databricks-langchain
  name_title: Databricks
  path: integrations/langchain
  repo: databricks/databricks-ai-bridge
  provider_page: databricks
  downloads: 116103
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 112136
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-couchbase
  path: .
  repo: Couchbase-Ecosystem/langchain-couchbase
  downloads: 744
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 1251
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-ollama
  path: libs/partners/ollama
  repo: langchain-ai/langchain
  js: '@langchain/ollama'
  downloads: 948150
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 924780
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-box
  path: libs/box
  repo: box-community/langchain-box
  downloads: 563
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 703
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-tests
  path: libs/standard-tests
  repo: langchain-ai/langchain
  downloads: 252853
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 267152
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-neo4j
  path: libs/neo4j
  repo: langchain-ai/langchain-neo4j
  downloads: 50662
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 55071
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-linkup
  path: .
  repo: LinkupPlatform/langchain-linkup
  downloads: 581
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 782
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-yt-dlp
  path: .
  repo: aqib0770/langchain-yt-dlp
  downloads: 2254
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
  downloads: 2369
  downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
- name: langchain-oceanbase
  path: .
  repo: oceanbase/langchain-oceanbase
  downloads: 71
  downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 73
|
||||
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
|
||||
- name: langchain-predictionguard
|
||||
path: .
|
||||
repo: predictionguard/langchain-predictionguard
|
||||
downloads: 3230
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 4063
|
||||
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
|
||||
- name: langchain-cratedb
|
||||
path: .
|
||||
repo: crate/langchain-cratedb
|
||||
downloads: 185
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 216
|
||||
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
|
||||
- name: langchain-modelscope
|
||||
path: .
|
||||
repo: modelscope/langchain-modelscope
|
||||
downloads: 122
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 141
|
||||
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
|
||||
- name: langchain-falkordb
|
||||
path: .
|
||||
repo: kingtroga/langchain-falkordb
|
||||
downloads: 139
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 129
|
||||
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
|
||||
- name: langchain-dappier
|
||||
path: .
|
||||
repo: DappierAI/langchain-dappier
|
||||
downloads: 234
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 343
|
||||
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
|
||||
- name: langchain-pull-md
|
||||
path: .
|
||||
repo: chigwell/langchain-pull-md
|
||||
downloads: 117
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 135
|
||||
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
|
||||
- name: langchain-kuzu
|
||||
path: .
|
||||
repo: kuzudb/langchain-kuzu
|
||||
downloads: 352
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 760
|
||||
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
|
||||
- name: langchain-docling
|
||||
path: .
|
||||
repo: DS4SD/docling-langchain
|
||||
downloads: 16426
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 18845
|
||||
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
|
||||
- name: langchain-lindorm-integration
|
||||
path: .
|
||||
repo: AlwaysBluer/langchain-lindorm-integration
|
||||
provider_page: lindorm
|
||||
downloads: 69
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
|
||||
- name: langchain-hyperbrowser
|
||||
path: .
|
||||
repo: hyperbrowserai/langchain-hyperbrowser
|
||||
downloads: 203
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 523
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-fmp-data
|
||||
path: .
|
||||
repo: MehdiZare/langchain-fmp-data
|
||||
downloads: 100
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 108
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: tilores-langchain
|
||||
name_title: Tilores
|
||||
path: .
|
||||
repo: tilotech/tilores-langchain
|
||||
provider_page: tilores
|
||||
downloads: 93
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 124
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-pipeshift
|
||||
path: .
|
||||
repo: pipeshift-org/langchain-pipeshift
|
||||
downloads: 88
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 119
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-payman-tool
|
||||
path: .
|
||||
repo: paymanai/langchain-payman-tool
|
||||
downloads: 223
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 226
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-sambanova
|
||||
path: .
|
||||
repo: sambanova/langchain-sambanova
|
||||
downloads: 51371
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 53108
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-deepseek
|
||||
path: libs/partners/deepseek
|
||||
repo: langchain-ai/langchain
|
||||
provider_page: deepseek
|
||||
js: '@langchain/deepseek'
|
||||
downloads: 66642
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 100570
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-jenkins
|
||||
path: .
|
||||
repo: Amitgb14/langchain_jenkins
|
||||
downloads: 194
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 200
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-goodfire
|
||||
path: .
|
||||
repo: keenanpepper/langchain-goodfire
|
||||
downloads: 332
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 314
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-nimble
|
||||
path: .
|
||||
repo: Nimbleway/langchain-nimble
|
||||
downloads: 160
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 214
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-apify
|
||||
path: .
|
||||
repo: apify/langchain-apify
|
||||
downloads: 748
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 886
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langfair
|
||||
name_title: LangFair
|
||||
path: .
|
||||
repo: cvs-health/langfair
|
||||
downloads: 1051
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 1692
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-abso
|
||||
path: .
|
||||
repo: lunary-ai/langchain-abso
|
||||
downloads: 190
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 233
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-graph-retriever
|
||||
name_title: Graph RAG
|
||||
path: packages/langchain-graph-retriever
|
||||
repo: datastax/graph-rag
|
||||
provider_page: graph_rag
|
||||
downloads: 13573
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 47297
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-xai
|
||||
path: libs/partners/xai
|
||||
repo: langchain-ai/langchain
|
||||
downloads: 37703
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 44422
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-salesforce
|
||||
path: .
|
||||
repo: colesmcintosh/langchain-salesforce
|
||||
downloads: 366
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 455
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-discord-shikenso
|
||||
path: .
|
||||
repo: Shikenso-Analytics/langchain-discord
|
||||
downloads: 138
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 137
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-vdms
|
||||
name_title: VDMS
|
||||
path: .
|
||||
repo: IntelLabs/langchain-vdms
|
||||
downloads: 1455
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 11847
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-deeplake
|
||||
path: .
|
||||
repo: activeloopai/langchain-deeplake
|
||||
downloads: 186
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 117
|
||||
downloads_updated_at: '2025-04-22T15:25:01.432566+00:00'
|
||||
- name: langchain-cognee
|
||||
path: .
|
||||
repo: topoteretes/langchain-cognee
|
||||
downloads: 114
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-prolog
|
||||
path: .
|
||||
repo: apisani1/langchain-prolog
|
||||
downloads: 198
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 175
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-permit
|
||||
path: .
|
||||
repo: permitio/langchain-permit
|
||||
downloads: 240
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 266
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-pymupdf4llm
|
||||
path: .
|
||||
repo: lakinduboteju/langchain-pymupdf4llm
|
||||
downloads: 2354
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 5324
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-writer
|
||||
path: .
|
||||
repo: writer/langchain-writer
|
||||
downloads: 545
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 728
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-taiga
|
||||
name_title: Taiga
|
||||
path: .
|
||||
repo: Shikenso-Analytics/langchain-taiga
|
||||
downloads: 250
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 439
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-tableau
|
||||
name_title: Tableau
|
||||
path: .
|
||||
repo: Tab-SE/tableau_langchain
|
||||
downloads: 195
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 551
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: ads4gpts-langchain
|
||||
name_title: ADS4GPTs
|
||||
path: libs/python-sdk/ads4gpts-langchain
|
||||
repo: ADS4GPTs/ads4gpts
|
||||
provider_page: ads4gpts
|
||||
downloads: 930
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 626
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-contextual
|
||||
name_title: Contextual AI
|
||||
path: langchain-contextual
|
||||
repo: ContextualAI//langchain-contextual
|
||||
downloads: 785
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 365
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-valthera
|
||||
name_title: Valthera
|
||||
path: .
|
||||
repo: valthera/langchain-valthera
|
||||
downloads: 560
|
||||
downloads_updated_at: '2025-04-04T17:02:26.080646+00:00'
|
||||
downloads: 213
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-opengradient
|
||||
path: .
|
||||
repo: OpenGradient/og-langchain
|
||||
downloads: 408
|
||||
downloads_updated_at: '2025-04-04T17:02:26.080646+00:00'
|
||||
downloads: 263
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: goat-sdk-adapter-langchain
|
||||
name_title: GOAT SDK
|
||||
path: python/src/adapters/langchain
|
||||
repo: goat-sdk/goat
|
||||
provider_page: goat
|
||||
downloads: 421
|
||||
downloads_updated_at: '2025-04-04T17:02:05.408319+00:00'
|
||||
downloads: 418
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-netmind
|
||||
path: .
|
||||
repo: protagolabs/langchain-netmind
|
||||
downloads: 186
|
||||
downloads_updated_at: '2025-04-04T17:02:26.080646+00:00'
|
||||
downloads: 65
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-agentql
|
||||
path: langchain
|
||||
repo: tinyfish-io/agentql-integrations
|
||||
downloads: 502
|
||||
downloads_updated_at: '2025-04-04T17:02:43.163359+00:00'
|
||||
downloads: 227
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-xinference
|
||||
path: .
|
||||
repo: TheSongg/langchain-xinference
|
||||
downloads: 188
|
||||
downloads_updated_at: '2025-04-04T17:02:43.163359+00:00'
|
||||
downloads: 132
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: powerscale-rag-connector
|
||||
name_title: PowerScale RAG Connector
|
||||
path: .
|
||||
repo: dell/powerscale-rag-connector
|
||||
provider_page: dell
|
||||
downloads: 158
|
||||
downloads_updated_at: '2025-04-04T17:02:43.163359+00:00'
|
||||
downloads: 89
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-tavily
|
||||
path: .
|
||||
repo: tavily-ai/langchain-tavily
|
||||
downloads: 3298
|
||||
downloads_updated_at: '2025-04-04T17:04:16.538679+00:00'
|
||||
downloads: 13796
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-zotero-retriever
|
||||
name_title: Zotero
|
||||
path: .
|
||||
repo: TimBMK/langchain-zotero-retriever
|
||||
provider_page: zotero
|
||||
downloads: 169
|
||||
downloads_updated_at: '2025-04-04T17:04:16.538679+00:00'
|
||||
- name: langchain-naver-community
|
||||
downloads: 72
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-naver
|
||||
name_title: Naver
|
||||
path: .
|
||||
repo: NaverCloudPlatform/langchain-naver
|
||||
provider_page: naver
|
||||
downloads: 239
|
||||
downloads_updated_at: '2025-04-22T15:43:47.979572+00:00'
|
||||
- name: langchain-naver-community
|
||||
name_title: Naver (community-maintained)
|
||||
path: .
|
||||
repo: e7217/langchain-naver-community
|
||||
provider_page: naver
|
||||
downloads: 141
|
||||
downloads_updated_at: '2025-04-04T17:04:16.538679+00:00'
|
||||
downloads: 119
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-memgraph
|
||||
path: .
|
||||
repo: memgraph/langchain-memgraph
|
||||
downloads: 250
|
||||
downloads_updated_at: '2025-04-04T17:04:16.538679+00:00'
|
||||
downloads: 222
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-vectara
|
||||
path: libs/vectara
|
||||
repo: vectara/langchain-vectara
|
||||
downloads: 227
|
||||
downloads_updated_at: '2025-04-04T17:04:16.538679+00:00'
|
||||
downloads: 284
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-oxylabs
|
||||
path: .
|
||||
repo: oxylabs/langchain-oxylabs
|
||||
downloads: 141
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-perplexity
|
||||
path: libs/partners/perplexity
|
||||
repo: langchain-ai/langchain
|
||||
downloads: 3297
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-runpod
|
||||
name_title: RunPod
|
||||
path: .
|
||||
repo: runpod/langchain-runpod
|
||||
provider_page: runpod
|
||||
downloads: 145
|
||||
downloads_updated_at: '2025-04-04T17:06:18.386313+00:00'
|
||||
downloads: 283
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-mariadb
|
||||
path: .
|
||||
repo: mariadb-corporation/langchain-mariadb
|
||||
downloads: 741
|
||||
downloads_updated_at: '2025-04-04T17:06:18.386313+00:00'
|
||||
downloads: 428
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-qwq
|
||||
path: .
|
||||
repo: yigit353/langchain-qwq
|
||||
provider_page: alibaba_cloud
|
||||
downloads: 1062
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-litellm
|
||||
path: .
|
||||
repo: akshay-dongare/langchain-litellm
|
||||
downloads: 2114
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-cloudflare
|
||||
repo: cloudflare/langchain-cloudflare
|
||||
path: .
|
||||
repo: cloudflare/langchain-cloudflare
|
||||
downloads: 766
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-ydb
|
||||
path: .
|
||||
repo: ydb-platform/langchain-ydb
|
||||
downloads: 231
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-singlestore
|
||||
name_title: SingleStore
|
||||
path: .
|
||||
repo: singlestore-labs/langchain-singlestore
|
||||
downloads: 116
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-galaxia-retriever
|
||||
provider_page: galaxia
|
||||
path: .
|
||||
repo: rrozanski-smabbler/galaxia-langchain
|
||||
provider_page: galaxia
|
||||
downloads: 319
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
- name: langchain-valyu
|
||||
path: .
|
||||
repo: valyu-network/langchain-valyu
|
||||
downloads: 120
|
||||
downloads_updated_at: '2025-04-22T15:25:24.644345+00:00'
|
||||
|
||||
@@ -39,7 +39,7 @@ import os

 # Initialize a Fireworks model
 llm = Fireworks(
-    model="accounts/fireworks/models/mixtral-8x7b-instruct",
+    model="accounts/fireworks/models/llama-v3p1-8b-instruct",
     base_url="https://api.fireworks.ai/inference/v1/completions",
 )
 ```
@@ -279,7 +279,7 @@ class ChatFireworks(BaseChatModel):

         from langchain_fireworks.chat_models import ChatFireworks
         fireworks = ChatFireworks(
-            model_name="accounts/fireworks/models/mixtral-8x7b-instruct")
+            model_name="accounts/fireworks/models/llama-v3p1-8b-instruct")
     """

     @property
@@ -306,11 +306,9 @@ class ChatFireworks(BaseChatModel):

     client: Any = Field(default=None, exclude=True)  #: :meta private:
     async_client: Any = Field(default=None, exclude=True)  #: :meta private:
-    model_name: str = Field(
-        default="accounts/fireworks/models/mixtral-8x7b-instruct", alias="model"
-    )
+    model_name: str = Field(alias="model")
     """Model name to use."""
-    temperature: float = 0.0
+    temperature: Optional[float] = None
     """What sampling temperature to use."""
     stop: Optional[Union[str, list[str]]] = Field(default=None, alias="stop_sequences")
     """Default stop sequences."""
@@ -397,10 +395,11 @@ class ChatFireworks(BaseChatModel):
             "model": self.model_name,
             "stream": self.streaming,
             "n": self.n,
-            "temperature": self.temperature,
             "stop": self.stop,
             **self.model_kwargs,
         }
+        if self.temperature is not None:
+            params["temperature"] = self.temperature
         if self.max_tokens is not None:
             params["max_tokens"] = self.max_tokens
         return params
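The hunk above makes `temperature` an optional field with no default and only includes it in the request parameters when the caller actually sets it, so the API's own default applies otherwise. A minimal sketch of that only-send-when-set pattern (the function and parameter names here are illustrative, not the actual langchain-fireworks internals):

```python
from typing import Optional


def build_params(
    model: str,
    temperature: Optional[float] = None,
    max_tokens: Optional[int] = None,
) -> dict:
    # Required settings always go in the payload.
    params = {"model": model}
    # Optional settings are added only when explicitly set; unset values are
    # omitted so the server falls back to its own defaults.
    if temperature is not None:
        params["temperature"] = temperature
    if max_tokens is not None:
        params["max_tokens"] = max_tokens
    return params
```

Note the `is not None` check (rather than truthiness): an explicit `temperature=0.0` is still sent, which is exactly the distinction the diff introduces.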
@@ -7,14 +7,14 @@ authors = []
 license = { text = "MIT" }
 requires-python = "<4.0,>=3.9"
 dependencies = [
-    "langchain-core<1.0.0,>=0.3.49",
+    "langchain-core<1.0.0,>=0.3.55",
     "fireworks-ai>=0.13.0",
     "openai<2.0.0,>=1.10.0",
     "requests<3,>=2",
     "aiohttp<4.0.0,>=3.9.1",
 ]
 name = "langchain-fireworks"
-version = "0.2.9"
+version = "0.3.0"
 description = "An integration package connecting Fireworks and LangChain"
 readme = "README.md"

@@ -13,54 +13,13 @@ from typing_extensions import TypedDict

 from langchain_fireworks import ChatFireworks


-def test_chat_fireworks_call() -> None:
-    """Test valid call to fireworks."""
-    llm = ChatFireworks(  # type: ignore[call-arg]
-        model="accounts/fireworks/models/llama-v3p1-70b-instruct", temperature=0
-    )
-
-    resp = llm.invoke("Hello!")
-    assert isinstance(resp, AIMessage)
-
-    assert len(resp.content) > 0
-
-
-def test_tool_choice() -> None:
-    """Test that tool choice is respected."""
-    llm = ChatFireworks(  # type: ignore[call-arg]
-        model="accounts/fireworks/models/llama-v3p1-70b-instruct", temperature=0
-    )
-
-    class MyTool(BaseModel):
-        name: str
-        age: int
-
-    with_tool = llm.bind_tools([MyTool], tool_choice="MyTool")
-
-    resp = with_tool.invoke("Who was the 27 year old named Erick?")
-    assert isinstance(resp, AIMessage)
-    assert resp.content == ""  # should just be tool call
-    tool_calls = resp.additional_kwargs["tool_calls"]
-    assert len(tool_calls) == 1
-    tool_call = tool_calls[0]
-    assert tool_call["function"]["name"] == "MyTool"
-    assert json.loads(tool_call["function"]["arguments"]) == {
-        "age": 27,
-        "name": "Erick",
-    }
-    assert tool_call["type"] == "function"
-    assert isinstance(resp.tool_calls, list)
-    assert len(resp.tool_calls) == 1
-    tool_call = resp.tool_calls[0]
-    assert tool_call["name"] == "MyTool"
-    assert tool_call["args"] == {"age": 27, "name": "Erick"}
+_MODEL = "accounts/fireworks/models/llama-v3p1-8b-instruct"


 def test_tool_choice_bool() -> None:
     """Test that tool choice is respected just passing in True."""

-    llm = ChatFireworks(  # type: ignore[call-arg]
+    llm = ChatFireworks(
         model="accounts/fireworks/models/llama-v3p1-70b-instruct", temperature=0
     )

@@ -84,17 +43,9 @@ def test_tool_choice_bool() -> None:
     assert tool_call["type"] == "function"


-def test_stream() -> None:
-    """Test streaming tokens from ChatFireworks."""
-    llm = ChatFireworks()  # type: ignore[call-arg]
-
-    for token in llm.stream("I'm Pickle Rick"):
-        assert isinstance(token.content, str)
-
-
 async def test_astream() -> None:
     """Test streaming tokens from ChatFireworks."""
-    llm = ChatFireworks()  # type: ignore[call-arg]
+    llm = ChatFireworks(model=_MODEL)

     full: Optional[BaseMessageChunk] = None
     chunks_with_token_counts = 0
@@ -125,18 +76,9 @@ async def test_astream() -> None:
     assert full.response_metadata["model_name"]


-async def test_abatch() -> None:
-    """Test abatch tokens from ChatFireworks."""
-    llm = ChatFireworks()  # type: ignore[call-arg]
-
-    result = await llm.abatch(["I'm Pickle Rick", "I'm not Pickle Rick"])
-    for token in result:
-        assert isinstance(token.content, str)
-
-
 async def test_abatch_tags() -> None:
     """Test batch tokens from ChatFireworks."""
-    llm = ChatFireworks()  # type: ignore[call-arg]
+    llm = ChatFireworks(model=_MODEL)

     result = await llm.abatch(
         ["I'm Pickle Rick", "I'm not Pickle Rick"], config={"tags": ["foo"]}
@@ -145,18 +87,9 @@ async def test_abatch_tags() -> None:
     assert isinstance(token.content, str)


-def test_batch() -> None:
-    """Test batch tokens from ChatFireworks."""
-    llm = ChatFireworks()  # type: ignore[call-arg]
-
-    result = llm.batch(["I'm Pickle Rick", "I'm not Pickle Rick"])
-    for token in result:
-        assert isinstance(token.content, str)
-
-
 async def test_ainvoke() -> None:
     """Test invoke tokens from ChatFireworks."""
-    llm = ChatFireworks()  # type: ignore[call-arg]
+    llm = ChatFireworks(model=_MODEL)

     result = await llm.ainvoke("I'm Pickle Rick", config={"tags": ["foo"]})
     assert isinstance(result.content, str)
@@ -164,7 +97,7 @@ async def test_ainvoke() -> None:

 def test_invoke() -> None:
     """Test invoke tokens from ChatFireworks."""
-    llm = ChatFireworks()  # type: ignore[call-arg]
+    llm = ChatFireworks(model=_MODEL)

     result = llm.invoke("I'm Pickle Rick", config=dict(tags=["foo"]))
     assert isinstance(result.content, str)
@@ -17,11 +17,12 @@
       }),
       'max_retries': 2,
       'max_tokens': 100,
-      'model_name': 'accounts/fireworks/models/mixtral-8x7b-instruct',
+      'model_name': 'accounts/fireworks/models/llama-v3p1-70b-instruct',
       'n': 1,
       'request_timeout': 60.0,
       'stop': list([
       ]),
+      'temperature': 0.0,
     }),
     'lc': 1,
     'name': 'ChatFireworks',
@@ -15,7 +15,10 @@ class TestFireworksStandard(ChatModelUnitTests):

     @property
     def chat_model_params(self) -> dict:
-        return {"api_key": "test_api_key"}
+        return {
+            "model": "accounts/fireworks/models/llama-v3p1-70b-instruct",
+            "api_key": "test_api_key",
+        }

     @property
     def init_from_env_params(self) -> tuple[dict, dict, dict]:
@@ -24,7 +27,9 @@ class TestFireworksStandard(ChatModelUnitTests):
                 "FIREWORKS_API_KEY": "api_key",
                 "FIREWORKS_API_BASE": "https://base.com",
             },
-            {},
+            {
+                "model": "accounts/fireworks/models/llama-v3p1-70b-instruct",
+            },
             {
                 "fireworks_api_key": "api_key",
                 "fireworks_api_base": "https://base.com",
libs/partners/fireworks/uv.lock (generated, 26 lines changed)
@@ -1,7 +1,8 @@
 version = 1
 requires-python = ">=3.9, <4.0"
 resolution-markers = [
-    "python_full_version >= '3.12.4'",
+    "python_full_version >= '3.13'",
+    "python_full_version >= '3.12.4' and python_full_version < '3.13'",
     "python_full_version >= '3.12' and python_full_version < '3.12.4'",
     "python_full_version < '3.12'",
 ]
@@ -635,7 +636,7 @@ wheels = [

 [[package]]
 name = "langchain-core"
-version = "0.3.49"
+version = "0.3.55"
 source = { editable = "../../core" }
 dependencies = [
     { name = "jsonpatch" },
@@ -665,16 +666,18 @@ dev = [
     { name = "jupyter", specifier = ">=1.0.0,<2.0.0" },
     { name = "setuptools", specifier = ">=67.6.1,<68.0.0" },
 ]
-lint = [{ name = "ruff", specifier = ">=0.9.2,<1.0.0" }]
+lint = [{ name = "ruff", specifier = ">=0.11.2,<0.12.0" }]
 test = [
     { name = "blockbuster", specifier = "~=1.5.18" },
     { name = "freezegun", specifier = ">=1.2.2,<2.0.0" },
     { name = "grandalf", specifier = ">=0.8,<1.0" },
     { name = "langchain-tests", directory = "../../standard-tests" },
-    { name = "numpy", marker = "python_full_version < '3.12'", specifier = ">=1.24.0,<2.0.0" },
-    { name = "numpy", marker = "python_full_version >= '3.12'", specifier = ">=1.26.0,<3" },
+    { name = "numpy", marker = "python_full_version < '3.13'", specifier = ">=1.26.4" },
+    { name = "numpy", marker = "python_full_version >= '3.13'", specifier = ">=2.1.0" },
     { name = "pytest", specifier = ">=8,<9" },
     { name = "pytest-asyncio", specifier = ">=0.21.1,<1.0.0" },
+    { name = "pytest-benchmark" },
+    { name = "pytest-codspeed" },
     { name = "pytest-mock", specifier = ">=3.10.0,<4.0.0" },
     { name = "pytest-socket", specifier = ">=0.7.0,<1.0.0" },
     { name = "pytest-watcher", specifier = ">=0.3.4,<1.0.0" },
@@ -685,15 +688,14 @@ test = [
 test-integration = []
 typing = [
     { name = "langchain-text-splitters", directory = "../../text-splitters" },
-    { name = "mypy", specifier = ">=1.10,<1.11" },
-    { name = "types-jinja2", specifier = ">=2.11.9,<3.0.0" },
+    { name = "mypy", specifier = ">=1.15,<1.16" },
     { name = "types-pyyaml", specifier = ">=6.0.12.2,<7.0.0.0" },
     { name = "types-requests", specifier = ">=2.28.11.5,<3.0.0.0" },
 ]

 [[package]]
 name = "langchain-fireworks"
-version = "0.2.9"
+version = "0.3.0"
 source = { editable = "." }
 dependencies = [
     { name = "aiohttp" },
@@ -763,7 +765,7 @@ typing = [

 [[package]]
 name = "langchain-tests"
-version = "0.3.17"
+version = "0.3.19"
 source = { editable = "../../standard-tests" }
 dependencies = [
     { name = "httpx" },
@@ -780,7 +782,8 @@ dependencies = [
 requires-dist = [
     { name = "httpx", specifier = ">=0.25.0,<1" },
     { name = "langchain-core", editable = "../../core" },
-    { name = "numpy", specifier = ">=1.26.2,<3" },
+    { name = "numpy", marker = "python_full_version < '3.13'", specifier = ">=1.26.2" },
+    { name = "numpy", marker = "python_full_version >= '3.13'", specifier = ">=2.1.0" },
     { name = "pytest", specifier = ">=7,<9" },
     { name = "pytest-asyncio", specifier = ">=0.20,<1" },
     { name = "pytest-socket", specifier = ">=0.6.0,<1" },
@@ -1005,7 +1008,8 @@ name = "numpy"
 version = "2.2.2"
 source = { registry = "https://pypi.org/simple" }
 resolution-markers = [
-    "python_full_version >= '3.12.4'",
+    "python_full_version >= '3.13'",
+    "python_full_version >= '3.12.4' and python_full_version < '3.13'",
     "python_full_version >= '3.12' and python_full_version < '3.12.4'",
 ]
 sdist = { url = "https://files.pythonhosted.org/packages/ec/d0/c12ddfd3a02274be06ffc71f3efc6d0e457b0409c4481596881e748cb264/numpy-2.2.2.tar.gz", hash = "sha256:ed6906f61834d687738d25988ae117683705636936cc605be0bb208b23df4d8f", size = 20233295 }
@@ -61,7 +61,7 @@ from langchain_core.messages import (
     ToolCall,
     ToolMessage,
     ToolMessageChunk,
-    convert_to_openai_image_block,
+    convert_to_openai_data_block,
     is_data_content_block,
 )
 from langchain_core.messages.ai import (
@@ -186,45 +186,6 @@ def _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:
         return ChatMessage(content=_dict.get("content", ""), role=role, id=id_)  # type: ignore[arg-type]
 
 
-def _format_data_content_block(block: dict) -> dict:
-    """Format standard data content block to format expected by OpenAI."""
-    if block["type"] == "image":
-        formatted_block = convert_to_openai_image_block(block)
-
-    elif block["type"] == "file":
-        if block["source_type"] == "base64":
-            file = {"file_data": f"data:{block['mime_type']};base64,{block['data']}"}
-            if filename := block.get("filename"):
-                file["filename"] = filename
-            elif (metadata := block.get("metadata")) and ("filename" in metadata):
-                file["filename"] = metadata["filename"]
-            else:
-                warnings.warn(
-                    "OpenAI may require a filename for file inputs. Specify a filename "
-                    "in the content block: {'type': 'file', 'source_type': 'base64', "
-                    "'mime_type': 'application/pdf', 'data': '...', "
-                    "'filename': 'my-pdf'}"
-                )
-            formatted_block = {"type": "file", "file": file}
-        elif block["source_type"] == "id":
-            formatted_block = {"type": "file", "file": {"file_id": block["id"]}}
-        else:
-            raise ValueError("source_type base64 or id is required for file blocks.")
-    elif block["type"] == "audio":
-        if block["source_type"] == "base64":
-            format = block["mime_type"].split("/")[-1]
-            formatted_block = {
-                "type": "input_audio",
-                "input_audio": {"data": block["data"], "format": format},
-            }
-        else:
-            raise ValueError("source_type base64 is required for audio blocks.")
-    else:
-        raise ValueError(f"Block of type {block['type']} is not supported.")
-
-    return formatted_block
-
-
 def _format_message_content(content: Any) -> Any:
     """Format message content."""
     if content and isinstance(content, list):
@@ -238,7 +199,7 @@ def _format_message_content(content: Any) -> Any:
         ):
             continue
         elif isinstance(block, dict) and is_data_content_block(block):
-            formatted_content.append(_format_data_content_block(block))
+            formatted_content.append(convert_to_openai_data_block(block))
         # Anthropic image blocks
         elif (
             isinstance(block, dict)
@@ -3450,14 +3411,16 @@ def _convert_responses_chunk_to_generation_chunk(
         )
     elif chunk.type == "response.refusal.done":
         additional_kwargs["refusal"] = chunk.refusal
+    elif chunk.type == "response.output_item.added" and chunk.item.type == "reasoning":
+        additional_kwargs["reasoning"] = chunk.item.model_dump(
+            exclude_none=True, mode="json"
+        )
     elif chunk.type == "response.reasoning_summary_part.added":
         additional_kwargs["reasoning"] = {
+            "type": "reasoning",
+            "id": chunk.item_id,
             # langchain-core uses the `index` key to aggregate text blocks.
             "summary": [
                 {"index": chunk.summary_index, "type": "summary_text", "text": ""}
-            ],
-        ]
+            ],
         }
     elif chunk.type == "response.reasoning_summary_text.delta":
         additional_kwargs["reasoning"] = {
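The hunks above replace the private `_format_data_content_block` helper in `langchain_openai` with `convert_to_openai_data_block` from `langchain_core.messages`. A standalone sketch of the file/audio mapping the deleted helper performed, reconstructed from the removed lines (illustrative only, not the langchain-core implementation; image blocks are omitted because they were delegated to `convert_to_openai_image_block`):

```python
# Reconstruction of the removed _format_data_content_block logic for file and
# audio blocks. Illustrative only; the real conversion now lives in
# langchain_core.messages.convert_to_openai_data_block.
def format_data_content_block(block: dict) -> dict:
    if block["type"] == "file" and block.get("source_type") == "base64":
        # Base64 file payloads become a data URI under "file_data".
        file = {"file_data": f"data:{block['mime_type']};base64,{block['data']}"}
        if filename := block.get("filename"):
            file["filename"] = filename
        return {"type": "file", "file": file}
    if block["type"] == "file" and block.get("source_type") == "id":
        # Previously uploaded files are referenced by their file ID.
        return {"type": "file", "file": {"file_id": block["id"]}}
    if block["type"] == "audio" and block.get("source_type") == "base64":
        # Audio format is derived from the MIME subtype (e.g. "audio/wav" -> "wav").
        return {
            "type": "input_audio",
            "input_audio": {
                "data": block["data"],
                "format": block["mime_type"].split("/")[-1],
            },
        }
    raise ValueError(f"Block of type {block['type']} is not supported.")
```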
@@ -736,12 +736,6 @@ async def test_openai_response_headers_async(use_responses_api: bool) -> None:
     assert "content-type" in headers
 
 
-@pytest.mark.xfail(
-    reason=(
-        "As of 12.19.24 OpenAI API returns 1151 instead of 1118. Not clear yet if "
-        "this is an undocumented API change or a bug on their end."
-    )
-)
 def test_image_token_counting_jpeg() -> None:
     model = ChatOpenAI(model="gpt-4o", temperature=0)
     image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
@@ -774,12 +768,6 @@ def test_image_token_counting_jpeg() -> None:
     assert expected == actual
 
 
-@pytest.mark.xfail(
-    reason=(
-        "As of 12.19.24 OpenAI API returns 871 instead of 779. Not clear yet if "
-        "this is an undocumented API change or a bug on their end."
-    )
-)
 def test_image_token_counting_png() -> None:
     model = ChatOpenAI(model="gpt-4o", temperature=0)
     image_url = "https://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png"
@@ -103,6 +103,21 @@ class TestOpenAIStandard(ChatModelIntegrationTests):
         )
         _ = model.invoke([message])
 
+        # Test OpenAI Chat Completions format
+        message = HumanMessage(
+            [
+                {"type": "text", "text": "Summarize this document:"},
+                {
+                    "type": "file",
+                    "file": {
+                        "filename": "test file.pdf",
+                        "file_data": f"data:application/pdf;base64,{pdf_data}",
+                    },
+                },
+            ]
+        )
+        _ = model.invoke([message])
+
 
 def _invoke(llm: ChatOpenAI, input_: str, stream: bool) -> AIMessage:
     if stream:
@@ -2036,6 +2036,24 @@ class ChatModelIntegrationTests(ChatModelTests):
         )
         _ = model.invoke([message])
 
+        # Test OpenAI Chat Completions format
+        message = HumanMessage(
+            [
+                {
+                    "type": "text",
+                    "text": "Summarize this document:",
+                },
+                {
+                    "type": "file",
+                    "file": {
+                        "filename": "test file.pdf",
+                        "file_data": f"data:application/pdf;base64,{pdf_data}",
+                    },
+                },
+            ]
+        )
+        _ = model.invoke([message])
+
     def test_audio_inputs(self, model: BaseChatModel) -> None:
         """Test that the model can process audio inputs.
 
@@ -2093,6 +2111,21 @@ class ChatModelIntegrationTests(ChatModelTests):
         )
         _ = model.invoke([message])
 
+        # Test OpenAI Chat Completions format
+        message = HumanMessage(
+            [
+                {
+                    "type": "text",
+                    "text": "Describe this audio:",
+                },
+                {
+                    "type": "input_audio",
+                    "input_audio": {"data": audio_data, "format": "wav"},
+                },
+            ]
+        )
+        _ = model.invoke([message])
+
     def test_image_inputs(self, model: BaseChatModel) -> None:
         """Test that the model can process image inputs.
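The new standard tests exercise two equivalent ways of passing a PDF to the model: LangChain's provider-agnostic data content block and OpenAI's native Chat Completions `file` block. A plain-dict comparison of the two shapes (the base64 payload here is a placeholder, not a real document):

```python
pdf_data = "JVBERi0xLjQ="  # placeholder for base64-encoded PDF bytes

# LangChain standard data content block (provider-agnostic shape)
standard_block = {
    "type": "file",
    "source_type": "base64",
    "mime_type": "application/pdf",
    "data": pdf_data,
}

# OpenAI Chat Completions file block, as constructed in the new tests:
# the payload is wrapped in a data URI and carries an explicit filename.
openai_block = {
    "type": "file",
    "file": {
        "filename": "test file.pdf",
        "file_data": f"data:application/pdf;base64,{pdf_data}",
    },
}
```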
@@ -43,9 +43,9 @@ lint_tests: MYPY_CACHE=.mypy_cache_test
 
 lint lint_diff lint_package lint_tests:
 	./scripts/lint_imports.sh
-	[ "$(PYTHON_FILES)" = "" ] || uv run --group typing --group lint ruff check $(PYTHON_FILES)
-	[ "$(PYTHON_FILES)" = "" ] || uv run --group typing --group lint ruff format $(PYTHON_FILES) --diff
-	[ "$(PYTHON_FILES)" = "" ] || mkdir -p $(MYPY_CACHE) && uv run --group typing --group lint mypy $(PYTHON_FILES) --cache-dir $(MYPY_CACHE)
+	[ "$(PYTHON_FILES)" = "" ] || uv run --all-groups ruff check $(PYTHON_FILES)
+	[ "$(PYTHON_FILES)" = "" ] || uv run --all-groups ruff format $(PYTHON_FILES) --diff
+	[ "$(PYTHON_FILES)" = "" ] || mkdir -p $(MYPY_CACHE) && uv run --all-groups mypy $(PYTHON_FILES) --cache-dir $(MYPY_CACHE)
 
 format format_diff:
 	[ "$(PYTHON_FILES)" = "" ] || uv run --all-groups ruff format $(PYTHON_FILES)
@@ -68,7 +68,7 @@ class TextSplitter(BaseDocumentTransformer, ABC):
         """Split text into multiple components."""
 
     def create_documents(
-        self, texts: List[str], metadatas: Optional[List[dict]] = None
+        self, texts: list[str], metadatas: Optional[list[dict[Any, Any]]] = None
     ) -> List[Document]:
         """Create documents from a list of texts."""
         _metadatas = metadatas or [{}] * len(texts)
@@ -353,8 +353,8 @@ class HTMLSectionSplitter:
         return self.split_text_from_file(StringIO(text))
 
     def create_documents(
-        self, texts: List[str], metadatas: Optional[List[dict]] = None
-    ) -> List[Document]:
+        self, texts: list[str], metadatas: Optional[list[dict[Any, Any]]] = None
+    ) -> list[Document]:
         """Create documents from a list of texts."""
         _metadatas = metadatas or [{}] * len(texts)
         documents = []
@@ -389,10 +389,8 @@ class HTMLSectionSplitter:
             - 'tag_name': The name of the header tag (e.g., "h1", "h2").
         """
         try:
-            from bs4 import (
-                BeautifulSoup,  # type: ignore[import-untyped]
-                PageElement,
-            )
+            from bs4 import BeautifulSoup
+            from bs4.element import PageElement
         except ImportError as e:
             raise ImportError(
                 "Unable to import BeautifulSoup/PageElement, \
@@ -411,13 +409,13 @@ class HTMLSectionSplitter:
             if i == 0:
                 current_header = "#TITLE#"
                 current_header_tag = "h1"
-                section_content: List = []
+                section_content: list[str] = []
             else:
                 current_header = header_element.text.strip()
                 current_header_tag = header_element.name  # type: ignore[attr-defined]
                 section_content = []
             for element in header_element.next_elements:
-                if i + 1 < len(headers) and element == headers[i + 1]:
+                if i + 1 < len(headers) and element == headers[i + 1]:  # type: ignore[comparison-overlap]
                     break
                 if isinstance(element, str):
                     section_content.append(element)
@@ -637,8 +635,8 @@ class HTMLSemanticPreservingSplitter(BaseDocumentTransformer):
 
         if self._stopword_removal:
             try:
-                import nltk  # type: ignore
-                from nltk.corpus import stopwords  # type: ignore
+                import nltk
+                from nltk.corpus import stopwords  # type: ignore[import-untyped]
 
                 nltk.download("stopwords")
                 self._stopwords = set(stopwords.words(self._stopword_lang))
@@ -902,7 +900,7 @@ class HTMLSemanticPreservingSplitter(BaseDocumentTransformer):
         return documents
 
     def _create_documents(
-        self, headers: dict, content: str, preserved_elements: dict
+        self, headers: dict[str, str], content: str, preserved_elements: dict[str, str]
     ) -> List[Document]:
         """Creates Document objects from the provided headers, content, and elements.
 
@@ -928,7 +926,7 @@ class HTMLSemanticPreservingSplitter(BaseDocumentTransformer):
         return self._further_split_chunk(content, metadata, preserved_elements)
 
     def _further_split_chunk(
-        self, content: str, metadata: dict, preserved_elements: dict
+        self, content: str, metadata: dict[Any, Any], preserved_elements: dict[str, str]
     ) -> List[Document]:
         """Further splits the content into smaller chunks.
 
@@ -959,7 +957,7 @@ class HTMLSemanticPreservingSplitter(BaseDocumentTransformer):
         return result
 
     def _reinsert_preserved_elements(
-        self, content: str, preserved_elements: dict
+        self, content: str, preserved_elements: dict[str, str]
     ) -> str:
         """Reinserts preserved elements into the content into their original positions.

@@ -49,12 +49,12 @@ class RecursiveJsonSplitter:
         )
 
     @staticmethod
-    def _json_size(data: Dict) -> int:
+    def _json_size(data: dict[str, Any]) -> int:
         """Calculate the size of the serialized JSON object."""
         return len(json.dumps(data))
 
     @staticmethod
-    def _set_nested_dict(d: Dict, path: List[str], value: Any) -> None:
+    def _set_nested_dict(d: dict[str, Any], path: list[str], value: Any) -> None:
         """Set a value in a nested dictionary based on the given path."""
         for key in path[:-1]:
             d = d.setdefault(key, {})
@@ -76,10 +76,10 @@ class RecursiveJsonSplitter:
 
     def _json_split(
         self,
-        data: Dict[str, Any],
-        current_path: Optional[List[str]] = None,
-        chunks: Optional[List[Dict]] = None,
-    ) -> List[Dict]:
+        data: dict[str, Any],
+        current_path: Optional[list[str]] = None,
+        chunks: Optional[list[dict[str, Any]]] = None,
+    ) -> list[dict[str, Any]]:
         """Split json into maximum size dictionaries while preserving structure."""
         current_path = current_path or []
         chunks = chunks if chunks is not None else [{}]
@@ -107,9 +107,9 @@ class RecursiveJsonSplitter:
 
     def split_json(
         self,
-        json_data: Dict[str, Any],
+        json_data: dict[str, Any],
         convert_lists: bool = False,
-    ) -> List[Dict]:
+    ) -> list[dict[str, Any]]:
         """Splits JSON into a list of JSON chunks."""
         if convert_lists:
             chunks = self._json_split(self._list_to_dict_preprocessing(json_data))
@@ -135,11 +135,11 @@ class RecursiveJsonSplitter:
 
     def create_documents(
         self,
-        texts: List[Dict],
+        texts: list[dict[str, Any]],
         convert_lists: bool = False,
         ensure_ascii: bool = True,
-        metadatas: Optional[List[dict]] = None,
-    ) -> List[Document]:
+        metadatas: Optional[list[dict[Any, Any]]] = None,
+    ) -> list[Document]:
         """Create documents from a list of json objects (Dict)."""
         _metadatas = metadatas or [{}] * len(texts)
         documents = []
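`RecursiveJsonSplitter` measures chunk size as the length of the serialized JSON, per the `_json_size` body shown in the hunk above; a quick standalone check of that metric:

```python
import json


def json_size(data: dict) -> int:
    # Size metric from _json_size: length of the JSON serialization of the dict.
    return len(json.dumps(data))
```

This makes `max_chunk_size` a character budget on the serialized output rather than a count of keys or values.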
@@ -404,18 +404,18 @@ class ExperimentalMarkdownSyntaxTextSplitter:
         self.current_chunk = Document(page_content="")
 
     # Match methods
-    def _match_header(self, line: str) -> Union[re.Match, None]:
+    def _match_header(self, line: str) -> Union[re.Match[str], None]:
         match = re.match(r"^(#{1,6}) (.*)", line)
         # Only matches on the configured headers
         if match and match.group(1) in self.splittable_headers:
             return match
         return None
 
-    def _match_code(self, line: str) -> Union[re.Match, None]:
+    def _match_code(self, line: str) -> Union[re.Match[str], None]:
         matches = [re.match(rule, line) for rule in [r"^```(.*)", r"^~~~(.*)"]]
         return next((match for match in matches if match), None)
 
-    def _match_horz(self, line: str) -> Union[re.Match, None]:
+    def _match_horz(self, line: str) -> Union[re.Match[str], None]:
         matches = [
             re.match(rule, line) for rule in [r"^\*\*\*+\n", r"^---+\n", r"^___+\n"]
         ]
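The annotations above parameterize `re.Match` because it is generic over the string type searched (`str` vs `bytes`), and `re.Match[str]` has been subscriptable at runtime since Python 3.9, matching the package's `target-version = "py39"`. A minimal sketch of the annotated header matcher:

```python
import re
from typing import Union


def match_header(line: str) -> Union[re.Match[str], None]:
    # re.Match[str] records that the match was produced from a str, so
    # m.group(...) is typed as str rather than str | bytes | Any.
    return re.match(r"^(#{1,6}) (.*)", line)
```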
@@ -35,7 +35,7 @@ class SentenceTransformersTokenTextSplitter(TextSplitter):
     def _initialize_chunk_configuration(
         self, *, tokens_per_chunk: Optional[int]
     ) -> None:
-        self.maximum_tokens_per_chunk = cast(int, self._model.max_seq_length)
+        self.maximum_tokens_per_chunk = self._model.max_seq_length
 
         if tokens_per_chunk is None:
             self.tokens_per_chunk = self.maximum_tokens_per_chunk
@@ -93,10 +93,10 @@ class SentenceTransformersTokenTextSplitter(TextSplitter):
 
     _max_length_equal_32_bit_integer: int = 2**32
 
-    def _encode(self, text: str) -> List[int]:
+    def _encode(self, text: str) -> list[int]:
         token_ids_with_start_and_end_token_ids = self.tokenizer.encode(
             text,
             max_length=self._max_length_equal_32_bit_integer,
             truncation="do_not_truncate",
         )
-        return token_ids_with_start_and_end_token_ids
+        return cast("list[int]", token_ids_with_start_and_end_token_ids)
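The new `cast("list[int]", …)` return quotes the target type, so the builtin generic is never evaluated at runtime and the call works the same on every supported Python; `cast` itself is a runtime no-op that returns its second argument unchanged:

```python
from typing import cast

token_ids = [101, 2023, 102]
# cast only guides the type checker; the value passes through untouched,
# so this adds no runtime cost to _encode.
checked = cast("list[int]", token_ids)
```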
@@ -20,7 +20,7 @@ repository = "https://github.com/langchain-ai/langchain"
 [dependency-groups]
 lint = ["ruff<1.0.0,>=0.9.2", "langchain-core"]
 typing = [
-    "mypy<2.0,>=1.10",
+    "mypy<2.0,>=1.15",
     "lxml-stubs<1.0.0,>=0.5.1",
     "types-requests<3.0.0.0,>=2.31.0.20240218",
     "tiktoken<1.0.0,>=0.8.0",
@@ -48,7 +48,11 @@ test_integration = [
 langchain-core = { path = "../core", editable = true }
 
 [tool.mypy]
 disallow_untyped_defs = "True"
+strict = "True"
+strict_bytes = "True"
+enable_error_code = "deprecated"
+report_deprecated_as_note = "True"
 
 [[tool.mypy.overrides]]
 module = [
     "transformers",
@@ -70,7 +74,7 @@ ignore_missing_imports = "True"
 target-version = "py39"
 
 [tool.ruff.lint]
-select = ["E", "F", "I", "T201", "D"]
+select = ["E", "F", "I", "PGH003", "T201", "D"]
 ignore = ["D100"]
 
 [tool.coverage.run]

@@ -20,7 +20,7 @@ def spacy() -> Any:
         import spacy
     except ImportError:
         pytest.skip("Spacy not installed.")
-    spacy.cli.download("en_core_web_sm")  # type: ignore
+    spacy.cli.download("en_core_web_sm")  # type: ignore[attr-defined,operator,unused-ignore]
     return spacy

libs/text-splitters/uv.lock (generated, 80 lines changed)
@@ -1,4 +1,5 @@
 version = 1
+revision = 1
 requires-python = ">=3.9, <4.0"
 resolution-markers = [
     "python_full_version >= '3.12.4'",
@@ -1079,7 +1080,7 @@ wheels = [
 
 [[package]]
 name = "langchain-core"
-version = "0.3.51"
+version = "0.3.52"
 source = { editable = "../core" }
 dependencies = [
     { name = "jsonpatch" },
@@ -1115,10 +1116,12 @@ test = [
     { name = "freezegun", specifier = ">=1.2.2,<2.0.0" },
     { name = "grandalf", specifier = ">=0.8,<1.0" },
     { name = "langchain-tests", directory = "../standard-tests" },
-    { name = "numpy", marker = "python_full_version < '3.12'", specifier = ">=1.24.0,<2.0.0" },
-    { name = "numpy", marker = "python_full_version >= '3.12'", specifier = ">=1.26.0,<3" },
+    { name = "numpy", marker = "python_full_version < '3.13'", specifier = ">=1.26.4" },
+    { name = "numpy", marker = "python_full_version >= '3.13'", specifier = ">=2.1.0" },
     { name = "pytest", specifier = ">=8,<9" },
     { name = "pytest-asyncio", specifier = ">=0.21.1,<1.0.0" },
+    { name = "pytest-benchmark" },
+    { name = "pytest-codspeed" },
     { name = "pytest-mock", specifier = ">=3.10.0,<4.0.0" },
     { name = "pytest-socket", specifier = ">=0.7.0,<1.0.0" },
     { name = "pytest-watcher", specifier = ">=0.3.4,<1.0.0" },
@@ -1129,8 +1132,7 @@ test = [
 test-integration = []
 typing = [
     { name = "langchain-text-splitters", directory = "." },
-    { name = "mypy", specifier = ">=1.10,<1.11" },
-    { name = "types-jinja2", specifier = ">=2.11.9,<3.0.0" },
+    { name = "mypy", specifier = ">=1.15,<1.16" },
     { name = "types-pyyaml", specifier = ">=6.0.12.2,<7.0.0.0" },
     { name = "types-requests", specifier = ">=2.28.11.5,<3.0.0.0" },
 ]
@@ -1207,7 +1209,7 @@ test-integration = [
 ]
 typing = [
     { name = "lxml-stubs", specifier = ">=0.5.1,<1.0.0" },
-    { name = "mypy", specifier = ">=1.10,<2.0" },
+    { name = "mypy", specifier = ">=1.15,<2.0" },
     { name = "tiktoken", specifier = ">=0.8.0,<1.0.0" },
     { name = "types-requests", specifier = ">=2.31.0.20240218,<3.0.0.0" },
 ]
@@ -1495,46 +1497,46 @@ wheels = [
 
 [[package]]
 name = "mypy"
-version = "1.14.1"
+version = "1.15.0"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "mypy-extensions" },
     { name = "tomli", marker = "python_full_version < '3.11'" },
     { name = "typing-extensions" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/b9/eb/2c92d8ea1e684440f54fa49ac5d9a5f19967b7b472a281f419e69a8d228e/mypy-1.14.1.tar.gz", hash = "sha256:7ec88144fe9b510e8475ec2f5f251992690fcf89ccb4500b214b4226abcd32d6", size = 3216051 }
+sdist = { url = "https://files.pythonhosted.org/packages/ce/43/d5e49a86afa64bd3839ea0d5b9c7103487007d728e1293f52525d6d5486a/mypy-1.15.0.tar.gz", hash = "sha256:404534629d51d3efea5c800ee7c42b72a6554d6c400e6a79eafe15d11341fd43", size = 3239717 }
 wheels = [
{ url = "https://files.pythonhosted.org/packages/9b/7a/87ae2adb31d68402da6da1e5f30c07ea6063e9f09b5e7cfc9dfa44075e74/mypy-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:52686e37cf13d559f668aa398dd7ddf1f92c5d613e4f8cb262be2fb4fedb0fcb", size = 11211002 },
{ url = "https://files.pythonhosted.org/packages/e1/23/eada4c38608b444618a132be0d199b280049ded278b24cbb9d3fc59658e4/mypy-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:1fb545ca340537d4b45d3eecdb3def05e913299ca72c290326be19b3804b39c0", size = 10358400 },
{ url = "https://files.pythonhosted.org/packages/43/c9/d6785c6f66241c62fd2992b05057f404237deaad1566545e9f144ced07f5/mypy-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:90716d8b2d1f4cd503309788e51366f07c56635a3309b0f6a32547eaaa36a64d", size = 12095172 },
{ url = "https://files.pythonhosted.org/packages/c3/62/daa7e787770c83c52ce2aaf1a111eae5893de9e004743f51bfcad9e487ec/mypy-1.14.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2ae753f5c9fef278bcf12e1a564351764f2a6da579d4a81347e1d5a15819997b", size = 12828732 },
{ url = "https://files.pythonhosted.org/packages/1b/a2/5fb18318a3637f29f16f4e41340b795da14f4751ef4f51c99ff39ab62e52/mypy-1.14.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:e0fe0f5feaafcb04505bcf439e991c6d8f1bf8b15f12b05feeed96e9e7bf1427", size = 13012197 },
{ url = "https://files.pythonhosted.org/packages/28/99/e153ce39105d164b5f02c06c35c7ba958aaff50a2babba7d080988b03fe7/mypy-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:7d54bd85b925e501c555a3227f3ec0cfc54ee8b6930bd6141ec872d1c572f81f", size = 9780836 },
{ url = "https://files.pythonhosted.org/packages/da/11/a9422850fd506edbcdc7f6090682ecceaf1f87b9dd847f9df79942da8506/mypy-1.14.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f995e511de847791c3b11ed90084a7a0aafdc074ab88c5a9711622fe4751138c", size = 11120432 },
{ url = "https://files.pythonhosted.org/packages/b6/9e/47e450fd39078d9c02d620545b2cb37993a8a8bdf7db3652ace2f80521ca/mypy-1.14.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d64169ec3b8461311f8ce2fd2eb5d33e2d0f2c7b49116259c51d0d96edee48d1", size = 10279515 },
{ url = "https://files.pythonhosted.org/packages/01/b5/6c8d33bd0f851a7692a8bfe4ee75eb82b6983a3cf39e5e32a5d2a723f0c1/mypy-1.14.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ba24549de7b89b6381b91fbc068d798192b1b5201987070319889e93038967a8", size = 12025791 },
{ url = "https://files.pythonhosted.org/packages/f0/4c/e10e2c46ea37cab5c471d0ddaaa9a434dc1d28650078ac1b56c2d7b9b2e4/mypy-1.14.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:183cf0a45457d28ff9d758730cd0210419ac27d4d3f285beda038c9083363b1f", size = 12749203 },
{ url = "https://files.pythonhosted.org/packages/88/55/beacb0c69beab2153a0f57671ec07861d27d735a0faff135a494cd4f5020/mypy-1.14.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f2a0ecc86378f45347f586e4163d1769dd81c5a223d577fe351f26b179e148b1", size = 12885900 },
{ url = "https://files.pythonhosted.org/packages/a2/75/8c93ff7f315c4d086a2dfcde02f713004357d70a163eddb6c56a6a5eff40/mypy-1.14.1-cp311-cp311-win_amd64.whl", hash = "sha256:ad3301ebebec9e8ee7135d8e3109ca76c23752bac1e717bc84cd3836b4bf3eae", size = 9777869 },
{ url = "https://files.pythonhosted.org/packages/43/1b/b38c079609bb4627905b74fc6a49849835acf68547ac33d8ceb707de5f52/mypy-1.14.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:30ff5ef8519bbc2e18b3b54521ec319513a26f1bba19a7582e7b1f58a6e69f14", size = 11266668 },
{ url = "https://files.pythonhosted.org/packages/6b/75/2ed0d2964c1ffc9971c729f7a544e9cd34b2cdabbe2d11afd148d7838aa2/mypy-1.14.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:cb9f255c18052343c70234907e2e532bc7e55a62565d64536dbc7706a20b78b9", size = 10254060 },
{ url = "https://files.pythonhosted.org/packages/a1/5f/7b8051552d4da3c51bbe8fcafffd76a6823779101a2b198d80886cd8f08e/mypy-1.14.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8b4e3413e0bddea671012b063e27591b953d653209e7a4fa5e48759cda77ca11", size = 11933167 },
{ url = "https://files.pythonhosted.org/packages/04/90/f53971d3ac39d8b68bbaab9a4c6c58c8caa4d5fd3d587d16f5927eeeabe1/mypy-1.14.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:553c293b1fbdebb6c3c4030589dab9fafb6dfa768995a453d8a5d3b23784af2e", size = 12864341 },
{ url = "https://files.pythonhosted.org/packages/03/d2/8bc0aeaaf2e88c977db41583559319f1821c069e943ada2701e86d0430b7/mypy-1.14.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:fad79bfe3b65fe6a1efaed97b445c3d37f7be9fdc348bdb2d7cac75579607c89", size = 12972991 },
{ url = "https://files.pythonhosted.org/packages/6f/17/07815114b903b49b0f2cf7499f1c130e5aa459411596668267535fe9243c/mypy-1.14.1-cp312-cp312-win_amd64.whl", hash = "sha256:8fa2220e54d2946e94ab6dbb3ba0a992795bd68b16dc852db33028df2b00191b", size = 9879016 },
{ url = "https://files.pythonhosted.org/packages/9e/15/bb6a686901f59222275ab228453de741185f9d54fecbaacec041679496c6/mypy-1.14.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:92c3ed5afb06c3a8e188cb5da4984cab9ec9a77ba956ee419c68a388b4595255", size = 11252097 },
{ url = "https://files.pythonhosted.org/packages/f8/b3/8b0f74dfd072c802b7fa368829defdf3ee1566ba74c32a2cb2403f68024c/mypy-1.14.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:dbec574648b3e25f43d23577309b16534431db4ddc09fda50841f1e34e64ed34", size = 10239728 },
{ url = "https://files.pythonhosted.org/packages/c5/9b/4fd95ab20c52bb5b8c03cc49169be5905d931de17edfe4d9d2986800b52e/mypy-1.14.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8c6d94b16d62eb3e947281aa7347d78236688e21081f11de976376cf010eb31a", size = 11924965 },
{ url = "https://files.pythonhosted.org/packages/56/9d/4a236b9c57f5d8f08ed346914b3f091a62dd7e19336b2b2a0d85485f82ff/mypy-1.14.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d4b19b03fdf54f3c5b2fa474c56b4c13c9dbfb9a2db4370ede7ec11a2c5927d9", size = 12867660 },
{ url = "https://files.pythonhosted.org/packages/40/88/a61a5497e2f68d9027de2bb139c7bb9abaeb1be1584649fa9d807f80a338/mypy-1.14.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:0c911fde686394753fff899c409fd4e16e9b294c24bfd5e1ea4675deae1ac6fd", size = 12969198 },
{ url = "https://files.pythonhosted.org/packages/54/da/3d6fc5d92d324701b0c23fb413c853892bfe0e1dbe06c9138037d459756b/mypy-1.14.1-cp313-cp313-win_amd64.whl", hash = "sha256:8b21525cb51671219f5307be85f7e646a153e5acc656e5cebf64bfa076c50107", size = 9885276 },
{ url = "https://files.pythonhosted.org/packages/ca/1f/186d133ae2514633f8558e78cd658070ba686c0e9275c5a5c24a1e1f0d67/mypy-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3888a1816d69f7ab92092f785a462944b3ca16d7c470d564165fe703b0970c35", size = 11200493 },
{ url = "https://files.pythonhosted.org/packages/af/fc/4842485d034e38a4646cccd1369f6b1ccd7bc86989c52770d75d719a9941/mypy-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:46c756a444117c43ee984bd055db99e498bc613a70bbbc120272bd13ca579fbc", size = 10357702 },
{ url = "https://files.pythonhosted.org/packages/b4/e6/457b83f2d701e23869cfec013a48a12638f75b9d37612a9ddf99072c1051/mypy-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:27fc248022907e72abfd8e22ab1f10e903915ff69961174784a3900a8cba9ad9", size = 12091104 },
{ url = "https://files.pythonhosted.org/packages/f1/bf/76a569158db678fee59f4fd30b8e7a0d75bcbaeef49edd882a0d63af6d66/mypy-1.14.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:499d6a72fb7e5de92218db961f1a66d5f11783f9ae549d214617edab5d4dbdbb", size = 12830167 },
{ url = "https://files.pythonhosted.org/packages/43/bc/0bc6b694b3103de9fed61867f1c8bd33336b913d16831431e7cb48ef1c92/mypy-1.14.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:57961db9795eb566dc1d1b4e9139ebc4c6b0cb6e7254ecde69d1552bf7613f60", size = 13013834 },
{ url = "https://files.pythonhosted.org/packages/b0/79/5f5ec47849b6df1e6943d5fd8e6632fbfc04b4fd4acfa5a5a9535d11b4e2/mypy-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:07ba89fdcc9451f2ebb02853deb6aaaa3d2239a236669a63ab3801bbf923ef5c", size = 9781231 },
{ url = "https://files.pythonhosted.org/packages/a0/b5/32dd67b69a16d088e533962e5044e51004176a9952419de0370cdaead0f8/mypy-1.14.1-py3-none-any.whl", hash = "sha256:b66a60cc4073aeb8ae00057f9c1f64d49e90f918fbcef9a977eb121da8b8f1d1", size = 2752905 },
{ url = "https://files.pythonhosted.org/packages/68/f8/65a7ce8d0e09b6329ad0c8d40330d100ea343bd4dd04c4f8ae26462d0a17/mypy-1.15.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:979e4e1a006511dacf628e36fadfecbcc0160a8af6ca7dad2f5025529e082c13", size = 10738433 },
{ url = "https://files.pythonhosted.org/packages/b4/95/9c0ecb8eacfe048583706249439ff52105b3f552ea9c4024166c03224270/mypy-1.15.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c4bb0e1bd29f7d34efcccd71cf733580191e9a264a2202b0239da95984c5b559", size = 9861472 },
{ url = "https://files.pythonhosted.org/packages/84/09/9ec95e982e282e20c0d5407bc65031dfd0f0f8ecc66b69538296e06fcbee/mypy-1.15.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:be68172e9fd9ad8fb876c6389f16d1c1b5f100ffa779f77b1fb2176fcc9ab95b", size = 11611424 },
{ url = "https://files.pythonhosted.org/packages/78/13/f7d14e55865036a1e6a0a69580c240f43bc1f37407fe9235c0d4ef25ffb0/mypy-1.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c7be1e46525adfa0d97681432ee9fcd61a3964c2446795714699a998d193f1a3", size = 12365450 },
{ url = "https://files.pythonhosted.org/packages/48/e1/301a73852d40c241e915ac6d7bcd7fedd47d519246db2d7b86b9d7e7a0cb/mypy-1.15.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:2e2c2e6d3593f6451b18588848e66260ff62ccca522dd231cd4dd59b0160668b", size = 12551765 },
{ url = "https://files.pythonhosted.org/packages/77/ba/c37bc323ae5fe7f3f15a28e06ab012cd0b7552886118943e90b15af31195/mypy-1.15.0-cp310-cp310-win_amd64.whl", hash = "sha256:6983aae8b2f653e098edb77f893f7b6aca69f6cffb19b2cc7443f23cce5f4828", size = 9274701 },
{ url = "https://files.pythonhosted.org/packages/03/bc/f6339726c627bd7ca1ce0fa56c9ae2d0144604a319e0e339bdadafbbb599/mypy-1.15.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:2922d42e16d6de288022e5ca321cd0618b238cfc5570e0263e5ba0a77dbef56f", size = 10662338 },
{ url = "https://files.pythonhosted.org/packages/e2/90/8dcf506ca1a09b0d17555cc00cd69aee402c203911410136cd716559efe7/mypy-1.15.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2ee2d57e01a7c35de00f4634ba1bbf015185b219e4dc5909e281016df43f5ee5", size = 9787540 },
{ url = "https://files.pythonhosted.org/packages/05/05/a10f9479681e5da09ef2f9426f650d7b550d4bafbef683b69aad1ba87457/mypy-1.15.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:973500e0774b85d9689715feeffcc980193086551110fd678ebe1f4342fb7c5e", size = 11538051 },
{ url = "https://files.pythonhosted.org/packages/e9/9a/1f7d18b30edd57441a6411fcbc0c6869448d1a4bacbaee60656ac0fc29c8/mypy-1.15.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5a95fb17c13e29d2d5195869262f8125dfdb5c134dc8d9a9d0aecf7525b10c2c", size = 12286751 },
{ url = "https://files.pythonhosted.org/packages/72/af/19ff499b6f1dafcaf56f9881f7a965ac2f474f69f6f618b5175b044299f5/mypy-1.15.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:1905f494bfd7d85a23a88c5d97840888a7bd516545fc5aaedff0267e0bb54e2f", size = 12421783 },
{ url = "https://files.pythonhosted.org/packages/96/39/11b57431a1f686c1aed54bf794870efe0f6aeca11aca281a0bd87a5ad42c/mypy-1.15.0-cp311-cp311-win_amd64.whl", hash = "sha256:c9817fa23833ff189db061e6d2eff49b2f3b6ed9856b4a0a73046e41932d744f", size = 9265618 },
{ url = "https://files.pythonhosted.org/packages/98/3a/03c74331c5eb8bd025734e04c9840532226775c47a2c39b56a0c8d4f128d/mypy-1.15.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:aea39e0583d05124836ea645f412e88a5c7d0fd77a6d694b60d9b6b2d9f184fd", size = 10793981 },
{ url = "https://files.pythonhosted.org/packages/f0/1a/41759b18f2cfd568848a37c89030aeb03534411eef981df621d8fad08a1d/mypy-1.15.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2f2147ab812b75e5b5499b01ade1f4a81489a147c01585cda36019102538615f", size = 9749175 },
{ url = "https://files.pythonhosted.org/packages/12/7e/873481abf1ef112c582db832740f4c11b2bfa510e829d6da29b0ab8c3f9c/mypy-1.15.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ce436f4c6d218a070048ed6a44c0bbb10cd2cc5e272b29e7845f6a2f57ee4464", size = 11455675 },
{ url = "https://files.pythonhosted.org/packages/b3/d0/92ae4cde706923a2d3f2d6c39629134063ff64b9dedca9c1388363da072d/mypy-1.15.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8023ff13985661b50a5928fc7a5ca15f3d1affb41e5f0a9952cb68ef090b31ee", size = 12410020 },
{ url = "https://files.pythonhosted.org/packages/46/8b/df49974b337cce35f828ba6fda228152d6db45fed4c86ba56ffe442434fd/mypy-1.15.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:1124a18bc11a6a62887e3e137f37f53fbae476dc36c185d549d4f837a2a6a14e", size = 12498582 },
{ url = "https://files.pythonhosted.org/packages/13/50/da5203fcf6c53044a0b699939f31075c45ae8a4cadf538a9069b165c1050/mypy-1.15.0-cp312-cp312-win_amd64.whl", hash = "sha256:171a9ca9a40cd1843abeca0e405bc1940cd9b305eaeea2dda769ba096932bb22", size = 9366614 },
{ url = "https://files.pythonhosted.org/packages/6a/9b/fd2e05d6ffff24d912f150b87db9e364fa8282045c875654ce7e32fffa66/mypy-1.15.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:93faf3fdb04768d44bf28693293f3904bbb555d076b781ad2530214ee53e3445", size = 10788592 },
{ url = "https://files.pythonhosted.org/packages/74/37/b246d711c28a03ead1fd906bbc7106659aed7c089d55fe40dd58db812628/mypy-1.15.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:811aeccadfb730024c5d3e326b2fbe9249bb7413553f15499a4050f7c30e801d", size = 9753611 },
{ url = "https://files.pythonhosted.org/packages/a6/ac/395808a92e10cfdac8003c3de9a2ab6dc7cde6c0d2a4df3df1b815ffd067/mypy-1.15.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:98b7b9b9aedb65fe628c62a6dc57f6d5088ef2dfca37903a7d9ee374d03acca5", size = 11438443 },
{ url = "https://files.pythonhosted.org/packages/d2/8b/801aa06445d2de3895f59e476f38f3f8d610ef5d6908245f07d002676cbf/mypy-1.15.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c43a7682e24b4f576d93072216bf56eeff70d9140241f9edec0c104d0c515036", size = 12402541 },
{ url = "https://files.pythonhosted.org/packages/c7/67/5a4268782eb77344cc613a4cf23540928e41f018a9a1ec4c6882baf20ab8/mypy-1.15.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:baefc32840a9f00babd83251560e0ae1573e2f9d1b067719479bfb0e987c6357", size = 12494348 },
{ url = "https://files.pythonhosted.org/packages/83/3e/57bb447f7bbbfaabf1712d96f9df142624a386d98fb026a761532526057e/mypy-1.15.0-cp313-cp313-win_amd64.whl", hash = "sha256:b9378e2c00146c44793c98b8d5a61039a048e31f429fb0eb546d93f4b000bedf", size = 9373648 },
{ url = "https://files.pythonhosted.org/packages/5a/fa/79cf41a55b682794abe71372151dbbf856e3008f6767057229e6649d294a/mypy-1.15.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:e601a7fa172c2131bff456bb3ee08a88360760d0d2f8cbd7a75a65497e2df078", size = 10737129 },
{ url = "https://files.pythonhosted.org/packages/d3/33/dd8feb2597d648de29e3da0a8bf4e1afbda472964d2a4a0052203a6f3594/mypy-1.15.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:712e962a6357634fef20412699a3655c610110e01cdaa6180acec7fc9f8513ba", size = 9856335 },
{ url = "https://files.pythonhosted.org/packages/e4/b5/74508959c1b06b96674b364ffeb7ae5802646b32929b7701fc6b18447592/mypy-1.15.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f95579473af29ab73a10bada2f9722856792a36ec5af5399b653aa28360290a5", size = 11611935 },
{ url = "https://files.pythonhosted.org/packages/6c/53/da61b9d9973efcd6507183fdad96606996191657fe79701b2c818714d573/mypy-1.15.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8f8722560a14cde92fdb1e31597760dc35f9f5524cce17836c0d22841830fd5b", size = 12365827 },
{ url = "https://files.pythonhosted.org/packages/c1/72/965bd9ee89540c79a25778cc080c7e6ef40aa1eeac4d52cec7eae6eb5228/mypy-1.15.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:1fbb8da62dc352133d7d7ca90ed2fb0e9d42bb1a32724c287d3c76c58cbaa9c2", size = 12541924 },
{ url = "https://files.pythonhosted.org/packages/46/d0/f41645c2eb263e6c77ada7d76f894c580c9ddb20d77f0c24d34273a4dab2/mypy-1.15.0-cp39-cp39-win_amd64.whl", hash = "sha256:d10d994b41fb3497719bbf866f227b3489048ea4bbbb5015357db306249f7980", size = 9271176 },
{ url = "https://files.pythonhosted.org/packages/09/4e/a7d65c7322c510de2c409ff3828b03354a7c43f5a8ed458a7a131b41c7b9/mypy-1.15.0-py3-none-any.whl", hash = "sha256:5469affef548bd1895d86d3bf10ce2b44e33d86923c29e4d675b3e323437ea3e", size = 2221777 },
]
[[package]]