-LangChain is a framework for building LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development, all while future-proofing decisions as the underlying technology evolves.
+LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development, all while future-proofing decisions as the underlying technology evolves.
```bash
-pip install -U langchain
+pip install langchain
```
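Once installed, a minimal usage sketch looks like the following (this assumes the `langchain-openai` provider package is also installed and `OPENAI_API_KEY` is set; the model name is illustrative):

```python
# Minimal sketch - assumes `pip install langchain-openai` has also been run
# and OPENAI_API_KEY is set in the environment. The model name is illustrative.
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
response = model.invoke("Summarize what LangChain does in one sentence.")
print(response.content)
```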
+If you're looking for more advanced customization or agent orchestration, check out [LangGraph](https://docs.langchain.com/oss/python/langgraph/overview), our framework for building controllable agent workflows.
+
---
-**Documentation**: To learn more about LangChain, check out [the docs](https://docs.langchain.com/).
+**Documentation**:
-If you're looking for more advanced customization or agent orchestration, check out [LangGraph](https://langchain-ai.github.io/langgraph/), our framework for building controllable agent workflows.
+- [docs.langchain.com](https://docs.langchain.com/oss/python/langchain/overview) - Comprehensive documentation, including conceptual overviews and guides
+- [reference.langchain.com/python](https://reference.langchain.com/python) - API reference docs for LangChain packages
+
+**Discussions**: Visit the [LangChain Forum](https://forum.langchain.com) to connect with the community and share all of your technical questions, ideas, and feedback.
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
@@ -52,26 +60,27 @@ LangChain helps developers build applications powered by LLMs through a standard
Use LangChain for:
-- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain’s vast library of integrations with model providers, tools, vector stores, retrievers, and more.
-- **Model interoperability**. Swap models in and out as your engineering team experiments to find the best choice for your application’s needs. As the industry frontier evolves, adapt quickly - LangChain’s abstractions keep you moving without losing momentum.
+- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more.
+- **Model interoperability**. Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly - LangChain's abstractions keep you moving without losing momentum. (A short sketch of swapping providers follows this list.)
+- **Rapid prototyping**. Quickly build and iterate on LLM applications with LangChain's modular, component-based architecture. Test different approaches and workflows without rebuilding from scratch, accelerating your development cycle.
+- **Production-ready features**. Deploy reliable applications with built-in support for monitoring, evaluation, and debugging through integrations like LangSmith. Scale with confidence using battle-tested patterns and best practices.
+- **Vibrant community and ecosystem**. Leverage a rich ecosystem of integrations, templates, and community-contributed components. Benefit from continuous improvements and stay up-to-date with the latest AI developments through an active open-source community.
+- **Flexible abstraction layers**. Work at the level of abstraction that suits your needs - from high-level chains for quick starts to low-level components for fine-grained control. LangChain grows with your application's complexity.
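To make the interoperability point concrete, here is a small sketch of swapping providers behind the same interface (assuming the `langchain-openai` and `langchain-anthropic` provider packages are installed with API keys configured; the model names are illustrative):

```python
# Sketch: swapping providers behind one interface via init_chat_model.
# Assumes `langchain-openai` and `langchain-anthropic` are installed and
# OPENAI_API_KEY / ANTHROPIC_API_KEY are set. Model names are illustrative.
from langchain.chat_models import init_chat_model

openai_model = init_chat_model("gpt-4o", model_provider="openai")
anthropic_model = init_chat_model("claude-3-5-sonnet-latest", model_provider="anthropic")

prompt = "Explain retrieval-augmented generation in one sentence."
for model in (openai_model, anthropic_model):
    # The same .invoke() call works regardless of which provider backs the model.
    print(model.invoke(prompt).content)
```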
-## LangChain’s ecosystem
+## LangChain ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
To improve your LLM application development, pair LangChain with:
-- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
-- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows - and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
-- [LangGraph Platform](https://docs.langchain.com/langgraph-platform) - Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams - and iterate quickly with visual prototyping in [LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
+- [LangGraph](https://docs.langchain.com/oss/python/langgraph/overview) - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows - and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab. (A minimal pairing sketch follows this list.)
+- [Integrations](https://docs.langchain.com/oss/python/integrations/providers/overview) - List of LangChain integrations, including chat & embedding models, tools & toolkits, and more
+- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
+- [LangSmith Deployment](https://docs.langchain.com/langsmith/deployments) - Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams - and iterate quickly with visual prototyping in [LangSmith Studio](https://docs.langchain.com/langsmith/studio).
+- [Deep Agents](https://github.com/langchain-ai/deepagents) *(new!)* - Build agents that can plan, use subagents, and leverage file systems for complex tasks
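As a minimal sketch of the LangChain + LangGraph pairing mentioned above, the example below wires a chat model and a toy tool into LangGraph's prebuilt ReAct agent (assuming `langgraph` and `langchain-openai` are installed; `get_weather` is a stand-in for a real tool):

```python
# Sketch: a LangChain chat model driving LangGraph's prebuilt ReAct agent.
# Assumes `langgraph` and `langchain-openai` are installed and OPENAI_API_KEY is set.
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent


def get_weather(city: str) -> str:
    """Return a canned weather report for a city (stand-in for a real tool)."""
    return f"It is always sunny in {city}."


model = init_chat_model("gpt-4o", model_provider="openai")
agent = create_react_agent(model, [get_weather])

result = agent.invoke({"messages": [("user", "What's the weather in Paris?")]})
print(result["messages"][-1].content)
```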
## Additional resources
-- [Conceptual Guides](https://docs.langchain.com/oss/python/langchain/overview): Explanations of key
-concepts behind the LangChain framework.
-- [Tutorials](https://docs.langchain.com/oss/python/learn): Simple walkthroughs with
-guided examples on getting started with LangChain.
-- [API Reference](https://reference.langchain.com/python/): Detailed reference on
-navigating base packages and integrations for LangChain.
-- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
-- [Chat LangChain](https://chat.langchain.com/): Ask questions & chat with our documentation.
+- [API Reference](https://reference.langchain.com/python) - Detailed reference on navigating base packages and integrations for LangChain.
+- [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview) - Learn how to contribute to LangChain projects and find good first issues.
+- [Code of Conduct](https://github.com/langchain-ai/langchain/blob/master/.github/CODE_OF_CONDUCT.md) - Our community guidelines and standards for participation.
diff --git a/SECURITY.md b/SECURITY.md
index 1af4746f78a..c35d9342194 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -55,10 +55,10 @@ All out of scope targets defined by huntr as well as:
* **langchain-experimental**: This repository is for experimental code and is not
eligible for bug bounties (see [package warning](https://pypi.org/project/langchain-experimental/)), bug reports to it will be marked as interesting or waste of
time and published with no bounty attached.
-* **tools**: Tools in either langchain or langchain-community are not eligible for bug
+* **tools**: Tools in either `langchain` or `langchain-community` are not eligible for bug
bounties. This includes the following directories
- * libs/langchain/langchain/tools
- * libs/community/langchain_community/tools
+ * `libs/langchain/langchain/tools`
+ * `libs/community/langchain_community/tools`
* Please review the [Best Practices](#best-practices)
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible
diff --git a/libs/cli/README.md b/libs/cli/README.md
index f86c6ef69d4..7c29748e954 100644
--- a/libs/cli/README.md
+++ b/libs/cli/README.md
@@ -1,6 +1,30 @@
# langchain-cli
-This package implements the official CLI for LangChain. Right now, it is most useful
-for getting started with LangChain Templates!
+[](https://pypi.org/project/langchain-cli/#history)
+[](https://opensource.org/licenses/MIT)
+[](https://pypistats.org/packages/langchain-cli)
+[](https://twitter.com/langchainai)
+
+## Quick Install
+
+```bash
+pip install langchain-cli
+```
+
+## What is this?
+
+This package implements the official CLI for LangChain. Right now, it is most useful for getting started with LangChain Templates!
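For orientation, a couple of typical invocations are sketched below; the exact command set depends on the installed version, so treat these as assumptions and check `langchain --help` for what is actually available.

```bash
# Sketch of common langchain-cli usage - verify against `langchain --help`.
langchain --help          # list available commands
langchain app new my-app  # scaffold a new app from a template
```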
+
+## Documentation
[CLI Docs](https://github.com/langchain-ai/langchain/blob/master/libs/cli/DOCS.md)
+
+## Releases & Versioning
+
+See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.
+
+## Contributing
+
+As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
+
+For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).
diff --git a/libs/cli/langchain_cli/integration_template/README.md b/libs/cli/langchain_cli/integration_template/README.md
index f8d70df8005..15741c62f85 100644
--- a/libs/cli/langchain_cli/integration_template/README.md
+++ b/libs/cli/langchain_cli/integration_template/README.md
@@ -19,8 +19,8 @@ And you should configure credentials by setting the following environment variab
```python
from __module_name__ import Chat__ModuleName__
-llm = Chat__ModuleName__()
-llm.invoke("Sing a ballad of LangChain.")
+model = Chat__ModuleName__()
+model.invoke("Sing a ballad of LangChain.")
```
## Embeddings
@@ -41,6 +41,6 @@ embeddings.embed_query("What is the meaning of life?")
```python
from __module_name__ import __ModuleName__LLM
-llm = __ModuleName__LLM()
-llm.invoke("The meaning of life is")
+model = __ModuleName__LLM()
+model.invoke("The meaning of life is")
```
diff --git a/libs/cli/langchain_cli/integration_template/docs/chat.ipynb b/libs/cli/langchain_cli/integration_template/docs/chat.ipynb
index 86221b5fc63..32f4cb8e1b0 100644
--- a/libs/cli/langchain_cli/integration_template/docs/chat.ipynb
+++ b/libs/cli/langchain_cli/integration_template/docs/chat.ipynb
@@ -1,262 +1,264 @@
{
- "cells": [
- {
- "cell_type": "raw",
- "id": "afaf8039",
- "metadata": {},
- "source": [
- "---\n",
- "sidebar_label: __ModuleName__\n",
- "---"
- ]
+ "cells": [
+ {
+ "cell_type": "raw",
+ "id": "afaf8039",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "sidebar_label: __ModuleName__\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e49f1e0d",
+ "metadata": {},
+ "source": [
+ "# Chat__ModuleName__\n",
+ "\n",
+ "- TODO: Make sure API reference link is correct.\n",
+ "\n",
+ "This will help you get started with __ModuleName__ [chat models](/docs/concepts/chat_models). For detailed documentation of all Chat__ModuleName__ features and configurations head to the [API reference](https://python.langchain.com/api_reference/__package_name_short_snake__/chat_models/__module_name__.chat_models.Chat__ModuleName__.html).\n",
+ "\n",
+ "- TODO: Add any other relevant links, like information about models, prices, context windows, etc. See https://python.langchain.com/docs/integrations/chat/openai/ for an example.\n",
+ "\n",
+ "## Overview\n",
+ "### Integration details\n",
+ "\n",
+ "- TODO: Fill in table features.\n",
+ "- TODO: Remove JS support link if not relevant, otherwise ensure link is correct.\n",
+ "- TODO: Make sure API reference links are correct.\n",
+ "\n",
+ "| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/__package_name_short_snake__) | Package downloads | Package latest |\n",
+ "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
+      "| [Chat__ModuleName__](https://python.langchain.com/api_reference/__package_name_short_snake__/chat_models/__module_name__.chat_models.Chat__ModuleName__.html) | [__package_name__](https://python.langchain.com/api_reference/__package_name_short_snake__/) | ✅/❌ | beta/❌ | ✅/❌ |  |  |\n",
+ "\n",
+ "### Model features\n",
+ "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
+ "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
+      "| ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ |\n",
+ "\n",
+ "## Setup\n",
+ "\n",
+ "- TODO: Update with relevant info.\n",
+ "\n",
+ "To access __ModuleName__ models you'll need to create a/an __ModuleName__ account, get an API key, and install the `__package_name__` integration package.\n",
+ "\n",
+ "### Credentials\n",
+ "\n",
+ "- TODO: Update with relevant info.\n",
+ "\n",
+ "Head to (TODO: link) to sign up to __ModuleName__ and generate an API key. Once you've done this set the __MODULE_NAME___API_KEY environment variable:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import getpass\n",
+ "import os\n",
+ "\n",
+ "if not os.getenv(\"__MODULE_NAME___API_KEY\"):\n",
+ " os.environ[\"__MODULE_NAME___API_KEY\"] = getpass.getpass(\n",
+ " \"Enter your __ModuleName__ API key: \"\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "72ee0c4b-9764-423a-9dbf-95129e185210",
+ "metadata": {},
+ "source": [
+ "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
+ "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0730d6a1-c893-4840-9817-5e5251676d5d",
+ "metadata": {},
+ "source": [
+ "### Installation\n",
+ "\n",
+ "The LangChain __ModuleName__ integration lives in the `__package_name__` package:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "652d6238-1f87-422a-b135-f5abbb8652fc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install -qU __package_name__"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a38cde65-254d-4219-a441-068766c0d4b5",
+ "metadata": {},
+ "source": [
+ "## Instantiation\n",
+ "\n",
+ "Now we can instantiate our model object and generate chat completions:\n",
+ "\n",
+ "- TODO: Update model instantiation with relevant params."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from __module_name__ import Chat__ModuleName__\n",
+ "\n",
+ "model = Chat__ModuleName__(\n",
+ " model=\"model-name\",\n",
+ " temperature=0,\n",
+ " max_tokens=None,\n",
+ " timeout=None,\n",
+ " max_retries=2,\n",
+ " # other params...\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2b4f3e15",
+ "metadata": {},
+ "source": [
+ "## Invocation\n",
+ "\n",
+ "- TODO: Run cells so output can be seen."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "62e0dbc3",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "messages = [\n",
+ " (\n",
+ " \"system\",\n",
+ " \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
+ " ),\n",
+ " (\"human\", \"I love programming.\"),\n",
+ "]\n",
+ "ai_msg = model.invoke(messages)\n",
+ "ai_msg"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(ai_msg.content)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
+ "metadata": {},
+ "source": [
+ "## Chaining\n",
+ "\n",
+ "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:\n",
+ "\n",
+ "- TODO: Run cells so output can be seen."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain_core.prompts import ChatPromptTemplate\n",
+ "\n",
+ "prompt = ChatPromptTemplate(\n",
+ " [\n",
+ " (\n",
+ " \"system\",\n",
+ " \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
+ " ),\n",
+ " (\"human\", \"{input}\"),\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "chain = prompt | model\n",
+ "chain.invoke(\n",
+ " {\n",
+ " \"input_language\": \"English\",\n",
+ " \"output_language\": \"German\",\n",
+ " \"input\": \"I love programming.\",\n",
+ " }\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
+ "metadata": {},
+ "source": [
+ "## TODO: Any functionality specific to this model provider\n",
+ "\n",
+ "E.g. creating/using finetuned models via this provider. Delete if not relevant."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
+ "metadata": {},
+ "source": [
+ "## API reference\n",
+ "\n",
+ "For detailed documentation of all Chat__ModuleName__ features and configurations head to the [API reference](https://python.langchain.com/api_reference/__package_name_short_snake__/chat_models/__module_name__.chat_models.Chat__ModuleName__.html)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.9"
+ }
},
- {
- "cell_type": "markdown",
- "id": "e49f1e0d",
- "metadata": {},
- "source": [
- "# Chat__ModuleName__\n",
- "\n",
- "- TODO: Make sure API reference link is correct.\n",
- "\n",
- "This will help you get started with __ModuleName__ [chat models](/docs/concepts/chat_models). For detailed documentation of all Chat__ModuleName__ features and configurations head to the [API reference](https://python.langchain.com/api_reference/__package_name_short_snake__/chat_models/__module_name__.chat_models.Chat__ModuleName__.html).\n",
- "\n",
- "- TODO: Add any other relevant links, like information about models, prices, context windows, etc. See https://python.langchain.com/docs/integrations/chat/openai/ for an example.\n",
- "\n",
- "## Overview\n",
- "### Integration details\n",
- "\n",
- "- TODO: Fill in table features.\n",
- "- TODO: Remove JS support link if not relevant, otherwise ensure link is correct.\n",
- "- TODO: Make sure API reference links are correct.\n",
- "\n",
- "| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/__package_name_short_snake__) | Package downloads | Package latest |\n",
- "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
-    "| [Chat__ModuleName__](https://python.langchain.com/api_reference/__package_name_short_snake__/chat_models/__module_name__.chat_models.Chat__ModuleName__.html) | [__package_name__](https://python.langchain.com/api_reference/__package_name_short_snake__/) | ✅/❌ | beta/❌ | ✅/❌ |  |  |\n",
- "\n",
- "### Model features\n",
- "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
- "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
-    "| ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ |\n",
- "\n",
- "## Setup\n",
- "\n",
- "- TODO: Update with relevant info.\n",
- "\n",
- "To access __ModuleName__ models you'll need to create a/an __ModuleName__ account, get an API key, and install the `__package_name__` integration package.\n",
- "\n",
- "### Credentials\n",
- "\n",
- "- TODO: Update with relevant info.\n",
- "\n",
- "Head to (TODO: link) to sign up to __ModuleName__ and generate an API key. Once you've done this set the __MODULE_NAME___API_KEY environment variable:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
- "metadata": {},
- "outputs": [],
- "source": [
- "import getpass\n",
- "import os\n",
- "\n",
- "if not os.getenv(\"__MODULE_NAME___API_KEY\"):\n",
- " os.environ[\"__MODULE_NAME___API_KEY\"] = getpass.getpass(\n",
- " \"Enter your __ModuleName__ API key: \"\n",
- " )"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "72ee0c4b-9764-423a-9dbf-95129e185210",
- "metadata": {},
- "source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
- "metadata": {},
- "outputs": [],
- "source": [
- "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
- "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "0730d6a1-c893-4840-9817-5e5251676d5d",
- "metadata": {},
- "source": [
- "### Installation\n",
- "\n",
- "The LangChain __ModuleName__ integration lives in the `__package_name__` package:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "652d6238-1f87-422a-b135-f5abbb8652fc",
- "metadata": {},
- "outputs": [],
- "source": [
- "%pip install -qU __package_name__"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "a38cde65-254d-4219-a441-068766c0d4b5",
- "metadata": {},
- "source": [
- "## Instantiation\n",
- "\n",
- "Now we can instantiate our model object and generate chat completions:\n",
- "\n",
- "- TODO: Update model instantiation with relevant params."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
- "metadata": {},
- "outputs": [],
- "source": [
- "from __module_name__ import Chat__ModuleName__\n",
- "\n",
- "llm = Chat__ModuleName__(\n",
- " model=\"model-name\",\n",
- " temperature=0,\n",
- " max_tokens=None,\n",
- " timeout=None,\n",
- " max_retries=2,\n",
- " # other params...\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "2b4f3e15",
- "metadata": {},
- "source": [
- "## Invocation\n",
- "\n",
- "- TODO: Run cells so output can be seen."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "62e0dbc3",
- "metadata": {
- "tags": []
- },
- "outputs": [],
- "source": [
- "messages = [\n",
- " (\n",
- " \"system\",\n",
- " \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
- " ),\n",
- " (\"human\", \"I love programming.\"),\n",
- "]\n",
- "ai_msg = llm.invoke(messages)\n",
- "ai_msg"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
- "metadata": {},
- "outputs": [],
- "source": [
- "print(ai_msg.content)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
- "metadata": {},
- "source": [
- "## Chaining\n",
- "\n",
- "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:\n",
- "\n",
- "- TODO: Run cells so output can be seen."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_core.prompts import ChatPromptTemplate\n",
- "\n",
- "prompt = ChatPromptTemplate(\n",
- " [\n",
- " (\n",
- " \"system\",\n",
- " \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
- " ),\n",
- " (\"human\", \"{input}\"),\n",
- " ]\n",
- ")\n",
- "\n",
- "chain = prompt | llm\n",
- "chain.invoke(\n",
- " {\n",
- " \"input_language\": \"English\",\n",
- " \"output_language\": \"German\",\n",
- " \"input\": \"I love programming.\",\n",
- " }\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
- "metadata": {},
- "source": [
- "## TODO: Any functionality specific to this model provider\n",
- "\n",
- "E.g. creating/using finetuned models via this provider. Delete if not relevant."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
- "metadata": {},
- "source": [
- "## API reference\n",
- "\n",
- "For detailed documentation of all Chat__ModuleName__ features and configurations head to the [API reference](https://python.langchain.com/api_reference/__package_name_short_snake__/chat_models/__module_name__.chat_models.Chat__ModuleName__.html)"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.11.9"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 5
+ "nbformat": 4,
+ "nbformat_minor": 5
}
diff --git a/libs/cli/langchain_cli/integration_template/docs/llms.ipynb b/libs/cli/langchain_cli/integration_template/docs/llms.ipynb
index 217929fff71..ffeff84e280 100644
--- a/libs/cli/langchain_cli/integration_template/docs/llms.ipynb
+++ b/libs/cli/langchain_cli/integration_template/docs/llms.ipynb
@@ -1,236 +1,238 @@
{
- "cells": [
- {
- "cell_type": "raw",
- "id": "67db2992",
- "metadata": {},
- "source": [
- "---\n",
- "sidebar_label: __ModuleName__\n",
- "---"
- ]
+ "cells": [
+ {
+ "cell_type": "raw",
+ "id": "67db2992",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "sidebar_label: __ModuleName__\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9597802c",
+ "metadata": {},
+ "source": [
+ "# __ModuleName__LLM\n",
+ "\n",
+ "- [ ] TODO: Make sure API reference link is correct\n",
+ "\n",
+ "This will help you get started with __ModuleName__ completion models (LLMs) using LangChain. For detailed documentation on `__ModuleName__LLM` features and configuration options, please refer to the [API reference](https://api.python.langchain.com/en/latest/llms/__module_name__.llms.__ModuleName__LLM.html).\n",
+ "\n",
+ "## Overview\n",
+ "### Integration details\n",
+ "\n",
+ "- TODO: Fill in table features.\n",
+ "- TODO: Remove JS support link if not relevant, otherwise ensure link is correct.\n",
+ "- TODO: Make sure API reference links are correct.\n",
+ "\n",
+ "| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/llms/__package_name_short_snake__) | Package downloads | Package latest |\n",
+ "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
+      "| [__ModuleName__LLM](https://api.python.langchain.com/en/latest/llms/__module_name__.llms.__ModuleName__LLM.html) | [__package_name__](https://api.python.langchain.com/en/latest/__package_name_short_snake___api_reference.html) | ✅/❌ | beta/❌ | ✅/❌ |  |  |\n",
+ "\n",
+ "## Setup\n",
+ "\n",
+ "- TODO: Update with relevant info.\n",
+ "\n",
+ "To access __ModuleName__ models you'll need to create a/an __ModuleName__ account, get an API key, and install the `__package_name__` integration package.\n",
+ "\n",
+ "### Credentials\n",
+ "\n",
+ "- TODO: Update with relevant info.\n",
+ "\n",
+ "Head to (TODO: link) to sign up to __ModuleName__ and generate an API key. Once you've done this set the __MODULE_NAME___API_KEY environment variable:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "bc51e756",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import getpass\n",
+ "import os\n",
+ "\n",
+ "if not os.getenv(\"__MODULE_NAME___API_KEY\"):\n",
+ " os.environ[\"__MODULE_NAME___API_KEY\"] = getpass.getpass(\n",
+ " \"Enter your __ModuleName__ API key: \"\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4b6e1ca6",
+ "metadata": {},
+ "source": [
+ "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "196c2b41",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
+ "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "809c6577",
+ "metadata": {},
+ "source": [
+ "### Installation\n",
+ "\n",
+ "The LangChain __ModuleName__ integration lives in the `__package_name__` package:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "59c710c4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install -qU __package_name__"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0a760037",
+ "metadata": {},
+ "source": [
+ "## Instantiation\n",
+ "\n",
+ "Now we can instantiate our model object and generate chat completions:\n",
+ "\n",
+ "- TODO: Update model instantiation with relevant params."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a0562a13",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from __module_name__ import __ModuleName__LLM\n",
+ "\n",
+ "model = __ModuleName__LLM(\n",
+ " model=\"model-name\",\n",
+ " temperature=0,\n",
+ " max_tokens=None,\n",
+ " timeout=None,\n",
+ " max_retries=2,\n",
+ " # other params...\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0ee90032",
+ "metadata": {},
+ "source": [
+ "## Invocation\n",
+ "\n",
+ "- [ ] TODO: Run cells so output can be seen."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "035dea0f",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "input_text = \"__ModuleName__ is an AI company that \"\n",
+ "\n",
+ "completion = model.invoke(input_text)\n",
+ "completion"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "add38532",
+ "metadata": {},
+ "source": [
+ "## Chaining\n",
+ "\n",
+ "We can [chain](/docs/how_to/sequence/) our completion model with a prompt template like so:\n",
+ "\n",
+ "- TODO: Run cells so output can be seen."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "078e9db2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain_core.prompts import PromptTemplate\n",
+ "\n",
+ "prompt = PromptTemplate(\"How to say {input} in {output_language}:\\n\")\n",
+ "\n",
+ "chain = prompt | model\n",
+ "chain.invoke(\n",
+ " {\n",
+ " \"output_language\": \"German\",\n",
+ " \"input\": \"I love programming.\",\n",
+ " }\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e99eef30",
+ "metadata": {},
+ "source": [
+ "## TODO: Any functionality specific to this model provider\n",
+ "\n",
+ "E.g. creating/using finetuned models via this provider. Delete if not relevant"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e9bdfcef",
+ "metadata": {},
+ "source": [
+ "## API reference\n",
+ "\n",
+ "For detailed documentation of all `__ModuleName__LLM` features and configurations head to the API reference: https://api.python.langchain.com/en/latest/llms/__module_name__.llms.__ModuleName__LLM.html"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3.11.1 64-bit",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.7"
+ },
+ "vscode": {
+ "interpreter": {
+ "hash": "e971737741ff4ec9aff7dc6155a1060a59a8a6d52c757dbbe66bf8ee389494b1"
+ }
+ }
},
- {
- "cell_type": "markdown",
- "id": "9597802c",
- "metadata": {},
- "source": [
- "# __ModuleName__LLM\n",
- "\n",
- "- [ ] TODO: Make sure API reference link is correct\n",
- "\n",
- "This will help you get started with __ModuleName__ completion models (LLMs) using LangChain. For detailed documentation on `__ModuleName__LLM` features and configuration options, please refer to the [API reference](https://api.python.langchain.com/en/latest/llms/__module_name__.llms.__ModuleName__LLM.html).\n",
- "\n",
- "## Overview\n",
- "### Integration details\n",
- "\n",
- "- TODO: Fill in table features.\n",
- "- TODO: Remove JS support link if not relevant, otherwise ensure link is correct.\n",
- "- TODO: Make sure API reference links are correct.\n",
- "\n",
- "| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/llms/__package_name_short_snake__) | Package downloads | Package latest |\n",
- "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
-    "| [__ModuleName__LLM](https://api.python.langchain.com/en/latest/llms/__module_name__.llms.__ModuleName__LLM.html) | [__package_name__](https://api.python.langchain.com/en/latest/__package_name_short_snake___api_reference.html) | ✅/❌ | beta/❌ | ✅/❌ |  |  |\n",
- "\n",
- "## Setup\n",
- "\n",
- "- TODO: Update with relevant info.\n",
- "\n",
- "To access __ModuleName__ models you'll need to create a/an __ModuleName__ account, get an API key, and install the `__package_name__` integration package.\n",
- "\n",
- "### Credentials\n",
- "\n",
- "- TODO: Update with relevant info.\n",
- "\n",
- "Head to (TODO: link) to sign up to __ModuleName__ and generate an API key. Once you've done this set the __MODULE_NAME___API_KEY environment variable:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "bc51e756",
- "metadata": {},
- "outputs": [],
- "source": [
- "import getpass\n",
- "import os\n",
- "\n",
- "if not os.getenv(\"__MODULE_NAME___API_KEY\"):\n",
- " os.environ[\"__MODULE_NAME___API_KEY\"] = getpass.getpass(\n",
- " \"Enter your __ModuleName__ API key: \"\n",
- " )"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "4b6e1ca6",
- "metadata": {},
- "source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "196c2b41",
- "metadata": {},
- "outputs": [],
- "source": [
- "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
- "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "809c6577",
- "metadata": {},
- "source": [
- "### Installation\n",
- "\n",
- "The LangChain __ModuleName__ integration lives in the `__package_name__` package:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "59c710c4",
- "metadata": {},
- "outputs": [],
- "source": [
- "%pip install -qU __package_name__"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "0a760037",
- "metadata": {},
- "source": [
- "## Instantiation\n",
- "\n",
- "Now we can instantiate our model object and generate chat completions:\n",
- "\n",
- "- TODO: Update model instantiation with relevant params."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "a0562a13",
- "metadata": {},
- "outputs": [],
- "source": [
- "from __module_name__ import __ModuleName__LLM\n",
- "\n",
- "llm = __ModuleName__LLM(\n",
- " model=\"model-name\",\n",
- " temperature=0,\n",
- " max_tokens=None,\n",
- " timeout=None,\n",
- " max_retries=2,\n",
- " # other params...\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "0ee90032",
- "metadata": {},
- "source": [
- "## Invocation\n",
- "\n",
- "- [ ] TODO: Run cells so output can be seen."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "id": "035dea0f",
- "metadata": {
- "tags": []
- },
- "outputs": [],
- "source": [
- "input_text = \"__ModuleName__ is an AI company that \"\n",
- "\n",
- "completion = llm.invoke(input_text)\n",
- "completion"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "add38532",
- "metadata": {},
- "source": [
- "## Chaining\n",
- "\n",
- "We can [chain](/docs/how_to/sequence/) our completion model with a prompt template like so:\n",
- "\n",
- "- TODO: Run cells so output can be seen."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "078e9db2",
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_core.prompts import PromptTemplate\n",
- "\n",
- "prompt = PromptTemplate(\"How to say {input} in {output_language}:\\n\")\n",
- "\n",
- "chain = prompt | llm\n",
- "chain.invoke(\n",
- " {\n",
- " \"output_language\": \"German\",\n",
- " \"input\": \"I love programming.\",\n",
- " }\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "e99eef30",
- "metadata": {},
- "source": [
- "## TODO: Any functionality specific to this model provider\n",
- "\n",
- "E.g. creating/using finetuned models via this provider. Delete if not relevant"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "e9bdfcef",
- "metadata": {},
- "source": [
- "## API reference\n",
- "\n",
- "For detailed documentation of all `__ModuleName__LLM` features and configurations head to the API reference: https://api.python.langchain.com/en/latest/llms/__module_name__.llms.__ModuleName__LLM.html"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3.11.1 64-bit",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.9.7"
- },
- "vscode": {
- "interpreter": {
- "hash": "e971737741ff4ec9aff7dc6155a1060a59a8a6d52c757dbbe66bf8ee389494b1"
- }
- }
- },
- "nbformat": 4,
- "nbformat_minor": 5
+ "nbformat": 4,
+ "nbformat_minor": 5
}
diff --git a/libs/cli/langchain_cli/integration_template/docs/retrievers.ipynb b/libs/cli/langchain_cli/integration_template/docs/retrievers.ipynb
index 300d3d87958..254633bdf23 100644
--- a/libs/cli/langchain_cli/integration_template/docs/retrievers.ipynb
+++ b/libs/cli/langchain_cli/integration_template/docs/retrievers.ipynb
@@ -155,7 +155,7 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
- "llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
+ "model = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
@@ -185,7 +185,7 @@
"chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
- " | llm\n",
+ " | model\n",
" | StrOutputParser()\n",
")"
]
diff --git a/libs/cli/langchain_cli/integration_template/docs/stores.ipynb b/libs/cli/langchain_cli/integration_template/docs/stores.ipynb
index 5daa0568c4a..e250dcfa627 100644
--- a/libs/cli/langchain_cli/integration_template/docs/stores.ipynb
+++ b/libs/cli/langchain_cli/integration_template/docs/stores.ipynb
@@ -1,204 +1,204 @@
{
- "cells": [
- {
- "cell_type": "raw",
- "metadata": {
- "vscode": {
- "languageId": "raw"
+ "cells": [
+ {
+ "cell_type": "raw",
+ "metadata": {
+ "vscode": {
+ "languageId": "raw"
+ }
+ },
+ "source": [
+ "---\n",
+ "sidebar_label: __ModuleName__ByteStore\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# __ModuleName__ByteStore\n",
+ "\n",
+ "- TODO: Make sure API reference link is correct.\n",
+ "\n",
+ "This will help you get started with __ModuleName__ [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all __ModuleName__ByteStore features and configurations head to the [API reference](https://python.langchain.com/v0.2/api_reference/core/stores/langchain_core.stores.__module_name__ByteStore.html).\n",
+ "\n",
+ "- TODO: Add any other relevant links, like information about models, prices, context windows, etc. See https://python.langchain.com/docs/integrations/stores/in_memory/ for an example.\n",
+ "\n",
+ "## Overview\n",
+ "\n",
+ "- TODO: (Optional) A short introduction to the underlying technology/API.\n",
+ "\n",
+ "### Integration details\n",
+ "\n",
+ "- TODO: Fill in table features.\n",
+ "- TODO: Remove JS support link if not relevant, otherwise ensure link is correct.\n",
+ "- TODO: Make sure API reference links are correct.\n",
+ "\n",
+ "| Class | Package | Local | [JS support](https://js.langchain.com/docs/integrations/stores/_package_name_) | Package downloads | Package latest |\n",
+ "| :--- | :--- | :---: | :---: | :---: | :---: |\n",
+      "| [__ModuleName__ByteStore](https://api.python.langchain.com/en/latest/stores/__module_name__.stores.__ModuleName__ByteStore.html) | [__package_name__](https://api.python.langchain.com/en/latest/__package_name_short_snake___api_reference.html) | ✅/❌ | ✅/❌ |  |  |\n",
+ "\n",
+ "## Setup\n",
+ "\n",
+ "- TODO: Update with relevant info.\n",
+ "\n",
+ "To create a __ModuleName__ byte store, you'll need to create a/an __ModuleName__ account, get an API key, and install the `__package_name__` integration package.\n",
+ "\n",
+ "### Credentials\n",
+ "\n",
+ "- TODO: Update with relevant info, or omit if the service does not require any credentials.\n",
+ "\n",
+ "Head to (TODO: link) to sign up to __ModuleName__ and generate an API key. Once you've done this set the __MODULE_NAME___API_KEY environment variable:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import getpass\n",
+ "import os\n",
+ "\n",
+ "if not os.getenv(\"__MODULE_NAME___API_KEY\"):\n",
+ " os.environ[\"__MODULE_NAME___API_KEY\"] = getpass.getpass(\n",
+ " \"Enter your __ModuleName__ API key: \"\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Installation\n",
+ "\n",
+ "The LangChain __ModuleName__ integration lives in the `__package_name__` package:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install -qU __package_name__"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Instantiation\n",
+ "\n",
+ "Now we can instantiate our byte store:\n",
+ "\n",
+ "- TODO: Update model instantiation with relevant params."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from __module_name__ import __ModuleName__ByteStore\n",
+ "\n",
+ "kv_store = __ModuleName__ByteStore(\n",
+ " # params...\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Usage\n",
+ "\n",
+ "- TODO: Run cells so output can be seen.\n",
+ "\n",
+ "You can set data under keys like this using the `mset` method:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "kv_store.mset(\n",
+ " [\n",
+ " [\"key1\", b\"value1\"],\n",
+ " [\"key2\", b\"value2\"],\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "kv_store.mget(\n",
+ " [\n",
+ " \"key1\",\n",
+ " \"key2\",\n",
+ " ]\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And you can delete data using the `mdelete` method:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "kv_store.mdelete(\n",
+ " [\n",
+ " \"key1\",\n",
+ " \"key2\",\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "kv_store.mget(\n",
+ " [\n",
+ " \"key1\",\n",
+ " \"key2\",\n",
+ " ]\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## TODO: Any functionality specific to this key-value store provider\n",
+ "\n",
+ "E.g. extra initialization. Delete if not relevant."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## API reference\n",
+ "\n",
+ "For detailed documentation of all __ModuleName__ByteStore features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/stores/__module_name__.stores.__ModuleName__ByteStore.html"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "name": "python",
+ "version": "3.10.5"
}
- },
- "source": [
- "---\n",
- "sidebar_label: __ModuleName__ByteStore\n",
- "---"
- ]
},
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# __ModuleName__ByteStore\n",
- "\n",
- "- TODO: Make sure API reference link is correct.\n",
- "\n",
- "This will help you get started with __ModuleName__ [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all __ModuleName__ByteStore features and configurations head to the [API reference](https://python.langchain.com/v0.2/api_reference/core/stores/langchain_core.stores.__module_name__ByteStore.html).\n",
- "\n",
- "- TODO: Add any other relevant links, like information about models, prices, context windows, etc. See https://python.langchain.com/docs/integrations/stores/in_memory/ for an example.\n",
- "\n",
- "## Overview\n",
- "\n",
- "- TODO: (Optional) A short introduction to the underlying technology/API.\n",
- "\n",
- "### Integration details\n",
- "\n",
- "- TODO: Fill in table features.\n",
- "- TODO: Remove JS support link if not relevant, otherwise ensure link is correct.\n",
- "- TODO: Make sure API reference links are correct.\n",
- "\n",
- "| Class | Package | Local | [JS support](https://js.langchain.com/docs/integrations/stores/_package_name_) | Package downloads | Package latest |\n",
- "| :--- | :--- | :---: | :---: | :---: | :---: |\n",
-    "| [__ModuleName__ByteStore](https://api.python.langchain.com/en/latest/stores/__module_name__.stores.__ModuleName__ByteStore.html) | [__package_name__](https://api.python.langchain.com/en/latest/__package_name_short_snake___api_reference.html) | ✅/❌ | ✅/❌ |  |  |\n",
- "\n",
- "## Setup\n",
- "\n",
- "- TODO: Update with relevant info.\n",
- "\n",
- "To create a __ModuleName__ byte store, you'll need to create a/an __ModuleName__ account, get an API key, and install the `__package_name__` integration package.\n",
- "\n",
- "### Credentials\n",
- "\n",
- "- TODO: Update with relevant info, or omit if the service does not require any credentials.\n",
- "\n",
- "Head to (TODO: link) to sign up to __ModuleName__ and generate an API key. Once you've done this set the __MODULE_NAME___API_KEY environment variable:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import getpass\n",
- "import os\n",
- "\n",
- "if not os.getenv(\"__MODULE_NAME___API_KEY\"):\n",
- " os.environ[\"__MODULE_NAME___API_KEY\"] = getpass.getpass(\n",
- " \"Enter your __ModuleName__ API key: \"\n",
- " )"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Installation\n",
- "\n",
- "The LangChain __ModuleName__ integration lives in the `__package_name__` package:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "%pip install -qU __package_name__"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Instantiation\n",
- "\n",
- "Now we can instantiate our byte store:\n",
- "\n",
- "- TODO: Update model instantiation with relevant params."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from __module_name__ import __ModuleName__ByteStore\n",
- "\n",
- "kv_store = __ModuleName__ByteStore(\n",
- " # params...\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Usage\n",
- "\n",
- "- TODO: Run cells so output can be seen.\n",
- "\n",
- "You can set data under keys like this using the `mset` method:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "kv_store.mset(\n",
- " [\n",
- " [\"key1\", b\"value1\"],\n",
- " [\"key2\", b\"value2\"],\n",
- " ]\n",
- ")\n",
- "\n",
- "kv_store.mget(\n",
- " [\n",
- " \"key1\",\n",
- " \"key2\",\n",
- " ]\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "And you can delete data using the `mdelete` method:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "kv_store.mdelete(\n",
- " [\n",
- " \"key1\",\n",
- " \"key2\",\n",
- " ]\n",
- ")\n",
- "\n",
- "kv_store.mget(\n",
- " [\n",
- " \"key1\",\n",
- " \"key2\",\n",
- " ]\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## TODO: Any functionality specific to this key-value store provider\n",
- "\n",
- "E.g. extra initialization. Delete if not relevant."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## API reference\n",
- "\n",
- "For detailed documentation of all __ModuleName__ByteStore features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/stores/__module_name__.stores.__ModuleName__ByteStore.html"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "name": "python",
- "version": "3.10.5"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
+ "nbformat": 4,
+ "nbformat_minor": 2
}
diff --git a/libs/cli/langchain_cli/integration_template/docs/tools.ipynb b/libs/cli/langchain_cli/integration_template/docs/tools.ipynb
index a160b95658a..0310a839850 100644
--- a/libs/cli/langchain_cli/integration_template/docs/tools.ipynb
+++ b/libs/cli/langchain_cli/integration_template/docs/tools.ipynb
@@ -1,271 +1,271 @@
{
- "cells": [
- {
- "cell_type": "raw",
- "id": "10238e62-3465-4973-9279-606cbb7ccf16",
- "metadata": {},
- "source": [
- "---\n",
- "sidebar_label: __ModuleName__\n",
- "---"
- ]
+ "cells": [
+ {
+ "cell_type": "raw",
+ "id": "10238e62-3465-4973-9279-606cbb7ccf16",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "sidebar_label: __ModuleName__\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a6f91f20",
+ "metadata": {},
+ "source": [
+ "# __ModuleName__\n",
+ "\n",
+ "- TODO: Make sure API reference link is correct.\n",
+ "\n",
+ "This notebook provides a quick overview for getting started with __ModuleName__ [tool](/docs/integrations/tools/). For detailed documentation of all __ModuleName__ features and configurations head to the [API reference](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.__module_name__.tool.__ModuleName__.html).\n",
+ "\n",
+ "- TODO: Add any other relevant links, like information about underlying API, etc.\n",
+ "\n",
+ "## Overview\n",
+ "\n",
+ "### Integration details\n",
+ "\n",
+ "- TODO: Make sure links and features are correct\n",
+ "\n",
+ "| Class | Package | Serializable | [JS support](https://js.langchain.com/docs/integrations/tools/__module_name__) | Package latest |\n",
+ "| :--- | :--- | :---: | :---: | :---: |\n",
+      "| [__ModuleName__](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.__module_name__.tool.__ModuleName__.html) | [langchain-community](https://api.python.langchain.com/en/latest/community_api_reference.html) | beta/❌ | ✅/❌ |  |\n",
+ "\n",
+ "### Tool features\n",
+ "\n",
+ "- TODO: Add feature table if it makes sense\n",
+ "\n",
+ "\n",
+ "## Setup\n",
+ "\n",
+ "- TODO: Add any additional deps\n",
+ "\n",
+ "The integration lives in the `langchain-community` package."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f85b4089",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install --quiet -U langchain-community"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b15e9266",
+ "metadata": {},
+ "source": [
+ "### Credentials\n",
+ "\n",
+ "- TODO: Add any credentials that are needed"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "e0b178a2-8816-40ca-b57c-ccdd86dde9c9",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import getpass\n",
+ "import os\n",
+ "\n",
+ "# if not os.environ.get(\"__MODULE_NAME___API_KEY\"):\n",
+ "# os.environ[\"__MODULE_NAME___API_KEY\"] = getpass.getpass(\"__MODULE_NAME__ API key:\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bc5ab717-fd27-4c59-b912-bdd099541478",
+ "metadata": {},
+ "source": [
+ "It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "a6c2f136-6367-4f1f-825d-ae741e1bf281",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
+ "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1c97218f-f366-479d-8bf7-fe9f2f6df73f",
+ "metadata": {},
+ "source": [
+ "## Instantiation\n",
+ "\n",
+ "- TODO: Fill in instantiation params\n",
+ "\n",
+ "Here we show how to instantiate an instance of the __ModuleName__ tool, with "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "8b3ddfe9-ca79-494c-a7ab-1f56d9407a64",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain_community.tools import __ModuleName__\n",
+ "\n",
+ "\n",
+ "tool = __ModuleName__(...)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "74147a1a",
+ "metadata": {},
+ "source": [
+ "## Invocation\n",
+ "\n",
+ "### [Invoke directly with args](/docs/concepts/tools/#use-the-tool-directly)\n",
+ "\n",
+ "- TODO: Describe what the tool args are, fill them in, run cell"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "65310a8b-eb0c-4d9e-a618-4f4abe2414fc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "tool.invoke({...})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d6e73897",
+ "metadata": {},
+ "source": [
+ "### [Invoke with ToolCall](/docs/concepts/tool_calling/#tool-execution)\n",
+ "\n",
+ "We can also invoke the tool with a model-generated ToolCall, in which case a ToolMessage will be returned:\n",
+ "\n",
+ "- TODO: Fill in tool args and run cell"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f90e33a7",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# This is usually generated by a model, but we'll create a tool call directly for demo purposes.\n",
+ "model_generated_tool_call = {\n",
+ " \"args\": {...}, # TODO: FILL IN\n",
+ " \"id\": \"1\",\n",
+ " \"name\": tool.name,\n",
+ " \"type\": \"tool_call\",\n",
+ "}\n",
+ "tool.invoke(model_generated_tool_call)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "659f9fbd-6fcf-445f-aa8c-72d8e60154bd",
+ "metadata": {},
+ "source": [
+ "## Use within an agent\n",
+ "\n",
+ "- TODO: Add user question and run cells\n",
+ "\n",
+ "We can use our tool in an [agent](/docs/concepts/agents/). For this we will need a LLM with [tool-calling](/docs/how_to/tool_calling/) capabilities:\n",
+ "\n",
+ "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "af3123ad-7a02-40e5-b58e-7d56e23e5830",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# | output: false\n",
+ "# | echo: false\n",
+ "\n",
+ "# !pip install -qU langchain langchain-openai\n",
+ "from langchain.chat_models import init_chat_model\n",
+ "\n",
+ "model = init_chat_model(model=\"gpt-4o\", model_provider=\"openai\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "bea35fa1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langgraph.prebuilt import create_react_agent\n",
+ "\n",
+ "tools = [tool]\n",
+ "agent = create_react_agent(model, tools)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "fdbf35b5-3aaf-4947-9ec6-48c21533fb95",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "example_query = \"...\"\n",
+ "\n",
+ "events = agent.stream(\n",
+ " {\"messages\": [(\"user\", example_query)]},\n",
+ " stream_mode=\"values\",\n",
+ ")\n",
+ "for event in events:\n",
+ " event[\"messages\"][-1].pretty_print()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4ac8146c",
+ "metadata": {},
+ "source": [
+ "## API reference\n",
+ "\n",
+    "For detailed documentation of all __ModuleName__ features and configurations, head to the API reference: https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.__module_name__.tool.__ModuleName__.html"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "poetry-venv-311",
+ "language": "python",
+ "name": "poetry-venv-311"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.9"
+ }
},
- {
- "cell_type": "markdown",
- "id": "a6f91f20",
- "metadata": {},
- "source": [
- "# __ModuleName__\n",
- "\n",
- "- TODO: Make sure API reference link is correct.\n",
- "\n",
- "This notebook provides a quick overview for getting started with __ModuleName__ [tool](/docs/integrations/tools/). For detailed documentation of all __ModuleName__ features and configurations head to the [API reference](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.__module_name__.tool.__ModuleName__.html).\n",
- "\n",
- "- TODO: Add any other relevant links, like information about underlying API, etc.\n",
- "\n",
- "## Overview\n",
- "\n",
- "### Integration details\n",
- "\n",
- "- TODO: Make sure links and features are correct\n",
- "\n",
- "| Class | Package | Serializable | [JS support](https://js.langchain.com/docs/integrations/tools/__module_name__) | Package latest |\n",
- "| :--- | :--- | :---: | :---: | :---: |\n",
- "| [__ModuleName__](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.__module_name__.tool.__ModuleName__.html) | [langchain-community](https://api.python.langchain.com/en/latest/community_api_reference.html) | beta/β | β /β |  |\n",
- "\n",
- "### Tool features\n",
- "\n",
- "- TODO: Add feature table if it makes sense\n",
- "\n",
- "\n",
- "## Setup\n",
- "\n",
- "- TODO: Add any additional deps\n",
- "\n",
- "The integration lives in the `langchain-community` package."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "f85b4089",
- "metadata": {},
- "outputs": [],
- "source": [
- "%pip install --quiet -U langchain-community"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "b15e9266",
- "metadata": {},
- "source": [
- "### Credentials\n",
- "\n",
- "- TODO: Add any credentials that are needed"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "id": "e0b178a2-8816-40ca-b57c-ccdd86dde9c9",
- "metadata": {},
- "outputs": [],
- "source": [
- "import getpass\n",
- "import os\n",
- "\n",
- "# if not os.environ.get(\"__MODULE_NAME___API_KEY\"):\n",
- "# os.environ[\"__MODULE_NAME___API_KEY\"] = getpass.getpass(\"__MODULE_NAME__ API key:\\n\")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "bc5ab717-fd27-4c59-b912-bdd099541478",
- "metadata": {},
- "source": [
- "It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "id": "a6c2f136-6367-4f1f-825d-ae741e1bf281",
- "metadata": {},
- "outputs": [],
- "source": [
- "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
- "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass()"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "1c97218f-f366-479d-8bf7-fe9f2f6df73f",
- "metadata": {},
- "source": [
- "## Instantiation\n",
- "\n",
- "- TODO: Fill in instantiation params\n",
- "\n",
- "Here we show how to instantiate an instance of the __ModuleName__ tool, with "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "id": "8b3ddfe9-ca79-494c-a7ab-1f56d9407a64",
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_community.tools import __ModuleName__\n",
- "\n",
- "\n",
- "tool = __ModuleName__(...)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "74147a1a",
- "metadata": {},
- "source": [
- "## Invocation\n",
- "\n",
- "### [Invoke directly with args](/docs/concepts/tools/#use-the-tool-directly)\n",
- "\n",
- "- TODO: Describe what the tool args are, fill them in, run cell"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "65310a8b-eb0c-4d9e-a618-4f4abe2414fc",
- "metadata": {},
- "outputs": [],
- "source": [
- "tool.invoke({...})"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "d6e73897",
- "metadata": {},
- "source": [
- "### [Invoke with ToolCall](/docs/concepts/tool_calling/#tool-execution)\n",
- "\n",
- "We can also invoke the tool with a model-generated ToolCall, in which case a ToolMessage will be returned:\n",
- "\n",
- "- TODO: Fill in tool args and run cell"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "f90e33a7",
- "metadata": {},
- "outputs": [],
- "source": [
- "# This is usually generated by a model, but we'll create a tool call directly for demo purposes.\n",
- "model_generated_tool_call = {\n",
- " \"args\": {...}, # TODO: FILL IN\n",
- " \"id\": \"1\",\n",
- " \"name\": tool.name,\n",
- " \"type\": \"tool_call\",\n",
- "}\n",
- "tool.invoke(model_generated_tool_call)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "659f9fbd-6fcf-445f-aa8c-72d8e60154bd",
- "metadata": {},
- "source": [
- "## Use within an agent\n",
- "\n",
- "- TODO: Add user question and run cells\n",
- "\n",
- "We can use our tool in an [agent](/docs/concepts/agents/). For this we will need a LLM with [tool-calling](/docs/how_to/tool_calling/) capabilities:\n",
- "\n",
- "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 16,
- "id": "af3123ad-7a02-40e5-b58e-7d56e23e5830",
- "metadata": {},
- "outputs": [],
- "source": [
- "# | output: false\n",
- "# | echo: false\n",
- "\n",
- "# !pip install -qU langchain langchain-openai\n",
- "from langchain.chat_models import init_chat_model\n",
- "\n",
- "llm = init_chat_model(model=\"gpt-4o\", model_provider=\"openai\")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "bea35fa1",
- "metadata": {},
- "outputs": [],
- "source": [
- "from langgraph.prebuilt import create_react_agent\n",
- "\n",
- "tools = [tool]\n",
- "agent = create_react_agent(llm, tools)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "fdbf35b5-3aaf-4947-9ec6-48c21533fb95",
- "metadata": {},
- "outputs": [],
- "source": [
- "example_query = \"...\"\n",
- "\n",
- "events = agent.stream(\n",
- " {\"messages\": [(\"user\", example_query)]},\n",
- " stream_mode=\"values\",\n",
- ")\n",
- "for event in events:\n",
- " event[\"messages\"][-1].pretty_print()"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "4ac8146c",
- "metadata": {},
- "source": [
- "## API reference\n",
- "\n",
- "For detailed documentation of all __ModuleName__ features and configurations head to the API reference: https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.__module_name__.tool.__ModuleName__.html"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "poetry-venv-311",
- "language": "python",
- "name": "poetry-venv-311"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.11.9"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 5
+ "nbformat": 4,
+ "nbformat_minor": 5
}
diff --git a/libs/cli/langchain_cli/integration_template/docs/vectorstores.ipynb b/libs/cli/langchain_cli/integration_template/docs/vectorstores.ipynb
index 7f7f58b15c9..c77e495ad72 100644
--- a/libs/cli/langchain_cli/integration_template/docs/vectorstores.ipynb
+++ b/libs/cli/langchain_cli/integration_template/docs/vectorstores.ipynb
@@ -295,7 +295,7 @@
"source": [
"## TODO: Any functionality specific to this vector store\n",
"\n",
- "E.g. creating a persisten database to save to your disk, etc."
+    "E.g. creating a persistent database that saves to your disk, etc."
]
},
{
diff --git a/libs/cli/langchain_cli/integration_template/integration_template/chat_models.py b/libs/cli/langchain_cli/integration_template/integration_template/chat_models.py
index 0e077cd4856..740da85137e 100644
--- a/libs/cli/langchain_cli/integration_template/integration_template/chat_models.py
+++ b/libs/cli/langchain_cli/integration_template/integration_template/chat_models.py
@@ -36,20 +36,20 @@ class Chat__ModuleName__(BaseChatModel):
# TODO: Populate with relevant params.
Key init args β completion params:
- model: str
+ model:
Name of __ModuleName__ model to use.
- temperature: float
+ temperature:
Sampling temperature.
- max_tokens: int | None
+ max_tokens:
Max number of tokens to generate.
# TODO: Populate with relevant params.
Key init args β client params:
- timeout: float | None
+ timeout:
Timeout for requests.
- max_retries: int
+ max_retries:
Max number of retries.
- api_key: str | None
+ api_key:
__ModuleName__ API key. If not passed in will be read from env var
__MODULE_NAME___API_KEY.
@@ -60,7 +60,7 @@ class Chat__ModuleName__(BaseChatModel):
```python
from __module_name__ import Chat__ModuleName__
- llm = Chat__ModuleName__(
+ model = Chat__ModuleName__(
model="...",
temperature=0,
max_tokens=None,
@@ -77,7 +77,7 @@ class Chat__ModuleName__(BaseChatModel):
("system", "You are a helpful translator. Translate the user sentence to French."),
("human", "I love programming."),
]
- llm.invoke(messages)
+ model.invoke(messages)
```
```python
@@ -87,7 +87,7 @@ class Chat__ModuleName__(BaseChatModel):
# TODO: Delete if token-level streaming isn't supported.
Stream:
```python
- for chunk in llm.stream(messages):
+ for chunk in model.stream(messages):
print(chunk.text, end="")
```
@@ -96,7 +96,7 @@ class Chat__ModuleName__(BaseChatModel):
```
```python
- stream = llm.stream(messages)
+ stream = model.stream(messages)
full = next(stream)
for chunk in stream:
full += chunk
@@ -110,13 +110,13 @@ class Chat__ModuleName__(BaseChatModel):
# TODO: Delete if native async isn't supported.
Async:
```python
- await llm.ainvoke(messages)
+ await model.ainvoke(messages)
# stream:
- # async for chunk in (await llm.astream(messages))
+ # async for chunk in (await model.astream(messages))
# batch:
- # await llm.abatch([messages])
+ # await model.abatch([messages])
```
```python
@@ -137,8 +137,8 @@ class Chat__ModuleName__(BaseChatModel):
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
- llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
- ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
+ model_with_tools = model.bind_tools([GetWeather, GetPopulation])
+ ai_msg = model_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
ai_msg.tool_calls
```
@@ -162,8 +162,8 @@ class Chat__ModuleName__(BaseChatModel):
punchline: str = Field(description="The punchline to the joke")
rating: int | None = Field(description="How funny the joke is, from 1 to 10")
- structured_llm = llm.with_structured_output(Joke)
- structured_llm.invoke("Tell me a joke about cats")
+ structured_model = model.with_structured_output(Joke)
+ structured_model.invoke("Tell me a joke about cats")
```
```python
@@ -176,8 +176,8 @@ class Chat__ModuleName__(BaseChatModel):
JSON mode:
```python
# TODO: Replace with appropriate bind arg.
- json_llm = llm.bind(response_format={"type": "json_object"})
- ai_msg = json_llm.invoke("Return a JSON object with key 'random_ints' and a value of 10 random ints in [0-99]")
+ json_model = model.bind(response_format={"type": "json_object"})
+ ai_msg = json_model.invoke("Return a JSON object with key 'random_ints' and a value of 10 random ints in [0-99]")
ai_msg.content
```
@@ -204,7 +204,7 @@ class Chat__ModuleName__(BaseChatModel):
},
],
)
- ai_msg = llm.invoke([message])
+ ai_msg = model.invoke([message])
ai_msg.content
```
@@ -235,7 +235,7 @@ class Chat__ModuleName__(BaseChatModel):
# TODO: Delete if token usage metadata isn't supported.
Token usage:
```python
- ai_msg = llm.invoke(messages)
+ ai_msg = model.invoke(messages)
ai_msg.usage_metadata
```
@@ -247,8 +247,8 @@ class Chat__ModuleName__(BaseChatModel):
Logprobs:
```python
# TODO: Replace with appropriate bind arg.
- logprobs_llm = llm.bind(logprobs=True)
- ai_msg = logprobs_llm.invoke(messages)
+ logprobs_model = model.bind(logprobs=True)
+ ai_msg = logprobs_model.invoke(messages)
ai_msg.response_metadata["logprobs"]
```
@@ -257,7 +257,7 @@ class Chat__ModuleName__(BaseChatModel):
```
Response metadata
```python
- ai_msg = llm.invoke(messages)
+ ai_msg = model.invoke(messages)
ai_msg.response_metadata
```
diff --git a/libs/cli/langchain_cli/integration_template/integration_template/retrievers.py b/libs/cli/langchain_cli/integration_template/integration_template/retrievers.py
index 48c5f735788..d4c6a966bfc 100644
--- a/libs/cli/langchain_cli/integration_template/integration_template/retrievers.py
+++ b/libs/cli/langchain_cli/integration_template/integration_template/retrievers.py
@@ -65,7 +65,7 @@ class __ModuleName__Retriever(BaseRetriever):
Question: {question}\"\"\"
)
- llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
+ model = ChatOpenAI(model="gpt-3.5-turbo-0125")
def format_docs(docs):
return "\\n\\n".join(doc.page_content for doc in docs)
@@ -73,7 +73,7 @@ class __ModuleName__Retriever(BaseRetriever):
chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
- | llm
+ | model
| StrOutputParser()
)
diff --git a/libs/cli/langchain_cli/integration_template/integration_template/vectorstores.py b/libs/cli/langchain_cli/integration_template/integration_template/vectorstores.py
index 9969932f8da..48dc5980102 100644
--- a/libs/cli/langchain_cli/integration_template/integration_template/vectorstores.py
+++ b/libs/cli/langchain_cli/integration_template/integration_template/vectorstores.py
@@ -37,16 +37,16 @@ class __ModuleName__VectorStore(VectorStore):
# TODO: Populate with relevant params.
Key init args β indexing params:
- collection_name: str
+ collection_name:
Name of the collection.
- embedding_function: Embeddings
+ embedding_function:
Embedding function to use.
# TODO: Populate with relevant params.
Key init args β client params:
- client: Client | None
+ client:
Client to use.
- connection_args: dict | None
+ connection_args:
Connection arguments.
# TODO: Replace with relevant init params.
diff --git a/libs/cli/langchain_cli/namespaces/migrate/generate/utils.py b/libs/cli/langchain_cli/namespaces/migrate/generate/utils.py
index e62f4b07db9..688276617c8 100644
--- a/libs/cli/langchain_cli/namespaces/migrate/generate/utils.py
+++ b/libs/cli/langchain_cli/namespaces/migrate/generate/utils.py
@@ -65,7 +65,7 @@ def is_subclass(class_obj: type, classes_: list[type]) -> bool:
classes_: A list of classes to check against.
Returns:
- True if `class_obj` is a subclass of any class in `classes_`, False otherwise.
+        `True` if `class_obj` is a subclass of any class in `classes_`, `False` otherwise.
"""
return any(
issubclass(class_obj, kls)
diff --git a/libs/cli/langchain_cli/utils/git.py b/libs/cli/langchain_cli/utils/git.py
index 36d99f55354..cf06b67ee6b 100644
--- a/libs/cli/langchain_cli/utils/git.py
+++ b/libs/cli/langchain_cli/utils/git.py
@@ -182,7 +182,7 @@ def parse_dependencies(
inner_branches = _list_arg_to_length(branch, num_deps)
return list(
- map( # type: ignore[call-overload]
+ map( # type: ignore[call-overload, unused-ignore]
parse_dependency_string,
inner_deps,
inner_repos,
diff --git a/libs/cli/pyproject.toml b/libs/cli/pyproject.toml
index 5c4a051336d..949756eac58 100644
--- a/libs/cli/pyproject.toml
+++ b/libs/cli/pyproject.toml
@@ -20,12 +20,13 @@ description = "CLI for interacting with LangChain"
readme = "README.md"
[project.urls]
-homepage = "https://docs.langchain.com/"
-repository = "https://github.com/langchain-ai/langchain/tree/master/libs/cli"
-changelog = "https://github.com/langchain-ai/langchain/releases?q=%22langchain-cli%3D%3D1%22"
-twitter = "https://x.com/LangChainAI"
-slack = "https://www.langchain.com/join-community"
-reddit = "https://www.reddit.com/r/LangChain/"
+Homepage = "https://docs.langchain.com/"
+Documentation = "https://docs.langchain.com/"
+Source = "https://github.com/langchain-ai/langchain/tree/master/libs/cli"
+Changelog = "https://github.com/langchain-ai/langchain/releases?q=%22langchain-cli%3D%3D1%22"
+Twitter = "https://x.com/LangChainAI"
+Slack = "https://www.langchain.com/join-community"
+Reddit = "https://www.reddit.com/r/LangChain/"
[project.scripts]
langchain = "langchain_cli.cli:app"
@@ -42,14 +43,14 @@ lint = [
]
test = [
"langchain-core",
- "langchain"
+ "langchain-classic"
]
-typing = ["langchain"]
+typing = ["langchain-classic"]
test_integration = []
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
-langchain = { path = "../langchain", editable = true }
+langchain-classic = { path = "../langchain", editable = true }
[tool.ruff.format]
docstring-code-format = true
diff --git a/libs/cli/tests/unit_tests/migrate/generate/test_langchain_migration.py b/libs/cli/tests/unit_tests/migrate/generate/test_langchain_migration.py
index 290a095609c..db243b5d470 100644
--- a/libs/cli/tests/unit_tests/migrate/generate/test_langchain_migration.py
+++ b/libs/cli/tests/unit_tests/migrate/generate/test_langchain_migration.py
@@ -1,5 +1,5 @@
import pytest
-from langchain._api import suppress_langchain_deprecation_warning as sup2
+from langchain_classic._api import suppress_langchain_deprecation_warning as sup2
from langchain_core._api import suppress_langchain_deprecation_warning as sup1
from langchain_cli.namespaces.migrate.generate.generic import (
diff --git a/libs/cli/uv.lock b/libs/cli/uv.lock
index b39b6a359ca..661cfb2defc 100644
--- a/libs/cli/uv.lock
+++ b/libs/cli/uv.lock
@@ -327,7 +327,21 @@ wheels = [
[[package]]
name = "langchain"
-version = "0.3.27"
+version = "1.0.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "langchain-core" },
+ { name = "langgraph" },
+ { name = "pydantic" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/7d/b8/36078257ba52351608129ee983079a4d77ee69eb1470ee248cd8f5728a31/langchain-1.0.0.tar.gz", hash = "sha256:56bf90d935ac1dda864519372d195ca58757b755dd4c44b87840b67d069085b7", size = 466932, upload-time = "2025-10-17T20:53:20.319Z" }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/c4/4d/2758a16ad01716c0fb3fe9ec205fd530eae4528b35a27ff44837c399e032/langchain-1.0.0-py3-none-any.whl", hash = "sha256:8c95e41250fc86d09a978fbdf999f86c18d50a28a2addc5da88546af00a1ad15", size = 106202, upload-time = "2025-10-17T20:53:18.685Z" },
+]
+
+[[package]]
+name = "langchain-classic"
+version = "1.0.0"
source = { editable = "../langchain" }
dependencies = [
{ name = "async-timeout", marker = "python_full_version < '3.11'" },
@@ -344,20 +358,28 @@ dependencies = [
requires-dist = [
{ name = "async-timeout", marker = "python_full_version < '3.11'", specifier = ">=4.0.0,<5.0.0" },
{ name = "langchain-anthropic", marker = "extra == 'anthropic'" },
- { name = "langchain-community", marker = "extra == 'community'" },
+ { name = "langchain-aws", marker = "extra == 'aws'" },
{ name = "langchain-core", editable = "../core" },
+ { name = "langchain-deepseek", marker = "extra == 'deepseek'" },
+ { name = "langchain-fireworks", marker = "extra == 'fireworks'" },
{ name = "langchain-google-genai", marker = "extra == 'google-genai'" },
{ name = "langchain-google-vertexai", marker = "extra == 'google-vertexai'" },
+ { name = "langchain-groq", marker = "extra == 'groq'" },
+ { name = "langchain-huggingface", marker = "extra == 'huggingface'" },
+ { name = "langchain-mistralai", marker = "extra == 'mistralai'" },
+ { name = "langchain-ollama", marker = "extra == 'ollama'" },
{ name = "langchain-openai", marker = "extra == 'openai'", editable = "../partners/openai" },
+ { name = "langchain-perplexity", marker = "extra == 'perplexity'" },
{ name = "langchain-text-splitters", editable = "../text-splitters" },
{ name = "langchain-together", marker = "extra == 'together'" },
+ { name = "langchain-xai", marker = "extra == 'xai'" },
{ name = "langsmith", specifier = ">=0.1.17,<1.0.0" },
{ name = "pydantic", specifier = ">=2.7.4,<3.0.0" },
{ name = "pyyaml", specifier = ">=5.3.0,<7.0.0" },
{ name = "requests", specifier = ">=2.0.0,<3.0.0" },
{ name = "sqlalchemy", specifier = ">=1.4.0,<3.0.0" },
]
-provides-extras = ["community", "anthropic", "openai", "google-vertexai", "google-genai", "together"]
+provides-extras = ["anthropic", "openai", "google-vertexai", "google-genai", "fireworks", "ollama", "together", "mistralai", "huggingface", "groq", "aws", "deepseek", "xai", "perplexity"]
[package.metadata.requires-dev]
dev = [
@@ -376,7 +398,6 @@ test = [
{ name = "blockbuster", specifier = ">=1.5.18,<1.6.0" },
{ name = "cffi", marker = "python_full_version < '3.10'", specifier = "<1.17.1" },
{ name = "cffi", marker = "python_full_version >= '3.10'" },
- { name = "duckdb-engine", specifier = ">=0.9.2,<1.0.0" },
{ name = "freezegun", specifier = ">=1.2.2,<2.0.0" },
{ name = "langchain-core", editable = "../core" },
{ name = "langchain-openai", editable = "../partners/openai" },
@@ -411,9 +432,10 @@ test-integration = [
{ name = "wrapt", specifier = ">=1.15.0,<2.0.0" },
]
typing = [
+ { name = "fastapi", specifier = ">=0.116.1,<1.0.0" },
{ name = "langchain-core", editable = "../core" },
{ name = "langchain-text-splitters", editable = "../text-splitters" },
- { name = "mypy", specifier = ">=1.15.0,<1.16.0" },
+ { name = "mypy", specifier = ">=1.18.2,<1.19.0" },
{ name = "mypy-protobuf", specifier = ">=3.0.0,<4.0.0" },
{ name = "numpy", marker = "python_full_version < '3.13'", specifier = ">=1.26.4" },
{ name = "numpy", marker = "python_full_version >= '3.13'", specifier = ">=2.1.0" },
@@ -448,11 +470,11 @@ lint = [
{ name = "ruff" },
]
test = [
- { name = "langchain" },
+ { name = "langchain-classic" },
{ name = "langchain-core" },
]
typing = [
- { name = "langchain" },
+ { name = "langchain-classic" },
]
[package.metadata]
@@ -475,15 +497,15 @@ lint = [
{ name = "ruff", specifier = ">=0.13.1,<0.14" },
]
test = [
- { name = "langchain", editable = "../langchain" },
+ { name = "langchain-classic", editable = "../langchain" },
{ name = "langchain-core", editable = "../core" },
]
test-integration = []
-typing = [{ name = "langchain", editable = "../langchain" }]
+typing = [{ name = "langchain-classic", editable = "../langchain" }]
[[package]]
name = "langchain-core"
-version = "1.0.0a6"
+version = "1.0.0"
source = { editable = "../core" }
dependencies = [
{ name = "jsonpatch" },
@@ -541,7 +563,7 @@ typing = [
[[package]]
name = "langchain-text-splitters"
-version = "1.0.0a1"
+version = "1.0.0"
source = { editable = "../text-splitters" }
dependencies = [
{ name = "langchain-core" },
@@ -574,8 +596,8 @@ test-integration = [
{ name = "nltk", specifier = ">=3.9.1,<4.0.0" },
{ name = "scipy", marker = "python_full_version == '3.12.*'", specifier = ">=1.7.0,<2.0.0" },
{ name = "scipy", marker = "python_full_version >= '3.13'", specifier = ">=1.14.1,<2.0.0" },
- { name = "sentence-transformers", specifier = ">=3.0.1,<4.0.0" },
- { name = "spacy", specifier = ">=3.8.7,<4.0.0" },
+ { name = "sentence-transformers", marker = "python_full_version < '3.14'", specifier = ">=3.0.1,<4.0.0" },
+ { name = "spacy", marker = "python_full_version < '3.14'", specifier = ">=3.8.7,<4.0.0" },
{ name = "thinc", specifier = ">=8.3.6,<9.0.0" },
{ name = "tiktoken", specifier = ">=0.8.0,<1.0.0" },
{ name = "transformers", specifier = ">=4.51.3,<5.0.0" },
@@ -588,6 +610,62 @@ typing = [
{ name = "types-requests", specifier = ">=2.31.0.20240218,<3.0.0.0" },
]
+[[package]]
+name = "langgraph"
+version = "1.0.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "langchain-core" },
+ { name = "langgraph-checkpoint" },
+ { name = "langgraph-prebuilt" },
+ { name = "langgraph-sdk" },
+ { name = "pydantic" },
+ { name = "xxhash" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/57/f7/7ae10f1832ab1a6a402f451e54d6dab277e28e7d4e4204e070c7897ca71c/langgraph-1.0.0.tar.gz", hash = "sha256:5f83ed0e9bbcc37635bc49cbc9b3d9306605fa07504f955b7a871ed715f9964c", size = 472835, upload-time = "2025-10-17T20:23:38.263Z" }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/07/42/6f6d0fe4eb661b06da8e6c59e58044e9e4221fdbffdcacae864557de961e/langgraph-1.0.0-py3-none-any.whl", hash = "sha256:4d478781832a1bc67e06c3eb571412ec47d7c57a5467d1f3775adf0e9dd4042c", size = 155416, upload-time = "2025-10-17T20:23:36.978Z" },
+]
+
+[[package]]
+name = "langgraph-checkpoint"
+version = "2.1.2"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "langchain-core" },
+ { name = "ormsgpack" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/29/83/6404f6ed23a91d7bc63d7df902d144548434237d017820ceaa8d014035f2/langgraph_checkpoint-2.1.2.tar.gz", hash = "sha256:112e9d067a6eff8937caf198421b1ffba8d9207193f14ac6f89930c1260c06f9", size = 142420, upload-time = "2025-10-07T17:45:17.129Z" }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/c4/f2/06bf5addf8ee664291e1b9ffa1f28fc9d97e59806dc7de5aea9844cbf335/langgraph_checkpoint-2.1.2-py3-none-any.whl", hash = "sha256:911ebffb069fd01775d4b5184c04aaafc2962fcdf50cf49d524cd4367c4d0c60", size = 45763, upload-time = "2025-10-07T17:45:16.19Z" },
+]
+
+[[package]]
+name = "langgraph-prebuilt"
+version = "1.0.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "langchain-core" },
+ { name = "langgraph-checkpoint" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/02/2d/934b1129e217216a0dfaf0f7df0a10cedf2dfafe6cc8e1ee238cafaaa4a7/langgraph_prebuilt-1.0.0.tar.gz", hash = "sha256:eb75dad9aca0137451ca0395aa8541a665b3f60979480b0431d626fd195dcda2", size = 119927, upload-time = "2025-10-17T20:15:21.429Z" }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/33/2e/ffa698eedc4c355168a9207ee598b2cc74ede92ce2b55c3469ea06978b6e/langgraph_prebuilt-1.0.0-py3-none-any.whl", hash = "sha256:ceaae4c5cee8c1f9b6468f76c114cafebb748aed0c93483b7c450e5a89de9c61", size = 28455, upload-time = "2025-10-17T20:15:20.043Z" },
+]
+
+[[package]]
+name = "langgraph-sdk"
+version = "0.2.9"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "httpx" },
+ { name = "orjson" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/23/d8/40e01190a73c564a4744e29a6c902f78d34d43dad9b652a363a92a67059c/langgraph_sdk-0.2.9.tar.gz", hash = "sha256:b3bd04c6be4fa382996cd2be8fbc1e7cc94857d2bc6b6f4599a7f2a245975303", size = 99802, upload-time = "2025-09-20T18:49:14.734Z" }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/66/05/b2d34e16638241e6f27a6946d28160d4b8b641383787646d41a3727e0896/langgraph_sdk-0.2.9-py3-none-any.whl", hash = "sha256:fbf302edadbf0fb343596f91c597794e936ef68eebc0d3e1d358b6f9f72a1429", size = 56752, upload-time = "2025-09-20T18:49:13.346Z" },
+]
+
[[package]]
name = "langserve"
version = "0.0.51"
@@ -780,6 +858,61 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/28/01/d6b274a0635be0468d4dbd9cafe80c47105937a0d42434e805e67cd2ed8b/orjson-3.11.3-cp314-cp314-win_arm64.whl", hash = "sha256:e8f6a7a27d7b7bec81bd5924163e9af03d49bbb63013f107b48eb5d16db711bc", size = 125985, upload-time = "2025-08-26T17:46:16.67Z" },
]
+[[package]]
+name = "ormsgpack"
+version = "1.11.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/65/f8/224c342c0e03e131aaa1a1f19aa2244e167001783a433f4eed10eedd834b/ormsgpack-1.11.0.tar.gz", hash = "sha256:7c9988e78fedba3292541eb3bb274fa63044ef4da2ddb47259ea70c05dee4206", size = 49357, upload-time = "2025-10-08T17:29:15.621Z" }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/ff/3d/6996193cb2babc47fc92456223bef7d141065357ad4204eccf313f47a7b3/ormsgpack-1.11.0-cp310-cp310-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:03d4e658dd6e1882a552ce1d13cc7b49157414e7d56a4091fbe7823225b08cba", size = 367965, upload-time = "2025-10-08T17:28:06.736Z" },
+ { url = "https://files.pythonhosted.org/packages/35/89/c83b805dd9caebb046f4ceeed3706d0902ed2dbbcf08b8464e89f2c52e05/ormsgpack-1.11.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1bb67eb913c2b703f0ed39607fc56e50724dd41f92ce080a586b4d6149eb3fe4", size = 195209, upload-time = "2025-10-08T17:28:08.395Z" },
+ { url = "https://files.pythonhosted.org/packages/3a/17/427d9c4f77b120f0af01d7a71d8144771c9388c2a81f712048320e31353b/ormsgpack-1.11.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1e54175b92411f73a238e5653a998627f6660de3def37d9dd7213e0fd264ca56", size = 205868, upload-time = "2025-10-08T17:28:09.688Z" },
+ { url = "https://files.pythonhosted.org/packages/82/32/a9ce218478bdbf3fee954159900e24b314ab3064f7b6a217ccb1e3464324/ormsgpack-1.11.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ca2b197f4556e1823d1319869d4c5dc278be335286d2308b0ed88b59a5afcc25", size = 207391, upload-time = "2025-10-08T17:28:11.031Z" },
+ { url = "https://files.pythonhosted.org/packages/7a/d3/4413fe7454711596fdf08adabdfa686580e4656702015108e4975f00a022/ormsgpack-1.11.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:bc62388262f58c792fe1e450e1d9dbcc174ed2fb0b43db1675dd7c5ff2319d6a", size = 377078, upload-time = "2025-10-08T17:28:12.39Z" },
+ { url = "https://files.pythonhosted.org/packages/f0/ad/13fae555a45e35ca1ca929a27c9ee0a3ecada931b9d44454658c543f9b9c/ormsgpack-1.11.0-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:c48bc10af74adfbc9113f3fb160dc07c61ad9239ef264c17e449eba3de343dc2", size = 470776, upload-time = "2025-10-08T17:28:13.484Z" },
+ { url = "https://files.pythonhosted.org/packages/36/60/51178b093ffc4e2ef3381013a67223e7d56224434fba80047249f4a84b26/ormsgpack-1.11.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:a608d3a1d4fa4acdc5082168a54513cff91f47764cef435e81a483452f5f7647", size = 380862, upload-time = "2025-10-08T17:28:14.747Z" },
+ { url = "https://files.pythonhosted.org/packages/a6/e3/1cb6c161335e2ae7d711ecfb007a31a3936603626e347c13e5e53b7c7cf8/ormsgpack-1.11.0-cp310-cp310-win_amd64.whl", hash = "sha256:97217b4f7f599ba45916b9c4c4b1d5656e8e2a4d91e2e191d72a7569d3c30923", size = 112058, upload-time = "2025-10-08T17:28:15.777Z" },
+ { url = "https://files.pythonhosted.org/packages/a4/7c/90164d00e8e94b48eff8a17bc2f4be6b71ae356a00904bc69d5e8afe80fb/ormsgpack-1.11.0-cp311-cp311-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:c7be823f47d8e36648d4bc90634b93f02b7d7cc7480081195f34767e86f181fb", size = 367964, upload-time = "2025-10-08T17:28:16.778Z" },
+ { url = "https://files.pythonhosted.org/packages/7b/c2/fb6331e880a3446c1341e72c77bd5a46da3e92a8e2edf7ea84a4c6c14fff/ormsgpack-1.11.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68accf15d1b013812755c0eb7a30e1fc2f81eb603a1a143bf0cda1b301cfa797", size = 195209, upload-time = "2025-10-08T17:28:17.796Z" },
+ { url = "https://files.pythonhosted.org/packages/18/50/4943fb5df8cc02da6b7b1ee2c2a7fb13aebc9f963d69280b1bb02b1fb178/ormsgpack-1.11.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:805d06fb277d9a4e503c0c707545b49cde66cbb2f84e5cf7c58d81dfc20d8658", size = 205869, upload-time = "2025-10-08T17:28:19.01Z" },
+ { url = "https://files.pythonhosted.org/packages/1c/fa/e7e06835bfea9adeef43915143ce818098aecab0cbd3df584815adf3e399/ormsgpack-1.11.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a1e57cdf003e77acc43643bda151dc01f97147a64b11cdee1380bb9698a7601c", size = 207391, upload-time = "2025-10-08T17:28:20.352Z" },
+ { url = "https://files.pythonhosted.org/packages/33/f0/f28a19e938a14ec223396e94f4782fbcc023f8c91f2ab6881839d3550f32/ormsgpack-1.11.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:37fc05bdaabd994097c62e2f3e08f66b03f856a640ede6dc5ea340bd15b77f4d", size = 377081, upload-time = "2025-10-08T17:28:21.926Z" },
+ { url = "https://files.pythonhosted.org/packages/4f/e3/73d1d7287637401b0b6637e30ba9121e1aa1d9f5ea185ed9834ca15d512c/ormsgpack-1.11.0-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:a6e9db6c73eb46b2e4d97bdffd1368a66f54e6806b563a997b19c004ef165e1d", size = 470779, upload-time = "2025-10-08T17:28:22.993Z" },
+ { url = "https://files.pythonhosted.org/packages/9c/46/7ba7f9721e766dd0dfe4cedf444439447212abffe2d2f4538edeeec8ccbd/ormsgpack-1.11.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:e9c44eae5ac0196ffc8b5ed497c75511056508f2303fa4d36b208eb820cf209e", size = 380865, upload-time = "2025-10-08T17:28:24.012Z" },
+ { url = "https://files.pythonhosted.org/packages/a7/7d/bb92a0782bbe0626c072c0320001410cf3f6743ede7dc18f034b1a18edef/ormsgpack-1.11.0-cp311-cp311-win_amd64.whl", hash = "sha256:11d0dfaf40ae7c6de4f7dbd1e4892e2e6a55d911ab1774357c481158d17371e4", size = 112058, upload-time = "2025-10-08T17:28:25.015Z" },
+ { url = "https://files.pythonhosted.org/packages/28/1a/f07c6f74142815d67e1d9d98c5b2960007100408ade8242edac96d5d1c73/ormsgpack-1.11.0-cp311-cp311-win_arm64.whl", hash = "sha256:0c63a3f7199a3099c90398a1bdf0cb577b06651a442dc5efe67f2882665e5b02", size = 105894, upload-time = "2025-10-08T17:28:25.93Z" },
+ { url = "https://files.pythonhosted.org/packages/1e/16/2805ebfb3d2cbb6c661b5fae053960fc90a2611d0d93e2207e753e836117/ormsgpack-1.11.0-cp312-cp312-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:3434d0c8d67de27d9010222de07fb6810fb9af3bb7372354ffa19257ac0eb83b", size = 368474, upload-time = "2025-10-08T17:28:27.532Z" },
+ { url = "https://files.pythonhosted.org/packages/6f/39/6afae47822dca0ce4465d894c0bbb860a850ce29c157882dbdf77a5dd26e/ormsgpack-1.11.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d2da5bd097e8dbfa4eb0d4ccfe79acd6f538dee4493579e2debfe4fc8f4ca89b", size = 195321, upload-time = "2025-10-08T17:28:28.573Z" },
+ { url = "https://files.pythonhosted.org/packages/f6/54/11eda6b59f696d2f16de469bfbe539c9f469c4b9eef5a513996b5879c6e9/ormsgpack-1.11.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fdbaa0a5a8606a486960b60c24f2d5235d30ac7a8b98eeaea9854bffef14dc3d", size = 206036, upload-time = "2025-10-08T17:28:29.785Z" },
+ { url = "https://files.pythonhosted.org/packages/1e/86/890430f704f84c4699ddad61c595d171ea2fd77a51fbc106f83981e83939/ormsgpack-1.11.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3682f24f800c1837017ee90ce321086b2cbaef88db7d4cdbbda1582aa6508159", size = 207615, upload-time = "2025-10-08T17:28:31.076Z" },
+ { url = "https://files.pythonhosted.org/packages/b6/b9/77383e16c991c0ecb772205b966fc68d9c519e0b5f9c3913283cbed30ffe/ormsgpack-1.11.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:fcca21202bb05ccbf3e0e92f560ee59b9331182e4c09c965a28155efbb134993", size = 377195, upload-time = "2025-10-08T17:28:32.436Z" },
+ { url = "https://files.pythonhosted.org/packages/20/e2/15f9f045d4947f3c8a5e0535259fddf027b17b1215367488b3565c573b9d/ormsgpack-1.11.0-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:c30e5c4655ba46152d722ec7468e8302195e6db362ec1ae2c206bc64f6030e43", size = 470960, upload-time = "2025-10-08T17:28:33.556Z" },
+ { url = "https://files.pythonhosted.org/packages/b8/61/403ce188c4c495bc99dff921a0ad3d9d352dd6d3c4b629f3638b7f0cf79b/ormsgpack-1.11.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:7138a341f9e2c08c59368f03d3be25e8b87b3baaf10d30fb1f6f6b52f3d47944", size = 381174, upload-time = "2025-10-08T17:28:34.781Z" },
+ { url = "https://files.pythonhosted.org/packages/14/a8/94c94bc48c68da4374870a851eea03fc5a45eb041182ad4c5ed9acfc05a4/ormsgpack-1.11.0-cp312-cp312-win_amd64.whl", hash = "sha256:d4bd8589b78a11026d47f4edf13c1ceab9088bb12451f34396afe6497db28a27", size = 112314, upload-time = "2025-10-08T17:28:36.259Z" },
+ { url = "https://files.pythonhosted.org/packages/19/d0/aa4cf04f04e4cc180ce7a8d8ddb5a7f3af883329cbc59645d94d3ba157a5/ormsgpack-1.11.0-cp312-cp312-win_arm64.whl", hash = "sha256:e5e746a1223e70f111d4001dab9585ac8639eee8979ca0c8db37f646bf2961da", size = 106072, upload-time = "2025-10-08T17:28:37.518Z" },
+ { url = "https://files.pythonhosted.org/packages/8b/35/e34722edb701d053cf2240f55974f17b7dbfd11fdef72bd2f1835bcebf26/ormsgpack-1.11.0-cp313-cp313-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:0e7b36ab7b45cb95217ae1f05f1318b14a3e5ef73cb00804c0f06233f81a14e8", size = 368502, upload-time = "2025-10-08T17:28:38.547Z" },
+ { url = "https://files.pythonhosted.org/packages/2f/6a/c2fc369a79d6aba2aa28c8763856c95337ac7fcc0b2742185cd19397212a/ormsgpack-1.11.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:43402d67e03a9a35cc147c8c03f0c377cad016624479e1ee5b879b8425551484", size = 195344, upload-time = "2025-10-08T17:28:39.554Z" },
+ { url = "https://files.pythonhosted.org/packages/8b/6a/0f8e24b7489885534c1a93bdba7c7c434b9b8638713a68098867db9f254c/ormsgpack-1.11.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:64fd992f932764d6306b70ddc755c1bc3405c4c6a69f77a36acf7af1c8f5ada4", size = 206045, upload-time = "2025-10-08T17:28:40.561Z" },
+ { url = "https://files.pythonhosted.org/packages/99/71/8b460ba264f3c6f82ef5b1920335720094e2bd943057964ce5287d6df83a/ormsgpack-1.11.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0362fb7fe4a29c046c8ea799303079a09372653a1ce5a5a588f3bbb8088368d0", size = 207641, upload-time = "2025-10-08T17:28:41.736Z" },
+ { url = "https://files.pythonhosted.org/packages/50/cf/f369446abaf65972424ed2651f2df2b7b5c3b735c93fc7fa6cfb81e34419/ormsgpack-1.11.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:de2f7a65a9d178ed57be49eba3d0fc9b833c32beaa19dbd4ba56014d3c20b152", size = 377211, upload-time = "2025-10-08T17:28:43.12Z" },
+ { url = "https://files.pythonhosted.org/packages/2f/3f/948bb0047ce0f37c2efc3b9bb2bcfdccc61c63e0b9ce8088d4903ba39dcf/ormsgpack-1.11.0-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:f38cfae95461466055af966fc922d06db4e1654966385cda2828653096db34da", size = 470973, upload-time = "2025-10-08T17:28:44.465Z" },
+ { url = "https://files.pythonhosted.org/packages/31/a4/92a8114d1d017c14aaa403445060f345df9130ca532d538094f38e535988/ormsgpack-1.11.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:c88396189d238f183cea7831b07a305ab5c90d6d29b53288ae11200bd956357b", size = 381161, upload-time = "2025-10-08T17:28:46.063Z" },
+ { url = "https://files.pythonhosted.org/packages/d0/64/5b76447da654798bfcfdfd64ea29447ff2b7f33fe19d0e911a83ad5107fc/ormsgpack-1.11.0-cp313-cp313-win_amd64.whl", hash = "sha256:5403d1a945dd7c81044cebeca3f00a28a0f4248b33242a5d2d82111628043725", size = 112321, upload-time = "2025-10-08T17:28:47.393Z" },
+ { url = "https://files.pythonhosted.org/packages/46/5e/89900d06db9ab81e7ec1fd56a07c62dfbdcda398c435718f4252e1dc52a0/ormsgpack-1.11.0-cp313-cp313-win_arm64.whl", hash = "sha256:c57357b8d43b49722b876edf317bdad9e6d52071b523fdd7394c30cd1c67d5a0", size = 106084, upload-time = "2025-10-08T17:28:48.305Z" },
+ { url = "https://files.pythonhosted.org/packages/4c/0b/c659e8657085c8c13f6a0224789f422620cef506e26573b5434defe68483/ormsgpack-1.11.0-cp314-cp314-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:d390907d90fd0c908211592c485054d7a80990697ef4dff4e436ac18e1aab98a", size = 368497, upload-time = "2025-10-08T17:28:49.297Z" },
+ { url = "https://files.pythonhosted.org/packages/1b/0e/451e5848c7ed56bd287e8a2b5cb5926e54466f60936e05aec6cb299f9143/ormsgpack-1.11.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6153c2e92e789509098e04c9aa116b16673bd88ec78fbe0031deeb34ab642d10", size = 195385, upload-time = "2025-10-08T17:28:50.314Z" },
+ { url = "https://files.pythonhosted.org/packages/4c/28/90f78cbbe494959f2439c2ec571f08cd3464c05a6a380b0d621c622122a9/ormsgpack-1.11.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c2b2c2a065a94d742212b2018e1fecd8f8d72f3c50b53a97d1f407418093446d", size = 206114, upload-time = "2025-10-08T17:28:51.336Z" },
+ { url = "https://files.pythonhosted.org/packages/fb/db/34163f4c0923bea32dafe42cd878dcc66795a3e85669bc4b01c1e2b92a7b/ormsgpack-1.11.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:110e65b5340f3d7ef8b0009deae3c6b169437e6b43ad5a57fd1748085d29d2ac", size = 207679, upload-time = "2025-10-08T17:28:53.627Z" },
+ { url = "https://files.pythonhosted.org/packages/b6/14/04ee741249b16f380a9b4a0cc19d4134d0b7c74bab27a2117da09e525eb9/ormsgpack-1.11.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:c27e186fca96ab34662723e65b420919910acbbc50fc8e1a44e08f26268cb0e0", size = 377237, upload-time = "2025-10-08T17:28:56.12Z" },
+ { url = "https://files.pythonhosted.org/packages/89/ff/53e588a6aaa833237471caec679582c2950f0e7e1a8ba28c1511b465c1f4/ormsgpack-1.11.0-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:d56b1f877c13d499052d37a3db2378a97d5e1588d264f5040b3412aee23d742c", size = 471021, upload-time = "2025-10-08T17:28:57.299Z" },
+ { url = "https://files.pythonhosted.org/packages/a6/f9/f20a6d9ef2be04da3aad05e8f5699957e9a30c6d5c043a10a296afa7e890/ormsgpack-1.11.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:c88e28cd567c0a3269f624b4ade28142d5e502c8e826115093c572007af5be0a", size = 381205, upload-time = "2025-10-08T17:28:58.872Z" },
+ { url = "https://files.pythonhosted.org/packages/f8/64/96c07d084b479ac8b7821a77ffc8d3f29d8b5c95ebfdf8db1c03dff02762/ormsgpack-1.11.0-cp314-cp314-win_amd64.whl", hash = "sha256:8811160573dc0a65f62f7e0792c4ca6b7108dfa50771edb93f9b84e2d45a08ae", size = 112374, upload-time = "2025-10-08T17:29:00Z" },
+ { url = "https://files.pythonhosted.org/packages/88/a5/5dcc18b818d50213a3cadfe336bb6163a102677d9ce87f3d2f1a1bee0f8c/ormsgpack-1.11.0-cp314-cp314-win_arm64.whl", hash = "sha256:23e30a8d3c17484cf74e75e6134322255bd08bc2b5b295cc9c442f4bae5f3c2d", size = 106056, upload-time = "2025-10-08T17:29:01.29Z" },
+ { url = "https://files.pythonhosted.org/packages/19/2b/776d1b411d2be50f77a6e6e94a25825cca55dcacfe7415fd691a144db71b/ormsgpack-1.11.0-cp314-cp314t-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:2905816502adfaf8386a01dd85f936cd378d243f4f5ee2ff46f67f6298dc90d5", size = 368661, upload-time = "2025-10-08T17:29:02.382Z" },
+ { url = "https://files.pythonhosted.org/packages/a9/0c/81a19e6115b15764db3d241788f9fac093122878aaabf872cc545b0c4650/ormsgpack-1.11.0-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c04402fb9a0a9b9f18fbafd6d5f8398ee99b3ec619fb63952d3a954bc9d47daa", size = 195539, upload-time = "2025-10-08T17:29:03.472Z" },
+ { url = "https://files.pythonhosted.org/packages/97/86/e5b50247a61caec5718122feb2719ea9d451d30ac0516c288c1dbc6408e8/ormsgpack-1.11.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a025ec07ac52056ecfd9e57b5cbc6fff163f62cb9805012b56cda599157f8ef2", size = 207718, upload-time = "2025-10-08T17:29:04.545Z" },
+]
+
[[package]]
name = "packaging"
version = "25.0"
@@ -809,7 +942,7 @@ wheels = [
[[package]]
name = "pydantic"
-version = "2.11.9"
+version = "2.12.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "annotated-types" },
@@ -817,96 +950,123 @@ dependencies = [
{ name = "typing-extensions" },
{ name = "typing-inspection" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/ff/5d/09a551ba512d7ca404d785072700d3f6727a02f6f3c24ecfd081c7cf0aa8/pydantic-2.11.9.tar.gz", hash = "sha256:6b8ffda597a14812a7975c90b82a8a2e777d9257aba3453f973acd3c032a18e2", size = 788495, upload-time = "2025-09-13T11:26:39.325Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/f3/1e/4f0a3233767010308f2fd6bd0814597e3f63f1dc98304a9112b8759df4ff/pydantic-2.12.3.tar.gz", hash = "sha256:1da1c82b0fc140bb0103bc1441ffe062154c8d38491189751ee00fd8ca65ce74", size = 819383, upload-time = "2025-10-17T15:04:21.222Z" }
wheels = [
- { url = "https://files.pythonhosted.org/packages/3e/d3/108f2006987c58e76691d5ae5d200dd3e0f532cb4e5fa3560751c3a1feba/pydantic-2.11.9-py3-none-any.whl", hash = "sha256:c42dd626f5cfc1c6950ce6205ea58c93efa406da65f479dcb4029d5934857da2", size = 444855, upload-time = "2025-09-13T11:26:36.909Z" },
+ { url = "https://files.pythonhosted.org/packages/a1/6b/83661fa77dcefa195ad5f8cd9af3d1a7450fd57cc883ad04d65446ac2029/pydantic-2.12.3-py3-none-any.whl", hash = "sha256:6986454a854bc3bc6e5443e1369e06a3a456af9d339eda45510f517d9ea5c6bf", size = 462431, upload-time = "2025-10-17T15:04:19.346Z" },
]
[[package]]
name = "pydantic-core"
-version = "2.33.2"
+version = "2.41.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/ad/88/5f2260bdfae97aabf98f1778d43f69574390ad787afb646292a638c923d4/pydantic_core-2.33.2.tar.gz", hash = "sha256:7cb8bc3605c29176e1b105350d2e6474142d7c1bd1d9327c4a9bdb46bf827acc", size = 435195, upload-time = "2025-04-23T18:33:52.104Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/df/18/d0944e8eaaa3efd0a91b0f1fc537d3be55ad35091b6a87638211ba691964/pydantic_core-2.41.4.tar.gz", hash = "sha256:70e47929a9d4a1905a67e4b687d5946026390568a8e952b92824118063cee4d5", size = 457557, upload-time = "2025-10-14T10:23:47.909Z" }
wheels = [
- { url = "https://files.pythonhosted.org/packages/e5/92/b31726561b5dae176c2d2c2dc43a9c5bfba5d32f96f8b4c0a600dd492447/pydantic_core-2.33.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2b3d326aaef0c0399d9afffeb6367d5e26ddc24d351dbc9c636840ac355dc5d8", size = 2028817, upload-time = "2025-04-23T18:30:43.919Z" },
- { url = "https://files.pythonhosted.org/packages/a3/44/3f0b95fafdaca04a483c4e685fe437c6891001bf3ce8b2fded82b9ea3aa1/pydantic_core-2.33.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0e5b2671f05ba48b94cb90ce55d8bdcaaedb8ba00cc5359f6810fc918713983d", size = 1861357, upload-time = "2025-04-23T18:30:46.372Z" },
- { url = "https://files.pythonhosted.org/packages/30/97/e8f13b55766234caae05372826e8e4b3b96e7b248be3157f53237682e43c/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0069c9acc3f3981b9ff4cdfaf088e98d83440a4c7ea1bc07460af3d4dc22e72d", size = 1898011, upload-time = "2025-04-23T18:30:47.591Z" },
- { url = "https://files.pythonhosted.org/packages/9b/a3/99c48cf7bafc991cc3ee66fd544c0aae8dc907b752f1dad2d79b1b5a471f/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d53b22f2032c42eaaf025f7c40c2e3b94568ae077a606f006d206a463bc69572", size = 1982730, upload-time = "2025-04-23T18:30:49.328Z" },
- { url = "https://files.pythonhosted.org/packages/de/8e/a5b882ec4307010a840fb8b58bd9bf65d1840c92eae7534c7441709bf54b/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0405262705a123b7ce9f0b92f123334d67b70fd1f20a9372b907ce1080c7ba02", size = 2136178, upload-time = "2025-04-23T18:30:50.907Z" },
- { url = "https://files.pythonhosted.org/packages/e4/bb/71e35fc3ed05af6834e890edb75968e2802fe98778971ab5cba20a162315/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4b25d91e288e2c4e0662b8038a28c6a07eaac3e196cfc4ff69de4ea3db992a1b", size = 2736462, upload-time = "2025-04-23T18:30:52.083Z" },
- { url = "https://files.pythonhosted.org/packages/31/0d/c8f7593e6bc7066289bbc366f2235701dcbebcd1ff0ef8e64f6f239fb47d/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bdfe4b3789761f3bcb4b1ddf33355a71079858958e3a552f16d5af19768fef2", size = 2005652, upload-time = "2025-04-23T18:30:53.389Z" },
- { url = "https://files.pythonhosted.org/packages/d2/7a/996d8bd75f3eda405e3dd219ff5ff0a283cd8e34add39d8ef9157e722867/pydantic_core-2.33.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:efec8db3266b76ef9607c2c4c419bdb06bf335ae433b80816089ea7585816f6a", size = 2113306, upload-time = "2025-04-23T18:30:54.661Z" },
- { url = "https://files.pythonhosted.org/packages/ff/84/daf2a6fb2db40ffda6578a7e8c5a6e9c8affb251a05c233ae37098118788/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:031c57d67ca86902726e0fae2214ce6770bbe2f710dc33063187a68744a5ecac", size = 2073720, upload-time = "2025-04-23T18:30:56.11Z" },
- { url = "https://files.pythonhosted.org/packages/77/fb/2258da019f4825128445ae79456a5499c032b55849dbd5bed78c95ccf163/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:f8de619080e944347f5f20de29a975c2d815d9ddd8be9b9b7268e2e3ef68605a", size = 2244915, upload-time = "2025-04-23T18:30:57.501Z" },
- { url = "https://files.pythonhosted.org/packages/d8/7a/925ff73756031289468326e355b6fa8316960d0d65f8b5d6b3a3e7866de7/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:73662edf539e72a9440129f231ed3757faab89630d291b784ca99237fb94db2b", size = 2241884, upload-time = "2025-04-23T18:30:58.867Z" },
- { url = "https://files.pythonhosted.org/packages/0b/b0/249ee6d2646f1cdadcb813805fe76265745c4010cf20a8eba7b0e639d9b2/pydantic_core-2.33.2-cp310-cp310-win32.whl", hash = "sha256:0a39979dcbb70998b0e505fb1556a1d550a0781463ce84ebf915ba293ccb7e22", size = 1910496, upload-time = "2025-04-23T18:31:00.078Z" },
- { url = "https://files.pythonhosted.org/packages/66/ff/172ba8f12a42d4b552917aa65d1f2328990d3ccfc01d5b7c943ec084299f/pydantic_core-2.33.2-cp310-cp310-win_amd64.whl", hash = "sha256:b0379a2b24882fef529ec3b4987cb5d003b9cda32256024e6fe1586ac45fc640", size = 1955019, upload-time = "2025-04-23T18:31:01.335Z" },
- { url = "https://files.pythonhosted.org/packages/3f/8d/71db63483d518cbbf290261a1fc2839d17ff89fce7089e08cad07ccfce67/pydantic_core-2.33.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4c5b0a576fb381edd6d27f0a85915c6daf2f8138dc5c267a57c08a62900758c7", size = 2028584, upload-time = "2025-04-23T18:31:03.106Z" },
- { url = "https://files.pythonhosted.org/packages/24/2f/3cfa7244ae292dd850989f328722d2aef313f74ffc471184dc509e1e4e5a/pydantic_core-2.33.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246", size = 1855071, upload-time = "2025-04-23T18:31:04.621Z" },
- { url = "https://files.pythonhosted.org/packages/b3/d3/4ae42d33f5e3f50dd467761304be2fa0a9417fbf09735bc2cce003480f2a/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc46a01bf8d62f227d5ecee74178ffc448ff4e5197c756331f71efcc66dc980f", size = 1897823, upload-time = "2025-04-23T18:31:06.377Z" },
- { url = "https://files.pythonhosted.org/packages/f4/f3/aa5976e8352b7695ff808599794b1fba2a9ae2ee954a3426855935799488/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a144d4f717285c6d9234a66778059f33a89096dfb9b39117663fd8413d582dcc", size = 1983792, upload-time = "2025-04-23T18:31:07.93Z" },
- { url = "https://files.pythonhosted.org/packages/d5/7a/cda9b5a23c552037717f2b2a5257e9b2bfe45e687386df9591eff7b46d28/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:73cf6373c21bc80b2e0dc88444f41ae60b2f070ed02095754eb5a01df12256de", size = 2136338, upload-time = "2025-04-23T18:31:09.283Z" },
- { url = "https://files.pythonhosted.org/packages/2b/9f/b8f9ec8dd1417eb9da784e91e1667d58a2a4a7b7b34cf4af765ef663a7e5/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3dc625f4aa79713512d1976fe9f0bc99f706a9dee21dfd1810b4bbbf228d0e8a", size = 2730998, upload-time = "2025-04-23T18:31:11.7Z" },
- { url = "https://files.pythonhosted.org/packages/47/bc/cd720e078576bdb8255d5032c5d63ee5c0bf4b7173dd955185a1d658c456/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:881b21b5549499972441da4758d662aeea93f1923f953e9cbaff14b8b9565aef", size = 2003200, upload-time = "2025-04-23T18:31:13.536Z" },
- { url = "https://files.pythonhosted.org/packages/ca/22/3602b895ee2cd29d11a2b349372446ae9727c32e78a94b3d588a40fdf187/pydantic_core-2.33.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bdc25f3681f7b78572699569514036afe3c243bc3059d3942624e936ec93450e", size = 2113890, upload-time = "2025-04-23T18:31:15.011Z" },
- { url = "https://files.pythonhosted.org/packages/ff/e6/e3c5908c03cf00d629eb38393a98fccc38ee0ce8ecce32f69fc7d7b558a7/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d", size = 2073359, upload-time = "2025-04-23T18:31:16.393Z" },
- { url = "https://files.pythonhosted.org/packages/12/e7/6a36a07c59ebefc8777d1ffdaf5ae71b06b21952582e4b07eba88a421c79/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:bc7aee6f634a6f4a95676fcb5d6559a2c2a390330098dba5e5a5f28a2e4ada30", size = 2245883, upload-time = "2025-04-23T18:31:17.892Z" },
- { url = "https://files.pythonhosted.org/packages/16/3f/59b3187aaa6cc0c1e6616e8045b284de2b6a87b027cce2ffcea073adf1d2/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:235f45e5dbcccf6bd99f9f472858849f73d11120d76ea8707115415f8e5ebebf", size = 2241074, upload-time = "2025-04-23T18:31:19.205Z" },
- { url = "https://files.pythonhosted.org/packages/e0/ed/55532bb88f674d5d8f67ab121a2a13c385df382de2a1677f30ad385f7438/pydantic_core-2.33.2-cp311-cp311-win32.whl", hash = "sha256:6368900c2d3ef09b69cb0b913f9f8263b03786e5b2a387706c5afb66800efd51", size = 1910538, upload-time = "2025-04-23T18:31:20.541Z" },
- { url = "https://files.pythonhosted.org/packages/fe/1b/25b7cccd4519c0b23c2dd636ad39d381abf113085ce4f7bec2b0dc755eb1/pydantic_core-2.33.2-cp311-cp311-win_amd64.whl", hash = "sha256:1e063337ef9e9820c77acc768546325ebe04ee38b08703244c1309cccc4f1bab", size = 1952909, upload-time = "2025-04-23T18:31:22.371Z" },
- { url = "https://files.pythonhosted.org/packages/49/a9/d809358e49126438055884c4366a1f6227f0f84f635a9014e2deb9b9de54/pydantic_core-2.33.2-cp311-cp311-win_arm64.whl", hash = "sha256:6b99022f1d19bc32a4c2a0d544fc9a76e3be90f0b3f4af413f87d38749300e65", size = 1897786, upload-time = "2025-04-23T18:31:24.161Z" },
- { url = "https://files.pythonhosted.org/packages/18/8a/2b41c97f554ec8c71f2a8a5f85cb56a8b0956addfe8b0efb5b3d77e8bdc3/pydantic_core-2.33.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a7ec89dc587667f22b6a0b6579c249fca9026ce7c333fc142ba42411fa243cdc", size = 2009000, upload-time = "2025-04-23T18:31:25.863Z" },
- { url = "https://files.pythonhosted.org/packages/a1/02/6224312aacb3c8ecbaa959897af57181fb6cf3a3d7917fd44d0f2917e6f2/pydantic_core-2.33.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3c6db6e52c6d70aa0d00d45cdb9b40f0433b96380071ea80b09277dba021ddf7", size = 1847996, upload-time = "2025-04-23T18:31:27.341Z" },
- { url = "https://files.pythonhosted.org/packages/d6/46/6dcdf084a523dbe0a0be59d054734b86a981726f221f4562aed313dbcb49/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e61206137cbc65e6d5256e1166f88331d3b6238e082d9f74613b9b765fb9025", size = 1880957, upload-time = "2025-04-23T18:31:28.956Z" },
- { url = "https://files.pythonhosted.org/packages/ec/6b/1ec2c03837ac00886ba8160ce041ce4e325b41d06a034adbef11339ae422/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eb8c529b2819c37140eb51b914153063d27ed88e3bdc31b71198a198e921e011", size = 1964199, upload-time = "2025-04-23T18:31:31.025Z" },
- { url = "https://files.pythonhosted.org/packages/2d/1d/6bf34d6adb9debd9136bd197ca72642203ce9aaaa85cfcbfcf20f9696e83/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c52b02ad8b4e2cf14ca7b3d918f3eb0ee91e63b3167c32591e57c4317e134f8f", size = 2120296, upload-time = "2025-04-23T18:31:32.514Z" },
- { url = "https://files.pythonhosted.org/packages/e0/94/2bd0aaf5a591e974b32a9f7123f16637776c304471a0ab33cf263cf5591a/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:96081f1605125ba0855dfda83f6f3df5ec90c61195421ba72223de35ccfb2f88", size = 2676109, upload-time = "2025-04-23T18:31:33.958Z" },
- { url = "https://files.pythonhosted.org/packages/f9/41/4b043778cf9c4285d59742281a769eac371b9e47e35f98ad321349cc5d61/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f57a69461af2a5fa6e6bbd7a5f60d3b7e6cebb687f55106933188e79ad155c1", size = 2002028, upload-time = "2025-04-23T18:31:39.095Z" },
- { url = "https://files.pythonhosted.org/packages/cb/d5/7bb781bf2748ce3d03af04d5c969fa1308880e1dca35a9bd94e1a96a922e/pydantic_core-2.33.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:572c7e6c8bb4774d2ac88929e3d1f12bc45714ae5ee6d9a788a9fb35e60bb04b", size = 2100044, upload-time = "2025-04-23T18:31:41.034Z" },
- { url = "https://files.pythonhosted.org/packages/fe/36/def5e53e1eb0ad896785702a5bbfd25eed546cdcf4087ad285021a90ed53/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:db4b41f9bd95fbe5acd76d89920336ba96f03e149097365afe1cb092fceb89a1", size = 2058881, upload-time = "2025-04-23T18:31:42.757Z" },
- { url = "https://files.pythonhosted.org/packages/01/6c/57f8d70b2ee57fc3dc8b9610315949837fa8c11d86927b9bb044f8705419/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6", size = 2227034, upload-time = "2025-04-23T18:31:44.304Z" },
- { url = "https://files.pythonhosted.org/packages/27/b9/9c17f0396a82b3d5cbea4c24d742083422639e7bb1d5bf600e12cb176a13/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:5f483cfb75ff703095c59e365360cb73e00185e01aaea067cd19acffd2ab20ea", size = 2234187, upload-time = "2025-04-23T18:31:45.891Z" },
- { url = "https://files.pythonhosted.org/packages/b0/6a/adf5734ffd52bf86d865093ad70b2ce543415e0e356f6cacabbc0d9ad910/pydantic_core-2.33.2-cp312-cp312-win32.whl", hash = "sha256:9cb1da0f5a471435a7bc7e439b8a728e8b61e59784b2af70d7c169f8dd8ae290", size = 1892628, upload-time = "2025-04-23T18:31:47.819Z" },
- { url = "https://files.pythonhosted.org/packages/43/e4/5479fecb3606c1368d496a825d8411e126133c41224c1e7238be58b87d7e/pydantic_core-2.33.2-cp312-cp312-win_amd64.whl", hash = "sha256:f941635f2a3d96b2973e867144fde513665c87f13fe0e193c158ac51bfaaa7b2", size = 1955866, upload-time = "2025-04-23T18:31:49.635Z" },
- { url = "https://files.pythonhosted.org/packages/0d/24/8b11e8b3e2be9dd82df4b11408a67c61bb4dc4f8e11b5b0fc888b38118b5/pydantic_core-2.33.2-cp312-cp312-win_arm64.whl", hash = "sha256:cca3868ddfaccfbc4bfb1d608e2ccaaebe0ae628e1416aeb9c4d88c001bb45ab", size = 1888894, upload-time = "2025-04-23T18:31:51.609Z" },
- { url = "https://files.pythonhosted.org/packages/46/8c/99040727b41f56616573a28771b1bfa08a3d3fe74d3d513f01251f79f172/pydantic_core-2.33.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:1082dd3e2d7109ad8b7da48e1d4710c8d06c253cbc4a27c1cff4fbcaa97a9e3f", size = 2015688, upload-time = "2025-04-23T18:31:53.175Z" },
- { url = "https://files.pythonhosted.org/packages/3a/cc/5999d1eb705a6cefc31f0b4a90e9f7fc400539b1a1030529700cc1b51838/pydantic_core-2.33.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f517ca031dfc037a9c07e748cefd8d96235088b83b4f4ba8939105d20fa1dcd6", size = 1844808, upload-time = "2025-04-23T18:31:54.79Z" },
- { url = "https://files.pythonhosted.org/packages/6f/5e/a0a7b8885c98889a18b6e376f344da1ef323d270b44edf8174d6bce4d622/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a9f2c9dd19656823cb8250b0724ee9c60a82f3cdf68a080979d13092a3b0fef", size = 1885580, upload-time = "2025-04-23T18:31:57.393Z" },
- { url = "https://files.pythonhosted.org/packages/3b/2a/953581f343c7d11a304581156618c3f592435523dd9d79865903272c256a/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2b0a451c263b01acebe51895bfb0e1cc842a5c666efe06cdf13846c7418caa9a", size = 1973859, upload-time = "2025-04-23T18:31:59.065Z" },
- { url = "https://files.pythonhosted.org/packages/e6/55/f1a813904771c03a3f97f676c62cca0c0a4138654107c1b61f19c644868b/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ea40a64d23faa25e62a70ad163571c0b342b8bf66d5fa612ac0dec4f069d916", size = 2120810, upload-time = "2025-04-23T18:32:00.78Z" },
- { url = "https://files.pythonhosted.org/packages/aa/c3/053389835a996e18853ba107a63caae0b9deb4a276c6b472931ea9ae6e48/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0fb2d542b4d66f9470e8065c5469ec676978d625a8b7a363f07d9a501a9cb36a", size = 2676498, upload-time = "2025-04-23T18:32:02.418Z" },
- { url = "https://files.pythonhosted.org/packages/eb/3c/f4abd740877a35abade05e437245b192f9d0ffb48bbbbd708df33d3cda37/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdac5d6ffa1b5a83bca06ffe7583f5576555e6c8b3a91fbd25ea7780f825f7d", size = 2000611, upload-time = "2025-04-23T18:32:04.152Z" },
- { url = "https://files.pythonhosted.org/packages/59/a7/63ef2fed1837d1121a894d0ce88439fe3e3b3e48c7543b2a4479eb99c2bd/pydantic_core-2.33.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:04a1a413977ab517154eebb2d326da71638271477d6ad87a769102f7c2488c56", size = 2107924, upload-time = "2025-04-23T18:32:06.129Z" },
- { url = "https://files.pythonhosted.org/packages/04/8f/2551964ef045669801675f1cfc3b0d74147f4901c3ffa42be2ddb1f0efc4/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c8e7af2f4e0194c22b5b37205bfb293d166a7344a5b0d0eaccebc376546d77d5", size = 2063196, upload-time = "2025-04-23T18:32:08.178Z" },
- { url = "https://files.pythonhosted.org/packages/26/bd/d9602777e77fc6dbb0c7db9ad356e9a985825547dce5ad1d30ee04903918/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:5c92edd15cd58b3c2d34873597a1e20f13094f59cf88068adb18947df5455b4e", size = 2236389, upload-time = "2025-04-23T18:32:10.242Z" },
- { url = "https://files.pythonhosted.org/packages/42/db/0e950daa7e2230423ab342ae918a794964b053bec24ba8af013fc7c94846/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:65132b7b4a1c0beded5e057324b7e16e10910c106d43675d9bd87d4f38dde162", size = 2239223, upload-time = "2025-04-23T18:32:12.382Z" },
- { url = "https://files.pythonhosted.org/packages/58/4d/4f937099c545a8a17eb52cb67fe0447fd9a373b348ccfa9a87f141eeb00f/pydantic_core-2.33.2-cp313-cp313-win32.whl", hash = "sha256:52fb90784e0a242bb96ec53f42196a17278855b0f31ac7c3cc6f5c1ec4811849", size = 1900473, upload-time = "2025-04-23T18:32:14.034Z" },
- { url = "https://files.pythonhosted.org/packages/a0/75/4a0a9bac998d78d889def5e4ef2b065acba8cae8c93696906c3a91f310ca/pydantic_core-2.33.2-cp313-cp313-win_amd64.whl", hash = "sha256:c083a3bdd5a93dfe480f1125926afcdbf2917ae714bdb80b36d34318b2bec5d9", size = 1955269, upload-time = "2025-04-23T18:32:15.783Z" },
- { url = "https://files.pythonhosted.org/packages/f9/86/1beda0576969592f1497b4ce8e7bc8cbdf614c352426271b1b10d5f0aa64/pydantic_core-2.33.2-cp313-cp313-win_arm64.whl", hash = "sha256:e80b087132752f6b3d714f041ccf74403799d3b23a72722ea2e6ba2e892555b9", size = 1893921, upload-time = "2025-04-23T18:32:18.473Z" },
- { url = "https://files.pythonhosted.org/packages/a4/7d/e09391c2eebeab681df2b74bfe6c43422fffede8dc74187b2b0bf6fd7571/pydantic_core-2.33.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:61c18fba8e5e9db3ab908620af374db0ac1baa69f0f32df4f61ae23f15e586ac", size = 1806162, upload-time = "2025-04-23T18:32:20.188Z" },
- { url = "https://files.pythonhosted.org/packages/f1/3d/847b6b1fed9f8ed3bb95a9ad04fbd0b212e832d4f0f50ff4d9ee5a9f15cf/pydantic_core-2.33.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95237e53bb015f67b63c91af7518a62a8660376a6a0db19b89acc77a4d6199f5", size = 1981560, upload-time = "2025-04-23T18:32:22.354Z" },
- { url = "https://files.pythonhosted.org/packages/6f/9a/e73262f6c6656262b5fdd723ad90f518f579b7bc8622e43a942eec53c938/pydantic_core-2.33.2-cp313-cp313t-win_amd64.whl", hash = "sha256:c2fc0a768ef76c15ab9238afa6da7f69895bb5d1ee83aeea2e3509af4472d0b9", size = 1935777, upload-time = "2025-04-23T18:32:25.088Z" },
- { url = "https://files.pythonhosted.org/packages/30/68/373d55e58b7e83ce371691f6eaa7175e3a24b956c44628eb25d7da007917/pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5c4aa4e82353f65e548c476b37e64189783aa5384903bfea4f41580f255fddfa", size = 2023982, upload-time = "2025-04-23T18:32:53.14Z" },
- { url = "https://files.pythonhosted.org/packages/a4/16/145f54ac08c96a63d8ed6442f9dec17b2773d19920b627b18d4f10a061ea/pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d946c8bf0d5c24bf4fe333af284c59a19358aa3ec18cb3dc4370080da1e8ad29", size = 1858412, upload-time = "2025-04-23T18:32:55.52Z" },
- { url = "https://files.pythonhosted.org/packages/41/b1/c6dc6c3e2de4516c0bb2c46f6a373b91b5660312342a0cf5826e38ad82fa/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:87b31b6846e361ef83fedb187bb5b4372d0da3f7e28d85415efa92d6125d6e6d", size = 1892749, upload-time = "2025-04-23T18:32:57.546Z" },
- { url = "https://files.pythonhosted.org/packages/12/73/8cd57e20afba760b21b742106f9dbdfa6697f1570b189c7457a1af4cd8a0/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aa9d91b338f2df0508606f7009fde642391425189bba6d8c653afd80fd6bb64e", size = 2067527, upload-time = "2025-04-23T18:32:59.771Z" },
- { url = "https://files.pythonhosted.org/packages/e3/d5/0bb5d988cc019b3cba4a78f2d4b3854427fc47ee8ec8e9eaabf787da239c/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2058a32994f1fde4ca0480ab9d1e75a0e8c87c22b53a3ae66554f9af78f2fe8c", size = 2108225, upload-time = "2025-04-23T18:33:04.51Z" },
- { url = "https://files.pythonhosted.org/packages/f1/c5/00c02d1571913d496aabf146106ad8239dc132485ee22efe08085084ff7c/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:0e03262ab796d986f978f79c943fc5f620381be7287148b8010b4097f79a39ec", size = 2069490, upload-time = "2025-04-23T18:33:06.391Z" },
- { url = "https://files.pythonhosted.org/packages/22/a8/dccc38768274d3ed3a59b5d06f59ccb845778687652daa71df0cab4040d7/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:1a8695a8d00c73e50bff9dfda4d540b7dee29ff9b8053e38380426a85ef10052", size = 2237525, upload-time = "2025-04-23T18:33:08.44Z" },
- { url = "https://files.pythonhosted.org/packages/d4/e7/4f98c0b125dda7cf7ccd14ba936218397b44f50a56dd8c16a3091df116c3/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:fa754d1850735a0b0e03bcffd9d4b4343eb417e47196e4485d9cca326073a42c", size = 2238446, upload-time = "2025-04-23T18:33:10.313Z" },
- { url = "https://files.pythonhosted.org/packages/ce/91/2ec36480fdb0b783cd9ef6795753c1dea13882f2e68e73bce76ae8c21e6a/pydantic_core-2.33.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:a11c8d26a50bfab49002947d3d237abe4d9e4b5bdc8846a63537b6488e197808", size = 2066678, upload-time = "2025-04-23T18:33:12.224Z" },
- { url = "https://files.pythonhosted.org/packages/7b/27/d4ae6487d73948d6f20dddcd94be4ea43e74349b56eba82e9bdee2d7494c/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:dd14041875d09cc0f9308e37a6f8b65f5585cf2598a53aa0123df8b129d481f8", size = 2025200, upload-time = "2025-04-23T18:33:14.199Z" },
- { url = "https://files.pythonhosted.org/packages/f1/b8/b3cb95375f05d33801024079b9392a5ab45267a63400bf1866e7ce0f0de4/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d87c561733f66531dced0da6e864f44ebf89a8fba55f31407b00c2f7f9449593", size = 1859123, upload-time = "2025-04-23T18:33:16.555Z" },
- { url = "https://files.pythonhosted.org/packages/05/bc/0d0b5adeda59a261cd30a1235a445bf55c7e46ae44aea28f7bd6ed46e091/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2f82865531efd18d6e07a04a17331af02cb7a651583c418df8266f17a63c6612", size = 1892852, upload-time = "2025-04-23T18:33:18.513Z" },
- { url = "https://files.pythonhosted.org/packages/3e/11/d37bdebbda2e449cb3f519f6ce950927b56d62f0b84fd9cb9e372a26a3d5/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bfb5112df54209d820d7bf9317c7a6c9025ea52e49f46b6a2060104bba37de7", size = 2067484, upload-time = "2025-04-23T18:33:20.475Z" },
- { url = "https://files.pythonhosted.org/packages/8c/55/1f95f0a05ce72ecb02a8a8a1c3be0579bbc29b1d5ab68f1378b7bebc5057/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:64632ff9d614e5eecfb495796ad51b0ed98c453e447a76bcbeeb69615079fc7e", size = 2108896, upload-time = "2025-04-23T18:33:22.501Z" },
- { url = "https://files.pythonhosted.org/packages/53/89/2b2de6c81fa131f423246a9109d7b2a375e83968ad0800d6e57d0574629b/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8", size = 2069475, upload-time = "2025-04-23T18:33:24.528Z" },
- { url = "https://files.pythonhosted.org/packages/b8/e9/1f7efbe20d0b2b10f6718944b5d8ece9152390904f29a78e68d4e7961159/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:de4b83bb311557e439b9e186f733f6c645b9417c84e2eb8203f3f820a4b988bf", size = 2239013, upload-time = "2025-04-23T18:33:26.621Z" },
- { url = "https://files.pythonhosted.org/packages/3c/b2/5309c905a93811524a49b4e031e9851a6b00ff0fb668794472ea7746b448/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:82f68293f055f51b51ea42fafc74b6aad03e70e191799430b90c13d643059ebb", size = 2238715, upload-time = "2025-04-23T18:33:28.656Z" },
- { url = "https://files.pythonhosted.org/packages/32/56/8a7ca5d2cd2cda1d245d34b1c9a942920a718082ae8e54e5f3e5a58b7add/pydantic_core-2.33.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:329467cecfb529c925cf2bbd4d60d2c509bc2fb52a20c1045bf09bb70971a9c1", size = 2066757, upload-time = "2025-04-23T18:33:30.645Z" },
+ { url = "https://files.pythonhosted.org/packages/a7/3d/9b8ca77b0f76fcdbf8bc6b72474e264283f461284ca84ac3fde570c6c49a/pydantic_core-2.41.4-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2442d9a4d38f3411f22eb9dd0912b7cbf4b7d5b6c92c4173b75d3e1ccd84e36e", size = 2111197, upload-time = "2025-10-14T10:19:43.303Z" },
+ { url = "https://files.pythonhosted.org/packages/59/92/b7b0fe6ed4781642232755cb7e56a86e2041e1292f16d9ae410a0ccee5ac/pydantic_core-2.41.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:30a9876226dda131a741afeab2702e2d127209bde3c65a2b8133f428bc5d006b", size = 1917909, upload-time = "2025-10-14T10:19:45.194Z" },
+ { url = "https://files.pythonhosted.org/packages/52/8c/3eb872009274ffa4fb6a9585114e161aa1a0915af2896e2d441642929fe4/pydantic_core-2.41.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d55bbac04711e2980645af68b97d445cdbcce70e5216de444a6c4b6943ebcccd", size = 1969905, upload-time = "2025-10-14T10:19:46.567Z" },
+ { url = "https://files.pythonhosted.org/packages/f4/21/35adf4a753bcfaea22d925214a0c5b880792e3244731b3f3e6fec0d124f7/pydantic_core-2.41.4-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e1d778fb7849a42d0ee5927ab0f7453bf9f85eef8887a546ec87db5ddb178945", size = 2051938, upload-time = "2025-10-14T10:19:48.237Z" },
+ { url = "https://files.pythonhosted.org/packages/7d/d0/cdf7d126825e36d6e3f1eccf257da8954452934ede275a8f390eac775e89/pydantic_core-2.41.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1b65077a4693a98b90ec5ad8f203ad65802a1b9b6d4a7e48066925a7e1606706", size = 2250710, upload-time = "2025-10-14T10:19:49.619Z" },
+ { url = "https://files.pythonhosted.org/packages/2e/1c/af1e6fd5ea596327308f9c8d1654e1285cc3d8de0d584a3c9d7705bf8a7c/pydantic_core-2.41.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:62637c769dee16eddb7686bf421be48dfc2fae93832c25e25bc7242e698361ba", size = 2367445, upload-time = "2025-10-14T10:19:51.269Z" },
+ { url = "https://files.pythonhosted.org/packages/d3/81/8cece29a6ef1b3a92f956ea6da6250d5b2d2e7e4d513dd3b4f0c7a83dfea/pydantic_core-2.41.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2dfe3aa529c8f501babf6e502936b9e8d4698502b2cfab41e17a028d91b1ac7b", size = 2072875, upload-time = "2025-10-14T10:19:52.671Z" },
+ { url = "https://files.pythonhosted.org/packages/e3/37/a6a579f5fc2cd4d5521284a0ab6a426cc6463a7b3897aeb95b12f1ba607b/pydantic_core-2.41.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ca2322da745bf2eeb581fc9ea3bbb31147702163ccbcbf12a3bb630e4bf05e1d", size = 2191329, upload-time = "2025-10-14T10:19:54.214Z" },
+ { url = "https://files.pythonhosted.org/packages/ae/03/505020dc5c54ec75ecba9f41119fd1e48f9e41e4629942494c4a8734ded1/pydantic_core-2.41.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e8cd3577c796be7231dcf80badcf2e0835a46665eaafd8ace124d886bab4d700", size = 2151658, upload-time = "2025-10-14T10:19:55.843Z" },
+ { url = "https://files.pythonhosted.org/packages/cb/5d/2c0d09fb53aa03bbd2a214d89ebfa6304be7df9ed86ee3dc7770257f41ee/pydantic_core-2.41.4-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:1cae8851e174c83633f0833e90636832857297900133705ee158cf79d40f03e6", size = 2316777, upload-time = "2025-10-14T10:19:57.607Z" },
+ { url = "https://files.pythonhosted.org/packages/ea/4b/c2c9c8f5e1f9c864b57d08539d9d3db160e00491c9f5ee90e1bfd905e644/pydantic_core-2.41.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:a26d950449aae348afe1ac8be5525a00ae4235309b729ad4d3399623125b43c9", size = 2320705, upload-time = "2025-10-14T10:19:59.016Z" },
+ { url = "https://files.pythonhosted.org/packages/28/c3/a74c1c37f49c0a02c89c7340fafc0ba816b29bd495d1a31ce1bdeacc6085/pydantic_core-2.41.4-cp310-cp310-win32.whl", hash = "sha256:0cf2a1f599efe57fa0051312774280ee0f650e11152325e41dfd3018ef2c1b57", size = 1975464, upload-time = "2025-10-14T10:20:00.581Z" },
+ { url = "https://files.pythonhosted.org/packages/d6/23/5dd5c1324ba80303368f7569e2e2e1a721c7d9eb16acb7eb7b7f85cb1be2/pydantic_core-2.41.4-cp310-cp310-win_amd64.whl", hash = "sha256:a8c2e340d7e454dc3340d3d2e8f23558ebe78c98aa8f68851b04dcb7bc37abdc", size = 2024497, upload-time = "2025-10-14T10:20:03.018Z" },
+ { url = "https://files.pythonhosted.org/packages/62/4c/f6cbfa1e8efacd00b846764e8484fe173d25b8dab881e277a619177f3384/pydantic_core-2.41.4-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:28ff11666443a1a8cf2a044d6a545ebffa8382b5f7973f22c36109205e65dc80", size = 2109062, upload-time = "2025-10-14T10:20:04.486Z" },
+ { url = "https://files.pythonhosted.org/packages/21/f8/40b72d3868896bfcd410e1bd7e516e762d326201c48e5b4a06446f6cf9e8/pydantic_core-2.41.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:61760c3925d4633290292bad462e0f737b840508b4f722247d8729684f6539ae", size = 1916301, upload-time = "2025-10-14T10:20:06.857Z" },
+ { url = "https://files.pythonhosted.org/packages/94/4d/d203dce8bee7faeca791671c88519969d98d3b4e8f225da5b96dad226fc8/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eae547b7315d055b0de2ec3965643b0ab82ad0106a7ffd29615ee9f266a02827", size = 1968728, upload-time = "2025-10-14T10:20:08.353Z" },
+ { url = "https://files.pythonhosted.org/packages/65/f5/6a66187775df87c24d526985b3a5d78d861580ca466fbd9d4d0e792fcf6c/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ef9ee5471edd58d1fcce1c80ffc8783a650e3e3a193fe90d52e43bb4d87bff1f", size = 2050238, upload-time = "2025-10-14T10:20:09.766Z" },
+ { url = "https://files.pythonhosted.org/packages/5e/b9/78336345de97298cf53236b2f271912ce11f32c1e59de25a374ce12f9cce/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:15dd504af121caaf2c95cb90c0ebf71603c53de98305621b94da0f967e572def", size = 2249424, upload-time = "2025-10-14T10:20:11.732Z" },
+ { url = "https://files.pythonhosted.org/packages/99/bb/a4584888b70ee594c3d374a71af5075a68654d6c780369df269118af7402/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3a926768ea49a8af4d36abd6a8968b8790f7f76dd7cbd5a4c180db2b4ac9a3a2", size = 2366047, upload-time = "2025-10-14T10:20:13.647Z" },
+ { url = "https://files.pythonhosted.org/packages/5f/8d/17fc5de9d6418e4d2ae8c675f905cdafdc59d3bf3bf9c946b7ab796a992a/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6916b9b7d134bff5440098a4deb80e4cb623e68974a87883299de9124126c2a8", size = 2071163, upload-time = "2025-10-14T10:20:15.307Z" },
+ { url = "https://files.pythonhosted.org/packages/54/e7/03d2c5c0b8ed37a4617430db68ec5e7dbba66358b629cd69e11b4d564367/pydantic_core-2.41.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5cf90535979089df02e6f17ffd076f07237efa55b7343d98760bde8743c4b265", size = 2190585, upload-time = "2025-10-14T10:20:17.3Z" },
+ { url = "https://files.pythonhosted.org/packages/be/fc/15d1c9fe5ad9266a5897d9b932b7f53d7e5cfc800573917a2c5d6eea56ec/pydantic_core-2.41.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:7533c76fa647fade2d7ec75ac5cc079ab3f34879626dae5689b27790a6cf5a5c", size = 2150109, upload-time = "2025-10-14T10:20:19.143Z" },
+ { url = "https://files.pythonhosted.org/packages/26/ef/e735dd008808226c83ba56972566138665b71477ad580fa5a21f0851df48/pydantic_core-2.41.4-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:37e516bca9264cbf29612539801ca3cd5d1be465f940417b002905e6ed79d38a", size = 2315078, upload-time = "2025-10-14T10:20:20.742Z" },
+ { url = "https://files.pythonhosted.org/packages/90/00/806efdcf35ff2ac0f938362350cd9827b8afb116cc814b6b75cf23738c7c/pydantic_core-2.41.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:0c19cb355224037c83642429b8ce261ae108e1c5fbf5c028bac63c77b0f8646e", size = 2318737, upload-time = "2025-10-14T10:20:22.306Z" },
+ { url = "https://files.pythonhosted.org/packages/41/7e/6ac90673fe6cb36621a2283552897838c020db343fa86e513d3f563b196f/pydantic_core-2.41.4-cp311-cp311-win32.whl", hash = "sha256:09c2a60e55b357284b5f31f5ab275ba9f7f70b7525e18a132ec1f9160b4f1f03", size = 1974160, upload-time = "2025-10-14T10:20:23.817Z" },
+ { url = "https://files.pythonhosted.org/packages/e0/9d/7c5e24ee585c1f8b6356e1d11d40ab807ffde44d2db3b7dfd6d20b09720e/pydantic_core-2.41.4-cp311-cp311-win_amd64.whl", hash = "sha256:711156b6afb5cb1cb7c14a2cc2c4a8b4c717b69046f13c6b332d8a0a8f41ca3e", size = 2021883, upload-time = "2025-10-14T10:20:25.48Z" },
+ { url = "https://files.pythonhosted.org/packages/33/90/5c172357460fc28b2871eb4a0fb3843b136b429c6fa827e4b588877bf115/pydantic_core-2.41.4-cp311-cp311-win_arm64.whl", hash = "sha256:6cb9cf7e761f4f8a8589a45e49ed3c0d92d1d696a45a6feaee8c904b26efc2db", size = 1968026, upload-time = "2025-10-14T10:20:27.039Z" },
+ { url = "https://files.pythonhosted.org/packages/e9/81/d3b3e95929c4369d30b2a66a91db63c8ed0a98381ae55a45da2cd1cc1288/pydantic_core-2.41.4-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:ab06d77e053d660a6faaf04894446df7b0a7e7aba70c2797465a0a1af00fc887", size = 2099043, upload-time = "2025-10-14T10:20:28.561Z" },
+ { url = "https://files.pythonhosted.org/packages/58/da/46fdac49e6717e3a94fc9201403e08d9d61aa7a770fab6190b8740749047/pydantic_core-2.41.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:c53ff33e603a9c1179a9364b0a24694f183717b2e0da2b5ad43c316c956901b2", size = 1910699, upload-time = "2025-10-14T10:20:30.217Z" },
+ { url = "https://files.pythonhosted.org/packages/1e/63/4d948f1b9dd8e991a5a98b77dd66c74641f5f2e5225fee37994b2e07d391/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:304c54176af2c143bd181d82e77c15c41cbacea8872a2225dd37e6544dce9999", size = 1952121, upload-time = "2025-10-14T10:20:32.246Z" },
+ { url = "https://files.pythonhosted.org/packages/b2/a7/e5fc60a6f781fc634ecaa9ecc3c20171d238794cef69ae0af79ac11b89d7/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:025ba34a4cf4fb32f917d5d188ab5e702223d3ba603be4d8aca2f82bede432a4", size = 2041590, upload-time = "2025-10-14T10:20:34.332Z" },
+ { url = "https://files.pythonhosted.org/packages/70/69/dce747b1d21d59e85af433428978a1893c6f8a7068fa2bb4a927fba7a5ff/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b9f5f30c402ed58f90c70e12eff65547d3ab74685ffe8283c719e6bead8ef53f", size = 2219869, upload-time = "2025-10-14T10:20:35.965Z" },
+ { url = "https://files.pythonhosted.org/packages/83/6a/c070e30e295403bf29c4df1cb781317b6a9bac7cd07b8d3acc94d501a63c/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dd96e5d15385d301733113bcaa324c8bcf111275b7675a9c6e88bfb19fc05e3b", size = 2345169, upload-time = "2025-10-14T10:20:37.627Z" },
+ { url = "https://files.pythonhosted.org/packages/f0/83/06d001f8043c336baea7fd202a9ac7ad71f87e1c55d8112c50b745c40324/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:98f348cbb44fae6e9653c1055db7e29de67ea6a9ca03a5fa2c2e11a47cff0e47", size = 2070165, upload-time = "2025-10-14T10:20:39.246Z" },
+ { url = "https://files.pythonhosted.org/packages/14/0a/e567c2883588dd12bcbc110232d892cf385356f7c8a9910311ac997ab715/pydantic_core-2.41.4-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ec22626a2d14620a83ca583c6f5a4080fa3155282718b6055c2ea48d3ef35970", size = 2189067, upload-time = "2025-10-14T10:20:41.015Z" },
+ { url = "https://files.pythonhosted.org/packages/f4/1d/3d9fca34273ba03c9b1c5289f7618bc4bd09c3ad2289b5420481aa051a99/pydantic_core-2.41.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:3a95d4590b1f1a43bf33ca6d647b990a88f4a3824a8c4572c708f0b45a5290ed", size = 2132997, upload-time = "2025-10-14T10:20:43.106Z" },
+ { url = "https://files.pythonhosted.org/packages/52/70/d702ef7a6cd41a8afc61f3554922b3ed8d19dd54c3bd4bdbfe332e610827/pydantic_core-2.41.4-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:f9672ab4d398e1b602feadcffcdd3af44d5f5e6ddc15bc7d15d376d47e8e19f8", size = 2307187, upload-time = "2025-10-14T10:20:44.849Z" },
+ { url = "https://files.pythonhosted.org/packages/68/4c/c06be6e27545d08b802127914156f38d10ca287a9e8489342793de8aae3c/pydantic_core-2.41.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:84d8854db5f55fead3b579f04bda9a36461dab0730c5d570e1526483e7bb8431", size = 2305204, upload-time = "2025-10-14T10:20:46.781Z" },
+ { url = "https://files.pythonhosted.org/packages/b0/e5/35ae4919bcd9f18603419e23c5eaf32750224a89d41a8df1a3704b69f77e/pydantic_core-2.41.4-cp312-cp312-win32.whl", hash = "sha256:9be1c01adb2ecc4e464392c36d17f97e9110fbbc906bcbe1c943b5b87a74aabd", size = 1972536, upload-time = "2025-10-14T10:20:48.39Z" },
+ { url = "https://files.pythonhosted.org/packages/1e/c2/49c5bb6d2a49eb2ee3647a93e3dae7080c6409a8a7558b075027644e879c/pydantic_core-2.41.4-cp312-cp312-win_amd64.whl", hash = "sha256:d682cf1d22bab22a5be08539dca3d1593488a99998f9f412137bc323179067ff", size = 2031132, upload-time = "2025-10-14T10:20:50.421Z" },
+ { url = "https://files.pythonhosted.org/packages/06/23/936343dbcba6eec93f73e95eb346810fc732f71ba27967b287b66f7b7097/pydantic_core-2.41.4-cp312-cp312-win_arm64.whl", hash = "sha256:833eebfd75a26d17470b58768c1834dfc90141b7afc6eb0429c21fc5a21dcfb8", size = 1969483, upload-time = "2025-10-14T10:20:52.35Z" },
+ { url = "https://files.pythonhosted.org/packages/13/d0/c20adabd181a029a970738dfe23710b52a31f1258f591874fcdec7359845/pydantic_core-2.41.4-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:85e050ad9e5f6fe1004eec65c914332e52f429bc0ae12d6fa2092407a462c746", size = 2105688, upload-time = "2025-10-14T10:20:54.448Z" },
+ { url = "https://files.pythonhosted.org/packages/00/b6/0ce5c03cec5ae94cca220dfecddc453c077d71363b98a4bbdb3c0b22c783/pydantic_core-2.41.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:e7393f1d64792763a48924ba31d1e44c2cfbc05e3b1c2c9abb4ceeadd912cced", size = 1910807, upload-time = "2025-10-14T10:20:56.115Z" },
+ { url = "https://files.pythonhosted.org/packages/68/3e/800d3d02c8beb0b5c069c870cbb83799d085debf43499c897bb4b4aaff0d/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94dab0940b0d1fb28bcab847adf887c66a27a40291eedf0b473be58761c9799a", size = 1956669, upload-time = "2025-10-14T10:20:57.874Z" },
+ { url = "https://files.pythonhosted.org/packages/60/a4/24271cc71a17f64589be49ab8bd0751f6a0a03046c690df60989f2f95c2c/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:de7c42f897e689ee6f9e93c4bec72b99ae3b32a2ade1c7e4798e690ff5246e02", size = 2051629, upload-time = "2025-10-14T10:21:00.006Z" },
+ { url = "https://files.pythonhosted.org/packages/68/de/45af3ca2f175d91b96bfb62e1f2d2f1f9f3b14a734afe0bfeff079f78181/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:664b3199193262277b8b3cd1e754fb07f2c6023289c815a1e1e8fb415cb247b1", size = 2224049, upload-time = "2025-10-14T10:21:01.801Z" },
+ { url = "https://files.pythonhosted.org/packages/af/8f/ae4e1ff84672bf869d0a77af24fd78387850e9497753c432875066b5d622/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d95b253b88f7d308b1c0b417c4624f44553ba4762816f94e6986819b9c273fb2", size = 2342409, upload-time = "2025-10-14T10:21:03.556Z" },
+ { url = "https://files.pythonhosted.org/packages/18/62/273dd70b0026a085c7b74b000394e1ef95719ea579c76ea2f0cc8893736d/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a1351f5bbdbbabc689727cb91649a00cb9ee7203e0a6e54e9f5ba9e22e384b84", size = 2069635, upload-time = "2025-10-14T10:21:05.385Z" },
+ { url = "https://files.pythonhosted.org/packages/30/03/cf485fff699b4cdaea469bc481719d3e49f023241b4abb656f8d422189fc/pydantic_core-2.41.4-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1affa4798520b148d7182da0615d648e752de4ab1a9566b7471bc803d88a062d", size = 2194284, upload-time = "2025-10-14T10:21:07.122Z" },
+ { url = "https://files.pythonhosted.org/packages/f9/7e/c8e713db32405dfd97211f2fc0a15d6bf8adb7640f3d18544c1f39526619/pydantic_core-2.41.4-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:7b74e18052fea4aa8dea2fb7dbc23d15439695da6cbe6cfc1b694af1115df09d", size = 2137566, upload-time = "2025-10-14T10:21:08.981Z" },
+ { url = "https://files.pythonhosted.org/packages/04/f7/db71fd4cdccc8b75990f79ccafbbd66757e19f6d5ee724a6252414483fb4/pydantic_core-2.41.4-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:285b643d75c0e30abda9dc1077395624f314a37e3c09ca402d4015ef5979f1a2", size = 2316809, upload-time = "2025-10-14T10:21:10.805Z" },
+ { url = "https://files.pythonhosted.org/packages/76/63/a54973ddb945f1bca56742b48b144d85c9fc22f819ddeb9f861c249d5464/pydantic_core-2.41.4-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:f52679ff4218d713b3b33f88c89ccbf3a5c2c12ba665fb80ccc4192b4608dbab", size = 2311119, upload-time = "2025-10-14T10:21:12.583Z" },
+ { url = "https://files.pythonhosted.org/packages/f8/03/5d12891e93c19218af74843a27e32b94922195ded2386f7b55382f904d2f/pydantic_core-2.41.4-cp313-cp313-win32.whl", hash = "sha256:ecde6dedd6fff127c273c76821bb754d793be1024bc33314a120f83a3c69460c", size = 1981398, upload-time = "2025-10-14T10:21:14.584Z" },
+ { url = "https://files.pythonhosted.org/packages/be/d8/fd0de71f39db91135b7a26996160de71c073d8635edfce8b3c3681be0d6d/pydantic_core-2.41.4-cp313-cp313-win_amd64.whl", hash = "sha256:d081a1f3800f05409ed868ebb2d74ac39dd0c1ff6c035b5162356d76030736d4", size = 2030735, upload-time = "2025-10-14T10:21:16.432Z" },
+ { url = "https://files.pythonhosted.org/packages/72/86/c99921c1cf6650023c08bfab6fe2d7057a5142628ef7ccfa9921f2dda1d5/pydantic_core-2.41.4-cp313-cp313-win_arm64.whl", hash = "sha256:f8e49c9c364a7edcbe2a310f12733aad95b022495ef2a8d653f645e5d20c1564", size = 1973209, upload-time = "2025-10-14T10:21:18.213Z" },
+ { url = "https://files.pythonhosted.org/packages/36/0d/b5706cacb70a8414396efdda3d72ae0542e050b591119e458e2490baf035/pydantic_core-2.41.4-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:ed97fd56a561f5eb5706cebe94f1ad7c13b84d98312a05546f2ad036bafe87f4", size = 1877324, upload-time = "2025-10-14T10:21:20.363Z" },
+ { url = "https://files.pythonhosted.org/packages/de/2d/cba1fa02cfdea72dfb3a9babb067c83b9dff0bbcb198368e000a6b756ea7/pydantic_core-2.41.4-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a870c307bf1ee91fc58a9a61338ff780d01bfae45922624816878dce784095d2", size = 1884515, upload-time = "2025-10-14T10:21:22.339Z" },
+ { url = "https://files.pythonhosted.org/packages/07/ea/3df927c4384ed9b503c9cc2d076cf983b4f2adb0c754578dfb1245c51e46/pydantic_core-2.41.4-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d25e97bc1f5f8f7985bdc2335ef9e73843bb561eb1fa6831fdfc295c1c2061cf", size = 2042819, upload-time = "2025-10-14T10:21:26.683Z" },
+ { url = "https://files.pythonhosted.org/packages/6a/ee/df8e871f07074250270a3b1b82aad4cd0026b588acd5d7d3eb2fcb1471a3/pydantic_core-2.41.4-cp313-cp313t-win_amd64.whl", hash = "sha256:d405d14bea042f166512add3091c1af40437c2e7f86988f3915fabd27b1e9cd2", size = 1995866, upload-time = "2025-10-14T10:21:28.951Z" },
+ { url = "https://files.pythonhosted.org/packages/fc/de/b20f4ab954d6d399499c33ec4fafc46d9551e11dc1858fb7f5dca0748ceb/pydantic_core-2.41.4-cp313-cp313t-win_arm64.whl", hash = "sha256:19f3684868309db5263a11bace3c45d93f6f24afa2ffe75a647583df22a2ff89", size = 1970034, upload-time = "2025-10-14T10:21:30.869Z" },
+ { url = "https://files.pythonhosted.org/packages/54/28/d3325da57d413b9819365546eb9a6e8b7cbd9373d9380efd5f74326143e6/pydantic_core-2.41.4-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:e9205d97ed08a82ebb9a307e92914bb30e18cdf6f6b12ca4bedadb1588a0bfe1", size = 2102022, upload-time = "2025-10-14T10:21:32.809Z" },
+ { url = "https://files.pythonhosted.org/packages/9e/24/b58a1bc0d834bf1acc4361e61233ee217169a42efbdc15a60296e13ce438/pydantic_core-2.41.4-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:82df1f432b37d832709fbcc0e24394bba04a01b6ecf1ee87578145c19cde12ac", size = 1905495, upload-time = "2025-10-14T10:21:34.812Z" },
+ { url = "https://files.pythonhosted.org/packages/fb/a4/71f759cc41b7043e8ecdaab81b985a9b6cad7cec077e0b92cff8b71ecf6b/pydantic_core-2.41.4-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fc3b4cc4539e055cfa39a3763c939f9d409eb40e85813257dcd761985a108554", size = 1956131, upload-time = "2025-10-14T10:21:36.924Z" },
+ { url = "https://files.pythonhosted.org/packages/b0/64/1e79ac7aa51f1eec7c4cda8cbe456d5d09f05fdd68b32776d72168d54275/pydantic_core-2.41.4-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:b1eb1754fce47c63d2ff57fdb88c351a6c0150995890088b33767a10218eaa4e", size = 2052236, upload-time = "2025-10-14T10:21:38.927Z" },
+ { url = "https://files.pythonhosted.org/packages/e9/e3/a3ffc363bd4287b80f1d43dc1c28ba64831f8dfc237d6fec8f2661138d48/pydantic_core-2.41.4-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e6ab5ab30ef325b443f379ddb575a34969c333004fca5a1daa0133a6ffaad616", size = 2223573, upload-time = "2025-10-14T10:21:41.574Z" },
+ { url = "https://files.pythonhosted.org/packages/28/27/78814089b4d2e684a9088ede3790763c64693c3d1408ddc0a248bc789126/pydantic_core-2.41.4-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:31a41030b1d9ca497634092b46481b937ff9397a86f9f51bd41c4767b6fc04af", size = 2342467, upload-time = "2025-10-14T10:21:44.018Z" },
+ { url = "https://files.pythonhosted.org/packages/92/97/4de0e2a1159cb85ad737e03306717637842c88c7fd6d97973172fb183149/pydantic_core-2.41.4-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a44ac1738591472c3d020f61c6df1e4015180d6262ebd39bf2aeb52571b60f12", size = 2063754, upload-time = "2025-10-14T10:21:46.466Z" },
+ { url = "https://files.pythonhosted.org/packages/0f/50/8cb90ce4b9efcf7ae78130afeb99fd1c86125ccdf9906ef64b9d42f37c25/pydantic_core-2.41.4-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d72f2b5e6e82ab8f94ea7d0d42f83c487dc159c5240d8f83beae684472864e2d", size = 2196754, upload-time = "2025-10-14T10:21:48.486Z" },
+ { url = "https://files.pythonhosted.org/packages/34/3b/ccdc77af9cd5082723574a1cc1bcae7a6acacc829d7c0a06201f7886a109/pydantic_core-2.41.4-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:c4d1e854aaf044487d31143f541f7aafe7b482ae72a022c664b2de2e466ed0ad", size = 2137115, upload-time = "2025-10-14T10:21:50.63Z" },
+ { url = "https://files.pythonhosted.org/packages/ca/ba/e7c7a02651a8f7c52dc2cff2b64a30c313e3b57c7d93703cecea76c09b71/pydantic_core-2.41.4-cp314-cp314-musllinux_1_1_armv7l.whl", hash = "sha256:b568af94267729d76e6ee5ececda4e283d07bbb28e8148bb17adad93d025d25a", size = 2317400, upload-time = "2025-10-14T10:21:52.959Z" },
+ { url = "https://files.pythonhosted.org/packages/2c/ba/6c533a4ee8aec6b812c643c49bb3bd88d3f01e3cebe451bb85512d37f00f/pydantic_core-2.41.4-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:6d55fb8b1e8929b341cc313a81a26e0d48aa3b519c1dbaadec3a6a2b4fcad025", size = 2312070, upload-time = "2025-10-14T10:21:55.419Z" },
+ { url = "https://files.pythonhosted.org/packages/22/ae/f10524fcc0ab8d7f96cf9a74c880243576fd3e72bd8ce4f81e43d22bcab7/pydantic_core-2.41.4-cp314-cp314-win32.whl", hash = "sha256:5b66584e549e2e32a1398df11da2e0a7eff45d5c2d9db9d5667c5e6ac764d77e", size = 1982277, upload-time = "2025-10-14T10:21:57.474Z" },
+ { url = "https://files.pythonhosted.org/packages/b4/dc/e5aa27aea1ad4638f0c3fb41132f7eb583bd7420ee63204e2d4333a3bbf9/pydantic_core-2.41.4-cp314-cp314-win_amd64.whl", hash = "sha256:557a0aab88664cc552285316809cab897716a372afaf8efdbef756f8b890e894", size = 2024608, upload-time = "2025-10-14T10:21:59.557Z" },
+ { url = "https://files.pythonhosted.org/packages/3e/61/51d89cc2612bd147198e120a13f150afbf0bcb4615cddb049ab10b81b79e/pydantic_core-2.41.4-cp314-cp314-win_arm64.whl", hash = "sha256:3f1ea6f48a045745d0d9f325989d8abd3f1eaf47dd00485912d1a3a63c623a8d", size = 1967614, upload-time = "2025-10-14T10:22:01.847Z" },
+ { url = "https://files.pythonhosted.org/packages/0d/c2/472f2e31b95eff099961fa050c376ab7156a81da194f9edb9f710f68787b/pydantic_core-2.41.4-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:6c1fe4c5404c448b13188dd8bd2ebc2bdd7e6727fa61ff481bcc2cca894018da", size = 1876904, upload-time = "2025-10-14T10:22:04.062Z" },
+ { url = "https://files.pythonhosted.org/packages/4a/07/ea8eeb91173807ecdae4f4a5f4b150a520085b35454350fc219ba79e66a3/pydantic_core-2.41.4-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:523e7da4d43b113bf8e7b49fa4ec0c35bf4fe66b2230bfc5c13cc498f12c6c3e", size = 1882538, upload-time = "2025-10-14T10:22:06.39Z" },
+ { url = "https://files.pythonhosted.org/packages/1e/29/b53a9ca6cd366bfc928823679c6a76c7a4c69f8201c0ba7903ad18ebae2f/pydantic_core-2.41.4-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5729225de81fb65b70fdb1907fcf08c75d498f4a6f15af005aabb1fdadc19dfa", size = 2041183, upload-time = "2025-10-14T10:22:08.812Z" },
+ { url = "https://files.pythonhosted.org/packages/c7/3d/f8c1a371ceebcaf94d6dd2d77c6cf4b1c078e13a5837aee83f760b4f7cfd/pydantic_core-2.41.4-cp314-cp314t-win_amd64.whl", hash = "sha256:de2cfbb09e88f0f795fd90cf955858fc2c691df65b1f21f0aa00b99f3fbc661d", size = 1993542, upload-time = "2025-10-14T10:22:11.332Z" },
+ { url = "https://files.pythonhosted.org/packages/8a/ac/9fc61b4f9d079482a290afe8d206b8f490e9fd32d4fc03ed4fc698214e01/pydantic_core-2.41.4-cp314-cp314t-win_arm64.whl", hash = "sha256:d34f950ae05a83e0ede899c595f312ca976023ea1db100cd5aa188f7005e3ab0", size = 1973897, upload-time = "2025-10-14T10:22:13.444Z" },
+ { url = "https://files.pythonhosted.org/packages/b0/12/5ba58daa7f453454464f92b3ca7b9d7c657d8641c48e370c3ebc9a82dd78/pydantic_core-2.41.4-graalpy311-graalpy242_311_native-macosx_10_12_x86_64.whl", hash = "sha256:a1b2cfec3879afb742a7b0bcfa53e4f22ba96571c9e54d6a3afe1052d17d843b", size = 2122139, upload-time = "2025-10-14T10:22:47.288Z" },
+ { url = "https://files.pythonhosted.org/packages/21/fb/6860126a77725c3108baecd10fd3d75fec25191d6381b6eb2ac660228eac/pydantic_core-2.41.4-graalpy311-graalpy242_311_native-macosx_11_0_arm64.whl", hash = "sha256:d175600d975b7c244af6eb9c9041f10059f20b8bbffec9e33fdd5ee3f67cdc42", size = 1936674, upload-time = "2025-10-14T10:22:49.555Z" },
+ { url = "https://files.pythonhosted.org/packages/de/be/57dcaa3ed595d81f8757e2b44a38240ac5d37628bce25fb20d02c7018776/pydantic_core-2.41.4-graalpy311-graalpy242_311_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0f184d657fa4947ae5ec9c47bd7e917730fa1cbb78195037e32dcbab50aca5ee", size = 1956398, upload-time = "2025-10-14T10:22:52.19Z" },
+ { url = "https://files.pythonhosted.org/packages/2f/1d/679a344fadb9695f1a6a294d739fbd21d71fa023286daeea8c0ed49e7c2b/pydantic_core-2.41.4-graalpy311-graalpy242_311_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1ed810568aeffed3edc78910af32af911c835cc39ebbfacd1f0ab5dd53028e5c", size = 2138674, upload-time = "2025-10-14T10:22:54.499Z" },
+ { url = "https://files.pythonhosted.org/packages/c4/48/ae937e5a831b7c0dc646b2ef788c27cd003894882415300ed21927c21efa/pydantic_core-2.41.4-graalpy312-graalpy250_312_native-macosx_10_12_x86_64.whl", hash = "sha256:4f5d640aeebb438517150fdeec097739614421900e4a08db4a3ef38898798537", size = 2112087, upload-time = "2025-10-14T10:22:56.818Z" },
+ { url = "https://files.pythonhosted.org/packages/5e/db/6db8073e3d32dae017da7e0d16a9ecb897d0a4d92e00634916e486097961/pydantic_core-2.41.4-graalpy312-graalpy250_312_native-macosx_11_0_arm64.whl", hash = "sha256:4a9ab037b71927babc6d9e7fc01aea9e66dc2a4a34dff06ef0724a4049629f94", size = 1920387, upload-time = "2025-10-14T10:22:59.342Z" },
+ { url = "https://files.pythonhosted.org/packages/0d/c1/dd3542d072fcc336030d66834872f0328727e3b8de289c662faa04aa270e/pydantic_core-2.41.4-graalpy312-graalpy250_312_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e4dab9484ec605c3016df9ad4fd4f9a390bc5d816a3b10c6550f8424bb80b18c", size = 1951495, upload-time = "2025-10-14T10:23:02.089Z" },
+ { url = "https://files.pythonhosted.org/packages/2b/c6/db8d13a1f8ab3f1eb08c88bd00fd62d44311e3456d1e85c0e59e0a0376e7/pydantic_core-2.41.4-graalpy312-graalpy250_312_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd8a5028425820731d8c6c098ab642d7b8b999758e24acae03ed38a66eca8335", size = 2139008, upload-time = "2025-10-14T10:23:04.539Z" },
+ { url = "https://files.pythonhosted.org/packages/5d/d4/912e976a2dd0b49f31c98a060ca90b353f3b73ee3ea2fd0030412f6ac5ec/pydantic_core-2.41.4-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:1e5ab4fc177dd41536b3c32b2ea11380dd3d4619a385860621478ac2d25ceb00", size = 2106739, upload-time = "2025-10-14T10:23:06.934Z" },
+ { url = "https://files.pythonhosted.org/packages/71/f0/66ec5a626c81eba326072d6ee2b127f8c139543f1bf609b4842978d37833/pydantic_core-2.41.4-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:3d88d0054d3fa11ce936184896bed3c1c5441d6fa483b498fac6a5d0dd6f64a9", size = 1932549, upload-time = "2025-10-14T10:23:09.24Z" },
+ { url = "https://files.pythonhosted.org/packages/c4/af/625626278ca801ea0a658c2dcf290dc9f21bb383098e99e7c6a029fccfc0/pydantic_core-2.41.4-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b2a054a8725f05b4b6503357e0ac1c4e8234ad3b0c2ac130d6ffc66f0e170e2", size = 2135093, upload-time = "2025-10-14T10:23:11.626Z" },
+ { url = "https://files.pythonhosted.org/packages/20/f6/2fba049f54e0f4975fef66be654c597a1d005320fa141863699180c7697d/pydantic_core-2.41.4-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0d9db5a161c99375a0c68c058e227bee1d89303300802601d76a3d01f74e258", size = 2187971, upload-time = "2025-10-14T10:23:14.437Z" },
+ { url = "https://files.pythonhosted.org/packages/0e/80/65ab839a2dfcd3b949202f9d920c34f9de5a537c3646662bdf2f7d999680/pydantic_core-2.41.4-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:6273ea2c8ffdac7b7fda2653c49682db815aebf4a89243a6feccf5e36c18c347", size = 2147939, upload-time = "2025-10-14T10:23:16.831Z" },
+ { url = "https://files.pythonhosted.org/packages/44/58/627565d3d182ce6dfda18b8e1c841eede3629d59c9d7cbc1e12a03aeb328/pydantic_core-2.41.4-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:4c973add636efc61de22530b2ef83a65f39b6d6f656df97f678720e20de26caa", size = 2311400, upload-time = "2025-10-14T10:23:19.234Z" },
+ { url = "https://files.pythonhosted.org/packages/24/06/8a84711162ad5a5f19a88cead37cca81b4b1f294f46260ef7334ae4f24d3/pydantic_core-2.41.4-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:b69d1973354758007f46cf2d44a4f3d0933f10b6dc9bf15cf1356e037f6f731a", size = 2316840, upload-time = "2025-10-14T10:23:21.738Z" },
+ { url = "https://files.pythonhosted.org/packages/aa/8b/b7bb512a4682a2f7fbfae152a755d37351743900226d29bd953aaf870eaa/pydantic_core-2.41.4-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:3619320641fd212aaf5997b6ca505e97540b7e16418f4a241f44cdf108ffb50d", size = 2149135, upload-time = "2025-10-14T10:23:24.379Z" },
+ { url = "https://files.pythonhosted.org/packages/7e/7d/138e902ed6399b866f7cfe4435d22445e16fff888a1c00560d9dc79a780f/pydantic_core-2.41.4-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:491535d45cd7ad7e4a2af4a5169b0d07bebf1adfd164b0368da8aa41e19907a5", size = 2104721, upload-time = "2025-10-14T10:23:26.906Z" },
+ { url = "https://files.pythonhosted.org/packages/47/13/0525623cf94627f7b53b4c2034c81edc8491cbfc7c28d5447fa318791479/pydantic_core-2.41.4-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:54d86c0cada6aba4ec4c047d0e348cbad7063b87ae0f005d9f8c9ad04d4a92a2", size = 1931608, upload-time = "2025-10-14T10:23:29.306Z" },
+ { url = "https://files.pythonhosted.org/packages/d6/f9/744bc98137d6ef0a233f808bfc9b18cf94624bf30836a18d3b05d08bf418/pydantic_core-2.41.4-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eca1124aced216b2500dc2609eade086d718e8249cb9696660ab447d50a758bd", size = 2132986, upload-time = "2025-10-14T10:23:32.057Z" },
+ { url = "https://files.pythonhosted.org/packages/17/c8/629e88920171173f6049386cc71f893dff03209a9ef32b4d2f7e7c264bcf/pydantic_core-2.41.4-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6c9024169becccf0cb470ada03ee578d7348c119a0d42af3dcf9eda96e3a247c", size = 2187516, upload-time = "2025-10-14T10:23:34.871Z" },
+ { url = "https://files.pythonhosted.org/packages/2e/0f/4f2734688d98488782218ca61bcc118329bf5de05bb7fe3adc7dd79b0b86/pydantic_core-2.41.4-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:26895a4268ae5a2849269f4991cdc97236e4b9c010e51137becf25182daac405", size = 2146146, upload-time = "2025-10-14T10:23:37.342Z" },
+ { url = "https://files.pythonhosted.org/packages/ed/f2/ab385dbd94a052c62224b99cf99002eee99dbec40e10006c78575aead256/pydantic_core-2.41.4-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:ca4df25762cf71308c446e33c9b1fdca2923a3f13de616e2a949f38bf21ff5a8", size = 2311296, upload-time = "2025-10-14T10:23:40.145Z" },
+ { url = "https://files.pythonhosted.org/packages/fc/8e/e4f12afe1beeb9823bba5375f8f258df0cc61b056b0195fb1cf9f62a1a58/pydantic_core-2.41.4-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:5a28fcedd762349519276c36634e71853b4541079cab4acaaac60c4421827308", size = 2315386, upload-time = "2025-10-14T10:23:42.624Z" },
+ { url = "https://files.pythonhosted.org/packages/48/f7/925f65d930802e3ea2eb4d5afa4cb8730c8dc0d2cb89a59dc4ed2fcb2d74/pydantic_core-2.41.4-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:c173ddcd86afd2535e2b695217e82191580663a1d1928239f877f5a1649ef39f", size = 2147775, upload-time = "2025-10-14T10:23:45.406Z" },
]
[[package]]
@@ -1327,6 +1487,124 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/33/e8/e40370e6d74ddba47f002a32919d91310d6074130fe4e17dabcafc15cbf1/watchdog-6.0.0-py3-none-win_ia64.whl", hash = "sha256:a1914259fa9e1454315171103c6a30961236f508b9b623eae470268bbcc6a22f", size = 79067, upload-time = "2024-11-01T14:07:11.845Z" },
]
+[[package]]
+name = "xxhash"
+version = "3.6.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/02/84/30869e01909fb37a6cc7e18688ee8bf1e42d57e7e0777636bd47524c43c7/xxhash-3.6.0.tar.gz", hash = "sha256:f0162a78b13a0d7617b2845b90c763339d1f1d82bb04a4b07f4ab535cc5e05d6", size = 85160, upload-time = "2025-10-02T14:37:08.097Z" }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/34/ee/f9f1d656ad168681bb0f6b092372c1e533c4416b8069b1896a175c46e484/xxhash-3.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:87ff03d7e35c61435976554477a7f4cd1704c3596a89a8300d5ce7fc83874a71", size = 32845, upload-time = "2025-10-02T14:33:51.573Z" },
+ { url = "https://files.pythonhosted.org/packages/a3/b1/93508d9460b292c74a09b83d16750c52a0ead89c51eea9951cb97a60d959/xxhash-3.6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f572dfd3d0e2eb1a57511831cf6341242f5a9f8298a45862d085f5b93394a27d", size = 30807, upload-time = "2025-10-02T14:33:52.964Z" },
+ { url = "https://files.pythonhosted.org/packages/07/55/28c93a3662f2d200c70704efe74aab9640e824f8ce330d8d3943bf7c9b3c/xxhash-3.6.0-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:89952ea539566b9fed2bbd94e589672794b4286f342254fad28b149f9615fef8", size = 193786, upload-time = "2025-10-02T14:33:54.272Z" },
+ { url = "https://files.pythonhosted.org/packages/c1/96/fec0be9bb4b8f5d9c57d76380a366f31a1781fb802f76fc7cda6c84893c7/xxhash-3.6.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:48e6f2ffb07a50b52465a1032c3cf1f4a5683f944acaca8a134a2f23674c2058", size = 212830, upload-time = "2025-10-02T14:33:55.706Z" },
+ { url = "https://files.pythonhosted.org/packages/c4/a0/c706845ba77b9611f81fd2e93fad9859346b026e8445e76f8c6fd057cc6d/xxhash-3.6.0-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b5b848ad6c16d308c3ac7ad4ba6bede80ed5df2ba8ed382f8932df63158dd4b2", size = 211606, upload-time = "2025-10-02T14:33:57.133Z" },
+ { url = "https://files.pythonhosted.org/packages/67/1e/164126a2999e5045f04a69257eea946c0dc3e86541b400d4385d646b53d7/xxhash-3.6.0-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a034590a727b44dd8ac5914236a7b8504144447a9682586c3327e935f33ec8cc", size = 444872, upload-time = "2025-10-02T14:33:58.446Z" },
+ { url = "https://files.pythonhosted.org/packages/2d/4b/55ab404c56cd70a2cf5ecfe484838865d0fea5627365c6c8ca156bd09c8f/xxhash-3.6.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8a8f1972e75ebdd161d7896743122834fe87378160c20e97f8b09166213bf8cc", size = 193217, upload-time = "2025-10-02T14:33:59.724Z" },
+ { url = "https://files.pythonhosted.org/packages/45/e6/52abf06bac316db33aa269091ae7311bd53cfc6f4b120ae77bac1b348091/xxhash-3.6.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:ee34327b187f002a596d7b167ebc59a1b729e963ce645964bbc050d2f1b73d07", size = 210139, upload-time = "2025-10-02T14:34:02.041Z" },
+ { url = "https://files.pythonhosted.org/packages/34/37/db94d490b8691236d356bc249c08819cbcef9273a1a30acf1254ff9ce157/xxhash-3.6.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:339f518c3c7a850dd033ab416ea25a692759dc7478a71131fe8869010d2b75e4", size = 197669, upload-time = "2025-10-02T14:34:03.664Z" },
+ { url = "https://files.pythonhosted.org/packages/b7/36/c4f219ef4a17a4f7a64ed3569bc2b5a9c8311abdb22249ac96093625b1a4/xxhash-3.6.0-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:bf48889c9630542d4709192578aebbd836177c9f7a4a2778a7d6340107c65f06", size = 210018, upload-time = "2025-10-02T14:34:05.325Z" },
+ { url = "https://files.pythonhosted.org/packages/fd/06/bfac889a374fc2fc439a69223d1750eed2e18a7db8514737ab630534fa08/xxhash-3.6.0-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:5576b002a56207f640636056b4160a378fe36a58db73ae5c27a7ec8db35f71d4", size = 413058, upload-time = "2025-10-02T14:34:06.925Z" },
+ { url = "https://files.pythonhosted.org/packages/c9/d1/555d8447e0dd32ad0930a249a522bb2e289f0d08b6b16204cfa42c1f5a0c/xxhash-3.6.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:af1f3278bd02814d6dedc5dec397993b549d6f16c19379721e5a1d31e132c49b", size = 190628, upload-time = "2025-10-02T14:34:08.669Z" },
+ { url = "https://files.pythonhosted.org/packages/d1/15/8751330b5186cedc4ed4b597989882ea05e0408b53fa47bcb46a6125bfc6/xxhash-3.6.0-cp310-cp310-win32.whl", hash = "sha256:aed058764db109dc9052720da65fafe84873b05eb8b07e5e653597951af57c3b", size = 30577, upload-time = "2025-10-02T14:34:10.234Z" },
+ { url = "https://files.pythonhosted.org/packages/bb/cc/53f87e8b5871a6eb2ff7e89c48c66093bda2be52315a8161ddc54ea550c4/xxhash-3.6.0-cp310-cp310-win_amd64.whl", hash = "sha256:e82da5670f2d0d98950317f82a0e4a0197150ff19a6df2ba40399c2a3b9ae5fb", size = 31487, upload-time = "2025-10-02T14:34:11.618Z" },
+ { url = "https://files.pythonhosted.org/packages/9f/00/60f9ea3bb697667a14314d7269956f58bf56bb73864f8f8d52a3c2535e9a/xxhash-3.6.0-cp310-cp310-win_arm64.whl", hash = "sha256:4a082ffff8c6ac07707fb6b671caf7c6e020c75226c561830b73d862060f281d", size = 27863, upload-time = "2025-10-02T14:34:12.619Z" },
+ { url = "https://files.pythonhosted.org/packages/17/d4/cc2f0400e9154df4b9964249da78ebd72f318e35ccc425e9f403c392f22a/xxhash-3.6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b47bbd8cf2d72797f3c2772eaaac0ded3d3af26481a26d7d7d41dc2d3c46b04a", size = 32844, upload-time = "2025-10-02T14:34:14.037Z" },
+ { url = "https://files.pythonhosted.org/packages/5e/ec/1cc11cd13e26ea8bc3cb4af4eaadd8d46d5014aebb67be3f71fb0b68802a/xxhash-3.6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2b6821e94346f96db75abaa6e255706fb06ebd530899ed76d32cd99f20dc52fa", size = 30809, upload-time = "2025-10-02T14:34:15.484Z" },
+ { url = "https://files.pythonhosted.org/packages/04/5f/19fe357ea348d98ca22f456f75a30ac0916b51c753e1f8b2e0e6fb884cce/xxhash-3.6.0-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:d0a9751f71a1a65ce3584e9cae4467651c7e70c9d31017fa57574583a4540248", size = 194665, upload-time = "2025-10-02T14:34:16.541Z" },
+ { url = "https://files.pythonhosted.org/packages/90/3b/d1f1a8f5442a5fd8beedae110c5af7604dc37349a8e16519c13c19a9a2de/xxhash-3.6.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8b29ee68625ab37b04c0b40c3fafdf24d2f75ccd778333cfb698f65f6c463f62", size = 213550, upload-time = "2025-10-02T14:34:17.878Z" },
+ { url = "https://files.pythonhosted.org/packages/c4/ef/3a9b05eb527457d5db13a135a2ae1a26c80fecd624d20f3e8dcc4cb170f3/xxhash-3.6.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:6812c25fe0d6c36a46ccb002f40f27ac903bf18af9f6dd8f9669cb4d176ab18f", size = 212384, upload-time = "2025-10-02T14:34:19.182Z" },
+ { url = "https://files.pythonhosted.org/packages/0f/18/ccc194ee698c6c623acbf0f8c2969811a8a4b6185af5e824cd27b9e4fd3e/xxhash-3.6.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:4ccbff013972390b51a18ef1255ef5ac125c92dc9143b2d1909f59abc765540e", size = 445749, upload-time = "2025-10-02T14:34:20.659Z" },
+ { url = "https://files.pythonhosted.org/packages/a5/86/cf2c0321dc3940a7aa73076f4fd677a0fb3e405cb297ead7d864fd90847e/xxhash-3.6.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:297b7fbf86c82c550e12e8fb71968b3f033d27b874276ba3624ea868c11165a8", size = 193880, upload-time = "2025-10-02T14:34:22.431Z" },
+ { url = "https://files.pythonhosted.org/packages/82/fb/96213c8560e6f948a1ecc9a7613f8032b19ee45f747f4fca4eb31bb6d6ed/xxhash-3.6.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:dea26ae1eb293db089798d3973a5fc928a18fdd97cc8801226fae705b02b14b0", size = 210912, upload-time = "2025-10-02T14:34:23.937Z" },
+ { url = "https://files.pythonhosted.org/packages/40/aa/4395e669b0606a096d6788f40dbdf2b819d6773aa290c19e6e83cbfc312f/xxhash-3.6.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:7a0b169aafb98f4284f73635a8e93f0735f9cbde17bd5ec332480484241aaa77", size = 198654, upload-time = "2025-10-02T14:34:25.644Z" },
+ { url = "https://files.pythonhosted.org/packages/67/74/b044fcd6b3d89e9b1b665924d85d3f400636c23590226feb1eb09e1176ce/xxhash-3.6.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:08d45aef063a4531b785cd72de4887766d01dc8f362a515693df349fdb825e0c", size = 210867, upload-time = "2025-10-02T14:34:27.203Z" },
+ { url = "https://files.pythonhosted.org/packages/bc/fd/3ce73bf753b08cb19daee1eb14aa0d7fe331f8da9c02dd95316ddfe5275e/xxhash-3.6.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:929142361a48ee07f09121fe9e96a84950e8d4df3bb298ca5d88061969f34d7b", size = 414012, upload-time = "2025-10-02T14:34:28.409Z" },
+ { url = "https://files.pythonhosted.org/packages/ba/b3/5a4241309217c5c876f156b10778f3ab3af7ba7e3259e6d5f5c7d0129eb2/xxhash-3.6.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:51312c768403d8540487dbbfb557454cfc55589bbde6424456951f7fcd4facb3", size = 191409, upload-time = "2025-10-02T14:34:29.696Z" },
+ { url = "https://files.pythonhosted.org/packages/c0/01/99bfbc15fb9abb9a72b088c1d95219fc4782b7d01fc835bd5744d66dd0b8/xxhash-3.6.0-cp311-cp311-win32.whl", hash = "sha256:d1927a69feddc24c987b337ce81ac15c4720955b667fe9b588e02254b80446fd", size = 30574, upload-time = "2025-10-02T14:34:31.028Z" },
+ { url = "https://files.pythonhosted.org/packages/65/79/9d24d7f53819fe301b231044ea362ce64e86c74f6e8c8e51320de248b3e5/xxhash-3.6.0-cp311-cp311-win_amd64.whl", hash = "sha256:26734cdc2d4ffe449b41d186bbeac416f704a482ed835d375a5c0cb02bc63fef", size = 31481, upload-time = "2025-10-02T14:34:32.062Z" },
+ { url = "https://files.pythonhosted.org/packages/30/4e/15cd0e3e8772071344eab2961ce83f6e485111fed8beb491a3f1ce100270/xxhash-3.6.0-cp311-cp311-win_arm64.whl", hash = "sha256:d72f67ef8bf36e05f5b6c65e8524f265bd61071471cd4cf1d36743ebeeeb06b7", size = 27861, upload-time = "2025-10-02T14:34:33.555Z" },
+ { url = "https://files.pythonhosted.org/packages/9a/07/d9412f3d7d462347e4511181dea65e47e0d0e16e26fbee2ea86a2aefb657/xxhash-3.6.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:01362c4331775398e7bb34e3ab403bc9ee9f7c497bc7dee6272114055277dd3c", size = 32744, upload-time = "2025-10-02T14:34:34.622Z" },
+ { url = "https://files.pythonhosted.org/packages/79/35/0429ee11d035fc33abe32dca1b2b69e8c18d236547b9a9b72c1929189b9a/xxhash-3.6.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:b7b2df81a23f8cb99656378e72501b2cb41b1827c0f5a86f87d6b06b69f9f204", size = 30816, upload-time = "2025-10-02T14:34:36.043Z" },
+ { url = "https://files.pythonhosted.org/packages/b7/f2/57eb99aa0f7d98624c0932c5b9a170e1806406cdbcdb510546634a1359e0/xxhash-3.6.0-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:dc94790144e66b14f67b10ac8ed75b39ca47536bf8800eb7c24b50271ea0c490", size = 194035, upload-time = "2025-10-02T14:34:37.354Z" },
+ { url = "https://files.pythonhosted.org/packages/4c/ed/6224ba353690d73af7a3f1c7cdb1fc1b002e38f783cb991ae338e1eb3d79/xxhash-3.6.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:93f107c673bccf0d592cdba077dedaf52fe7f42dcd7676eba1f6d6f0c3efffd2", size = 212914, upload-time = "2025-10-02T14:34:38.6Z" },
+ { url = "https://files.pythonhosted.org/packages/38/86/fb6b6130d8dd6b8942cc17ab4d90e223653a89aa32ad2776f8af7064ed13/xxhash-3.6.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:2aa5ee3444c25b69813663c9f8067dcfaa2e126dc55e8dddf40f4d1c25d7effa", size = 212163, upload-time = "2025-10-02T14:34:39.872Z" },
+ { url = "https://files.pythonhosted.org/packages/ee/dc/e84875682b0593e884ad73b2d40767b5790d417bde603cceb6878901d647/xxhash-3.6.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:f7f99123f0e1194fa59cc69ad46dbae2e07becec5df50a0509a808f90a0f03f0", size = 445411, upload-time = "2025-10-02T14:34:41.569Z" },
+ { url = "https://files.pythonhosted.org/packages/11/4f/426f91b96701ec2f37bb2b8cec664eff4f658a11f3fa9d94f0a887ea6d2b/xxhash-3.6.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:49e03e6fe2cac4a1bc64952dd250cf0dbc5ef4ebb7b8d96bce82e2de163c82a2", size = 193883, upload-time = "2025-10-02T14:34:43.249Z" },
+ { url = "https://files.pythonhosted.org/packages/53/5a/ddbb83eee8e28b778eacfc5a85c969673e4023cdeedcfcef61f36731610b/xxhash-3.6.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:bd17fede52a17a4f9a7bc4472a5867cb0b160deeb431795c0e4abe158bc784e9", size = 210392, upload-time = "2025-10-02T14:34:45.042Z" },
+ { url = "https://files.pythonhosted.org/packages/1e/c2/ff69efd07c8c074ccdf0a4f36fcdd3d27363665bcdf4ba399abebe643465/xxhash-3.6.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:6fb5f5476bef678f69db04f2bd1efbed3030d2aba305b0fc1773645f187d6a4e", size = 197898, upload-time = "2025-10-02T14:34:46.302Z" },
+ { url = "https://files.pythonhosted.org/packages/58/ca/faa05ac19b3b622c7c9317ac3e23954187516298a091eb02c976d0d3dd45/xxhash-3.6.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:843b52f6d88071f87eba1631b684fcb4b2068cd2180a0224122fe4ef011a9374", size = 210655, upload-time = "2025-10-02T14:34:47.571Z" },
+ { url = "https://files.pythonhosted.org/packages/d4/7a/06aa7482345480cc0cb597f5c875b11a82c3953f534394f620b0be2f700c/xxhash-3.6.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:7d14a6cfaf03b1b6f5f9790f76880601ccc7896aff7ab9cd8978a939c1eb7e0d", size = 414001, upload-time = "2025-10-02T14:34:49.273Z" },
+ { url = "https://files.pythonhosted.org/packages/23/07/63ffb386cd47029aa2916b3d2f454e6cc5b9f5c5ada3790377d5430084e7/xxhash-3.6.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:418daf3db71e1413cfe211c2f9a528456936645c17f46b5204705581a45390ae", size = 191431, upload-time = "2025-10-02T14:34:50.798Z" },
+ { url = "https://files.pythonhosted.org/packages/0f/93/14fde614cadb4ddf5e7cebf8918b7e8fac5ae7861c1875964f17e678205c/xxhash-3.6.0-cp312-cp312-win32.whl", hash = "sha256:50fc255f39428a27299c20e280d6193d8b63b8ef8028995323bf834a026b4fbb", size = 30617, upload-time = "2025-10-02T14:34:51.954Z" },
+ { url = "https://files.pythonhosted.org/packages/13/5d/0d125536cbe7565a83d06e43783389ecae0c0f2ed037b48ede185de477c0/xxhash-3.6.0-cp312-cp312-win_amd64.whl", hash = "sha256:c0f2ab8c715630565ab8991b536ecded9416d615538be8ecddce43ccf26cbc7c", size = 31534, upload-time = "2025-10-02T14:34:53.276Z" },
+ { url = "https://files.pythonhosted.org/packages/54/85/6ec269b0952ec7e36ba019125982cf11d91256a778c7c3f98a4c5043d283/xxhash-3.6.0-cp312-cp312-win_arm64.whl", hash = "sha256:eae5c13f3bc455a3bbb68bdc513912dc7356de7e2280363ea235f71f54064829", size = 27876, upload-time = "2025-10-02T14:34:54.371Z" },
+ { url = "https://files.pythonhosted.org/packages/33/76/35d05267ac82f53ae9b0e554da7c5e281ee61f3cad44c743f0fcd354f211/xxhash-3.6.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:599e64ba7f67472481ceb6ee80fa3bd828fd61ba59fb11475572cc5ee52b89ec", size = 32738, upload-time = "2025-10-02T14:34:55.839Z" },
+ { url = "https://files.pythonhosted.org/packages/31/a8/3fbce1cd96534a95e35d5120637bf29b0d7f5d8fa2f6374e31b4156dd419/xxhash-3.6.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7d8b8aaa30fca4f16f0c84a5c8d7ddee0e25250ec2796c973775373257dde8f1", size = 30821, upload-time = "2025-10-02T14:34:57.219Z" },
+ { url = "https://files.pythonhosted.org/packages/0c/ea/d387530ca7ecfa183cb358027f1833297c6ac6098223fd14f9782cd0015c/xxhash-3.6.0-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:d597acf8506d6e7101a4a44a5e428977a51c0fadbbfd3c39650cca9253f6e5a6", size = 194127, upload-time = "2025-10-02T14:34:59.21Z" },
+ { url = "https://files.pythonhosted.org/packages/ba/0c/71435dcb99874b09a43b8d7c54071e600a7481e42b3e3ce1eb5226a5711a/xxhash-3.6.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:858dc935963a33bc33490128edc1c12b0c14d9c7ebaa4e387a7869ecc4f3e263", size = 212975, upload-time = "2025-10-02T14:35:00.816Z" },
+ { url = "https://files.pythonhosted.org/packages/84/7a/c2b3d071e4bb4a90b7057228a99b10d51744878f4a8a6dd643c8bd897620/xxhash-3.6.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:ba284920194615cb8edf73bf52236ce2e1664ccd4a38fdb543506413529cc546", size = 212241, upload-time = "2025-10-02T14:35:02.207Z" },
+ { url = "https://files.pythonhosted.org/packages/81/5f/640b6eac0128e215f177df99eadcd0f1b7c42c274ab6a394a05059694c5a/xxhash-3.6.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:4b54219177f6c6674d5378bd862c6aedf64725f70dd29c472eaae154df1a2e89", size = 445471, upload-time = "2025-10-02T14:35:03.61Z" },
+ { url = "https://files.pythonhosted.org/packages/5e/1e/3c3d3ef071b051cc3abbe3721ffb8365033a172613c04af2da89d5548a87/xxhash-3.6.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:42c36dd7dbad2f5238950c377fcbf6811b1cdb1c444fab447960030cea60504d", size = 193936, upload-time = "2025-10-02T14:35:05.013Z" },
+ { url = "https://files.pythonhosted.org/packages/2c/bd/4a5f68381939219abfe1c22a9e3a5854a4f6f6f3c4983a87d255f21f2e5d/xxhash-3.6.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f22927652cba98c44639ffdc7aaf35828dccf679b10b31c4ad72a5b530a18eb7", size = 210440, upload-time = "2025-10-02T14:35:06.239Z" },
+ { url = "https://files.pythonhosted.org/packages/eb/37/b80fe3d5cfb9faff01a02121a0f4d565eb7237e9e5fc66e73017e74dcd36/xxhash-3.6.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:b45fad44d9c5c119e9c6fbf2e1c656a46dc68e280275007bbfd3d572b21426db", size = 197990, upload-time = "2025-10-02T14:35:07.735Z" },
+ { url = "https://files.pythonhosted.org/packages/d7/fd/2c0a00c97b9e18f72e1f240ad4e8f8a90fd9d408289ba9c7c495ed7dc05c/xxhash-3.6.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:6f2580ffab1a8b68ef2b901cde7e55fa8da5e4be0977c68f78fc80f3c143de42", size = 210689, upload-time = "2025-10-02T14:35:09.438Z" },
+ { url = "https://files.pythonhosted.org/packages/93/86/5dd8076a926b9a95db3206aba20d89a7fc14dd5aac16e5c4de4b56033140/xxhash-3.6.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:40c391dd3cd041ebc3ffe6f2c862f402e306eb571422e0aa918d8070ba31da11", size = 414068, upload-time = "2025-10-02T14:35:11.162Z" },
+ { url = "https://files.pythonhosted.org/packages/af/3c/0bb129170ee8f3650f08e993baee550a09593462a5cddd8e44d0011102b1/xxhash-3.6.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:f205badabde7aafd1a31e8ca2a3e5a763107a71c397c4481d6a804eb5063d8bd", size = 191495, upload-time = "2025-10-02T14:35:12.971Z" },
+ { url = "https://files.pythonhosted.org/packages/e9/3a/6797e0114c21d1725e2577508e24006fd7ff1d8c0c502d3b52e45c1771d8/xxhash-3.6.0-cp313-cp313-win32.whl", hash = "sha256:2577b276e060b73b73a53042ea5bd5203d3e6347ce0d09f98500f418a9fcf799", size = 30620, upload-time = "2025-10-02T14:35:14.129Z" },
+ { url = "https://files.pythonhosted.org/packages/86/15/9bc32671e9a38b413a76d24722a2bf8784a132c043063a8f5152d390b0f9/xxhash-3.6.0-cp313-cp313-win_amd64.whl", hash = "sha256:757320d45d2fbcce8f30c42a6b2f47862967aea7bf458b9625b4bbe7ee390392", size = 31542, upload-time = "2025-10-02T14:35:15.21Z" },
+ { url = "https://files.pythonhosted.org/packages/39/c5/cc01e4f6188656e56112d6a8e0dfe298a16934b8c47a247236549a3f7695/xxhash-3.6.0-cp313-cp313-win_arm64.whl", hash = "sha256:457b8f85dec5825eed7b69c11ae86834a018b8e3df5e77783c999663da2f96d6", size = 27880, upload-time = "2025-10-02T14:35:16.315Z" },
+ { url = "https://files.pythonhosted.org/packages/f3/30/25e5321c8732759e930c555176d37e24ab84365482d257c3b16362235212/xxhash-3.6.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:a42e633d75cdad6d625434e3468126c73f13f7584545a9cf34e883aa1710e702", size = 32956, upload-time = "2025-10-02T14:35:17.413Z" },
+ { url = "https://files.pythonhosted.org/packages/9f/3c/0573299560d7d9f8ab1838f1efc021a280b5ae5ae2e849034ef3dee18810/xxhash-3.6.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:568a6d743219e717b07b4e03b0a828ce593833e498c3b64752e0f5df6bfe84db", size = 31072, upload-time = "2025-10-02T14:35:18.844Z" },
+ { url = "https://files.pythonhosted.org/packages/7a/1c/52d83a06e417cd9d4137722693424885cc9878249beb3a7c829e74bf7ce9/xxhash-3.6.0-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:bec91b562d8012dae276af8025a55811b875baace6af510412a5e58e3121bc54", size = 196409, upload-time = "2025-10-02T14:35:20.31Z" },
+ { url = "https://files.pythonhosted.org/packages/e3/8e/c6d158d12a79bbd0b878f8355432075fc82759e356ab5a111463422a239b/xxhash-3.6.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:78e7f2f4c521c30ad5e786fdd6bae89d47a32672a80195467b5de0480aa97b1f", size = 215736, upload-time = "2025-10-02T14:35:21.616Z" },
+ { url = "https://files.pythonhosted.org/packages/bc/68/c4c80614716345d55071a396cf03d06e34b5f4917a467faf43083c995155/xxhash-3.6.0-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:3ed0df1b11a79856df5ffcab572cbd6b9627034c1c748c5566fa79df9048a7c5", size = 214833, upload-time = "2025-10-02T14:35:23.32Z" },
+ { url = "https://files.pythonhosted.org/packages/7e/e9/ae27c8ffec8b953efa84c7c4a6c6802c263d587b9fc0d6e7cea64e08c3af/xxhash-3.6.0-cp313-cp313t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0e4edbfc7d420925b0dd5e792478ed393d6e75ff8fc219a6546fb446b6a417b1", size = 448348, upload-time = "2025-10-02T14:35:25.111Z" },
+ { url = "https://files.pythonhosted.org/packages/d7/6b/33e21afb1b5b3f46b74b6bd1913639066af218d704cc0941404ca717fc57/xxhash-3.6.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fba27a198363a7ef87f8c0f6b171ec36b674fe9053742c58dd7e3201c1ab30ee", size = 196070, upload-time = "2025-10-02T14:35:26.586Z" },
+ { url = "https://files.pythonhosted.org/packages/96/b6/fcabd337bc5fa624e7203aa0fa7d0c49eed22f72e93229431752bddc83d9/xxhash-3.6.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:794fe9145fe60191c6532fa95063765529770edcdd67b3d537793e8004cabbfd", size = 212907, upload-time = "2025-10-02T14:35:28.087Z" },
+ { url = "https://files.pythonhosted.org/packages/4b/d3/9ee6160e644d660fcf176c5825e61411c7f62648728f69c79ba237250143/xxhash-3.6.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:6105ef7e62b5ac73a837778efc331a591d8442f8ef5c7e102376506cb4ae2729", size = 200839, upload-time = "2025-10-02T14:35:29.857Z" },
+ { url = "https://files.pythonhosted.org/packages/0d/98/e8de5baa5109394baf5118f5e72ab21a86387c4f89b0e77ef3e2f6b0327b/xxhash-3.6.0-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:f01375c0e55395b814a679b3eea205db7919ac2af213f4a6682e01220e5fe292", size = 213304, upload-time = "2025-10-02T14:35:31.222Z" },
+ { url = "https://files.pythonhosted.org/packages/7b/1d/71056535dec5c3177eeb53e38e3d367dd1d16e024e63b1cee208d572a033/xxhash-3.6.0-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:d706dca2d24d834a4661619dcacf51a75c16d65985718d6a7d73c1eeeb903ddf", size = 416930, upload-time = "2025-10-02T14:35:32.517Z" },
+ { url = "https://files.pythonhosted.org/packages/dc/6c/5cbde9de2cd967c322e651c65c543700b19e7ae3e0aae8ece3469bf9683d/xxhash-3.6.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:5f059d9faeacd49c0215d66f4056e1326c80503f51a1532ca336a385edadd033", size = 193787, upload-time = "2025-10-02T14:35:33.827Z" },
+ { url = "https://files.pythonhosted.org/packages/19/fa/0172e350361d61febcea941b0cc541d6e6c8d65d153e85f850a7b256ff8a/xxhash-3.6.0-cp313-cp313t-win32.whl", hash = "sha256:1244460adc3a9be84731d72b8e80625788e5815b68da3da8b83f78115a40a7ec", size = 30916, upload-time = "2025-10-02T14:35:35.107Z" },
+ { url = "https://files.pythonhosted.org/packages/ad/e6/e8cf858a2b19d6d45820f072eff1bea413910592ff17157cabc5f1227a16/xxhash-3.6.0-cp313-cp313t-win_amd64.whl", hash = "sha256:b1e420ef35c503869c4064f4a2f2b08ad6431ab7b229a05cce39d74268bca6b8", size = 31799, upload-time = "2025-10-02T14:35:36.165Z" },
+ { url = "https://files.pythonhosted.org/packages/56/15/064b197e855bfb7b343210e82490ae672f8bc7cdf3ddb02e92f64304ee8a/xxhash-3.6.0-cp313-cp313t-win_arm64.whl", hash = "sha256:ec44b73a4220623235f67a996c862049f375df3b1052d9899f40a6382c32d746", size = 28044, upload-time = "2025-10-02T14:35:37.195Z" },
+ { url = "https://files.pythonhosted.org/packages/7e/5e/0138bc4484ea9b897864d59fce9be9086030825bc778b76cb5a33a906d37/xxhash-3.6.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:a40a3d35b204b7cc7643cbcf8c9976d818cb47befcfac8bbefec8038ac363f3e", size = 32754, upload-time = "2025-10-02T14:35:38.245Z" },
+ { url = "https://files.pythonhosted.org/packages/18/d7/5dac2eb2ec75fd771957a13e5dda560efb2176d5203f39502a5fc571f899/xxhash-3.6.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:a54844be970d3fc22630b32d515e79a90d0a3ddb2644d8d7402e3c4c8da61405", size = 30846, upload-time = "2025-10-02T14:35:39.6Z" },
+ { url = "https://files.pythonhosted.org/packages/fe/71/8bc5be2bb00deb5682e92e8da955ebe5fa982da13a69da5a40a4c8db12fb/xxhash-3.6.0-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:016e9190af8f0a4e3741343777710e3d5717427f175adfdc3e72508f59e2a7f3", size = 194343, upload-time = "2025-10-02T14:35:40.69Z" },
+ { url = "https://files.pythonhosted.org/packages/e7/3b/52badfb2aecec2c377ddf1ae75f55db3ba2d321c5e164f14461c90837ef3/xxhash-3.6.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4f6f72232f849eb9d0141e2ebe2677ece15adfd0fa599bc058aad83c714bb2c6", size = 213074, upload-time = "2025-10-02T14:35:42.29Z" },
+ { url = "https://files.pythonhosted.org/packages/a2/2b/ae46b4e9b92e537fa30d03dbc19cdae57ed407e9c26d163895e968e3de85/xxhash-3.6.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:63275a8aba7865e44b1813d2177e0f5ea7eadad3dd063a21f7cf9afdc7054063", size = 212388, upload-time = "2025-10-02T14:35:43.929Z" },
+ { url = "https://files.pythonhosted.org/packages/f5/80/49f88d3afc724b4ac7fbd664c8452d6db51b49915be48c6982659e0e7942/xxhash-3.6.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:3cd01fa2aa00d8b017c97eb46b9a794fbdca53fc14f845f5a328c71254b0abb7", size = 445614, upload-time = "2025-10-02T14:35:45.216Z" },
+ { url = "https://files.pythonhosted.org/packages/ed/ba/603ce3961e339413543d8cd44f21f2c80e2a7c5cfe692a7b1f2cccf58f3c/xxhash-3.6.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0226aa89035b62b6a86d3c68df4d7c1f47a342b8683da2b60cedcddb46c4d95b", size = 194024, upload-time = "2025-10-02T14:35:46.959Z" },
+ { url = "https://files.pythonhosted.org/packages/78/d1/8e225ff7113bf81545cfdcd79eef124a7b7064a0bba53605ff39590b95c2/xxhash-3.6.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:c6e193e9f56e4ca4923c61238cdaced324f0feac782544eb4c6d55ad5cc99ddd", size = 210541, upload-time = "2025-10-02T14:35:48.301Z" },
+ { url = "https://files.pythonhosted.org/packages/6f/58/0f89d149f0bad89def1a8dd38feb50ccdeb643d9797ec84707091d4cb494/xxhash-3.6.0-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:9176dcaddf4ca963d4deb93866d739a343c01c969231dbe21680e13a5d1a5bf0", size = 198305, upload-time = "2025-10-02T14:35:49.584Z" },
+ { url = "https://files.pythonhosted.org/packages/11/38/5eab81580703c4df93feb5f32ff8fa7fe1e2c51c1f183ee4e48d4bb9d3d7/xxhash-3.6.0-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:c1ce4009c97a752e682b897aa99aef84191077a9433eb237774689f14f8ec152", size = 210848, upload-time = "2025-10-02T14:35:50.877Z" },
+ { url = "https://files.pythonhosted.org/packages/5e/6b/953dc4b05c3ce678abca756416e4c130d2382f877a9c30a20d08ee6a77c0/xxhash-3.6.0-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:8cb2f4f679b01513b7adbb9b1b2f0f9cdc31b70007eaf9d59d0878809f385b11", size = 414142, upload-time = "2025-10-02T14:35:52.15Z" },
+ { url = "https://files.pythonhosted.org/packages/08/a9/238ec0d4e81a10eb5026d4a6972677cbc898ba6c8b9dbaec12ae001b1b35/xxhash-3.6.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:653a91d7c2ab54a92c19ccf43508b6a555440b9be1bc8be553376778be7f20b5", size = 191547, upload-time = "2025-10-02T14:35:53.547Z" },
+ { url = "https://files.pythonhosted.org/packages/f1/ee/3cf8589e06c2164ac77c3bf0aa127012801128f1feebf2a079272da5737c/xxhash-3.6.0-cp314-cp314-win32.whl", hash = "sha256:a756fe893389483ee8c394d06b5ab765d96e68fbbfe6fde7aa17e11f5720559f", size = 31214, upload-time = "2025-10-02T14:35:54.746Z" },
+ { url = "https://files.pythonhosted.org/packages/02/5d/a19552fbc6ad4cb54ff953c3908bbc095f4a921bc569433d791f755186f1/xxhash-3.6.0-cp314-cp314-win_amd64.whl", hash = "sha256:39be8e4e142550ef69629c9cd71b88c90e9a5db703fecbcf265546d9536ca4ad", size = 32290, upload-time = "2025-10-02T14:35:55.791Z" },
+ { url = "https://files.pythonhosted.org/packages/b1/11/dafa0643bc30442c887b55baf8e73353a344ee89c1901b5a5c54a6c17d39/xxhash-3.6.0-cp314-cp314-win_arm64.whl", hash = "sha256:25915e6000338999236f1eb68a02a32c3275ac338628a7eaa5a269c401995679", size = 28795, upload-time = "2025-10-02T14:35:57.162Z" },
+ { url = "https://files.pythonhosted.org/packages/2c/db/0e99732ed7f64182aef4a6fb145e1a295558deec2a746265dcdec12d191e/xxhash-3.6.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:c5294f596a9017ca5a3e3f8884c00b91ab2ad2933cf288f4923c3fd4346cf3d4", size = 32955, upload-time = "2025-10-02T14:35:58.267Z" },
+ { url = "https://files.pythonhosted.org/packages/55/f4/2a7c3c68e564a099becfa44bb3d398810cc0ff6749b0d3cb8ccb93f23c14/xxhash-3.6.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:1cf9dcc4ab9cff01dfbba78544297a3a01dafd60f3bde4e2bfd016cf7e4ddc67", size = 31072, upload-time = "2025-10-02T14:35:59.382Z" },
+ { url = "https://files.pythonhosted.org/packages/c6/d9/72a29cddc7250e8a5819dad5d466facb5dc4c802ce120645630149127e73/xxhash-3.6.0-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:01262da8798422d0685f7cef03b2bd3f4f46511b02830861df548d7def4402ad", size = 196579, upload-time = "2025-10-02T14:36:00.838Z" },
+ { url = "https://files.pythonhosted.org/packages/63/93/b21590e1e381040e2ca305a884d89e1c345b347404f7780f07f2cdd47ef4/xxhash-3.6.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:51a73fb7cb3a3ead9f7a8b583ffd9b8038e277cdb8cb87cf890e88b3456afa0b", size = 215854, upload-time = "2025-10-02T14:36:02.207Z" },
+ { url = "https://files.pythonhosted.org/packages/ce/b8/edab8a7d4fa14e924b29be877d54155dcbd8b80be85ea00d2be3413a9ed4/xxhash-3.6.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b9c6df83594f7df8f7f708ce5ebeacfc69f72c9fbaaababf6cf4758eaada0c9b", size = 214965, upload-time = "2025-10-02T14:36:03.507Z" },
+ { url = "https://files.pythonhosted.org/packages/27/67/dfa980ac7f0d509d54ea0d5a486d2bb4b80c3f1bb22b66e6a05d3efaf6c0/xxhash-3.6.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:627f0af069b0ea56f312fd5189001c24578868643203bca1abbc2c52d3a6f3ca", size = 448484, upload-time = "2025-10-02T14:36:04.828Z" },
+ { url = "https://files.pythonhosted.org/packages/8c/63/8ffc2cc97e811c0ca5d00ab36604b3ea6f4254f20b7bc658ca825ce6c954/xxhash-3.6.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:aa912c62f842dfd013c5f21a642c9c10cd9f4c4e943e0af83618b4a404d9091a", size = 196162, upload-time = "2025-10-02T14:36:06.182Z" },
+ { url = "https://files.pythonhosted.org/packages/4b/77/07f0e7a3edd11a6097e990f6e5b815b6592459cb16dae990d967693e6ea9/xxhash-3.6.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:b465afd7909db30168ab62afe40b2fcf79eedc0b89a6c0ab3123515dc0df8b99", size = 213007, upload-time = "2025-10-02T14:36:07.733Z" },
+ { url = "https://files.pythonhosted.org/packages/ae/d8/bc5fa0d152837117eb0bef6f83f956c509332ce133c91c63ce07ee7c4873/xxhash-3.6.0-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:a881851cf38b0a70e7c4d3ce81fc7afd86fbc2a024f4cfb2a97cf49ce04b75d3", size = 200956, upload-time = "2025-10-02T14:36:09.106Z" },
+ { url = "https://files.pythonhosted.org/packages/26/a5/d749334130de9411783873e9b98ecc46688dad5db64ca6e04b02acc8b473/xxhash-3.6.0-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:9b3222c686a919a0f3253cfc12bb118b8b103506612253b5baeaac10d8027cf6", size = 213401, upload-time = "2025-10-02T14:36:10.585Z" },
+ { url = "https://files.pythonhosted.org/packages/89/72/abed959c956a4bfc72b58c0384bb7940663c678127538634d896b1195c10/xxhash-3.6.0-cp314-cp314t-musllinux_1_2_s390x.whl", hash = "sha256:c5aa639bc113e9286137cec8fadc20e9cd732b2cc385c0b7fa673b84fc1f2a93", size = 417083, upload-time = "2025-10-02T14:36:12.276Z" },
+ { url = "https://files.pythonhosted.org/packages/0c/b3/62fd2b586283b7d7d665fb98e266decadf31f058f1cf6c478741f68af0cb/xxhash-3.6.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:5c1343d49ac102799905e115aee590183c3921d475356cb24b4de29a4bc56518", size = 193913, upload-time = "2025-10-02T14:36:14.025Z" },
+ { url = "https://files.pythonhosted.org/packages/9a/9a/c19c42c5b3f5a4aad748a6d5b4f23df3bed7ee5445accc65a0fb3ff03953/xxhash-3.6.0-cp314-cp314t-win32.whl", hash = "sha256:5851f033c3030dd95c086b4a36a2683c2ff4a799b23af60977188b057e467119", size = 31586, upload-time = "2025-10-02T14:36:15.603Z" },
+ { url = "https://files.pythonhosted.org/packages/03/d6/4cc450345be9924fd5dc8c590ceda1db5b43a0a889587b0ae81a95511360/xxhash-3.6.0-cp314-cp314t-win_amd64.whl", hash = "sha256:0444e7967dac37569052d2409b00a8860c2135cff05502df4da80267d384849f", size = 32526, upload-time = "2025-10-02T14:36:16.708Z" },
+ { url = "https://files.pythonhosted.org/packages/0f/c9/7243eb3f9eaabd1a88a5a5acadf06df2d83b100c62684b7425c6a11bcaa8/xxhash-3.6.0-cp314-cp314t-win_arm64.whl", hash = "sha256:bb79b1e63f6fd84ec778a4b1916dfe0a7c3fdb986c06addd5db3a0d413819d95", size = 28898, upload-time = "2025-10-02T14:36:17.843Z" },
+ { url = "https://files.pythonhosted.org/packages/93/1e/8aec23647a34a249f62e2398c42955acd9b4c6ed5cf08cbea94dc46f78d2/xxhash-3.6.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0f7b7e2ec26c1666ad5fc9dbfa426a6a3367ceaf79db5dd76264659d509d73b0", size = 30662, upload-time = "2025-10-02T14:37:01.743Z" },
+ { url = "https://files.pythonhosted.org/packages/b8/0b/b14510b38ba91caf43006209db846a696ceea6a847a0c9ba0a5b1adc53d6/xxhash-3.6.0-pp311-pypy311_pp73-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:5dc1e14d14fa0f5789ec29a7062004b5933964bb9b02aae6622b8f530dc40296", size = 41056, upload-time = "2025-10-02T14:37:02.879Z" },
+ { url = "https://files.pythonhosted.org/packages/50/55/15a7b8a56590e66ccd374bbfa3f9ffc45b810886c8c3b614e3f90bd2367c/xxhash-3.6.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:881b47fc47e051b37d94d13e7455131054b56749b91b508b0907eb07900d1c13", size = 36251, upload-time = "2025-10-02T14:37:04.44Z" },
+ { url = "https://files.pythonhosted.org/packages/62/b2/5ac99a041a29e58e95f907876b04f7067a0242cb85b5f39e726153981503/xxhash-3.6.0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c6dc31591899f5e5666f04cc2e529e69b4072827085c1ef15294d91a004bc1bd", size = 32481, upload-time = "2025-10-02T14:37:05.869Z" },
+ { url = "https://files.pythonhosted.org/packages/7b/d9/8d95e906764a386a3d3b596f3c68bb63687dfca806373509f51ce8eea81f/xxhash-3.6.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:15e0dac10eb9309508bfc41f7f9deaa7755c69e35af835db9cb10751adebc35d", size = 31565, upload-time = "2025-10-02T14:37:06.966Z" },
+]
+
[[package]]
name = "zstandard"
version = "0.25.0"
diff --git a/libs/core/README.md b/libs/core/README.md
index 9aacc631c36..80cfbe0d8ec 100644
--- a/libs/core/README.md
+++ b/libs/core/README.md
@@ -1,7 +1,14 @@
 # LangChain Core
-[](https://opensource.org/licenses/MIT)
+[](https://pypi.org/project/langchain-core/#history)
+[](https://opensource.org/licenses/MIT)
[](https://pypistats.org/packages/langchain-core)
+[](https://twitter.com/langchainai)
+
+Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
+
+To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
+LangSmith is a unified developer platform for building, testing, and monitoring LLM applications.
## Quick Install
@@ -9,16 +16,14 @@
pip install langchain-core
```
-## What is it?
+## What is this?
-LangChain Core contains the base abstractions that power the the LangChain ecosystem.
+LangChain Core contains the base abstractions that power the LangChain ecosystem.
These abstractions are designed to be as modular and simple as possible.
The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.
-For full documentation see the [API reference](https://reference.langchain.com/python/).
-
 ## Why build on top of LangChain Core?
The LangChain ecosystem is built on top of `langchain-core`. Some of the benefits:
@@ -27,12 +32,16 @@ The LangChain ecosystem is built on top of `langchain-core`. Some of the benefit
- **Stability**: We are committed to a stable versioning scheme, and will communicate any breaking changes with advance notice and version bumps.
- **Battle-tested**: Core components have the largest install base in the LLM ecosystem, and are used in production by many companies.
+## Documentation
+
+For full documentation, see the [API reference](https://reference.langchain.com/python/langchain_core/).
+
 ## Releases & Versioning
-See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning Policy](https://docs.langchain.com/oss/python/versioning).
+See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.
 ## Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
-For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing).
+For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).
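To make the "base abstractions" described above concrete, here is a minimal sketch of the kind of composition `langchain-core` enables on its own. The lambda below is a stand-in for a real chat model, which would normally come from a provider package such as `langchain-openai`; everything else uses only `langchain-core` APIs.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda

# A prompt template is one of the core abstractions; any provider's chat
# model implements the same Runnable interface and could be swapped in here.
prompt = ChatPromptTemplate.from_messages([("human", "Translate to French: {text}")])

# Stand-in "model": echoes the rendered human message in upper case.
fake_model = RunnableLambda(lambda value: value.to_messages()[-1].content.upper())

chain = prompt | fake_model
print(chain.invoke({"text": "hello"}))  # TRANSLATE TO FRENCH: HELLO
```

Because every component shares the `Runnable` interface, replacing the stand-in with a real model is a one-line change.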
diff --git a/libs/core/langchain_core/agents.py b/libs/core/langchain_core/agents.py
index 62a44623972..4a020a99c95 100644
--- a/libs/core/langchain_core/agents.py
+++ b/libs/core/langchain_core/agents.py
@@ -5,12 +5,10 @@
!!! warning
New agents should be built using the
- [langgraph library](https://github.com/langchain-ai/langgraph), which provides a
+ [`langchain` library](https://pypi.org/project/langchain/), which provides a
simpler and more flexible way to define agents.
- Please see the
- [migration guide](https://python.langchain.com/docs/how_to/migrate_agent/) for
- information on how to migrate existing agents to modern langgraph agents.
+ See docs on [building agents](https://docs.langchain.com/oss/python/langchain/agents).
Agents use language models to choose a sequence of actions to take.
@@ -54,37 +52,39 @@ class AgentAction(Serializable):
"""The input to pass in to the Tool."""
log: str
"""Additional information to log about the action.
- This log can be used in a few ways. First, it can be used to audit
- what exactly the LLM predicted to lead to this (tool, tool_input).
- Second, it can be used in future iterations to show the LLMs prior
- thoughts. This is useful when (tool, tool_input) does not contain
- full information about the LLM prediction (for example, any `thought`
- before the tool/tool_input)."""
+
+ This log can be used in a few ways. First, it can be used to audit what exactly the
+ LLM predicted to lead to this `(tool, tool_input)`.
+
+ Second, it can be used in future iterations to show the LLM's prior thoughts. This is
+ useful when `(tool, tool_input)` does not contain full information about the LLM
+ prediction (for example, any `thought` before the tool/tool_input).
+ """
type: Literal["AgentAction"] = "AgentAction"
# Override init to support instantiation by position for backward compat.
def __init__(self, tool: str, tool_input: str | dict, log: str, **kwargs: Any):
- """Create an AgentAction.
+ """Create an `AgentAction`.
Args:
tool: The name of the tool to execute.
- tool_input: The input to pass in to the Tool.
+ tool_input: The input to pass in to the `Tool`.
log: Additional information to log about the action.
"""
super().__init__(tool=tool, tool_input=tool_input, log=log, **kwargs)
@classmethod
def is_lc_serializable(cls) -> bool:
- """AgentAction is serializable.
+ """`AgentAction` is serializable.
Returns:
- True
+ `True`
"""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "agent"]`
@@ -100,19 +100,23 @@ class AgentAction(Serializable):
class AgentActionMessageLog(AgentAction):
"""Representation of an action to be executed by an agent.
- This is similar to AgentAction, but includes a message log consisting of
- chat messages. This is useful when working with ChatModels, and is used
- to reconstruct conversation history from the agent's perspective.
+ This is similar to `AgentAction`, but includes a message log consisting of
+ chat messages.
+
+ This is useful when working with `ChatModels`, and is used to reconstruct
+ conversation history from the agent's perspective.
"""
message_log: Sequence[BaseMessage]
- """Similar to log, this can be used to pass along extra
- information about what exact messages were predicted by the LLM
- before parsing out the (tool, tool_input). This is again useful
- if (tool, tool_input) cannot be used to fully recreate the LLM
- prediction, and you need that LLM prediction (for future agent iteration).
+ """Similar to log, this can be used to pass along extra information about what exact
+ messages were predicted by the LLM before parsing out the `(tool, tool_input)`.
+
+ This is again useful if `(tool, tool_input)` cannot be used to fully recreate the
+ LLM prediction, and you need that LLM prediction (for future agent iteration).
+
Compared to `log`, this is useful when the underlying LLM is a
- ChatModel (and therefore returns messages rather than a string)."""
+ chat model (and therefore returns messages rather than a string).
+ """
# Ignoring type because we're overriding the type from AgentAction.
# And this is the correct thing to do in this case.
# The type literal is used for serialization purposes.
@@ -120,12 +124,12 @@ class AgentActionMessageLog(AgentAction):
class AgentStep(Serializable):
- """Result of running an AgentAction."""
+ """Result of running an `AgentAction`."""
action: AgentAction
- """The AgentAction that was executed."""
+ """The `AgentAction` that was executed."""
observation: Any
- """The result of the AgentAction."""
+ """The result of the `AgentAction`."""
@property
def messages(self) -> Sequence[BaseMessage]:
@@ -134,19 +138,22 @@ class AgentStep(Serializable):
class AgentFinish(Serializable):
- """Final return value of an ActionAgent.
+ """Final return value of an `ActionAgent`.
- Agents return an AgentFinish when they have reached a stopping condition.
+ Agents return an `AgentFinish` when they have reached a stopping condition.
"""
return_values: dict
"""Dictionary of return values."""
log: str
"""Additional information to log about the return value.
+
This is used to pass along the full LLM prediction, not just the parsed out
- return value. For example, if the full LLM prediction was
- `Final Answer: 2` you may want to just return `2` as a return value, but pass
- along the full string as a `log` (for debugging or observability purposes).
+ return value.
+
+ For example, if the full LLM prediction was `Final Answer: 2` you may want to just
+ return `2` as a return value, but pass along the full string as a `log` (for
+ debugging or observability purposes).
"""
type: Literal["AgentFinish"] = "AgentFinish"
@@ -156,12 +163,12 @@ class AgentFinish(Serializable):
@classmethod
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "agent"]`
@@ -204,7 +211,7 @@ def _convert_agent_observation_to_messages(
observation: Observation to convert to a message.
Returns:
- AIMessage that corresponds to the original tool invocation.
+ `AIMessage` that corresponds to the original tool invocation.
"""
if isinstance(agent_action, AgentActionMessageLog):
return [_create_function_message(agent_action, observation)]
@@ -227,7 +234,7 @@ def _create_function_message(
observation: the result of the tool invocation.
Returns:
- FunctionMessage that corresponds to the original tool invocation.
+ `FunctionMessage` that corresponds to the original tool invocation.
"""
if not isinstance(observation, str):
try:
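As a rough illustration of the data structures documented in this file (keeping in mind the warning that new agents should be built with the `langchain` package), constructing an `AgentAction` and an `AgentFinish` directly might look like the sketch below; the tool name and strings are purely illustrative.

```python
from langchain_core.agents import AgentAction, AgentFinish

# Positional form kept for backward compatibility: (tool, tool_input, log).
action = AgentAction("search", "weather in SF", "I should look up the weather first.\n")

# When the agent reaches a stopping condition it returns an AgentFinish;
# `log` carries the full LLM text while `return_values` holds the parsed result.
finish = AgentFinish(
    return_values={"output": "It is sunny."},
    log="Final Answer: It is sunny.",
)

print(action.tool, action.tool_input)    # search weather in SF
print(finish.return_values["output"])    # It is sunny.
```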
diff --git a/libs/core/langchain_core/caches.py b/libs/core/langchain_core/caches.py
index 9db037e913b..86139c5d821 100644
--- a/libs/core/langchain_core/caches.py
+++ b/libs/core/langchain_core/caches.py
@@ -1,18 +1,17 @@
-"""Cache classes.
+"""Optional caching layer for language models.
-!!! warning
- Beta Feature!
+Distinct from provider-based [prompt caching](https://docs.langchain.com/oss/python/langchain/models#prompt-caching).
-**Cache** provides an optional caching layer for LLMs.
+!!! warning "Beta feature"
+ This is a beta feature. Please be wary of deploying experimental code to production
+ unless you've taken appropriate precautions.
-Cache is useful for two reasons:
+A cache is useful for two reasons:
-- It can save you money by reducing the number of API calls you make to the LLM
+1. It can save you money by reducing the number of API calls you make to the LLM
provider if you're often requesting the same completion multiple times.
-- It can speed up your application by reducing the number of API calls you make
- to the LLM provider.
-
-Cache directly competes with Memory. See documentation for Pros and Cons.
+2. It can speed up your application by reducing the number of API calls you make to the
+ LLM provider.
"""
from __future__ import annotations
@@ -34,8 +33,8 @@ class BaseCache(ABC):
The cache interface consists of the following methods:
- - lookup: Look up a value based on a prompt and llm_string.
- - update: Update the cache based on a prompt and llm_string.
+ - lookup: Look up a value based on a prompt and `llm_string`.
+ - update: Update the cache based on a prompt and `llm_string`.
- clear: Clear the cache.
In addition, the cache interface provides an async version of each method.
@@ -47,43 +46,46 @@ class BaseCache(ABC):
@abstractmethod
def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
- """Look up based on prompt and llm_string.
+ """Look up based on `prompt` and `llm_string`.
A cache implementation is expected to generate a key from the 2-tuple
- of prompt and llm_string (e.g., by concatenating them with a delimiter).
+ of `prompt` and `llm_string` (e.g., by concatenating them with a delimiter).
Args:
- prompt: a string representation of the prompt.
- In the case of a Chat model, the prompt is a non-trivial
+ prompt: A string representation of the prompt.
+ In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
+
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
- These invocation parameters are serialized into a string
- representation.
+
+ These invocation parameters are serialized into a string representation.
Returns:
- On a cache miss, return None. On a cache hit, return the cached value.
- The cached value is a list of Generations (or subclasses).
+ On a cache miss, return `None`. On a cache hit, return the cached value.
+ The cached value is a list of `Generation` (or subclasses).
"""
@abstractmethod
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
- """Update cache based on prompt and llm_string.
+ """Update cache based on `prompt` and `llm_string`.
The prompt and llm_string are used to generate a key for the cache.
The key should match that of the lookup method.
Args:
- prompt: a string representation of the prompt.
- In the case of a Chat model, the prompt is a non-trivial
+ prompt: A string representation of the prompt.
+ In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
+
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
+
These invocation parameters are serialized into a string
representation.
- return_val: The value to be cached. The value is a list of Generations
+ return_val: The value to be cached. The value is a list of `Generation`
(or subclasses).
"""
@@ -92,45 +94,49 @@ class BaseCache(ABC):
"""Clear cache that can take additional keyword arguments."""
async def alookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
- """Async look up based on prompt and llm_string.
+ """Async look up based on `prompt` and `llm_string`.
A cache implementation is expected to generate a key from the 2-tuple
- of prompt and llm_string (e.g., by concatenating them with a delimiter).
+ of `prompt` and `llm_string` (e.g., by concatenating them with a delimiter).
Args:
- prompt: a string representation of the prompt.
- In the case of a Chat model, the prompt is a non-trivial
+ prompt: A string representation of the prompt.
+ In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
+
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
+
These invocation parameters are serialized into a string
representation.
Returns:
- On a cache miss, return None. On a cache hit, return the cached value.
- The cached value is a list of Generations (or subclasses).
+ On a cache miss, return `None`. On a cache hit, return the cached value.
+ The cached value is a list of `Generation` (or subclasses).
"""
return await run_in_executor(None, self.lookup, prompt, llm_string)
async def aupdate(
self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE
) -> None:
- """Async update cache based on prompt and llm_string.
+ """Async update cache based on `prompt` and `llm_string`.
The prompt and llm_string are used to generate a key for the cache.
The key should match that of the look up method.
Args:
- prompt: a string representation of the prompt.
- In the case of a Chat model, the prompt is a non-trivial
+ prompt: A string representation of the prompt.
+ In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
+
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
+
These invocation parameters are serialized into a string
representation.
- return_val: The value to be cached. The value is a list of Generations
+ return_val: The value to be cached. The value is a list of `Generation`
(or subclasses).
"""
return await run_in_executor(None, self.update, prompt, llm_string, return_val)
@@ -150,10 +156,9 @@ class InMemoryCache(BaseCache):
maxsize: The maximum number of items to store in the cache.
If `None`, the cache has no maximum size.
If the cache exceeds the maximum size, the oldest items are removed.
- Default is None.
Raises:
- ValueError: If maxsize is less than or equal to 0.
+ ValueError: If `maxsize` is less than or equal to `0`.
"""
self._cache: dict[tuple[str, str], RETURN_VAL_TYPE] = {}
if maxsize is not None and maxsize <= 0:
@@ -162,28 +167,28 @@ class InMemoryCache(BaseCache):
self._maxsize = maxsize
def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
- """Look up based on prompt and llm_string.
+ """Look up based on `prompt` and `llm_string`.
Args:
- prompt: a string representation of the prompt.
- In the case of a Chat model, the prompt is a non-trivial
+ prompt: A string representation of the prompt.
+ In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
Returns:
- On a cache miss, return None. On a cache hit, return the cached value.
+ On a cache miss, return `None`. On a cache hit, return the cached value.
"""
return self._cache.get((prompt, llm_string), None)
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
- """Update cache based on prompt and llm_string.
+ """Update cache based on `prompt` and `llm_string`.
Args:
- prompt: a string representation of the prompt.
- In the case of a Chat model, the prompt is a non-trivial
+ prompt: A string representation of the prompt.
+ In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
- return_val: The value to be cached. The value is a list of Generations
+ return_val: The value to be cached. The value is a list of `Generation`
(or subclasses).
"""
if self._maxsize is not None and len(self._cache) == self._maxsize:
@@ -196,30 +201,30 @@ class InMemoryCache(BaseCache):
self._cache = {}
async def alookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
- """Async look up based on prompt and llm_string.
+ """Async look up based on `prompt` and `llm_string`.
Args:
- prompt: a string representation of the prompt.
- In the case of a Chat model, the prompt is a non-trivial
+ prompt: A string representation of the prompt.
+ In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
Returns:
- On a cache miss, return None. On a cache hit, return the cached value.
+ On a cache miss, return `None`. On a cache hit, return the cached value.
"""
return self.lookup(prompt, llm_string)
async def aupdate(
self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE
) -> None:
- """Async update cache based on prompt and llm_string.
+ """Async update cache based on `prompt` and `llm_string`.
Args:
- prompt: a string representation of the prompt.
- In the case of a Chat model, the prompt is a non-trivial
+ prompt: A string representation of the prompt.
+ In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
- return_val: The value to be cached. The value is a list of Generations
+ return_val: The value to be cached. The value is a list of `Generation`
(or subclasses).
"""
self.update(prompt, llm_string, return_val)
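A minimal sketch of wiring the `InMemoryCache` documented above into the global LLM cache, assuming the `set_llm_cache` helper from `langchain_core.globals`; the `maxsize` value is illustrative.

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

# Bounded cache: once 1000 distinct (prompt, llm_string) keys are stored,
# the oldest entry is evicted when a new one is added.
set_llm_cache(InMemoryCache(maxsize=1000))

# From here on, any LLM or chat model call with an identical prompt and
# identical invocation parameters is served from the cache instead of the
# provider API.
```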
diff --git a/libs/core/langchain_core/callbacks/base.py b/libs/core/langchain_core/callbacks/base.py
index 4265902d0c3..af455bd9700 100644
--- a/libs/core/langchain_core/callbacks/base.py
+++ b/libs/core/langchain_core/callbacks/base.py
@@ -420,8 +420,6 @@ class RunManagerMixin:
(includes inherited tags).
metadata: The metadata associated with the custom event
(includes inherited metadata).
-
- !!! version-added "Added in version 0.2.15"
"""
@@ -882,8 +880,6 @@ class AsyncCallbackHandler(BaseCallbackHandler):
(includes inherited tags).
metadata: The metadata associated with the custom event
(includes inherited metadata).
-
- !!! version-added "Added in version 0.2.15"
"""
@@ -1001,7 +997,7 @@ class BaseCallbackManager(CallbackManagerMixin):
Args:
handler: The handler to add.
- inherit: Whether to inherit the handler. Default is True.
+ inherit: Whether to inherit the handler.
"""
if handler not in self.handlers:
self.handlers.append(handler)
@@ -1028,7 +1024,7 @@ class BaseCallbackManager(CallbackManagerMixin):
Args:
handlers: The handlers to set.
- inherit: Whether to inherit the handlers. Default is True.
+ inherit: Whether to inherit the handlers.
"""
self.handlers = []
self.inheritable_handlers = []
@@ -1044,7 +1040,7 @@ class BaseCallbackManager(CallbackManagerMixin):
Args:
handler: The handler to set.
- inherit: Whether to inherit the handler. Default is True.
+ inherit: Whether to inherit the handler.
"""
self.set_handlers([handler], inherit=inherit)
@@ -1057,7 +1053,7 @@ class BaseCallbackManager(CallbackManagerMixin):
Args:
tags: The tags to add.
- inherit: Whether to inherit the tags. Default is True.
+ inherit: Whether to inherit the tags.
"""
for tag in tags:
if tag in self.tags:
@@ -1087,7 +1083,7 @@ class BaseCallbackManager(CallbackManagerMixin):
Args:
metadata: The metadata to add.
- inherit: Whether to inherit the metadata. Default is True.
+ inherit: Whether to inherit the metadata.
"""
self.metadata.update(metadata)
if inherit:
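For context, a sketch of how the `add_handler`, `add_tags`, and `add_metadata` methods touched in this hunk are typically used on a `CallbackManager`; items added with `inherit=True` are also passed down to child run managers.

```python
from langchain_core.callbacks import CallbackManager, StdOutCallbackHandler

manager = CallbackManager(handlers=[])

# With inherit=True (the default) the handler, tags, and metadata are also
# propagated to child callback managers created from this one.
manager.add_handler(StdOutCallbackHandler(), inherit=True)
manager.add_tags(["experiment-1"])
manager.add_metadata({"variant": "baseline"})
```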
diff --git a/libs/core/langchain_core/callbacks/file.py b/libs/core/langchain_core/callbacks/file.py
index e1bfc05dcb8..7f0c82531c1 100644
--- a/libs/core/langchain_core/callbacks/file.py
+++ b/libs/core/langchain_core/callbacks/file.py
@@ -47,7 +47,7 @@ class FileCallbackHandler(BaseCallbackHandler):
Args:
filename: The file path to write to.
mode: The file open mode. Defaults to `'a'` (append).
- color: Default color for text output. Defaults to `None`.
+ color: Default color for text output.
!!! note
When not used as a context manager, a deprecation warning will be issued
@@ -64,7 +64,7 @@ class FileCallbackHandler(BaseCallbackHandler):
Args:
filename: Path to the output file.
mode: File open mode (e.g., `'w'`, `'a'`, `'x'`). Defaults to `'a'`.
- color: Default text color for output. Defaults to `None`.
+ color: Default text color for output.
"""
self.filename = filename
@@ -132,7 +132,7 @@ class FileCallbackHandler(BaseCallbackHandler):
Args:
text: The text to write to the file.
color: Optional color for the text. Defaults to `self.color`.
- end: String appended after the text. Defaults to `""`.
+ end: String appended after the text.
file: Optional file to write to. Defaults to `self.file`.
Raises:
@@ -239,7 +239,7 @@ class FileCallbackHandler(BaseCallbackHandler):
text: The text to write.
color: Color override for this specific output. If `None`, uses
`self.color`.
- end: String appended after the text. Defaults to `""`.
+ end: String appended after the text.
**kwargs: Additional keyword arguments.
"""
diff --git a/libs/core/langchain_core/callbacks/manager.py b/libs/core/langchain_core/callbacks/manager.py
index 428bd7cdb23..29db63609db 100644
--- a/libs/core/langchain_core/callbacks/manager.py
+++ b/libs/core/langchain_core/callbacks/manager.py
@@ -79,13 +79,13 @@ def trace_as_chain_group(
Args:
group_name: The name of the chain group.
- callback_manager: The callback manager to use. Defaults to `None`.
- inputs: The inputs to the chain group. Defaults to `None`.
- project_name: The name of the project. Defaults to `None`.
- example_id: The ID of the example. Defaults to `None`.
+ callback_manager: The callback manager to use.
+ inputs: The inputs to the chain group.
+ project_name: The name of the project.
+ example_id: The ID of the example.
run_id: The ID of the run.
- tags: The inheritable tags to apply to all runs. Defaults to `None`.
- metadata: The metadata to apply to all runs. Defaults to `None`.
+ tags: The inheritable tags to apply to all runs.
+ metadata: The metadata to apply to all runs.
!!! note
Must have `LANGCHAIN_TRACING_V2` env var set to true to see the trace in
@@ -155,13 +155,13 @@ async def atrace_as_chain_group(
Args:
group_name: The name of the chain group.
callback_manager: The async callback manager to use,
- which manages tracing and other callback behavior. Defaults to `None`.
- inputs: The inputs to the chain group. Defaults to `None`.
- project_name: The name of the project. Defaults to `None`.
- example_id: The ID of the example. Defaults to `None`.
+ which manages tracing and other callback behavior.
+ inputs: The inputs to the chain group.
+ project_name: The name of the project.
+ example_id: The ID of the example.
run_id: The ID of the run.
- tags: The inheritable tags to apply to all runs. Defaults to `None`.
- metadata: The metadata to apply to all runs. Defaults to `None`.
+ tags: The inheritable tags to apply to all runs.
+ metadata: The metadata to apply to all runs.
Yields:
The async callback manager for the chain group.
@@ -229,7 +229,24 @@ def shielded(func: Func) -> Func:
@functools.wraps(func)
async def wrapped(*args: Any, **kwargs: Any) -> Any:
- return await asyncio.shield(func(*args, **kwargs))
+ # Capture the current context to preserve context variables
+ ctx = copy_context()
+
+ # Create the coroutine
+ coro = func(*args, **kwargs)
+
+ # For Python 3.11+, create task with explicit context
+ # For older versions, fall back to the original behavior
+ try:
+ # Create a task with the captured context to preserve context variables
+ task = asyncio.create_task(coro, context=ctx) # type: ignore[call-arg, unused-ignore]
+ # The `call-arg` code is included so this ignore doesn't fail 3.9 or 3.10 tests
+ return await asyncio.shield(task)
+ except TypeError:
+ # Python < 3.11 fallback - create task normally then shield
+ # This won't preserve context perfectly but is better than nothing
+ task = asyncio.create_task(coro)
+ return await asyncio.shield(task)
return cast("Func", wrapped)
@@ -462,11 +479,11 @@ class BaseRunManager(RunManagerMixin):
run_id: The ID of the run.
handlers: The list of handlers.
inheritable_handlers: The list of inheritable handlers.
- parent_run_id: The ID of the parent run. Defaults to `None`.
- tags: The list of tags. Defaults to `None`.
- inheritable_tags: The list of inheritable tags. Defaults to `None`.
- metadata: The metadata. Defaults to `None`.
- inheritable_metadata: The inheritable metadata. Defaults to `None`.
+ parent_run_id: The ID of the parent run.
+ tags: The list of tags.
+ inheritable_tags: The list of inheritable tags.
+ metadata: The metadata.
+ inheritable_metadata: The inheritable metadata.
"""
self.run_id = run_id
@@ -557,7 +574,7 @@ class ParentRunManager(RunManager):
"""Get a child callback manager.
Args:
- tag: The tag for the child callback manager. Defaults to `None`.
+ tag: The tag for the child callback manager.
Returns:
The child callback manager.
@@ -641,7 +658,7 @@ class AsyncParentRunManager(AsyncRunManager):
"""Get a child callback manager.
Args:
- tag: The tag for the child callback manager. Defaults to `None`.
+ tag: The tag for the child callback manager.
Returns:
The child callback manager.
@@ -1303,7 +1320,7 @@ class CallbackManager(BaseCallbackManager):
Args:
serialized: The serialized LLM.
prompts: The list of prompts.
- run_id: The ID of the run. Defaults to `None`.
+ run_id: The ID of the run.
**kwargs: Additional keyword arguments.
Returns:
@@ -1354,7 +1371,7 @@ class CallbackManager(BaseCallbackManager):
Args:
serialized: The serialized LLM.
messages: The list of messages.
- run_id: The ID of the run. Defaults to `None`.
+ run_id: The ID of the run.
**kwargs: Additional keyword arguments.
Returns:
@@ -1408,7 +1425,7 @@ class CallbackManager(BaseCallbackManager):
Args:
serialized: The serialized chain.
inputs: The inputs to the chain.
- run_id: The ID of the run. Defaults to `None`.
+ run_id: The ID of the run.
**kwargs: Additional keyword arguments.
Returns:
@@ -1457,8 +1474,8 @@ class CallbackManager(BaseCallbackManager):
serialized: Serialized representation of the tool.
input_str: The input to the tool as a string.
Non-string inputs are cast to strings.
- run_id: ID for the run. Defaults to `None`.
- parent_run_id: The ID of the parent run. Defaults to `None`.
+ run_id: ID for the run.
+ parent_run_id: The ID of the parent run.
inputs: The original input to the tool if provided.
Recommended for usage instead of input_str when the original
input is needed.
@@ -1512,8 +1529,8 @@ class CallbackManager(BaseCallbackManager):
Args:
serialized: The serialized retriever.
query: The query.
- run_id: The ID of the run. Defaults to `None`.
- parent_run_id: The ID of the parent run. Defaults to `None`.
+ run_id: The ID of the run.
+ parent_run_id: The ID of the parent run.
**kwargs: Additional keyword arguments.
Returns:
@@ -1562,13 +1579,10 @@ class CallbackManager(BaseCallbackManager):
Args:
name: The name of the adhoc event.
data: The data for the adhoc event.
- run_id: The ID of the run. Defaults to `None`.
+ run_id: The ID of the run.
Raises:
ValueError: If additional keyword arguments are passed.
-
- !!! version-added "Added in version 0.2.14"
-
"""
if not self.handlers:
return
@@ -1782,7 +1796,7 @@ class AsyncCallbackManager(BaseCallbackManager):
Args:
serialized: The serialized LLM.
prompts: The list of prompts.
- run_id: The ID of the run. Defaults to `None`.
+ run_id: The ID of the run.
**kwargs: Additional keyword arguments.
Returns:
@@ -1870,7 +1884,7 @@ class AsyncCallbackManager(BaseCallbackManager):
Args:
serialized: The serialized LLM.
messages: The list of messages.
- run_id: The ID of the run. Defaults to `None`.
+ run_id: The ID of the run.
**kwargs: Additional keyword arguments.
Returns:
@@ -1941,7 +1955,7 @@ class AsyncCallbackManager(BaseCallbackManager):
Args:
serialized: The serialized chain.
inputs: The inputs to the chain.
- run_id: The ID of the run. Defaults to `None`.
+ run_id: The ID of the run.
**kwargs: Additional keyword arguments.
Returns:
@@ -1988,8 +2002,8 @@ class AsyncCallbackManager(BaseCallbackManager):
Args:
serialized: The serialized tool.
input_str: The input to the tool.
- run_id: The ID of the run. Defaults to `None`.
- parent_run_id: The ID of the parent run. Defaults to `None`.
+ run_id: The ID of the run.
+ parent_run_id: The ID of the parent run.
**kwargs: Additional keyword arguments.
Returns:
@@ -2038,12 +2052,10 @@ class AsyncCallbackManager(BaseCallbackManager):
Args:
name: The name of the adhoc event.
data: The data for the adhoc event.
- run_id: The ID of the run. Defaults to `None`.
+ run_id: The ID of the run.
Raises:
ValueError: If additional keyword arguments are passed.
-
- !!! version-added "Added in version 0.2.14"
"""
if not self.handlers:
return
@@ -2082,8 +2094,8 @@ class AsyncCallbackManager(BaseCallbackManager):
Args:
serialized: The serialized retriever.
query: The query.
- run_id: The ID of the run. Defaults to `None`.
- parent_run_id: The ID of the parent run. Defaults to `None`.
+ run_id: The ID of the run.
+ parent_run_id: The ID of the parent run.
**kwargs: Additional keyword arguments.
Returns:
@@ -2555,9 +2567,6 @@ async def adispatch_custom_event(
This is due to a limitation in asyncio for python <= 3.10 that prevents
LangChain from automatically propagating the config object on the user's
behalf.
-
- !!! version-added "Added in version 0.2.15"
-
"""
# Import locally to prevent circular imports.
from langchain_core.runnables.config import ( # noqa: PLC0415
@@ -2630,9 +2639,6 @@ def dispatch_custom_event(
foo_ = RunnableLambda(foo)
foo_.invoke({"a": "1"}, {"callbacks": [CustomCallbackManager()]})
```
-
- !!! version-added "Added in version 0.2.15"
-
"""
# Import locally to prevent circular imports.
from langchain_core.runnables.config import ( # noqa: PLC0415
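Taken together, these signatures all describe the run identifiers, tags, and metadata that callback managers thread through to handlers. A minimal sketch of a handler observing them when attached via `config` (the handler name and printed format are illustrative, not part of the library):

```python
from __future__ import annotations

from typing import Any
from uuid import UUID

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.runnables import RunnableLambda


class PrintRunInfo(BaseCallbackHandler):
    """Hypothetical handler that logs run IDs and tags as chains start."""

    def on_chain_start(
        self,
        serialized: dict[str, Any],
        inputs: Any,
        *,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        tags: list[str] | None = None,
        **kwargs: Any,
    ) -> None:
        print(f"chain start: run_id={run_id} parent={parent_run_id} tags={tags}")


chain = RunnableLambda(lambda x: x + 1)
chain.invoke(1, config={"callbacks": [PrintRunInfo()], "tags": ["demo"]})
```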
diff --git a/libs/core/langchain_core/callbacks/stdout.py b/libs/core/langchain_core/callbacks/stdout.py
index c2a1cfe805b..95259cfb38a 100644
--- a/libs/core/langchain_core/callbacks/stdout.py
+++ b/libs/core/langchain_core/callbacks/stdout.py
@@ -20,7 +20,7 @@ class StdOutCallbackHandler(BaseCallbackHandler):
"""Initialize callback handler.
Args:
- color: The color to use for the text. Defaults to `None`.
+ color: The color to use for the text.
"""
self.color = color
@@ -61,7 +61,7 @@ class StdOutCallbackHandler(BaseCallbackHandler):
Args:
action: The agent action.
- color: The color to use for the text. Defaults to `None`.
+ color: The color to use for the text.
**kwargs: Additional keyword arguments.
"""
print_text(action.log, color=color or self.color)
@@ -79,9 +79,9 @@ class StdOutCallbackHandler(BaseCallbackHandler):
Args:
output: The output to print.
- color: The color to use for the text. Defaults to `None`.
- observation_prefix: The observation prefix. Defaults to `None`.
- llm_prefix: The LLM prefix. Defaults to `None`.
+ color: The color to use for the text.
+ observation_prefix: The observation prefix.
+ llm_prefix: The LLM prefix.
**kwargs: Additional keyword arguments.
"""
output = str(output)
@@ -103,8 +103,8 @@ class StdOutCallbackHandler(BaseCallbackHandler):
Args:
text: The text to print.
- color: The color to use for the text. Defaults to `None`.
- end: The end character to use. Defaults to "".
+ color: The color to use for the text.
+ end: The end character to use.
**kwargs: Additional keyword arguments.
"""
print_text(text, color=color or self.color, end=end)
@@ -117,7 +117,7 @@ class StdOutCallbackHandler(BaseCallbackHandler):
Args:
finish: The agent finish.
- color: The color to use for the text. Defaults to `None`.
+ color: The color to use for the text.
**kwargs: Additional keyword arguments.
"""
print_text(finish.log, color=color or self.color, end="\n")
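The `color` and `end` parameters above fall back to the handler-level defaults when not supplied per call. A small hedged sketch exercising the handler directly (the log strings are made up):

```python
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.callbacks import StdOutCallbackHandler

handler = StdOutCallbackHandler(color="green")

# Per-call colors fall back to the handler's default when omitted.
handler.on_text("observation: 42", end="\n")
handler.on_agent_action(AgentAction(tool="search", tool_input="weather", log="Calling search\n"))
handler.on_agent_finish(AgentFinish(return_values={"output": "done"}, log="Finished\n"))
```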
diff --git a/libs/core/langchain_core/callbacks/usage.py b/libs/core/langchain_core/callbacks/usage.py
index f5b20aef0c1..b183a51383e 100644
--- a/libs/core/langchain_core/callbacks/usage.py
+++ b/libs/core/langchain_core/callbacks/usage.py
@@ -24,7 +24,7 @@ class UsageMetadataCallbackHandler(BaseCallbackHandler):
from langchain_core.callbacks import UsageMetadataCallbackHandler
llm_1 = init_chat_model(model="openai:gpt-4o-mini")
- llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-latest")
+ llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-20241022")
callback = UsageMetadataCallbackHandler()
result_1 = llm_1.invoke("Hello", config={"callbacks": [callback]})
@@ -43,7 +43,7 @@ class UsageMetadataCallbackHandler(BaseCallbackHandler):
'input_token_details': {'cache_read': 0, 'cache_creation': 0}}}
```
- !!! version-added "Added in version 0.3.49"
+ !!! version-added "Added in `langchain-core` 0.3.49"
"""
@@ -109,7 +109,7 @@ def get_usage_metadata_callback(
from langchain_core.callbacks import get_usage_metadata_callback
llm_1 = init_chat_model(model="openai:gpt-4o-mini")
- llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-latest")
+ llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-20241022")
with get_usage_metadata_callback() as cb:
llm_1.invoke("Hello")
@@ -134,7 +134,7 @@ def get_usage_metadata_callback(
}
```
- !!! version-added "Added in version 0.3.49"
+ !!! version-added "Added in `langchain-core` 0.3.49"
"""
usage_metadata_callback_var: ContextVar[UsageMetadataCallbackHandler | None] = (
diff --git a/libs/core/langchain_core/chat_history.py b/libs/core/langchain_core/chat_history.py
index a4f135b8dfc..7c315d44d46 100644
--- a/libs/core/langchain_core/chat_history.py
+++ b/libs/core/langchain_core/chat_history.py
@@ -121,7 +121,7 @@ class BaseChatMessageHistory(ABC):
This method may be deprecated in a future release.
Args:
- message: The human message to add to the store.
+ message: The `HumanMessage` to add to the store.
"""
if isinstance(message, HumanMessage):
self.add_message(message)
@@ -129,7 +129,7 @@ class BaseChatMessageHistory(ABC):
self.add_message(HumanMessage(content=message))
def add_ai_message(self, message: AIMessage | str) -> None:
- """Convenience method for adding an AI message string to the store.
+ """Convenience method for adding an `AIMessage` string to the store.
!!! note
This is a convenience method. Code should favor the bulk `add_messages`
@@ -138,7 +138,7 @@ class BaseChatMessageHistory(ABC):
This method may be deprecated in a future release.
Args:
- message: The AI message to add.
+ message: The `AIMessage` to add.
"""
if isinstance(message, AIMessage):
self.add_message(message)
@@ -153,7 +153,7 @@ class BaseChatMessageHistory(ABC):
Raises:
NotImplementedError: If the sub-class has not implemented an efficient
- add_messages method.
+ `add_messages` method.
"""
if type(self).add_messages != BaseChatMessageHistory.add_messages:
# This means that the sub-class has implemented an efficient add_messages
@@ -173,7 +173,7 @@ class BaseChatMessageHistory(ABC):
in an efficient manner to avoid unnecessary round-trips to the underlying store.
Args:
- messages: A sequence of BaseMessage objects to store.
+ messages: A sequence of `BaseMessage` objects to store.
"""
for message in messages:
self.add_message(message)
@@ -182,7 +182,7 @@ class BaseChatMessageHistory(ABC):
"""Async add a list of messages.
Args:
- messages: A sequence of BaseMessage objects to store.
+ messages: A sequence of `BaseMessage` objects to store.
"""
await run_in_executor(None, self.add_messages, messages)
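A quick sketch of the convenience methods versus the preferred bulk `add_messages` path, using the in-memory implementation (the message text is illustrative):

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage

history = InMemoryChatMessageHistory()

# Convenience helpers accept plain strings or message objects...
history.add_user_message("What does the indexing API do?")
history.add_ai_message(AIMessage(content="It keeps a vector store in sync with your sources."))

# ...but bulk add_messages is the preferred interface for new code.
history.add_messages([HumanMessage(content="Thanks!"), AIMessage(content="Any time.")])

print([m.type for m in history.messages])  # ['human', 'ai', 'human', 'ai']
```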
diff --git a/libs/core/langchain_core/document_loaders/base.py b/libs/core/langchain_core/document_loaders/base.py
index 6f46b99b71c..e74bc5976c0 100644
--- a/libs/core/langchain_core/document_loaders/base.py
+++ b/libs/core/langchain_core/document_loaders/base.py
@@ -27,7 +27,7 @@ class BaseLoader(ABC): # noqa: B024
"""Interface for Document Loader.
Implementations should implement the lazy-loading method using generators
- to avoid loading all Documents into memory at once.
+ to avoid loading all documents into memory at once.
`load` is provided just for user convenience and should not be overridden.
"""
@@ -35,38 +35,40 @@ class BaseLoader(ABC): # noqa: B024
# Sub-classes should not implement this method directly. Instead, they
# should implement the lazy load method.
def load(self) -> list[Document]:
- """Load data into Document objects.
+ """Load data into `Document` objects.
Returns:
- the documents.
+ The documents.
"""
return list(self.lazy_load())
async def aload(self) -> list[Document]:
- """Load data into Document objects.
+ """Load data into `Document` objects.
Returns:
- the documents.
+ The documents.
"""
return [document async for document in self.alazy_load()]
def load_and_split(
self, text_splitter: TextSplitter | None = None
) -> list[Document]:
- """Load Documents and split into chunks. Chunks are returned as Documents.
+ """Load `Document` and split into chunks. Chunks are returned as `Document`.
- Do not override this method. It should be considered to be deprecated!
+ !!! danger
+
+ Do not override this method. It should be considered deprecated!
Args:
- text_splitter: TextSplitter instance to use for splitting documents.
- Defaults to RecursiveCharacterTextSplitter.
+ text_splitter: `TextSplitter` instance to use for splitting documents.
+ Defaults to `RecursiveCharacterTextSplitter`.
Raises:
- ImportError: If langchain-text-splitters is not installed
- and no text_splitter is provided.
+ ImportError: If `langchain-text-splitters` is not installed
+ and no `text_splitter` is provided.
Returns:
- List of Documents.
+ List of `Document` objects.
"""
if text_splitter is None:
if not _HAS_TEXT_SPLITTERS:
@@ -86,10 +88,10 @@ class BaseLoader(ABC): # noqa: B024
# Attention: This method will be upgraded into an abstractmethod once it's
# implemented in all the existing subclasses.
def lazy_load(self) -> Iterator[Document]:
- """A lazy loader for Documents.
+ """A lazy loader for `Document`.
Yields:
- the documents.
+ The `Document` objects.
"""
if type(self).load != BaseLoader.load:
return iter(self.load())
@@ -97,10 +99,10 @@ class BaseLoader(ABC): # noqa: B024
raise NotImplementedError(msg)
async def alazy_load(self) -> AsyncIterator[Document]:
- """A lazy loader for Documents.
+ """A lazy loader for `Document`.
Yields:
- the documents.
+ The `Document` objects.
"""
iterator = await run_in_executor(None, self.lazy_load)
done = object()
@@ -115,7 +117,7 @@ class BaseBlobParser(ABC):
"""Abstract interface for blob parsers.
A blob parser provides a way to parse raw data stored in a blob into one
- or more documents.
+ or more `Document` objects.
The parser can be composed with blob loaders, making it easy to reuse
a parser independent of how the blob was originally loaded.
@@ -128,25 +130,25 @@ class BaseBlobParser(ABC):
Subclasses are required to implement this method.
Args:
- blob: Blob instance
+ blob: `Blob` instance
Returns:
- Generator of documents
+ Generator of `Document` objects
"""
def parse(self, blob: Blob) -> list[Document]:
- """Eagerly parse the blob into a document or documents.
+ """Eagerly parse the blob into a `Document` or list of `Document` objects.
This is a convenience method for interactive development environment.
- Production applications should favor the lazy_parse method instead.
+ Production applications should favor the `lazy_parse` method instead.
Subclasses should generally not over-ride this parse method.
Args:
- blob: Blob instance
+ blob: `Blob` instance
Returns:
- List of documents
+ List of `Document` objects
"""
return list(self.lazy_parse(blob))
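A minimal sketch of the lazy-loading contract described above: a hypothetical loader that streams one `Document` per line instead of materializing everything through `load` (the class and file name are made up):

```python
from __future__ import annotations

from collections.abc import Iterator

from langchain_core.document_loaders import BaseLoader
from langchain_core.documents import Document


class LineLoader(BaseLoader):
    """Hypothetical loader yielding one Document per non-empty line of a text file."""

    def __init__(self, path: str) -> None:
        self.path = path

    def lazy_load(self) -> Iterator[Document]:
        # Stream documents one at a time instead of loading the whole file into memory.
        with open(self.path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, start=1):
                if line.strip():
                    yield Document(
                        page_content=line.strip(),
                        metadata={"source": self.path, "line": lineno},
                    )


# loader = LineLoader("notes.txt")   # hypothetical file
# docs = loader.load()               # eager: list(loader.lazy_load()) under the hood
```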
diff --git a/libs/core/langchain_core/document_loaders/blob_loaders.py b/libs/core/langchain_core/document_loaders/blob_loaders.py
index 6f26106ee30..8c6832177fd 100644
--- a/libs/core/langchain_core/document_loaders/blob_loaders.py
+++ b/libs/core/langchain_core/document_loaders/blob_loaders.py
@@ -28,7 +28,7 @@ class BlobLoader(ABC):
def yield_blobs(
self,
) -> Iterable[Blob]:
- """A lazy loader for raw data represented by LangChain's Blob object.
+ """A lazy loader for raw data represented by LangChain's `Blob` object.
Returns:
A generator over blobs
diff --git a/libs/core/langchain_core/document_loaders/langsmith.py b/libs/core/langchain_core/document_loaders/langsmith.py
index ac69bef81e5..0ac054dc3b7 100644
--- a/libs/core/langchain_core/document_loaders/langsmith.py
+++ b/libs/core/langchain_core/document_loaders/langsmith.py
@@ -14,13 +14,13 @@ from langchain_core.documents import Document
class LangSmithLoader(BaseLoader):
- """Load LangSmith Dataset examples as Documents.
+ """Load LangSmith Dataset examples as `Document` objects.
- Loads the example inputs as the Document page content and places the entire example
- into the Document metadata. This allows you to easily create few-shot example
- retrievers from the loaded documents.
+ Loads the example inputs as the `Document` page content and places the entire
+ example into the `Document` metadata. This allows you to easily create few-shot
+ example retrievers from the loaded documents.
- ??? note "Lazy load"
+ ??? note "Lazy loading example"
```python
from langchain_core.document_loaders import LangSmithLoader
@@ -34,9 +34,6 @@ class LangSmithLoader(BaseLoader):
```python
# -> [Document("...", metadata={"inputs": {...}, "outputs": {...}, ...}), ...]
```
-
- !!! version-added "Added in version 0.2.34"
-
"""
def __init__(
@@ -60,26 +57,25 @@ class LangSmithLoader(BaseLoader):
"""Create a LangSmith loader.
Args:
- dataset_id: The ID of the dataset to filter by. Defaults to `None`.
- dataset_name: The name of the dataset to filter by. Defaults to `None`.
+ dataset_id: The ID of the dataset to filter by.
+ dataset_name: The name of the dataset to filter by.
content_key: The inputs key to set as Document page content. `'.'` characters
are interpreted as nested keys. E.g. `content_key="first.second"` will
result in
`Document(page_content=format_content(example.inputs["first"]["second"]))`
format_content: Function for converting the content extracted from the example
inputs into a string. Defaults to JSON-encoding the contents.
- example_ids: The IDs of the examples to filter by. Defaults to `None`.
- as_of: The dataset version tag OR
- timestamp to retrieve the examples as of.
- Response examples will only be those that were present at the time
- of the tagged (or timestamped) version.
+ example_ids: The IDs of the examples to filter by.
+ as_of: The dataset version tag or timestamp to retrieve the examples as of.
+ Response examples will only be those that were present at the time of
+ the tagged (or timestamped) version.
splits: A list of dataset splits, which are
- divisions of your dataset such as 'train', 'test', or 'validation'.
+ divisions of your dataset such as `train`, `test`, or `validation`.
Returns examples only from the specified splits.
- inline_s3_urls: Whether to inline S3 URLs. Defaults to `True`.
- offset: The offset to start from. Defaults to 0.
+ inline_s3_urls: Whether to inline S3 URLs.
+ offset: The offset to start from.
limit: The maximum number of examples to return.
- metadata: Metadata to filter by. Defaults to `None`.
+ metadata: Metadata to filter by.
filter: A structured filter string to apply to the examples.
client: LangSmith Client. If not provided will be initialized from below args.
client_kwargs: Keyword args to pass to LangSmith client init. Should only be
diff --git a/libs/core/langchain_core/documents/__init__.py b/libs/core/langchain_core/documents/__init__.py
index 2bf3f802197..1969f10935d 100644
--- a/libs/core/langchain_core/documents/__init__.py
+++ b/libs/core/langchain_core/documents/__init__.py
@@ -1,7 +1,28 @@
-"""Documents module.
+"""Documents module for data retrieval and processing workflows.
-**Document** module is a collection of classes that handle documents
-and their transformations.
+This module provides core abstractions for handling data in retrieval-augmented
+generation (RAG) pipelines, vector stores, and document processing workflows.
+
+!!! warning "Documents vs. message content"
+ This module is distinct from `langchain_core.messages.content`, which provides
+ multimodal content blocks for **LLM chat I/O** (text, images, audio, etc. within
+ messages).
+
+ **Key distinction:**
+
+ - **Documents** (this module): For **data retrieval and processing workflows**
+ - Vector stores, retrievers, RAG pipelines
+ - Text chunking, embedding, and semantic search
+ - Example: Chunks of a PDF stored in a vector database
+
+ - **Content Blocks** (`messages.content`): For **LLM conversational I/O**
+ - Multimodal message content sent to/from models
+ - Tool calls, reasoning, citations within chat
+ - Example: An image sent to a vision model in a chat message (via
+ [`ImageContentBlock`][langchain.messages.ImageContentBlock])
+
+ While both can represent similar data types (text, files), they serve different
+ architectural purposes in LangChain applications.
"""
from typing import TYPE_CHECKING
diff --git a/libs/core/langchain_core/documents/base.py b/libs/core/langchain_core/documents/base.py
index 8631392ac36..0341c8c184e 100644
--- a/libs/core/langchain_core/documents/base.py
+++ b/libs/core/langchain_core/documents/base.py
@@ -1,4 +1,16 @@
-"""Base classes for media and documents."""
+"""Base classes for media and documents.
+
+This module contains core abstractions for **data retrieval and processing workflows**:
+
+- `BaseMedia`: Base class providing `id` and `metadata` fields
+- `Blob`: Raw data loading (files, binary data) - used by document loaders
+- `Document`: Text content for retrieval (RAG, vector stores, semantic search)
+
+!!! note "Not for LLM chat messages"
+ These classes are for data processing pipelines, not LLM I/O. For multimodal
+ content in chat messages (images, audio in conversations), see
+ `langchain.messages` content blocks instead.
+"""
from __future__ import annotations
@@ -19,27 +31,23 @@ PathLike = str | PurePath
class BaseMedia(Serializable):
- """Use to represent media content.
+ """Base class for content used in retrieval and data processing workflows.
- Media objects can be used to represent raw data, such as text or binary data.
+ Provides common fields for content that needs to be stored, indexed, or searched.
- LangChain Media objects allow associating metadata and an optional identifier
- with the content.
-
- The presence of an ID and metadata make it easier to store, index, and search
- over the content in a structured way.
+ !!! note
+ For multimodal content in **chat messages** (images, audio sent to/from LLMs),
+ use `langchain.messages` content blocks instead.
"""
# The ID field is optional at the moment.
# It will likely become required in a future major release after
- # it has been adopted by enough vectorstore implementations.
+ # it has been adopted by enough VectorStore implementations.
id: str | None = Field(default=None, coerce_numbers_to_str=True)
"""An optional identifier for the document.
Ideally this should be unique across the document collection and formatted
as a UUID, but this will not be enforced.
-
- !!! version-added "Added in version 0.2.11"
"""
metadata: dict = Field(default_factory=dict)
@@ -47,15 +55,14 @@ class BaseMedia(Serializable):
class Blob(BaseMedia):
- """Blob represents raw data by either reference or value.
+ """Raw data abstraction for document loading and file processing.
- Provides an interface to materialize the blob in different representations, and
- help to decouple the development of data loaders from the downstream parsing of
- the raw data.
+ Represents raw bytes or text, either in-memory or by file reference. Used
+ primarily by document loaders to decouple data loading from parsing.
- Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob
+ Inspired by [Mozilla's `Blob`](https://developer.mozilla.org/en-US/docs/Web/API/Blob)
- Example: Initialize a blob from in-memory data
+ ???+ example "Initialize a blob from in-memory data"
```python
from langchain_core.documents import Blob
@@ -73,7 +80,7 @@ class Blob(BaseMedia):
print(f.read())
```
- Example: Load from memory and specify mime-type and metadata
+ ??? example "Load from memory and specify MIME type and metadata"
```python
from langchain_core.documents import Blob
@@ -85,7 +92,7 @@ class Blob(BaseMedia):
)
```
- Example: Load the blob from a file
+ ??? example "Load the blob from a file"
```python
from langchain_core.documents import Blob
@@ -105,13 +112,13 @@ class Blob(BaseMedia):
"""
data: bytes | str | None = None
- """Raw data associated with the blob."""
+ """Raw data associated with the `Blob`."""
mimetype: str | None = None
- """MimeType not to be confused with a file extension."""
+ """MIME type, not to be confused with a file extension."""
encoding: str = "utf-8"
"""Encoding to use if decoding the bytes into a string.
- Use utf-8 as default encoding, if decoding to string.
+ Uses `utf-8` as default encoding if decoding to string.
"""
path: PathLike | None = None
"""Location where the original content was found."""
@@ -125,9 +132,9 @@ class Blob(BaseMedia):
def source(self) -> str | None:
"""The source location of the blob as string if known otherwise none.
- If a path is associated with the blob, it will default to the path location.
+ If a path is associated with the `Blob`, it will default to the path location.
- Unless explicitly set via a metadata field called "source", in which
+ Unless explicitly set via a metadata field called `'source'`, in which
case that value will be used instead.
"""
if self.metadata and "source" in self.metadata:
@@ -211,15 +218,15 @@ class Blob(BaseMedia):
"""Load the blob from a path like object.
Args:
- path: path like object to file to be read
+ path: Path-like object to file to be read
encoding: Encoding to use if decoding the bytes into a string
- mime_type: if provided, will be set as the mime-type of the data
- guess_type: If `True`, the mimetype will be guessed from the file extension,
- if a mime-type was not provided
- metadata: Metadata to associate with the blob
+ mime_type: If provided, will be set as the MIME type of the data
+ guess_type: If `True`, the MIME type will be guessed from the file
+ extension, if a MIME type was not provided
+ metadata: Metadata to associate with the `Blob`
Returns:
- Blob instance
+ `Blob` instance
"""
if mime_type is None and guess_type:
mimetype = mimetypes.guess_type(path)[0] if guess_type else None
@@ -245,17 +252,17 @@ class Blob(BaseMedia):
path: str | None = None,
metadata: dict | None = None,
) -> Blob:
- """Initialize the blob from in-memory data.
+ """Initialize the `Blob` from in-memory data.
Args:
- data: the in-memory data associated with the blob
+ data: The in-memory data associated with the `Blob`
encoding: Encoding to use if decoding the bytes into a string
- mime_type: if provided, will be set as the mime-type of the data
- path: if provided, will be set as the source from which the data came
- metadata: Metadata to associate with the blob
+ mime_type: If provided, will be set as the MIME type of the data
+ path: If provided, will be set as the source from which the data came
+ metadata: Metadata to associate with the `Blob`
Returns:
- Blob instance
+ `Blob` instance
"""
return cls(
data=data,
@@ -276,6 +283,10 @@ class Blob(BaseMedia):
class Document(BaseMedia):
"""Class for storing a piece of text and associated metadata.
+ !!! note
+ `Document` is for **retrieval workflows**, not chat I/O. For sending text
+ to an LLM in a conversation, use message types from `langchain.messages`.
+
Example:
```python
from langchain_core.documents import Document
@@ -298,12 +309,12 @@ class Document(BaseMedia):
@classmethod
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
["langchain", "schema", "document"]
@@ -311,10 +322,10 @@ class Document(BaseMedia):
return ["langchain", "schema", "document"]
def __str__(self) -> str:
- """Override __str__ to restrict it to page_content and metadata.
+ """Override `__str__` to restrict it to page_content and metadata.
Returns:
- A string representation of the Document.
+ A string representation of the `Document`.
"""
# The format matches pydantic format for __str__.
#
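As a hedged illustration of the in-memory route described above (the file name is only a label here, not a real file on disk):

```python
from langchain_core.documents import Blob

blob = Blob.from_data("hello world", mime_type="text/plain", path="inline-note.txt")

print(blob.mimetype)     # text/plain
print(blob.as_string())  # hello world
print(blob.source)       # inline-note.txt (falls back to path when no 'source' metadata)
```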
diff --git a/libs/core/langchain_core/documents/compressor.py b/libs/core/langchain_core/documents/compressor.py
index b18728eb9d3..c765b378bb1 100644
--- a/libs/core/langchain_core/documents/compressor.py
+++ b/libs/core/langchain_core/documents/compressor.py
@@ -21,14 +21,14 @@ class BaseDocumentCompressor(BaseModel, ABC):
This abstraction is primarily used for post-processing of retrieved documents.
- Documents matching a given query are first retrieved.
+ `Document` objects matching a given query are first retrieved.
Then the list of documents can be further processed.
For example, one could re-rank the retrieved documents using an LLM.
!!! note
- Users should favor using a RunnableLambda instead of sub-classing from this
+ Users should favor using a `RunnableLambda` instead of sub-classing from this
interface.
"""
@@ -43,9 +43,9 @@ class BaseDocumentCompressor(BaseModel, ABC):
"""Compress retrieved documents given the query context.
Args:
- documents: The retrieved documents.
+ documents: The retrieved `Document` objects.
query: The query context.
- callbacks: Optional callbacks to run during compression.
+ callbacks: Optional `Callbacks` to run during compression.
Returns:
The compressed documents.
@@ -61,9 +61,9 @@ class BaseDocumentCompressor(BaseModel, ABC):
"""Async compress retrieved documents given the query context.
Args:
- documents: The retrieved documents.
+ documents: The retrieved `Document` objects.
query: The query context.
- callbacks: Optional callbacks to run during compression.
+ callbacks: Optional `Callbacks` to run during compression.
Returns:
The compressed documents.
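The note above recommends a `RunnableLambda` over subclassing; a hedged sketch of what that can look like as a post-retrieval filter (the input shape and filter logic are made up for illustration):

```python
from langchain_core.documents import Document
from langchain_core.runnables import RunnableLambda


def keep_matching(inputs: dict) -> list[Document]:
    """Hypothetical post-retrieval filter: keep docs that mention the query."""
    query, docs = inputs["query"], inputs["documents"]
    return [d for d in docs if query.lower() in d.page_content.lower()]


compressor = RunnableLambda(keep_matching)
docs = [
    Document(page_content="The indexing API deduplicates writes."),
    Document(page_content="Unrelated release notes."),
]
print(compressor.invoke({"query": "indexing", "documents": docs}))
```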
diff --git a/libs/core/langchain_core/documents/transformers.py b/libs/core/langchain_core/documents/transformers.py
index 4cb37470e6b..c05fa29a239 100644
--- a/libs/core/langchain_core/documents/transformers.py
+++ b/libs/core/langchain_core/documents/transformers.py
@@ -16,8 +16,8 @@ if TYPE_CHECKING:
class BaseDocumentTransformer(ABC):
"""Abstract base class for document transformation.
- A document transformation takes a sequence of Documents and returns a
- sequence of transformed Documents.
+ A document transformation takes a sequence of `Document` objects and returns a
+ sequence of transformed `Document` objects.
Example:
```python
@@ -57,10 +57,10 @@ class BaseDocumentTransformer(ABC):
"""Transform a list of documents.
Args:
- documents: A sequence of Documents to be transformed.
+ documents: A sequence of `Document` objects to be transformed.
Returns:
- A sequence of transformed Documents.
+ A sequence of transformed `Document` objects.
"""
async def atransform_documents(
@@ -69,10 +69,10 @@ class BaseDocumentTransformer(ABC):
"""Asynchronously transform a list of documents.
Args:
- documents: A sequence of Documents to be transformed.
+ documents: A sequence of `Document` objects to be transformed.
Returns:
- A sequence of transformed Documents.
+ A sequence of transformed `Document` objects.
"""
return await run_in_executor(
None, self.transform_documents, documents, **kwargs
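A minimal sketch of the synchronous contract (the transformer class is hypothetical; the async variant falls back to an executor by default):

```python
from __future__ import annotations

from collections.abc import Sequence
from typing import Any

from langchain_core.documents import BaseDocumentTransformer, Document


class LowercaseTransformer(BaseDocumentTransformer):
    """Hypothetical transformer that lowercases page content, keeping metadata intact."""

    def transform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        return [
            Document(page_content=doc.page_content.lower(), metadata=doc.metadata)
            for doc in documents
        ]


docs = [Document(page_content="LangChain Document TRANSFORMERS")]
print(LowercaseTransformer().transform_documents(docs)[0].page_content)
```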
diff --git a/libs/core/langchain_core/embeddings/fake.py b/libs/core/langchain_core/embeddings/fake.py
index 885366d2e62..0a252efc194 100644
--- a/libs/core/langchain_core/embeddings/fake.py
+++ b/libs/core/langchain_core/embeddings/fake.py
@@ -18,7 +18,8 @@ class FakeEmbeddings(Embeddings, BaseModel):
This embedding model creates embeddings by sampling from a normal distribution.
- Do not use this outside of testing, as it is not a real embedding model.
+ !!! danger "Toy model"
+ Do not use this outside of testing, as it is not a real embedding model.
Instantiate:
```python
@@ -72,7 +73,8 @@ class DeterministicFakeEmbedding(Embeddings, BaseModel):
This embedding model creates embeddings by sampling from a normal distribution
with a seed based on the hash of the text.
- Do not use this outside of testing, as it is not a real embedding model.
+ !!! danger "Toy model"
+ Do not use this outside of testing, as it is not a real embedding model.
Instantiate:
```python
diff --git a/libs/core/langchain_core/example_selectors/length_based.py b/libs/core/langchain_core/example_selectors/length_based.py
index 296db6c1c60..9424e645bf5 100644
--- a/libs/core/langchain_core/example_selectors/length_based.py
+++ b/libs/core/langchain_core/example_selectors/length_based.py
@@ -29,7 +29,7 @@ class LengthBasedExampleSelector(BaseExampleSelector, BaseModel):
max_length: int = 2048
"""Max length for the prompt, beyond which examples are cut."""
- example_text_lengths: list[int] = Field(default_factory=list) # :meta private:
+ example_text_lengths: list[int] = Field(default_factory=list)
"""Length of each example."""
def add_example(self, example: dict[str, str]) -> None:
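A short sketch of the length-based selection behavior these fields drive (the example data and `max_length` budget are made up):

```python
from langchain_core.example_selectors import LengthBasedExampleSelector
from langchain_core.prompts import PromptTemplate

example_prompt = PromptTemplate.from_template("Input: {input}\nOutput: {output}")

selector = LengthBasedExampleSelector(
    examples=[
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"},
        {"input": "energetic", "output": "lethargic"},
    ],
    example_prompt=example_prompt,
    max_length=12,  # word budget: longer inputs leave room for fewer examples
)

print(selector.select_examples({"input": "sunny"}))
```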
diff --git a/libs/core/langchain_core/example_selectors/semantic_similarity.py b/libs/core/langchain_core/example_selectors/semantic_similarity.py
index 57c3ece590a..1e7491a2eb7 100644
--- a/libs/core/langchain_core/example_selectors/semantic_similarity.py
+++ b/libs/core/langchain_core/example_selectors/semantic_similarity.py
@@ -41,7 +41,7 @@ class _VectorStoreExampleSelector(BaseExampleSelector, BaseModel, ABC):
"""Optional keys to filter input to. If provided, the search is based on
the input variables instead of all variables."""
vectorstore_kwargs: dict[str, Any] | None = None
- """Extra arguments passed to similarity_search function of the vectorstore."""
+ """Extra arguments passed to similarity_search function of the `VectorStore`."""
model_config = ConfigDict(
arbitrary_types_allowed=True,
@@ -154,12 +154,12 @@ class SemanticSimilarityExampleSelector(_VectorStoreExampleSelector):
examples: List of examples to use in the prompt.
embeddings: An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls: A vector store DB interface class, e.g. FAISS.
- k: Number of examples to select. Default is 4.
+ k: Number of examples to select.
input_keys: If provided, the search is based on the input variables
instead of all variables.
example_keys: If provided, keys to filter examples to.
vectorstore_kwargs: Extra arguments passed to similarity_search function
- of the vectorstore.
+ of the `VectorStore`.
vectorstore_cls_kwargs: optional kwargs containing url for vector store
Returns:
@@ -198,12 +198,12 @@ class SemanticSimilarityExampleSelector(_VectorStoreExampleSelector):
examples: List of examples to use in the prompt.
embeddings: An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls: A vector store DB interface class, e.g. FAISS.
- k: Number of examples to select. Default is 4.
+ k: Number of examples to select.
input_keys: If provided, the search is based on the input variables
instead of all variables.
example_keys: If provided, keys to filter examples to.
vectorstore_kwargs: Extra arguments passed to similarity_search function
- of the vectorstore.
+ of the `VectorStore`.
vectorstore_cls_kwargs: optional kwargs containing url for vector store
Returns:
@@ -285,14 +285,13 @@ class MaxMarginalRelevanceExampleSelector(_VectorStoreExampleSelector):
examples: List of examples to use in the prompt.
embeddings: An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls: A vector store DB interface class, e.g. FAISS.
- k: Number of examples to select. Default is 4.
- fetch_k: Number of Documents to fetch to pass to MMR algorithm.
- Default is 20.
+ k: Number of examples to select.
+ fetch_k: Number of `Document` objects to fetch and pass to the MMR algorithm.
input_keys: If provided, the search is based on the input variables
instead of all variables.
example_keys: If provided, keys to filter examples to.
vectorstore_kwargs: Extra arguments passed to similarity_search function
- of the vectorstore.
+ of the `VectorStore`.
vectorstore_cls_kwargs: optional kwargs containing url for vector store
Returns:
@@ -333,14 +332,13 @@ class MaxMarginalRelevanceExampleSelector(_VectorStoreExampleSelector):
examples: List of examples to use in the prompt.
embeddings: An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls: A vector store DB interface class, e.g. FAISS.
- k: Number of examples to select. Default is 4.
- fetch_k: Number of Documents to fetch to pass to MMR algorithm.
- Default is 20.
+ k: Number of examples to select.
+ fetch_k: Number of `Document` objects to fetch and pass to the MMR algorithm.
input_keys: If provided, the search is based on the input variables
instead of all variables.
example_keys: If provided, keys to filter examples to.
vectorstore_kwargs: Extra arguments passed to similarity_search function
- of the vectorstore.
+ of the `VectorStore`.
vectorstore_cls_kwargs: optional kwargs containing url for vector store
Returns:
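A hedged end-to-end sketch of `from_examples` using only in-process components; the deterministic fake embedding and `InMemoryVectorStore` stand in for a real embedding model and vector store:

```python
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_core.vectorstores import InMemoryVectorStore

selector = SemanticSimilarityExampleSelector.from_examples(
    examples=[
        {"input": "happy", "output": "sad"},
        {"input": "sunny", "output": "rainy"},
    ],
    embeddings=DeterministicFakeEmbedding(size=16),  # stand-in for a real embedding model
    vectorstore_cls=InMemoryVectorStore,
    k=1,
)

print(selector.select_examples({"input": "cheerful"}))
```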
diff --git a/libs/core/langchain_core/exceptions.py b/libs/core/langchain_core/exceptions.py
index 375d2141f83..a1e8a5cb91a 100644
--- a/libs/core/langchain_core/exceptions.py
+++ b/libs/core/langchain_core/exceptions.py
@@ -16,9 +16,10 @@ class OutputParserException(ValueError, LangChainException): # noqa: N818
"""Exception that output parsers should raise to signify a parsing error.
This exists to differentiate parsing errors from other code or execution errors
- that also may arise inside the output parser. OutputParserExceptions will be
- available to catch and handle in ways to fix the parsing error, while other
- errors will be raised.
+ that also may arise inside the output parser.
+
+ `OutputParserException` will be available to catch and handle in ways to fix the
+ parsing error, while other errors will be raised.
"""
def __init__(
@@ -28,23 +29,23 @@ class OutputParserException(ValueError, LangChainException): # noqa: N818
llm_output: str | None = None,
send_to_llm: bool = False, # noqa: FBT001,FBT002
):
- """Create an OutputParserException.
+ """Create an `OutputParserException`.
Args:
error: The error that's being re-raised or an error message.
- observation: String explanation of error which can be passed to a
- model to try and remediate the issue. Defaults to `None`.
+ observation: String explanation of the error which can be passed to a model
+ to try to remediate the issue.
llm_output: String model output which is error-ing.
- Defaults to `None`.
+
send_to_llm: Whether to send the observation and llm_output back to an Agent
- after an OutputParserException has been raised.
+ after an `OutputParserException` has been raised.
+
This gives the underlying model driving the agent the context that the
previous output was improperly structured, in the hopes that it will
update the output to the correct format.
- Defaults to `False`.
Raises:
- ValueError: If `send_to_llm` is True but either observation or
+ ValueError: If `send_to_llm` is `True` but either `observation` or
`llm_output` are not provided.
"""
if isinstance(error, str):
@@ -67,11 +68,11 @@ class ErrorCode(Enum):
"""Error codes."""
INVALID_PROMPT_INPUT = "INVALID_PROMPT_INPUT"
- INVALID_TOOL_RESULTS = "INVALID_TOOL_RESULTS"
+ INVALID_TOOL_RESULTS = "INVALID_TOOL_RESULTS" # Used in JS; not Py (yet)
MESSAGE_COERCION_FAILURE = "MESSAGE_COERCION_FAILURE"
- MODEL_AUTHENTICATION = "MODEL_AUTHENTICATION"
- MODEL_NOT_FOUND = "MODEL_NOT_FOUND"
- MODEL_RATE_LIMIT = "MODEL_RATE_LIMIT"
+ MODEL_AUTHENTICATION = "MODEL_AUTHENTICATION" # Used in JS; not Py (yet)
+ MODEL_NOT_FOUND = "MODEL_NOT_FOUND" # Used in JS; not Py (yet)
+ MODEL_RATE_LIMIT = "MODEL_RATE_LIMIT" # Used in JS; not Py (yet)
OUTPUT_PARSING_FAILURE = "OUTPUT_PARSING_FAILURE"
@@ -87,6 +88,6 @@ def create_message(*, message: str, error_code: ErrorCode) -> str:
"""
return (
f"{message}\n"
- "For troubleshooting, visit: https://python.langchain.com/docs/"
- f"troubleshooting/errors/{error_code.value} "
+ "For troubleshooting, visit: https://docs.langchain.com/oss/python/langchain"
+ f"/errors/{error_code.value} "
)
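A small sketch of the catch-and-recover pattern the docstring describes (the parser function is hypothetical):

```python
from langchain_core.exceptions import OutputParserException


def parse_number(text: str) -> int:
    """Hypothetical parser: raises OutputParserException on malformed model output."""
    try:
        return int(text.strip())
    except ValueError as exc:
        raise OutputParserException(
            f"Expected an integer, got: {text!r}",
            llm_output=text,
        ) from exc


try:
    parse_number("forty-two")
except OutputParserException as exc:
    # Parsing failures can be caught and retried/repaired separately from other errors.
    print(f"Recoverable parse error: {exc}")
```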
diff --git a/libs/core/langchain_core/indexing/__init__.py b/libs/core/langchain_core/indexing/__init__.py
index aea7b9fadf5..ceb25d2d074 100644
--- a/libs/core/langchain_core/indexing/__init__.py
+++ b/libs/core/langchain_core/indexing/__init__.py
@@ -1,7 +1,7 @@
"""Code to help indexing data into a vectorstore.
This package contains helper logic to help deal with indexing data into
-a vectorstore while avoiding duplicated content and over-writing content
+a `VectorStore` while avoiding duplicated content and over-writing content
if it's unchanged.
"""
diff --git a/libs/core/langchain_core/indexing/api.py b/libs/core/langchain_core/indexing/api.py
index 4ef3776d123..de42a4f4f8e 100644
--- a/libs/core/langchain_core/indexing/api.py
+++ b/libs/core/langchain_core/indexing/api.py
@@ -298,61 +298,58 @@ def index(
For the time being, documents are indexed using their hashes, and users
are not able to specify the uid of the document.
- !!! warning "Behavior changed in 0.3.25"
+ !!! warning "Behavior changed in `langchain-core` 0.3.25"
Added `scoped_full` cleanup mode.
!!! warning
* In full mode, the loader should be returning
- the entire dataset, and not just a subset of the dataset.
- Otherwise, the auto_cleanup will remove documents that it is not
- supposed to.
+ the entire dataset, and not just a subset of the dataset.
+ Otherwise, the auto_cleanup will remove documents that it is not
+ supposed to.
* In incremental mode, if documents associated with a particular
- source id appear across different batches, the indexing API
- will do some redundant work. This will still result in the
- correct end state of the index, but will unfortunately not be
- 100% efficient. For example, if a given document is split into 15
- chunks, and we index them using a batch size of 5, we'll have 3 batches
- all with the same source id. In general, to avoid doing too much
- redundant work select as big a batch size as possible.
+ source id appear across different batches, the indexing API
+ will do some redundant work. This will still result in the
+ correct end state of the index, but will unfortunately not be
+ 100% efficient. For example, if a given document is split into 15
+ chunks, and we index them using a batch size of 5, we'll have 3 batches
+ all with the same source id. In general, to avoid doing too much
+ redundant work select as big a batch size as possible.
* The `scoped_full` mode is suitable if determining an appropriate batch size
- is challenging or if your data loader cannot return the entire dataset at
- once. This mode keeps track of source IDs in memory, which should be fine
- for most use cases. If your dataset is large (10M+ docs), you will likely
- need to parallelize the indexing process regardless.
+ is challenging or if your data loader cannot return the entire dataset at
+ once. This mode keeps track of source IDs in memory, which should be fine
+ for most use cases. If your dataset is large (10M+ docs), you will likely
+ need to parallelize the indexing process regardless.
Args:
docs_source: Data loader or iterable of documents to index.
record_manager: Timestamped set to keep track of which documents were
updated.
- vector_store: VectorStore or DocumentIndex to index the documents into.
- batch_size: Batch size to use when indexing. Default is 100.
- cleanup: How to handle clean up of documents. Default is None.
+ vector_store: `VectorStore` or `DocumentIndex` to index the documents into.
+ batch_size: Batch size to use when indexing.
+ cleanup: How to handle clean up of documents.
- incremental: Cleans up all documents that haven't been updated AND
- that are associated with source ids that were seen during indexing.
- Clean up is done continuously during indexing helping to minimize the
- probability of users seeing duplicated content.
+ that are associated with source IDs that were seen during indexing.
+ Clean up is done continuously during indexing helping to minimize the
+ probability of users seeing duplicated content.
- full: Delete all documents that have not been returned by the loader
- during this run of indexing.
- Clean up runs after all documents have been indexed.
- This means that users may see duplicated content during indexing.
+ during this run of indexing.
+ Clean up runs after all documents have been indexed.
+ This means that users may see duplicated content during indexing.
- scoped_full: Similar to Full, but only deletes all documents
- that haven't been updated AND that are associated with
- source ids that were seen during indexing.
+ that haven't been updated AND that are associated with
+ source IDs that were seen during indexing.
- None: Do not delete any documents.
source_id_key: Optional key that helps identify the original source
- of the document. Default is None.
+ of the document.
cleanup_batch_size: Batch size to use when cleaning up documents.
- Default is 1_000.
force_update: Force update documents even if they are present in the
record manager. Useful if you are re-indexing with updated embeddings.
- Default is False.
key_encoder: Hashing algorithm to use for hashing the document content and
- metadata. Default is "sha1".
- Other options include "blake2b", "sha256", and "sha512".
+ metadata. Options include "blake2b", "sha256", and "sha512".
- !!! version-added "Added in version 0.3.66"
+ !!! version-added "Added in `langchain-core` 0.3.66"
key_encoder: Hashing algorithm to use for hashing the document.
If not provided, a default encoder using SHA-1 will be used.
@@ -366,10 +363,10 @@ def index(
When changing the key encoder, you must change the
index as well to avoid duplicated documents in the cache.
upsert_kwargs: Additional keyword arguments to pass to the add_documents
- method of the VectorStore or the upsert method of the DocumentIndex.
+ method of the `VectorStore` or the `upsert` method of the `DocumentIndex`.
For example, you can use this to specify a custom vector_field:
upsert_kwargs={"vector_field": "embedding"}
- !!! version-added "Added in version 0.3.10"
+ !!! version-added "Added in `langchain-core` 0.3.10"
Returns:
Indexing result which contains information about how many documents
@@ -378,10 +375,10 @@ def index(
Raises:
ValueError: If cleanup mode is not one of 'incremental', 'full' or None
ValueError: If cleanup mode is incremental and source_id_key is None.
- ValueError: If vectorstore does not have
+ ValueError: If `VectorStore` does not have
"delete" and "add_documents" required methods.
ValueError: If source_id_key is not None, but is not a string or callable.
- TypeError: If `vectorstore` is not a VectorStore or a DocumentIndex.
+ TypeError: If `vectorstore` is not a `VectorStore` or a `DocumentIndex`.
AssertionError: If `source_id` is None when cleanup mode is incremental.
(should be unreachable code).
"""
@@ -418,7 +415,7 @@ def index(
raise ValueError(msg)
if type(destination).delete == VectorStore.delete:
- # Checking if the vectorstore has overridden the default delete method
+ # Checking if the VectorStore has overridden the default delete method
# implementation which just raises a NotImplementedError
msg = "Vectorstore has not implemented the delete method"
raise ValueError(msg)
@@ -469,11 +466,11 @@ def index(
]
if cleanup in {"incremental", "scoped_full"}:
- # source ids are required.
+ # Source IDs are required.
for source_id, hashed_doc in zip(source_ids, hashed_docs, strict=False):
if source_id is None:
msg = (
- f"Source ids are required when cleanup mode is "
+ f"Source IDs are required when cleanup mode is "
f"incremental or scoped_full. "
f"Document that starts with "
f"content: {hashed_doc.page_content[:100]} "
@@ -482,7 +479,7 @@ def index(
raise ValueError(msg)
if cleanup == "scoped_full":
scoped_full_cleanup_source_ids.add(source_id)
- # source ids cannot be None after for loop above.
+ # Source IDs cannot be None after for loop above.
source_ids = cast("Sequence[str]", source_ids)
exists_batch = record_manager.exists(
@@ -541,7 +538,7 @@ def index(
# If source IDs are provided, we can do the deletion incrementally!
if cleanup == "incremental":
# Get the uids of the documents that were not returned by the loader.
- # mypy isn't good enough to determine that source ids cannot be None
+ # mypy isn't good enough to determine that source IDs cannot be None
# here due to a check that's happening above, so we check again.
for source_id in source_ids:
if source_id is None:
@@ -639,61 +636,58 @@ async def aindex(
For the time being, documents are indexed using their hashes, and users
are not able to specify the uid of the document.
- !!! warning "Behavior changed in 0.3.25"
+ !!! warning "Behavior changed in `langchain-core` 0.3.25"
Added `scoped_full` cleanup mode.
!!! warning
* In full mode, the loader should be returning
- the entire dataset, and not just a subset of the dataset.
- Otherwise, the auto_cleanup will remove documents that it is not
- supposed to.
+ the entire dataset, and not just a subset of the dataset.
+ Otherwise, the auto_cleanup will remove documents that it is not
+ supposed to.
* In incremental mode, if documents associated with a particular
- source id appear across different batches, the indexing API
- will do some redundant work. This will still result in the
- correct end state of the index, but will unfortunately not be
- 100% efficient. For example, if a given document is split into 15
- chunks, and we index them using a batch size of 5, we'll have 3 batches
- all with the same source id. In general, to avoid doing too much
- redundant work select as big a batch size as possible.
+ source id appear across different batches, the indexing API
+ will do some redundant work. This will still result in the
+ correct end state of the index, but will unfortunately not be
+ 100% efficient. For example, if a given document is split into 15
+ chunks, and we index them using a batch size of 5, we'll have 3 batches
+ all with the same source id. In general, to avoid doing too much
+ redundant work select as big a batch size as possible.
* The `scoped_full` mode is suitable if determining an appropriate batch size
- is challenging or if your data loader cannot return the entire dataset at
- once. This mode keeps track of source IDs in memory, which should be fine
- for most use cases. If your dataset is large (10M+ docs), you will likely
- need to parallelize the indexing process regardless.
+ is challenging or if your data loader cannot return the entire dataset at
+ once. This mode keeps track of source IDs in memory, which should be fine
+ for most use cases. If your dataset is large (10M+ docs), you will likely
+ need to parallelize the indexing process regardless.
Args:
docs_source: Data loader or iterable of documents to index.
record_manager: Timestamped set to keep track of which documents were
updated.
- vector_store: VectorStore or DocumentIndex to index the documents into.
- batch_size: Batch size to use when indexing. Default is 100.
- cleanup: How to handle clean up of documents. Default is None.
+ vector_store: `VectorStore` or `DocumentIndex` to index the documents into.
+ batch_size: Batch size to use when indexing.
+ cleanup: How to handle clean up of documents.
- incremental: Cleans up all documents that haven't been updated AND
- that are associated with source ids that were seen during indexing.
- Clean up is done continuously during indexing helping to minimize the
- probability of users seeing duplicated content.
+ that are associated with source IDs that were seen during indexing.
+ Clean up is done continuously during indexing helping to minimize the
+ probability of users seeing duplicated content.
- full: Delete all documents that have not been returned by the loader
- during this run of indexing.
- Clean up runs after all documents have been indexed.
- This means that users may see duplicated content during indexing.
+ during this run of indexing.
+ Clean up runs after all documents have been indexed.
+ This means that users may see duplicated content during indexing.
- scoped_full: Similar to Full, but only deletes all documents
- that haven't been updated AND that are associated with
- source ids that were seen during indexing.
+ that haven't been updated AND that are associated with
+ source IDs that were seen during indexing.
- None: Do not delete any documents.
source_id_key: Optional key that helps identify the original source
- of the document. Default is None.
+ of the document.
cleanup_batch_size: Batch size to use when cleaning up documents.
- Default is 1_000.
force_update: Force update documents even if they are present in the
record manager. Useful if you are re-indexing with updated embeddings.
- Default is False.
key_encoder: Hashing algorithm to use for hashing the document content and
- metadata. Default is "sha1".
- Other options include "blake2b", "sha256", and "sha512".
+ metadata. Options include "blake2b", "sha256", and "sha512".
- !!! version-added "Added in version 0.3.66"
+ !!! version-added "Added in `langchain-core` 0.3.66"
key_encoder: Hashing algorithm to use for hashing the document.
If not provided, a default encoder using SHA-1 will be used.
@@ -707,10 +701,10 @@ async def aindex(
When changing the key encoder, you must change the
index as well to avoid duplicated documents in the cache.
upsert_kwargs: Additional keyword arguments to pass to the add_documents
- method of the VectorStore or the upsert method of the DocumentIndex.
+ method of the `VectorStore` or the `upsert` method of the `DocumentIndex`.
For example, you can use this to specify a custom vector_field:
upsert_kwargs={"vector_field": "embedding"}
- !!! version-added "Added in version 0.3.10"
+ !!! version-added "Added in `langchain-core` 0.3.10"
Returns:
Indexing result which contains information about how many documents
@@ -719,10 +713,10 @@ async def aindex(
Raises:
ValueError: If cleanup mode is not one of 'incremental', 'full' or None
ValueError: If cleanup mode is incremental and source_id_key is None.
- ValueError: If vectorstore does not have
+ ValueError: If `VectorStore` does not have
"adelete" and "aadd_documents" required methods.
ValueError: If source_id_key is not None, but is not a string or callable.
- TypeError: If `vector_store` is not a VectorStore or DocumentIndex.
+ TypeError: If `vector_store` is not a `VectorStore` or `DocumentIndex`.
AssertionError: If `source_id_key` is None when cleanup mode is
incremental or `scoped_full` (should be unreachable).
"""
@@ -763,7 +757,7 @@ async def aindex(
type(destination).adelete == VectorStore.adelete
and type(destination).delete == VectorStore.delete
):
- # Checking if the vectorstore has overridden the default adelete or delete
+ # Checking if the VectorStore has overridden the default adelete or delete
# methods implementation which just raises a NotImplementedError
msg = "Vectorstore has not implemented the adelete or delete method"
raise ValueError(msg)
@@ -821,11 +815,11 @@ async def aindex(
]
if cleanup in {"incremental", "scoped_full"}:
- # If the cleanup mode is incremental, source ids are required.
+ # If the cleanup mode is incremental, source IDs are required.
for source_id, hashed_doc in zip(source_ids, hashed_docs, strict=False):
if source_id is None:
msg = (
- f"Source ids are required when cleanup mode is "
+ f"Source IDs are required when cleanup mode is "
f"incremental or scoped_full. "
f"Document that starts with "
f"content: {hashed_doc.page_content[:100]} "
@@ -834,7 +828,7 @@ async def aindex(
raise ValueError(msg)
if cleanup == "scoped_full":
scoped_full_cleanup_source_ids.add(source_id)
- # source ids cannot be None after for loop above.
+ # Source IDs cannot be None after for loop above.
source_ids = cast("Sequence[str]", source_ids)
exists_batch = await record_manager.aexists(
@@ -894,7 +888,7 @@ async def aindex(
if cleanup == "incremental":
# Get the uids of the documents that were not returned by the loader.
- # mypy isn't good enough to determine that source ids cannot be None
+ # mypy isn't good enough to determine that source IDs cannot be None
# here due to a check that's happening above, so we check again.
for source_id in source_ids:
if source_id is None:
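A minimal sketch of the synchronous indexing flow under `incremental` cleanup, using the in-memory record manager and vector store (the fake embedding and the `notes.txt` source label are illustrative):

```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.indexing import InMemoryRecordManager, index
from langchain_core.vectorstores import InMemoryVectorStore

record_manager = InMemoryRecordManager(namespace="demo")
record_manager.create_schema()
vector_store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=16))

docs = [
    Document(
        page_content="The indexing API skips re-embedding unchanged documents.",
        metadata={"source": "notes.txt"},  # source_id_key points at this field
    ),
]

result = index(
    docs,
    record_manager,
    vector_store,
    cleanup="incremental",
    source_id_key="source",
)
print(result)  # e.g. {'num_added': 1, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}

# Re-running with identical content is a no-op: the record manager's hashes match.
print(index(docs, record_manager, vector_store, cleanup="incremental", source_id_key="source"))
```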
diff --git a/libs/core/langchain_core/indexing/base.py b/libs/core/langchain_core/indexing/base.py
index a6f85dd7494..d8a891ddf9e 100644
--- a/libs/core/langchain_core/indexing/base.py
+++ b/libs/core/langchain_core/indexing/base.py
@@ -25,7 +25,7 @@ class RecordManager(ABC):
The record manager abstraction is used by the langchain indexing API.
The record manager keeps track of which documents have been
- written into a vectorstore and when they were written.
+ written into a `VectorStore` and when they were written.
The indexing API computes hashes for each document and stores the hash
together with the write time and the source id in the record manager.
@@ -37,7 +37,7 @@ class RecordManager(ABC):
already been indexed, and to only index new documents.
The main benefit of this abstraction is that it works across many vectorstores.
- To be supported, a vectorstore needs to only support the ability to add and
+ To be supported, a `VectorStore` needs to only support the ability to add and
delete documents by ID. Using the record manager, the indexing API will
be able to delete outdated documents and avoid redundant indexing of documents
that have already been indexed.
@@ -45,13 +45,13 @@ class RecordManager(ABC):
The main constraints of this abstraction are:
1. It relies on the time-stamps to determine which documents have been
- indexed and which have not. This means that the time-stamps must be
- monotonically increasing. The timestamp should be the timestamp
- as measured by the server to minimize issues.
+ indexed and which have not. This means that the time-stamps must be
+ monotonically increasing. The timestamp should be the timestamp
+ as measured by the server to minimize issues.
2. The record manager is currently implemented separately from the
- vectorstore, which means that the overall system becomes distributed
- and may create issues with consistency. For example, writing to
- record manager succeeds, but corresponding writing to vectorstore fails.
+ vectorstore, which means that the overall system becomes distributed
+ and may create issues with consistency. For example, writing to
+ record manager succeeds, but corresponding writing to `VectorStore` fails.
"""
def __init__(
@@ -278,10 +278,10 @@ class InMemoryRecordManager(RecordManager):
Args:
keys: A list of record keys to upsert.
group_ids: A list of group IDs corresponding to the keys.
- Defaults to `None`.
+
time_at_least: Optional timestamp. Implementation can use this
to optionally verify that the timestamp IS at least this time
- in the system that stores. Defaults to `None`.
+ in the system that stores.
E.g., use to validate that the time in the postgres database
is equal to or larger than the given timestamp, if not
raise an error.
@@ -315,10 +315,10 @@ class InMemoryRecordManager(RecordManager):
Args:
keys: A list of record keys to upsert.
group_ids: A list of group IDs corresponding to the keys.
- Defaults to `None`.
+
time_at_least: Optional timestamp. Implementation can use this
to optionally verify that the timestamp IS at least this time
- in the system that stores. Defaults to `None`.
+ in the system that stores.
E.g., use to validate that the time in the postgres database
is equal to or larger than the given timestamp, if not
raise an error.
@@ -361,13 +361,13 @@ class InMemoryRecordManager(RecordManager):
Args:
before: Filter to list records updated before this time.
- Defaults to `None`.
+
after: Filter to list records updated after this time.
- Defaults to `None`.
+
group_ids: Filter to list records with specific group IDs.
- Defaults to `None`.
+
limit: optional limit on the number of records to return.
- Defaults to `None`.
+
Returns:
A list of keys for the matching records.
@@ -397,13 +397,13 @@ class InMemoryRecordManager(RecordManager):
Args:
before: Filter to list records updated before this time.
- Defaults to `None`.
+
after: Filter to list records updated after this time.
- Defaults to `None`.
+
group_ids: Filter to list records with specific group IDs.
- Defaults to `None`.
+
limit: optional limit on the number of records to return.
- Defaults to `None`.
+
Returns:
A list of keys for the matching records.
@@ -460,7 +460,7 @@ class UpsertResponse(TypedDict):
class DeleteResponse(TypedDict, total=False):
"""A generic response for delete operation.
- The fields in this response are optional and whether the vectorstore
+ The fields in this response are optional and whether the `VectorStore`
returns them or not is up to the implementation.
"""
@@ -508,8 +508,6 @@ class DocumentIndex(BaseRetriever):
1. Storing document in the index.
2. Fetching document by ID.
3. Searching for document using a query.
-
- !!! version-added "Added in version 0.2.29"
"""
@abc.abstractmethod
@@ -520,40 +518,40 @@ class DocumentIndex(BaseRetriever):
if it is provided. If the ID is not provided, the upsert method is free
to generate an ID for the content.
- When an ID is specified and the content already exists in the vectorstore,
+ When an ID is specified and the content already exists in the `VectorStore`,
the upsert method should update the content with the new data. If the content
- does not exist, the upsert method should add the item to the vectorstore.
+ does not exist, the upsert method should add the item to the `VectorStore`.
Args:
- items: Sequence of documents to add to the vectorstore.
+ items: Sequence of documents to add to the `VectorStore`.
**kwargs: Additional keyword arguments.
Returns:
A response object that contains the list of IDs that were
- successfully added or updated in the vectorstore and the list of IDs that
+ successfully added or updated in the `VectorStore` and the list of IDs that
failed to be added or updated.
"""
async def aupsert(
self, items: Sequence[Document], /, **kwargs: Any
) -> UpsertResponse:
- """Add or update documents in the vectorstore. Async version of upsert.
+ """Add or update documents in the `VectorStore`. Async version of `upsert`.
The upsert functionality should utilize the ID field of the item
if it is provided. If the ID is not provided, the upsert method is free
to generate an ID for the item.
- When an ID is specified and the item already exists in the vectorstore,
+ When an ID is specified and the item already exists in the `VectorStore`,
the upsert method should update the item with the new data. If the item
- does not exist, the upsert method should add the item to the vectorstore.
+ does not exist, the upsert method should add the item to the `VectorStore`.
Args:
- items: Sequence of documents to add to the vectorstore.
+ items: Sequence of documents to add to the `VectorStore`.
**kwargs: Additional keyword arguments.
Returns:
A response object that contains the list of IDs that were
- successfully added or updated in the vectorstore and the list of IDs that
+ successfully added or updated in the `VectorStore` and the list of IDs that
failed to be added or updated.
"""
return await run_in_executor(
@@ -570,7 +568,7 @@ class DocumentIndex(BaseRetriever):
Calling delete without any input parameters should raise a ValueError!
Args:
- ids: List of ids to delete.
+ ids: List of IDs to delete.
**kwargs: Additional keyword arguments. This is up to the implementation.
For example, can include an option to delete the entire index,
or else issue a non-blocking delete etc.
@@ -588,7 +586,7 @@ class DocumentIndex(BaseRetriever):
Calling adelete without any input parameters should raise a ValueError!
Args:
- ids: List of ids to delete.
+ ids: List of IDs to delete.
**kwargs: Additional keyword arguments. This is up to the implementation.
For example, can include an option to delete the entire index.
diff --git a/libs/core/langchain_core/indexing/in_memory.py b/libs/core/langchain_core/indexing/in_memory.py
index 1ef40f9947e..ae9cf84088d 100644
--- a/libs/core/langchain_core/indexing/in_memory.py
+++ b/libs/core/langchain_core/indexing/in_memory.py
@@ -23,8 +23,6 @@ class InMemoryDocumentIndex(DocumentIndex):
It provides a simple search API that returns documents by the number of
counts the given query appears in the document.
-
- !!! version-added "Added in version 0.2.29"
"""
store: dict[str, Document] = Field(default_factory=dict)
@@ -64,10 +62,10 @@ class InMemoryDocumentIndex(DocumentIndex):
"""Delete by IDs.
Args:
- ids: List of ids to delete.
+ ids: List of IDs to delete.
Raises:
- ValueError: If ids is None.
+        ValueError: If `ids` is `None`.
Returns:
A response object that contains the list of IDs that were successfully
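
The `DocumentIndex` contract above (upsert by ID, delete by ID, search through the standard retriever interface) is easiest to see with this in-memory reference implementation. A minimal sketch, assuming the module path shown in the diff; the sample documents are invented:

```python
from langchain_core.documents import Document
from langchain_core.indexing.in_memory import InMemoryDocumentIndex

index = InMemoryDocumentIndex()

# Upsert: documents that carry an explicit `id` are overwritten on a later call.
response = index.upsert(
    [
        Document(id="1", page_content="cats like fish"),
        Document(id="2", page_content="dogs like sticks"),
    ]
)
print(response["succeeded"])  # ['1', '2']

# Search goes through the standard retriever interface; this reference
# implementation ranks documents by how often the query string occurs in them.
print(index.invoke("fish")[0].page_content)  # 'cats like fish'

# Delete by ID; calling delete() with no IDs raises a ValueError per the contract.
index.delete(["1"])
```
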
diff --git a/libs/core/langchain_core/language_models/__init__.py b/libs/core/langchain_core/language_models/__init__.py
index 6e2770ecf10..625543c830f 100644
--- a/libs/core/langchain_core/language_models/__init__.py
+++ b/libs/core/langchain_core/language_models/__init__.py
@@ -1,43 +1,30 @@
"""Language models.
-**Language Model** is a type of model that can generate text or complete
-text prompts.
+LangChain has two main classes to work with language models: chat models and
+"old-fashioned" LLMs.
-LangChain has two main classes to work with language models: **Chat Models**
-and "old-fashioned" **LLMs**.
-
-**Chat Models**
+**Chat models**
Language models that use a sequence of messages as inputs and return chat messages
-as outputs (as opposed to using plain text). These are traditionally newer models (
-older models are generally LLMs, see below). Chat models support the assignment of
-distinct roles to conversation messages, helping to distinguish messages from the AI,
-users, and instructions such as system messages.
+as outputs (as opposed to using plain text).
-The key abstraction for chat models is `BaseChatModel`. Implementations
-should inherit from this class. Please see LangChain how-to guides with more
-information on how to implement a custom chat model.
+Chat models support the assignment of distinct roles to conversation messages, helping
+to distinguish messages from the AI, users, and instructions such as system messages.
-To implement a custom Chat Model, inherit from `BaseChatModel`. See
-the following guide for more information on how to implement a custom Chat Model:
+The key abstraction for chat models is `BaseChatModel`. Implementations should inherit
+from this class.
-https://python.langchain.com/docs/how_to/custom_chat_model/
+See existing [chat model integrations](https://docs.langchain.com/oss/python/integrations/chat).
**LLMs**
Language models that takes a string as input and returns a string.
-These are traditionally older models (newer models generally are Chat Models,
-see below).
+These are traditionally older models (newer models generally are chat models).
-Although the underlying models are string in, string out, the LangChain wrappers
-also allow these models to take messages as input. This gives them the same interface
-as Chat Models. When messages are passed in as input, they will be formatted into a
-string under the hood before being passed to the underlying model.
-
-To implement a custom LLM, inherit from `BaseLLM` or `LLM`.
-Please see the following guide for more information on how to implement a custom LLM:
-
-https://python.langchain.com/docs/how_to/custom_llm/
+Although the underlying models are string in, string out, the LangChain wrappers also
+allow these models to take messages as input. This gives them the same interface as
+chat models. When messages are passed in as input, they will be formatted into a string
+under the hood before being passed to the underlying model.
"""
from typing import TYPE_CHECKING
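
The split described in this module docstring (messages in and `AIMessage` out for chat models, string in and string out for LLMs, with the LLM wrappers also accepting messages) can be illustrated with the fake models that ship in `langchain_core`. A minimal sketch, assuming the fake classes are exported from `langchain_core.language_models` as in recent releases:

```python
from langchain_core.language_models import FakeListChatModel, FakeListLLM
from langchain_core.messages import HumanMessage

chat_model = FakeListChatModel(responses=["Hello from the chat model"])
llm = FakeListLLM(responses=["Hello from the LLM"])

# Chat model: a sequence of messages in, an AIMessage out.
ai_message = chat_model.invoke([HumanMessage("Hi there")])
print(type(ai_message).__name__, "->", ai_message.content)

# LLM: string in, string out...
print(llm.invoke("Hi there"))

# ...but the wrapper also accepts messages, which are formatted to a single
# string before being passed to the underlying model.
print(llm.invoke([HumanMessage("Hi there")]))
```
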
diff --git a/libs/core/langchain_core/language_models/_utils.py b/libs/core/langchain_core/language_models/_utils.py
index 48141b54812..d777425dde6 100644
--- a/libs/core/langchain_core/language_models/_utils.py
+++ b/libs/core/langchain_core/language_models/_utils.py
@@ -35,7 +35,7 @@ def is_openai_data_block(
different type, this function will return False.
Returns:
- True if the block is a valid OpenAI data block and matches the filter_
+ `True` if the block is a valid OpenAI data block and matches the filter_
(if provided).
"""
@@ -89,7 +89,8 @@ class ParsedDataUri(TypedDict):
def _parse_data_uri(uri: str) -> ParsedDataUri | None:
"""Parse a data URI into its components.
- If parsing fails, return None. If either MIME type or data is missing, return None.
+ If parsing fails, return `None`. If either MIME type or data is missing, return
+ `None`.
Example:
```python
@@ -138,7 +139,7 @@ def _normalize_messages(
directly; this may change in the future
- LangChain v0 standard content blocks for backward compatibility
- !!! warning "Behavior changed in 1.0.0"
+ !!! warning "Behavior changed in `langchain-core` 1.0.0"
In previous versions, this function returned messages in LangChain v0 format.
Now, it returns messages in LangChain v1 format, which upgraded chat models now
expect to receive when passing back in message history. For backward
diff --git a/libs/core/langchain_core/language_models/base.py b/libs/core/langchain_core/language_models/base.py
index 726c7aee6b0..58c6d4abf7e 100644
--- a/libs/core/langchain_core/language_models/base.py
+++ b/libs/core/langchain_core/language_models/base.py
@@ -96,9 +96,16 @@ def _get_token_ids_default_method(text: str) -> list[int]:
LanguageModelInput = PromptValue | str | Sequence[MessageLikeRepresentation]
+"""Input to a language model."""
+
LanguageModelOutput = BaseMessage | str
+"""Output from a language model."""
+
LanguageModelLike = Runnable[LanguageModelInput, LanguageModelOutput]
+"""Input/output interface for a language model."""
+
LanguageModelOutputVar = TypeVar("LanguageModelOutputVar", AIMessage, str)
+"""Type variable for the output of a language model."""
def _get_verbosity() -> bool:
@@ -123,16 +130,20 @@ class BaseLanguageModel(
* If instance of `BaseCache`, will use the provided cache.
Caching is not currently supported for streaming methods of models.
-
"""
+
verbose: bool = Field(default_factory=_get_verbosity, exclude=True, repr=False)
"""Whether to print out response text."""
+
callbacks: Callbacks = Field(default=None, exclude=True)
"""Callbacks to add to the run trace."""
+
tags: list[str] | None = Field(default=None, exclude=True)
"""Tags to add to the run trace."""
+
metadata: dict[str, Any] | None = Field(default=None, exclude=True)
"""Metadata to add to the run trace."""
+
custom_get_token_ids: Callable[[str], list[int]] | None = Field(
default=None, exclude=True
)
@@ -146,7 +157,7 @@ class BaseLanguageModel(
def set_verbose(cls, verbose: bool | None) -> bool: # noqa: FBT001
"""If verbose is `None`, set it.
- This allows users to pass in None as verbose to access the global setting.
+ This allows users to pass in `None` as verbose to access the global setting.
Args:
verbose: The verbosity setting to use.
@@ -186,22 +197,29 @@ class BaseLanguageModel(
1. Take advantage of batched calls,
2. Need more output from the model than just the top generated value,
3. Are building chains that are agnostic to the underlying language model
- type (e.g., pure text completion models vs chat models).
+ type (e.g., pure text completion models vs chat models).
Args:
- prompts: List of PromptValues. A PromptValue is an object that can be
- converted to match the format of any language model (string for pure
- text generation models and BaseMessages for chat models).
- stop: Stop words to use when generating. Model output is cut off at the
- first occurrence of any of these substrings.
- callbacks: Callbacks to pass through. Used for executing additional
- functionality, such as logging or streaming, throughout generation.
- **kwargs: Arbitrary additional keyword arguments. These are usually passed
- to the model provider API call.
+ prompts: List of `PromptValue` objects.
+
+ A `PromptValue` is an object that can be converted to match the format
+ of any language model (string for pure text generation models and
+ `BaseMessage` objects for chat models).
+ stop: Stop words to use when generating.
+
+ Model output is cut off at the first occurrence of any of these
+ substrings.
+ callbacks: `Callbacks` to pass through.
+
+ Used for executing additional functionality, such as logging or
+ streaming, throughout generation.
+ **kwargs: Arbitrary additional keyword arguments.
+
+ These are usually passed to the model provider API call.
Returns:
- An LLMResult, which contains a list of candidate Generations for each input
- prompt and additional model provider-specific output.
+ An `LLMResult`, which contains a list of candidate `Generation` objects for
+ each input prompt and additional model provider-specific output.
"""
@@ -223,22 +241,29 @@ class BaseLanguageModel(
1. Take advantage of batched calls,
2. Need more output from the model than just the top generated value,
3. Are building chains that are agnostic to the underlying language model
- type (e.g., pure text completion models vs chat models).
+ type (e.g., pure text completion models vs chat models).
Args:
- prompts: List of PromptValues. A PromptValue is an object that can be
- converted to match the format of any language model (string for pure
- text generation models and BaseMessages for chat models).
- stop: Stop words to use when generating. Model output is cut off at the
- first occurrence of any of these substrings.
- callbacks: Callbacks to pass through. Used for executing additional
- functionality, such as logging or streaming, throughout generation.
- **kwargs: Arbitrary additional keyword arguments. These are usually passed
- to the model provider API call.
+ prompts: List of `PromptValue` objects.
+
+ A `PromptValue` is an object that can be converted to match the format
+ of any language model (string for pure text generation models and
+ `BaseMessage` objects for chat models).
+ stop: Stop words to use when generating.
+
+ Model output is cut off at the first occurrence of any of these
+ substrings.
+ callbacks: `Callbacks` to pass through.
+
+ Used for executing additional functionality, such as logging or
+ streaming, throughout generation.
+ **kwargs: Arbitrary additional keyword arguments.
+
+ These are usually passed to the model provider API call.
Returns:
- An `LLMResult`, which contains a list of candidate Generations for each
- input prompt and additional model provider-specific output.
+ An `LLMResult`, which contains a list of candidate `Generation` objects for
+ each input prompt and additional model provider-specific output.
"""
@@ -256,15 +281,14 @@ class BaseLanguageModel(
return self.lc_attributes
def get_token_ids(self, text: str) -> list[int]:
- """Return the ordered ids of the tokens in a text.
+ """Return the ordered IDs of the tokens in a text.
Args:
text: The string input to tokenize.
Returns:
- A list of ids corresponding to the tokens in the text, in order they occur
- in the text.
-
+ A list of IDs corresponding to the tokens in the text, in order they occur
+ in the text.
"""
if self.custom_get_token_ids is not None:
return self.custom_get_token_ids(text)
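
`get_token_ids` uses `custom_get_token_ids` when it is set and otherwise falls back to the library's default tokenizer (which in current releases is based on a GPT-2 tokenizer and therefore needs the `transformers` package). A minimal sketch that supplies a toy byte-level tokenizer so no extra dependency is needed:

```python
from langchain_core.language_models import FakeListChatModel


def byte_token_ids(text: str) -> list[int]:
    """Toy tokenizer: one 'token ID' per UTF-8 byte. Purely illustrative."""
    return list(text.encode("utf-8"))


model = FakeListChatModel(
    responses=["ok"],
    custom_get_token_ids=byte_token_ids,
)

print(model.get_token_ids("héllo"))   # [104, 195, 169, 108, 108, 111]
print(model.get_num_tokens("héllo"))  # 6 -- the length of the list above
```
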
diff --git a/libs/core/langchain_core/language_models/chat_models.py b/libs/core/langchain_core/language_models/chat_models.py
index 6506111092e..f37113b4661 100644
--- a/libs/core/langchain_core/language_models/chat_models.py
+++ b/libs/core/langchain_core/language_models/chat_models.py
@@ -15,6 +15,7 @@ from typing import TYPE_CHECKING, Any, Literal, cast
from pydantic import BaseModel, ConfigDict, Field
from typing_extensions import override
+from langchain_core._api.beta_decorator import beta
from langchain_core.caches import BaseCache
from langchain_core.callbacks import (
AsyncCallbackManager,
@@ -75,6 +76,8 @@ from langchain_core.utils.utils import LC_ID_PREFIX, from_env
if TYPE_CHECKING:
import uuid
+ from langchain_model_profiles import ModelProfile # type: ignore[import-untyped]
+
from langchain_core.output_parsers.base import OutputParserLike
from langchain_core.runnables import Runnable, RunnableConfig
from langchain_core.tools import BaseTool
@@ -240,79 +243,52 @@ def _format_ls_structured_output(ls_structured_output_format: dict | None) -> di
class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
- """Base class for chat models.
+ r"""Base class for chat models.
Key imperative methods:
Methods that actually call the underlying model.
- +---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
- | Method | Input | Output | Description |
- +===========================+================================================================+=====================================================================+==================================================================================================+
- | `invoke` | str | list[dict | tuple | BaseMessage] | PromptValue | BaseMessage | A single chat model call. |
- +---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
- | `ainvoke` | ''' | BaseMessage | Defaults to running invoke in an async executor. |
- +---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
- | `stream` | ''' | Iterator[BaseMessageChunk] | Defaults to yielding output of invoke. |
- +---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
- | `astream` | ''' | AsyncIterator[BaseMessageChunk] | Defaults to yielding output of ainvoke. |
- +---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
- | `astream_events` | ''' | AsyncIterator[StreamEvent] | Event types: 'on_chat_model_start', 'on_chat_model_stream', 'on_chat_model_end'. |
- +---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
- | `batch` | list['''] | list[BaseMessage] | Defaults to running invoke in concurrent threads. |
- +---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
- | `abatch` | list['''] | list[BaseMessage] | Defaults to running ainvoke in concurrent threads. |
- +---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
- | `batch_as_completed` | list['''] | Iterator[tuple[int, Union[BaseMessage, Exception]]] | Defaults to running invoke in concurrent threads. |
- +---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
- | `abatch_as_completed` | list['''] | AsyncIterator[tuple[int, Union[BaseMessage, Exception]]] | Defaults to running ainvoke in concurrent threads. |
- +---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
+ This table provides a brief overview of the main imperative methods. Please see the base `Runnable` reference for full documentation.
- This table provides a brief overview of the main imperative methods. Please see the base Runnable reference for full documentation.
+ | Method | Input | Output | Description |
+ | ---------------------- | ------------------------------------------------------------ | ---------------------------------------------------------- | -------------------------------------------------------------------------------- |
+    | `invoke`               | `str` \| `list[dict \| tuple \| BaseMessage]` \| `PromptValue` | `BaseMessage`                                              | A single chat model call.                                                          |
+ | `ainvoke` | `'''` | `BaseMessage` | Defaults to running `invoke` in an async executor. |
+ | `stream` | `'''` | `Iterator[BaseMessageChunk]` | Defaults to yielding output of `invoke`. |
+ | `astream` | `'''` | `AsyncIterator[BaseMessageChunk]` | Defaults to yielding output of `ainvoke`. |
+ | `astream_events` | `'''` | `AsyncIterator[StreamEvent]` | Event types: `on_chat_model_start`, `on_chat_model_stream`, `on_chat_model_end`. |
+ | `batch` | `list[''']` | `list[BaseMessage]` | Defaults to running `invoke` in concurrent threads. |
+ | `abatch` | `list[''']` | `list[BaseMessage]` | Defaults to running `ainvoke` in concurrent threads. |
+ | `batch_as_completed` | `list[''']` | `Iterator[tuple[int, Union[BaseMessage, Exception]]]` | Defaults to running `invoke` in concurrent threads. |
+ | `abatch_as_completed` | `list[''']` | `AsyncIterator[tuple[int, Union[BaseMessage, Exception]]]` | Defaults to running `ainvoke` in concurrent threads. |
Key declarative methods:
- Methods for creating another Runnable using the ChatModel.
-
- +----------------------------------+-----------------------------------------------------------------------------------------------------------+
- | Method | Description |
- +==================================+===========================================================================================================+
- | `bind_tools` | Create ChatModel that can call tools. |
- +----------------------------------+-----------------------------------------------------------------------------------------------------------+
- | `with_structured_output` | Create wrapper that structures model output using schema. |
- +----------------------------------+-----------------------------------------------------------------------------------------------------------+
- | `with_retry` | Create wrapper that retries model calls on failure. |
- +----------------------------------+-----------------------------------------------------------------------------------------------------------+
- | `with_fallbacks` | Create wrapper that falls back to other models on failure. |
- +----------------------------------+-----------------------------------------------------------------------------------------------------------+
- | `configurable_fields` | Specify init args of the model that can be configured at runtime via the RunnableConfig. |
- +----------------------------------+-----------------------------------------------------------------------------------------------------------+
- | `configurable_alternatives` | Specify alternative models which can be swapped in at runtime via the RunnableConfig. |
- +----------------------------------+-----------------------------------------------------------------------------------------------------------+
+ Methods for creating another `Runnable` using the chat model.
This table provides a brief overview of the main declarative methods. Please see the reference for each method for full documentation.
+ | Method | Description |
+ | ---------------------------- | ------------------------------------------------------------------------------------------ |
+ | `bind_tools` | Create chat model that can call tools. |
+ | `with_structured_output` | Create wrapper that structures model output using schema. |
+ | `with_retry` | Create wrapper that retries model calls on failure. |
+ | `with_fallbacks` | Create wrapper that falls back to other models on failure. |
+ | `configurable_fields` | Specify init args of the model that can be configured at runtime via the `RunnableConfig`. |
+ | `configurable_alternatives` | Specify alternative models which can be swapped in at runtime via the `RunnableConfig`. |
+
Creating custom chat model:
Custom chat model implementations should inherit from this class.
Please reference the table below for information about which
methods and properties are required or optional for implementations.
- +----------------------------------+--------------------------------------------------------------------+-------------------+
- | Method/Property | Description | Required/Optional |
- +==================================+====================================================================+===================+
+    | Method/Property                  | Description                                                         | Required/Optional |
+ | -------------------------------- | ------------------------------------------------------------------ | ----------------- |
| `_generate` | Use to generate a chat result from a prompt | Required |
- +----------------------------------+--------------------------------------------------------------------+-------------------+
| `_llm_type` (property) | Used to uniquely identify the type of the model. Used for logging. | Required |
- +----------------------------------+--------------------------------------------------------------------+-------------------+
| `_identifying_params` (property) | Represent model parameterization for tracing purposes. | Optional |
- +----------------------------------+--------------------------------------------------------------------+-------------------+
| `_stream` | Use to implement streaming | Optional |
- +----------------------------------+--------------------------------------------------------------------+-------------------+
| `_agenerate` | Use to implement a native async method | Optional |
- +----------------------------------+--------------------------------------------------------------------+-------------------+
| `_astream` | Use to implement async version of `_stream` | Optional |
- +----------------------------------+--------------------------------------------------------------------+-------------------+
-
- Follow the guide for more information on how to implement a custom Chat Model:
- [Guide](https://python.langchain.com/docs/how_to/custom_chat_model/).
""" # noqa: E501
@@ -327,9 +303,9 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
- If `True`, will always bypass streaming case.
- If `'tool_calling'`, will bypass streaming case only when the model is called
- with a `tools` keyword argument. In other words, LangChain will automatically
- switch to non-streaming behavior (`invoke`) only when the tools argument is
- provided. This offers the best of both worlds.
+ with a `tools` keyword argument. In other words, LangChain will automatically
+ switch to non-streaming behavior (`invoke`) only when the tools argument is
+ provided. This offers the best of both worlds.
- If `False` (Default), will always use streaming case if available.
The main reason for this flag is that code might be written using `stream` and
@@ -349,13 +325,14 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
Supported values:
- `'v0'`: provider-specific format in content (can lazily-parse with
- `.content_blocks`)
- - `'v1'`: standardized format in content (consistent with `.content_blocks`)
+ `content_blocks`)
+ - `'v1'`: standardized format in content (consistent with `content_blocks`)
- Partner packages (e.g., `langchain-openai`) can also use this field to roll out
- new content formats in a backward-compatible way.
+ Partner packages (e.g.,
+ [`langchain-openai`](https://pypi.org/project/langchain-openai)) can also use this
+ field to roll out new content formats in a backward-compatible way.
- !!! version-added "Added in version 1.0"
+ !!! version-added "Added in `langchain-core` 1.0"
"""
@@ -864,24 +841,29 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
1. Take advantage of batched calls,
2. Need more output from the model than just the top generated value,
3. Are building chains that are agnostic to the underlying language model
- type (e.g., pure text completion models vs chat models).
+ type (e.g., pure text completion models vs chat models).
Args:
messages: List of list of messages.
- stop: Stop words to use when generating. Model output is cut off at the
- first occurrence of any of these substrings.
- callbacks: Callbacks to pass through. Used for executing additional
- functionality, such as logging or streaming, throughout generation.
+ stop: Stop words to use when generating.
+
+ Model output is cut off at the first occurrence of any of these
+ substrings.
+ callbacks: `Callbacks` to pass through.
+
+ Used for executing additional functionality, such as logging or
+ streaming, throughout generation.
tags: The tags to apply.
metadata: The metadata to apply.
run_name: The name of the run.
run_id: The ID of the run.
- **kwargs: Arbitrary additional keyword arguments. These are usually passed
- to the model provider API call.
+ **kwargs: Arbitrary additional keyword arguments.
+
+ These are usually passed to the model provider API call.
Returns:
- An LLMResult, which contains a list of candidate Generations for each input
- prompt and additional model provider-specific output.
+            An `LLMResult`, which contains a list of candidate `Generation` objects for
+            each input prompt and additional model provider-specific output.
"""
ls_structured_output_format = kwargs.pop(
@@ -982,24 +964,29 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
1. Take advantage of batched calls,
2. Need more output from the model than just the top generated value,
3. Are building chains that are agnostic to the underlying language model
- type (e.g., pure text completion models vs chat models).
+ type (e.g., pure text completion models vs chat models).
Args:
messages: List of list of messages.
- stop: Stop words to use when generating. Model output is cut off at the
- first occurrence of any of these substrings.
- callbacks: Callbacks to pass through. Used for executing additional
- functionality, such as logging or streaming, throughout generation.
+ stop: Stop words to use when generating.
+
+ Model output is cut off at the first occurrence of any of these
+ substrings.
+ callbacks: `Callbacks` to pass through.
+
+ Used for executing additional functionality, such as logging or
+ streaming, throughout generation.
tags: The tags to apply.
metadata: The metadata to apply.
run_name: The name of the run.
run_id: The ID of the run.
- **kwargs: Arbitrary additional keyword arguments. These are usually passed
- to the model provider API call.
+ **kwargs: Arbitrary additional keyword arguments.
+
+ These are usually passed to the model provider API call.
Returns:
- An LLMResult, which contains a list of candidate Generations for each input
- prompt and additional model provider-specific output.
+            An `LLMResult`, which contains a list of candidate `Generation` objects for
+            each input prompt and additional model provider-specific output.
"""
ls_structured_output_format = kwargs.pop(
@@ -1528,25 +1515,33 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
Args:
schema: The output schema. Can be passed in as:
- - an OpenAI function/tool schema,
- - a JSON Schema,
- - a `TypedDict` class,
- - or a Pydantic class.
+ - An OpenAI function/tool schema,
+ - A JSON Schema,
+ - A `TypedDict` class,
+ - Or a Pydantic class.
If `schema` is a Pydantic class then the model output will be a
Pydantic instance of that class, and the model-generated fields will be
validated by the Pydantic class. Otherwise the model output will be a
- dict and will not be validated. See `langchain_core.utils.function_calling.convert_to_openai_tool`
- for more on how to properly specify types and descriptions of
- schema fields when specifying a Pydantic or `TypedDict` class.
+ dict and will not be validated.
+
+ See `langchain_core.utils.function_calling.convert_to_openai_tool` for
+ more on how to properly specify types and descriptions of schema fields
+ when specifying a Pydantic or `TypedDict` class.
include_raw:
- If `False` then only the parsed structured output is returned. If
- an error occurs during model output parsing it will be raised. If `True`
- then both the raw model response (a BaseMessage) and the parsed model
- response will be returned. If an error occurs during output parsing it
- will be caught and returned as well. The final output is always a dict
- with keys `'raw'`, `'parsed'`, and `'parsing_error'`.
+ If `False` then only the parsed structured output is returned.
+
+ If an error occurs during model output parsing it will be raised.
+
+ If `True` then both the raw model response (a `BaseMessage`) and the
+ parsed model response will be returned.
+
+ If an error occurs during output parsing it will be caught and returned
+ as well.
+
+ The final output is always a `dict` with keys `'raw'`, `'parsed'`, and
+ `'parsing_error'`.
Raises:
ValueError: If there are any unsupported `kwargs`.
@@ -1554,99 +1549,102 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
`with_structured_output()`.
Returns:
- A Runnable that takes same inputs as a `langchain_core.language_models.chat.BaseChatModel`.
+            A `Runnable` that takes the same inputs as a
+ `langchain_core.language_models.chat.BaseChatModel`. If `include_raw` is
+ `False` and `schema` is a Pydantic class, `Runnable` outputs an instance
+ of `schema` (i.e., a Pydantic object). Otherwise, if `include_raw` is
+ `False` then `Runnable` outputs a `dict`.
- If `include_raw` is False and `schema` is a Pydantic class, Runnable outputs
- an instance of `schema` (i.e., a Pydantic object).
+ If `include_raw` is `True`, then `Runnable` outputs a `dict` with keys:
- Otherwise, if `include_raw` is False then Runnable outputs a dict.
+ - `'raw'`: `BaseMessage`
+ - `'parsed'`: `None` if there was a parsing error, otherwise the type
+ depends on the `schema` as described above.
+ - `'parsing_error'`: `BaseException | None`
- If `include_raw` is True, then Runnable outputs a dict with keys:
+ Example: Pydantic schema (`include_raw=False`):
- - `'raw'`: BaseMessage
- - `'parsed'`: None if there was a parsing error, otherwise the type depends on the `schema` as described above.
- - `'parsing_error'`: BaseException | None
-
- Example: Pydantic schema (include_raw=False):
- ```python
- from pydantic import BaseModel
+ ```python
+ from pydantic import BaseModel
- class AnswerWithJustification(BaseModel):
- '''An answer to the user question along with justification for the answer.'''
+ class AnswerWithJustification(BaseModel):
+ '''An answer to the user question along with justification for the answer.'''
- answer: str
- justification: str
+ answer: str
+ justification: str
- llm = ChatModel(model="model-name", temperature=0)
- structured_llm = llm.with_structured_output(AnswerWithJustification)
+ model = ChatModel(model="model-name", temperature=0)
+ structured_model = model.with_structured_output(AnswerWithJustification)
- structured_llm.invoke(
- "What weighs more a pound of bricks or a pound of feathers"
- )
+ structured_model.invoke(
+ "What weighs more a pound of bricks or a pound of feathers"
+ )
- # -> AnswerWithJustification(
- # answer='They weigh the same',
- # justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
- # )
- ```
+ # -> AnswerWithJustification(
+ # answer='They weigh the same',
+ # justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
+ # )
+ ```
- Example: Pydantic schema (include_raw=True):
- ```python
- from pydantic import BaseModel
+ Example: Pydantic schema (`include_raw=True`):
+
+ ```python
+ from pydantic import BaseModel
- class AnswerWithJustification(BaseModel):
- '''An answer to the user question along with justification for the answer.'''
+ class AnswerWithJustification(BaseModel):
+ '''An answer to the user question along with justification for the answer.'''
- answer: str
- justification: str
+ answer: str
+ justification: str
- llm = ChatModel(model="model-name", temperature=0)
- structured_llm = llm.with_structured_output(
- AnswerWithJustification, include_raw=True
- )
+ model = ChatModel(model="model-name", temperature=0)
+ structured_model = model.with_structured_output(
+ AnswerWithJustification, include_raw=True
+ )
- structured_llm.invoke(
- "What weighs more a pound of bricks or a pound of feathers"
- )
- # -> {
- # 'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
- # 'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
- # 'parsing_error': None
- # }
- ```
+ structured_model.invoke(
+ "What weighs more a pound of bricks or a pound of feathers"
+ )
+ # -> {
+ # 'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
+ # 'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
+ # 'parsing_error': None
+ # }
+ ```
- Example: Dict schema (include_raw=False):
- ```python
- from pydantic import BaseModel
- from langchain_core.utils.function_calling import convert_to_openai_tool
+ Example: `dict` schema (`include_raw=False`):
+
+ ```python
+ from pydantic import BaseModel
+ from langchain_core.utils.function_calling import convert_to_openai_tool
- class AnswerWithJustification(BaseModel):
- '''An answer to the user question along with justification for the answer.'''
+ class AnswerWithJustification(BaseModel):
+ '''An answer to the user question along with justification for the answer.'''
- answer: str
- justification: str
+ answer: str
+ justification: str
- dict_schema = convert_to_openai_tool(AnswerWithJustification)
- llm = ChatModel(model="model-name", temperature=0)
- structured_llm = llm.with_structured_output(dict_schema)
+ dict_schema = convert_to_openai_tool(AnswerWithJustification)
+ model = ChatModel(model="model-name", temperature=0)
+ structured_model = model.with_structured_output(dict_schema)
- structured_llm.invoke(
- "What weighs more a pound of bricks or a pound of feathers"
- )
- # -> {
- # 'answer': 'They weigh the same',
- # 'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
- # }
- ```
+ structured_model.invoke(
+ "What weighs more a pound of bricks or a pound of feathers"
+ )
+ # -> {
+ # 'answer': 'They weigh the same',
+ # 'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
+ # }
+ ```
- !!! warning "Behavior changed in 0.2.26"
- Added support for TypedDict class.
+ !!! warning "Behavior changed in `langchain-core` 0.2.26"
+ Added support for `TypedDict` class.
""" # noqa: E501
_ = kwargs.pop("method", None)
@@ -1687,6 +1685,40 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
return RunnableMap(raw=llm) | parser_with_fallback
return llm | output_parser
+ @property
+ @beta()
+ def profile(self) -> ModelProfile:
+ """Return profiling information for the model.
+
+ This property relies on the `langchain-model-profiles` package to retrieve chat
+ model capabilities, such as context window sizes and supported features.
+
+ Raises:
+ ImportError: If `langchain-model-profiles` is not installed.
+
+ Returns:
+ A `ModelProfile` object containing profiling information for the model.
+ """
+ try:
+ from langchain_model_profiles import get_model_profile # noqa: PLC0415
+ except ImportError as err:
+ informative_error_message = (
+ "To access model profiling information, please install the "
+ "`langchain-model-profiles` package: "
+ "`pip install langchain-model-profiles`."
+ )
+ raise ImportError(informative_error_message) from err
+
+ provider_id = self._llm_type
+ model_name = (
+ # Model name is not standardized across integrations. New integrations
+ # should prefer `model`.
+ getattr(self, "model", None)
+ or getattr(self, "model_name", None)
+ or getattr(self, "model_id", "")
+ )
+ return get_model_profile(provider_id, model_name) or {}
+
class SimpleChatModel(BaseChatModel):
"""Simplified implementation for a chat model to inherit from.
@@ -1745,9 +1777,12 @@ def _gen_info_and_msg_metadata(
}
+_MAX_CLEANUP_DEPTH = 100
+
+
def _cleanup_llm_representation(serialized: Any, depth: int) -> None:
"""Remove non-serializable objects from a serialized object."""
- if depth > 100: # Don't cooperate for pathological cases
+ if depth > _MAX_CLEANUP_DEPTH: # Don't cooperate for pathological cases
return
if not isinstance(serialized, dict):
diff --git a/libs/core/langchain_core/language_models/fake_chat_models.py b/libs/core/langchain_core/language_models/fake_chat_models.py
index 340f8ad3026..7ffb5896013 100644
--- a/libs/core/langchain_core/language_models/fake_chat_models.py
+++ b/libs/core/langchain_core/language_models/fake_chat_models.py
@@ -1,4 +1,4 @@
-"""Fake ChatModel for testing purposes."""
+"""Fake chat models for testing purposes."""
import asyncio
import re
@@ -19,7 +19,7 @@ from langchain_core.runnables import RunnableConfig
class FakeMessagesListChatModel(BaseChatModel):
- """Fake `ChatModel` for testing purposes."""
+ """Fake chat model for testing purposes."""
responses: list[BaseMessage]
"""List of responses to **cycle** through in order."""
@@ -57,7 +57,7 @@ class FakeListChatModelError(Exception):
class FakeListChatModel(SimpleChatModel):
- """Fake ChatModel for testing purposes."""
+ """Fake chat model for testing purposes."""
responses: list[str]
"""List of responses to **cycle** through in order."""
diff --git a/libs/core/langchain_core/language_models/llms.py b/libs/core/langchain_core/language_models/llms.py
index e81d40832a9..813ae7b21b9 100644
--- a/libs/core/langchain_core/language_models/llms.py
+++ b/libs/core/langchain_core/language_models/llms.py
@@ -1,4 +1,7 @@
-"""Base interface for large language models to expose."""
+"""Base interface for traditional large language models (LLMs) to expose.
+
+These are traditionally older models (newer models generally are chat models).
+"""
from __future__ import annotations
@@ -74,8 +77,8 @@ def create_base_retry_decorator(
Args:
error_types: List of error types to retry on.
- max_retries: Number of retries. Default is 1.
- run_manager: Callback manager for the run. Default is None.
+ max_retries: Number of retries.
+ run_manager: Callback manager for the run.
Returns:
A retry decorator.
@@ -91,13 +94,17 @@ def create_base_retry_decorator(
if isinstance(run_manager, AsyncCallbackManagerForLLMRun):
coro = run_manager.on_retry(retry_state)
try:
- loop = asyncio.get_event_loop()
- if loop.is_running():
- # TODO: Fix RUF006 - this task should have a reference
- # and be awaited somewhere
- loop.create_task(coro) # noqa: RUF006
- else:
+ try:
+ loop = asyncio.get_event_loop()
+ except RuntimeError:
asyncio.run(coro)
+ else:
+ if loop.is_running():
+ # TODO: Fix RUF006 - this task should have a reference
+ # and be awaited somewhere
+ loop.create_task(coro) # noqa: RUF006
+ else:
+ asyncio.run(coro)
except Exception as e:
_log_error_once(f"Error in on_retry: {e}")
else:
@@ -153,7 +160,7 @@ def get_prompts(
Args:
params: Dictionary of parameters.
prompts: List of prompts.
- cache: Cache object. Default is None.
+ cache: Cache object.
Returns:
A tuple of existing prompts, llm_string, missing prompt indexes,
@@ -189,7 +196,7 @@ async def aget_prompts(
Args:
params: Dictionary of parameters.
prompts: List of prompts.
- cache: Cache object. Default is None.
+ cache: Cache object.
Returns:
A tuple of existing prompts, llm_string, missing prompt indexes,
@@ -644,9 +651,12 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompts: The prompts to generate from.
- stop: Stop words to use when generating. Model output is cut off at the
- first occurrence of any of the stop substrings.
- If stop tokens are not supported consider raising NotImplementedError.
+ stop: Stop words to use when generating.
+
+ Model output is cut off at the first occurrence of any of these
+ substrings.
+
+ If stop tokens are not supported consider raising `NotImplementedError`.
run_manager: Callback manager for the run.
Returns:
@@ -664,9 +674,12 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompts: The prompts to generate from.
- stop: Stop words to use when generating. Model output is cut off at the
- first occurrence of any of the stop substrings.
- If stop tokens are not supported consider raising NotImplementedError.
+ stop: Stop words to use when generating.
+
+ Model output is cut off at the first occurrence of any of these
+ substrings.
+
+ If stop tokens are not supported consider raising `NotImplementedError`.
run_manager: Callback manager for the run.
Returns:
@@ -698,11 +711,14 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompt: The prompt to generate from.
- stop: Stop words to use when generating. Model output is cut off at the
- first occurrence of any of these substrings.
+ stop: Stop words to use when generating.
+
+ Model output is cut off at the first occurrence of any of these
+ substrings.
run_manager: Callback manager for the run.
- **kwargs: Arbitrary additional keyword arguments. These are usually passed
- to the model provider API call.
+ **kwargs: Arbitrary additional keyword arguments.
+
+ These are usually passed to the model provider API call.
Yields:
Generation chunks.
@@ -724,11 +740,14 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompt: The prompt to generate from.
- stop: Stop words to use when generating. Model output is cut off at the
- first occurrence of any of these substrings.
+ stop: Stop words to use when generating.
+
+ Model output is cut off at the first occurrence of any of these
+ substrings.
run_manager: Callback manager for the run.
- **kwargs: Arbitrary additional keyword arguments. These are usually passed
- to the model provider API call.
+ **kwargs: Arbitrary additional keyword arguments.
+
+ These are usually passed to the model provider API call.
Yields:
Generation chunks.
@@ -839,10 +858,14 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompts: List of string prompts.
- stop: Stop words to use when generating. Model output is cut off at the
- first occurrence of any of these substrings.
- callbacks: Callbacks to pass through. Used for executing additional
- functionality, such as logging or streaming, throughout generation.
+ stop: Stop words to use when generating.
+
+ Model output is cut off at the first occurrence of any of these
+ substrings.
+ callbacks: `Callbacks` to pass through.
+
+ Used for executing additional functionality, such as logging or
+ streaming, throughout generation.
tags: List of tags to associate with each prompt. If provided, the length
of the list must match the length of the prompts list.
metadata: List of metadata dictionaries to associate with each prompt. If
@@ -852,8 +875,9 @@ class BaseLLM(BaseLanguageModel[str], ABC):
length of the list must match the length of the prompts list.
run_id: List of run IDs to associate with each prompt. If provided, the
length of the list must match the length of the prompts list.
- **kwargs: Arbitrary additional keyword arguments. These are usually passed
- to the model provider API call.
+ **kwargs: Arbitrary additional keyword arguments.
+
+ These are usually passed to the model provider API call.
Raises:
ValueError: If prompts is not a list.
@@ -861,8 +885,8 @@ class BaseLLM(BaseLanguageModel[str], ABC):
`run_name` (if provided) does not match the length of prompts.
Returns:
- An LLMResult, which contains a list of candidate Generations for each input
- prompt and additional model provider-specific output.
+            An `LLMResult`, which contains a list of candidate `Generation` objects for
+            each input prompt and additional model provider-specific output.
"""
if not isinstance(prompts, list):
msg = (
@@ -1109,10 +1133,14 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompts: List of string prompts.
- stop: Stop words to use when generating. Model output is cut off at the
- first occurrence of any of these substrings.
- callbacks: Callbacks to pass through. Used for executing additional
- functionality, such as logging or streaming, throughout generation.
+ stop: Stop words to use when generating.
+
+ Model output is cut off at the first occurrence of any of these
+ substrings.
+ callbacks: `Callbacks` to pass through.
+
+ Used for executing additional functionality, such as logging or
+ streaming, throughout generation.
tags: List of tags to associate with each prompt. If provided, the length
of the list must match the length of the prompts list.
metadata: List of metadata dictionaries to associate with each prompt. If
@@ -1122,16 +1150,17 @@ class BaseLLM(BaseLanguageModel[str], ABC):
length of the list must match the length of the prompts list.
run_id: List of run IDs to associate with each prompt. If provided, the
length of the list must match the length of the prompts list.
- **kwargs: Arbitrary additional keyword arguments. These are usually passed
- to the model provider API call.
+ **kwargs: Arbitrary additional keyword arguments.
+
+ These are usually passed to the model provider API call.
Raises:
ValueError: If the length of `callbacks`, `tags`, `metadata`, or
`run_name` (if provided) does not match the length of prompts.
Returns:
- An LLMResult, which contains a list of candidate Generations for each input
- prompt and additional model provider-specific output.
+            An `LLMResult`, which contains a list of candidate `Generation` objects for
+            each input prompt and additional model provider-specific output.
"""
if isinstance(metadata, list):
metadata = [
@@ -1387,11 +1416,6 @@ class LLM(BaseLLM):
`astream` will use `_astream` if provided, otherwise it will implement
a fallback behavior that will use `_stream` if `_stream` is implemented,
and use `_acall` if `_stream` is not implemented.
-
- Please see the following guide for more information on how to
- implement a custom LLM:
-
- https://python.langchain.com/docs/how_to/custom_llm/
"""
@abstractmethod
@@ -1408,12 +1432,16 @@ class LLM(BaseLLM):
Args:
prompt: The prompt to generate from.
- stop: Stop words to use when generating. Model output is cut off at the
- first occurrence of any of the stop substrings.
- If stop tokens are not supported consider raising NotImplementedError.
+ stop: Stop words to use when generating.
+
+ Model output is cut off at the first occurrence of any of these
+ substrings.
+
+ If stop tokens are not supported consider raising `NotImplementedError`.
run_manager: Callback manager for the run.
- **kwargs: Arbitrary additional keyword arguments. These are usually passed
- to the model provider API call.
+ **kwargs: Arbitrary additional keyword arguments.
+
+ These are usually passed to the model provider API call.
Returns:
The model output as a string. SHOULD NOT include the prompt.
@@ -1434,12 +1462,16 @@ class LLM(BaseLLM):
Args:
prompt: The prompt to generate from.
- stop: Stop words to use when generating. Model output is cut off at the
- first occurrence of any of the stop substrings.
- If stop tokens are not supported consider raising NotImplementedError.
+ stop: Stop words to use when generating.
+
+ Model output is cut off at the first occurrence of any of these
+ substrings.
+
+ If stop tokens are not supported consider raising `NotImplementedError`.
run_manager: Callback manager for the run.
- **kwargs: Arbitrary additional keyword arguments. These are usually passed
- to the model provider API call.
+ **kwargs: Arbitrary additional keyword arguments.
+
+ These are usually passed to the model provider API call.
Returns:
The model output as a string. SHOULD NOT include the prompt.
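
As the `LLM` docstring above notes, a subclass only has to provide `_call` and `_llm_type`; streaming and async variants fall back automatically. A minimal sketch of a toy subclass (the class and its behavior are invented for illustration):

```python
from typing import Any

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM


class ReverseLLM(LLM):
    """Toy LLM whose 'completion' is the reversed prompt."""

    @property
    def _llm_type(self) -> str:
        return "reverse-llm"

    def _call(
        self,
        prompt: str,
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> str:
        # Per the docstring above, the output should not include the prompt.
        return prompt[::-1]


model = ReverseLLM()
print(model.invoke("hello"))      # 'olleh'
print(model.batch(["ab", "cd"]))  # ['ba', 'dc']
```
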
diff --git a/libs/core/langchain_core/load/dump.py b/libs/core/langchain_core/load/dump.py
index 01c886d33bd..4cb9ca59892 100644
--- a/libs/core/langchain_core/load/dump.py
+++ b/libs/core/langchain_core/load/dump.py
@@ -17,7 +17,7 @@ def default(obj: Any) -> Any:
obj: The object to serialize to json if it is a Serializable object.
Returns:
- A json serializable object or a SerializedNotImplemented object.
+        A JSON serializable object or a `SerializedNotImplemented` object.
"""
if isinstance(obj, Serializable):
return obj.to_json()
@@ -38,17 +38,16 @@ def _dump_pydantic_models(obj: Any) -> Any:
def dumps(obj: Any, *, pretty: bool = False, **kwargs: Any) -> str:
- """Return a json string representation of an object.
+ """Return a JSON string representation of an object.
Args:
obj: The object to dump.
- pretty: Whether to pretty print the json. If true, the json will be
- indented with 2 spaces (if no indent is provided as part of kwargs).
- Default is False.
- **kwargs: Additional arguments to pass to json.dumps
+        pretty: Whether to pretty print the JSON. If `True`, the JSON will be
+            indented with 2 spaces (if no indent is provided as part of `kwargs`).
+        **kwargs: Additional arguments to pass to `json.dumps`.
Returns:
- A json string representation of the object.
+ A JSON string representation of the object.
Raises:
ValueError: If `default` is passed as a kwarg.
@@ -72,14 +71,12 @@ def dumps(obj: Any, *, pretty: bool = False, **kwargs: Any) -> str:
def dumpd(obj: Any) -> Any:
"""Return a dict representation of an object.
- !!! note
- Unfortunately this function is not as efficient as it could be because it first
- dumps the object to a json string and then loads it back into a dictionary.
-
Args:
obj: The object to dump.
Returns:
- dictionary that can be serialized to json using json.dumps
+        Dictionary that can be serialized to JSON using `json.dumps`.
"""
+ # Unfortunately this function is not as efficient as it could be because it first
+    # dumps the object to a JSON string and then loads it back into a dictionary.
return json.loads(dumps(obj))
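
`dumps` and `dumpd` produce the serialized form that the loaders in the next file consume; any object that opts into LangChain serialization (messages, prompts, many runnables) can be dumped this way. A minimal sketch:

```python
from langchain_core.load import dumpd, dumps
from langchain_core.messages import AIMessage

message = AIMessage(content="serialize me")

as_json = dumps(message, pretty=True)  # JSON string, indented with 2 spaces
as_dict = dumpd(message)               # plain dict, safe to pass to json.dumps

print(as_dict["type"])  # 'constructor'
print(as_dict["id"])    # ['langchain', 'schema', 'messages', 'AIMessage']
```
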
diff --git a/libs/core/langchain_core/load/load.py b/libs/core/langchain_core/load/load.py
index 7b71760cf21..ed832e69dbb 100644
--- a/libs/core/langchain_core/load/load.py
+++ b/libs/core/langchain_core/load/load.py
@@ -63,16 +63,13 @@ class Reviver:
Args:
secrets_map: A map of secrets to load. If a secret is not found in
the map, it will be loaded from the environment if `secrets_from_env`
- is True. Defaults to `None`.
+            is `True`.
valid_namespaces: A list of additional namespaces (modules)
- to allow to be deserialized. Defaults to `None`.
+ to allow to be deserialized.
secrets_from_env: Whether to load secrets from the environment.
- Defaults to `True`.
additional_import_mappings: A dictionary of additional namespace mappings
You can use this to override default mappings or add new mappings.
- Defaults to `None`.
ignore_unserializable_fields: Whether to ignore unserializable fields.
- Defaults to `False`.
"""
self.secrets_from_env = secrets_from_env
self.secrets_map = secrets_map or {}
@@ -200,16 +197,13 @@ def loads(
text: The string to load.
secrets_map: A map of secrets to load. If a secret is not found in
the map, it will be loaded from the environment if `secrets_from_env`
- is True. Defaults to `None`.
+            is `True`.
valid_namespaces: A list of additional namespaces (modules)
- to allow to be deserialized. Defaults to `None`.
+ to allow to be deserialized.
secrets_from_env: Whether to load secrets from the environment.
- Defaults to `True`.
additional_import_mappings: A dictionary of additional namespace mappings
You can use this to override default mappings or add new mappings.
- Defaults to `None`.
ignore_unserializable_fields: Whether to ignore unserializable fields.
- Defaults to `False`.
Returns:
Revived LangChain objects.
@@ -245,16 +239,13 @@ def load(
obj: The object to load.
secrets_map: A map of secrets to load. If a secret is not found in
the map, it will be loaded from the environment if `secrets_from_env`
- is True. Defaults to `None`.
+            is `True`.
valid_namespaces: A list of additional namespaces (modules)
- to allow to be deserialized. Defaults to `None`.
+ to allow to be deserialized.
secrets_from_env: Whether to load secrets from the environment.
- Defaults to `True`.
additional_import_mappings: A dictionary of additional namespace mappings
You can use this to override default mappings or add new mappings.
- Defaults to `None`.
ignore_unserializable_fields: Whether to ignore unserializable fields.
- Defaults to `False`.
Returns:
Revived LangChain objects.
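A hedged sketch of the secrets handling these docstrings describe (illustrative only; the secret name is made up and this prompt carries no secrets, so the map simply goes unused):

```python
from langchain_core.load import dumpd, load
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([("human", "Tell me about {topic}")])
serialized = dumpd(prompt)

# Entries in secrets_map are consulted before the environment; setting
# secrets_from_env=False disables the os.environ fallback entirely.
revived = load(
    serialized,
    secrets_map={"MY_API_KEY": "sk-..."},
    secrets_from_env=False,
)
```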
diff --git a/libs/core/langchain_core/load/serializable.py b/libs/core/langchain_core/load/serializable.py
index f4f8c0417b7..9c7588c185b 100644
--- a/libs/core/langchain_core/load/serializable.py
+++ b/libs/core/langchain_core/load/serializable.py
@@ -25,9 +25,9 @@ class BaseSerialized(TypedDict):
id: list[str]
"""The unique identifier of the object."""
name: NotRequired[str]
- """The name of the object. Optional."""
+ """The name of the object."""
graph: NotRequired[dict[str, Any]]
- """The graph of the object. Optional."""
+ """The graph of the object."""
class SerializedConstructor(BaseSerialized):
@@ -52,7 +52,7 @@ class SerializedNotImplemented(BaseSerialized):
type: Literal["not_implemented"]
"""The type of the object. Must be `'not_implemented'`."""
repr: str | None
- """The representation of the object. Optional."""
+ """The representation of the object."""
def try_neq_default(value: Any, key: str, model: BaseModel) -> bool:
@@ -61,7 +61,7 @@ def try_neq_default(value: Any, key: str, model: BaseModel) -> bool:
Args:
value: The value.
key: The key.
- model: The pydantic model.
+ model: The Pydantic model.
Returns:
Whether the value is different from the default.
@@ -93,18 +93,21 @@ class Serializable(BaseModel, ABC):
It relies on the following methods and properties:
- `is_lc_serializable`: Is this class serializable?
- By design, even if a class inherits from Serializable, it is not serializable by
- default. This is to prevent accidental serialization of objects that should not
- be serialized.
- - `get_lc_namespace`: Get the namespace of the langchain object.
+ By design, even if a class inherits from `Serializable`, it is not serializable
+ by default. This is to prevent accidental serialization of objects that should
+ not be serialized.
+ - `get_lc_namespace`: Get the namespace of the LangChain object.
+
During deserialization, this namespace is used to identify
the correct class to instantiate.
+
Please see the `Reviver` class in `langchain_core.load.load` for more details.
- During deserialization an additional mapping is handle
- classes that have moved or been renamed across package versions.
+ During deserialization, an additional mapping is used to handle classes that
+ have moved or been renamed across package versions.
+
- `lc_secrets`: A map of constructor argument names to secret ids.
- `lc_attributes`: List of additional attribute names that should be included
- as part of the serialized representation.
+ as part of the serialized representation.
"""
# Remove default BaseModel init docstring.
@@ -116,24 +119,24 @@ class Serializable(BaseModel, ABC):
def is_lc_serializable(cls) -> bool:
"""Is this class serializable?
- By design, even if a class inherits from Serializable, it is not serializable by
- default. This is to prevent accidental serialization of objects that should not
- be serialized.
+ By design, even if a class inherits from `Serializable`, it is not serializable
+ by default. This is to prevent accidental serialization of objects that should
+ not be serialized.
Returns:
- Whether the class is serializable. Default is False.
+ Whether the class is serializable. Default is `False`.
"""
return False
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
For example, if the class is `langchain.llms.openai.OpenAI`, then the
- namespace is ["langchain", "llms", "openai"]
+ namespace is `["langchain", "llms", "openai"]`
Returns:
- The namespace as a list of strings.
+ The namespace.
"""
return cls.__module__.split(".")
@@ -141,8 +144,7 @@ class Serializable(BaseModel, ABC):
def lc_secrets(self) -> dict[str, str]:
"""A map of constructor argument names to secret ids.
- For example,
- {"openai_api_key": "OPENAI_API_KEY"}
+ For example, `{"openai_api_key": "OPENAI_API_KEY"}`
"""
return {}
@@ -151,6 +153,7 @@ class Serializable(BaseModel, ABC):
"""List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
+
Default is an empty dictionary.
"""
return {}
@@ -194,7 +197,7 @@ class Serializable(BaseModel, ABC):
ValueError: If the class has deprecated attributes.
Returns:
- A json serializable object or a SerializedNotImplemented object.
+ A JSON serializable object or a `SerializedNotImplemented` object.
"""
if not self.is_lc_serializable():
return self.to_json_not_implemented()
@@ -269,7 +272,7 @@ class Serializable(BaseModel, ABC):
"""Serialize a "not implemented" object.
Returns:
- SerializedNotImplemented.
+ `SerializedNotImplemented`.
"""
return to_json_not_implemented(self)
@@ -284,8 +287,8 @@ def _is_field_useful(inst: Serializable, key: str, value: Any) -> bool:
Returns:
Whether the field is useful. If the field is required, it is useful.
- If the field is not required, it is useful if the value is not None.
- If the field is not required and the value is None, it is useful if the
+ If the field is not required, it is useful if the value is not `None`.
+ If the field is not required and the value is `None`, it is useful if the
default value is different from the value.
"""
field = type(inst).model_fields.get(key)
@@ -344,10 +347,10 @@ def to_json_not_implemented(obj: object) -> SerializedNotImplemented:
"""Serialize a "not implemented" object.
Args:
- obj: object to serialize.
+ obj: Object to serialize.
Returns:
- SerializedNotImplemented
+ `SerializedNotImplemented`
"""
id_: list[str] = []
try:
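To make the opt-in behavior concrete, a sketch of a class that enables serialization and maps a constructor argument to a secret id (class, field, and secret names are made up for illustration):

```python
from langchain_core.load.serializable import Serializable


class MyComponent(Serializable):
    api_key: str
    temperature: float = 0.7

    @classmethod
    def is_lc_serializable(cls) -> bool:
        # Opt in explicitly; inheriting from Serializable alone is not enough.
        return True

    @property
    def lc_secrets(self) -> dict[str, str]:
        # Constructor argument name -> secret id used during (de)serialization.
        return {"api_key": "MY_COMPONENT_API_KEY"}
```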
diff --git a/libs/core/langchain_core/messages/__init__.py b/libs/core/langchain_core/messages/__init__.py
index 183aff7fae7..97171f56b16 100644
--- a/libs/core/langchain_core/messages/__init__.py
+++ b/libs/core/langchain_core/messages/__init__.py
@@ -9,6 +9,9 @@ if TYPE_CHECKING:
from langchain_core.messages.ai import (
AIMessage,
AIMessageChunk,
+ InputTokenDetails,
+ OutputTokenDetails,
+ UsageMetadata,
)
from langchain_core.messages.base import (
BaseMessage,
@@ -87,10 +90,12 @@ __all__ = (
"HumanMessage",
"HumanMessageChunk",
"ImageContentBlock",
+ "InputTokenDetails",
"InvalidToolCall",
"MessageLikeRepresentation",
"NonStandardAnnotation",
"NonStandardContentBlock",
+ "OutputTokenDetails",
"PlainTextContentBlock",
"ReasoningContentBlock",
"RemoveMessage",
@@ -104,6 +109,7 @@ __all__ = (
"ToolCallChunk",
"ToolMessage",
"ToolMessageChunk",
+ "UsageMetadata",
"VideoContentBlock",
"_message_from_dict",
"convert_to_messages",
@@ -145,6 +151,7 @@ _dynamic_imports = {
"HumanMessageChunk": "human",
"NonStandardAnnotation": "content",
"NonStandardContentBlock": "content",
+ "OutputTokenDetails": "ai",
"PlainTextContentBlock": "content",
"ReasoningContentBlock": "content",
"RemoveMessage": "modifier",
@@ -154,12 +161,14 @@ _dynamic_imports = {
"SystemMessage": "system",
"SystemMessageChunk": "system",
"ImageContentBlock": "content",
+ "InputTokenDetails": "ai",
"InvalidToolCall": "tool",
"TextContentBlock": "content",
"ToolCall": "tool",
"ToolCallChunk": "tool",
"ToolMessage": "tool",
"ToolMessageChunk": "tool",
+ "UsageMetadata": "ai",
"VideoContentBlock": "content",
"AnyMessage": "utils",
"MessageLikeRepresentation": "utils",
diff --git a/libs/core/langchain_core/messages/ai.py b/libs/core/langchain_core/messages/ai.py
index 011c1d2ed46..fb85027b142 100644
--- a/libs/core/langchain_core/messages/ai.py
+++ b/libs/core/langchain_core/messages/ai.py
@@ -48,10 +48,10 @@ class InputTokenDetails(TypedDict, total=False):
}
```
- !!! version-added "Added in version 0.3.9"
-
May also hold extra provider-specific keys.
+ !!! version-added "Added in `langchain-core` 0.3.9"
+
"""
audio: int
@@ -83,7 +83,9 @@ class OutputTokenDetails(TypedDict, total=False):
}
```
- !!! version-added "Added in version 0.3.9"
+ May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
"""
@@ -121,9 +123,13 @@ class UsageMetadata(TypedDict):
}
```
- !!! warning "Behavior changed in 0.3.9"
+ !!! warning "Behavior changed in `langchain-core` 0.3.9"
Added `input_token_details` and `output_token_details`.
+ !!! note "LangSmith SDK"
+ The LangSmith SDK also has a `UsageMetadata` class. While the two share fields,
+ LangSmith's `UsageMetadata` has additional fields to capture cost information
+ used by the LangSmith platform.
"""
input_tokens: int
@@ -131,7 +137,7 @@ class UsageMetadata(TypedDict):
output_tokens: int
"""Count of output (or completion) tokens. Sum of all output token types."""
total_tokens: int
- """Total token count. Sum of input_tokens + output_tokens."""
+ """Total token count. Sum of `input_tokens` + `output_tokens`."""
input_token_details: NotRequired[InputTokenDetails]
"""Breakdown of input token counts.
@@ -141,34 +147,31 @@ class UsageMetadata(TypedDict):
"""Breakdown of output token counts.
Does *not* need to sum to full output token count. Does *not* need to have all keys.
-
"""
class AIMessage(BaseMessage):
"""Message from an AI.
- AIMessage is returned from a chat model as a response to a prompt.
+ An `AIMessage` is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both
- the raw output as returned by the model together standardized fields
+ the raw output as returned by the model and standardized fields
(e.g., tool calls, usage metadata) added by the LangChain framework.
-
"""
tool_calls: list[ToolCall] = []
- """If provided, tool calls associated with the message."""
+ """If present, tool calls associated with the message."""
invalid_tool_calls: list[InvalidToolCall] = []
- """If provided, tool calls with parsing errors associated with the message."""
+ """If present, tool calls with parsing errors associated with the message."""
usage_metadata: UsageMetadata | None = None
- """If provided, usage metadata for a message, such as token counts.
+ """If present, usage metadata for a message, such as token counts.
This is a standard representation of token usage that is consistent across models.
-
"""
type: Literal["ai"] = "ai"
- """The type of the message (used for deserialization). Defaults to "ai"."""
+ """The type of the message (used for deserialization)."""
@overload
def __init__(
@@ -191,7 +194,7 @@ class AIMessage(BaseMessage):
content_blocks: list[types.ContentBlock] | None = None,
**kwargs: Any,
) -> None:
- """Initialize `AIMessage`.
+ """Initialize an `AIMessage`.
Specify `content` as positional arg or `content_blocks` for typing.
@@ -217,7 +220,11 @@ class AIMessage(BaseMessage):
@property
def lc_attributes(self) -> dict:
- """Attrs to be serialized even if they are derived from other init args."""
+ """Attributes to be serialized.
+
+ Includes all attributes, even if they are derived from other initialization
+ arguments.
+ """
return {
"tool_calls": self.tool_calls,
"invalid_tool_calls": self.invalid_tool_calls,
@@ -225,7 +232,7 @@ class AIMessage(BaseMessage):
@property
def content_blocks(self) -> list[types.ContentBlock]:
- """Return content blocks of the message.
+ """Return standard, typed `ContentBlock` dicts from the message.
If the message has a known model provider, use the provider-specific translator
first before falling back to best-effort parsing. For details, see the property
@@ -331,11 +338,10 @@ class AIMessage(BaseMessage):
@override
def pretty_repr(self, html: bool = False) -> str:
- """Return a pretty representation of the message.
+ """Return a pretty representation of the message for display.
Args:
html: Whether to return an HTML-formatted string.
- Defaults to `False`.
Returns:
A pretty representation of the message.
@@ -372,23 +378,19 @@ class AIMessage(BaseMessage):
class AIMessageChunk(AIMessage, BaseMessageChunk):
- """Message chunk from an AI."""
+ """Message chunk from an AI (yielded when streaming)."""
# Ignoring mypy re-assignment here since we're overriding the value
# to make sure that the chunk variant can be discriminated from the
# non-chunk variant.
type: Literal["AIMessageChunk"] = "AIMessageChunk" # type: ignore[assignment]
- """The type of the message (used for deserialization).
-
- Defaults to `AIMessageChunk`.
-
- """
+ """The type of the message (used for deserialization)."""
tool_call_chunks: list[ToolCallChunk] = []
"""If provided, tool call chunks associated with the message."""
chunk_position: Literal["last"] | None = None
- """Optional span represented by an aggregated AIMessageChunk.
+ """Optional span represented by an aggregated `AIMessageChunk`.
If a chunk with `chunk_position="last"` is aggregated into a stream,
`tool_call_chunks` in message content will be parsed into `tool_calls`.
@@ -396,7 +398,7 @@ class AIMessageChunk(AIMessage, BaseMessageChunk):
@property
def lc_attributes(self) -> dict:
- """Attrs to be serialized even if they are derived from other init args."""
+ """Attributes to be serialized, even if they are derived from other initialization args.""" # noqa: E501
return {
"tool_calls": self.tool_calls,
"invalid_tool_calls": self.invalid_tool_calls,
@@ -404,7 +406,7 @@ class AIMessageChunk(AIMessage, BaseMessageChunk):
@property
def content_blocks(self) -> list[types.ContentBlock]:
- """Return content blocks of the message."""
+ """Return standard, typed `ContentBlock` dicts from the message."""
if self.response_metadata.get("output_version") == "v1":
return cast("list[types.ContentBlock]", self.content)
@@ -545,12 +547,15 @@ class AIMessageChunk(AIMessage, BaseMessageChunk):
and call_id in id_to_tc
):
self.content[idx] = cast("dict[str, Any]", id_to_tc[call_id])
+ if "extras" in block:
+ # mypy does not account for instance check for dict above
+ self.content[idx]["extras"] = block["extras"] # type: ignore[index]
return self
@model_validator(mode="after")
def init_server_tool_calls(self) -> Self:
- """Parse server_tool_call_chunks."""
+ """Parse `server_tool_call_chunks`."""
if (
self.chunk_position == "last"
and self.response_metadata.get("output_version") == "v1"
@@ -650,13 +655,13 @@ def add_ai_message_chunks(
chunk_id = id_
break
else:
- # second pass: prefer lc_run-* ids over lc_* ids
+ # second pass: prefer lc_run-* IDs over lc_* IDs
for id_ in candidates:
if id_ and id_.startswith(LC_ID_PREFIX):
chunk_id = id_
break
else:
- # third pass: take any remaining id (auto-generated lc_* ids)
+ # third pass: take any remaining ID (auto-generated lc_* IDs)
for id_ in candidates:
if id_:
chunk_id = id_
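A sketch of the standardized fields the `AIMessage` docstring above describes (the tool call and token counts are illustrative):

```python
from langchain_core.messages import AIMessage

msg = AIMessage(
    content="",
    tool_calls=[
        {"name": "get_weather", "args": {"city": "Paris"}, "id": "call_1", "type": "tool_call"}
    ],
    usage_metadata={"input_tokens": 12, "output_tokens": 4, "total_tokens": 16},
)
assert msg.usage_metadata["total_tokens"] == 16
```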
diff --git a/libs/core/langchain_core/messages/base.py b/libs/core/langchain_core/messages/base.py
index 0a735eb2829..05ce4e57fd1 100644
--- a/libs/core/langchain_core/messages/base.py
+++ b/libs/core/langchain_core/messages/base.py
@@ -92,11 +92,15 @@ class TextAccessor(str):
class BaseMessage(Serializable):
"""Base abstract message class.
- Messages are the inputs and outputs of a `ChatModel`.
+ Messages are the inputs and outputs of a chat model.
+
+ Examples include [`HumanMessage`][langchain.messages.HumanMessage],
+ [`AIMessage`][langchain.messages.AIMessage], and
+ [`SystemMessage`][langchain.messages.SystemMessage].
"""
content: str | list[str | dict]
- """The string contents of the message."""
+ """The contents of the message."""
additional_kwargs: dict = Field(default_factory=dict)
"""Reserved for additional payload data associated with the message.
@@ -159,12 +163,12 @@ class BaseMessage(Serializable):
content_blocks: list[types.ContentBlock] | None = None,
**kwargs: Any,
) -> None:
- """Initialize `BaseMessage`.
+ """Initialize a `BaseMessage`.
Specify `content` as positional arg or `content_blocks` for typing.
Args:
- content: The string contents of the message.
+ content: The contents of the message.
content_blocks: Typed standard content.
**kwargs: Additional arguments to pass to the parent class.
"""
@@ -184,7 +188,7 @@ class BaseMessage(Serializable):
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "messages"]`
@@ -195,7 +199,7 @@ class BaseMessage(Serializable):
def content_blocks(self) -> list[types.ContentBlock]:
r"""Load content blocks from the message content.
- !!! version-added "Added in version 1.0.0"
+ !!! version-added "Added in `langchain-core` 1.0.0"
"""
# Needed here to avoid circular import, as these classes import BaseMessages
@@ -262,7 +266,7 @@ class BaseMessage(Serializable):
Can be used as both property (`message.text`) and method (`message.text()`).
!!! deprecated
- As of langchain-core 1.0.0, calling `.text()` as a method is deprecated.
+ As of `langchain-core` 1.0.0, calling `.text()` as a method is deprecated.
Use `.text` as a property instead. This method will be removed in 2.0.0.
Returns:
@@ -307,7 +311,7 @@ class BaseMessage(Serializable):
Args:
html: Whether to format the message as HTML. If `True`, the message will be
- formatted with HTML tags. Default is False.
+ formatted with HTML tags.
Returns:
A pretty representation of the message.
@@ -464,7 +468,7 @@ def get_msg_title_repr(title: str, *, bold: bool = False) -> str:
Args:
title: The title.
- bold: Whether to bold the title. Default is False.
+ bold: Whether to bold the title.
Returns:
The title representation.
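A sketch of the `.text` accessor behavior referenced above: property access is the supported pattern, while the call form still works in 1.x but is deprecated and may emit a warning:

```python
from langchain_core.messages import AIMessage

msg = AIMessage(content="hello")
print(msg.text)    # preferred: property access
print(msg.text())  # deprecated call form, slated for removal in 2.0.0
```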
diff --git a/libs/core/langchain_core/messages/block_translators/__init__.py b/libs/core/langchain_core/messages/block_translators/__init__.py
index 11419fd5b81..9d0c67ea7d2 100644
--- a/libs/core/langchain_core/messages/block_translators/__init__.py
+++ b/libs/core/langchain_core/messages/block_translators/__init__.py
@@ -28,7 +28,7 @@ dictionary with two keys:
- `'translate_content'`: Function to translate `AIMessage` content.
- `'translate_content_chunk'`: Function to translate `AIMessageChunk` content.
-When calling `.content_blocks` on an `AIMessage` or `AIMessageChunk`, if
+When calling `content_blocks` on an `AIMessage` or `AIMessageChunk`, if
`model_provider` is set in `response_metadata`, the corresponding translator
functions will be used to parse the content into blocks. Otherwise, best-effort parsing
in `BaseMessage` will be used.
diff --git a/libs/core/langchain_core/messages/block_translators/anthropic.py b/libs/core/langchain_core/messages/block_translators/anthropic.py
index 87f9df8a392..c5178be45b9 100644
--- a/libs/core/langchain_core/messages/block_translators/anthropic.py
+++ b/libs/core/langchain_core/messages/block_translators/anthropic.py
@@ -31,7 +31,7 @@ def _convert_to_v1_from_anthropic_input(
) -> list[types.ContentBlock]:
"""Convert Anthropic format blocks to v1 format.
- During the `.content_blocks` parsing process, we wrap blocks not recognized as a v1
+ During the `content_blocks` parsing process, we wrap blocks not recognized as a v1
block as a `'non_standard'` block with the original block stored in the `value`
field. This function attempts to unpack those blocks and convert any blocks that
might be Anthropic format to v1 ContentBlocks.
diff --git a/libs/core/langchain_core/messages/block_translators/bedrock_converse.py b/libs/core/langchain_core/messages/block_translators/bedrock_converse.py
index 6d5e517e49f..c44ef6ca535 100644
--- a/libs/core/langchain_core/messages/block_translators/bedrock_converse.py
+++ b/libs/core/langchain_core/messages/block_translators/bedrock_converse.py
@@ -35,7 +35,7 @@ def _convert_to_v1_from_converse_input(
) -> list[types.ContentBlock]:
"""Convert Bedrock Converse format blocks to v1 format.
- During the `.content_blocks` parsing process, we wrap blocks not recognized as a v1
+ During the `content_blocks` parsing process, we wrap blocks not recognized as a v1
block as a `'non_standard'` block with the original block stored in the `value`
field. This function attempts to unpack those blocks and convert any blocks that
might be Converse format to v1 ContentBlocks.
diff --git a/libs/core/langchain_core/messages/block_translators/google_genai.py b/libs/core/langchain_core/messages/block_translators/google_genai.py
index 8380a267a52..2a82f035c23 100644
--- a/libs/core/langchain_core/messages/block_translators/google_genai.py
+++ b/libs/core/langchain_core/messages/block_translators/google_genai.py
@@ -105,7 +105,7 @@ def _convert_to_v1_from_genai_input(
Called when message isn't an `AIMessage` or `model_provider` isn't set on
`response_metadata`.
- During the `.content_blocks` parsing process, we wrap blocks not recognized as a v1
+ During the `content_blocks` parsing process, we wrap blocks not recognized as a v1
block as a `'non_standard'` block with the original block stored in the `value`
field. This function attempts to unpack those blocks and convert any blocks that
might be GenAI format to v1 ContentBlocks.
@@ -282,7 +282,7 @@ def _convert_to_v1_from_genai(message: AIMessage) -> list[types.ContentBlock]:
standard content blocks for returning.
Args:
- message: The AIMessage or AIMessageChunk to convert.
+ message: The `AIMessage` or `AIMessageChunk` to convert.
Returns:
List of standard content blocks derived from the message content.
@@ -368,7 +368,7 @@ def _convert_to_v1_from_genai(message: AIMessage) -> list[types.ContentBlock]:
else:
# Assume it's raw base64 without data URI
try:
- # Validate base64 and decode for mime type detection
+ # Validate base64 and decode for MIME type detection
decoded_bytes = base64.b64decode(url, validate=True)
image_url_b64_block = {
@@ -379,7 +379,7 @@ def _convert_to_v1_from_genai(message: AIMessage) -> list[types.ContentBlock]:
try:
import filetype # type: ignore[import-not-found] # noqa: PLC0415
- # Guess mime type based on file bytes
+ # Guess MIME type based on file bytes
mime_type = None
kind = filetype.guess(decoded_bytes)
if kind:
@@ -453,10 +453,13 @@ def _convert_to_v1_from_genai(message: AIMessage) -> list[types.ContentBlock]:
"status": status, # type: ignore[typeddict-item]
"output": item.get("code_execution_result", ""),
}
+ server_tool_result_block["extras"] = {"block_type": item_type}
# Preserve original outcome in extras
if outcome is not None:
- server_tool_result_block["extras"] = {"outcome": outcome}
+ server_tool_result_block["extras"]["outcome"] = outcome
converted_blocks.append(server_tool_result_block)
+ elif item_type == "text":
+ converted_blocks.append(cast("types.TextContentBlock", item))
else:
# Unknown type, preserve as non-standard
converted_blocks.append({"type": "non_standard", "value": item})
diff --git a/libs/core/langchain_core/messages/block_translators/google_vertexai.py b/libs/core/langchain_core/messages/block_translators/google_vertexai.py
index f4e8f7ec0ab..016f146164e 100644
--- a/libs/core/langchain_core/messages/block_translators/google_vertexai.py
+++ b/libs/core/langchain_core/messages/block_translators/google_vertexai.py
@@ -1,37 +1,9 @@
"""Derivations of standard content blocks from Google (VertexAI) content."""
-import warnings
-
-from langchain_core.messages import AIMessage, AIMessageChunk
-from langchain_core.messages import content as types
-
-WARNED = False
-
-
-def translate_content(message: AIMessage) -> list[types.ContentBlock]: # noqa: ARG001
- """Derive standard content blocks from a message with Google (VertexAI) content."""
- global WARNED # noqa: PLW0603
- if not WARNED:
- warning_message = (
- "Content block standardization is not yet fully supported for Google "
- "VertexAI."
- )
- warnings.warn(warning_message, stacklevel=2)
- WARNED = True
- raise NotImplementedError
-
-
-def translate_content_chunk(message: AIMessageChunk) -> list[types.ContentBlock]: # noqa: ARG001
- """Derive standard content blocks from a chunk with Google (VertexAI) content."""
- global WARNED # noqa: PLW0603
- if not WARNED:
- warning_message = (
- "Content block standardization is not yet fully supported for Google "
- "VertexAI."
- )
- warnings.warn(warning_message, stacklevel=2)
- WARNED = True
- raise NotImplementedError
+from langchain_core.messages.block_translators.google_genai import (
+ translate_content,
+ translate_content_chunk,
+)
def _register_google_vertexai_translator() -> None:
diff --git a/libs/core/langchain_core/messages/block_translators/groq.py b/libs/core/langchain_core/messages/block_translators/groq.py
index 958de52280a..33f1921ddc8 100644
--- a/libs/core/langchain_core/messages/block_translators/groq.py
+++ b/libs/core/langchain_core/messages/block_translators/groq.py
@@ -1,39 +1,135 @@
"""Derivations of standard content blocks from Groq content."""
-import warnings
+import json
+import re
+from typing import Any
from langchain_core.messages import AIMessage, AIMessageChunk
from langchain_core.messages import content as types
-
-WARNED = False
+from langchain_core.messages.base import _extract_reasoning_from_additional_kwargs
-def translate_content(message: AIMessage) -> list[types.ContentBlock]: # noqa: ARG001
- """Derive standard content blocks from a message with Groq content."""
- global WARNED # noqa: PLW0603
- if not WARNED:
- warning_message = (
- "Content block standardization is not yet fully supported for Groq."
+def _populate_extras(
+ standard_block: types.ContentBlock, block: dict[str, Any], known_fields: set[str]
+) -> types.ContentBlock:
+ """Mutate a block, populating extras."""
+ if standard_block.get("type") == "non_standard":
+ return standard_block
+
+ for key, value in block.items():
+ if key not in known_fields:
+ if "extras" not in standard_block:
+ # Below type-ignores are because mypy thinks a non-standard block can
+ # get here, although we exclude them above.
+ standard_block["extras"] = {} # type: ignore[typeddict-unknown-key]
+ standard_block["extras"][key] = value # type: ignore[typeddict-item]
+
+ return standard_block
+
+
+def _parse_code_json(s: str) -> dict:
+ """Extract Python code from Groq built-in tool content.
+
+ Extracts the value of the 'code' field from a string of the form:
+ {"code": some_arbitrary_text_with_unescaped_quotes}
+
+ Groq may not escape quotes inside executed tool arguments, e.g.:
+ ```
+ '{"code": "import math; print("The square root of 101 is: "); print(math.sqrt(101))"}'
+ ```
+ """ # noqa: E501
+ m = re.fullmatch(r'\s*\{\s*"code"\s*:\s*"(.*)"\s*\}\s*', s, flags=re.DOTALL)
+ if not m:
+ msg = (
+ "Could not extract Python code from Groq tool arguments. "
+ "Expected a JSON object with a 'code' field."
)
- warnings.warn(warning_message, stacklevel=2)
- WARNED = True
- raise NotImplementedError
+ raise ValueError(msg)
+ return {"code": m.group(1)}
-def translate_content_chunk(message: AIMessageChunk) -> list[types.ContentBlock]: # noqa: ARG001
- """Derive standard content blocks from a message chunk with Groq content."""
- global WARNED # noqa: PLW0603
- if not WARNED:
- warning_message = (
- "Content block standardization is not yet fully supported for Groq."
+def _convert_to_v1_from_groq(message: AIMessage) -> list[types.ContentBlock]:
+ """Convert groq message content to v1 format."""
+ content_blocks: list[types.ContentBlock] = []
+
+ if reasoning_block := _extract_reasoning_from_additional_kwargs(message):
+ content_blocks.append(reasoning_block)
+
+ if executed_tools := message.additional_kwargs.get("executed_tools"):
+ for idx, executed_tool in enumerate(executed_tools):
+ args: dict[str, Any] | None = None
+ if arguments := executed_tool.get("arguments"):
+ try:
+ args = json.loads(arguments)
+ except json.JSONDecodeError:
+ if executed_tool.get("type") == "python":
+ try:
+ args = _parse_code_json(arguments)
+ except ValueError:
+ continue
+ elif (
+ executed_tool.get("type") == "function"
+ and executed_tool.get("name") == "python"
+ ):
+ # GPT-OSS
+ args = {"code": arguments}
+ else:
+ continue
+ if isinstance(args, dict):
+ name = ""
+ if executed_tool.get("type") == "search":
+ name = "web_search"
+ elif executed_tool.get("type") == "python" or (
+ executed_tool.get("type") == "function"
+ and executed_tool.get("name") == "python"
+ ):
+ name = "code_interpreter"
+ server_tool_call: types.ServerToolCall = {
+ "type": "server_tool_call",
+ "name": name,
+ "id": str(idx),
+ "args": args,
+ }
+ content_blocks.append(server_tool_call)
+ if tool_output := executed_tool.get("output"):
+ tool_result: types.ServerToolResult = {
+ "type": "server_tool_result",
+ "tool_call_id": str(idx),
+ "output": tool_output,
+ "status": "success",
+ }
+ known_fields = {"type", "arguments", "index", "output"}
+ _populate_extras(tool_result, executed_tool, known_fields)
+ content_blocks.append(tool_result)
+
+ if isinstance(message.content, str) and message.content:
+ content_blocks.append({"type": "text", "text": message.content})
+
+ for tool_call in message.tool_calls:
+ content_blocks.append( # noqa: PERF401
+ {
+ "type": "tool_call",
+ "name": tool_call["name"],
+ "args": tool_call["args"],
+ "id": tool_call.get("id"),
+ }
)
- warnings.warn(warning_message, stacklevel=2)
- WARNED = True
- raise NotImplementedError
+
+ return content_blocks
+
+
+def translate_content(message: AIMessage) -> list[types.ContentBlock]:
+ """Derive standard content blocks from a message with groq content."""
+ return _convert_to_v1_from_groq(message)
+
+
+def translate_content_chunk(message: AIMessageChunk) -> list[types.ContentBlock]:
+ """Derive standard content blocks from a message chunk with groq content."""
+ return _convert_to_v1_from_groq(message)
def _register_groq_translator() -> None:
- """Register the Groq translator with the central registry.
+ """Register the groq translator with the central registry.
Run automatically when the module is imported.
"""
diff --git a/libs/core/langchain_core/messages/block_translators/langchain_v0.py b/libs/core/langchain_core/messages/block_translators/langchain_v0.py
index 2172bf6e829..f7cb03839e8 100644
--- a/libs/core/langchain_core/messages/block_translators/langchain_v0.py
+++ b/libs/core/langchain_core/messages/block_translators/langchain_v0.py
@@ -10,7 +10,7 @@ def _convert_v0_multimodal_input_to_v1(
) -> list[types.ContentBlock]:
"""Convert v0 multimodal blocks to v1 format.
- During the `.content_blocks` parsing process, we wrap blocks not recognized as a v1
+ During the `content_blocks` parsing process, we wrap blocks not recognized as a v1
block as a `'non_standard'` block with the original block stored in the `value`
field. This function attempts to unpack those blocks and convert any v0 format
blocks to v1 format.
diff --git a/libs/core/langchain_core/messages/block_translators/openai.py b/libs/core/langchain_core/messages/block_translators/openai.py
index 5d60ce025ac..f70c15feca3 100644
--- a/libs/core/langchain_core/messages/block_translators/openai.py
+++ b/libs/core/langchain_core/messages/block_translators/openai.py
@@ -155,7 +155,7 @@ def _convert_to_v1_from_chat_completions_input(
) -> list[types.ContentBlock]:
"""Convert OpenAI Chat Completions format blocks to v1 format.
- During the `.content_blocks` parsing process, we wrap blocks not recognized as a v1
+ During the `content_blocks` parsing process, we wrap blocks not recognized as a v1
block as a `'non_standard'` block with the original block stored in the `value`
field. This function attempts to unpack those blocks and convert any blocks that
might be OpenAI format to v1 ContentBlocks.
diff --git a/libs/core/langchain_core/messages/chat.py b/libs/core/langchain_core/messages/chat.py
index 2050dd7fa0b..6786efcacf4 100644
--- a/libs/core/langchain_core/messages/chat.py
+++ b/libs/core/langchain_core/messages/chat.py
@@ -19,7 +19,7 @@ class ChatMessage(BaseMessage):
"""The speaker / role of the Message."""
type: Literal["chat"] = "chat"
- """The type of the message (used during serialization). Defaults to "chat"."""
+ """The type of the message (used during serialization)."""
class ChatMessageChunk(ChatMessage, BaseMessageChunk):
@@ -29,11 +29,7 @@ class ChatMessageChunk(ChatMessage, BaseMessageChunk):
# to make sure that the chunk variant can be discriminated from the
# non-chunk variant.
type: Literal["ChatMessageChunk"] = "ChatMessageChunk" # type: ignore[assignment]
- """The type of the message (used during serialization).
-
- Defaults to `'ChatMessageChunk'`.
-
- """
+ """The type of the message (used during serialization)."""
@override
def __add__(self, other: Any) -> BaseMessageChunk: # type: ignore[override]
diff --git a/libs/core/langchain_core/messages/content.py b/libs/core/langchain_core/messages/content.py
index 9f14baea9b7..568579bc203 100644
--- a/libs/core/langchain_core/messages/content.py
+++ b/libs/core/langchain_core/messages/content.py
@@ -143,7 +143,7 @@ class Citation(TypedDict):
not the source text. This means that the indices are relative to the model's
response, not the original document (as specified in the `url`).
- !!! note
+ !!! note "Factory function"
`create_citation` may also be used as a factory to create a `Citation`.
Benefits include:
@@ -156,7 +156,9 @@ class Citation(TypedDict):
"""Type of the content block. Used for discrimination."""
id: NotRequired[str]
- """Content block identifier. Either:
+ """Content block identifier.
+
+ Either:
- Generated by the provider (e.g., OpenAI's file ID)
- Generated by LangChain upon creation (`UUID4` prefixed with `'lc_'`))
@@ -201,6 +203,7 @@ class NonStandardAnnotation(TypedDict):
"""Content block identifier.
Either:
+
- Generated by the provider (e.g., OpenAI's file ID)
- Generated by LangChain upon creation (`UUID4` prefixed with `'lc_'`))
@@ -211,6 +214,7 @@ class NonStandardAnnotation(TypedDict):
Annotation = Citation | NonStandardAnnotation
+"""A union of all defined `Annotation` types."""
class TextContentBlock(TypedDict):
@@ -219,7 +223,7 @@ class TextContentBlock(TypedDict):
This typically represents the main text content of a message, such as the response
from a language model or the text of a user message.
- !!! note
+ !!! note "Factory function"
`create_text_block` may also be used as a factory to create a
`TextContentBlock`. Benefits include:
@@ -235,6 +239,7 @@ class TextContentBlock(TypedDict):
"""Content block identifier.
Either:
+
- Generated by the provider (e.g., OpenAI's file ID)
- Generated by LangChain upon creation (`UUID4` prefixed with `'lc_'`))
@@ -254,7 +259,7 @@ class TextContentBlock(TypedDict):
class ToolCall(TypedDict):
- """Represents a request to call a tool.
+ """Represents an AI's request to call a tool.
Example:
```python
@@ -264,7 +269,7 @@ class ToolCall(TypedDict):
This represents a request to call the tool named "foo" with arguments {"a": 1}
and an identifier of "123".
- !!! note
+ !!! note "Factory function"
`create_tool_call` may also be used as a factory to create a
`ToolCall`. Benefits include:
@@ -299,7 +304,7 @@ class ToolCall(TypedDict):
class ToolCallChunk(TypedDict):
- """A chunk of a tool call (e.g., as part of a stream).
+ """A chunk of a tool call (yielded when streaming).
When merging `ToolCallChunks` (e.g., via `AIMessageChunk.__add__`),
all string attributes are concatenated. Chunks are only merged if their
@@ -381,7 +386,10 @@ class InvalidToolCall(TypedDict):
class ServerToolCall(TypedDict):
- """Tool call that is executed server-side."""
+ """Tool call that is executed server-side.
+
+ For example: code execution, web search, etc.
+ """
type: Literal["server_tool_call"]
"""Used for discrimination."""
@@ -403,7 +411,7 @@ class ServerToolCall(TypedDict):
class ServerToolCallChunk(TypedDict):
- """A chunk of a tool call (as part of a stream)."""
+ """A chunk of a server-side tool call (yielded when streaming)."""
type: Literal["server_tool_call_chunk"]
"""Used for discrimination."""
@@ -452,7 +460,7 @@ class ServerToolResult(TypedDict):
class ReasoningContentBlock(TypedDict):
"""Reasoning output from a LLM.
- !!! note
+ !!! note "Factory function"
`create_reasoning_block` may also be used as a factory to create a
`ReasoningContentBlock`. Benefits include:
@@ -468,6 +476,7 @@ class ReasoningContentBlock(TypedDict):
"""Content block identifier.
Either:
+
- Generated by the provider (e.g., OpenAI's file ID)
- Generated by LangChain upon creation (`UUID4` prefixed with `'lc_'`))
@@ -494,7 +503,7 @@ class ReasoningContentBlock(TypedDict):
class ImageContentBlock(TypedDict):
"""Image data.
- !!! note
+ !!! note "Factory function"
`create_image_block` may also be used as a factory to create a
`ImageContentBlock`. Benefits include:
@@ -510,6 +519,7 @@ class ImageContentBlock(TypedDict):
"""Content block identifier.
Either:
+
- Generated by the provider (e.g., OpenAI's file ID)
- Generated by LangChain upon creation (`UUID4` prefixed with `'lc_'`))
@@ -541,7 +551,7 @@ class ImageContentBlock(TypedDict):
class VideoContentBlock(TypedDict):
"""Video data.
- !!! note
+ !!! note "Factory function"
`create_video_block` may also be used as a factory to create a
`VideoContentBlock`. Benefits include:
@@ -557,6 +567,7 @@ class VideoContentBlock(TypedDict):
"""Content block identifier.
Either:
+
- Generated by the provider (e.g., OpenAI's file ID)
- Generated by LangChain upon creation (`UUID4` prefixed with `'lc_'`))
@@ -588,7 +599,7 @@ class VideoContentBlock(TypedDict):
class AudioContentBlock(TypedDict):
"""Audio data.
- !!! note
+ !!! note "Factory function"
`create_audio_block` may also be used as a factory to create an
`AudioContentBlock`. Benefits include:
* Automatic ID generation (when not provided)
@@ -603,6 +614,7 @@ class AudioContentBlock(TypedDict):
"""Content block identifier.
Either:
+
- Generated by the provider (e.g., OpenAI's file ID)
- Generated by LangChain upon creation (`UUID4` prefixed with `'lc_'`))
@@ -632,7 +644,7 @@ class AudioContentBlock(TypedDict):
class PlainTextContentBlock(TypedDict):
- """Plaintext data (e.g., from a document).
+ """Plaintext data (e.g., from a `.txt` or `.md` document).
!!! note
A `PlainTextContentBlock` existed in `langchain-core<1.0.0`. Although the
@@ -642,9 +654,9 @@ class PlainTextContentBlock(TypedDict):
!!! note
Title and context are optional fields that may be passed to the model. See
- Anthropic [example](https://docs.anthropic.com/en/docs/build-with-claude/citations#citable-vs-non-citable-content).
+ Anthropic [example](https://docs.claude.com/en/docs/build-with-claude/citations#citable-vs-non-citable-content).
- !!! note
+ !!! note "Factory function"
`create_plaintext_block` may also be used as a factory to create a
`PlainTextContentBlock`. Benefits include:
@@ -660,6 +672,7 @@ class PlainTextContentBlock(TypedDict):
"""Content block identifier.
Either:
+
- Generated by the provider (e.g., OpenAI's file ID)
- Generated by LangChain upon creation (`UUID4` prefixed with `'lc_'`))
@@ -694,7 +707,7 @@ class PlainTextContentBlock(TypedDict):
class FileContentBlock(TypedDict):
- """File data that doesn't fit into other multimodal blocks.
+ """File data that doesn't fit into other multimodal block types.
This block is intended for files that are not images, audio, or plaintext. For
example, it can be used for PDFs, Word documents, etc.
@@ -703,7 +716,7 @@ class FileContentBlock(TypedDict):
content block type (e.g., `ImageContentBlock`, `AudioContentBlock`,
`PlainTextContentBlock`).
- !!! note
+ !!! note "Factory function"
`create_file_block` may also be used as a factory to create a
`FileContentBlock`. Benefits include:
@@ -719,6 +732,7 @@ class FileContentBlock(TypedDict):
"""Content block identifier.
Either:
+
- Generated by the provider (e.g., OpenAI's file ID)
- Generated by LangChain upon creation (`UUID4` prefixed with `'lc_'`))
@@ -753,7 +767,7 @@ class FileContentBlock(TypedDict):
class NonStandardContentBlock(TypedDict):
- """Provider-specific data.
+ """Provider-specific content data.
This block contains data for which there is not yet a standard type.
@@ -765,7 +779,7 @@ class NonStandardContentBlock(TypedDict):
Has no `extras` field, as provider-specific data should be included in the
`value` field.
- !!! note
+ !!! note "Factory function"
`create_non_standard_block` may also be used as a factory to create a
`NonStandardContentBlock`. Benefits include:
@@ -781,13 +795,14 @@ class NonStandardContentBlock(TypedDict):
"""Content block identifier.
Either:
+
- Generated by the provider (e.g., OpenAI's file ID)
- Generated by LangChain upon creation (`UUID4` prefixed with `'lc_'`))
"""
value: dict[str, Any]
- """Provider-specific data."""
+ """Provider-specific content data."""
index: NotRequired[int | str]
"""Index of block in aggregate response. Used during streaming."""
@@ -801,6 +816,7 @@ DataContentBlock = (
| PlainTextContentBlock
| FileContentBlock
)
+"""A union of all defined multimodal data `ContentBlock` types."""
ToolContentBlock = (
ToolCall | ToolCallChunk | ServerToolCall | ServerToolCallChunk | ServerToolResult
@@ -814,6 +830,7 @@ ContentBlock = (
| DataContentBlock
| ToolContentBlock
)
+"""A union of all defined `ContentBlock` types and aliases."""
KNOWN_BLOCK_TYPES = {
@@ -850,7 +867,7 @@ def _get_data_content_block_types() -> tuple[str, ...]:
Example: ("image", "video", "audio", "text-plain", "file")
Note that old style multimodal blocks type literals with new style blocks.
- Speficially, "image", "audio", and "file".
+ Specifically, "image", "audio", and "file".
See the docstring of `_normalize_messages` in `language_models._utils` for details.
"""
@@ -877,7 +894,7 @@ def is_data_content_block(block: dict) -> bool:
block: The content block to check.
Returns:
- True if the content block is a data content block, False otherwise.
+ `True` if the content block is a data content block, `False` otherwise.
"""
if block.get("type") not in _get_data_content_block_types():
@@ -889,7 +906,7 @@ def is_data_content_block(block: dict) -> bool:
# 'text' is checked to support v0 PlainTextContentBlock types
# We must guard against new style TextContentBlock which also has 'text' `type`
- # by ensuring the presense of `source_type`
+ # by ensuring the presence of `source_type`
if block["type"] == "text" and "source_type" not in block: # noqa: SIM103 # This is more readable
return False
@@ -1382,7 +1399,7 @@ def create_non_standard_block(
"""Create a `NonStandardContentBlock`.
Args:
- value: Provider-specific data.
+ value: Provider-specific content data.
id: Content block identifier. Generated automatically if not provided.
index: Index of block in aggregate response. Used during streaming.
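A sketch of the factory-function pattern these notes point to (exact signatures and auto-generated fields may differ slightly from what is assumed here):

```python
from langchain_core.messages.content import create_text_block, is_data_content_block

block = create_text_block("Hello, world!")
# e.g. {"type": "text", "text": "Hello, world!", "id": "lc_..."} with an auto-generated id

is_data_content_block({"type": "image", "url": "https://example.com/cat.png"})  # True
is_data_content_block({"type": "text", "text": "plain text block"})             # False
```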
diff --git a/libs/core/langchain_core/messages/function.py b/libs/core/langchain_core/messages/function.py
index 2bd63e04e69..ee0dad3975f 100644
--- a/libs/core/langchain_core/messages/function.py
+++ b/libs/core/langchain_core/messages/function.py
@@ -19,7 +19,7 @@ class FunctionMessage(BaseMessage):
do not contain the `tool_call_id` field.
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
"""
@@ -28,7 +28,7 @@ class FunctionMessage(BaseMessage):
"""The name of the function that was executed."""
type: Literal["function"] = "function"
- """The type of the message (used for serialization). Defaults to `'function'`."""
+ """The type of the message (used for serialization)."""
class FunctionMessageChunk(FunctionMessage, BaseMessageChunk):
@@ -38,11 +38,7 @@ class FunctionMessageChunk(FunctionMessage, BaseMessageChunk):
# to make sure that the chunk variant can be discriminated from the
# non-chunk variant.
type: Literal["FunctionMessageChunk"] = "FunctionMessageChunk" # type: ignore[assignment]
- """The type of the message (used for serialization).
-
- Defaults to `'FunctionMessageChunk'`.
-
- """
+ """The type of the message (used for serialization)."""
@override
def __add__(self, other: Any) -> BaseMessageChunk: # type: ignore[override]
diff --git a/libs/core/langchain_core/messages/human.py b/libs/core/langchain_core/messages/human.py
index d4626b6f415..338e2213700 100644
--- a/libs/core/langchain_core/messages/human.py
+++ b/libs/core/langchain_core/messages/human.py
@@ -7,9 +7,9 @@ from langchain_core.messages.base import BaseMessage, BaseMessageChunk
class HumanMessage(BaseMessage):
- """Message from a human.
+ """Message from the user.
- `HumanMessage`s are messages that are passed in from a human to the model.
+ A `HumanMessage` is a message that is passed in from a user to the model.
Example:
```python
@@ -27,11 +27,7 @@ class HumanMessage(BaseMessage):
"""
type: Literal["human"] = "human"
- """The type of the message (used for serialization).
-
- Defaults to `'human'`.
-
- """
+ """The type of the message (used for serialization)."""
@overload
def __init__(
@@ -71,5 +67,4 @@ class HumanMessageChunk(HumanMessage, BaseMessageChunk):
# to make sure that the chunk variant can be discriminated from the
# non-chunk variant.
type: Literal["HumanMessageChunk"] = "HumanMessageChunk" # type: ignore[assignment]
- """The type of the message (used for serialization).
- Defaults to "HumanMessageChunk"."""
+ """The type of the message (used for serialization)."""
diff --git a/libs/core/langchain_core/messages/modifier.py b/libs/core/langchain_core/messages/modifier.py
index 364af242ea5..2175be492e8 100644
--- a/libs/core/langchain_core/messages/modifier.py
+++ b/libs/core/langchain_core/messages/modifier.py
@@ -9,7 +9,7 @@ class RemoveMessage(BaseMessage):
"""Message responsible for deleting other messages."""
type: Literal["remove"] = "remove"
- """The type of the message (used for serialization). Defaults to "remove"."""
+ """The type of the message (used for serialization)."""
def __init__(
self,
diff --git a/libs/core/langchain_core/messages/system.py b/libs/core/langchain_core/messages/system.py
index 789506ce5ce..4a60811dffc 100644
--- a/libs/core/langchain_core/messages/system.py
+++ b/libs/core/langchain_core/messages/system.py
@@ -27,11 +27,7 @@ class SystemMessage(BaseMessage):
"""
type: Literal["system"] = "system"
- """The type of the message (used for serialization).
-
- Defaults to `'system'`.
-
- """
+ """The type of the message (used for serialization)."""
@overload
def __init__(
@@ -71,8 +67,4 @@ class SystemMessageChunk(SystemMessage, BaseMessageChunk):
# to make sure that the chunk variant can be discriminated from the
# non-chunk variant.
type: Literal["SystemMessageChunk"] = "SystemMessageChunk" # type: ignore[assignment]
- """The type of the message (used for serialization).
-
- Defaults to `'SystemMessageChunk'`.
-
- """
+ """The type of the message (used for serialization)."""
diff --git a/libs/core/langchain_core/messages/tool.py b/libs/core/langchain_core/messages/tool.py
index 47128bf5e42..6993ad61c6a 100644
--- a/libs/core/langchain_core/messages/tool.py
+++ b/libs/core/langchain_core/messages/tool.py
@@ -31,36 +31,34 @@ class ToolMessage(BaseMessage, ToolOutputMixin):
Example: A `ToolMessage` representing a result of `42` from a tool call with id
- ```python
- from langchain_core.messages import ToolMessage
+ ```python
+ from langchain_core.messages import ToolMessage
- ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
- ```
+ ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
+ ```
Example: A `ToolMessage` where only part of the tool output is sent to the model
- and the full output is passed in to artifact.
+ and the full output is passed in to artifact.
- !!! version-added "Added in version 0.2.17"
+ ```python
+ from langchain_core.messages import ToolMessage
- ```python
- from langchain_core.messages import ToolMessage
+ tool_output = {
+ "stdout": "From the graph we can see that the correlation between "
+ "x and y is ...",
+ "stderr": None,
+ "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
+ }
- tool_output = {
- "stdout": "From the graph we can see that the correlation between "
- "x and y is ...",
- "stderr": None,
- "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
- }
-
- ToolMessage(
- content=tool_output["stdout"],
- artifact=tool_output,
- tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
- )
- ```
+ ToolMessage(
+ content=tool_output["stdout"],
+ artifact=tool_output,
+ tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
+ )
+ ```
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
"""
@@ -69,11 +67,7 @@ class ToolMessage(BaseMessage, ToolOutputMixin):
"""Tool call that this message is responding to."""
type: Literal["tool"] = "tool"
- """The type of the message (used for serialization).
-
- Defaults to `'tool'`.
-
- """
+ """The type of the message (used for serialization)."""
artifact: Any = None
"""Artifact of the Tool execution which is not meant to be sent to the model.
@@ -82,21 +76,15 @@ class ToolMessage(BaseMessage, ToolOutputMixin):
a subset of the full tool output is being passed as message content but the full
output is needed in other parts of the code.
- !!! version-added "Added in version 0.2.17"
-
"""
status: Literal["success", "error"] = "success"
- """Status of the tool invocation.
-
- !!! version-added "Added in version 0.2.24"
-
- """
+ """Status of the tool invocation."""
additional_kwargs: dict = Field(default_factory=dict, repr=False)
- """Currently inherited from BaseMessage, but not used."""
+ """Currently inherited from `BaseMessage`, but not used."""
response_metadata: dict = Field(default_factory=dict, repr=False)
- """Currently inherited from BaseMessage, but not used."""
+ """Currently inherited from `BaseMessage`, but not used."""
@model_validator(mode="before")
@classmethod
@@ -164,12 +152,12 @@ class ToolMessage(BaseMessage, ToolOutputMixin):
content_blocks: list[types.ContentBlock] | None = None,
**kwargs: Any,
) -> None:
- """Initialize `ToolMessage`.
+ """Initialize a `ToolMessage`.
Specify `content` as positional arg or `content_blocks` for typing.
Args:
- content: The string contents of the message.
+ content: The contents of the message.
content_blocks: Typed standard content.
**kwargs: Additional fields.
"""
@@ -215,7 +203,7 @@ class ToolMessageChunk(ToolMessage, BaseMessageChunk):
class ToolCall(TypedDict):
- """Represents a request to call a tool.
+ """Represents an AI's request to call a tool.
Example:
```python
@@ -261,7 +249,7 @@ def tool_call(
class ToolCallChunk(TypedDict):
- """A chunk of a tool call (e.g., as part of a stream).
+ """A chunk of a tool call (yielded when streaming).
When merging `ToolCallChunk`s (e.g., via `AIMessageChunk.__add__`),
all string attributes are concatenated. Chunks are only merged if their
diff --git a/libs/core/langchain_core/messages/utils.py b/libs/core/langchain_core/messages/utils.py
index 9b3643c0899..9c0c89a23cb 100644
--- a/libs/core/langchain_core/messages/utils.py
+++ b/libs/core/langchain_core/messages/utils.py
@@ -86,6 +86,7 @@ AnyMessage = Annotated[
| Annotated[ToolMessageChunk, Tag(tag="ToolMessageChunk")],
Field(discriminator=Discriminator(_get_type)),
]
+"""A type representing any defined `Message` or `MessageChunk` type."""
def get_buffer_string(
@@ -96,9 +97,7 @@ def get_buffer_string(
Args:
messages: Messages to be converted to strings.
human_prefix: The prefix to prepend to contents of `HumanMessage`s.
- Default is `'Human'`.
- ai_prefix: The prefix to prepend to contents of `AIMessage`. Default is
- `'AI'`.
+ ai_prefix: The prefix to prepend to contents of `AIMessage`.
Returns:
A single string concatenation of all input messages.
@@ -211,6 +210,7 @@ def message_chunk_to_message(chunk: BaseMessage) -> BaseMessage:
MessageLikeRepresentation = (
BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]
)
+"""A type representing the various ways a message can be represented."""
def _create_message_from_message_type(
@@ -227,10 +227,10 @@ def _create_message_from_message_type(
Args:
message_type: (str) the type of the message (e.g., `'human'`, `'ai'`, etc.).
content: (str) the content string.
- name: (str) the name of the message. Default is None.
- tool_call_id: (str) the tool call id. Default is None.
- tool_calls: (list[dict[str, Any]]) the tool calls. Default is None.
- id: (str) the id of the message. Default is None.
+ name: (str) the name of the message.
+ tool_call_id: (str) the tool call id.
+ tool_calls: (list[dict[str, Any]]) the tool calls.
+ id: (str) the id of the message.
additional_kwargs: (dict[str, Any]) additional keyword arguments.
Returns:
@@ -319,7 +319,7 @@ def _convert_to_message(message: MessageLikeRepresentation) -> BaseMessage:
message: a representation of a message in one of the supported formats.
Returns:
- an instance of a message or a message template.
+ An instance of a message or a message template.
Raises:
NotImplementedError: if the message type is not supported.
@@ -328,12 +328,16 @@ def _convert_to_message(message: MessageLikeRepresentation) -> BaseMessage:
"""
if isinstance(message, BaseMessage):
message_ = message
- elif isinstance(message, str):
- message_ = _create_message_from_message_type("human", message)
- elif isinstance(message, Sequence) and len(message) == 2:
- # mypy doesn't realise this can't be a string given the previous branch
- message_type_str, template = message # type: ignore[misc]
- message_ = _create_message_from_message_type(message_type_str, template)
+ elif isinstance(message, Sequence):
+ if isinstance(message, str):
+ message_ = _create_message_from_message_type("human", message)
+ else:
+ try:
+ message_type_str, template = message
+ except ValueError as e:
+ msg = "Message as a sequence must be (role string, template)"
+ raise NotImplementedError(msg) from e
+ message_ = _create_message_from_message_type(message_type_str, template)
elif isinstance(message, dict):
msg_kwargs = message.copy()
try:
@@ -425,22 +429,22 @@ def filter_messages(
Args:
messages: Sequence Message-like objects to filter.
- include_names: Message names to include. Default is None.
- exclude_names: Messages names to exclude. Default is None.
+ include_names: Message names to include.
+ exclude_names: Messages names to exclude.
include_types: Message types to include. Can be specified as string names
(e.g. `'system'`, `'human'`, `'ai'`, ...) or as `BaseMessage`
classes (e.g. `SystemMessage`, `HumanMessage`, `AIMessage`, ...).
- Default is None.
+
exclude_types: Message types to exclude. Can be specified as string names
(e.g. `'system'`, `'human'`, `'ai'`, ...) or as `BaseMessage`
classes (e.g. `SystemMessage`, `HumanMessage`, `AIMessage`, ...).
- Default is None.
- include_ids: Message IDs to include. Default is None.
- exclude_ids: Message IDs to exclude. Default is None.
- exclude_tool_calls: Tool call IDs to exclude. Default is None.
+
+ include_ids: Message IDs to include.
+ exclude_ids: Message IDs to exclude.
+ exclude_tool_calls: Tool call IDs to exclude.
Can be one of the following:
- - `True`: all `AIMessage`s with tool calls and all
- `ToolMessage` objects will be excluded.
+ - `True`: All `AIMessage` objects with tool calls and all `ToolMessage`
+ objects will be excluded.
- a sequence of tool call IDs to exclude:
- `ToolMessage` objects with the corresponding tool call ID will be
excluded.
@@ -568,7 +572,6 @@ def merge_message_runs(
Args:
messages: Sequence Message-like objects to merge.
chunk_separator: Specify the string to be inserted between message chunks.
- Defaults to `'\n'`.
Returns:
list of BaseMessages with consecutive runs of message types merged into single
@@ -703,7 +706,7 @@ def trim_messages(
r"""Trim messages to be below a token count.
`trim_messages` can be used to reduce the size of a chat history to a specified
- token count or specified message count.
+ token or message count.
In either case, if passing the trimmed chat history back into a chat model
directly, the resulting chat history should usually satisfy the following
@@ -714,8 +717,6 @@ def trim_messages(
followed by a `HumanMessage`. To achieve this, set `start_on='human'`.
In addition, generally a `ToolMessage` can only appear after an `AIMessage`
that involved a tool call.
- Please see the following link for more information about messages:
- https://python.langchain.com/docs/concepts/#messages
2. It includes recent messages and drops old messages in the chat history.
To achieve this set the `strategy='last'`.
3. Usually, the new chat history should include the `SystemMessage` if it
@@ -745,12 +746,10 @@ def trim_messages(
strategy: Strategy for trimming.
- `'first'`: Keep the first `<= n_count` tokens of the messages.
- `'last'`: Keep the last `<= n_count` tokens of the messages.
- Default is `'last'`.
allow_partial: Whether to split a message if only part of the message can be
included. If `strategy='last'` then the last partial contents of a message
are included. If `strategy='first'` then the first partial contents of a
message are included.
- Default is False.
end_on: The message type to end on. If specified then every message after the
last occurrence of this type is ignored. If `strategy='last'` then this
is done before we attempt to get the last `max_tokens`. If
@@ -759,7 +758,7 @@ def trim_messages(
`'human'`, `'ai'`, ...) or as `BaseMessage` classes (e.g.
`SystemMessage`, `HumanMessage`, `AIMessage`, ...). Can be a single
type or a list of types.
- Default is None.
+
start_on: The message type to start on. Should only be specified if
`strategy='last'`. If specified then every message before
the first occurrence of this type is ignored. This is done after we trim
@@ -768,10 +767,9 @@ def trim_messages(
specified as string names (e.g. `'system'`, `'human'`, `'ai'`, ...) or
as `BaseMessage` classes (e.g. `SystemMessage`, `HumanMessage`,
`AIMessage`, ...). Can be a single type or a list of types.
- Default is None.
- include_system: Whether to keep the SystemMessage if there is one at index 0.
- Should only be specified if `strategy="last"`.
- Default is False.
+
+ include_system: Whether to keep the `SystemMessage` if there is one at index
+ `0`. Should only be specified if `strategy="last"`.
text_splitter: Function or `langchain_text_splitters.TextSplitter` for
splitting the string contents of a message. Only used if
`allow_partial=True`. If `strategy='last'` then the last split tokens
@@ -782,7 +780,7 @@ def trim_messages(
newlines.
Returns:
- list of trimmed `BaseMessage`.
+ List of trimmed `BaseMessage`.
Raises:
ValueError: if two incompatible arguments are specified or an unrecognized
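A hedged sketch of the trimming behavior described above, using `count_tokens_approximately` (defined later in this same module) as the token counter; the conversation and token budget are made up:

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)
from langchain_core.messages.utils import count_tokens_approximately

messages = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("Hi, I'm Bob."),
    AIMessage("Hi Bob! How can I help?"),
    HumanMessage("What's 2 + 2?"),
]

# Keep the most recent messages that fit the budget, preserve the SystemMessage,
# and make sure the trimmed history starts on a HumanMessage.
trimmed = trim_messages(
    messages,
    strategy="last",
    max_tokens=60,
    start_on="human",
    include_system=True,
    token_counter=count_tokens_approximately,
)
```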
@@ -1031,18 +1029,18 @@ def convert_to_openai_messages(
messages: Message-like object or iterable of objects whose contents are
in OpenAI, Anthropic, Bedrock Converse, or VertexAI formats.
text_format: How to format string or text block contents:
- - `'string'`:
- If a message has a string content, this is left as a string. If
- a message has content blocks that are all of type `'text'`, these
- are joined with a newline to make a single string. If a message has
- content blocks and at least one isn't of type `'text'`, then
- all blocks are left as dicts.
- - `'block'`:
- If a message has a string content, this is turned into a list
- with a single content block of type `'text'`. If a message has
- content blocks these are left as is.
- include_id: Whether to include message ids in the openai messages, if they
- are present in the source messages.
+ - `'string'`:
+ If a message has a string content, this is left as a string. If
+ a message has content blocks that are all of type `'text'`, these
+ are joined with a newline to make a single string. If a message has
+ content blocks and at least one isn't of type `'text'`, then
+ all blocks are left as dicts.
+ - `'block'`:
+ If a message has a string content, this is turned into a list
+ with a single content block of type `'text'`. If a message has
+ content blocks these are left as is.
+ include_id: Whether to include message IDs in the OpenAI messages, if they
+ are present in the source messages.
Raises:
ValueError: if an unrecognized `text_format` is specified, or if a message
@@ -1103,7 +1101,7 @@ def convert_to_openai_messages(
# ]
```
- !!! version-added "Added in version 0.3.11"
+ !!! version-added "Added in `langchain-core` 0.3.11"
""" # noqa: E501
if text_format not in {"string", "block"}:
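A small sketch of the two `text_format` modes described above (the messages are illustrative):

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_core.messages.utils import convert_to_openai_messages

messages = [
    SystemMessage("You are a terse assistant."),
    HumanMessage([{"type": "text", "text": "What color is the sky?"}]),
    AIMessage("Blue."),
]

# 'string' joins text-only content blocks into plain strings
as_strings = convert_to_openai_messages(messages, text_format="string")

# 'block' keeps (or wraps) content as lists of content blocks
as_blocks = convert_to_openai_messages(messages, text_format="block")
```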
@@ -1683,12 +1681,12 @@ def count_tokens_approximately(
Args:
messages: List of messages to count tokens for.
chars_per_token: Number of characters per token to use for the approximation.
- Default is 4 (one token corresponds to ~4 chars for common English text).
- You can also specify float values for more fine-grained control.
+ One token corresponds to ~4 chars for common English text.
+ You can also specify `float` values for more fine-grained control.
[See more here](https://platform.openai.com/tokenizer).
- extra_tokens_per_message: Number of extra tokens to add per message.
- Default is 3 (special tokens, including beginning/end of message).
- You can also specify float values for more fine-grained control.
+ extra_tokens_per_message: Number of extra tokens to add per message, e.g.
+ special tokens, including beginning/end of message.
+ You can also specify `float` values for more fine-grained control.
[See more here](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb).
count_name: Whether to include message names in the count.
Enabled by default.
@@ -1703,7 +1701,7 @@ def count_tokens_approximately(
Warning:
This function does not currently support counting image tokens.
- !!! version-added "Added in version 0.3.46"
+ !!! version-added "Added in `langchain-core` 0.3.46"
"""
token_count = 0.0
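As a usage sketch of the approximation described above (4 characters per token plus 3 extra tokens per message are the documented defaults, repeated here explicitly):

```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.messages.utils import count_tokens_approximately

messages = [
    HumanMessage("Roughly how many tokens is this?"),
    AIMessage("Only a handful."),
]

# Character count / 4, plus 3 extra tokens per message, rounded to an integer
approx = count_tokens_approximately(
    messages,
    chars_per_token=4,
    extra_tokens_per_message=3,
)
```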
diff --git a/libs/core/langchain_core/output_parsers/__init__.py b/libs/core/langchain_core/output_parsers/__init__.py
index 81c40b73a43..7bd9c0ca893 100644
--- a/libs/core/langchain_core/output_parsers/__init__.py
+++ b/libs/core/langchain_core/output_parsers/__init__.py
@@ -1,4 +1,20 @@
-"""**OutputParser** classes parse the output of an LLM call."""
+"""`OutputParser` classes parse the output of an LLM call into structured data.
+
+!!! tip "Structured output"
+
+ Output parsers emerged as an early solution to the challenge of obtaining structured
+ output from LLMs.
+
+ Today, most LLMs support [structured output](https://docs.langchain.com/oss/python/langchain/models#structured-outputs)
+ natively. In such cases, using output parsers may be unnecessary, and you should
+ leverage the model's built-in capabilities for structured output. Refer to the
+ [documentation of your chosen model](https://docs.langchain.com/oss/python/integrations/providers/overview)
+ for guidance on how to achieve structured output directly.
+
+ Output parsers remain valuable when working with models that do not support
+ structured output natively, or when you require additional processing or validation
+ of the model's output beyond its inherent capabilities.
+"""
from typing import TYPE_CHECKING
diff --git a/libs/core/langchain_core/output_parsers/base.py b/libs/core/langchain_core/output_parsers/base.py
index 6786ef7c058..53f5240a96c 100644
--- a/libs/core/langchain_core/output_parsers/base.py
+++ b/libs/core/langchain_core/output_parsers/base.py
@@ -31,13 +31,13 @@ class BaseLLMOutputParser(ABC, Generic[T]):
@abstractmethod
def parse_result(self, result: list[Generation], *, partial: bool = False) -> T:
- """Parse a list of candidate model Generations into a specific format.
+ """Parse a list of candidate model `Generation` objects into a specific format.
Args:
- result: A list of Generations to be parsed. The Generations are assumed
- to be different candidate outputs for a single model input.
+ result: A list of `Generation` to be parsed. The `Generation` objects are
+ assumed to be different candidate outputs for a single model input.
partial: Whether to parse the output as a partial result. This is useful
- for parsers that can parse partial results. Default is False.
+ for parsers that can parse partial results.
Returns:
Structured output.
@@ -46,17 +46,17 @@ class BaseLLMOutputParser(ABC, Generic[T]):
async def aparse_result(
self, result: list[Generation], *, partial: bool = False
) -> T:
- """Async parse a list of candidate model Generations into a specific format.
+ """Async parse a list of candidate model `Generation` objects into a specific format.
Args:
- result: A list of Generations to be parsed. The Generations are assumed
+ result: A list of `Generation` to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
partial: Whether to parse the output as a partial result. This is useful
- for parsers that can parse partial results. Default is False.
+ for parsers that can parse partial results.
Returns:
Structured output.
- """
+ """ # noqa: E501
return await run_in_executor(None, self.parse_result, result, partial=partial)
@@ -135,6 +135,9 @@ class BaseOutputParser(
Example:
```python
+ # Implement a simple boolean output parser
+
+
class BooleanOutputParser(BaseOutputParser[bool]):
true_val: str = "YES"
false_val: str = "NO"
@@ -172,7 +175,7 @@ class BaseOutputParser(
This property is inferred from the first type argument of the class.
Raises:
- TypeError: If the class doesn't have an inferable OutputType.
+ TypeError: If the class doesn't have an inferable `OutputType`.
"""
for base in self.__class__.mro():
if hasattr(base, "__pydantic_generic_metadata__"):
@@ -234,16 +237,16 @@ class BaseOutputParser(
@override
def parse_result(self, result: list[Generation], *, partial: bool = False) -> T:
- """Parse a list of candidate model Generations into a specific format.
+ """Parse a list of candidate model `Generation` objects into a specific format.
- The return value is parsed from only the first Generation in the result, which
- is assumed to be the highest-likelihood Generation.
+ The return value is parsed from only the first `Generation` in the result, which
+ is assumed to be the highest-likelihood `Generation`.
Args:
- result: A list of Generations to be parsed. The Generations are assumed
- to be different candidate outputs for a single model input.
+ result: A list of `Generation` to be parsed. The `Generation` objects are
+ assumed to be different candidate outputs for a single model input.
partial: Whether to parse the output as a partial result. This is useful
- for parsers that can parse partial results. Default is False.
+ for parsers that can parse partial results.
Returns:
Structured output.
@@ -264,20 +267,20 @@ class BaseOutputParser(
async def aparse_result(
self, result: list[Generation], *, partial: bool = False
) -> T:
- """Async parse a list of candidate model Generations into a specific format.
+ """Async parse a list of candidate model `Generation` objects into a specific format.
- The return value is parsed from only the first Generation in the result, which
- is assumed to be the highest-likelihood Generation.
+ The return value is parsed from only the first `Generation` in the result, which
+ is assumed to be the highest-likelihood `Generation`.
Args:
- result: A list of Generations to be parsed. The Generations are assumed
- to be different candidate outputs for a single model input.
+ result: A list of `Generation` to be parsed. The `Generation` objects are
+ assumed to be different candidate outputs for a single model input.
partial: Whether to parse the output as a partial result. This is useful
- for parsers that can parse partial results. Default is False.
+ for parsers that can parse partial results.
Returns:
Structured output.
- """
+ """ # noqa: E501
return await run_in_executor(None, self.parse_result, result, partial=partial)
async def aparse(self, text: str) -> T:
@@ -299,13 +302,13 @@ class BaseOutputParser(
) -> Any:
"""Parse the output of an LLM call with the input prompt for context.
- The prompt is largely provided in the event the OutputParser wants
+ The prompt is largely provided in the event the `OutputParser` wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Args:
completion: String output of a language model.
- prompt: Input PromptValue.
+ prompt: Input `PromptValue`.
Returns:
Structured output.
diff --git a/libs/core/langchain_core/output_parsers/format_instructions.py b/libs/core/langchain_core/output_parsers/format_instructions.py
index 8ad789bcd66..49898917f45 100644
--- a/libs/core/langchain_core/output_parsers/format_instructions.py
+++ b/libs/core/langchain_core/output_parsers/format_instructions.py
@@ -1,11 +1,16 @@
"""Format instructions."""
-JSON_FORMAT_INSTRUCTIONS = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
+JSON_FORMAT_INSTRUCTIONS = """STRICT OUTPUT FORMAT:
+- Return only the JSON value that conforms to the schema. Do not include any additional text, explanations, headings, or separators.
+- Do not wrap the JSON in Markdown or code fences (no ``` or ```json).
+- Do not prepend or append any text (e.g., do not write "Here is the JSON:").
+- The response must be a single top-level JSON value exactly as required by the schema (object/array/etc.), with no trailing commas or comments.
-As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
-the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
+The output should be formatted as a JSON instance that conforms to the JSON schema below.
-Here is the output schema:
+As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}} the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
+
+Here is the output schema (shown in a code block for readability only; do not include any backticks or Markdown in your output):
```
{schema}
```""" # noqa: E501
diff --git a/libs/core/langchain_core/output_parsers/json.py b/libs/core/langchain_core/output_parsers/json.py
index d665c33b48f..fc2b43ee01c 100644
--- a/libs/core/langchain_core/output_parsers/json.py
+++ b/libs/core/langchain_core/output_parsers/json.py
@@ -31,11 +31,14 @@ TBaseModel = TypeVar("TBaseModel", bound=PydanticBaseModel)
class JsonOutputParser(BaseCumulativeTransformOutputParser[Any]):
"""Parse the output of an LLM call to a JSON object.
+ Probably the most reliable output parser for getting structured data that does *not*
+ use function calling.
+
When used in streaming mode, it will yield partial JSON objects containing
all the keys that have been returned so far.
- In streaming, if `diff` is set to `True`, yields JSONPatch operations
- describing the difference between the previous and the current object.
+ In streaming, if `diff` is set to `True`, yields JSONPatch operations describing the
+ difference between the previous and the current object.
"""
pydantic_object: Annotated[type[TBaseModel] | None, SkipValidation()] = None # type: ignore[valid-type]
@@ -62,7 +65,6 @@ class JsonOutputParser(BaseCumulativeTransformOutputParser[Any]):
If `True`, the output will be a JSON object containing
all the keys that have been returned so far.
If `False`, the output will be the full JSON object.
- Default is False.
Returns:
The parsed JSON object.
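A minimal sketch of `JsonOutputParser` on a typical model response (the payload is made up; fenced output is also handled):

```python
from langchain_core.output_parsers import JsonOutputParser

parser = JsonOutputParser()

# Plain JSON strings and ```json fenced blocks are both parsed into Python objects
parser.parse('{"setup": "a question", "punchline": "an answer"}')
parser.parse('```json\n{"setup": "a question", "punchline": "an answer"}\n```')
```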
diff --git a/libs/core/langchain_core/output_parsers/list.py b/libs/core/langchain_core/output_parsers/list.py
index f2c087225c3..16b99b64f6a 100644
--- a/libs/core/langchain_core/output_parsers/list.py
+++ b/libs/core/langchain_core/output_parsers/list.py
@@ -41,7 +41,7 @@ def droplastn(
class ListOutputParser(BaseTransformOutputParser[list[str]]):
- """Parse the output of an LLM call to a list."""
+ """Parse the output of a model to a list."""
@property
def _type(self) -> str:
@@ -74,30 +74,30 @@ class ListOutputParser(BaseTransformOutputParser[list[str]]):
buffer = ""
for chunk in input:
if isinstance(chunk, BaseMessage):
- # extract text
+ # Extract text
chunk_content = chunk.content
if not isinstance(chunk_content, str):
continue
buffer += chunk_content
else:
- # add current chunk to buffer
+ # Add current chunk to buffer
buffer += chunk
- # parse buffer into a list of parts
+ # Parse buffer into a list of parts
try:
done_idx = 0
- # yield only complete parts
+ # Yield only complete parts
for m in droplastn(self.parse_iter(buffer), 1):
done_idx = m.end()
yield [m.group(1)]
buffer = buffer[done_idx:]
except NotImplementedError:
parts = self.parse(buffer)
- # yield only complete parts
+ # Yield only complete parts
if len(parts) > 1:
for part in parts[:-1]:
yield [part]
buffer = parts[-1]
- # yield the last part
+ # Yield the last part
for part in self.parse(buffer):
yield [part]
@@ -108,45 +108,45 @@ class ListOutputParser(BaseTransformOutputParser[list[str]]):
buffer = ""
async for chunk in input:
if isinstance(chunk, BaseMessage):
- # extract text
+ # Extract text
chunk_content = chunk.content
if not isinstance(chunk_content, str):
continue
buffer += chunk_content
else:
- # add current chunk to buffer
+ # Add current chunk to buffer
buffer += chunk
- # parse buffer into a list of parts
+ # Parse buffer into a list of parts
try:
done_idx = 0
- # yield only complete parts
+ # Yield only complete parts
for m in droplastn(self.parse_iter(buffer), 1):
done_idx = m.end()
yield [m.group(1)]
buffer = buffer[done_idx:]
except NotImplementedError:
parts = self.parse(buffer)
- # yield only complete parts
+ # Yield only complete parts
if len(parts) > 1:
for part in parts[:-1]:
yield [part]
buffer = parts[-1]
- # yield the last part
+ # Yield the last part
for part in self.parse(buffer):
yield [part]
class CommaSeparatedListOutputParser(ListOutputParser):
- """Parse the output of an LLM call to a comma-separated list."""
+ """Parse the output of a model to a comma-separated list."""
@classmethod
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "output_parsers", "list"]`
@@ -177,7 +177,7 @@ class CommaSeparatedListOutputParser(ListOutputParser):
)
return [item for sublist in reader for item in sublist]
except csv.Error:
- # keep old logic for backup
+ # Keep old logic for backup
return [part.strip() for part in text.split(",")]
@property
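A small sketch of the comma-separated parser, including the streaming path walked through above (the input strings are illustrative):

```python
from langchain_core.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()

# Non-streaming: the whole string is split at once
parser.parse("red, green, blue")  # -> ['red', 'green', 'blue']

# Streaming: complete items are yielded as single-element lists as soon as
# the buffer contains a full part
for part in parser.transform(iter(["red, gre", "en, blue"])):
    print(part)
```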
diff --git a/libs/core/langchain_core/output_parsers/openai_functions.py b/libs/core/langchain_core/output_parsers/openai_functions.py
index a971884621a..71e55d798d8 100644
--- a/libs/core/langchain_core/output_parsers/openai_functions.py
+++ b/libs/core/langchain_core/output_parsers/openai_functions.py
@@ -31,13 +31,13 @@ class OutputFunctionsParser(BaseGenerationOutputParser[Any]):
Args:
result: The result of the LLM call.
- partial: Whether to parse partial JSON objects. Default is False.
+ partial: Whether to parse partial JSON objects.
Returns:
The parsed JSON object.
Raises:
- OutputParserException: If the output is not valid JSON.
+ `OutputParserException`: If the output is not valid JSON.
"""
generation = result[0]
if not isinstance(generation, ChatGeneration):
@@ -56,7 +56,7 @@ class OutputFunctionsParser(BaseGenerationOutputParser[Any]):
class JsonOutputFunctionsParser(BaseCumulativeTransformOutputParser[Any]):
- """Parse an output as the Json object."""
+ """Parse an output as the JSON object."""
strict: bool = False
"""Whether to allow non-JSON-compliant strings.
@@ -82,13 +82,13 @@ class JsonOutputFunctionsParser(BaseCumulativeTransformOutputParser[Any]):
Args:
result: The result of the LLM call.
- partial: Whether to parse partial JSON objects. Default is False.
+ partial: Whether to parse partial JSON objects.
Returns:
The parsed JSON object.
Raises:
- OutputParserException: If the output is not valid JSON.
+ `OutputParserException`: If the output is not valid JSON.
"""
if len(result) != 1:
msg = f"Expected exactly one result, but got {len(result)}"
@@ -155,7 +155,7 @@ class JsonOutputFunctionsParser(BaseCumulativeTransformOutputParser[Any]):
class JsonKeyOutputFunctionsParser(JsonOutputFunctionsParser):
- """Parse an output as the element of the Json object."""
+ """Parse an output as the element of the JSON object."""
key_name: str
"""The name of the key to return."""
@@ -165,7 +165,7 @@ class JsonKeyOutputFunctionsParser(JsonOutputFunctionsParser):
Args:
result: The result of the LLM call.
- partial: Whether to parse partial JSON objects. Default is False.
+ partial: Whether to parse partial JSON objects.
Returns:
The parsed JSON object.
@@ -177,16 +177,15 @@ class JsonKeyOutputFunctionsParser(JsonOutputFunctionsParser):
class PydanticOutputFunctionsParser(OutputFunctionsParser):
- """Parse an output as a pydantic object.
+ """Parse an output as a Pydantic object.
- This parser is used to parse the output of a ChatModel that uses
- OpenAI function format to invoke functions.
+ This parser is used to parse the output of a chat model that uses OpenAI function
+ format to invoke functions.
- The parser extracts the function call invocation and matches
- them to the pydantic schema provided.
+ The parser extracts the function call invocation and matches them to the Pydantic
+ schema provided.
- An exception will be raised if the function call does not match
- the provided schema.
+ An exception will be raised if the function call does not match the provided schema.
Example:
```python
@@ -221,7 +220,7 @@ class PydanticOutputFunctionsParser(OutputFunctionsParser):
"""
pydantic_schema: type[BaseModel] | dict[str, type[BaseModel]]
- """The pydantic schema to parse the output with.
+ """The Pydantic schema to parse the output with.
If multiple schemas are provided, then the function name will be used to
determine which schema to use.
@@ -230,7 +229,7 @@ class PydanticOutputFunctionsParser(OutputFunctionsParser):
@model_validator(mode="before")
@classmethod
def validate_schema(cls, values: dict) -> Any:
- """Validate the pydantic schema.
+ """Validate the Pydantic schema.
Args:
values: The values to validate.
@@ -239,7 +238,7 @@ class PydanticOutputFunctionsParser(OutputFunctionsParser):
The validated values.
Raises:
- ValueError: If the schema is not a pydantic schema.
+ ValueError: If the schema is not a Pydantic schema.
"""
schema = values["pydantic_schema"]
if "args_only" not in values:
@@ -262,10 +261,10 @@ class PydanticOutputFunctionsParser(OutputFunctionsParser):
Args:
result: The result of the LLM call.
- partial: Whether to parse partial JSON objects. Default is False.
+ partial: Whether to parse partial JSON objects.
Raises:
- ValueError: If the pydantic schema is not valid.
+ ValueError: If the Pydantic schema is not valid.
Returns:
The parsed JSON object.
@@ -288,13 +287,13 @@ class PydanticOutputFunctionsParser(OutputFunctionsParser):
elif issubclass(pydantic_schema, BaseModelV1):
pydantic_args = pydantic_schema.parse_raw(args)
else:
- msg = f"Unsupported pydantic schema: {pydantic_schema}"
+ msg = f"Unsupported Pydantic schema: {pydantic_schema}"
raise ValueError(msg)
return pydantic_args
class PydanticAttrOutputFunctionsParser(PydanticOutputFunctionsParser):
- """Parse an output as an attribute of a pydantic object."""
+ """Parse an output as an attribute of a Pydantic object."""
attr_name: str
"""The name of the attribute to return."""
@@ -305,7 +304,7 @@ class PydanticAttrOutputFunctionsParser(PydanticOutputFunctionsParser):
Args:
result: The result of the LLM call.
- partial: Whether to parse partial JSON objects. Default is False.
+ partial: Whether to parse partial JSON objects.
Returns:
The parsed JSON object.
diff --git a/libs/core/langchain_core/output_parsers/openai_tools.py b/libs/core/langchain_core/output_parsers/openai_tools.py
index 5f44b916ae2..23884abdfd3 100644
--- a/libs/core/langchain_core/output_parsers/openai_tools.py
+++ b/libs/core/langchain_core/output_parsers/openai_tools.py
@@ -31,10 +31,9 @@ def parse_tool_call(
Args:
raw_tool_call: The raw tool call to parse.
- partial: Whether to parse partial JSON. Default is False.
+ partial: Whether to parse partial JSON.
strict: Whether to allow non-JSON-compliant strings.
- Default is False.
- return_id: Whether to return the tool call id. Default is True.
+ return_id: Whether to return the tool call id.
Returns:
The parsed tool call.
@@ -105,10 +104,9 @@ def parse_tool_calls(
Args:
raw_tool_calls: The raw tool calls to parse.
- partial: Whether to parse partial JSON. Default is False.
+ partial: Whether to parse partial JSON.
strict: Whether to allow non-JSON-compliant strings.
- Default is False.
- return_id: Whether to return the tool call id. Default is True.
+ return_id: Whether to return the tool call id.
Returns:
The parsed tool calls.
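A hedged sketch of `parse_tool_call` on a raw OpenAI-style tool call (the id, function name, and arguments are made-up values):

```python
from langchain_core.output_parsers.openai_tools import parse_tool_call

raw_tool_call = {
    "id": "call_abc123",  # illustrative id
    "function": {
        "name": "get_weather",  # illustrative function name
        "arguments": '{"city": "Paris"}',
    },
}

# With return_id=True (the default) the tool call id is carried through
parsed = parse_tool_call(raw_tool_call, return_id=True)
```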
@@ -165,7 +163,6 @@ class JsonOutputToolsParser(BaseCumulativeTransformOutputParser[Any]):
If `True`, the output will be a JSON object containing
all the keys that have been returned so far.
If `False`, the output will be the full JSON object.
- Default is False.
Returns:
The parsed tool calls.
@@ -227,9 +224,8 @@ class JsonOutputKeyToolsParser(JsonOutputToolsParser):
result: The result of the LLM call.
partial: Whether to parse partial JSON.
If `True`, the output will be a JSON object containing
- all the keys that have been returned so far.
+ all the keys that have been returned so far.
If `False`, the output will be the full JSON object.
- Default is False.
Raises:
OutputParserException: If the generation is not a chat generation.
@@ -311,9 +307,8 @@ class PydanticToolsParser(JsonOutputToolsParser):
result: The result of the LLM call.
partial: Whether to parse partial JSON.
If `True`, the output will be a JSON object containing
- all the keys that have been returned so far.
+ all the keys that have been returned so far.
If `False`, the output will be the full JSON object.
- Default is False.
Returns:
The parsed Pydantic objects.
diff --git a/libs/core/langchain_core/output_parsers/pydantic.py b/libs/core/langchain_core/output_parsers/pydantic.py
index 13eee343cd2..f9076a8158f 100644
--- a/libs/core/langchain_core/output_parsers/pydantic.py
+++ b/libs/core/langchain_core/output_parsers/pydantic.py
@@ -17,10 +17,10 @@ from langchain_core.utils.pydantic import (
class PydanticOutputParser(JsonOutputParser, Generic[TBaseModel]):
- """Parse an output using a pydantic model."""
+ """Parse an output using a Pydantic model."""
pydantic_object: Annotated[type[TBaseModel], SkipValidation()]
- """The pydantic model to parse."""
+ """The Pydantic model to parse."""
def _parse_obj(self, obj: dict) -> TBaseModel:
try:
@@ -45,21 +45,20 @@ class PydanticOutputParser(JsonOutputParser, Generic[TBaseModel]):
def parse_result(
self, result: list[Generation], *, partial: bool = False
) -> TBaseModel | None:
- """Parse the result of an LLM call to a pydantic object.
+ """Parse the result of an LLM call to a Pydantic object.
Args:
result: The result of the LLM call.
partial: Whether to parse partial JSON objects.
If `True`, the output will be a JSON object containing
all the keys that have been returned so far.
- Defaults to `False`.
Raises:
- OutputParserException: If the result is not valid JSON
- or does not conform to the pydantic model.
+ `OutputParserException`: If the result is not valid JSON
+ or does not conform to the Pydantic model.
Returns:
- The parsed pydantic object.
+ The parsed Pydantic object.
"""
try:
json_object = super().parse_result(result)
@@ -70,13 +69,13 @@ class PydanticOutputParser(JsonOutputParser, Generic[TBaseModel]):
raise
def parse(self, text: str) -> TBaseModel:
- """Parse the output of an LLM call to a pydantic object.
+ """Parse the output of an LLM call to a Pydantic object.
Args:
text: The output of the LLM call.
Returns:
- The parsed pydantic object.
+ The parsed Pydantic object.
"""
return super().parse(text)
@@ -87,7 +86,7 @@ class PydanticOutputParser(JsonOutputParser, Generic[TBaseModel]):
The format instructions for the JSON output.
"""
# Copy schema to avoid altering original Pydantic schema.
- schema = dict(self.pydantic_object.model_json_schema().items())
+ schema = dict(self._get_schema(self.pydantic_object).items())
# Remove extraneous fields.
reduced_schema = schema
@@ -107,7 +106,7 @@ class PydanticOutputParser(JsonOutputParser, Generic[TBaseModel]):
@property
@override
def OutputType(self) -> type[TBaseModel]:
- """Return the pydantic model."""
+ """Return the Pydantic model."""
return self.pydantic_object
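As a short sketch of the Pydantic parser discussed in this hunk (the `Joke` model and the JSON payload are invented for illustration):

```python
from pydantic import BaseModel, Field

from langchain_core.output_parsers import PydanticOutputParser


class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")


parser = PydanticOutputParser(pydantic_object=Joke)

# Embeds the reduced JSON schema into the format instructions for the prompt
instructions = parser.get_format_instructions()

# Parses model output into a validated Joke instance
joke = parser.parse(
    '{"setup": "Why did the chicken cross the road?", '
    '"punchline": "To get to the other side."}'
)
```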
diff --git a/libs/core/langchain_core/output_parsers/string.py b/libs/core/langchain_core/output_parsers/string.py
index 566daa25d82..4b189e1c467 100644
--- a/libs/core/langchain_core/output_parsers/string.py
+++ b/libs/core/langchain_core/output_parsers/string.py
@@ -6,20 +6,20 @@ from langchain_core.output_parsers.transform import BaseTransformOutputParser
class StrOutputParser(BaseTransformOutputParser[str]):
- """OutputParser that parses LLMResult into the top likely string."""
+ """OutputParser that parses `LLMResult` into the top likely string."""
@classmethod
def is_lc_serializable(cls) -> bool:
- """StrOutputParser is serializable.
+ """`StrOutputParser` is serializable.
Returns:
- True
+ `True`
"""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "output_parser"]`
diff --git a/libs/core/langchain_core/output_parsers/xml.py b/libs/core/langchain_core/output_parsers/xml.py
index 718145ebb58..55e93542d7f 100644
--- a/libs/core/langchain_core/output_parsers/xml.py
+++ b/libs/core/langchain_core/output_parsers/xml.py
@@ -43,19 +43,19 @@ class _StreamingParser:
"""Streaming parser for XML.
This implementation is pulled into a class to avoid implementation
- drift between transform and atransform of the XMLOutputParser.
+ drift between transform and atransform of the `XMLOutputParser`.
"""
def __init__(self, parser: Literal["defusedxml", "xml"]) -> None:
"""Initialize the streaming parser.
Args:
- parser: Parser to use for XML parsing. Can be either 'defusedxml' or 'xml'.
- See documentation in XMLOutputParser for more information.
+ parser: Parser to use for XML parsing. Can be either `'defusedxml'` or
+ `'xml'`. See documentation in `XMLOutputParser` for more information.
Raises:
- ImportError: If defusedxml is not installed and the defusedxml
- parser is requested.
+ ImportError: If `defusedxml` is not installed and the `defusedxml` parser is
+ requested.
"""
if parser == "defusedxml":
if not _HAS_DEFUSEDXML:
@@ -79,10 +79,10 @@ class _StreamingParser:
"""Parse a chunk of text.
Args:
- chunk: A chunk of text to parse. This can be a string or a BaseMessage.
+ chunk: A chunk of text to parse. This can be a `str` or a `BaseMessage`.
Yields:
- A dictionary representing the parsed XML element.
+ A `dict` representing the parsed XML element.
Raises:
xml.etree.ElementTree.ParseError: If the XML is not well-formed.
@@ -147,46 +147,49 @@ class _StreamingParser:
class XMLOutputParser(BaseTransformOutputParser):
- """Parse an output using xml format."""
+ """Parse an output using xml format.
+
+ Returns a dictionary of tags.
+ """
tags: list[str] | None = None
"""Tags to tell the LLM to expect in the XML output.
Note this may not be perfect depending on the LLM implementation.
- For example, with tags=["foo", "bar", "baz"]:
+ For example, with `tags=["foo", "bar", "baz"]`:
1. A well-formatted XML instance:
- "\n \n \n \n"
+ `"\n \n \n \n"`
2. A badly-formatted XML instance (missing closing tag for 'bar'):
- "\n \n "
+ `"\n \n "`
3. A badly-formatted XML instance (unexpected 'tag' element):
- "\n \n \n"
+ `"\n \n \n"`
"""
encoding_matcher: re.Pattern = re.compile(
r"<([^>]*encoding[^>]*)>\n(.*)", re.MULTILINE | re.DOTALL
)
parser: Literal["defusedxml", "xml"] = "defusedxml"
- """Parser to use for XML parsing. Can be either 'defusedxml' or 'xml'.
+ """Parser to use for XML parsing. Can be either `'defusedxml'` or `'xml'`.
- * 'defusedxml' is the default parser and is used to prevent XML vulnerabilities
- present in some distributions of Python's standard library xml.
- `defusedxml` is a wrapper around the standard library parser that
- sets up the parser with secure defaults.
- * 'xml' is the standard library parser.
+ * `'defusedxml'` is the default parser and is used to prevent XML vulnerabilities
+ present in some distributions of Python's standard library xml.
+ `defusedxml` is a wrapper around the standard library parser that
+ sets up the parser with secure defaults.
+ * `'xml'` is the standard library parser.
- Use `xml` only if you are sure that your distribution of the standard library
- is not vulnerable to XML vulnerabilities.
+ Use `xml` only if you are sure that your distribution of the standard library is not
+ susceptible to XML vulnerabilities.
Please review the following resources for more information:
* https://docs.python.org/3/library/xml.html#xml-vulnerabilities
* https://github.com/tiran/defusedxml
- The standard library relies on libexpat for parsing XML:
- https://github.com/libexpat/libexpat
+ The standard library relies on [`libexpat`](https://github.com/libexpat/libexpat)
+ for parsing XML.
"""
def get_format_instructions(self) -> str:
@@ -200,12 +203,12 @@ class XMLOutputParser(BaseTransformOutputParser):
text: The output of an LLM call.
Returns:
- A dictionary representing the parsed XML.
+ A `dict` representing the parsed XML.
Raises:
OutputParserException: If the XML is not well-formed.
- ImportError: If defusedxml is not installed and the defusedxml
- parser is requested.
+ ImportError: If `defusedxml` is not installed and the `defusedxml` parser is
+ requested.
"""
# Try to find XML string within triple backticks
# Imports are temporarily placed here to avoid issue with caching on CI
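A short sketch of `XMLOutputParser` with the `tags` option discussed above, assuming the default `defusedxml` parser is installed; the XML payload is invented:

```python
from langchain_core.output_parsers import XMLOutputParser

parser = XMLOutputParser(tags=["foo", "bar", "baz"])

# Returns a nested dictionary keyed by tag names
result = parser.parse("<foo>\n  <bar>\n    <baz>hello</baz>\n  </bar>\n</foo>")
```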
diff --git a/libs/core/langchain_core/outputs/generation.py b/libs/core/langchain_core/outputs/generation.py
index 960563dac99..5fbd9c7e1b0 100644
--- a/libs/core/langchain_core/outputs/generation.py
+++ b/libs/core/langchain_core/outputs/generation.py
@@ -11,9 +11,8 @@ from langchain_core.utils._merge import merge_dicts
class Generation(Serializable):
"""A single text generation output.
- Generation represents the response from an
- `"old-fashioned" LLM __` that
- generates regular text (not chat messages).
+ Generation represents the response from an "old-fashioned" LLM (string-in,
+ string-out) that generates regular text (not chat messages).
This model is used internally by chat model and will eventually
be mapped to a more general `LLMResult` object, and then projected into
@@ -21,8 +20,7 @@ class Generation(Serializable):
LangChain users working with chat models will usually access information via
`AIMessage` (returned from runnable interfaces) or `LLMResult` (available
- via callbacks). Please refer the `AIMessage` and `LLMResult` schema documentation
- for more information.
+ via callbacks). Please refer to `AIMessage` and `LLMResult` for more information.
"""
text: str
@@ -35,16 +33,18 @@ class Generation(Serializable):
"""
type: Literal["Generation"] = "Generation"
"""Type is used exclusively for serialization purposes.
- Set to "Generation" for this class."""
+
+ Set to "Generation" for this class.
+ """
@classmethod
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "output"]`
@@ -53,7 +53,7 @@ class Generation(Serializable):
class GenerationChunk(Generation):
- """Generation chunk, which can be concatenated with other Generation chunks."""
+ """`GenerationChunk`, which can be concatenated with other Generation chunks."""
def __add__(self, other: GenerationChunk) -> GenerationChunk:
"""Concatenate two `GenerationChunk`s.
diff --git a/libs/core/langchain_core/outputs/llm_result.py b/libs/core/langchain_core/outputs/llm_result.py
index ddb12a77f6d..4eb3160f397 100644
--- a/libs/core/langchain_core/outputs/llm_result.py
+++ b/libs/core/langchain_core/outputs/llm_result.py
@@ -97,7 +97,7 @@ class LLMResult(BaseModel):
other: Another `LLMResult` object to compare against.
Returns:
- True if the generations and `llm_output` are equal, False otherwise.
+ `True` if the generations and `llm_output` are equal, `False` otherwise.
"""
if not isinstance(other, LLMResult):
return NotImplemented
diff --git a/libs/core/langchain_core/prompt_values.py b/libs/core/langchain_core/prompt_values.py
index 8380a3e067a..cb29070fa65 100644
--- a/libs/core/langchain_core/prompt_values.py
+++ b/libs/core/langchain_core/prompt_values.py
@@ -24,20 +24,18 @@ from langchain_core.messages import (
class PromptValue(Serializable, ABC):
"""Base abstract class for inputs to any language model.
- PromptValues can be converted to both LLM (pure text-generation) inputs and
- ChatModel inputs.
+ `PromptValues` can be converted to both LLM (pure text-generation) inputs and
+ chat model inputs.
"""
@classmethod
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
-
- This is used to determine the namespace of the object when serializing.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "prompt"]`
@@ -50,7 +48,7 @@ class PromptValue(Serializable, ABC):
@abstractmethod
def to_messages(self) -> list[BaseMessage]:
- """Return prompt as a list of Messages."""
+ """Return prompt as a list of messages."""
class StringPromptValue(PromptValue):
@@ -62,9 +60,7 @@ class StringPromptValue(PromptValue):
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
-
- This is used to determine the namespace of the object when serializing.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "prompts", "base"]`
@@ -99,9 +95,7 @@ class ChatPromptValue(PromptValue):
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
-
- This is used to determine the namespace of the object when serializing.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "prompts", "chat"]`
@@ -113,11 +107,11 @@ class ImageURL(TypedDict, total=False):
"""Image URL."""
detail: Literal["auto", "low", "high"]
- """Specifies the detail level of the image. Defaults to `'auto'`.
+ """Specifies the detail level of the image.
+
Can be `'auto'`, `'low'`, or `'high'`.
This follows OpenAI's Chat Completion API's image URL format.
-
"""
url: str
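A brief sketch of the two concrete `PromptValue` implementations and their conversions (the prompt text is illustrative):

```python
from langchain_core.messages import HumanMessage
from langchain_core.prompt_values import ChatPromptValue, StringPromptValue

# String prompt -> list of messages (a single HumanMessage)
string_value = StringPromptValue(text="What is 2 + 2?")
string_value.to_messages()

# Chat prompt -> single transcript-style string
chat_value = ChatPromptValue(messages=[HumanMessage("What is 2 + 2?")])
chat_value.to_string()
```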
diff --git a/libs/core/langchain_core/prompts/base.py b/libs/core/langchain_core/prompts/base.py
index 43964fe0349..f4cd86fb5cc 100644
--- a/libs/core/langchain_core/prompts/base.py
+++ b/libs/core/langchain_core/prompts/base.py
@@ -46,21 +46,27 @@ class BasePromptTemplate(
input_variables: list[str]
"""A list of the names of the variables whose values are required as inputs to the
- prompt."""
+ prompt.
+ """
optional_variables: list[str] = Field(default=[])
- """optional_variables: A list of the names of the variables for placeholder
- or MessagePlaceholder that are optional. These variables are auto inferred
- from the prompt and user need not provide them."""
+ """A list of the names of the variables for placeholder or `MessagePlaceholder` that
+ are optional.
+
+ These variables are automatically inferred from the prompt, so the user need
+ not provide them.
+ """
input_types: typing.Dict[str, Any] = Field(default_factory=dict, exclude=True) # noqa: UP006
"""A dictionary of the types of the variables the prompt template expects.
- If not provided, all variables are assumed to be strings."""
+
+ If not provided, all variables are assumed to be strings.
+ """
output_parser: BaseOutputParser | None = None
"""How to parse the output of calling an LLM on this formatted prompt."""
partial_variables: Mapping[str, Any] = Field(default_factory=dict)
"""A dictionary of the partial variables the prompt template carries.
- Partial variables populate the template so that you don't need to
- pass them in every time you call the prompt."""
+ Partial variables populate the template so that you don't need to pass them in every
+ time you call the prompt.
+ """
metadata: typing.Dict[str, Any] | None = None # noqa: UP006
"""Metadata to be used for tracing."""
tags: list[str] | None = None
@@ -96,7 +102,7 @@ class BasePromptTemplate(
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "prompt_template"]`
@@ -105,7 +111,7 @@ class BasePromptTemplate(
@classmethod
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
model_config = ConfigDict(
@@ -127,7 +133,7 @@ class BasePromptTemplate(
"""Get the input schema for the prompt.
Args:
- config: configuration for the prompt.
+ config: Configuration for the prompt.
Returns:
The input schema for the prompt.
@@ -195,8 +201,8 @@ class BasePromptTemplate(
"""Invoke the prompt.
Args:
- input: Dict, input to the prompt.
- config: RunnableConfig, configuration for the prompt.
+ input: Input to the prompt.
+ config: Configuration for the prompt.
Returns:
The output of the prompt.
@@ -221,8 +227,8 @@ class BasePromptTemplate(
"""Async invoke the prompt.
Args:
- input: Dict, input to the prompt.
- config: RunnableConfig, configuration for the prompt.
+ input: Input to the prompt.
+ config: Configuration for the prompt.
Returns:
The output of the prompt.
@@ -242,7 +248,7 @@ class BasePromptTemplate(
@abstractmethod
def format_prompt(self, **kwargs: Any) -> PromptValue:
- """Create Prompt Value.
+ """Create `PromptValue`.
Args:
**kwargs: Any arguments to be passed to the prompt template.
@@ -252,7 +258,7 @@ class BasePromptTemplate(
"""
async def aformat_prompt(self, **kwargs: Any) -> PromptValue:
- """Async create Prompt Value.
+ """Async create `PromptValue`.
Args:
**kwargs: Any arguments to be passed to the prompt template.
@@ -266,7 +272,7 @@ class BasePromptTemplate(
"""Return a partial of the prompt template.
Args:
- **kwargs: partial variables to set.
+ **kwargs: Partial variables to set.
Returns:
A partial of the prompt template.
@@ -296,9 +302,9 @@ class BasePromptTemplate(
A formatted string.
Example:
- ```python
- prompt.format(variable1="foo")
- ```
+ ```python
+ prompt.format(variable1="foo")
+ ```
"""
async def aformat(self, **kwargs: Any) -> FormatOutputType:
@@ -311,9 +317,9 @@ class BasePromptTemplate(
A formatted string.
Example:
- ```python
- await prompt.aformat(variable1="foo")
- ```
+ ```python
+ await prompt.aformat(variable1="foo")
+ ```
"""
return self.format(**kwargs)
@@ -348,9 +354,9 @@ class BasePromptTemplate(
NotImplementedError: If the prompt type is not implemented.
Example:
- ```python
- prompt.save(file_path="path/prompt.yaml")
- ```
+ ```python
+ prompt.save(file_path="path/prompt.yaml")
+ ```
"""
if self.partial_variables:
msg = "Cannot save prompt with partial variables."
@@ -402,23 +408,23 @@ def format_document(doc: Document, prompt: BasePromptTemplate[str]) -> str:
First, this pulls information from the document from two sources:
- 1. page_content:
- This takes the information from the `document.page_content`
- and assigns it to a variable named `page_content`.
- 2. metadata:
- This takes information from `document.metadata` and assigns
- it to variables of the same name.
+ 1. `page_content`:
+ This takes the information from the `document.page_content` and assigns it to a
+ variable named `page_content`.
+ 2. `metadata`:
+ This takes information from `document.metadata` and assigns it to variables of
+ the same name.
Those variables are then passed into the `prompt` to produce a formatted string.
Args:
- doc: Document, the page_content and metadata will be used to create
+ doc: `Document`, the `page_content` and `metadata` will be used to create
the final string.
- prompt: BasePromptTemplate, will be used to format the page_content
- and metadata into the final string.
+ prompt: `BasePromptTemplate`, will be used to format the `page_content`
+ and `metadata` into the final string.
Returns:
- string of the document formatted.
+ String of the document formatted.
Example:
```python
@@ -429,7 +435,6 @@ def format_document(doc: Document, prompt: BasePromptTemplate[str]) -> str:
prompt = PromptTemplate.from_template("Page {page}: {page_content}")
format_document(doc, prompt)
>>> "Page 1: This is a joke"
-
```
"""
return prompt.format(**_get_document_info(doc, prompt))
@@ -440,22 +445,22 @@ async def aformat_document(doc: Document, prompt: BasePromptTemplate[str]) -> st
First, this pulls information from the document from two sources:
- 1. page_content:
- This takes the information from the `document.page_content`
- and assigns it to a variable named `page_content`.
- 2. metadata:
- This takes information from `document.metadata` and assigns
- it to variables of the same name.
+ 1. `page_content`:
+ This takes the information from the `document.page_content` and assigns it to a
+ variable named `page_content`.
+ 2. `metadata`:
+ This takes information from `document.metadata` and assigns it to variables of
+ the same name.
Those variables are then passed into the `prompt` to produce a formatted string.
Args:
- doc: Document, the page_content and metadata will be used to create
+ doc: `Document`, the `page_content` and `metadata` will be used to create
the final string.
- prompt: BasePromptTemplate, will be used to format the page_content
- and metadata into the final string.
+ prompt: `BasePromptTemplate`, will be used to format the `page_content`
+ and `metadata` into the final string.
Returns:
- string of the document formatted.
+ String of the document formatted.
"""
return await prompt.aformat(**_get_document_info(doc, prompt))
diff --git a/libs/core/langchain_core/prompts/chat.py b/libs/core/langchain_core/prompts/chat.py
index 4c5d1f7810b..ac1d158e15d 100644
--- a/libs/core/langchain_core/prompts/chat.py
+++ b/libs/core/langchain_core/prompts/chat.py
@@ -135,7 +135,7 @@ class MessagesPlaceholder(BaseMessagePromptTemplate):
n_messages: PositiveInt | None = None
"""Maximum number of messages to include. If `None`, then will include all.
- Defaults to `None`."""
+ """
def __init__(
self, variable_name: str, *, optional: bool = False, **kwargs: Any
@@ -147,7 +147,6 @@ class MessagesPlaceholder(BaseMessagePromptTemplate):
optional: If `True` format_messages can be called with no arguments and will
return an empty list. If `False` then a named argument with name
`variable_name` must be passed in, even if the value is an empty list.
- Defaults to `False`.]
"""
# mypy can't detect the init which is defined in the parent class
# b/c these are BaseModel classes.
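As a usage sketch of the `optional` and `n_messages` options described above (the conversation is made up):

```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        MessagesPlaceholder("history", optional=True, n_messages=2),
        ("human", "{question}"),
    ]
)

# Only the last two history messages are kept because n_messages=2;
# omitting "history" entirely is fine because optional=True.
prompt.invoke(
    {
        "history": [
            HumanMessage("Hi"),
            AIMessage("Hello!"),
            HumanMessage("How are you?"),
        ],
        "question": "What did I just say?",
    }
)
```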
@@ -195,7 +194,7 @@ class MessagesPlaceholder(BaseMessagePromptTemplate):
"""Human-readable representation.
Args:
- html: Whether to format as HTML. Defaults to `False`.
+ html: Whether to format as HTML.
Returns:
Human-readable representation.
@@ -235,13 +234,13 @@ class BaseStringMessagePromptTemplate(BaseMessagePromptTemplate, ABC):
Args:
template: a template.
- template_format: format of the template. Defaults to "f-string".
+ template_format: format of the template.
partial_variables: A dictionary of variables that can be used to partially
fill in the template. For example, if the template is
`"{variable1} {variable2}"`, and `partial_variables` is
`{"variable1": "foo"}`, then the final prompt will be
`"foo {variable2}"`.
- Defaults to `None`.
+
**kwargs: keyword arguments to pass to the constructor.
Returns:
@@ -330,7 +329,7 @@ class BaseStringMessagePromptTemplate(BaseMessagePromptTemplate, ABC):
"""Human-readable representation.
Args:
- html: Whether to format as HTML. Defaults to `False`.
+ html: Whether to format as HTML.
Returns:
Human-readable representation.
@@ -412,9 +411,9 @@ class _StringImageMessagePromptTemplate(BaseMessagePromptTemplate):
Args:
template: a template.
template_format: format of the template.
- Options are: 'f-string', 'mustache', 'jinja2'. Defaults to "f-string".
+ Options are: 'f-string', 'mustache', 'jinja2'.
partial_variables: A dictionary of variables that can be used too partially.
- Defaults to `None`.
+
**kwargs: keyword arguments to pass to the constructor.
Returns:
@@ -637,7 +636,7 @@ class _StringImageMessagePromptTemplate(BaseMessagePromptTemplate):
"""Human-readable representation.
Args:
- html: Whether to format as HTML. Defaults to `False`.
+ html: Whether to format as HTML.
Returns:
Human-readable representation.
@@ -750,7 +749,7 @@ class BaseChatPromptTemplate(BasePromptTemplate, ABC):
"""Human-readable representation.
Args:
- html: Whether to format as HTML. Defaults to `False`.
+ html: Whether to format as HTML.
Returns:
Human-readable representation.
@@ -777,42 +776,36 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
Use to create flexible templated prompts for chat models.
- Examples:
- !!! warning "Behavior changed in 0.2.24"
- You can pass any Message-like formats supported by
- `ChatPromptTemplate.from_messages()` directly to `ChatPromptTemplate()`
- init.
+ ```python
+ from langchain_core.prompts import ChatPromptTemplate
- ```python
- from langchain_core.prompts import ChatPromptTemplate
+ template = ChatPromptTemplate(
+ [
+ ("system", "You are a helpful AI bot. Your name is {name}."),
+ ("human", "Hello, how are you doing?"),
+ ("ai", "I'm doing well, thanks!"),
+ ("human", "{user_input}"),
+ ]
+ )
- template = ChatPromptTemplate(
- [
- ("system", "You are a helpful AI bot. Your name is {name}."),
- ("human", "Hello, how are you doing?"),
- ("ai", "I'm doing well, thanks!"),
- ("human", "{user_input}"),
- ]
- )
+ prompt_value = template.invoke(
+ {
+ "name": "Bob",
+ "user_input": "What is your name?",
+ }
+ )
+ # Output:
+ # ChatPromptValue(
+ # messages=[
+ # SystemMessage(content='You are a helpful AI bot. Your name is Bob.'),
+ # HumanMessage(content='Hello, how are you doing?'),
+ # AIMessage(content="I'm doing well, thanks!"),
+ # HumanMessage(content='What is your name?')
+ # ]
+ # )
+ ```
- prompt_value = template.invoke(
- {
- "name": "Bob",
- "user_input": "What is your name?",
- }
- )
- # Output:
- # ChatPromptValue(
- # messages=[
- # SystemMessage(content='You are a helpful AI bot. Your name is Bob.'),
- # HumanMessage(content='Hello, how are you doing?'),
- # AIMessage(content="I'm doing well, thanks!"),
- # HumanMessage(content='What is your name?')
- # ]
- # )
- ```
-
- Messages Placeholder:
+ !!! note "Messages Placeholder"
```python
# In addition to Human/AI/Tool/Function messages,
@@ -853,13 +846,12 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
# )
```
- Single-variable template:
+ !!! note "Single-variable template"
If your prompt has only a single input variable (i.e., 1 instance of "{variable_name}"),
and you invoke the template with a non-dict object, the prompt template will
inject the provided argument into that variable location.
-
```python
from langchain_core.prompts import ChatPromptTemplate
@@ -899,25 +891,35 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
"""Create a chat prompt template from a variety of message formats.
Args:
- messages: sequence of message representations.
+ messages: Sequence of message representations.
+
A message can be represented using the following formats:
- (1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
- (message type, template); e.g., ("human", "{user_input}"),
- (4) 2-tuple of (message class, template), (5) a string which is
- shorthand for ("human", template); e.g., "{user_input}".
- template_format: format of the template. Defaults to "f-string".
+
+ 1. `BaseMessagePromptTemplate`
+ 2. `BaseMessage`
+ 3. 2-tuple of `(message type, template)`; e.g.,
+ `("human", "{user_input}")`
+ 4. 2-tuple of `(message class, template)`
+ 5. A string which is shorthand for `("human", template)`; e.g.,
+ `"{user_input}"`
+ template_format: Format of the template.
input_variables: A list of the names of the variables whose values are
required as inputs to the prompt.
optional_variables: A list of the names of the variables for placeholder
or MessagePlaceholder that are optional.
+
These variables are auto inferred from the prompt and user need not
provide them.
partial_variables: A dictionary of the partial variables the prompt
- template carries. Partial variables populate the template so that you
- don't need to pass them in every time you call the prompt.
+ template carries.
+
+ Partial variables populate the template so that you don't need to pass
+ them in every time you call the prompt.
validate_template: Whether to validate the template.
input_types: A dictionary of the types of the variables the prompt template
- expects. If not provided, all variables are assumed to be strings.
+ expects.
+
+ If not provided, all variables are assumed to be strings.
Examples:
Instantiation from a list of message templates:
@@ -971,7 +973,7 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "prompts", "chat"]`
@@ -1122,13 +1124,18 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
)
```
Args:
- messages: sequence of message representations.
+ messages: Sequence of message representations.
+
A message can be represented using the following formats:
- (1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
- (message type, template); e.g., ("human", "{user_input}"),
- (4) 2-tuple of (message class, template), (5) a string which is
- shorthand for ("human", template); e.g., "{user_input}".
- template_format: format of the template. Defaults to "f-string".
+
+ 1. `BaseMessagePromptTemplate`
+ 2. `BaseMessage`
+ 3. 2-tuple of `(message type, template)`; e.g.,
+ `("human", "{user_input}")`
+ 4. 2-tuple of `(message class, template)`
+ 5. A string which is shorthand for `("human", template)`; e.g.,
+ `"{user_input}"`
+ template_format: format of the template.
Returns:
a chat prompt template.
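A compact sketch mixing the message representations enumerated above in a single `from_messages` call (the template text is illustrative):

```python
from langchain_core.messages import AIMessage
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful AI bot named {name}."),  # (message type, template)
        AIMessage("I'm ready to help."),  # BaseMessage instance
        "{user_input}",  # shorthand for ("human", template)
    ]
)

prompt.invoke({"name": "Bob", "user_input": "What is your name?"})
```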
@@ -1239,7 +1246,7 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
"""Extend the chat template with a sequence of messages.
Args:
- messages: sequence of message representations to append.
+ messages: Sequence of message representations to append.
"""
self.messages.extend(
[_convert_to_message_template(message) for message in messages]
@@ -1287,7 +1294,7 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
"""Human-readable representation.
Args:
- html: Whether to format as HTML. Defaults to `False`.
+ html: Whether to format as HTML.
Returns:
Human-readable representation.
@@ -1306,7 +1313,7 @@ def _create_template_from_message_type(
Args:
message_type: str the type of the message template (e.g., "human", "ai", etc.)
template: str the template string.
- template_format: format of the template. Defaults to "f-string".
+ template_format: format of the template.
Returns:
a message prompt template of the appropriate type.
@@ -1336,11 +1343,25 @@ def _create_template_from_message_type(
raise ValueError(msg)
var_name = template[1:-1]
message = MessagesPlaceholder(variable_name=var_name, optional=True)
- elif len(template) == 2 and isinstance(template[1], bool):
- var_name_wrapped, is_optional = template
+ else:
+ try:
+ var_name_wrapped, is_optional = template
+ except ValueError as e:
+ msg = (
+ "Unexpected arguments for placeholder message type."
+ " Expected either a single string variable name"
+ " or a list of [variable_name: str, is_optional: bool]."
+ f" Got: {template}"
+ )
+ raise ValueError(msg) from e
+
+ if not isinstance(is_optional, bool):
+ msg = f"Expected is_optional to be a boolean. Got: {is_optional}"
+ raise ValueError(msg) # noqa: TRY004
+
if not isinstance(var_name_wrapped, str):
msg = f"Expected variable name to be a string. Got: {var_name_wrapped}"
- raise ValueError(msg) # noqa:TRY004
+ raise ValueError(msg) # noqa: TRY004
if var_name_wrapped[0] != "{" or var_name_wrapped[-1] != "}":
msg = (
f"Invalid placeholder template: {var_name_wrapped}."
@@ -1350,14 +1371,6 @@ def _create_template_from_message_type(
var_name = var_name_wrapped[1:-1]
message = MessagesPlaceholder(variable_name=var_name, optional=is_optional)
- else:
- msg = (
- "Unexpected arguments for placeholder message type."
- " Expected either a single string variable name"
- " or a list of [variable_name: str, is_optional: bool]."
- f" Got: {template}"
- )
- raise ValueError(msg)
else:
msg = (
f"Unexpected message type: {message_type}. Use one of 'human',"
@@ -1383,7 +1396,7 @@ def _convert_to_message_template(
Args:
message: a representation of a message in one of the supported formats.
- template_format: format of the template. Defaults to "f-string".
+ template_format: format of the template.
Returns:
an instance of a message or a message template.
@@ -1411,10 +1424,11 @@ def _convert_to_message_template(
)
raise ValueError(msg)
message = (message["role"], message["content"])
- if len(message) != 2:
+ try:
+ message_type_str, template = message
+ except ValueError as e:
msg = f"Expected 2-tuple of (role, template), got {message}"
- raise ValueError(msg)
- message_type_str, template = message
+ raise ValueError(msg) from e
if isinstance(message_type_str, str):
message_ = _create_template_from_message_type(
message_type_str, template, template_format=template_format
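A short sketch of the dict and 2-tuple message forms normalized above (role and content values are examples only):

```python
from langchain_core.prompts import ChatPromptTemplate

# Dicts with "role"/"content" keys are converted to (role, template) 2-tuples
prompt = ChatPromptTemplate.from_messages(
    [
        {"role": "system", "content": "You are a helpful assistant."},
        ("human", "{question}"),
    ]
)
```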
diff --git a/libs/core/langchain_core/prompts/dict.py b/libs/core/langchain_core/prompts/dict.py
index a8b0094f78e..1d6f76384ed 100644
--- a/libs/core/langchain_core/prompts/dict.py
+++ b/libs/core/langchain_core/prompts/dict.py
@@ -69,12 +69,12 @@ class DictPromptTemplate(RunnableSerializable[dict, dict]):
@classmethod
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain_core", "prompts", "dict"]`
@@ -85,7 +85,7 @@ class DictPromptTemplate(RunnableSerializable[dict, dict]):
"""Human-readable representation.
Args:
- html: Whether to format as HTML. Defaults to `False`.
+ html: Whether to format as HTML.
Returns:
Human-readable representation.
diff --git a/libs/core/langchain_core/prompts/few_shot_with_templates.py b/libs/core/langchain_core/prompts/few_shot_with_templates.py
index 7d0997da9d5..0693de2205f 100644
--- a/libs/core/langchain_core/prompts/few_shot_with_templates.py
+++ b/libs/core/langchain_core/prompts/few_shot_with_templates.py
@@ -46,7 +46,7 @@ class FewShotPromptWithTemplates(StringPromptTemplate):
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "prompts", "few_shot_with_templates"]`
diff --git a/libs/core/langchain_core/prompts/image.py b/libs/core/langchain_core/prompts/image.py
index 26987576641..c650a032e73 100644
--- a/libs/core/langchain_core/prompts/image.py
+++ b/libs/core/langchain_core/prompts/image.py
@@ -49,7 +49,7 @@ class ImagePromptTemplate(BasePromptTemplate[ImageURL]):
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "prompts", "image"]`
diff --git a/libs/core/langchain_core/prompts/loading.py b/libs/core/langchain_core/prompts/loading.py
index 0b9ec9eefa0..c1a95f63d36 100644
--- a/libs/core/langchain_core/prompts/loading.py
+++ b/libs/core/langchain_core/prompts/loading.py
@@ -139,7 +139,7 @@ def load_prompt(path: str | Path, encoding: str | None = None) -> BasePromptTemp
Args:
path: Path to the prompt file.
- encoding: Encoding of the file. Defaults to `None`.
+ encoding: Encoding of the file.
Returns:
A PromptTemplate object.
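A minimal usage sketch, assuming a hypothetical prompt file on disk:

```python
from langchain_core.prompts import load_prompt

# "prompts/joke.yaml" is a hypothetical path; encoding is optional and
# falls back to the platform default when omitted
prompt = load_prompt("prompts/joke.yaml", encoding="utf-8")
```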
diff --git a/libs/core/langchain_core/prompts/message.py b/libs/core/langchain_core/prompts/message.py
index d1938ff4d4c..bf52af49590 100644
--- a/libs/core/langchain_core/prompts/message.py
+++ b/libs/core/langchain_core/prompts/message.py
@@ -18,12 +18,12 @@ class BaseMessagePromptTemplate(Serializable, ABC):
@classmethod
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "prompts", "chat"]`
@@ -32,13 +32,13 @@ class BaseMessagePromptTemplate(Serializable, ABC):
@abstractmethod
def format_messages(self, **kwargs: Any) -> list[BaseMessage]:
- """Format messages from kwargs. Should return a list of BaseMessages.
+ """Format messages from kwargs. Should return a list of `BaseMessage` objects.
Args:
**kwargs: Keyword arguments to use for formatting.
Returns:
- List of BaseMessages.
+ List of `BaseMessage` objects.
"""
async def aformat_messages(self, **kwargs: Any) -> list[BaseMessage]:
@@ -48,7 +48,7 @@ class BaseMessagePromptTemplate(Serializable, ABC):
**kwargs: Keyword arguments to use for formatting.
Returns:
- List of BaseMessages.
+ List of `BaseMessage` objects.
"""
return self.format_messages(**kwargs)
@@ -68,7 +68,7 @@ class BaseMessagePromptTemplate(Serializable, ABC):
"""Human-readable representation.
Args:
- html: Whether to format as HTML. Defaults to `False`.
+ html: Whether to format as HTML.
Returns:
Human-readable representation.
diff --git a/libs/core/langchain_core/prompts/prompt.py b/libs/core/langchain_core/prompts/prompt.py
index 59fdf0882d7..6d486dc3aea 100644
--- a/libs/core/langchain_core/prompts/prompt.py
+++ b/libs/core/langchain_core/prompts/prompt.py
@@ -66,7 +66,7 @@ class PromptTemplate(StringPromptTemplate):
@classmethod
@override
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "prompts", "prompt"]`
@@ -220,7 +220,7 @@ class PromptTemplate(StringPromptTemplate):
example_separator: The separator to use in between examples. Defaults
to two new line characters.
prefix: String that should go before any examples. Generally includes
- examples. Default to an empty string.
+ examples.
Returns:
The final prompt generated.
@@ -275,13 +275,12 @@ class PromptTemplate(StringPromptTemplate):
Args:
template: The template to load.
template_format: The format of the template. Use `jinja2` for jinja2,
- `mustache` for mustache, and `f-string` for f-strings.
- Defaults to `f-string`.
+ `mustache` for mustache, and `f-string` for f-strings.
partial_variables: A dictionary of variables that can be used to partially
- fill in the template. For example, if the template is
- `"{variable1} {variable2}"`, and `partial_variables` is
- `{"variable1": "foo"}`, then the final prompt will be
- `"foo {variable2}"`. Defaults to `None`.
+ fill in the template. For example, if the template is
+ `"{variable1} {variable2}"`, and `partial_variables` is
+ `{"variable1": "foo"}`, then the final prompt will be
+ `"foo {variable2}"`.
**kwargs: Any other arguments to pass to the prompt template.
Returns:
diff --git a/libs/core/langchain_core/prompts/string.py b/libs/core/langchain_core/prompts/string.py
index 581cff8bf5e..c93615fdb30 100644
--- a/libs/core/langchain_core/prompts/string.py
+++ b/libs/core/langchain_core/prompts/string.py
@@ -4,7 +4,7 @@ from __future__ import annotations
import warnings
from abc import ABC
-from collections.abc import Callable
+from collections.abc import Callable, Sequence
from string import Formatter
from typing import Any, Literal
@@ -122,13 +122,16 @@ def mustache_formatter(template: str, /, **kwargs: Any) -> str:
def mustache_template_vars(
template: str,
) -> set[str]:
- """Get the variables from a mustache template.
+ """Get the top-level variables from a mustache template.
+
+ For nested variables like `{{person.name}}`, only the top-level
+ key (`person`) is returned.
Args:
template: The template string.
Returns:
- The variables from the template.
+ The top-level variables from the template.
"""
variables: set[str] = set()
section_depth = 0
@@ -149,9 +152,7 @@ def mustache_template_vars(
Defs = dict[str, "Defs"]
-def mustache_schema(
- template: str,
-) -> type[BaseModel]:
+def mustache_schema(template: str) -> type[BaseModel]:
"""Get the variables from a mustache template.
Args:
@@ -175,6 +176,11 @@ def mustache_schema(
fields[prefix] = False
elif type_ in {"variable", "no escape"}:
fields[prefix + tuple(key.split("."))] = True
+
+ for fkey, fval in fields.items():
+ fields[fkey] = fval and not any(
+ is_subsequence(fkey, k) for k in fields if k != fkey
+ )
defs: Defs = {} # None means leaf node
while fields:
field, is_leaf = fields.popitem()
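A small sketch of the nested-variable behavior described in the updated docstring (the names are illustrative):

```python
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "Hello {{person.name}} from {{person.city}}!", template_format="mustache"
)

# Only the top-level key is surfaced as an input variable
print(prompt.input_variables)  # ['person']
print(prompt.format(person={"name": "Ada", "city": "London"}))
```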
@@ -273,7 +279,7 @@ class StringPromptTemplate(BasePromptTemplate, ABC):
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "prompts", "base"]`
@@ -327,3 +333,12 @@ class StringPromptTemplate(BasePromptTemplate, ABC):
def pretty_print(self) -> None:
"""Print a pretty representation of the prompt."""
print(self.pretty_repr(html=is_interactive_env())) # noqa: T201
+
+
+def is_subsequence(child: Sequence, parent: Sequence) -> bool:
+ """Return True if child is subsequence of parent."""
+ if len(child) == 0 or len(parent) == 0:
+ return False
+ if len(parent) < len(child):
+ return False
+ return all(child[i] == parent[i] for i in range(len(child)))
diff --git a/libs/core/langchain_core/prompts/structured.py b/libs/core/langchain_core/prompts/structured.py
index 158c90be170..33888691f79 100644
--- a/libs/core/langchain_core/prompts/structured.py
+++ b/libs/core/langchain_core/prompts/structured.py
@@ -63,13 +63,13 @@ class StructuredPrompt(ChatPromptTemplate):
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
For example, if the class is `langchain.llms.openai.OpenAI`, then the
namespace is `["langchain", "llms", "openai"]`
Returns:
- The namespace of the langchain object.
+ The namespace of the LangChain object.
"""
return cls.__module__.split(".")
@@ -104,19 +104,23 @@ class StructuredPrompt(ChatPromptTemplate):
)
```
Args:
- messages: sequence of message representations.
+ messages: Sequence of message representations.
+
A message can be represented using the following formats:
- (1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
- (message type, template); e.g., ("human", "{user_input}"),
- (4) 2-tuple of (message class, template), (5) a string which is
- shorthand for ("human", template); e.g., "{user_input}"
- schema: a dictionary representation of function call, or a Pydantic model.
+
+ 1. `BaseMessagePromptTemplate`
+ 2. `BaseMessage`
+ 3. 2-tuple of `(message type, template)`; e.g.,
+ `("human", "{user_input}")`
+ 4. 2-tuple of `(message class, template)`
+ 5. A string which is shorthand for `("human", template)`; e.g.,
+ `"{user_input}"`
+ schema: A dictionary representation of a function call, or a Pydantic model.

**kwargs: Any additional kwargs to pass through to
`ChatModel.with_structured_output(schema, **kwargs)`.
Returns:
- a structured prompt template
-
+ A structured prompt template.
"""
return cls(messages, schema, **kwargs)
@@ -144,7 +148,7 @@ class StructuredPrompt(ChatPromptTemplate):
Args:
others: The language model to pipe the structured prompt to.
- name: The name of the pipeline. Defaults to `None`.
+ name: The name of the pipeline.
Returns:
A RunnableSequence object.
diff --git a/libs/core/langchain_core/pydantic_v1/__init__.py b/libs/core/langchain_core/pydantic_v1/__init__.py
deleted file mode 100644
index 1f7c9cb8699..00000000000
--- a/libs/core/langchain_core/pydantic_v1/__init__.py
+++ /dev/null
@@ -1,30 +0,0 @@
-"""Pydantic v1 compatibility shim."""
-
-from importlib import metadata
-
-from pydantic.v1 import * # noqa: F403
-
-from langchain_core._api.deprecation import warn_deprecated
-
-try:
- _PYDANTIC_MAJOR_VERSION: int = int(metadata.version("pydantic").split(".")[0])
-except metadata.PackageNotFoundError:
- _PYDANTIC_MAJOR_VERSION = 0
-
-warn_deprecated(
- "0.3.0",
- removal="1.0.0",
- alternative="pydantic.v1 or pydantic",
- message=(
- "As of langchain-core 0.3.0, LangChain uses pydantic v2 internally. "
- "The langchain_core.pydantic_v1 module was a "
- "compatibility shim for pydantic v1, and should no longer be used. "
- "Please update the code to import from Pydantic directly.\n\n"
- "For example, replace imports like: "
- "`from langchain_core.pydantic_v1 import BaseModel`\n"
- "with: `from pydantic import BaseModel`\n"
- "or the v1 compatibility namespace if you are working in a code base "
- "that has not been fully upgraded to pydantic 2 yet. "
- "\tfrom pydantic.v1 import BaseModel\n"
- ),
-)
diff --git a/libs/core/langchain_core/pydantic_v1/dataclasses.py b/libs/core/langchain_core/pydantic_v1/dataclasses.py
deleted file mode 100644
index cdcdb77e3a0..00000000000
--- a/libs/core/langchain_core/pydantic_v1/dataclasses.py
+++ /dev/null
@@ -1,23 +0,0 @@
-"""Pydantic v1 compatibility shim."""
-
-from pydantic.v1.dataclasses import * # noqa: F403
-
-from langchain_core._api import warn_deprecated
-
-warn_deprecated(
- "0.3.0",
- removal="1.0.0",
- alternative="pydantic.v1 or pydantic",
- message=(
- "As of langchain-core 0.3.0, LangChain uses pydantic v2 internally. "
- "The langchain_core.pydantic_v1 module was a "
- "compatibility shim for pydantic v1, and should no longer be used. "
- "Please update the code to import from Pydantic directly.\n\n"
- "For example, replace imports like: "
- "`from langchain_core.pydantic_v1 import BaseModel`\n"
- "with: `from pydantic import BaseModel`\n"
- "or the v1 compatibility namespace if you are working in a code base "
- "that has not been fully upgraded to pydantic 2 yet. "
- "\tfrom pydantic.v1 import BaseModel\n"
- ),
-)
diff --git a/libs/core/langchain_core/pydantic_v1/main.py b/libs/core/langchain_core/pydantic_v1/main.py
deleted file mode 100644
index 005ad4ed347..00000000000
--- a/libs/core/langchain_core/pydantic_v1/main.py
+++ /dev/null
@@ -1,23 +0,0 @@
-"""Pydantic v1 compatibility shim."""
-
-from pydantic.v1.main import * # noqa: F403
-
-from langchain_core._api import warn_deprecated
-
-warn_deprecated(
- "0.3.0",
- removal="1.0.0",
- alternative="pydantic.v1 or pydantic",
- message=(
- "As of langchain-core 0.3.0, LangChain uses pydantic v2 internally. "
- "The langchain_core.pydantic_v1 module was a "
- "compatibility shim for pydantic v1, and should no longer be used. "
- "Please update the code to import from Pydantic directly.\n\n"
- "For example, replace imports like: "
- "`from langchain_core.pydantic_v1 import BaseModel`\n"
- "with: `from pydantic import BaseModel`\n"
- "or the v1 compatibility namespace if you are working in a code base "
- "that has not been fully upgraded to pydantic 2 yet. "
- "\tfrom pydantic.v1 import BaseModel\n"
- ),
-)
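With the `langchain_core.pydantic_v1` shim removed, imports move to Pydantic directly, as the old deprecation message already advised. A before/after sketch:

```python
# Before (shim, now removed):
# from langchain_core.pydantic_v1 import BaseModel, Field

# After: import from Pydantic directly
from pydantic import BaseModel, Field


class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")


# Code bases not yet fully upgraded to Pydantic 2 can use the v1 namespace:
# from pydantic.v1 import BaseModel
```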
diff --git a/libs/core/langchain_core/rate_limiters.py b/libs/core/langchain_core/rate_limiters.py
index dd0753fbc41..986d3ff63b7 100644
--- a/libs/core/langchain_core/rate_limiters.py
+++ b/libs/core/langchain_core/rate_limiters.py
@@ -21,11 +21,8 @@ class BaseRateLimiter(abc.ABC):
Current limitations:
- Rate limiting information is not surfaced in tracing or callbacks. This means
- that the total time it takes to invoke a chat model will encompass both
- the time spent waiting for tokens and the time spent making the request.
-
-
- !!! version-added "Added in version 0.2.24"
+ that the total time it takes to invoke a chat model will encompass both
+ the time spent waiting for tokens and the time spent making the request.
"""
@abc.abstractmethod
@@ -33,18 +30,18 @@ class BaseRateLimiter(abc.ABC):
"""Attempt to acquire the necessary tokens for the rate limiter.
This method blocks until the required tokens are available if `blocking`
- is set to True.
+ is set to `True`.
- If `blocking` is set to False, the method will immediately return the result
+ If `blocking` is set to `False`, the method will immediately return the result
of the attempt to acquire the tokens.
Args:
blocking: If `True`, the method will block until the tokens are available.
If `False`, the method will return immediately with the result of
- the attempt. Defaults to `True`.
+ the attempt.
Returns:
- True if the tokens were successfully acquired, False otherwise.
+ `True` if the tokens were successfully acquired, `False` otherwise.
"""
@abc.abstractmethod
@@ -52,18 +49,18 @@ class BaseRateLimiter(abc.ABC):
"""Attempt to acquire the necessary tokens for the rate limiter.
This method blocks until the required tokens are available if `blocking`
- is set to True.
+ is set to `True`.
- If `blocking` is set to False, the method will immediately return the result
+ If `blocking` is set to `False`, the method will immediately return the result
of the attempt to acquire the tokens.
Args:
blocking: If `True`, the method will block until the tokens are available.
If `False`, the method will return immediately with the result of
- the attempt. Defaults to `True`.
+ the attempt.
Returns:
- True if the tokens were successfully acquired, False otherwise.
+ `True` if the tokens were successfully acquired, `False` otherwise.
"""
@@ -84,7 +81,7 @@ class InMemoryRateLimiter(BaseRateLimiter):
not enough tokens in the bucket, the request is blocked until there are
enough tokens.
- These *tokens* have NOTHING to do with LLM tokens. They are just
+ These tokens have nothing to do with LLM tokens. They are just
a way to keep track of how many requests can be made at a given time.
Current limitations:
@@ -109,7 +106,7 @@ class InMemoryRateLimiter(BaseRateLimiter):
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
- model_name="claude-3-opus-20240229", rate_limiter=rate_limiter
+ model_name="claude-sonnet-4-5-20250929", rate_limiter=rate_limiter
)
for _ in range(5):
@@ -118,9 +115,6 @@ class InMemoryRateLimiter(BaseRateLimiter):
toc = time.time()
print(toc - tic)
```
-
- !!! version-added "Added in version 0.2.24"
-
""" # noqa: E501
def __init__(
@@ -132,7 +126,7 @@ class InMemoryRateLimiter(BaseRateLimiter):
) -> None:
"""A rate limiter based on a token bucket.
- These *tokens* have NOTHING to do with LLM tokens. They are just
+ These tokens have nothing to do with LLM tokens. They are just
a way to keep track of how many requests can be made at a given time.
This rate limiter is designed to work in a threaded environment.
@@ -145,11 +139,11 @@ class InMemoryRateLimiter(BaseRateLimiter):
Args:
requests_per_second: The number of tokens to add per second to the bucket.
The tokens represent "credit" that can be used to make requests.
- check_every_n_seconds: check whether the tokens are available
+ check_every_n_seconds: Check whether the tokens are available
every this many seconds. Can be a float to represent
fractions of a second.
max_bucket_size: The maximum number of tokens that can be in the bucket.
- Must be at least 1. Used to prevent bursts of requests.
+ Must be at least `1`. Used to prevent bursts of requests.
"""
# Number of requests that we can make per second.
self.requests_per_second = requests_per_second
@@ -199,18 +193,18 @@ class InMemoryRateLimiter(BaseRateLimiter):
"""Attempt to acquire a token from the rate limiter.
This method blocks until the required tokens are available if `blocking`
- is set to True.
+ is set to `True`.
- If `blocking` is set to False, the method will immediately return the result
+ If `blocking` is set to `False`, the method will immediately return the result
of the attempt to acquire the tokens.
Args:
blocking: If `True`, the method will block until the tokens are available.
If `False`, the method will return immediately with the result of
- the attempt. Defaults to `True`.
+ the attempt.
Returns:
- True if the tokens were successfully acquired, False otherwise.
+ `True` if the tokens were successfully acquired, `False` otherwise.
"""
if not blocking:
return self._consume()
@@ -223,18 +217,18 @@ class InMemoryRateLimiter(BaseRateLimiter):
"""Attempt to acquire a token from the rate limiter. Async version.
This method blocks until the required tokens are available if `blocking`
- is set to True.
+ is set to `True`.
- If `blocking` is set to False, the method will immediately return the result
+ If `blocking` is set to `False`, the method will immediately return the result
of the attempt to acquire the tokens.
Args:
blocking: If `True`, the method will block until the tokens are available.
If `False`, the method will return immediately with the result of
- the attempt. Defaults to `True`.
+ the attempt.
Returns:
- True if the tokens were successfully acquired, False otherwise.
+ `True` if the tokens were successfully acquired, `False` otherwise.
"""
if not blocking:
return self._consume()
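A hedged usage sketch of the `acquire` semantics described above (parameter values are arbitrary):

```python
from langchain_core.rate_limiters import InMemoryRateLimiter

rate_limiter = InMemoryRateLimiter(
    requests_per_second=0.1,    # roughly one request every 10 seconds
    check_every_n_seconds=0.1,  # poll for available tokens every 100 ms
    max_bucket_size=10,         # cap bursts at 10 queued requests
)

# Blocking acquisition (the default): waits until a token is available
rate_limiter.acquire()

# Non-blocking: returns immediately with True/False
if not rate_limiter.acquire(blocking=False):
    print("No tokens available right now")
```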
diff --git a/libs/core/langchain_core/retrievers.py b/libs/core/langchain_core/retrievers.py
index 74b9eb5cdb6..7be6df9a727 100644
--- a/libs/core/langchain_core/retrievers.py
+++ b/libs/core/langchain_core/retrievers.py
@@ -7,7 +7,6 @@ the backbone of a retriever, but there are other types of retrievers as well.
from __future__ import annotations
-import warnings
from abc import ABC, abstractmethod
from inspect import signature
from typing import TYPE_CHECKING, Any
@@ -15,8 +14,6 @@ from typing import TYPE_CHECKING, Any
from pydantic import ConfigDict
from typing_extensions import Self, TypedDict, override
-from langchain_core._api import deprecated
-from langchain_core.callbacks import Callbacks
from langchain_core.callbacks.manager import AsyncCallbackManager, CallbackManager
from langchain_core.documents import Document
from langchain_core.runnables import (
@@ -53,25 +50,25 @@ class LangSmithRetrieverParams(TypedDict, total=False):
class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
- """Abstract base class for a Document retrieval system.
+ """Abstract base class for a document retrieval system.
A retrieval system is defined as something that can take string queries and return
- the most 'relevant' Documents from some source.
+ the most 'relevant' documents from some source.
Usage:
- A retriever follows the standard Runnable interface, and should be used
- via the standard Runnable methods of `invoke`, `ainvoke`, `batch`, `abatch`.
+ A retriever follows the standard `Runnable` interface, and should be used via the
+ standard `Runnable` methods of `invoke`, `ainvoke`, `batch`, `abatch`.
Implementation:
- When implementing a custom retriever, the class should implement
- the `_get_relevant_documents` method to define the logic for retrieving documents.
+ When implementing a custom retriever, the class should implement the
+ `_get_relevant_documents` method to define the logic for retrieving documents.
Optionally, an async native implementation can be provided by overriding the
`_aget_relevant_documents` method.
- Example: A retriever that returns the first 5 documents from a list of documents
+ !!! example "Retriever that returns the first 5 documents from a list of documents"
```python
from langchain_core.documents import Document
@@ -90,7 +87,7 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
return self.docs[:self.k]
```
- Example: A simple retriever based on a scikit-learn vectorizer
+ !!! example "Simple retriever based on a scikit-learn vectorizer"
```python
from sklearn.metrics.pairwise import cosine_similarity
@@ -121,16 +118,20 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
_new_arg_supported: bool = False
_expects_other_args: bool = False
tags: list[str] | None = None
- """Optional list of tags associated with the retriever. Defaults to `None`.
+ """Optional list of tags associated with the retriever.
+
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in `callbacks`.
+
You can use these to, e.g., identify a specific instance of a retriever with its
use case.
"""
metadata: dict[str, Any] | None = None
- """Optional metadata associated with the retriever. Defaults to `None`.
+ """Optional metadata associated with the retriever.
+
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in `callbacks`.
+
You can use these to, e.g., identify a specific instance of a retriever with its
use case.
"""
@@ -138,35 +139,6 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
@override
def __init_subclass__(cls, **kwargs: Any) -> None:
super().__init_subclass__(**kwargs)
- # Version upgrade for old retrievers that implemented the public
- # methods directly.
- if cls.get_relevant_documents != BaseRetriever.get_relevant_documents:
- warnings.warn(
- "Retrievers must implement abstract `_get_relevant_documents` method"
- " instead of `get_relevant_documents`",
- DeprecationWarning,
- stacklevel=4,
- )
- swap = cls.get_relevant_documents
- cls.get_relevant_documents = ( # type: ignore[method-assign]
- BaseRetriever.get_relevant_documents
- )
- cls._get_relevant_documents = swap # type: ignore[method-assign]
- if (
- hasattr(cls, "aget_relevant_documents")
- and cls.aget_relevant_documents != BaseRetriever.aget_relevant_documents
- ):
- warnings.warn(
- "Retrievers must implement abstract `_aget_relevant_documents` method"
- " instead of `aget_relevant_documents`",
- DeprecationWarning,
- stacklevel=4,
- )
- aswap = cls.aget_relevant_documents
- cls.aget_relevant_documents = ( # type: ignore[method-assign]
- BaseRetriever.aget_relevant_documents
- )
- cls._aget_relevant_documents = aswap # type: ignore[method-assign]
parameters = signature(cls._get_relevant_documents).parameters
cls._new_arg_supported = parameters.get("run_manager") is not None
if (
@@ -207,7 +179,7 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
Args:
input: The query string.
- config: Configuration for the retriever. Defaults to `None`.
+ config: Configuration for the retriever.
**kwargs: Additional arguments to pass to the retriever.
Returns:
@@ -268,7 +240,7 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
Args:
input: The query string.
- config: Configuration for the retriever. Defaults to `None`.
+ config: Configuration for the retriever.
**kwargs: Additional arguments to pass to the retriever.
Returns:
@@ -348,91 +320,3 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
query,
run_manager=run_manager.get_sync(),
)
-
- @deprecated(since="0.1.46", alternative="invoke", removal="1.0")
- def get_relevant_documents(
- self,
- query: str,
- *,
- callbacks: Callbacks = None,
- tags: list[str] | None = None,
- metadata: dict[str, Any] | None = None,
- run_name: str | None = None,
- **kwargs: Any,
- ) -> list[Document]:
- """Retrieve documents relevant to a query.
-
- Users should favor using `.invoke` or `.batch` rather than
- `get_relevant_documents directly`.
-
- Args:
- query: string to find relevant documents for.
- callbacks: Callback manager or list of callbacks. Defaults to `None`.
- tags: Optional list of tags associated with the retriever.
- These tags will be associated with each call to this retriever,
- and passed as arguments to the handlers defined in `callbacks`.
- Defaults to `None`.
- metadata: Optional metadata associated with the retriever.
- This metadata will be associated with each call to this retriever,
- and passed as arguments to the handlers defined in `callbacks`.
- Defaults to `None`.
- run_name: Optional name for the run. Defaults to `None`.
- **kwargs: Additional arguments to pass to the retriever.
-
- Returns:
- List of relevant documents.
- """
- config: RunnableConfig = {}
- if callbacks:
- config["callbacks"] = callbacks
- if tags:
- config["tags"] = tags
- if metadata:
- config["metadata"] = metadata
- if run_name:
- config["run_name"] = run_name
- return self.invoke(query, config, **kwargs)
-
- @deprecated(since="0.1.46", alternative="ainvoke", removal="1.0")
- async def aget_relevant_documents(
- self,
- query: str,
- *,
- callbacks: Callbacks = None,
- tags: list[str] | None = None,
- metadata: dict[str, Any] | None = None,
- run_name: str | None = None,
- **kwargs: Any,
- ) -> list[Document]:
- """Asynchronously get documents relevant to a query.
-
- Users should favor using `.ainvoke` or `.abatch` rather than
- `aget_relevant_documents directly`.
-
- Args:
- query: string to find relevant documents for.
- callbacks: Callback manager or list of callbacks.
- tags: Optional list of tags associated with the retriever.
- These tags will be associated with each call to this retriever,
- and passed as arguments to the handlers defined in `callbacks`.
- Defaults to `None`.
- metadata: Optional metadata associated with the retriever.
- This metadata will be associated with each call to this retriever,
- and passed as arguments to the handlers defined in `callbacks`.
- Defaults to `None`.
- run_name: Optional name for the run. Defaults to `None`.
- **kwargs: Additional arguments to pass to the retriever.
-
- Returns:
- List of relevant documents.
- """
- config: RunnableConfig = {}
- if callbacks:
- config["callbacks"] = callbacks
- if tags:
- config["tags"] = tags
- if metadata:
- config["metadata"] = metadata
- if run_name:
- config["run_name"] = run_name
- return await self.ainvoke(query, config, **kwargs)
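Since the deprecated `get_relevant_documents`/`aget_relevant_documents` methods are removed here, existing callers move to the standard `Runnable` methods. A minimal migration sketch with a toy retriever:

```python
from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class StaticRetriever(BaseRetriever):
    """Toy retriever used only to illustrate the migration."""

    docs: list[Document]

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> list[Document]:
        return self.docs


retriever = StaticRetriever(docs=[Document(page_content="hello world")])

# Before (removed): retriever.get_relevant_documents("hello", tags=["demo"])
# After: tags, metadata, callbacks, and run_name move into the config dict
docs = retriever.invoke("hello", config={"tags": ["demo"], "run_name": "demo-run"})
```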
diff --git a/libs/core/langchain_core/runnables/base.py b/libs/core/langchain_core/runnables/base.py
index d5ca1d48c7e..71fb4978cc0 100644
--- a/libs/core/langchain_core/runnables/base.py
+++ b/libs/core/langchain_core/runnables/base.py
@@ -118,6 +118,8 @@ if TYPE_CHECKING:
Other = TypeVar("Other")
+_RUNNABLE_GENERIC_NUM_ARGS = 2 # Input and Output
+
class Runnable(ABC, Generic[Input, Output]):
"""A unit of work that can be invoked, batched, streamed, transformed and composed.
@@ -147,11 +149,11 @@ class Runnable(ABC, Generic[Input, Output]):
the `input_schema` property, the `output_schema` property and `config_schema`
method.
- LCEL and Composition
- ====================
+ Composition
+ ===========
+
+ Runnable objects can be composed together to create chains in a declarative way.
- The LangChain Expression Language (LCEL) is a declarative way to compose
- `Runnable` objectsinto chains.
Any chain constructed this way will automatically have sync, async, batch, and
streaming support.
@@ -235,21 +237,21 @@ class Runnable(ABC, Generic[Input, Output]):
You can set the global debug flag to True to enable debug output for all chains:
- ```python
- from langchain_core.globals import set_debug
+ ```python
+ from langchain_core.globals import set_debug
- set_debug(True)
- ```
+ set_debug(True)
+ ```
Alternatively, you can pass existing or custom callbacks to any given chain:
- ```python
- from langchain_core.tracers import ConsoleCallbackHandler
+ ```python
+ from langchain_core.tracers import ConsoleCallbackHandler
- chain.invoke(..., config={"callbacks": [ConsoleCallbackHandler()]})
- ```
+ chain.invoke(..., config={"callbacks": [ConsoleCallbackHandler()]})
+ ```
- For a UI (and much more) checkout [LangSmith](https://docs.smith.langchain.com/).
+ For a UI (and much more) check out [LangSmith](https://docs.langchain.com/langsmith/home).
"""
@@ -304,20 +306,23 @@ class Runnable(ABC, Generic[Input, Output]):
TypeError: If the input type cannot be inferred.
"""
# First loop through all parent classes and if any of them is
- # a pydantic model, we will pick up the generic parameterization
+ # a Pydantic model, we will pick up the generic parameterization
# from that model via the __pydantic_generic_metadata__ attribute.
for base in self.__class__.mro():
if hasattr(base, "__pydantic_generic_metadata__"):
metadata = base.__pydantic_generic_metadata__
- if "args" in metadata and len(metadata["args"]) == 2:
+ if (
+ "args" in metadata
+ and len(metadata["args"]) == _RUNNABLE_GENERIC_NUM_ARGS
+ ):
return metadata["args"][0]
- # If we didn't find a pydantic model in the parent classes,
+ # If we didn't find a Pydantic model in the parent classes,
# then loop through __orig_bases__. This corresponds to
# Runnables that are not pydantic models.
for cls in self.__class__.__orig_bases__: # type: ignore[attr-defined]
type_args = get_args(cls)
- if type_args and len(type_args) == 2:
+ if type_args and len(type_args) == _RUNNABLE_GENERIC_NUM_ARGS:
return type_args[0]
msg = (
@@ -340,12 +345,15 @@ class Runnable(ABC, Generic[Input, Output]):
for base in self.__class__.mro():
if hasattr(base, "__pydantic_generic_metadata__"):
metadata = base.__pydantic_generic_metadata__
- if "args" in metadata and len(metadata["args"]) == 2:
+ if (
+ "args" in metadata
+ and len(metadata["args"]) == _RUNNABLE_GENERIC_NUM_ARGS
+ ):
return metadata["args"][1]
for cls in self.__class__.__orig_bases__: # type: ignore[attr-defined]
type_args = get_args(cls)
- if type_args and len(type_args) == 2:
+ if type_args and len(type_args) == _RUNNABLE_GENERIC_NUM_ARGS:
return type_args[1]
msg = (
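A small sketch of the generic parameterization this constant guards: a `Runnable[str, str]` subclass exposes `InputType`/`OutputType` from its two type arguments (the class here is invented for illustration):

```python
from typing import Optional

from langchain_core.runnables import Runnable, RunnableConfig


class Reverse(Runnable[str, str]):
    """Toy Runnable whose Input/Output types are inferred from the generic args."""

    def invoke(
        self, input: str, config: Optional[RunnableConfig] = None, **kwargs
    ) -> str:
        return input[::-1]


r = Reverse()
print(r.InputType, r.OutputType)  # <class 'str'> <class 'str'>
print(r.invoke("hello"))          # 'olleh'
```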
@@ -390,7 +398,7 @@ class Runnable(ABC, Generic[Input, Output]):
self.get_name("Input"),
root=root_type,
# create model needs access to appropriate type annotations to be
- # able to construct the pydantic model.
+ # able to construct the Pydantic model.
# When we create the model, we pass information about the namespace
# where the model is being created, so the type annotations can
# be resolved correctly as well.
@@ -424,7 +432,7 @@ class Runnable(ABC, Generic[Input, Output]):
print(runnable.get_input_jsonschema())
```
- !!! version-added "Added in version 0.3.0"
+ !!! version-added "Added in `langchain-core` 0.3.0"
"""
return self.get_input_schema(config).model_json_schema()
@@ -433,7 +441,7 @@ class Runnable(ABC, Generic[Input, Output]):
def output_schema(self) -> type[BaseModel]:
"""Output schema.
- The type of output this `Runnable` produces specified as a pydantic model.
+ The type of output this `Runnable` produces specified as a Pydantic model.
"""
return self.get_output_schema()
@@ -468,7 +476,7 @@ class Runnable(ABC, Generic[Input, Output]):
self.get_name("Output"),
root=root_type,
# create model needs access to appropriate type annotations to be
- # able to construct the pydantic model.
+ # able to construct the Pydantic model.
# When we create the model, we pass information about the namespace
# where the model is being created, so the type annotations can
# be resolved correctly as well.
@@ -502,7 +510,7 @@ class Runnable(ABC, Generic[Input, Output]):
print(runnable.get_output_jsonschema())
```
- !!! version-added "Added in version 0.3.0"
+ !!! version-added "Added in `langchain-core` 0.3.0"
"""
return self.get_output_schema(config).model_json_schema()
@@ -566,7 +574,7 @@ class Runnable(ABC, Generic[Input, Output]):
Returns:
A JSON schema that represents the config of the `Runnable`.
- !!! version-added "Added in version 0.3.0"
+ !!! version-added "Added in `langchain-core` 0.3.0"
"""
return self.config_schema(include=include).model_json_schema()
@@ -766,7 +774,7 @@ class Runnable(ABC, Generic[Input, Output]):
"""Assigns new fields to the `dict` output of this `Runnable`.
```python
- from langchain_community.llms.fake import FakeStreamingListLLM
+ from langchain_core.language_models.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
@@ -776,11 +784,11 @@ class Runnable(ABC, Generic[Input, Output]):
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
- llm = FakeStreamingListLLM(responses=["foo-lish"])
+ model = FakeStreamingListLLM(responses=["foo-lish"])
- chain: Runnable = prompt | llm | {"str": StrOutputParser()}
+ chain: Runnable = prompt | model | {"str": StrOutputParser()}
- chain_with_assign = chain.assign(hello=itemgetter("str") | llm)
+ chain_with_assign = chain.assign(hello=itemgetter("str") | model)
print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
@@ -818,10 +826,12 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
input: The input to the `Runnable`.
config: A config to use when invoking the `Runnable`.
+
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
- do in parallel, and other keys. Please refer to the `RunnableConfig`
- for more details. Defaults to `None`.
+ do in parallel, and other keys.
+
+ Please refer to `RunnableConfig` for more details.
Returns:
The output of the `Runnable`.
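For instance, the standard config keys mentioned above can be passed like this (a sketch using a trivial `RunnableLambda`):

```python
from langchain_core.runnables import RunnableLambda

add_one = RunnableLambda(lambda x: x + 1)

# 'tags' and 'metadata' are used for tracing; 'max_concurrency' bounds parallel work
add_one.invoke(1, config={"tags": ["demo"], "metadata": {"source": "docs"}})
add_one.batch([1, 2, 3], config={"max_concurrency": 2})
```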
@@ -838,10 +848,12 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
input: The input to the `Runnable`.
config: A config to use when invoking the `Runnable`.
+
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
- do in parallel, and other keys. Please refer to the `RunnableConfig`
- for more details. Defaults to `None`.
+ do in parallel, and other keys.
+
+ Please refer to `RunnableConfig` for more details.
Returns:
The output of the `Runnable`.
@@ -860,7 +872,7 @@ class Runnable(ABC, Generic[Input, Output]):
The default implementation of batch works well for IO bound runnables.
- Subclasses should override this method if they can batch more efficiently;
+ Subclasses must override this method if they can batch more efficiently;
e.g., if the underlying `Runnable` uses an API which supports a batch mode.
Args:
@@ -868,10 +880,10 @@ class Runnable(ABC, Generic[Input, Output]):
config: A config to use when invoking the `Runnable`. The config supports
standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work
- to do in parallel, and other keys. Please refer to the
- `RunnableConfig` for more details. Defaults to `None`.
+ to do in parallel, and other keys.
+
+ Please refer to `RunnableConfig` for more details.
return_exceptions: Whether to return exceptions instead of raising them.
- Defaults to `False`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
Returns:
@@ -933,12 +945,13 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
inputs: A list of inputs to the `Runnable`.
config: A config to use when invoking the `Runnable`.
+
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
- do in parallel, and other keys. Please refer to the `RunnableConfig`
- for more details. Defaults to `None`.
+ do in parallel, and other keys.
+
+ Please refer to `RunnableConfig` for more details.
return_exceptions: Whether to return exceptions instead of raising them.
- Defaults to `False`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
Yields:
@@ -994,18 +1007,19 @@ class Runnable(ABC, Generic[Input, Output]):
The default implementation of `batch` works well for IO bound runnables.
- Subclasses should override this method if they can batch more efficiently;
+ Subclasses must override this method if they can batch more efficiently;
e.g., if the underlying `Runnable` uses an API which supports a batch mode.
Args:
inputs: A list of inputs to the `Runnable`.
config: A config to use when invoking the `Runnable`.
+
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
- do in parallel, and other keys. Please refer to the `RunnableConfig`
- for more details. Defaults to `None`.
+ do in parallel, and other keys.
+
+ Please refer to `RunnableConfig` for more details.
return_exceptions: Whether to return exceptions instead of raising them.
- Defaults to `False`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
Returns:
@@ -1064,12 +1078,13 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
inputs: A list of inputs to the `Runnable`.
config: A config to use when invoking the `Runnable`.
+
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
- do in parallel, and other keys. Please refer to the `RunnableConfig`
- for more details. Defaults to `None`.
+ do in parallel, and other keys.
+
+ Please refer to `RunnableConfig` for more details.
return_exceptions: Whether to return exceptions instead of raising them.
- Defaults to `False`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
Yields:
@@ -1116,11 +1131,11 @@ class Runnable(ABC, Generic[Input, Output]):
) -> Iterator[Output]:
"""Default implementation of `stream`, which calls `invoke`.
- Subclasses should override this method if they support streaming output.
+ Subclasses must override this method if they support streaming output.
Args:
input: The input to the `Runnable`.
- config: The config to use for the `Runnable`. Defaults to `None`.
+ config: The config to use for the `Runnable`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
Yields:
@@ -1137,11 +1152,11 @@ class Runnable(ABC, Generic[Input, Output]):
) -> AsyncIterator[Output]:
"""Default implementation of `astream`, which calls `ainvoke`.
- Subclasses should override this method if they support streaming output.
+ Subclasses must override this method if they support streaming output.
Args:
input: The input to the `Runnable`.
- config: The config to use for the `Runnable`. Defaults to `None`.
+ config: The config to use for the `Runnable`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
Yields:
@@ -1273,22 +1288,20 @@ class Runnable(ABC, Generic[Input, Output]):
A `StreamEvent` is a dictionary with the following schema:
- - `event`: **str** - Event names are of the format:
+ - `event`: Event names are of the format:
`on_[runnable_type]_(start|stream|end)`.
- - `name`: **str** - The name of the `Runnable` that generated the event.
- - `run_id`: **str** - randomly generated ID associated with the given
- execution of the `Runnable` that emitted the event. A child `Runnable` that gets
- invoked as part of the execution of a parent `Runnable` is assigned its own
- unique ID.
- - `parent_ids`: **list[str]** - The IDs of the parent runnables that generated
- the event. The root `Runnable` will have an empty list. The order of the parent
- IDs is from the root to the immediate parent. Only available for v2 version of
- the API. The v1 version of the API will return an empty list.
- - `tags`: **list[str] | None** - The tags of the `Runnable` that generated
- the event.
- - `metadata`: **dict[str, Any] | None** - The metadata of the `Runnable` that
- generated the event.
- - `data`: **dict[str, Any]**
+ - `name`: The name of the `Runnable` that generated the event.
+ - `run_id`: Randomly generated ID associated with the given execution of the
+ `Runnable` that emitted the event. A child `Runnable` that gets invoked as
+ part of the execution of a parent `Runnable` is assigned its own unique ID.
+ - `parent_ids`: The IDs of the parent runnables that generated the event. The
+ root `Runnable` will have an empty list. The order of the parent IDs is from
+ the root to the immediate parent. Only available for v2 version of the API.
+ The v1 version of the API will return an empty list.
+ - `tags`: The tags of the `Runnable` that generated the event.
+ - `metadata`: The metadata of the `Runnable` that generated the event.
+ - `data`: The data associated with the event. The contents of this field
+ depend on the type of event. See the table below for more details.
Below is a table that illustrates some events that might be emitted by various
chains. Metadata fields have been omitted from the table for brevity.
@@ -1297,39 +1310,23 @@ class Runnable(ABC, Generic[Input, Output]):
!!! note
This reference table is for the v2 version of the schema.
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | event | name | chunk | input | output |
- +==========================+==================+=====================================+===================================================+=====================================================+
- | `on_chat_model_start` | [model name] | | `{"messages": [[SystemMessage, HumanMessage]]}` | |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_chat_model_stream` | [model name] | `AIMessageChunk(content="hello")` | | |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_chat_model_end` | [model name] | | `{"messages": [[SystemMessage, HumanMessage]]}` | `AIMessageChunk(content="hello world")` |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_llm_start` | [model name] | | `{'input': 'hello'}` | |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_llm_stream` | [model name] | `'Hello' ` | | |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_llm_end` | [model name] | | `'Hello human!'` | |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_chain_start` | format_docs | | | |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_chain_stream` | format_docs | `'hello world!, goodbye world!'` | | |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_chain_end` | format_docs | | `[Document(...)]` | `'hello world!, goodbye world!'` |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_tool_start` | some_tool | | `{"x": 1, "y": "2"}` | |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_tool_end` | some_tool | | | `{"x": 1, "y": "2"}` |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_retriever_start` | [retriever name] | | `{"query": "hello"}` | |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_retriever_end` | [retriever name] | | `{"query": "hello"}` | `[Document(...), ..]` |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_prompt_start` | [template_name] | | `{"question": "hello"}` | |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
- | `on_prompt_end` | [template_name] | | `{"question": "hello"}` | `ChatPromptValue(messages: [SystemMessage, ...])` |
- +--------------------------+------------------+-------------------------------------+---------------------------------------------------+-----------------------------------------------------+
+ | event | name | chunk | input | output |
+ | ---------------------- | -------------------- | ----------------------------------- | ------------------------------------------------- | --------------------------------------------------- |
+ | `on_chat_model_start` | `'[model name]'` | | `{"messages": [[SystemMessage, HumanMessage]]}` | |
+ | `on_chat_model_stream` | `'[model name]'` | `AIMessageChunk(content="hello")` | | |
+ | `on_chat_model_end` | `'[model name]'` | | `{"messages": [[SystemMessage, HumanMessage]]}` | `AIMessageChunk(content="hello world")` |
+ | `on_llm_start` | `'[model name]'` | | `{'input': 'hello'}` | |
+ | `on_llm_stream` | `'[model name]'` | `'Hello' ` | | |
+ | `on_llm_end` | `'[model name]'` | | `'Hello human!'` | |
+ | `on_chain_start` | `'format_docs'` | | | |
+ | `on_chain_stream` | `'format_docs'` | `'hello world!, goodbye world!'` | | |
+ | `on_chain_end` | `'format_docs'` | | `[Document(...)]` | `'hello world!, goodbye world!'` |
+ | `on_tool_start` | `'some_tool'` | | `{"x": 1, "y": "2"}` | |
+ | `on_tool_end` | `'some_tool'` | | | `{"x": 1, "y": "2"}` |
+ | `on_retriever_start` | `'[retriever name]'` | | `{"query": "hello"}` | |
+ | `on_retriever_end` | `'[retriever name]'` | | `{"query": "hello"}` | `[Document(...), ..]` |
+ | `on_prompt_start` | `'[template_name]'` | | `{"question": "hello"}` | |
+ | `on_prompt_end` | `'[template_name]'` | | `{"question": "hello"}` | `ChatPromptValue(messages: [SystemMessage, ...])` |
In addition to the standard events, users can also dispatch custom events (see example below).
@@ -1337,13 +1334,10 @@ class Runnable(ABC, Generic[Input, Output]):
A custom event has following format:
- +-----------+------+-----------------------------------------------------------------------------------------------------------+
- | Attribute | Type | Description |
- +===========+======+===========================================================================================================+
- | name | str | A user defined name for the event. |
- +-----------+------+-----------------------------------------------------------------------------------------------------------+
- | data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |
- +-----------+------+-----------------------------------------------------------------------------------------------------------+
+ | Attribute | Type | Description |
+ | ----------- | ------ | --------------------------------------------------------------------------------------------------------- |
+ | `name` | `str` | A user defined name for the event. |
+ | `data` | `Any` | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |
Here are declarations associated with the standard events shown above:
@@ -1378,7 +1372,8 @@ class Runnable(ABC, Generic[Input, Output]):
).with_config({"run_name": "my_template", "tags": ["my_template"]})
```
- Example:
+ For instance:
+
```python
from langchain_core.runnables import RunnableLambda
@@ -1391,8 +1386,8 @@ class Runnable(ABC, Generic[Input, Output]):
events = [event async for event in chain.astream_events("hello", version="v2")]
- # will produce the following events (run_id, and parent_ids
- # has been omitted for brevity):
+ # Will produce the following events
+ # (run_id, and parent_ids has been omitted for brevity):
[
{
"data": {"input": "hello"},
@@ -1447,7 +1442,7 @@ class Runnable(ABC, Generic[Input, Output]):
async for event in slow_thing.astream_events("some_input", version="v2"):
print(event)
- ``
+ ```
Args:
input: The input to the `Runnable`.
@@ -1521,12 +1516,12 @@ class Runnable(ABC, Generic[Input, Output]):
Default implementation of transform, which buffers input and calls `astream`.
- Subclasses should override this method if they can start producing output while
+ Subclasses must override this method if they can start producing output while
input is still being generated.
Args:
input: An iterator of inputs to the `Runnable`.
- config: The config to use for the `Runnable`. Defaults to `None`.
+ config: The config to use for the `Runnable`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
Yields:
@@ -1566,12 +1561,12 @@ class Runnable(ABC, Generic[Input, Output]):
Default implementation of atransform, which buffers input and calls `astream`.
- Subclasses should override this method if they can start producing output while
+ Subclasses must override this method if they can start producing output while
input is still being generated.
Args:
input: An async iterator of inputs to the `Runnable`.
- config: The config to use for the `Runnable`. Defaults to `None`.
+ config: The config to use for the `Runnable`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
Yields:
@@ -1619,16 +1614,16 @@ class Runnable(ABC, Generic[Input, Output]):
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser
- llm = ChatOllama(model="llama3.1")
+ model = ChatOllama(model="llama3.1")
# Without bind
- chain = llm | StrOutputParser()
+ chain = model | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'
# With bind
- chain = llm.bind(stop=["three"]) | StrOutputParser()
+ chain = model.bind(stop=["three"]) | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'
@@ -1682,11 +1677,11 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
on_start: Called before the `Runnable` starts running, with the `Run`
- object. Defaults to `None`.
+ object.
on_end: Called after the `Runnable` finishes running, with the `Run`
- object. Defaults to `None`.
+ object.
on_error: Called if the `Runnable` throws an error, with the `Run`
- object. Defaults to `None`.
+ object.
Returns:
A new `Runnable` with the listeners bound.
@@ -1750,11 +1745,11 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
on_start: Called asynchronously before the `Runnable` starts running,
- with the `Run` object. Defaults to `None`.
+ with the `Run` object.
on_end: Called asynchronously after the `Runnable` finishes running,
- with the `Run` object. Defaults to `None`.
+ with the `Run` object.
on_error: Called asynchronously if the `Runnable` throws an error,
- with the `Run` object. Defaults to `None`.
+ with the `Run` object.
Returns:
A new `Runnable` with the listeners bound.
@@ -1766,46 +1761,52 @@ class Runnable(ABC, Generic[Input, Output]):
import time
import asyncio
+
def format_t(timestamp: float) -> str:
return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()
+
async def test_runnable(time_to_sleep: int):
print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
await asyncio.sleep(time_to_sleep)
print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")
+
async def fn_start(run_obj: Runnable):
print(f"on start callback starts at {format_t(time.time())}")
await asyncio.sleep(3)
print(f"on start callback ends at {format_t(time.time())}")
+
async def fn_end(run_obj: Runnable):
print(f"on end callback starts at {format_t(time.time())}")
await asyncio.sleep(2)
print(f"on end callback ends at {format_t(time.time())}")
+
runnable = RunnableLambda(test_runnable).with_alisteners(
- on_start=fn_start,
- on_end=fn_end
+ on_start=fn_start, on_end=fn_end
)
+
+
async def concurrent_runs():
await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))
- asyncio.run(concurrent_runs())
- Result:
- on start callback starts at 2025-03-01T07:05:22.875378+00:00
- on start callback starts at 2025-03-01T07:05:22.875495+00:00
- on start callback ends at 2025-03-01T07:05:25.878862+00:00
- on start callback ends at 2025-03-01T07:05:25.878947+00:00
- Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
- Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
- Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
- on end callback starts at 2025-03-01T07:05:27.882360+00:00
- Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
- on end callback starts at 2025-03-01T07:05:28.882428+00:00
- on end callback ends at 2025-03-01T07:05:29.883893+00:00
- on end callback ends at 2025-03-01T07:05:30.884831+00:00
+ asyncio.run(concurrent_runs())
+ # Result:
+ # on start callback starts at 2025-03-01T07:05:22.875378+00:00
+ # on start callback starts at 2025-03-01T07:05:22.875495+00:00
+ # on start callback ends at 2025-03-01T07:05:25.878862+00:00
+ # on start callback ends at 2025-03-01T07:05:25.878947+00:00
+ # Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
+ # Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
+ # Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
+ # on end callback starts at 2025-03-01T07:05:27.882360+00:00
+ # Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
+ # on end callback starts at 2025-03-01T07:05:28.882428+00:00
+ # on end callback ends at 2025-03-01T07:05:29.883893+00:00
+ # on end callback ends at 2025-03-01T07:05:30.884831+00:00
```
"""
return RunnableBinding(
@@ -1833,11 +1834,11 @@ class Runnable(ABC, Generic[Input, Output]):
"""Bind input and output types to a `Runnable`, returning a new `Runnable`.
Args:
- input_type: The input type to bind to the `Runnable`. Defaults to `None`.
- output_type: The output type to bind to the `Runnable`. Defaults to `None`.
+ input_type: The input type to bind to the `Runnable`.
+ output_type: The output type to bind to the `Runnable`.
Returns:
- A new Runnable with the types bound.
+ A new `Runnable` with the types bound.
"""
return RunnableBinding(
bound=self,
@@ -1858,17 +1859,16 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
retry_if_exception_type: A tuple of exception types to retry on.
- Defaults to (Exception,).
wait_exponential_jitter: Whether to add jitter to the wait
- time between retries. Defaults to `True`.
+ time between retries.
stop_after_attempt: The maximum number of attempts to make before
- giving up. Defaults to 3.
+ giving up.
exponential_jitter_params: Parameters for
`tenacity.wait_exponential_jitter`. Namely: `initial`, `max`,
- `exp_base`, and `jitter` (all float values).
+ `exp_base`, and `jitter` (all `float` values).
Returns:
- A new Runnable that retries the original Runnable on exceptions.
+ A new `Runnable` that retries the original `Runnable` on exceptions.
Example:
```python
@@ -1950,16 +1950,17 @@ class Runnable(ABC, Generic[Input, Output]):
fallbacks: A sequence of runnables to try if the original `Runnable`
fails.
exceptions_to_handle: A tuple of exception types to handle.
- Defaults to `(Exception,)`.
- exception_key: If string is specified then handled exceptions will be passed
- to fallbacks as part of the input under the specified key.
+            exception_key: If a string is specified, then handled exceptions will be
+ passed to fallbacks as part of the input under the specified key.
+
If `None`, exceptions will not be passed to fallbacks.
+
If used, the base `Runnable` and its fallbacks must accept a
- dictionary as input. Defaults to `None`.
+ dictionary as input.
Returns:
A new `Runnable` that will try the original `Runnable`, and then each
- Fallback in order, upon failures.
+            fallback in order, upon failures.
Example:
```python
@@ -1987,16 +1988,17 @@ class Runnable(ABC, Generic[Input, Output]):
fallbacks: A sequence of runnables to try if the original `Runnable`
fails.
exceptions_to_handle: A tuple of exception types to handle.
- exception_key: If string is specified then handled exceptions will be passed
- to fallbacks as part of the input under the specified key.
+            exception_key: If a string is specified, then handled exceptions will be
+ passed to fallbacks as part of the input under the specified key.
+
If `None`, exceptions will not be passed to fallbacks.
+
If used, the base `Runnable` and its fallbacks must accept a
dictionary as input.
Returns:
A new `Runnable` that will try the original `Runnable`, and then each
- Fallback in order, upon failures.
-
+            fallback in order, upon failures.
"""
# Import locally to prevent circular import
from langchain_core.runnables.fallbacks import ( # noqa: PLC0415
@@ -2456,16 +2458,20 @@ class Runnable(ABC, Generic[Input, Output]):
`as_tool` will instantiate a `BaseTool` with a name, description, and
`args_schema` from a `Runnable`. Where possible, schemas are inferred
- from `runnable.get_input_schema`. Alternatively (e.g., if the
- `Runnable` takes a dict as input and the specific dict keys are not typed),
- the schema can be specified directly with `args_schema`. You can also
- pass `arg_types` to just specify the required arguments and their types.
+ from `runnable.get_input_schema`.
+
+ Alternatively (e.g., if the `Runnable` takes a dict as input and the specific
+ `dict` keys are not typed), the schema can be specified directly with
+ `args_schema`.
+
+ You can also pass `arg_types` to just specify the required arguments and their
+ types.
Args:
- args_schema: The schema for the tool. Defaults to `None`.
- name: The name of the tool. Defaults to `None`.
- description: The description of the tool. Defaults to `None`.
- arg_types: A dictionary of argument names to types. Defaults to `None`.
+ args_schema: The schema for the tool.
+ name: The name of the tool.
+ description: The description of the tool.
+ arg_types: A dictionary of argument names to types.
Returns:
A `BaseTool` instance.
@@ -2528,7 +2534,7 @@ class Runnable(ABC, Generic[Input, Output]):
as_tool.invoke({"a": 3, "b": [1, 2]})
```
- String input:
+ `str` input:
```python
from langchain_core.runnables import RunnableLambda
@@ -2546,9 +2552,6 @@ class Runnable(ABC, Generic[Input, Output]):
as_tool = runnable.as_tool()
as_tool.invoke("b")
```
-
- !!! version-added "Added in version 0.2.14"
-
"""
# Avoid circular import
from langchain_core.tools import convert_runnable_to_tool # noqa: PLC0415
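For readers skimming the `as_tool` docstring cleanups above, here is a minimal, self-contained sketch of the conversion they describe. The function body, tool name, and description are illustrative placeholders, not part of the patch; only `langchain-core` is assumed.

```python
from langchain_core.runnables import RunnableLambda


def f(x: dict) -> str:
    # 'a' scales the largest element of 'b'; purely illustrative logic.
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool(
    name="scaled_max",
    description="Multiply 'a' by the largest value in 'b'.",
    arg_types={"a": int, "b": list[int]},  # needed because the dict keys are untyped
)
as_tool.invoke({"a": 3, "b": [1, 2]})  # -> "6"
```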
@@ -2654,9 +2657,7 @@ class RunnableSerializable(Serializable, Runnable[Input, Output]):
which: The `ConfigurableField` instance that will be used to select the
alternative.
default_key: The default key to use if no alternative is selected.
- Defaults to `'default'`.
prefix_keys: Whether to prefix the keys with the `ConfigurableField` id.
- Defaults to `False`.
**kwargs: A dictionary of keys to `Runnable` instances or callables that
return `Runnable` instances.
@@ -2669,7 +2670,7 @@ class RunnableSerializable(Serializable, Runnable[Input, Output]):
from langchain_openai import ChatOpenAI
model = ChatAnthropic(
- model_name="claude-3-7-sonnet-20250219"
+ model_name="claude-sonnet-4-5-20250929"
).configurable_alternatives(
ConfigurableField(id="llm"),
default_key="anthropic",
@@ -2782,6 +2783,9 @@ def _seq_output_schema(
return last.get_output_schema(config)
+_RUNNABLE_SEQUENCE_MIN_STEPS = 2
+
+
class RunnableSequence(RunnableSerializable[Input, Output]):
"""Sequence of `Runnable` objects, where the output of one is the input of the next.
@@ -2888,10 +2892,10 @@ class RunnableSequence(RunnableSerializable[Input, Output]):
Args:
steps: The steps to include in the sequence.
- name: The name of the `Runnable`. Defaults to `None`.
- first: The first `Runnable` in the sequence. Defaults to `None`.
- middle: The middle `Runnable` objects in the sequence. Defaults to `None`.
- last: The last Runnable in the sequence. Defaults to `None`.
+ name: The name of the `Runnable`.
+ first: The first `Runnable` in the sequence.
+ middle: The middle `Runnable` objects in the sequence.
+ last: The last `Runnable` in the sequence.
Raises:
ValueError: If the sequence has less than 2 steps.
@@ -2904,8 +2908,11 @@ class RunnableSequence(RunnableSerializable[Input, Output]):
steps_flat.extend(step.steps)
else:
steps_flat.append(coerce_to_runnable(step))
- if len(steps_flat) < 2:
- msg = f"RunnableSequence must have at least 2 steps, got {len(steps_flat)}"
+ if len(steps_flat) < _RUNNABLE_SEQUENCE_MIN_STEPS:
+ msg = (
+ f"RunnableSequence must have at least {_RUNNABLE_SEQUENCE_MIN_STEPS} "
+ f"steps, got {len(steps_flat)}"
+ )
raise ValueError(msg)
super().__init__(
first=steps_flat[0],
@@ -2917,7 +2924,7 @@ class RunnableSequence(RunnableSerializable[Input, Output]):
@classmethod
@override
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "runnable"]`
@@ -2936,7 +2943,7 @@ class RunnableSequence(RunnableSerializable[Input, Output]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
model_config = ConfigDict(
@@ -2960,7 +2967,7 @@ class RunnableSequence(RunnableSerializable[Input, Output]):
"""Get the input schema of the `Runnable`.
Args:
- config: The config to use. Defaults to `None`.
+ config: The config to use.
Returns:
The input schema of the `Runnable`.
@@ -2975,7 +2982,7 @@ class RunnableSequence(RunnableSerializable[Input, Output]):
"""Get the output schema of the `Runnable`.
Args:
- config: The config to use. Defaults to `None`.
+ config: The config to use.
Returns:
The output schema of the `Runnable`.
@@ -3002,7 +3009,7 @@ class RunnableSequence(RunnableSerializable[Input, Output]):
"""Get the graph representation of the `Runnable`.
Args:
- config: The config to use. Defaults to `None`.
+ config: The config to use.
Returns:
The graph representation of the `Runnable`.
@@ -3532,7 +3539,7 @@ class RunnableParallel(RunnableSerializable[Input, dict[str, Any]]):
Returns a mapping of their outputs.
- `RunnableParallel` is one of the two main composition primitives for the LCEL,
+ `RunnableParallel` is one of the two main composition primitives,
alongside `RunnableSequence`. It invokes `Runnable`s concurrently, providing the
same input to each.
@@ -3629,7 +3636,7 @@ class RunnableParallel(RunnableSerializable[Input, dict[str, Any]]):
"""Create a `RunnableParallel`.
Args:
- steps__: The steps to include. Defaults to `None`.
+ steps__: The steps to include.
**kwargs: Additional steps to include.
"""
@@ -3642,13 +3649,13 @@ class RunnableParallel(RunnableSerializable[Input, dict[str, Any]]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
@override
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "runnable"]`
@@ -3664,8 +3671,8 @@ class RunnableParallel(RunnableSerializable[Input, dict[str, Any]]):
"""Get the name of the `Runnable`.
Args:
- suffix: The suffix to use. Defaults to `None`.
- name: The name to use. Defaults to `None`.
+ suffix: The suffix to use.
+ name: The name to use.
Returns:
The name of the `Runnable`.
@@ -3689,7 +3696,7 @@ class RunnableParallel(RunnableSerializable[Input, dict[str, Any]]):
"""Get the input schema of the `Runnable`.
Args:
- config: The config to use. Defaults to `None`.
+ config: The config to use.
Returns:
The input schema of the `Runnable`.
@@ -3700,6 +3707,12 @@ class RunnableParallel(RunnableSerializable[Input, dict[str, Any]]):
== "object"
for s in self.steps__.values()
):
+ for step in self.steps__.values():
+ fields = step.get_input_schema(config).model_fields
+ root_field = fields.get("root")
+ if root_field is not None and root_field.annotation != Any:
+ return super().get_input_schema(config)
+
# This is correct, but pydantic typings/mypy don't think so.
return create_model_v2(
self.get_name("Input"),
@@ -3720,7 +3733,7 @@ class RunnableParallel(RunnableSerializable[Input, dict[str, Any]]):
"""Get the output schema of the `Runnable`.
Args:
- config: The config to use. Defaults to `None`.
+ config: The config to use.
Returns:
The output schema of the `Runnable`.
@@ -3747,7 +3760,7 @@ class RunnableParallel(RunnableSerializable[Input, dict[str, Any]]):
"""Get the graph representation of the `Runnable`.
Args:
- config: The config to use. Defaults to `None`.
+ config: The config to use.
Returns:
The graph representation of the `Runnable`.
@@ -4157,8 +4170,8 @@ class RunnableGenerator(Runnable[Input, Output]):
Args:
transform: The transform function.
- atransform: The async transform function. Defaults to `None`.
- name: The name of the `Runnable`. Defaults to `None`.
+ atransform: The async transform function.
+ name: The name of the `Runnable`.
Raises:
TypeError: If the transform is not a generator function.
@@ -4435,8 +4448,8 @@ class RunnableLambda(Runnable[Input, Output]):
Args:
func: Either sync or async callable
afunc: An async callable that takes an input and returns an output.
- Defaults to `None`.
- name: The name of the `Runnable`. Defaults to `None`.
+
+ name: The name of the `Runnable`.
Raises:
TypeError: If the `func` is not a callable type.
@@ -4493,10 +4506,10 @@ class RunnableLambda(Runnable[Input, Output]):
@override
def get_input_schema(self, config: RunnableConfig | None = None) -> type[BaseModel]:
- """The pydantic schema for the input to this `Runnable`.
+ """The Pydantic schema for the input to this `Runnable`.
Args:
- config: The config to use. Defaults to `None`.
+ config: The config to use.
Returns:
The input schema for this `Runnable`.
@@ -4509,7 +4522,7 @@ class RunnableLambda(Runnable[Input, Output]):
# on itemgetter objects, so we have to parse the repr
items = str(func).replace("operator.itemgetter(", "")[:-1].split(", ")
if all(
- item[0] == "'" and item[-1] == "'" and len(item) > 2 for item in items
+ item[0] == "'" and item[-1] == "'" and item != "''" for item in items
):
fields = {item[1:-1]: (Any, ...) for item in items}
# It's a dict, lol
@@ -4830,7 +4843,7 @@ class RunnableLambda(Runnable[Input, Output]):
Args:
input: The input to this `Runnable`.
- config: The config to use. Defaults to `None`.
+ config: The config to use.
**kwargs: Additional keyword arguments.
Returns:
@@ -4861,7 +4874,7 @@ class RunnableLambda(Runnable[Input, Output]):
Args:
input: The input to this `Runnable`.
- config: The config to use. Defaults to `None`.
+ config: The config to use.
**kwargs: Additional keyword arguments.
Returns:
@@ -5127,7 +5140,7 @@ class RunnableEachBase(RunnableSerializable[list[Input], list[Output]]):
None,
),
# create model needs access to appropriate type annotations to be
- # able to construct the pydantic model.
+ # able to construct the Pydantic model.
# When we create the model, we pass information about the namespace
# where the model is being created, so the type annotations can
# be resolved correctly as well.
@@ -5150,7 +5163,7 @@ class RunnableEachBase(RunnableSerializable[list[Input], list[Output]]):
self.get_name("Output"),
root=list[schema], # type: ignore[valid-type]
# create model needs access to appropriate type annotations to be
- # able to construct the pydantic model.
+ # able to construct the Pydantic model.
# When we create the model, we pass information about the namespace
# where the model is being created, so the type annotations can
# be resolved correctly as well.
@@ -5171,13 +5184,13 @@ class RunnableEachBase(RunnableSerializable[list[Input], list[Output]]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
@override
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "runnable"]`
@@ -5303,11 +5316,11 @@ class RunnableEach(RunnableEachBase[Input, Output]):
Args:
on_start: Called before the `Runnable` starts running, with the `Run`
- object. Defaults to `None`.
+ object.
on_end: Called after the `Runnable` finishes running, with the `Run`
- object. Defaults to `None`.
+ object.
on_error: Called if the `Runnable` throws an error, with the `Run`
- object. Defaults to `None`.
+ object.
Returns:
A new `Runnable` with the listeners bound.
@@ -5336,11 +5349,11 @@ class RunnableEach(RunnableEachBase[Input, Output]):
Args:
on_start: Called asynchronously before the `Runnable` starts running,
- with the `Run` object. Defaults to `None`.
+ with the `Run` object.
on_end: Called asynchronously after the `Runnable` finishes running,
- with the `Run` object. Defaults to `None`.
+ with the `Run` object.
on_error: Called asynchronously if the `Runnable` throws an error,
- with the `Run` object. Defaults to `None`.
+ with the `Run` object.
Returns:
A new `Runnable` with the listeners bound.
@@ -5354,7 +5367,7 @@ class RunnableEach(RunnableEachBase[Input, Output]):
class RunnableBindingBase(RunnableSerializable[Input, Output]): # type: ignore[no-redef]
- """`Runnable` that delegates calls to another `Runnable` with a set of kwargs.
+ """`Runnable` that delegates calls to another `Runnable` with a set of `**kwargs`.
Use only if creating a new `RunnableBinding` subclass with different `__init__`
args.
@@ -5387,13 +5400,13 @@ class RunnableBindingBase(RunnableSerializable[Input, Output]): # type: ignore[
custom_input_type: Any | None = None
"""Override the input type of the underlying `Runnable` with a custom type.
- The type can be a pydantic model, or a type annotation (e.g., `list[str]`).
+ The type can be a Pydantic model, or a type annotation (e.g., `list[str]`).
"""
# Union[Type[Output], BaseModel] + things like list[str]
custom_output_type: Any | None = None
"""Override the output type of the underlying `Runnable` with a custom type.
- The type can be a pydantic model, or a type annotation (e.g., `list[str]`).
+ The type can be a Pydantic model, or a type annotation (e.g., `list[str]`).
"""
model_config = ConfigDict(
@@ -5420,16 +5433,16 @@ class RunnableBindingBase(RunnableSerializable[Input, Output]): # type: ignore[
kwargs: optional kwargs to pass to the underlying `Runnable`, when running
the underlying `Runnable` (e.g., via `invoke`, `batch`,
`transform`, or `stream` or async variants)
- Defaults to `None`.
+
config: optional config to bind to the underlying `Runnable`.
- Defaults to `None`.
+
config_factories: optional list of config factories to apply to the
config before binding to the underlying `Runnable`.
- Defaults to `None`.
+
custom_input_type: Specify to override the input type of the underlying
- `Runnable` with a custom type. Defaults to `None`.
+ `Runnable` with a custom type.
custom_output_type: Specify to override the output type of the underlying
- `Runnable` with a custom type. Defaults to `None`.
+ `Runnable` with a custom type.
**other_kwargs: Unpacked into the base class.
"""
super().__init__(
@@ -5494,13 +5507,13 @@ class RunnableBindingBase(RunnableSerializable[Input, Output]): # type: ignore[
@classmethod
@override
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
@override
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "runnable"]`
@@ -5782,9 +5795,9 @@ class RunnableBinding(RunnableBindingBase[Input, Output]): # type: ignore[no-re
`bind`: Bind kwargs to pass to the underlying `Runnable` when running it.
```python
- # Create a Runnable binding that invokes the ChatModel with the
+ # Create a Runnable binding that invokes the chat model with the
# additional kwarg `stop=['-']` when running it.
- from langchain_community.chat_models import ChatOpenAI
+ from langchain_openai import ChatOpenAI
model = ChatOpenAI()
model.invoke('Say "Parrot-MAGIC"', stop=["-"]) # Should return `Parrot`
@@ -5866,11 +5879,11 @@ class RunnableBinding(RunnableBindingBase[Input, Output]): # type: ignore[no-re
Args:
on_start: Called before the `Runnable` starts running, with the `Run`
- object. Defaults to `None`.
+ object.
on_end: Called after the `Runnable` finishes running, with the `Run`
- object. Defaults to `None`.
+ object.
on_error: Called if the `Runnable` throws an error, with the `Run`
- object. Defaults to `None`.
+ object.
Returns:
A new `Runnable` with the listeners bound.
@@ -6077,10 +6090,10 @@ def chain(
@chain
def my_func(fields):
prompt = PromptTemplate("Hello, {name}!")
- llm = OpenAI()
+ model = OpenAI()
formatted = prompt.invoke(**fields)
- for chunk in llm.stream(formatted):
+ for chunk in model.stream(formatted):
yield chunk
```
"""
diff --git a/libs/core/langchain_core/runnables/branch.py b/libs/core/langchain_core/runnables/branch.py
index deba85521c0..e9396ce5c25 100644
--- a/libs/core/langchain_core/runnables/branch.py
+++ b/libs/core/langchain_core/runnables/branch.py
@@ -36,17 +36,19 @@ from langchain_core.runnables.utils import (
get_unique_config_specs,
)
+_MIN_BRANCHES = 2
+
class RunnableBranch(RunnableSerializable[Input, Output]):
- """Runnable that selects which branch to run based on a condition.
+ """`Runnable` that selects which branch to run based on a condition.
- The Runnable is initialized with a list of (condition, Runnable) pairs and
+ The `Runnable` is initialized with a list of `(condition, Runnable)` pairs and
a default branch.
When operating on an input, the first condition that evaluates to True is
- selected, and the corresponding Runnable is run on the input.
+ selected, and the corresponding `Runnable` is run on the input.
- If no condition evaluates to True, the default branch is run on the input.
+ If no condition evaluates to `True`, the default branch is run on the input.
Examples:
```python
@@ -65,9 +67,9 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
"""
branches: Sequence[tuple[Runnable[Input, bool], Runnable[Input, Output]]]
- """A list of (condition, Runnable) pairs."""
+ """A list of `(condition, Runnable)` pairs."""
default: Runnable[Input, Output]
- """A Runnable to run if no condition is met."""
+ """A `Runnable` to run if no condition is met."""
def __init__(
self,
@@ -79,19 +81,19 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
]
| RunnableLike,
) -> None:
- """A Runnable that runs one of two branches based on a condition.
+ """A `Runnable` that runs one of two branches based on a condition.
Args:
- *branches: A list of (condition, Runnable) pairs.
- Defaults a Runnable to run if no condition is met.
+            *branches: A list of `(condition, Runnable)` pairs, followed by a
+                default `Runnable` to run if no condition is met.
Raises:
- ValueError: If the number of branches is less than 2.
- TypeError: If the default branch is not Runnable, Callable or Mapping.
- TypeError: If a branch is not a tuple or list.
- ValueError: If a branch is not of length 2.
+ ValueError: If the number of branches is less than `2`.
+ TypeError: If the default branch is not `Runnable`, `Callable` or `Mapping`.
+ TypeError: If a branch is not a `tuple` or `list`.
+ ValueError: If a branch is not of length `2`.
"""
- if len(branches) < 2:
+ if len(branches) < _MIN_BRANCHES:
msg = "RunnableBranch requires at least two branches"
raise ValueError(msg)
@@ -118,7 +120,7 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
)
raise TypeError(msg)
- if len(branch) != 2:
+ if len(branch) != _MIN_BRANCHES:
msg = (
f"RunnableBranch branches must be "
f"tuples or lists of length 2, not {len(branch)}"
@@ -140,13 +142,13 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
@classmethod
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
@override
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "runnable"]`
@@ -187,12 +189,12 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
def invoke(
self, input: Input, config: RunnableConfig | None = None, **kwargs: Any
) -> Output:
- """First evaluates the condition, then delegate to true or false branch.
+ """First evaluates the condition, then delegate to `True` or `False` branch.
Args:
- input: The input to the Runnable.
- config: The configuration for the Runnable. Defaults to `None`.
- **kwargs: Additional keyword arguments to pass to the Runnable.
+ input: The input to the `Runnable`.
+ config: The configuration for the `Runnable`.
+ **kwargs: Additional keyword arguments to pass to the `Runnable`.
Returns:
The output of the branch that was run.
@@ -297,12 +299,12 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
config: RunnableConfig | None = None,
**kwargs: Any | None,
) -> Iterator[Output]:
- """First evaluates the condition, then delegate to true or false branch.
+ """First evaluates the condition, then delegate to `True` or `False` branch.
Args:
- input: The input to the Runnable.
- config: The configuration for the Runnable. Defaults to `None`.
- **kwargs: Additional keyword arguments to pass to the Runnable.
+ input: The input to the `Runnable`.
+            config: The configuration for the `Runnable`.
+ **kwargs: Additional keyword arguments to pass to the `Runnable`.
Yields:
The output of the branch that was run.
@@ -381,12 +383,12 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
config: RunnableConfig | None = None,
**kwargs: Any | None,
) -> AsyncIterator[Output]:
- """First evaluates the condition, then delegate to true or false branch.
+ """First evaluates the condition, then delegate to `True` or `False` branch.
Args:
- input: The input to the Runnable.
- config: The configuration for the Runnable. Defaults to `None`.
- **kwargs: Additional keyword arguments to pass to the Runnable.
+ input: The input to the `Runnable`.
+ config: The configuration for the `Runnable`.
+ **kwargs: Additional keyword arguments to pass to the `Runnable`.
Yields:
The output of the branch that was run.
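To make the condition-selection behaviour documented above concrete, a hedged sketch follows; the conditions and branches are arbitrary placeholders.

```python
from langchain_core.runnables import RunnableBranch, RunnableLambda

branch = RunnableBranch(
    (lambda x: isinstance(x, str), RunnableLambda(lambda x: x.upper())),
    (lambda x: isinstance(x, int), RunnableLambda(lambda x: x + 1)),
    RunnableLambda(lambda x: "default"),  # runs when no condition matches
)

branch.invoke("hello")  # -> "HELLO"   (first condition matches)
branch.invoke(41)       # -> 42        (second condition matches)
branch.invoke(3.14)     # -> "default" (falls through to the default branch)
```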
diff --git a/libs/core/langchain_core/runnables/config.py b/libs/core/langchain_core/runnables/config.py
index 33f062607ed..f67bf095927 100644
--- a/libs/core/langchain_core/runnables/config.py
+++ b/libs/core/langchain_core/runnables/config.py
@@ -75,26 +75,26 @@ class RunnableConfig(TypedDict, total=False):
max_concurrency: int | None
"""
Maximum number of parallel calls to make. If not provided, defaults to
- ThreadPoolExecutor's default.
+ `ThreadPoolExecutor`'s default.
"""
recursion_limit: int
"""
- Maximum number of times a call can recurse. If not provided, defaults to 25.
+ Maximum number of times a call can recurse. If not provided, defaults to `25`.
"""
configurable: dict[str, Any]
"""
- Runtime values for attributes previously made configurable on this Runnable,
- or sub-Runnables, through .configurable_fields() or .configurable_alternatives().
- Check .output_schema() for a description of the attributes that have been made
+ Runtime values for attributes previously made configurable on this `Runnable`,
+ or sub-Runnables, through `configurable_fields` or `configurable_alternatives`.
+ Check `output_schema` for a description of the attributes that have been made
configurable.
"""
run_id: uuid.UUID | None
"""
Unique identifier for the tracer run for this call. If not provided, a new UUID
- will be generated.
+ will be generated.
"""
@@ -193,7 +193,7 @@ def ensure_config(config: RunnableConfig | None = None) -> RunnableConfig:
"""Ensure that a config is a dict with all keys present.
Args:
- config: The config to ensure. Defaults to `None`.
+ config: The config to ensure.
Returns:
The ensured config.
@@ -412,7 +412,7 @@ def call_func_with_variable_args(
func: The function to call.
input: The input to the function.
config: The config to pass to the function.
- run_manager: The run manager to pass to the function. Defaults to `None`.
+ run_manager: The run manager to pass to the function.
**kwargs: The keyword arguments to pass to the function.
Returns:
@@ -446,7 +446,7 @@ def acall_func_with_variable_args(
func: The function to call.
input: The input to the function.
config: The config to pass to the function.
- run_manager: The run manager to pass to the function. Defaults to `None`.
+ run_manager: The run manager to pass to the function.
**kwargs: The keyword arguments to pass to the function.
Returns:
@@ -527,16 +527,15 @@ class ContextThreadPoolExecutor(ThreadPoolExecutor):
self,
fn: Callable[..., T],
*iterables: Iterable[Any],
- timeout: float | None = None,
- chunksize: int = 1,
+ **kwargs: Any,
) -> Iterator[T]:
"""Map a function to multiple iterables.
Args:
fn: The function to map.
*iterables: The iterables to map over.
- timeout: The timeout for the map. Defaults to `None`.
- chunksize: The chunksize for the map. Defaults to 1.
+ timeout: The timeout for the map.
+ chunksize: The chunksize for the map.
Returns:
The iterator for the mapped function.
@@ -549,8 +548,7 @@ class ContextThreadPoolExecutor(ThreadPoolExecutor):
return super().map(
_wrapped_fn,
*iterables,
- timeout=timeout,
- chunksize=chunksize,
+ **kwargs,
)
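The `RunnableConfig` fields whose docstrings are touched above can be passed as a plain dict. A minimal sketch, with illustrative values rather than recommendations:

```python
import uuid

from langchain_core.runnables import RunnableLambda

doubler = RunnableLambda(lambda x: x * 2)

config = {
    "max_concurrency": 4,    # cap on parallel calls; ThreadPoolExecutor default if omitted
    "recursion_limit": 25,   # maximum recursion depth, 25 by default
    "configurable": {},      # runtime values for configurable fields
    "run_id": uuid.uuid4(),  # explicit tracer run id; a new UUID if omitted
    "tags": ["example"],
}

doubler.invoke(3, config=config)  # -> 6
```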
diff --git a/libs/core/langchain_core/runnables/configurable.py b/libs/core/langchain_core/runnables/configurable.py
index da4e75f72fb..5552fa620ea 100644
--- a/libs/core/langchain_core/runnables/configurable.py
+++ b/libs/core/langchain_core/runnables/configurable.py
@@ -1,4 +1,4 @@
-"""Runnables that can be dynamically configured."""
+"""`Runnable` objects that can be dynamically configured."""
from __future__ import annotations
@@ -47,14 +47,14 @@ if TYPE_CHECKING:
class DynamicRunnable(RunnableSerializable[Input, Output]):
- """Serializable Runnable that can be dynamically configured.
+ """Serializable `Runnable` that can be dynamically configured.
- A DynamicRunnable should be initiated using the `configurable_fields` or
- `configurable_alternatives` method of a Runnable.
+ A `DynamicRunnable` should be initiated using the `configurable_fields` or
+ `configurable_alternatives` method of a `Runnable`.
"""
default: RunnableSerializable[Input, Output]
- """The default Runnable to use."""
+ """The default `Runnable` to use."""
config: RunnableConfig | None = None
"""The configuration to use."""
@@ -66,13 +66,13 @@ class DynamicRunnable(RunnableSerializable[Input, Output]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
@override
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "runnable"]`
@@ -120,13 +120,13 @@ class DynamicRunnable(RunnableSerializable[Input, Output]):
def prepare(
self, config: RunnableConfig | None = None
) -> tuple[Runnable[Input, Output], RunnableConfig]:
- """Prepare the Runnable for invocation.
+ """Prepare the `Runnable` for invocation.
Args:
- config: The configuration to use. Defaults to `None`.
+ config: The configuration to use.
Returns:
- The prepared Runnable and configuration.
+ The prepared `Runnable` and configuration.
"""
runnable: Runnable[Input, Output] = self
while isinstance(runnable, DynamicRunnable):
@@ -316,12 +316,12 @@ class DynamicRunnable(RunnableSerializable[Input, Output]):
class RunnableConfigurableFields(DynamicRunnable[Input, Output]):
- """Runnable that can be dynamically configured.
+ """`Runnable` that can be dynamically configured.
- A RunnableConfigurableFields should be initiated using the
- `configurable_fields` method of a Runnable.
+ A `RunnableConfigurableFields` should be initiated using the
+ `configurable_fields` method of a `Runnable`.
- Here is an example of using a RunnableConfigurableFields with LLMs:
+ Here is an example of using a `RunnableConfigurableFields` with LLMs:
```python
from langchain_core.prompts import PromptTemplate
@@ -348,7 +348,7 @@ class RunnableConfigurableFields(DynamicRunnable[Input, Output]):
chain.invoke({"x": 0}, config={"configurable": {"temperature": 0.9}})
```
- Here is an example of using a RunnableConfigurableFields with HubRunnables:
+ Here is an example of using a `RunnableConfigurableFields` with `HubRunnables`:
```python
from langchain_core.prompts import PromptTemplate
@@ -380,7 +380,7 @@ class RunnableConfigurableFields(DynamicRunnable[Input, Output]):
@property
def config_specs(self) -> list[ConfigurableFieldSpec]:
- """Get the configuration specs for the RunnableConfigurableFields.
+ """Get the configuration specs for the `RunnableConfigurableFields`.
Returns:
The configuration specs.
@@ -473,13 +473,13 @@ _enums_for_spec_lock = threading.Lock()
class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
- """Runnable that can be dynamically configured.
+ """`Runnable` that can be dynamically configured.
- A RunnableConfigurableAlternatives should be initiated using the
- `configurable_alternatives` method of a Runnable or can be
+ A `RunnableConfigurableAlternatives` should be initiated using the
+ `configurable_alternatives` method of a `Runnable` or can be
initiated directly as well.
- Here is an example of using a RunnableConfigurableAlternatives that uses
+ Here is an example of using a `RunnableConfigurableAlternatives` that uses
alternative prompts to illustrate its functionality:
```python
@@ -506,7 +506,7 @@ class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
chain.with_config(configurable={"prompt": "poem"}).invoke({"topic": "bears"})
```
- Equivalently, you can initialize RunnableConfigurableAlternatives directly
+ Equivalently, you can initialize `RunnableConfigurableAlternatives` directly
and use in LCEL in the same way:
```python
@@ -531,7 +531,7 @@ class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
"""
which: ConfigurableField
- """The ConfigurableField to use to choose between alternatives."""
+ """The `ConfigurableField` to use to choose between alternatives."""
alternatives: dict[
str,
@@ -540,12 +540,13 @@ class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
"""The alternatives to choose from."""
default_key: str = "default"
- """The enum value to use for the default option. Defaults to `'default'`."""
+ """The enum value to use for the default option."""
prefix_keys: bool
"""Whether to prefix configurable fields of each alternative with a namespace
-    of the form <which.id>==<alternative_key>, eg. a key named "temperature" used by
- the alternative named "gpt3" becomes "model==gpt3/temperature"."""
+    of the form <which.id>==<alternative_key>, e.g., a key named "temperature" used by
+ the alternative named "gpt3" becomes "model==gpt3/temperature".
+ """
@property
@override
@@ -638,24 +639,24 @@ class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
def _strremoveprefix(s: str, prefix: str) -> str:
- """str.removeprefix() is only available in Python 3.9+."""
+ """`str.removeprefix()` is only available in Python 3.9+."""
return s.replace(prefix, "", 1) if s.startswith(prefix) else s
def prefix_config_spec(
spec: ConfigurableFieldSpec, prefix: str
) -> ConfigurableFieldSpec:
- """Prefix the id of a ConfigurableFieldSpec.
+ """Prefix the id of a `ConfigurableFieldSpec`.
- This is useful when a RunnableConfigurableAlternatives is used as a
- ConfigurableField of another RunnableConfigurableAlternatives.
+ This is useful when a `RunnableConfigurableAlternatives` is used as a
+ `ConfigurableField` of another `RunnableConfigurableAlternatives`.
Args:
- spec: The ConfigurableFieldSpec to prefix.
+ spec: The `ConfigurableFieldSpec` to prefix.
prefix: The prefix to add.
Returns:
- The prefixed ConfigurableFieldSpec.
+ The prefixed `ConfigurableFieldSpec`.
"""
return (
ConfigurableFieldSpec(
@@ -677,15 +678,15 @@ def make_options_spec(
) -> ConfigurableFieldSpec:
"""Make options spec.
- Make a ConfigurableFieldSpec for a ConfigurableFieldSingleOption or
- ConfigurableFieldMultiOption.
+ Make a `ConfigurableFieldSpec` for a `ConfigurableFieldSingleOption` or
+ `ConfigurableFieldMultiOption`.
Args:
- spec: The ConfigurableFieldSingleOption or ConfigurableFieldMultiOption.
+ spec: The `ConfigurableFieldSingleOption` or `ConfigurableFieldMultiOption`.
description: The description to use if the spec does not have one.
Returns:
- The ConfigurableFieldSpec.
+ The `ConfigurableFieldSpec`.
"""
with _enums_for_spec_lock:
if enum := _enums_for_spec.get(spec):
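For context on the `configurable_fields` docstrings above, a hedged sketch of the field-override flow. The OpenAI model is a placeholder and assumes `langchain-openai` plus credentials.

```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="temperature",
        name="LLM temperature",
        description="Sampling temperature used by the model",
    )
)

# Override the declared field at runtime instead of rebuilding the model.
model.with_config(configurable={"temperature": 0.9}).invoke("Pick a random number")
```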
diff --git a/libs/core/langchain_core/runnables/fallbacks.py b/libs/core/langchain_core/runnables/fallbacks.py
index 61f1753d8dd..0584d64e60c 100644
--- a/libs/core/langchain_core/runnables/fallbacks.py
+++ b/libs/core/langchain_core/runnables/fallbacks.py
@@ -35,20 +35,20 @@ if TYPE_CHECKING:
class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
- """Runnable that can fallback to other Runnables if it fails.
+ """`Runnable` that can fallback to other `Runnable`s if it fails.
External APIs (e.g., APIs for a language model) may at times experience
degraded performance or even downtime.
- In these cases, it can be useful to have a fallback Runnable that can be
- used in place of the original Runnable (e.g., fallback to another LLM provider).
+ In these cases, it can be useful to have a fallback `Runnable` that can be
+ used in place of the original `Runnable` (e.g., fallback to another LLM provider).
- Fallbacks can be defined at the level of a single Runnable, or at the level
- of a chain of Runnables. Fallbacks are tried in order until one succeeds or
+ Fallbacks can be defined at the level of a single `Runnable`, or at the level
+ of a chain of `Runnable`s. Fallbacks are tried in order until one succeeds or
all fail.
While you can instantiate a `RunnableWithFallbacks` directly, it is usually
- more convenient to use the `with_fallbacks` method on a Runnable.
+ more convenient to use the `with_fallbacks` method on a `Runnable`.
Example:
```python
@@ -87,7 +87,7 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
"""
runnable: Runnable[Input, Output]
- """The Runnable to run first."""
+ """The `Runnable` to run first."""
fallbacks: Sequence[Runnable[Input, Output]]
"""A sequence of fallbacks to try."""
exceptions_to_handle: tuple[type[BaseException], ...] = (Exception,)
@@ -96,10 +96,13 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
Any exception that is not a subclass of these exceptions will be raised immediately.
"""
exception_key: str | None = None
- """If string is specified then handled exceptions will be passed to fallbacks as
- part of the input under the specified key. If `None`, exceptions
- will not be passed to fallbacks. If used, the base Runnable and its fallbacks
- must accept a dictionary as input."""
+ """If `string` is specified then handled exceptions will be passed to fallbacks as
+ part of the input under the specified key.
+
+ If `None`, exceptions will not be passed to fallbacks.
+
+ If used, the base `Runnable` and its fallbacks must accept a dictionary as input.
+ """
model_config = ConfigDict(
arbitrary_types_allowed=True,
@@ -137,13 +140,13 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
@override
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "runnable"]`
@@ -152,10 +155,10 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
@property
def runnables(self) -> Iterator[Runnable[Input, Output]]:
- """Iterator over the Runnable and its fallbacks.
+ """Iterator over the `Runnable` and its fallbacks.
Yields:
- The Runnable then its fallbacks.
+ The `Runnable` then its fallbacks.
"""
yield self.runnable
yield from self.fallbacks
@@ -589,14 +592,14 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
await run_manager.on_chain_end(output)
def __getattr__(self, name: str) -> Any:
- """Get an attribute from the wrapped Runnable and its fallbacks.
+ """Get an attribute from the wrapped `Runnable` and its fallbacks.
Returns:
- If the attribute is anything other than a method that outputs a Runnable,
- returns getattr(self.runnable, name). If the attribute is a method that
- does return a new Runnable (e.g. llm.bind_tools([...]) outputs a new
- RunnableBinding) then self.runnable and each of the runnables in
- self.fallbacks is replaced with getattr(x, name).
+ If the attribute is anything other than a method that outputs a `Runnable`,
+ returns `getattr(self.runnable, name)`. If the attribute is a method that
+ does return a new `Runnable` (e.g. `model.bind_tools([...])` outputs a new
+ `RunnableBinding`) then `self.runnable` and each of the runnables in
+ `self.fallbacks` is replaced with `getattr(x, name)`.
Example:
```python
@@ -604,21 +607,20 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
from langchain_anthropic import ChatAnthropic
gpt_4o = ChatOpenAI(model="gpt-4o")
- claude_3_sonnet = ChatAnthropic(model="claude-3-7-sonnet-20250219")
- llm = gpt_4o.with_fallbacks([claude_3_sonnet])
+ claude_3_sonnet = ChatAnthropic(model="claude-sonnet-4-5-20250929")
+ model = gpt_4o.with_fallbacks([claude_3_sonnet])
- llm.model_name
+ model.model_name
# -> "gpt-4o"
# .bind_tools() is called on both ChatOpenAI and ChatAnthropic
# Equivalent to:
# gpt_4o.bind_tools([...]).with_fallbacks([claude_3_sonnet.bind_tools([...])])
- llm.bind_tools([...])
+ model.bind_tools([...])
# -> RunnableWithFallbacks(
runnable=RunnableBinding(bound=ChatOpenAI(...), kwargs={"tools": [...]}),
fallbacks=[RunnableBinding(bound=ChatAnthropic(...), kwargs={"tools": [...]})],
)
-
```
""" # noqa: E501
attr = getattr(self.runnable, name)
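A compact, hedged illustration of the fallback ordering described above. The model names mirror the docstring example; both provider packages and their credentials are assumed.

```python
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

primary = ChatOpenAI(model="gpt-4o")
backup = ChatAnthropic(model="claude-sonnet-4-5-20250929")

# Tried in order: the primary model first, then each fallback, until one succeeds.
model = primary.with_fallbacks([backup])
model.invoke("Hello")
```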
diff --git a/libs/core/langchain_core/runnables/graph.py b/libs/core/langchain_core/runnables/graph.py
index 589b797fd2a..76f1cc1f5b0 100644
--- a/libs/core/langchain_core/runnables/graph.py
+++ b/libs/core/langchain_core/runnables/graph.py
@@ -52,7 +52,7 @@ def is_uuid(value: str) -> bool:
value: The string to check.
Returns:
- True if the string is a valid UUID, False otherwise.
+ `True` if the string is a valid UUID, `False` otherwise.
"""
try:
UUID(value)
@@ -69,16 +69,16 @@ class Edge(NamedTuple):
target: str
"""The target node id."""
data: Stringifiable | None = None
- """Optional data associated with the edge. Defaults to `None`."""
+ """Optional data associated with the edge. """
conditional: bool = False
- """Whether the edge is conditional. Defaults to `False`."""
+ """Whether the edge is conditional."""
def copy(self, *, source: str | None = None, target: str | None = None) -> Edge:
"""Return a copy of the edge with optional new source and target nodes.
Args:
- source: The new source node id. Defaults to `None`.
- target: The new target node id. Defaults to `None`.
+ source: The new source node id.
+ target: The new target node id.
Returns:
A copy of the edge with the new source and target nodes.
@@ -101,7 +101,7 @@ class Node(NamedTuple):
data: type[BaseModel] | RunnableType | None
"""The data of the node."""
metadata: dict[str, Any] | None
- """Optional metadata for the node. Defaults to `None`."""
+ """Optional metadata for the node. """
def copy(
self,
@@ -112,8 +112,8 @@ class Node(NamedTuple):
"""Return a copy of the node with optional new id and name.
Args:
- id: The new node id. Defaults to `None`.
- name: The new node name. Defaults to `None`.
+ id: The new node id.
+ name: The new node name.
Returns:
A copy of the node with the new id and name.
@@ -132,7 +132,7 @@ class Branch(NamedTuple):
condition: Callable[..., str]
"""A callable that returns a string representation of the condition."""
ends: dict[str, str] | None
- """Optional dictionary of end node ids for the branches. Defaults to `None`."""
+ """Optional dictionary of end node IDs for the branches. """
class CurveStyle(Enum):
@@ -157,9 +157,9 @@ class NodeStyles:
"""Schema for Hexadecimal color codes for different node types.
Args:
- default: The default color code. Defaults to "fill:#f2f0ff,line-height:1.2".
- first: The color code for the first node. Defaults to "fill-opacity:0".
- last: The color code for the last node. Defaults to "fill:#bfb6fc".
+ default: The default color code.
+ first: The color code for the first node.
+ last: The color code for the last node.
"""
default: str = "fill:#f2f0ff,line-height:1.2"
@@ -201,9 +201,9 @@ def node_data_json(
"""Convert the data of a node to a JSON-serializable format.
Args:
- node: The node to convert.
- with_schemas: Whether to include the schema of the data if
- it is a Pydantic model. Defaults to `False`.
+ node: The `Node` to convert.
+ with_schemas: Whether to include the schema of the data if it is a Pydantic
+ model.
Returns:
A dictionary with the type of the data and the data itself.
@@ -267,7 +267,7 @@ class Graph:
Args:
with_schemas: Whether to include the schemas of the nodes if they are
- Pydantic models. Defaults to `False`.
+ Pydantic models.
Returns:
A dictionary with the nodes and edges of the graph.
@@ -321,8 +321,8 @@ class Graph:
Args:
data: The data of the node.
- id: The id of the node. Defaults to `None`.
- metadata: Optional metadata for the node. Defaults to `None`.
+ id: The id of the node.
+ metadata: Optional metadata for the node.
Returns:
The node that was added to the graph.
@@ -361,8 +361,8 @@ class Graph:
Args:
source: The source node of the edge.
target: The target node of the edge.
- data: Optional data associated with the edge. Defaults to `None`.
- conditional: Whether the edge is conditional. Defaults to `False`.
+ data: Optional data associated with the edge.
+ conditional: Whether the edge is conditional.
Returns:
The edge that was added to the graph.
@@ -391,7 +391,7 @@ class Graph:
Args:
graph: The graph to add.
- prefix: The prefix to add to the node ids. Defaults to "".
+            prefix: The prefix to add to the node IDs.
Returns:
A tuple of the first and last nodes of the subgraph.
@@ -458,7 +458,7 @@ class Graph:
def first_node(self) -> Node | None:
"""Find the single node that is not a target of any edge.
- If there is no such node, or there are multiple, return None.
+ If there is no such node, or there are multiple, return `None`.
When drawing the graph, this node would be the origin.
Returns:
@@ -470,7 +470,7 @@ class Graph:
def last_node(self) -> Node | None:
"""Find the single node that is not a source of any edge.
- If there is no such node, or there are multiple, return None.
+ If there is no such node, or there are multiple, return `None`.
When drawing the graph, this node would be the destination.
Returns:
@@ -549,8 +549,8 @@ class Graph:
Args:
output_file_path: The path to save the image to. If `None`, the image
- is not saved. Defaults to `None`.
- fontname: The name of the font to use. Defaults to `None`.
+ is not saved.
+ fontname: The name of the font to use.
labels: Optional labels for nodes and edges in the graph. Defaults to
`None`.
@@ -585,14 +585,13 @@ class Graph:
"""Draw the graph as a Mermaid syntax string.
Args:
- with_styles: Whether to include styles in the syntax. Defaults to `True`.
- curve_style: The style of the edges. Defaults to CurveStyle.LINEAR.
- node_colors: The colors of the nodes. Defaults to NodeStyles().
+ with_styles: Whether to include styles in the syntax.
+ curve_style: The style of the edges.
+ node_colors: The colors of the nodes.
wrap_label_n_words: The number of words to wrap the node labels at.
- Defaults to 9.
frontmatter_config: Mermaid frontmatter config.
Can be used to customize theme and styles. Will be converted to YAML and
- added to the beginning of the mermaid graph. Defaults to `None`.
+ added to the beginning of the mermaid graph.
See more here: https://mermaid.js.org/config/configuration.html.
@@ -647,23 +646,19 @@ class Graph:
"""Draw the graph as a PNG image using Mermaid.
Args:
- curve_style: The style of the edges. Defaults to CurveStyle.LINEAR.
- node_colors: The colors of the nodes. Defaults to NodeStyles().
+ curve_style: The style of the edges.
+ node_colors: The colors of the nodes.
wrap_label_n_words: The number of words to wrap the node labels at.
- Defaults to 9.
output_file_path: The path to save the image to. If `None`, the image
- is not saved. Defaults to `None`.
+ is not saved.
draw_method: The method to use to draw the graph.
- Defaults to MermaidDrawMethod.API.
- background_color: The color of the background. Defaults to "white".
- padding: The padding around the graph. Defaults to 10.
- max_retries: The maximum number of retries (MermaidDrawMethod.API).
- Defaults to 1.
- retry_delay: The delay between retries (MermaidDrawMethod.API).
- Defaults to 1.0.
+ background_color: The color of the background.
+ padding: The padding around the graph.
+ max_retries: The maximum number of retries (`MermaidDrawMethod.API`).
+ retry_delay: The delay between retries (`MermaidDrawMethod.API`).
frontmatter_config: Mermaid frontmatter config.
Can be used to customize theme and styles. Will be converted to YAML and
- added to the beginning of the mermaid graph. Defaults to `None`.
+ added to the beginning of the mermaid graph.
See more here: https://mermaid.js.org/config/configuration.html.
@@ -679,7 +674,7 @@ class Graph:
}
```
base_url: The base URL of the Mermaid server for rendering via API.
- Defaults to `None`.
+
Returns:
The PNG image as bytes.
@@ -711,8 +706,10 @@ class Graph:
def _first_node(graph: Graph, exclude: Sequence[str] = ()) -> Node | None:
"""Find the single node that is not a target of any edge.
- Exclude nodes/sources with ids in the exclude list.
- If there is no such node, or there are multiple, return None.
+ Exclude nodes/sources with IDs in the exclude list.
+
+ If there is no such node, or there are multiple, return `None`.
+
When drawing the graph, this node would be the origin.
"""
targets = {edge.target for edge in graph.edges if edge.source not in exclude}
@@ -727,8 +724,10 @@ def _first_node(graph: Graph, exclude: Sequence[str] = ()) -> Node | None:
def _last_node(graph: Graph, exclude: Sequence[str] = ()) -> Node | None:
"""Find the single node that is not a source of any edge.
- Exclude nodes/targets with ids in the exclude list.
- If there is no such node, or there are multiple, return None.
+ Exclude nodes/targets with IDs in the exclude list.
+
+ If there is no such node, or there are multiple, return `None`.
+
When drawing the graph, this node would be the destination.
"""
sources = {edge.source for edge in graph.edges if edge.target not in exclude}
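To tie the `Graph` drawing options above together, a small sketch; ASCII rendering needs the optional `grandalf` package, and PNG rendering via the API needs network access.

```python
from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)
graph = chain.get_graph()

print(graph.draw_mermaid())  # Mermaid syntax string, honouring styles and curve options
# graph.draw_ascii()                                 # requires `grandalf`
# graph.draw_mermaid_png(output_file_path="g.png")   # renders via Mermaid.ink by default
```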
diff --git a/libs/core/langchain_core/runnables/graph_mermaid.py b/libs/core/langchain_core/runnables/graph_mermaid.py
index d6c6d2c7d4c..4f0e494b7c7 100644
--- a/libs/core/langchain_core/runnables/graph_mermaid.py
+++ b/libs/core/langchain_core/runnables/graph_mermaid.py
@@ -58,15 +58,15 @@ def draw_mermaid(
Args:
nodes: List of node ids.
edges: List of edges, object with a source, target and data.
- first_node: Id of the first node. Defaults to `None`.
- last_node: Id of the last node. Defaults to `None`.
- with_styles: Whether to include styles in the graph. Defaults to `True`.
- curve_style: Curve style for the edges. Defaults to CurveStyle.LINEAR.
- node_styles: Node colors for different types. Defaults to NodeStyles().
- wrap_label_n_words: Words to wrap the edge labels. Defaults to 9.
+ first_node: Id of the first node.
+ last_node: Id of the last node.
+ with_styles: Whether to include styles in the graph.
+ curve_style: Curve style for the edges.
+ node_styles: Node colors for different types.
+ wrap_label_n_words: Words to wrap the edge labels.
frontmatter_config: Mermaid frontmatter config.
Can be used to customize theme and styles. Will be converted to YAML and
- added to the beginning of the mermaid graph. Defaults to `None`.
+ added to the beginning of the mermaid graph.
See more here: https://mermaid.js.org/config/configuration.html.
@@ -286,13 +286,13 @@ def draw_mermaid_png(
Args:
mermaid_syntax: Mermaid graph syntax.
- output_file_path: Path to save the PNG image. Defaults to `None`.
- draw_method: Method to draw the graph. Defaults to MermaidDrawMethod.API.
- background_color: Background color of the image. Defaults to "white".
- padding: Padding around the image. Defaults to 10.
- max_retries: Maximum number of retries (MermaidDrawMethod.API). Defaults to 1.
- retry_delay: Delay between retries (MermaidDrawMethod.API). Defaults to 1.0.
- base_url: Base URL for the Mermaid.ink API. Defaults to `None`.
+ output_file_path: Path to save the PNG image.
+ draw_method: Method to draw the graph.
+ background_color: Background color of the image.
+ padding: Padding around the image.
+        max_retries: Maximum number of retries (`MermaidDrawMethod.API`).
+        retry_delay: Delay between retries (`MermaidDrawMethod.API`).
+ base_url: Base URL for the Mermaid.ink API.
Returns:
PNG image bytes.
@@ -454,7 +454,10 @@ def _render_mermaid_using_api(
return img_bytes
# If we get a server error (5xx), retry
- if 500 <= response.status_code < 600 and attempt < max_retries:
+ if (
+            requests.codes.internal_server_error <= response.status_code < 600
+ and attempt < max_retries
+ ):
# Exponential backoff with jitter
sleep_time = retry_delay * (2**attempt) * (0.5 + 0.5 * random.random()) # noqa: S311 not used for crypto
time.sleep(sleep_time)
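The retry knobs cleaned up above belong to `draw_mermaid_png`; a hedged sketch of calling it directly. The Mermaid snippet is a trivial placeholder, and the API draw method needs network access to Mermaid.ink.

```python
from langchain_core.runnables.graph import MermaidDrawMethod
from langchain_core.runnables.graph_mermaid import draw_mermaid_png

png_bytes = draw_mermaid_png(
    "graph TD;\n    A[start] --> B[end];",
    draw_method=MermaidDrawMethod.API,
    max_retries=3,    # retried only on 5xx responses, with exponential backoff
    retry_delay=1.0,  # base delay before jitter is applied
)

with open("graph.png", "wb") as f:
    f.write(png_bytes)
```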
diff --git a/libs/core/langchain_core/runnables/graph_png.py b/libs/core/langchain_core/runnables/graph_png.py
index a01a1c475f3..75cc2a796aa 100644
--- a/libs/core/langchain_core/runnables/graph_png.py
+++ b/libs/core/langchain_core/runnables/graph_png.py
@@ -45,7 +45,7 @@ class PngDrawer:
}
}
The keys are the original labels, and the values are the new labels.
- Defaults to `None`.
+
"""
self.fontname = fontname or "arial"
self.labels = labels or LabelsDict(nodes={}, edges={})
@@ -104,8 +104,8 @@ class PngDrawer:
viz: The graphviz object.
source: The source node.
target: The target node.
- label: The label for the edge. Defaults to `None`.
- conditional: Whether the edge is conditional. Defaults to `False`.
+ label: The label for the edge.
+ conditional: Whether the edge is conditional.
"""
viz.add_edge(
source,
diff --git a/libs/core/langchain_core/runnables/history.py b/libs/core/langchain_core/runnables/history.py
index ce77aac9450..91628a8b1d9 100644
--- a/libs/core/langchain_core/runnables/history.py
+++ b/libs/core/langchain_core/runnables/history.py
@@ -36,23 +36,23 @@ GetSessionHistoryCallable = Callable[..., BaseChatMessageHistory]
class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
- """Runnable that manages chat message history for another Runnable.
+ """`Runnable` that manages chat message history for another `Runnable`.
A chat message history is a sequence of messages that represent a conversation.
- RunnableWithMessageHistory wraps another Runnable and manages the chat message
+ `RunnableWithMessageHistory` wraps another `Runnable` and manages the chat message
history for it; it is responsible for reading and updating the chat message
history.
- The formats supported for the inputs and outputs of the wrapped Runnable
+ The formats supported for the inputs and outputs of the wrapped `Runnable`
are described below.
- RunnableWithMessageHistory must always be called with a config that contains
+ `RunnableWithMessageHistory` must always be called with a config that contains
the appropriate parameters for the chat message history factory.
- By default, the Runnable is expected to take a single configuration parameter
+ By default, the `Runnable` is expected to take a single configuration parameter
called `session_id` which is a string. This parameter is used to create a new
- or look up an existing chat message history that matches the given session_id.
+ or look up an existing chat message history that matches the given `session_id`.
In this case, the invocation would look like this:
@@ -117,12 +117,12 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
```
- Example where the wrapped Runnable takes a dictionary input:
+ Example where the wrapped `Runnable` takes a dictionary input:
```python
from typing import Optional
- from langchain_community.chat_models import ChatAnthropic
+ from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
@@ -166,7 +166,7 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
print(store) # noqa: T201
```
- Example where the session factory takes two keys, user_id and conversation id):
+ Example where the session factory takes two keys (`user_id` and `conversation_id`):
```python
store = {}
@@ -223,21 +223,28 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
"""
get_session_history: GetSessionHistoryCallable
- """Function that returns a new BaseChatMessageHistory.
+ """Function that returns a new `BaseChatMessageHistory`.
+
This function should either take a single positional argument `session_id` of type
- string and return a corresponding chat message history instance"""
+    string and return a corresponding chat message history instance.
+ """
input_messages_key: str | None = None
- """Must be specified if the base runnable accepts a dict as input.
- The key in the input dict that contains the messages."""
+ """Must be specified if the base `Runnable` accepts a `dict` as input.
+ The key in the input `dict` that contains the messages.
+ """
output_messages_key: str | None = None
- """Must be specified if the base Runnable returns a dict as output.
- The key in the output dict that contains the messages."""
+ """Must be specified if the base `Runnable` returns a `dict` as output.
+ The key in the output `dict` that contains the messages.
+ """
history_messages_key: str | None = None
- """Must be specified if the base runnable accepts a dict as input and expects a
- separate key for historical messages."""
+ """Must be specified if the base `Runnable` accepts a `dict` as input and expects a
+ separate key for historical messages.
+ """
history_factory_config: Sequence[ConfigurableFieldSpec]
"""Configure fields that should be passed to the chat history factory.
- See `ConfigurableFieldSpec` for more details."""
+
+ See `ConfigurableFieldSpec` for more details.
+ """
def __init__(
self,
@@ -254,15 +261,16 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
history_factory_config: Sequence[ConfigurableFieldSpec] | None = None,
**kwargs: Any,
) -> None:
- """Initialize RunnableWithMessageHistory.
+ """Initialize `RunnableWithMessageHistory`.
Args:
- runnable: The base Runnable to be wrapped.
+ runnable: The base `Runnable` to be wrapped.
+
Must take as input one of:
1. A list of `BaseMessage`
- 2. A dict with one key for all messages
- 3. A dict with one key for the current input string/message(s) and
+ 2. A `dict` with one key for all messages
+ 3. A `dict` with one key for the current input string/message(s) and
a separate key for historical messages. If the input key points
to a string, it will be treated as a `HumanMessage` in history.
@@ -270,13 +278,15 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
1. A string which can be treated as an `AIMessage`
2. A `BaseMessage` or sequence of `BaseMessage`
- 3. A dict with a key for a `BaseMessage` or sequence of
+ 3. A `dict` with a key for a `BaseMessage` or sequence of
`BaseMessage`
- get_session_history: Function that returns a new BaseChatMessageHistory.
+ get_session_history: Function that returns a new `BaseChatMessageHistory`.
+
This function should either take a single positional argument
`session_id` of type string and return a corresponding
chat message history instance.
+
```python
def get_session_history(
session_id: str, *, user_id: str | None = None
@@ -295,16 +305,17 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
) -> BaseChatMessageHistory: ...
```
- input_messages_key: Must be specified if the base runnable accepts a dict
- as input. Default is None.
- output_messages_key: Must be specified if the base runnable returns a dict
- as output. Default is None.
- history_messages_key: Must be specified if the base runnable accepts a dict
- as input and expects a separate key for historical messages.
+            input_messages_key: Must be specified if the base `Runnable` accepts a
+                `dict` as input.
+            output_messages_key: Must be specified if the base `Runnable` returns a
+                `dict` as output.
+            history_messages_key: Must be specified if the base `Runnable` accepts a
+                `dict` as input and expects a separate key for historical messages.
history_factory_config: Configure fields that should be passed to the
chat history factory. See `ConfigurableFieldSpec` for more details.
- Specifying these allows you to pass multiple config keys
- into the get_session_history factory.
+
+ Specifying these allows you to pass multiple config keys into the
+ `get_session_history` factory.
**kwargs: Arbitrary additional kwargs to pass to parent class
`RunnableBindingBase` init.
@@ -364,7 +375,7 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
@property
@override
def config_specs(self) -> list[ConfigurableFieldSpec]:
- """Get the configuration specs for the RunnableWithMessageHistory."""
+ """Get the configuration specs for the `RunnableWithMessageHistory`."""
return get_unique_config_specs(
super().config_specs + list(self.history_factory_config)
)
@@ -606,6 +617,6 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
def _get_parameter_names(callable_: GetSessionHistoryCallable) -> list[str]:
- """Get the parameter names of the callable."""
+ """Get the parameter names of the `Callable`."""
sig = inspect.signature(callable_)
return list(sig.parameters.keys())
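For context, the history wrapping described in the hunks above can be exercised end to end. A minimal sketch, assuming an in-memory history store and a trivial `RunnableLambda` standing in for a real chat model:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables import RunnableLambda
from langchain_core.runnables.history import RunnableWithMessageHistory

store: dict[str, InMemoryChatMessageHistory] = {}


def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    # Create or look up the chat message history for the given session_id.
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]


# Stand-in for a chat model: reports how many messages it was given.
echo = RunnableLambda(lambda messages: f"seen {len(messages)} message(s)")

chain = RunnableWithMessageHistory(echo, get_session_history)

# session_id is passed through the "configurable" section of the config.
config = {"configurable": {"session_id": "abc"}}
print(chain.invoke("hi", config=config))     # seen 1 message(s)
print(chain.invoke("again", config=config))  # seen 3 message(s)
```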
diff --git a/libs/core/langchain_core/runnables/passthrough.py b/libs/core/langchain_core/runnables/passthrough.py
index 76a0aa2126a..740bcab8d24 100644
--- a/libs/core/langchain_core/runnables/passthrough.py
+++ b/libs/core/langchain_core/runnables/passthrough.py
@@ -51,10 +51,10 @@ def identity(x: Other) -> Other:
"""Identity function.
Args:
- x: input.
+ x: Input.
Returns:
- output.
+ Output.
"""
return x
@@ -63,10 +63,10 @@ async def aidentity(x: Other) -> Other:
"""Async identity function.
Args:
- x: input.
+ x: Input.
Returns:
- output.
+ Output.
"""
return x
@@ -74,11 +74,11 @@ async def aidentity(x: Other) -> Other:
class RunnablePassthrough(RunnableSerializable[Other, Other]):
"""Runnable to passthrough inputs unchanged or with additional keys.
- This Runnable behaves almost like the identity function, except that it
+ This `Runnable` behaves almost like the identity function, except that it
can be configured to add additional keys to the output, if the input is a
dict.
- The examples below demonstrate this Runnable works using a few simple
+ The examples below demonstrate this `Runnable` works using a few simple
chains. The chains rely on simple lambdas to make the examples easy to execute
and experiment with.
@@ -164,7 +164,7 @@ class RunnablePassthrough(RunnableSerializable[Other, Other]):
input_type: type[Other] | None = None,
**kwargs: Any,
) -> None:
- """Create e RunnablePassthrough.
+ """Create a `RunnablePassthrough`.
Args:
func: Function to be called with the input.
@@ -180,12 +180,12 @@ class RunnablePassthrough(RunnableSerializable[Other, Other]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "runnable"]`
@@ -213,11 +213,11 @@ class RunnablePassthrough(RunnableSerializable[Other, Other]):
"""Merge the Dict input with the output produced by the mapping argument.
Args:
- **kwargs: Runnable, Callable or a Mapping from keys to Runnables
- or Callables.
+ **kwargs: `Runnable`, `Callable` or a `Mapping` from keys to `Runnable`
+ objects or `Callable`s.
Returns:
- A Runnable that merges the Dict input with the output produced by the
+ A `Runnable` that merges the `dict` input with the output produced by the
mapping argument.
"""
return RunnableAssign(RunnableParallel[dict[str, Any]](kwargs))
@@ -350,7 +350,7 @@ _graph_passthrough: RunnablePassthrough = RunnablePassthrough()
class RunnableAssign(RunnableSerializable[dict[str, Any], dict[str, Any]]):
- """Runnable that assigns key-value pairs to dict[str, Any] inputs.
+    """`Runnable` that assigns key-value pairs to `dict[str, Any]` inputs.
The `RunnableAssign` class takes input dictionaries and, through a
`RunnableParallel` instance, applies transformations, then combines
@@ -392,7 +392,7 @@ class RunnableAssign(RunnableSerializable[dict[str, Any], dict[str, Any]]):
mapper: RunnableParallel
def __init__(self, mapper: RunnableParallel[dict[str, Any]], **kwargs: Any) -> None:
- """Create a RunnableAssign.
+ """Create a `RunnableAssign`.
Args:
mapper: A `RunnableParallel` instance that will be used to transform the
@@ -403,13 +403,13 @@ class RunnableAssign(RunnableSerializable[dict[str, Any], dict[str, Any]]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
@override
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "runnable"]`
@@ -668,13 +668,19 @@ class RunnableAssign(RunnableSerializable[dict[str, Any], dict[str, Any]]):
yield chunk
-class RunnablePick(RunnableSerializable[dict[str, Any], dict[str, Any]]):
- """Runnable that picks keys from dict[str, Any] inputs.
+class RunnablePick(RunnableSerializable[dict[str, Any], Any]):
+ """`Runnable` that picks keys from `dict[str, Any]` inputs.
- RunnablePick class represents a Runnable that selectively picks keys from a
+ `RunnablePick` class represents a `Runnable` that selectively picks keys from a
dictionary input. It allows you to specify one or more keys to extract
- from the input dictionary. It returns a new dictionary containing only
- the selected keys.
+ from the input dictionary.
+
+ !!! note "Return Type Behavior"
+ The return type depends on the `keys` parameter:
+
+ - When `keys` is a `str`: Returns the single value associated with that key
+ - When `keys` is a `list`: Returns a dictionary containing only the selected
+ keys
Example:
```python
@@ -687,18 +693,22 @@ class RunnablePick(RunnableSerializable[dict[str, Any], dict[str, Any]]):
"country": "USA",
}
- runnable = RunnablePick(keys=["name", "age"])
+ # Single key - returns the value directly
+ runnable_single = RunnablePick(keys="name")
+ result_single = runnable_single.invoke(input_data)
+ print(result_single) # Output: "John"
- output_data = runnable.invoke(input_data)
-
- print(output_data) # Output: {'name': 'John', 'age': 30}
+ # Multiple keys - returns a dictionary
+ runnable_multiple = RunnablePick(keys=["name", "age"])
+ result_multiple = runnable_multiple.invoke(input_data)
+ print(result_multiple) # Output: {'name': 'John', 'age': 30}
```
"""
keys: str | list[str]
def __init__(self, keys: str | list[str], **kwargs: Any) -> None:
- """Create a RunnablePick.
+ """Create a `RunnablePick`.
Args:
keys: A single key or a list of keys to pick from the input dictionary.
@@ -708,13 +718,13 @@ class RunnablePick(RunnableSerializable[dict[str, Any], dict[str, Any]]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
@override
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "runnable"]`
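A small illustrative sketch of how the passthrough, assign and pick pieces documented above fit together (keys and values are made up for the example):

```python
from langchain_core.runnables import RunnablePassthrough
from langchain_core.runnables.passthrough import RunnablePick

# Keep the original input and add a derived key alongside it.
enrich = RunnablePassthrough.assign(doubled=lambda d: d["num"] * 2)
print(enrich.invoke({"num": 3}))  # {'num': 3, 'doubled': 6}

# A single key returns the bare value, a list of keys returns a filtered dict.
print((enrich | RunnablePick("doubled")).invoke({"num": 3}))           # 6
print((enrich | RunnablePick(["num", "doubled"])).invoke({"num": 3}))  # {'num': 3, 'doubled': 6}
```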
diff --git a/libs/core/langchain_core/runnables/retry.py b/libs/core/langchain_core/runnables/retry.py
index ed0adf8eaf7..03466718274 100644
--- a/libs/core/langchain_core/runnables/retry.py
+++ b/libs/core/langchain_core/runnables/retry.py
@@ -126,7 +126,7 @@ class RunnableRetry(RunnableBindingBase[Input, Output]): # type: ignore[no-rede
exponential_jitter_params: ExponentialJitterParams | None = None
"""Parameters for `tenacity.wait_exponential_jitter`. Namely: `initial`,
- `max`, `exp_base`, and `jitter` (all float values).
+ `max`, `exp_base`, and `jitter` (all `float` values).
"""
max_attempt_number: int = 3
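`RunnableRetry` is normally created through `Runnable.with_retry`, which also accepts the `exponential_jitter_params` field documented above. A minimal sketch with an intentionally flaky function (the attempt counts are illustrative):

```python
from langchain_core.runnables import RunnableLambda

attempts = 0


def flaky(x: int) -> int:
    # Fail on the first call, succeed afterwards.
    global attempts
    attempts += 1
    if attempts < 2:
        raise ValueError("transient failure")
    return x * 10


chain = RunnableLambda(flaky).with_retry(
    stop_after_attempt=3,
    wait_exponential_jitter=False,  # keep the example deterministic
)
print(chain.invoke(4))  # 40, after one retried failure
```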
diff --git a/libs/core/langchain_core/runnables/router.py b/libs/core/langchain_core/runnables/router.py
index 89b500d5e09..d9b4d44b8c3 100644
--- a/libs/core/langchain_core/runnables/router.py
+++ b/libs/core/langchain_core/runnables/router.py
@@ -40,11 +40,11 @@ class RouterInput(TypedDict):
key: str
"""The key to route on."""
input: Any
- """The input to pass to the selected Runnable."""
+ """The input to pass to the selected `Runnable`."""
class RouterRunnable(RunnableSerializable[RouterInput, Output]):
- """Runnable that routes to a set of Runnables based on Input['key'].
+    """`Runnable` that routes to a set of `Runnable` objects based on `Input['key']`.
Returns the output of the selected Runnable.
@@ -74,10 +74,10 @@ class RouterRunnable(RunnableSerializable[RouterInput, Output]):
self,
runnables: Mapping[str, Runnable[Any, Output] | Callable[[Any], Output]],
) -> None:
- """Create a RouterRunnable.
+ """Create a `RouterRunnable`.
Args:
- runnables: A mapping of keys to Runnables.
+ runnables: A mapping of keys to `Runnable` objects.
"""
super().__init__(
runnables={key: coerce_to_runnable(r) for key, r in runnables.items()}
@@ -90,13 +90,13 @@ class RouterRunnable(RunnableSerializable[RouterInput, Output]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
- """Return True as this class is serializable."""
+ """Return `True` as this class is serializable."""
return True
@classmethod
@override
def get_lc_namespace(cls) -> list[str]:
- """Get the namespace of the langchain object.
+ """Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "runnable"]`
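Routing on `Input['key']` as described above can be sketched with two trivial branches (branch names are illustrative):

```python
from langchain_core.runnables import RunnableLambda
from langchain_core.runnables.router import RouterRunnable

router = RouterRunnable(
    {
        "square": RunnableLambda(lambda x: x * x),
        "negate": RunnableLambda(lambda x: -x),
    }
)

# The input is a dict carrying the routing key and the payload for that branch.
print(router.invoke({"key": "square", "input": 4}))  # 16
print(router.invoke({"key": "negate", "input": 4}))  # -4
```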
diff --git a/libs/core/langchain_core/runnables/schema.py b/libs/core/langchain_core/runnables/schema.py
index 828085fd435..919b7db4108 100644
--- a/libs/core/langchain_core/runnables/schema.py
+++ b/libs/core/langchain_core/runnables/schema.py
@@ -1,4 +1,4 @@
-"""Module contains typedefs that are used with Runnables."""
+"""Module contains typedefs that are used with `Runnable` objects."""
from __future__ import annotations
@@ -14,43 +14,43 @@ class EventData(TypedDict, total=False):
"""Data associated with a streaming event."""
input: Any
- """The input passed to the Runnable that generated the event.
+ """The input passed to the `Runnable` that generated the event.
- Inputs will sometimes be available at the *START* of the Runnable, and
- sometimes at the *END* of the Runnable.
+ Inputs will sometimes be available at the *START* of the `Runnable`, and
+ sometimes at the *END* of the `Runnable`.
- If a Runnable is able to stream its inputs, then its input by definition
- won't be known until the *END* of the Runnable when it has finished streaming
+ If a `Runnable` is able to stream its inputs, then its input by definition
+ won't be known until the *END* of the `Runnable` when it has finished streaming
its inputs.
"""
error: NotRequired[BaseException]
- """The error that occurred during the execution of the Runnable.
+ """The error that occurred during the execution of the `Runnable`.
- This field is only available if the Runnable raised an exception.
+ This field is only available if the `Runnable` raised an exception.
- !!! version-added "Added in version 1.0.0"
+ !!! version-added "Added in `langchain-core` 1.0.0"
"""
output: Any
- """The output of the Runnable that generated the event.
+ """The output of the `Runnable` that generated the event.
- Outputs will only be available at the *END* of the Runnable.
+ Outputs will only be available at the *END* of the `Runnable`.
- For most Runnables, this field can be inferred from the `chunk` field,
- though there might be some exceptions for special cased Runnables (e.g., like
+ For most `Runnable` objects, this field can be inferred from the `chunk` field,
+    though there might be some exceptions for special-cased `Runnable` objects (e.g.,
chat models), which may return more information.
"""
chunk: Any
"""A streaming chunk from the output that generated the event.
chunks support addition in general, and adding them up should result
- in the output of the Runnable that generated the event.
+ in the output of the `Runnable` that generated the event.
"""
class BaseStreamEvent(TypedDict):
"""Streaming event.
- Schema of a streaming event which is produced from the astream_events method.
+ Schema of a streaming event which is produced from the `astream_events` method.
Example:
```python
@@ -65,7 +65,7 @@ class BaseStreamEvent(TypedDict):
events = [event async for event in chain.astream_events("hello")]
- # will produce the following events
+ # Will produce the following events
# (where some fields have been omitted for brevity):
[
{
@@ -94,45 +94,45 @@ class BaseStreamEvent(TypedDict):
"""
event: str
- """Event names are of the format: on_[runnable_type]_(start|stream|end).
+ """Event names are of the format: `on_[runnable_type]_(start|stream|end)`.
Runnable types are one of:
- **llm** - used by non chat models
- **chat_model** - used by chat models
- - **prompt** -- e.g., ChatPromptTemplate
- - **tool** -- from tools defined via @tool decorator or inheriting
- from Tool/BaseTool
- - **chain** - most Runnables are of this type
+ - **prompt** -- e.g., `ChatPromptTemplate`
+ - **tool** -- from tools defined via `@tool` decorator or inheriting
+ from `Tool`/`BaseTool`
+ - **chain** - most `Runnable` objects are of this type
Further, the events are categorized as one of:
- - **start** - when the Runnable starts
- - **stream** - when the Runnable is streaming
- - **end* - when the Runnable ends
+ - **start** - when the `Runnable` starts
+ - **stream** - when the `Runnable` is streaming
+    - **end** - when the `Runnable` ends
start, stream and end are associated with slightly different `data` payload.
Please see the documentation for `EventData` for more details.
"""
run_id: str
- """An randomly generated ID to keep track of the execution of the given Runnable.
+    """A randomly generated ID to keep track of the execution of the given `Runnable`.
- Each child Runnable that gets invoked as part of the execution of a parent Runnable
- is assigned its own unique ID.
+ Each child `Runnable` that gets invoked as part of the execution of a parent
+ `Runnable` is assigned its own unique ID.
"""
tags: NotRequired[list[str]]
- """Tags associated with the Runnable that generated this event.
+ """Tags associated with the `Runnable` that generated this event.
- Tags are always inherited from parent Runnables.
+ Tags are always inherited from parent `Runnable` objects.
- Tags can either be bound to a Runnable using `.with_config({"tags": ["hello"]})`
+ Tags can either be bound to a `Runnable` using `.with_config({"tags": ["hello"]})`
or passed at run time using `.astream_events(..., {"tags": ["hello"]})`.
"""
metadata: NotRequired[dict[str, Any]]
- """Metadata associated with the Runnable that generated this event.
+ """Metadata associated with the `Runnable` that generated this event.
- Metadata can either be bound to a Runnable using
+ Metadata can either be bound to a `Runnable` using
`.with_config({"metadata": { "foo": "bar" }})`
@@ -146,8 +146,8 @@ class BaseStreamEvent(TypedDict):
Root Events will have an empty list.
- For example, if a Runnable A calls Runnable B, then the event generated by Runnable
- B will have Runnable A's ID in the parent_ids field.
+ For example, if a `Runnable` A calls `Runnable` B, then the event generated by
+ `Runnable` B will have `Runnable` A's ID in the `parent_ids` field.
The order of the parent IDs is from the root parent to the immediate parent.
@@ -164,14 +164,11 @@ class StandardStreamEvent(BaseStreamEvent):
The contents of the event data depend on the event type.
"""
name: str
- """The name of the Runnable that generated the event."""
+ """The name of the `Runnable` that generated the event."""
class CustomStreamEvent(BaseStreamEvent):
- """Custom stream event created by the user.
-
- !!! version-added "Added in version 0.2.15"
- """
+ """Custom stream event created by the user."""
# Overwrite the event field to be more specific.
event: Literal["on_custom_event"] # type: ignore[misc]
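The event schema above is what `astream_events` yields at runtime; a small sketch that filters for the end event of a stand-in chain (older `langchain-core` releases may additionally require a `version="v2"` argument):

```python
import asyncio

from langchain_core.runnables import RunnableLambda


async def main() -> None:
    chain = RunnableLambda(lambda x: x.upper())

    async for event in chain.astream_events("hello"):
        # Event names follow on_[runnable_type]_(start|stream|end).
        if event["event"] == "on_chain_end":
            print(event["name"], event["data"]["output"])


asyncio.run(main())
```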
diff --git a/libs/core/langchain_core/runnables/utils.py b/libs/core/langchain_core/runnables/utils.py
index 51318bee13d..4860e70bc2c 100644
--- a/libs/core/langchain_core/runnables/utils.py
+++ b/libs/core/langchain_core/runnables/utils.py
@@ -5,6 +5,7 @@ from __future__ import annotations
import ast
import asyncio
import inspect
+import sys
import textwrap
from collections.abc import Callable, Mapping, Sequence
from contextvars import Context
@@ -80,7 +81,7 @@ def accepts_run_manager(callable: Callable[..., Any]) -> bool: # noqa: A002
callable: The callable to check.
Returns:
- True if the callable accepts a run_manager argument, False otherwise.
+ `True` if the callable accepts a run_manager argument, `False` otherwise.
"""
try:
return signature(callable).parameters.get("run_manager") is not None
@@ -95,7 +96,7 @@ def accepts_config(callable: Callable[..., Any]) -> bool: # noqa: A002
callable: The callable to check.
Returns:
- True if the callable accepts a config argument, False otherwise.
+ `True` if the callable accepts a config argument, `False` otherwise.
"""
try:
return signature(callable).parameters.get("config") is not None
@@ -110,7 +111,7 @@ def accepts_context(callable: Callable[..., Any]) -> bool: # noqa: A002
callable: The callable to check.
Returns:
- True if the callable accepts a context argument, False otherwise.
+ `True` if the callable accepts a context argument, `False` otherwise.
"""
try:
return signature(callable).parameters.get("context") is not None
@@ -118,14 +119,13 @@ def accepts_context(callable: Callable[..., Any]) -> bool: # noqa: A002
return False
-@lru_cache(maxsize=1)
def asyncio_accepts_context() -> bool:
- """Cache the result of checking if asyncio.create_task accepts a `context` arg.
+    """Check if `asyncio.create_task` accepts a `context` arg.
Returns:
- True if `asyncio.create_task` accepts a context argument, False otherwise.
+        `True` if `asyncio.create_task` accepts a context argument, `False` otherwise.
"""
- return accepts_context(asyncio.create_task)
+ return sys.version_info >= (3, 11)
def coro_with_context(
@@ -136,7 +136,7 @@ def coro_with_context(
Args:
coro: The coroutine to await.
context: The context to use.
- create_task: Whether to create a task. Defaults to `False`.
+ create_task: Whether to create a task.
Returns:
The coroutine with the context.
@@ -552,13 +552,13 @@ class ConfigurableField(NamedTuple):
id: str
"""The unique identifier of the field."""
name: str | None = None
- """The name of the field. Defaults to `None`."""
+    """The name of the field."""
description: str | None = None
- """The description of the field. Defaults to `None`."""
+    """The description of the field."""
annotation: Any | None = None
- """The annotation of the field. Defaults to `None`."""
+    """The annotation of the field."""
is_shared: bool = False
- """Whether the field is shared. Defaults to `False`."""
+ """Whether the field is shared."""
@override
def __hash__(self) -> int:
@@ -575,11 +575,11 @@ class ConfigurableFieldSingleOption(NamedTuple):
default: str
"""The default value for the field."""
name: str | None = None
- """The name of the field. Defaults to `None`."""
+    """The name of the field."""
description: str | None = None
- """The description of the field. Defaults to `None`."""
+    """The description of the field."""
is_shared: bool = False
- """Whether the field is shared. Defaults to `False`."""
+ """Whether the field is shared."""
@override
def __hash__(self) -> int:
@@ -596,11 +596,11 @@ class ConfigurableFieldMultiOption(NamedTuple):
default: Sequence[str]
"""The default values for the field."""
name: str | None = None
- """The name of the field. Defaults to `None`."""
+    """The name of the field."""
description: str | None = None
- """The description of the field. Defaults to `None`."""
+    """The description of the field."""
is_shared: bool = False
- """Whether the field is shared. Defaults to `False`."""
+ """Whether the field is shared."""
@override
def __hash__(self) -> int:
@@ -620,15 +620,15 @@ class ConfigurableFieldSpec(NamedTuple):
annotation: Any
"""The annotation of the field."""
name: str | None = None
- """The name of the field. Defaults to `None`."""
+    """The name of the field."""
description: str | None = None
- """The description of the field. Defaults to `None`."""
+    """The description of the field."""
default: Any = None
- """The default value for the field. Defaults to `None`."""
+    """The default value for the field."""
is_shared: bool = False
- """Whether the field is shared. Defaults to `False`."""
+ """Whether the field is shared."""
dependencies: list[str] | None = None
- """The dependencies of the field. Defaults to `None`."""
+    """The dependencies of the field."""
def get_unique_config_specs(
@@ -727,7 +727,7 @@ def is_async_generator(
func: The function to check.
Returns:
- True if the function is an async generator, False otherwise.
+ `True` if the function is an async generator, `False` otherwise.
"""
return inspect.isasyncgenfunction(func) or (
hasattr(func, "__call__") # noqa: B004
@@ -744,7 +744,7 @@ def is_async_callable(
func: The function to check.
Returns:
- True if the function is async, False otherwise.
+ `True` if the function is async, `False` otherwise.
"""
return asyncio.iscoroutinefunction(func) or (
hasattr(func, "__call__") # noqa: B004
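The helper predicates touched in this file are easy to sanity-check directly; an illustrative sketch:

```python
from langchain_core.runnables.utils import (
    asyncio_accepts_context,
    is_async_callable,
    is_async_generator,
)


async def agen():
    yield 1


async def afn():
    return 1


def fn():
    return 1


print(is_async_generator(agen))   # True
print(is_async_callable(afn))     # True
print(is_async_callable(fn))      # False
print(asyncio_accepts_context())  # True on Python 3.11+
```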
diff --git a/libs/core/langchain_core/stores.py b/libs/core/langchain_core/stores.py
index 2c570b66580..77408d60962 100644
--- a/libs/core/langchain_core/stores.py
+++ b/libs/core/langchain_core/stores.py
@@ -86,7 +86,7 @@ class BaseStore(ABC, Generic[K, V]):
Returns:
A sequence of optional values associated with the keys.
- If a key is not found, the corresponding value will be None.
+ If a key is not found, the corresponding value will be `None`.
"""
async def amget(self, keys: Sequence[K]) -> list[V | None]:
@@ -97,7 +97,7 @@ class BaseStore(ABC, Generic[K, V]):
Returns:
A sequence of optional values associated with the keys.
- If a key is not found, the corresponding value will be None.
+ If a key is not found, the corresponding value will be `None`.
"""
return await run_in_executor(None, self.mget, keys)
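The `mget` contract described above (missing keys map to `None` rather than raising) is easy to see with the in-memory implementation; a brief sketch:

```python
from langchain_core.stores import InMemoryByteStore

store = InMemoryByteStore()
store.mset([("k1", b"v1"), ("k2", b"v2")])

print(store.mget(["k1", "missing"]))       # [b'v1', None]
print(list(store.yield_keys(prefix="k")))  # ['k1', 'k2']

store.mdelete(["k1"])
print(store.mget(["k1"]))                  # [None]
```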
@@ -209,7 +209,7 @@ class InMemoryBaseStore(BaseStore[str, V], Generic[V]):
"""Get an iterator over keys that match the given prefix.
Args:
- prefix: The prefix to match. Defaults to `None`.
+ prefix: The prefix to match.
Yields:
The keys that match the given prefix.
@@ -225,7 +225,7 @@ class InMemoryBaseStore(BaseStore[str, V], Generic[V]):
"""Async get an async iterator over keys that match the given prefix.
Args:
- prefix: The prefix to match. Defaults to `None`.
+ prefix: The prefix to match.
Yields:
The keys that match the given prefix.
@@ -243,8 +243,7 @@ class InMemoryStore(InMemoryBaseStore[Any]):
"""In-memory store for any type of data.
Attributes:
- store (dict[str, Any]): The underlying dictionary that stores
- the key-value pairs.
+ store: The underlying dictionary that stores the key-value pairs.
Examples:
```python
@@ -267,8 +266,7 @@ class InMemoryByteStore(InMemoryBaseStore[bytes]):
"""In-memory store for bytes.
Attributes:
- store (dict[str, bytes]): The underlying dictionary that stores
- the key-value pairs.
+ store: The underlying dictionary that stores the key-value pairs.
Examples:
```python
diff --git a/libs/core/langchain_core/sys_info.py b/libs/core/langchain_core/sys_info.py
index ac83ad7d42e..86716af4714 100644
--- a/libs/core/langchain_core/sys_info.py
+++ b/libs/core/langchain_core/sys_info.py
@@ -125,9 +125,11 @@ def print_sys_info(*, additional_pkgs: Sequence[str] = ()) -> None:
for dep in sub_dependencies:
try:
dep_version = metadata.version(dep)
- print(f"> {dep}: {dep_version}")
except Exception:
- print(f"> {dep}: Installed. No version info available.")
+ dep_version = None
+
+ if dep_version is not None:
+ print(f"> {dep}: {dep_version}")
if __name__ == "__main__":
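With this change, sub-dependencies without resolvable version metadata are simply omitted from the report; usage is unchanged (the extra package names below are illustrative):

```python
from langchain_core.sys_info import print_sys_info

# Prints Python/OS details plus versions of installed langchain packages;
# additional distributions can be requested explicitly.
print_sys_info(additional_pkgs=["pydantic", "typing-extensions"])
```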
diff --git a/libs/core/langchain_core/tools/base.py b/libs/core/langchain_core/tools/base.py
index adeb49e6150..64648273e0f 100644
--- a/libs/core/langchain_core/tools/base.py
+++ b/libs/core/langchain_core/tools/base.py
@@ -92,7 +92,7 @@ def _is_annotated_type(typ: type[Any]) -> bool:
typ: The type to check.
Returns:
- True if the type is an Annotated type, False otherwise.
+ `True` if the type is an Annotated type, `False` otherwise.
"""
return get_origin(typ) is typing.Annotated
@@ -226,7 +226,7 @@ def _is_pydantic_annotation(annotation: Any, pydantic_version: str = "v2") -> bo
pydantic_version: The Pydantic version to check against ("v1" or "v2").
Returns:
- True if the annotation is a Pydantic model, False otherwise.
+ `True` if the annotation is a Pydantic model, `False` otherwise.
"""
base_model_class = BaseModelV1 if pydantic_version == "v1" else BaseModel
try:
@@ -245,7 +245,7 @@ def _function_annotations_are_pydantic_v1(
func: The function being checked.
Returns:
- True if all Pydantic annotations are from V1, False otherwise.
+        `True` if all Pydantic annotations are from V1, `False` otherwise.
Raises:
NotImplementedError: If the function contains mixed V1 and V2 annotations.
@@ -285,18 +285,17 @@ def create_schema_from_function(
error_on_invalid_docstring: bool = False,
include_injected: bool = True,
) -> type[BaseModel]:
- """Create a pydantic schema from a function's signature.
+ """Create a Pydantic schema from a function's signature.
Args:
- model_name: Name to assign to the generated pydantic schema.
+ model_name: Name to assign to the generated Pydantic schema.
func: Function to generate the schema from.
filter_args: Optional list of arguments to exclude from the schema.
- Defaults to FILTERED_ARGS.
+ Defaults to `FILTERED_ARGS`.
parse_docstring: Whether to parse the function's docstring for descriptions
- for each argument. Defaults to `False`.
+ for each argument.
error_on_invalid_docstring: if `parse_docstring` is provided, configure
- whether to raise ValueError on invalid Google Style docstrings.
- Defaults to `False`.
+ whether to raise `ValueError` on invalid Google Style docstrings.
include_injected: Whether to include injected arguments in the schema.
Defaults to `True`, since we want to include them in the schema
when *validating* tool inputs.
@@ -312,7 +311,7 @@ def create_schema_from_function(
# https://docs.pydantic.dev/latest/usage/validation_decorator/
with warnings.catch_warnings():
# We are using deprecated functionality here.
- # This code should be re-written to simply construct a pydantic model
+ # This code should be re-written to simply construct a Pydantic model
# using inspect.signature and create_model.
warnings.simplefilter("ignore", category=PydanticDeprecationWarning)
validated = validate_arguments(func, config=_SchemaConfig) # type: ignore[operator]
@@ -392,6 +391,7 @@ class BaseTool(RunnableSerializable[str | dict | ToolCall, Any]):
"""Base class for all LangChain tools.
This abstract class defines the interface that all LangChain tools must implement.
+
Tools are components that can be called by agents to perform specific actions.
"""
@@ -402,7 +402,7 @@ class BaseTool(RunnableSerializable[str | dict | ToolCall, Any]):
**kwargs: Additional keyword arguments passed to the parent class.
Raises:
- SchemaAnnotationError: If args_schema has incorrect type annotation.
+ SchemaAnnotationError: If `args_schema` has incorrect type annotation.
"""
super().__init_subclass__(**kwargs)
@@ -443,15 +443,15 @@ class ChildTool(BaseTool):
Args schema should be either:
- - A subclass of pydantic.BaseModel.
- - A subclass of pydantic.v1.BaseModel if accessing v1 namespace in pydantic 2
- - a JSON schema dict
+ - A subclass of `pydantic.BaseModel`.
+    - A subclass of `pydantic.v1.BaseModel` if accessing v1 namespace in Pydantic 2.
+    - A JSON schema dict.
"""
return_direct: bool = False
"""Whether to return the tool's output directly.
- Setting this to True means
- that after the tool is called, the AgentExecutor will stop looping.
+ Setting this to `True` means that after the tool is called, the `AgentExecutor` will
+ stop looping.
"""
verbose: bool = False
"""Whether to log the tool's progress."""
@@ -460,32 +460,38 @@ class ChildTool(BaseTool):
"""Callbacks to be called during tool execution."""
tags: list[str] | None = None
- """Optional list of tags associated with the tool. Defaults to `None`.
+ """Optional list of tags associated with the tool.
+
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in `callbacks`.
- You can use these to eg identify a specific instance of a tool with its use case.
+
+ You can use these to, e.g., identify a specific instance of a tool with its use
+ case.
"""
metadata: dict[str, Any] | None = None
- """Optional metadata associated with the tool. Defaults to `None`.
+ """Optional metadata associated with the tool.
+
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in `callbacks`.
- You can use these to eg identify a specific instance of a tool with its use case.
+
+ You can use these to, e.g., identify a specific instance of a tool with its use
+ case.
"""
handle_tool_error: bool | str | Callable[[ToolException], str] | None = False
- """Handle the content of the ToolException thrown."""
+ """Handle the content of the `ToolException` thrown."""
handle_validation_error: (
bool | str | Callable[[ValidationError | ValidationErrorV1], str] | None
) = False
- """Handle the content of the ValidationError thrown."""
+ """Handle the content of the `ValidationError` thrown."""
response_format: Literal["content", "content_and_artifact"] = "content"
- """The tool response format. Defaults to 'content'.
+ """The tool response format.
- If "content" then the output of the tool is interpreted as the contents of a
- ToolMessage. If "content_and_artifact" then the output is expected to be a
- two-tuple corresponding to the (content, artifact) of a ToolMessage.
+ If `'content'` then the output of the tool is interpreted as the contents of a
+ `ToolMessage`. If `'content_and_artifact'` then the output is expected to be a
+ two-tuple corresponding to the `(content, artifact)` of a `ToolMessage`.
"""
def __init__(self, **kwargs: Any) -> None:
@@ -493,7 +499,7 @@ class ChildTool(BaseTool):
Raises:
TypeError: If `args_schema` is not a subclass of pydantic `BaseModel` or
- dict.
+ `dict`.
"""
if (
"args_schema" in kwargs
@@ -517,7 +523,7 @@ class ChildTool(BaseTool):
"""Check if the tool accepts only a single input argument.
Returns:
- True if the tool has only one input argument, False otherwise.
+ `True` if the tool has only one input argument, `False` otherwise.
"""
keys = {k for k in self.args if k != "kwargs"}
return len(keys) == 1
@@ -527,7 +533,7 @@ class ChildTool(BaseTool):
"""Get the tool's input arguments schema.
Returns:
- Dictionary containing the tool's argument properties.
+ `dict` containing the tool's argument properties.
"""
if isinstance(self.args_schema, dict):
json_schema = self.args_schema
@@ -616,10 +622,10 @@ class ChildTool(BaseTool):
The parsed and validated input.
Raises:
- ValueError: If string input is provided with JSON schema `args_schema`.
- ValueError: If InjectedToolCallId is required but `tool_call_id` is not
+            ValueError: If `str` input is provided with JSON schema `args_schema`.
+ ValueError: If `InjectedToolCallId` is required but `tool_call_id` is not
provided.
- TypeError: If args_schema is not a Pydantic `BaseModel` or dict.
+            TypeError: If `args_schema` is not a Pydantic `BaseModel` or `dict`.
"""
input_args = self.args_schema
if isinstance(tool_input, str):
@@ -708,6 +714,35 @@ class ChildTool(BaseTool):
kwargs["run_manager"] = kwargs["run_manager"].get_sync()
return await run_in_executor(None, self._run, *args, **kwargs)
+ def _filter_injected_args(self, tool_input: dict) -> dict:
+ """Filter out injected tool arguments from the input dictionary.
+
+ Injected arguments are those annotated with `InjectedToolArg` or its
+ subclasses, or arguments in `FILTERED_ARGS` like `run_manager` and callbacks.
+
+ Args:
+ tool_input: The tool input dictionary to filter.
+
+ Returns:
+ A filtered dictionary with injected arguments removed.
+ """
+ # Start with filtered args from the constant
+ filtered_keys = set[str](FILTERED_ARGS)
+
+ # If we have an args_schema, use it to identify injected args
+ if self.args_schema is not None:
+ try:
+ annotations = get_all_basemodel_annotations(self.args_schema)
+ for field_name, field_type in annotations.items():
+ if _is_injected_arg_type(field_type):
+ filtered_keys.add(field_name)
+ except Exception: # noqa: S110
+ # If we can't get annotations, just use FILTERED_ARGS
+ pass
+
+ # Filter out the injected keys from tool_input
+ return {k: v for k, v in tool_input.items() if k not in filtered_keys}
+
def _to_args_and_kwargs(
self, tool_input: str | dict, tool_call_id: str | None
) -> tuple[tuple, dict]:
@@ -718,7 +753,7 @@ class ChildTool(BaseTool):
tool_call_id: The ID of the tool call, if available.
Returns:
- A tuple of (positional_args, keyword_args) for the tool.
+ A tuple of `(positional_args, keyword_args)` for the tool.
Raises:
TypeError: If the tool input type is invalid.
@@ -767,16 +802,16 @@ class ChildTool(BaseTool):
Args:
tool_input: The input to the tool.
- verbose: Whether to log the tool's progress. Defaults to `None`.
- start_color: The color to use when starting the tool. Defaults to 'green'.
- color: The color to use when ending the tool. Defaults to 'green'.
- callbacks: Callbacks to be called during tool execution. Defaults to `None`.
- tags: Optional list of tags associated with the tool. Defaults to `None`.
- metadata: Optional metadata associated with the tool. Defaults to `None`.
- run_name: The name of the run. Defaults to `None`.
- run_id: The id of the run. Defaults to `None`.
- config: The configuration for the tool. Defaults to `None`.
- tool_call_id: The id of the tool call. Defaults to `None`.
+ verbose: Whether to log the tool's progress.
+ start_color: The color to use when starting the tool.
+ color: The color to use when ending the tool.
+ callbacks: Callbacks to be called during tool execution.
+ tags: Optional list of tags associated with the tool.
+ metadata: Optional metadata associated with the tool.
+ run_name: The name of the run.
+ run_id: The id of the run.
+ config: The configuration for the tool.
+ tool_call_id: The id of the tool call.
**kwargs: Keyword arguments to be passed to tool callbacks (event handler)
Returns:
@@ -795,17 +830,29 @@ class ChildTool(BaseTool):
self.metadata,
)
+ # Filter out injected arguments from callback inputs
+ filtered_tool_input = (
+ self._filter_injected_args(tool_input)
+ if isinstance(tool_input, dict)
+ else None
+ )
+
+ # Use filtered inputs for the input_str parameter as well
+ tool_input_str = (
+ tool_input
+ if isinstance(tool_input, str)
+ else str(
+ filtered_tool_input if filtered_tool_input is not None else tool_input
+ )
+ )
+
run_manager = callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
- tool_input if isinstance(tool_input, str) else str(tool_input),
+ tool_input_str,
color=start_color,
name=run_name,
run_id=run_id,
- # Inputs by definition should always be dicts.
- # For now, it's unclear whether this assumption is ever violated,
- # but if it is we will send a `None` value to the callback instead
- # TODO: will need to address issue via a patch.
- inputs=tool_input if isinstance(tool_input, dict) else None,
+ inputs=filtered_tool_input,
**kwargs,
)
@@ -825,16 +872,19 @@ class ChildTool(BaseTool):
tool_kwargs |= {config_param: config}
response = context.run(self._run, *tool_args, **tool_kwargs)
if self.response_format == "content_and_artifact":
- if not isinstance(response, tuple) or len(response) != 2:
- msg = (
- "Since response_format='content_and_artifact' "
- "a two-tuple of the message content and raw tool output is "
- f"expected. Instead generated response of type: "
- f"{type(response)}."
- )
+ msg = (
+ "Since response_format='content_and_artifact' "
+ "a two-tuple of the message content and raw tool output is "
+ f"expected. Instead, generated response is of type: "
+ f"{type(response)}."
+ )
+ if not isinstance(response, tuple):
error_to_raise = ValueError(msg)
else:
- content, artifact = response
+ try:
+ content, artifact = response
+ except ValueError:
+ error_to_raise = ValueError(msg)
else:
content = response
except (ValidationError, ValidationErrorV1) as e:
@@ -879,16 +929,16 @@ class ChildTool(BaseTool):
Args:
tool_input: The input to the tool.
- verbose: Whether to log the tool's progress. Defaults to `None`.
- start_color: The color to use when starting the tool. Defaults to 'green'.
- color: The color to use when ending the tool. Defaults to 'green'.
- callbacks: Callbacks to be called during tool execution. Defaults to `None`.
- tags: Optional list of tags associated with the tool. Defaults to `None`.
- metadata: Optional metadata associated with the tool. Defaults to `None`.
- run_name: The name of the run. Defaults to `None`.
- run_id: The id of the run. Defaults to `None`.
- config: The configuration for the tool. Defaults to `None`.
- tool_call_id: The id of the tool call. Defaults to `None`.
+ verbose: Whether to log the tool's progress.
+ start_color: The color to use when starting the tool.
+ color: The color to use when ending the tool.
+ callbacks: Callbacks to be called during tool execution.
+ tags: Optional list of tags associated with the tool.
+ metadata: Optional metadata associated with the tool.
+ run_name: The name of the run.
+ run_id: The id of the run.
+ config: The configuration for the tool.
+ tool_call_id: The id of the tool call.
**kwargs: Keyword arguments to be passed to tool callbacks
Returns:
@@ -906,17 +956,30 @@ class ChildTool(BaseTool):
metadata,
self.metadata,
)
+
+ # Filter out injected arguments from callback inputs
+ filtered_tool_input = (
+ self._filter_injected_args(tool_input)
+ if isinstance(tool_input, dict)
+ else None
+ )
+
+ # Use filtered inputs for the input_str parameter as well
+ tool_input_str = (
+ tool_input
+ if isinstance(tool_input, str)
+ else str(
+ filtered_tool_input if filtered_tool_input is not None else tool_input
+ )
+ )
+
run_manager = await callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
- tool_input if isinstance(tool_input, str) else str(tool_input),
+ tool_input_str,
color=start_color,
name=run_name,
run_id=run_id,
- # Inputs by definition should always be dicts.
- # For now, it's unclear whether this assumption is ever violated,
- # but if it is we will send a `None` value to the callback instead
- # TODO: will need to address issue via a patch.
- inputs=tool_input if isinstance(tool_input, dict) else None,
+ inputs=filtered_tool_input,
**kwargs,
)
content = None
@@ -938,16 +1001,19 @@ class ChildTool(BaseTool):
coro = self._arun(*tool_args, **tool_kwargs)
response = await coro_with_context(coro, context)
if self.response_format == "content_and_artifact":
- if not isinstance(response, tuple) or len(response) != 2:
- msg = (
- "Since response_format='content_and_artifact' "
- "a two-tuple of the message content and raw tool output is "
- f"expected. Instead generated response of type: "
- f"{type(response)}."
- )
+ msg = (
+ "Since response_format='content_and_artifact' "
+ "a two-tuple of the message content and raw tool output is "
+ f"expected. Instead, generated response is of type: "
+ f"{type(response)}."
+ )
+ if not isinstance(response, tuple):
error_to_raise = ValueError(msg)
else:
- content, artifact = response
+ try:
+ content, artifact = response
+ except ValueError:
+ error_to_raise = ValueError(msg)
else:
content = response
except ValidationError as e:
@@ -981,7 +1047,7 @@ def _is_tool_call(x: Any) -> bool:
x: The input to check.
Returns:
- True if the input is a tool call, False otherwise.
+ `True` if the input is a tool call, `False` otherwise.
"""
return isinstance(x, dict) and x.get("type") == "tool_call"
@@ -995,7 +1061,7 @@ def _handle_validation_error(
Args:
e: The validation error that occurred.
- flag: How to handle the error (bool, string, or callable).
+ flag: How to handle the error (`bool`, `str`, or `Callable`).
Returns:
The error message to return.
@@ -1027,7 +1093,7 @@ def _handle_tool_error(
Args:
e: The tool exception that occurred.
- flag: How to handle the error (bool, string, or callable).
+ flag: How to handle the error (`bool`, `str`, or `Callable`).
Returns:
The error message to return.
@@ -1058,12 +1124,12 @@ def _prep_run_args(
"""Prepare arguments for tool execution.
Args:
- value: The input value (string, dict, or ToolCall).
+ value: The input value (`str`, `dict`, or `ToolCall`).
config: The runnable configuration.
**kwargs: Additional keyword arguments.
Returns:
- A tuple of (tool_input, run_kwargs).
+ A tuple of `(tool_input, run_kwargs)`.
"""
config = ensure_config(config)
if _is_tool_call(value):
@@ -1094,7 +1160,7 @@ def _format_output(
name: str,
status: str,
) -> ToolOutputMixin | Any:
- """Format tool output as a ToolMessage if appropriate.
+ """Format tool output as a `ToolMessage` if appropriate.
Args:
content: The main content of the tool output.
@@ -1104,7 +1170,7 @@ def _format_output(
status: The execution status.
Returns:
- The formatted output, either as a ToolMessage or the original content.
+ The formatted output, either as a `ToolMessage` or the original content.
"""
if isinstance(content, ToolOutputMixin) or tool_call_id is None:
return content
@@ -1128,7 +1194,7 @@ def _is_message_content_type(obj: Any) -> bool:
obj: The object to check.
Returns:
- True if the object is valid message content, False otherwise.
+ `True` if the object is valid message content, `False` otherwise.
"""
return isinstance(obj, str) or (
isinstance(obj, list) and all(_is_message_content_block(e) for e in obj)
@@ -1144,7 +1210,7 @@ def _is_message_content_block(obj: Any) -> bool:
obj: The object to check.
Returns:
- True if the object is a valid content block, False otherwise.
+ `True` if the object is a valid content block, `False` otherwise.
"""
if isinstance(obj, str):
return True
@@ -1175,7 +1241,7 @@ def _get_type_hints(func: Callable) -> dict[str, type] | None:
func: The function to get type hints from.
Returns:
- Dictionary of type hints, or None if extraction fails.
+ `dict` of type hints, or `None` if extraction fails.
"""
if isinstance(func, functools.partial):
func = func.func
@@ -1186,13 +1252,13 @@ def _get_type_hints(func: Callable) -> dict[str, type] | None:
def _get_runnable_config_param(func: Callable) -> str | None:
- """Find the parameter name for RunnableConfig in a function.
+ """Find the parameter name for `RunnableConfig` in a function.
Args:
func: The function to check.
Returns:
- The parameter name for RunnableConfig, or None if not found.
+ The parameter name for `RunnableConfig`, or `None` if not found.
"""
type_hints = _get_type_hints(func)
if not type_hints:
@@ -1211,6 +1277,28 @@ class InjectedToolArg:
"""
+class _DirectlyInjectedToolArg:
+ """Annotation for tool arguments that are injected at runtime.
+
+ Injected via direct type annotation, rather than annotated metadata.
+
+ For example, `ToolRuntime` is a directly injected argument.
+
+ Note the direct annotation rather than the verbose alternative:
+ `Annotated[ToolRuntime, InjectedRuntime]`
+
+ ```python
+ from langchain_core.tools import tool, ToolRuntime
+
+
+ @tool
+ def foo(x: int, runtime: ToolRuntime) -> str:
+ # use runtime.state, runtime.context, runtime.store, etc.
+ ...
+ ```
+ """
+
+
class InjectedToolCallId(InjectedToolArg):
"""Annotation for injecting the tool call ID.
@@ -1238,6 +1326,24 @@ class InjectedToolCallId(InjectedToolArg):
"""
+def _is_directly_injected_arg_type(type_: Any) -> bool:
+ """Check if a type annotation indicates a directly injected argument.
+
+ This is currently only used for `ToolRuntime`.
+ Checks if either the annotation itself is a subclass of `_DirectlyInjectedToolArg`
+ or the origin of the annotation is a subclass of `_DirectlyInjectedToolArg`.
+
+ Ex: `ToolRuntime` or `ToolRuntime[ContextT, StateT]` would both return `True`.
+ """
+ return (
+ isinstance(type_, type) and issubclass(type_, _DirectlyInjectedToolArg)
+ ) or (
+ (origin := get_origin(type_)) is not None
+ and isinstance(origin, type)
+ and issubclass(origin, _DirectlyInjectedToolArg)
+ )
+
+
def _is_injected_arg_type(
type_: type | TypeVar, injected_type: type[InjectedToolArg] | None = None
) -> bool:
@@ -1248,9 +1354,17 @@ def _is_injected_arg_type(
injected_type: The specific injected type to check for.
Returns:
- True if the type is an injected argument, False otherwise.
+ `True` if the type is an injected argument, `False` otherwise.
"""
- injected_type = injected_type or InjectedToolArg
+ if injected_type is None:
+ # if no injected type is specified,
+ # check if the type is a directly injected argument
+ if _is_directly_injected_arg_type(type_):
+ return True
+ injected_type = InjectedToolArg
+
+ # if the type is an Annotated type, check if annotated metadata
+    # is an instance or subclass of the injected type
return any(
isinstance(arg, injected_type)
or (isinstance(arg, type) and issubclass(arg, injected_type))
@@ -1261,14 +1375,14 @@ def _is_injected_arg_type(
def get_all_basemodel_annotations(
cls: TypeBaseModel | Any, *, default_to_bound: bool = True
) -> dict[str, type | TypeVar]:
- """Get all annotations from a Pydantic BaseModel and its parents.
+ """Get all annotations from a Pydantic `BaseModel` and its parents.
Args:
- cls: The Pydantic BaseModel class.
- default_to_bound: Whether to default to the bound of a TypeVar if it exists.
+ cls: The Pydantic `BaseModel` class.
+ default_to_bound: Whether to default to the bound of a `TypeVar` if it exists.
Returns:
- A dictionary of field names to their type annotations.
+ `dict` of field names to their type annotations.
"""
# cls has no subscript: cls = FooBar
if isinstance(cls, type):
@@ -1334,15 +1448,15 @@ def _replace_type_vars(
*,
default_to_bound: bool = True,
) -> type | TypeVar:
- """Replace TypeVars in a type annotation with concrete types.
+ """Replace `TypeVar`s in a type annotation with concrete types.
Args:
type_: The type annotation to process.
- generic_map: Mapping of TypeVars to concrete types.
- default_to_bound: Whether to use TypeVar bounds as defaults.
+ generic_map: Mapping of `TypeVar`s to concrete types.
+ default_to_bound: Whether to use `TypeVar` bounds as defaults.
Returns:
- The type with TypeVars replaced.
+ The type with `TypeVar`s replaced.
"""
generic_map = generic_map or {}
if isinstance(type_, TypeVar):
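A minimal sketch tying together two of the behaviours documented in this file: an artifact-returning tool and an injected argument that is excluded from the model-facing schema but required at execution time (the tool name and values are illustrative):

```python
from typing import Annotated

from langchain_core.tools import InjectedToolArg, tool


@tool(response_format="content_and_artifact")
def lookup(query: str, user_id: Annotated[str, InjectedToolArg]) -> tuple[str, dict]:
    """Look up a record for the current user."""
    record = {"query": query, "user_id": user_id, "hits": 3}
    return f"Found {record['hits']} hits", record


# The injected argument is hidden from the model-facing schema...
print(list(lookup.tool_call_schema.model_json_schema()["properties"]))  # ['query']

# ...but must be supplied when the tool call is executed.
message = lookup.invoke(
    {
        "type": "tool_call",
        "name": "lookup",
        "args": {"query": "alpha", "user_id": "u-123"},
        "id": "call-1",
    }
)
print(message.content)   # Found 3 hits
print(message.artifact)  # {'query': 'alpha', 'user_id': 'u-123', 'hits': 3}
```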
diff --git a/libs/core/langchain_core/tools/convert.py b/libs/core/langchain_core/tools/convert.py
index 0bc4e2e98b6..1dabed5e002 100644
--- a/libs/core/langchain_core/tools/convert.py
+++ b/libs/core/langchain_core/tools/convert.py
@@ -81,61 +81,72 @@ def tool(
parse_docstring: bool = False,
error_on_invalid_docstring: bool = True,
) -> BaseTool | Callable[[Callable | Runnable], BaseTool]:
- """Make tools out of functions, can be used with or without arguments.
+    """Convert Python functions and `Runnable` objects to LangChain tools.
+
+ Can be used as a decorator with or without arguments to create tools from functions.
+
+    Functions can have any signature; the tool will automatically infer input schemas
+ unless disabled.
+
+ !!! note "Requirements"
+ - Functions must have type hints for proper schema inference
+ - When `infer_schema=False`, functions must be `(str) -> str` and have
+ docstrings
+ - When using with `Runnable`, a string name must be provided
Args:
- name_or_callable: Optional name of the tool or the callable to be
- converted to a tool. Must be provided as a positional argument.
- runnable: Optional runnable to convert to a tool. Must be provided as a
- positional argument.
+ name_or_callable: Optional name of the tool or the `Callable` to be
+ converted to a tool. Overrides the function's name.
+
+ Must be provided as a positional argument.
+ runnable: Optional `Runnable` to convert to a tool.
+
+ Must be provided as a positional argument.
description: Optional description for the tool.
+
Precedence for the tool description value is as follows:
- - `description` argument
+ - This `description` argument
(used even if docstring and/or `args_schema` are provided)
- - tool function docstring
+ - Tool function docstring
(used even if `args_schema` is provided)
- `args_schema` description
- (used only if `description` / docstring are not provided)
+ (used only if `description` and docstring are not provided)
*args: Extra positional arguments. Must be empty.
- return_direct: Whether to return directly from the tool rather
- than continuing the agent loop. Defaults to `False`.
- args_schema: optional argument schema for user to specify.
- Defaults to `None`.
- infer_schema: Whether to infer the schema of the arguments from
- the function's signature. This also makes the resultant tool
- accept a dictionary input to its `run()` function.
- Defaults to `True`.
- response_format: The tool response format. If "content" then the output of
- the tool is interpreted as the contents of a ToolMessage. If
- "content_and_artifact" then the output is expected to be a two-tuple
- corresponding to the (content, artifact) of a ToolMessage.
- Defaults to "content".
- parse_docstring: if `infer_schema` and `parse_docstring`, will attempt to
+ return_direct: Whether to return directly from the tool rather than continuing
+ the agent loop.
+ args_schema: Optional argument schema for user to specify.
+ infer_schema: Whether to infer the schema of the arguments from the function's
+ signature. This also makes the resultant tool accept a dictionary input to
+ its `run()` function.
+ response_format: The tool response format.
+
+ If `'content'`, then the output of the tool is interpreted as the contents
+ of a `ToolMessage`.
+
+ If `'content_and_artifact'`, then the output is expected to be a two-tuple
+ corresponding to the `(content, artifact)` of a `ToolMessage`.
+ parse_docstring: If `infer_schema` and `parse_docstring`, will attempt to
parse parameter descriptions from Google Style function docstrings.
- Defaults to `False`.
- error_on_invalid_docstring: if `parse_docstring` is provided, configure
- whether to raise ValueError on invalid Google Style docstrings.
- Defaults to `True`.
+ error_on_invalid_docstring: If `parse_docstring` is provided, configure
+ whether to raise `ValueError` on invalid Google Style docstrings.
Raises:
- ValueError: If too many positional arguments are provided.
- ValueError: If a runnable is provided without a string name.
+        ValueError: If too many positional arguments are provided (e.g., violating the
+ `*args` constraint).
+ ValueError: If a `Runnable` is provided without a string name. When using `tool`
+ with a `Runnable`, a `str` name must be provided as the `name_or_callable`.
ValueError: If the first argument is not a string or callable with
a `__name__` attribute.
ValueError: If the function does not have a docstring and description
- is not provided and `infer_schema` is False.
- ValueError: If `parse_docstring` is True and the function has an invalid
+ is not provided and `infer_schema` is `False`.
+ ValueError: If `parse_docstring` is `True` and the function has an invalid
Google-style docstring and `error_on_invalid_docstring` is True.
- ValueError: If a Runnable is provided that does not have an object schema.
+ ValueError: If a `Runnable` is provided that does not have an object schema.
Returns:
The tool.
- Requires:
- - Function must be of type (str) -> str
- - Function must have a docstring
-
Examples:
```python
@tool
@@ -155,8 +166,6 @@ def tool(
return "partial json of results", {"full": "object of results"}
```
- !!! version-added "Added in version 0.2.14"
-
Parse Google-style docstrings:
```python
@@ -197,7 +206,7 @@ def tool(
Note that parsing by default will raise `ValueError` if the docstring
is considered invalid. A docstring is considered invalid if it contains
arguments not in the function signature, or is unable to be parsed into
- a summary and "Args:" blocks. Examples below:
+ a summary and `"Args:"` blocks. Examples below:
```python
# No args section
@@ -397,10 +406,10 @@ def convert_runnable_to_tool(
Args:
runnable: The runnable to convert.
- args_schema: The schema for the tool's input arguments. Defaults to `None`.
- name: The name of the tool. Defaults to `None`.
- description: The description of the tool. Defaults to `None`.
- arg_types: The types of the arguments. Defaults to `None`.
+ args_schema: The schema for the tool's input arguments.
+ name: The name of the tool.
+ description: The description of the tool.
+ arg_types: The types of the arguments.
Returns:
The tool.
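A short sketch of the docstring-parsing path documented above (the function is illustrative):

```python
from langchain_core.tools import tool


@tool(parse_docstring=True)
def add(a: int, b: int) -> int:
    """Add two integers.

    Args:
        a: First operand.
        b: Second operand.
    """
    return a + b


schema = add.args_schema.model_json_schema()
print(add.description)                           # Add two integers.
print(schema["properties"]["a"]["description"])  # First operand.
print(add.invoke({"a": 2, "b": 3}))              # 5
```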
diff --git a/libs/core/langchain_core/tools/retriever.py b/libs/core/langchain_core/tools/retriever.py
index fff2fe3fe87..5677e97bd74 100644
--- a/libs/core/langchain_core/tools/retriever.py
+++ b/libs/core/langchain_core/tools/retriever.py
@@ -81,13 +81,14 @@ def create_retriever_tool(
so should be unique and somewhat descriptive.
description: The description for the tool. This will be passed to the language
model, so should be descriptive.
- document_prompt: The prompt to use for the document. Defaults to `None`.
- document_separator: The separator to use between documents. Defaults to "\n\n".
- response_format: The tool response format. If "content" then the output of
- the tool is interpreted as the contents of a ToolMessage. If
- "content_and_artifact" then the output is expected to be a two-tuple
- corresponding to the (content, artifact) of a ToolMessage (artifact
- being a list of documents in this case). Defaults to "content".
+ document_prompt: The prompt to use for the document.
+ document_separator: The separator to use between documents.
+ response_format: The tool response format.
+
+ If `"content"` then the output of the tool is interpreted as the contents of
+ a `ToolMessage`. If `"content_and_artifact"` then the output is expected to
+ be a two-tuple corresponding to the `(content, artifact)` of a `ToolMessage`
+ (artifact being a list of documents in this case).
Returns:
Tool class to pass to an agent.
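An illustrative sketch of `create_retriever_tool` with the artifact response format described above, using an in-memory vector store and deterministic fake embeddings as stand-ins for a real retrieval stack:

```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.tools.retriever import create_retriever_tool
from langchain_core.vectorstores import InMemoryVectorStore

vector_store = InMemoryVectorStore(DeterministicFakeEmbedding(size=32))
vector_store.add_documents(
    [
        Document(page_content="LangChain wraps chat models behind one interface."),
        Document(page_content="Retrievers return documents relevant to a query."),
    ]
)

search_docs = create_retriever_tool(
    vector_store.as_retriever(search_kwargs={"k": 1}),
    name="search_docs",
    description="Search the internal documentation.",
    response_format="content_and_artifact",
)

result = search_docs.invoke(
    {"type": "tool_call", "name": "search_docs", "args": {"query": "retrievers"}, "id": "1"}
)
print(result.content)   # page_content of the retrieved document(s)
print(result.artifact)  # the raw list of Document objects
```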
diff --git a/libs/core/langchain_core/tools/simple.py b/libs/core/langchain_core/tools/simple.py
index 249fc41fefd..68c70e61258 100644
--- a/libs/core/langchain_core/tools/simple.py
+++ b/libs/core/langchain_core/tools/simple.py
@@ -69,7 +69,7 @@ class Tool(BaseTool):
def _to_args_and_kwargs(
self, tool_input: str | dict, tool_call_id: str | None
) -> tuple[tuple, dict]:
- """Convert tool input to pydantic model.
+ """Convert tool input to Pydantic model.
Args:
tool_input: The input to the tool.
@@ -79,8 +79,7 @@ class Tool(BaseTool):
ToolException: If the tool input is invalid.
Returns:
- the pydantic model args and kwargs.
-
+ The Pydantic model args and kwargs.
"""
args, kwargs = super()._to_args_and_kwargs(tool_input, tool_call_id)
# For backwards compatibility. The tool must be run with a single input
@@ -177,9 +176,9 @@ class Tool(BaseTool):
func: The function to create the tool from.
name: The name of the tool.
description: The description of the tool.
- return_direct: Whether to return the output directly. Defaults to `False`.
- args_schema: The schema of the tool's input arguments. Defaults to `None`.
- coroutine: The asynchronous version of the function. Defaults to `None`.
+ return_direct: Whether to return the output directly.
+ args_schema: The schema of the tool's input arguments.
+ coroutine: The asynchronous version of the function.
**kwargs: Additional arguments to pass to the tool.
Returns:
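For context, a quick sketch of `Tool.from_function` with a plain single-string-input callable (the function and names are illustrative):
```python
from langchain_core.tools import Tool


def shout(text: str) -> str:
    return text.upper()


# `Tool` expects exactly one string argument; use StructuredTool for multi-arg functions.
shout_tool = Tool.from_function(shout, name="shout", description="Upper-case the input text.")
print(shout_tool.invoke("hello"))  # -> "HELLO"
```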
diff --git a/libs/core/langchain_core/tools/structured.py b/libs/core/langchain_core/tools/structured.py
index cc978446f3b..43e981570a0 100644
--- a/libs/core/langchain_core/tools/structured.py
+++ b/libs/core/langchain_core/tools/structured.py
@@ -149,21 +149,18 @@ class StructuredTool(BaseTool):
description: The description of the tool.
Defaults to the function docstring.
return_direct: Whether to return the result directly or as a callback.
- Defaults to `False`.
- args_schema: The schema of the tool's input arguments. Defaults to `None`.
+ args_schema: The schema of the tool's input arguments.
infer_schema: Whether to infer the schema from the function's signature.
- Defaults to `True`.
- response_format: The tool response format. If "content" then the output of
- the tool is interpreted as the contents of a ToolMessage. If
- "content_and_artifact" then the output is expected to be a two-tuple
- corresponding to the (content, artifact) of a ToolMessage.
- Defaults to "content".
- parse_docstring: if `infer_schema` and `parse_docstring`, will attempt
+ response_format: The tool response format.
+
+ If `"content"` then the output of the tool is interpreted as the
+ contents of a `ToolMessage`. If `"content_and_artifact"` then the output
+ is expected to be a two-tuple corresponding to the `(content, artifact)`
+ of a `ToolMessage`.
+ parse_docstring: If `infer_schema` and `parse_docstring`, will attempt
to parse parameter descriptions from Google Style function docstrings.
- Defaults to `False`.
error_on_invalid_docstring: if `parse_docstring` is provided, configure
- whether to raise ValueError on invalid Google Style docstrings.
- Defaults to `False`.
+ whether to raise `ValueError` on invalid Google Style docstrings.
**kwargs: Additional arguments to pass to the tool
Returns:
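A minimal sketch of `from_function` with `parse_docstring=True` (the function and its Google-style docstring are made up for the example):
```python
from langchain_core.tools import StructuredTool


def multiply(a: int, b: int) -> int:
    """Multiply two integers.

    Args:
        a: The first factor.
        b: The second factor.
    """
    return a * b


# parse_docstring=True pulls the per-argument descriptions into the inferred schema.
multiply_tool = StructuredTool.from_function(multiply, parse_docstring=True)
print(multiply_tool.invoke({"a": 3, "b": 4}))  # -> 12
```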
diff --git a/libs/core/langchain_core/tracers/base.py b/libs/core/langchain_core/tracers/base.py
index 1ff9c9ed0cb..01bc9da0aa8 100644
--- a/libs/core/langchain_core/tracers/base.py
+++ b/libs/core/langchain_core/tracers/base.py
@@ -67,9 +67,9 @@ class BaseTracer(_TracerCore, BaseCallbackHandler, ABC):
serialized: The serialized model.
messages: The messages to start the chat with.
run_id: The run ID.
- tags: The tags for the run. Defaults to `None`.
- parent_run_id: The parent run ID. Defaults to `None`.
- metadata: The metadata for the run. Defaults to `None`.
+ tags: The tags for the run.
+ parent_run_id: The parent run ID.
+ metadata: The metadata for the run.
name: The name of the run.
**kwargs: Additional arguments.
@@ -108,9 +108,9 @@ class BaseTracer(_TracerCore, BaseCallbackHandler, ABC):
serialized: The serialized model.
prompts: The prompts to start the LLM with.
run_id: The run ID.
- tags: The tags for the run. Defaults to `None`.
- parent_run_id: The parent run ID. Defaults to `None`.
- metadata: The metadata for the run. Defaults to `None`.
+ tags: The tags for the run.
+ parent_run_id: The parent run ID.
+ metadata: The metadata for the run.
name: The name of the run.
**kwargs: Additional arguments.
@@ -145,9 +145,9 @@ class BaseTracer(_TracerCore, BaseCallbackHandler, ABC):
Args:
token: The token.
- chunk: The chunk. Defaults to `None`.
+ chunk: The chunk.
run_id: The run ID.
- parent_run_id: The parent run ID. Defaults to `None`.
+ parent_run_id: The parent run ID.
**kwargs: Additional arguments.
Returns:
@@ -255,10 +255,10 @@ class BaseTracer(_TracerCore, BaseCallbackHandler, ABC):
serialized: The serialized chain.
inputs: The inputs for the chain.
run_id: The run ID.
- tags: The tags for the run. Defaults to `None`.
- parent_run_id: The parent run ID. Defaults to `None`.
- metadata: The metadata for the run. Defaults to `None`.
- run_type: The type of the run. Defaults to `None`.
+ tags: The tags for the run.
+ parent_run_id: The parent run ID.
+ metadata: The metadata for the run.
+ run_type: The type of the run.
name: The name of the run.
**kwargs: Additional arguments.
@@ -294,7 +294,7 @@ class BaseTracer(_TracerCore, BaseCallbackHandler, ABC):
Args:
outputs: The outputs for the chain.
run_id: The run ID.
- inputs: The inputs for the chain. Defaults to `None`.
+ inputs: The inputs for the chain.
**kwargs: Additional arguments.
Returns:
@@ -322,7 +322,7 @@ class BaseTracer(_TracerCore, BaseCallbackHandler, ABC):
Args:
error: The error.
- inputs: The inputs for the chain. Defaults to `None`.
+ inputs: The inputs for the chain.
run_id: The run ID.
**kwargs: Additional arguments.
@@ -357,9 +357,9 @@ class BaseTracer(_TracerCore, BaseCallbackHandler, ABC):
serialized: The serialized tool.
input_str: The input string.
run_id: The run ID.
- tags: The tags for the run. Defaults to `None`.
- parent_run_id: The parent run ID. Defaults to `None`.
- metadata: The metadata for the run. Defaults to `None`.
+ tags: The tags for the run.
+ parent_run_id: The parent run ID.
+ metadata: The metadata for the run.
name: The name of the run.
inputs: The inputs for the tool.
**kwargs: Additional arguments.
@@ -446,9 +446,9 @@ class BaseTracer(_TracerCore, BaseCallbackHandler, ABC):
serialized: The serialized retriever.
query: The query.
run_id: The run ID.
- parent_run_id: The parent run ID. Defaults to `None`.
- tags: The tags for the run. Defaults to `None`.
- metadata: The metadata for the run. Defaults to `None`.
+ parent_run_id: The parent run ID.
+ tags: The tags for the run.
+ metadata: The metadata for the run.
name: The name of the run.
**kwargs: Additional arguments.
diff --git a/libs/core/langchain_core/tracers/context.py b/libs/core/langchain_core/tracers/context.py
index 2dc2043a920..982054b1322 100644
--- a/libs/core/langchain_core/tracers/context.py
+++ b/libs/core/langchain_core/tracers/context.py
@@ -48,9 +48,9 @@ def tracing_v2_enabled(
Args:
project_name: The name of the project. Defaults to `'default'`.
- example_id: The ID of the example. Defaults to `None`.
- tags: The tags to add to the run. Defaults to `None`.
- client: The client of the langsmith. Defaults to `None`.
+ example_id: The ID of the example.
+ tags: The tags to add to the run.
+ client: The client of the langsmith.
Yields:
The LangChain tracer.
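For reference, a sketch of the context manager in use, assuming LangSmith credentials are already configured via environment variables:
```python
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.context import tracing_v2_enabled

chain = RunnableLambda(lambda text: text.upper())

# Runs inside the block are traced to the named project with the given tags.
with tracing_v2_enabled(project_name="docs-demo", tags=["example"]):
    chain.invoke("hello")
```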
diff --git a/libs/core/langchain_core/tracers/event_stream.py b/libs/core/langchain_core/tracers/event_stream.py
index 67a2f81ce6c..acccd979d1c 100644
--- a/libs/core/langchain_core/tracers/event_stream.py
+++ b/libs/core/langchain_core/tracers/event_stream.py
@@ -128,7 +128,10 @@ class _AstreamEventsCallbackHandler(AsyncCallbackHandler, _StreamingCallbackHand
exclude_tags=exclude_tags,
)
- loop = asyncio.get_event_loop()
+ try:
+ loop = asyncio.get_event_loop()
+ except RuntimeError:
+ loop = asyncio.new_event_loop()
memory_stream = _MemoryStream[StreamEvent](loop)
self.send_stream = memory_stream.get_send_stream()
self.receive_stream = memory_stream.get_receive_stream()
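The fallback added above, shown in isolation: `asyncio.get_event_loop()` raises `RuntimeError` when the current thread has no event loop set, so a fresh loop is created instead.
```python
import asyncio

try:
    loop = asyncio.get_event_loop()
except RuntimeError:
    # No loop is set for this thread (e.g. a worker thread); create one.
    loop = asyncio.new_event_loop()
```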
diff --git a/libs/core/langchain_core/tracers/langchain.py b/libs/core/langchain_core/tracers/langchain.py
index d1928622cb5..15186007d6a 100644
--- a/libs/core/langchain_core/tracers/langchain.py
+++ b/libs/core/langchain_core/tracers/langchain.py
@@ -134,10 +134,10 @@ class LangChainTracer(BaseTracer):
serialized: The serialized model.
messages: The messages.
run_id: The run ID.
- tags: The tags. Defaults to `None`.
- parent_run_id: The parent run ID. Defaults to `None`.
- metadata: The metadata. Defaults to `None`.
- name: The name. Defaults to `None`.
+ tags: The tags.
+ parent_run_id: The parent run ID.
+ metadata: The metadata.
+ name: The name.
**kwargs: Additional keyword arguments.
Returns:
diff --git a/libs/core/langchain_core/tracers/log_stream.py b/libs/core/langchain_core/tracers/log_stream.py
index 65a6a1baf7c..345d5176735 100644
--- a/libs/core/langchain_core/tracers/log_stream.py
+++ b/libs/core/langchain_core/tracers/log_stream.py
@@ -96,10 +96,10 @@ class RunLogPatch:
"""Patch to the run log."""
ops: list[dict[str, Any]]
- """List of jsonpatch operations, which describe how to create the run state
+ """List of JSONPatch operations, which describe how to create the run state
from an empty dict. This is the minimal representation of the log, designed to
be serialized as JSON and sent over the wire to reconstruct the log on the other
- side. Reconstruction of the state can be done with any jsonpatch-compliant library,
+ side. Reconstruction of the state can be done with any JSONPatch-compliant library,
see https://jsonpatch.com for more information."""
def __init__(self, *ops: dict[str, Any]) -> None:
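A small sketch of how a consumer might rebuild state from these ops with the third-party `jsonpatch` package (the patch contents here are invented):
```python
import jsonpatch

from langchain_core.tracers.log_stream import RunLogPatch

patch = RunLogPatch({"op": "add", "path": "/logs", "value": {}})

# Apply the ops to an empty dict to reconstruct the run state incrementally.
state = jsonpatch.apply_patch({}, patch.ops)
print(state)  # -> {'logs': {}}
```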
@@ -190,7 +190,7 @@ class RunLog(RunLogPatch):
other: The other `RunLog` to compare to.
Returns:
- True if the `RunLog`s are equal, False otherwise.
+ `True` if the `RunLog`s are equal, `False` otherwise.
"""
# First compare that the state is the same
if not isinstance(other, RunLog):
@@ -264,7 +264,10 @@ class LogStreamCallbackHandler(BaseTracer, _StreamingCallbackHandler):
self.exclude_types = exclude_types
self.exclude_tags = exclude_tags
- loop = asyncio.get_event_loop()
+ try:
+ loop = asyncio.get_event_loop()
+ except RuntimeError:
+ loop = asyncio.new_event_loop()
memory_stream = _MemoryStream[RunLogPatch](loop)
self.lock = threading.Lock()
self.send_stream = memory_stream.get_send_stream()
@@ -288,7 +291,7 @@ class LogStreamCallbackHandler(BaseTracer, _StreamingCallbackHandler):
*ops: The operations to send to the stream.
Returns:
- True if the patch was sent successfully, False if the stream is closed.
+ `True` if the patch was sent successfully, `False` if the stream is closed.
"""
# We will likely want to wrap this in try / except at some point
# to handle exceptions that might arise at run time.
@@ -368,7 +371,7 @@ class LogStreamCallbackHandler(BaseTracer, _StreamingCallbackHandler):
run: The Run to check.
Returns:
- True if the run should be included, False otherwise.
+ `True` if the run should be included, `False` otherwise.
"""
if run.id == self.root_id:
return False
diff --git a/libs/core/langchain_core/tracers/memory_stream.py b/libs/core/langchain_core/tracers/memory_stream.py
index a59e47821d4..0a9facf17af 100644
--- a/libs/core/langchain_core/tracers/memory_stream.py
+++ b/libs/core/langchain_core/tracers/memory_stream.py
@@ -5,7 +5,7 @@ channel. The writer and reader can be in the same event loop or in different eve
loops. When they're in different event loops, they will also be in different
threads.
-This is useful in situations when there's a mix of synchronous and asynchronous
+Useful in situations when there's a mix of synchronous and asynchronous
used in the code.
"""
diff --git a/libs/core/langchain_core/tracers/root_listeners.py b/libs/core/langchain_core/tracers/root_listeners.py
index 043805c16d4..923cd1c16f6 100644
--- a/libs/core/langchain_core/tracers/root_listeners.py
+++ b/libs/core/langchain_core/tracers/root_listeners.py
@@ -24,7 +24,7 @@ class RootListenersTracer(BaseTracer):
"""Tracer that calls listeners on run start, end, and error."""
log_missing_parent = False
- """Whether to log a warning if the parent is missing. Default is False."""
+ """Whether to log a warning if the parent is missing."""
def __init__(
self,
@@ -79,7 +79,7 @@ class AsyncRootListenersTracer(AsyncBaseTracer):
"""Async Tracer that calls listeners on run start, end, and error."""
log_missing_parent = False
- """Whether to log a warning if the parent is missing. Default is False."""
+ """Whether to log a warning if the parent is missing."""
def __init__(
self,
diff --git a/libs/core/langchain_core/tracers/schemas.py b/libs/core/langchain_core/tracers/schemas.py
index f06ad0a80d4..67a37035b4d 100644
--- a/libs/core/langchain_core/tracers/schemas.py
+++ b/libs/core/langchain_core/tracers/schemas.py
@@ -2,140 +2,13 @@
from __future__ import annotations
-import warnings
-from datetime import datetime, timezone
-from typing import Any
-from uuid import UUID
-
from langsmith import RunTree
-from langsmith.schemas import RunTypeEnum as RunTypeEnumDep
-from pydantic import PydanticDeprecationWarning
-from pydantic.v1 import BaseModel as BaseModelV1
-from pydantic.v1 import Field as FieldV1
-
-from langchain_core._api import deprecated
-
-
-@deprecated("0.1.0", alternative="Use string instead.", removal="1.0")
-def RunTypeEnum() -> type[RunTypeEnumDep]: # noqa: N802
- """`RunTypeEnum`.
-
- Returns:
- The `RunTypeEnum` class.
- """
- warnings.warn(
- "RunTypeEnum is deprecated. Please directly use a string instead"
- " (e.g. 'llm', 'chain', 'tool').",
- DeprecationWarning,
- stacklevel=2,
- )
- return RunTypeEnumDep
-
-
-@deprecated("0.1.0", removal="1.0")
-class TracerSessionV1Base(BaseModelV1):
- """Base class for TracerSessionV1."""
-
- start_time: datetime = FieldV1(default_factory=lambda: datetime.now(timezone.utc))
- name: str | None = None
- extra: dict[str, Any] | None = None
-
-
-@deprecated("0.1.0", removal="1.0")
-class TracerSessionV1Create(TracerSessionV1Base):
- """Create class for TracerSessionV1."""
-
-
-@deprecated("0.1.0", removal="1.0")
-class TracerSessionV1(TracerSessionV1Base):
- """TracerSessionV1 schema."""
-
- id: int
-
-
-@deprecated("0.1.0", removal="1.0")
-class TracerSessionBase(TracerSessionV1Base):
- """Base class for TracerSession."""
-
- tenant_id: UUID
-
-
-@deprecated("0.1.0", removal="1.0")
-class TracerSession(TracerSessionBase):
- """TracerSessionV1 schema for the V2 API."""
-
- id: UUID
-
-
-@deprecated("0.1.0", alternative="Run", removal="1.0")
-class BaseRun(BaseModelV1):
- """Base class for Run."""
-
- uuid: str
- parent_uuid: str | None = None
- start_time: datetime = FieldV1(default_factory=lambda: datetime.now(timezone.utc))
- end_time: datetime = FieldV1(default_factory=lambda: datetime.now(timezone.utc))
- extra: dict[str, Any] | None = None
- execution_order: int
- child_execution_order: int
- serialized: dict[str, Any]
- session_id: int
- error: str | None = None
-
-
-@deprecated("0.1.0", alternative="Run", removal="1.0")
-class LLMRun(BaseRun):
- """Class for LLMRun."""
-
- prompts: list[str]
-
-
-@deprecated("0.1.0", alternative="Run", removal="1.0")
-class ChainRun(BaseRun):
- """Class for ChainRun."""
-
- inputs: dict[str, Any]
- outputs: dict[str, Any] | None = None
- child_llm_runs: list[LLMRun] = FieldV1(default_factory=list)
- child_chain_runs: list[ChainRun] = FieldV1(default_factory=list)
- child_tool_runs: list[ToolRun] = FieldV1(default_factory=list)
-
-
-@deprecated("0.1.0", alternative="Run", removal="1.0")
-class ToolRun(BaseRun):
- """Class for ToolRun."""
-
- tool_input: str
- output: str | None = None
- action: str
- child_llm_runs: list[LLMRun] = FieldV1(default_factory=list)
- child_chain_runs: list[ChainRun] = FieldV1(default_factory=list)
- child_tool_runs: list[ToolRun] = FieldV1(default_factory=list)
-
# Begin V2 API Schemas
Run = RunTree # For backwards compatibility
-# TODO: Update once langsmith moves to Pydantic V2 and we can swap Run.model_rebuild
-# for Run.update_forward_refs
-with warnings.catch_warnings():
- warnings.simplefilter("ignore", category=PydanticDeprecationWarning)
-
- ChainRun.update_forward_refs()
- ToolRun.update_forward_refs()
-
__all__ = [
- "BaseRun",
- "ChainRun",
- "LLMRun",
"Run",
- "RunTypeEnum",
- "ToolRun",
- "TracerSession",
- "TracerSessionBase",
- "TracerSessionV1",
- "TracerSessionV1Base",
- "TracerSessionV1Create",
]
diff --git a/libs/core/langchain_core/tracers/stdout.py b/libs/core/langchain_core/tracers/stdout.py
index 72e6cee19a6..119b9127fd6 100644
--- a/libs/core/langchain_core/tracers/stdout.py
+++ b/libs/core/langchain_core/tracers/stdout.py
@@ -49,8 +49,7 @@ class FunctionCallbackHandler(BaseTracer):
"""Tracer that calls a function with a single str parameter."""
name: str = "function_callback_handler"
- """The name of the tracer. This is used to identify the tracer in the logs.
- Default is "function_callback_handler"."""
+ """The name of the tracer. This is used to identify the tracer in the logs."""
def __init__(self, function: Callable[[str], None], **kwargs: Any) -> None:
"""Create a FunctionCallbackHandler.
diff --git a/libs/core/langchain_core/utils/__init__.py b/libs/core/langchain_core/utils/__init__.py
index e16f3c11583..04216583702 100644
--- a/libs/core/langchain_core/utils/__init__.py
+++ b/libs/core/langchain_core/utils/__init__.py
@@ -1,4 +1,4 @@
-"""**Utility functions** for LangChain.
+"""Utility functions for LangChain.
These functions do not depend on any other LangChain module.
"""
diff --git a/libs/core/langchain_core/utils/aiter.py b/libs/core/langchain_core/utils/aiter.py
index e910cee43c6..b196b43aba8 100644
--- a/libs/core/langchain_core/utils/aiter.py
+++ b/libs/core/langchain_core/utils/aiter.py
@@ -201,9 +201,9 @@ class Tee(Generic[T]):
Args:
iterable: The iterable to split.
- n: The number of iterators to create. Defaults to 2.
+ n: The number of iterators to create.
lock: The lock to synchronise access to the shared buffers.
- Defaults to `None`.
+
"""
self._iterator = iterable.__aiter__() # before 3.10 aiter() doesn't exist
self._buffers: list[deque[T]] = [deque() for _ in range(n)]
diff --git a/libs/core/langchain_core/utils/env.py b/libs/core/langchain_core/utils/env.py
index 5d9abe916e5..f19928e8761 100644
--- a/libs/core/langchain_core/utils/env.py
+++ b/libs/core/langchain_core/utils/env.py
@@ -13,7 +13,7 @@ def env_var_is_set(env_var: str) -> bool:
env_var: The name of the environment variable.
Returns:
- True if the environment variable is set, False otherwise.
+ `True` if the environment variable is set, `False` otherwise.
"""
return env_var in os.environ and os.environ[env_var] not in {
"",
@@ -38,7 +38,7 @@ def get_from_dict_or_env(
env_key: The environment variable to look up if the key is not
in the dictionary.
default: The default value to return if the key is not in the dictionary
- or the environment. Defaults to `None`.
+ or the environment.
Returns:
The dict value or the environment variable value.
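A short sketch of the lookup order (dictionary first, then the environment variable, then `default`); the key names are illustrative:
```python
import os

from langchain_core.utils.env import get_from_dict_or_env

os.environ["DEMO_API_KEY"] = "from-env"

print(get_from_dict_or_env({"api_key": "from-kwargs"}, "api_key", "DEMO_API_KEY"))  # from-kwargs
print(get_from_dict_or_env({}, "api_key", "DEMO_API_KEY"))  # from-env
```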
@@ -64,7 +64,7 @@ def get_from_env(key: str, env_key: str, default: str | None = None) -> str:
env_key: The environment variable to look up if the key is not
in the dictionary.
default: The default value to return if the key is not in the dictionary
- or the environment. Defaults to `None`.
+ or the environment.
Returns:
The value of the key.
diff --git a/libs/core/langchain_core/utils/function_calling.py b/libs/core/langchain_core/utils/function_calling.py
index 5ba5efa59e7..f96b3ee1a9b 100644
--- a/libs/core/langchain_core/utils/function_calling.py
+++ b/libs/core/langchain_core/utils/function_calling.py
@@ -27,7 +27,7 @@ from pydantic.v1 import create_model as create_model_v1
from typing_extensions import TypedDict, is_typeddict
import langchain_core
-from langchain_core._api import beta, deprecated
+from langchain_core._api import beta
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, ToolMessage
from langchain_core.utils.json_schema import dereference_refs
from langchain_core.utils.pydantic import is_basemodel_subclass
@@ -114,7 +114,7 @@ def _convert_json_schema_to_openai_function(
used.
description: The description of the function. If not provided, the description
of the schema will be used.
- rm_titles: Whether to remove titles from the schema. Defaults to `True`.
+ rm_titles: Whether to remove titles from the schema.
Returns:
The function description.
@@ -148,7 +148,7 @@ def _convert_pydantic_to_openai_function(
used.
description: The description of the function. If not provided, the description
of the schema will be used.
- rm_titles: Whether to remove titles from the schema. Defaults to `True`.
+ rm_titles: Whether to remove titles from the schema.
Raises:
TypeError: If the model is not a Pydantic model.
@@ -168,42 +168,6 @@ def _convert_pydantic_to_openai_function(
)
-convert_pydantic_to_openai_function = deprecated(
- "0.1.16",
- alternative="langchain_core.utils.function_calling.convert_to_openai_function()",
- removal="1.0",
-)(_convert_pydantic_to_openai_function)
-
-
-@deprecated(
- "0.1.16",
- alternative="langchain_core.utils.function_calling.convert_to_openai_tool()",
- removal="1.0",
-)
-def convert_pydantic_to_openai_tool(
- model: type[BaseModel],
- *,
- name: str | None = None,
- description: str | None = None,
-) -> ToolDescription:
- """Converts a Pydantic model to a function description for the OpenAI API.
-
- Args:
- model: The Pydantic model to convert.
- name: The name of the function. If not provided, the title of the schema will be
- used.
- description: The description of the function. If not provided, the description
- of the schema will be used.
-
- Returns:
- The tool description.
- """
- function = _convert_pydantic_to_openai_function(
- model, name=name, description=description
- )
- return {"type": "function", "function": function}
-
-
def _get_python_function_name(function: Callable) -> str:
"""Get the name of a Python function."""
return function.__name__
@@ -240,13 +204,6 @@ def _convert_python_function_to_openai_function(
)
-convert_python_function_to_openai_function = deprecated(
- "0.1.16",
- alternative="langchain_core.utils.function_calling.convert_to_openai_function()",
- removal="1.0",
-)(_convert_python_function_to_openai_function)
-
-
def _convert_typed_dict_to_openai_function(typed_dict: type) -> FunctionDescription:
visited: dict = {}
@@ -368,31 +325,6 @@ def _format_tool_to_openai_function(tool: BaseTool) -> FunctionDescription:
}
-format_tool_to_openai_function = deprecated(
- "0.1.16",
- alternative="langchain_core.utils.function_calling.convert_to_openai_function()",
- removal="1.0",
-)(_format_tool_to_openai_function)
-
-
-@deprecated(
- "0.1.16",
- alternative="langchain_core.utils.function_calling.convert_to_openai_tool()",
- removal="1.0",
-)
-def format_tool_to_openai_tool(tool: BaseTool) -> ToolDescription:
- """Format tool into the OpenAI function API.
-
- Args:
- tool: The tool to format.
-
- Returns:
- The tool description.
- """
- function = _format_tool_to_openai_function(tool)
- return {"type": "function", "function": function}
-
-
def convert_to_openai_function(
function: dict[str, Any] | type | Callable | BaseTool,
*,
@@ -402,11 +334,11 @@ def convert_to_openai_function(
Args:
function:
- A dictionary, Pydantic BaseModel class, TypedDict class, a LangChain
- Tool object, or a Python function. If a dictionary is passed in, it is
+ A dictionary, Pydantic `BaseModel` class, `TypedDict` class, a LangChain
+ `Tool` object, or a Python function. If a dictionary is passed in, it is
assumed to already be a valid OpenAI function, a JSON schema with
- top-level 'title' key specified, an Anthropic format
- tool, or an Amazon Bedrock Converse format tool.
+ top-level `title` key specified, an Anthropic format tool, or an Amazon
+ Bedrock Converse format tool.
strict:
If `True`, model output is guaranteed to exactly match the JSON Schema
provided in the function definition. If `None`, `strict` argument will not
@@ -419,17 +351,8 @@ def convert_to_openai_function(
Raises:
ValueError: If function is not in a supported format.
- !!! warning "Behavior changed in 0.2.29"
- `strict` arg added.
-
- !!! warning "Behavior changed in 0.3.13"
- Support for Anthropic format tools added.
-
- !!! warning "Behavior changed in 0.3.14"
- Support for Amazon Bedrock Converse format tools added.
-
- !!! warning "Behavior changed in 0.3.16"
- 'description' and 'parameters' keys are now optional. Only 'name' is
+ !!! warning "Behavior changed in `langchain-core` 0.3.16"
+ `description` and `parameters` keys are now optional. Only `name` is
required and guaranteed to be part of the output.
"""
# an Anthropic format tool
@@ -489,7 +412,7 @@ def convert_to_openai_function(
if strict is not None:
if "strict" in oai_function and oai_function["strict"] != strict:
msg = (
- f"Tool/function already has a 'strict' key wth value "
+ f"Tool/function already has a 'strict' key with value "
f"{oai_function['strict']} which is different from the explicit "
f"`strict` arg received {strict=}."
)
@@ -502,6 +425,14 @@ def convert_to_openai_function(
oai_function["parameters"] = _recursive_set_additional_properties_false(
oai_function["parameters"]
)
+ # All fields must be `required`
+ parameters = oai_function.get("parameters")
+ if isinstance(parameters, dict):
+ fields = parameters.get("properties")
+ if isinstance(fields, dict) and fields:
+ parameters = dict(parameters)
+ parameters["required"] = list(fields.keys())
+ oai_function["parameters"] = parameters
return oai_function
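A sketch of the effect of the block added above: with `strict=True`, every property ends up listed under `required` (the Pydantic model below is invented for the example):
```python
from pydantic import BaseModel, Field

from langchain_core.utils.function_calling import convert_to_openai_function


class GetWeather(BaseModel):
    """Get the current weather for a location."""

    location: str = Field(description="City name")
    unit: str = Field(default="celsius", description="Temperature unit")


fn = convert_to_openai_function(GetWeather, strict=True)
assert fn["strict"] is True
assert set(fn["parameters"]["required"]) == {"location", "unit"}
```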
@@ -527,16 +458,14 @@ def convert_to_openai_tool(
) -> dict[str, Any]:
"""Convert a tool-like object to an OpenAI tool schema.
- OpenAI tool schema reference:
- https://platform.openai.com/docs/api-reference/chat/create#chat-create-tools
+ [OpenAI tool schema reference](https://platform.openai.com/docs/api-reference/chat/create#chat-create-tools)
Args:
tool:
- Either a dictionary, a pydantic.BaseModel class, Python function, or
- BaseTool. If a dictionary is passed in, it is
- assumed to already be a valid OpenAI function, a JSON schema with
- top-level 'title' key specified, an Anthropic format
- tool, or an Amazon Bedrock Converse format tool.
+ Either a dictionary, a `pydantic.BaseModel` class, Python function, or
+ `BaseTool`. If a dictionary is passed in, it is assumed to already be a
+ valid OpenAI function, a JSON schema with top-level `title` key specified,
+ an Anthropic format tool, or an Amazon Bedrock Converse format tool.
strict:
If `True`, model output is guaranteed to exactly match the JSON Schema
provided in the function definition. If `None`, `strict` argument will not
@@ -546,28 +475,16 @@ def convert_to_openai_tool(
A dict version of the passed in tool which is compatible with the
OpenAI tool-calling API.
- !!! warning "Behavior changed in 0.2.29"
- `strict` arg added.
-
- !!! warning "Behavior changed in 0.3.13"
- Support for Anthropic format tools added.
-
- !!! warning "Behavior changed in 0.3.14"
- Support for Amazon Bedrock Converse format tools added.
-
- !!! warning "Behavior changed in 0.3.16"
- 'description' and 'parameters' keys are now optional. Only 'name' is
+ !!! warning "Behavior changed in `langchain-core` 0.3.16"
+ `description` and `parameters` keys are now optional. Only `name` is
required and guaranteed to be part of the output.
- !!! warning "Behavior changed in 0.3.44"
+ !!! warning "Behavior changed in `langchain-core` 0.3.44"
Return OpenAI Responses API-style tools unchanged. This includes
- any dict with "type" in "file_search", "function", "computer_use_preview",
- "web_search_preview".
+ any dict with `"type"` in `"file_search"`, `"function"`,
+ `"computer_use_preview"`, `"web_search_preview"`.
- !!! warning "Behavior changed in 0.3.61"
- Added support for OpenAI's built-in code interpreter and remote MCP tools.
-
- !!! warning "Behavior changed in 0.3.63"
+ !!! warning "Behavior changed in `langchain-core` 0.3.63"
Added support for OpenAI's image generation built-in tool.
"""
# Import locally to prevent circular import
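For comparison, a minimal sketch of `convert_to_openai_tool` on a plain Python function (the function is invented):
```python
from langchain_core.utils.function_calling import convert_to_openai_tool


def get_weather(city: str) -> str:
    """Return the weather for a city."""
    return "sunny"


oai_tool = convert_to_openai_tool(get_weather)
assert oai_tool["type"] == "function"
assert oai_tool["function"]["name"] == "get_weather"
```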
@@ -665,7 +582,7 @@ def tool_example_to_messages(
tool_calls: Tool calls represented as Pydantic BaseModels
tool_outputs: Tool call outputs.
Does not need to be provided. If not provided, a placeholder value
- will be inserted. Defaults to `None`.
+ will be inserted.
ai_response: If provided, content for a final `AIMessage`.
Returns:
@@ -713,7 +630,7 @@ def tool_example_to_messages(
"type": "function",
"function": {
# The name of the function right now corresponds to the name
- # of the pydantic model. This is implicit in the API right now,
+ # of the Pydantic model. This is implicit in the API right now,
# and will be improved over time.
"name": tool_call.__class__.__name__,
"arguments": tool_call.model_dump_json(),
@@ -736,6 +653,9 @@ def tool_example_to_messages(
return messages
+_MIN_DOCSTRING_BLOCKS = 2
+
+
def _parse_google_docstring(
docstring: str | None,
args: list[str],
@@ -754,7 +674,7 @@ def _parse_google_docstring(
arg for arg in args if arg not in {"run_manager", "callbacks", "return"}
}
if filtered_annotations and (
- len(docstring_blocks) < 2
+ len(docstring_blocks) < _MIN_DOCSTRING_BLOCKS
or not any(block.startswith("Args:") for block in docstring_blocks[1:])
):
msg = "Found invalid Google-Style docstring."
diff --git a/libs/core/langchain_core/utils/input.py b/libs/core/langchain_core/utils/input.py
index bedb58cb625..d8946230bf3 100644
--- a/libs/core/langchain_core/utils/input.py
+++ b/libs/core/langchain_core/utils/input.py
@@ -26,6 +26,9 @@ def get_color_mapping(
colors = list(_TEXT_COLOR_MAPPING.keys())
if excluded_colors is not None:
colors = [c for c in colors if c not in excluded_colors]
+ if not colors:
+ msg = "No colors available after applying exclusions."
+ raise ValueError(msg)
return {item: colors[i % len(colors)] for i, item in enumerate(items)}
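For context, a tiny sketch of normal usage; the new guard above only fires when `excluded_colors` removes every available color, which previously surfaced as a `ZeroDivisionError` in the modulo below.
```python
from langchain_core.utils.input import get_color_mapping

# Items are assigned colors round-robin from the built-in palette.
print(get_color_mapping(["step-1", "step-2", "step-3"]))
```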
@@ -65,9 +68,9 @@ def print_text(
Args:
text: The text to print.
- color: The color to use. Defaults to `None`.
- end: The end character to use. Defaults to "".
- file: The file to write to. Defaults to `None`.
+ color: The color to use.
+ end: The end character to use.
+ file: The file to write to.
"""
text_to_print = get_colored_text(text, color) if color else text
print(text_to_print, end=end, file=file)
diff --git a/libs/core/langchain_core/utils/interactive_env.py b/libs/core/langchain_core/utils/interactive_env.py
index 305b8edc146..f86fe0763b2 100644
--- a/libs/core/langchain_core/utils/interactive_env.py
+++ b/libs/core/langchain_core/utils/interactive_env.py
@@ -7,6 +7,6 @@ def is_interactive_env() -> bool:
"""Determine if running within IPython or Jupyter.
Returns:
- True if running in an interactive environment, False otherwise.
+ `True` if running in an interactive environment, `False` otherwise.
"""
return hasattr(sys, "ps2")
diff --git a/libs/core/langchain_core/utils/iter.py b/libs/core/langchain_core/utils/iter.py
index 2334d9063f6..a4f9b0e1ade 100644
--- a/libs/core/langchain_core/utils/iter.py
+++ b/libs/core/langchain_core/utils/iter.py
@@ -137,9 +137,9 @@ class Tee(Generic[T]):
Args:
iterable: The iterable to split.
- n: The number of iterators to create. Defaults to 2.
+ n: The number of iterators to create.
lock: The lock to synchronise access to the shared buffers.
- Defaults to `None`.
+
"""
self._iterator = iter(iterable)
self._buffers: list[deque[T]] = [deque() for _ in range(n)]
diff --git a/libs/core/langchain_core/utils/json.py b/libs/core/langchain_core/utils/json.py
index b1e3879a476..9c00b11f94f 100644
--- a/libs/core/langchain_core/utils/json.py
+++ b/libs/core/langchain_core/utils/json.py
@@ -51,7 +51,7 @@ def parse_partial_json(s: str, *, strict: bool = False) -> Any:
Args:
s: The JSON string to parse.
- strict: Whether to use strict parsing. Defaults to `False`.
+ strict: Whether to use strict parsing.
Returns:
The parsed JSON object as a Python dictionary.
diff --git a/libs/core/langchain_core/utils/json_schema.py b/libs/core/langchain_core/utils/json_schema.py
index 6118bd95dc2..68b04beb8c8 100644
--- a/libs/core/langchain_core/utils/json_schema.py
+++ b/libs/core/langchain_core/utils/json_schema.py
@@ -226,7 +226,7 @@ def dereference_refs(
... }
>>> result = dereference_refs(schema) # Won't cause infinite recursion
- Note:
+ !!! note
- Circular references are handled gracefully by breaking cycles
- Mixed $ref objects (with both $ref and other properties) are supported
- Additional properties in mixed $refs override resolved properties
diff --git a/libs/core/langchain_core/utils/pydantic.py b/libs/core/langchain_core/utils/pydantic.py
index db7bf460ffb..fd3413e715b 100644
--- a/libs/core/langchain_core/utils/pydantic.py
+++ b/libs/core/langchain_core/utils/pydantic.py
@@ -65,8 +65,8 @@ def get_pydantic_major_version() -> int:
PYDANTIC_MAJOR_VERSION = PYDANTIC_VERSION.major
PYDANTIC_MINOR_VERSION = PYDANTIC_VERSION.minor
-IS_PYDANTIC_V1 = PYDANTIC_VERSION.major == 1
-IS_PYDANTIC_V2 = PYDANTIC_VERSION.major == 2
+IS_PYDANTIC_V1 = False
+IS_PYDANTIC_V2 = True
PydanticBaseModel = BaseModel
TypeBaseModel = type[BaseModel]
@@ -78,7 +78,7 @@ def is_pydantic_v1_subclass(cls: type) -> bool:
"""Check if the given class is Pydantic v1-like.
Returns:
- True if the given class is a subclass of Pydantic `BaseModel` 1.x.
+ `True` if the given class is a subclass of Pydantic `BaseModel` 1.x.
"""
return issubclass(cls, BaseModelV1)
@@ -87,7 +87,7 @@ def is_pydantic_v2_subclass(cls: type) -> bool:
"""Check if the given class is Pydantic v2-like.
Returns:
- True if the given class is a subclass of Pydantic BaseModel 2.x.
+ `True` if the given class is a subclass of Pydantic `BaseModel` 2.x.
"""
return issubclass(cls, BaseModel)
@@ -101,7 +101,7 @@ def is_basemodel_subclass(cls: type) -> bool:
* pydantic.v1.BaseModel in Pydantic 2.x
Returns:
- True if the given class is a subclass of Pydantic `BaseModel`.
+ `True` if the given class is a subclass of Pydantic `BaseModel`.
"""
# Before we can use issubclass on the cls we need to check if it is a class
if not inspect.isclass(cls) or isinstance(cls, GenericAlias):
@@ -119,7 +119,7 @@ def is_basemodel_instance(obj: Any) -> bool:
* pydantic.v1.BaseModel in Pydantic 2.x
Returns:
- True if the given class is an instance of Pydantic `BaseModel`.
+ `True` if the given class is an instance of Pydantic `BaseModel`.
"""
return isinstance(obj, (BaseModel, BaseModelV1))
@@ -206,7 +206,7 @@ def _create_subset_model_v1(
descriptions: dict | None = None,
fn_description: str | None = None,
) -> type[BaseModel]:
- """Create a pydantic model with only a subset of model's fields."""
+ """Create a Pydantic model with only a subset of model's fields."""
fields = {}
for field_name in field_names:
@@ -235,7 +235,7 @@ def _create_subset_model_v2(
descriptions: dict | None = None,
fn_description: str | None = None,
) -> type[BaseModel]:
- """Create a pydantic model with a subset of the model fields."""
+ """Create a Pydantic model with a subset of the model fields."""
descriptions_ = descriptions or {}
fields = {}
for field_name in field_names:
@@ -438,9 +438,9 @@ def create_model(
/,
**field_definitions: Any,
) -> type[BaseModel]:
- """Create a pydantic model with the given field definitions.
+ """Create a Pydantic model with the given field definitions.
- Please use create_model_v2 instead of this function.
+ Please use `create_model_v2` instead of this function.
Args:
model_name: The name of the model.
@@ -511,7 +511,7 @@ def create_model_v2(
field_definitions: dict[str, Any] | None = None,
root: Any | None = None,
) -> type[BaseModel]:
- """Create a pydantic model with the given field definitions.
+ """Create a Pydantic model with the given field definitions.
Attention:
Please do not use outside of langchain packages. This API
@@ -522,7 +522,7 @@ def create_model_v2(
module_name: The name of the module where the model is defined.
This is used by Pydantic to resolve any forward references.
field_definitions: The field definitions for the model.
- root: Type for a root model (RootModel)
+ root: Type for a root model (`RootModel`)
Returns:
The created model.
diff --git a/libs/core/langchain_core/utils/strings.py b/libs/core/langchain_core/utils/strings.py
index 86f7499dcc6..9326bc0662f 100644
--- a/libs/core/langchain_core/utils/strings.py
+++ b/libs/core/langchain_core/utils/strings.py
@@ -30,10 +30,7 @@ def stringify_dict(data: dict) -> str:
Returns:
The stringified dictionary.
"""
- text = ""
- for key, value in data.items():
- text += key + ": " + stringify_value(value) + "\n"
- return text
+ return "".join(f"{key}: {stringify_value(value)}\n" for key, value in data.items())
def comma_list(items: list[Any]) -> str:
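Behaviour is unchanged by the rewrite above; a quick sketch:
```python
from langchain_core.utils.strings import stringify_dict

print(stringify_dict({"title": "Report", "pages": 3}))
# title: Report
# pages: 3
```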
@@ -57,7 +54,7 @@ def sanitize_for_postgres(text: str, replacement: str = "") -> str:
Args:
text: The text to sanitize.
- replacement: String to replace NUL bytes with. Defaults to empty string.
+ replacement: String to replace NUL bytes with.
Returns:
The sanitized text with NUL bytes removed or replaced.
diff --git a/libs/core/langchain_core/utils/utils.py b/libs/core/langchain_core/utils/utils.py
index 3d9c2a838bf..cb22c049dd8 100644
--- a/libs/core/langchain_core/utils/utils.py
+++ b/libs/core/langchain_core/utils/utils.py
@@ -123,8 +123,8 @@ def guard_import(
Args:
module_name: The name of the module to import.
- pip_name: The name of the module to install with pip. Defaults to `None`.
- package: The package to import the module from. Defaults to `None`.
+ pip_name: The name of the module to install with pip.
+ package: The package to import the module from.
Returns:
The imported module.
@@ -155,11 +155,11 @@ def check_package_version(
Args:
package: The name of the package.
- lt_version: The version must be less than this. Defaults to `None`.
- lte_version: The version must be less than or equal to this. Defaults to `None`.
- gt_version: The version must be greater than this. Defaults to `None`.
+ lt_version: The version must be less than this.
+ lte_version: The version must be less than or equal to this.
+ gt_version: The version must be greater than this.
gte_version: The version must be greater than or equal to this.
- Defaults to `None`.
+
Raises:
ValueError: If the package version does not meet the requirements.
@@ -218,7 +218,7 @@ def _build_model_kwargs(
values: dict[str, Any],
all_required_field_names: set[str],
) -> dict[str, Any]:
- """Build "model_kwargs" param from Pydantic constructor values.
+ """Build `model_kwargs` param from Pydantic constructor values.
Args:
values: All init args passed in by user.
@@ -228,8 +228,8 @@ def _build_model_kwargs(
Extra kwargs.
Raises:
- ValueError: If a field is specified in both values and extra_kwargs.
- ValueError: If a field is specified in model_kwargs.
+ ValueError: If a field is specified in both `values` and `extra_kwargs`.
+ ValueError: If a field is specified in `model_kwargs`.
"""
extra_kwargs = values.get("model_kwargs", {})
for field_name in list(values):
@@ -267,6 +267,10 @@ def build_extra_kwargs(
) -> dict[str, Any]:
"""Build extra kwargs from values and extra_kwargs.
+ !!! danger "DON'T USE"
+ Kept for backwards-compatibility but should never have been public. Use the
+ internal `_build_model_kwargs` function instead.
+
Args:
extra_kwargs: Extra kwargs passed in by user.
values: Values passed in by user.
@@ -276,9 +280,10 @@ def build_extra_kwargs(
Extra kwargs.
Raises:
- ValueError: If a field is specified in both values and extra_kwargs.
- ValueError: If a field is specified in model_kwargs.
+ ValueError: If a field is specified in both `values` and `extra_kwargs`.
+ ValueError: If a field is specified in `model_kwargs`.
"""
+ # DON'T USE! Kept for backwards-compatibility but should never have been public.
for field_name in list(values):
if field_name in extra_kwargs:
msg = f"Found {field_name} supplied twice."
@@ -292,6 +297,7 @@ def build_extra_kwargs(
)
extra_kwargs[field_name] = values.pop(field_name)
+ # DON'T USE! Kept for backwards-compatibility but should never have been public.
invalid_model_kwargs = all_required_field_names.intersection(extra_kwargs.keys())
if invalid_model_kwargs:
msg = (
@@ -300,6 +306,7 @@ def build_extra_kwargs(
)
raise ValueError(msg)
+ # DON'T USE! Kept for backwards-compatibility but should never have been public.
return extra_kwargs
diff --git a/libs/core/langchain_core/vectorstores/base.py b/libs/core/langchain_core/vectorstores/base.py
index fa15a3018aa..3580f5e9cde 100644
--- a/libs/core/langchain_core/vectorstores/base.py
+++ b/libs/core/langchain_core/vectorstores/base.py
@@ -52,22 +52,22 @@ class VectorStore(ABC):
ids: list[str] | None = None,
**kwargs: Any,
) -> list[str]:
- """Run more texts through the embeddings and add to the vectorstore.
+ """Run more texts through the embeddings and add to the `VectorStore`.
Args:
- texts: Iterable of strings to add to the vectorstore.
+ texts: Iterable of strings to add to the `VectorStore`.
metadatas: Optional list of metadatas associated with the texts.
ids: Optional list of IDs associated with the texts.
- **kwargs: vectorstore specific parameters.
+ **kwargs: `VectorStore` specific parameters.
One of the kwargs should be `ids` which is a list of ids
associated with the texts.
Returns:
- List of ids from adding the texts into the vectorstore.
+ List of IDs from adding the texts into the `VectorStore`.
Raises:
ValueError: If the number of metadatas does not match the number of texts.
- ValueError: If the number of ids does not match the number of texts.
+ ValueError: If the number of IDs does not match the number of texts.
"""
if type(self).add_documents != VectorStore.add_documents:
# This condition is triggered if the subclass has provided
@@ -109,11 +109,12 @@ class VectorStore(ABC):
"""Delete by vector ID or other criteria.
Args:
- ids: List of ids to delete. If `None`, delete all. Default is None.
+ ids: List of IDs to delete. If `None`, delete all.
**kwargs: Other keyword arguments that subclasses might use.
Returns:
- True if deletion is successful, False otherwise, None if not implemented.
+ `True` if deletion is successful, `False` otherwise, `None` if not
+ implemented.
"""
msg = "delete method must be implemented by subclass."
raise NotImplementedError(msg)
@@ -135,12 +136,10 @@ class VectorStore(ABC):
some IDs.
Args:
- ids: List of ids to retrieve.
+ ids: List of IDs to retrieve.
Returns:
- List of Documents.
-
- !!! version-added "Added in version 0.2.11"
+ List of `Document` objects.
"""
msg = f"{self.__class__.__name__} does not yet support get_by_ids."
raise NotImplementedError(msg)
@@ -163,12 +162,10 @@ class VectorStore(ABC):
some IDs.
Args:
- ids: List of ids to retrieve.
+ ids: List of IDs to retrieve.
Returns:
- List of Documents.
-
- !!! version-added "Added in version 0.2.11"
+ List of `Document` objects.
"""
return await run_in_executor(None, self.get_by_ids, ids)
@@ -176,11 +173,12 @@ class VectorStore(ABC):
"""Async delete by vector ID or other criteria.
Args:
- ids: List of ids to delete. If `None`, delete all. Default is None.
+ ids: List of IDs to delete. If `None`, delete all.
**kwargs: Other keyword arguments that subclasses might use.
Returns:
- True if deletion is successful, False otherwise, None if not implemented.
+ `True` if deletion is successful, `False` otherwise, `None` if not
+ implemented.
"""
return await run_in_executor(None, self.delete, ids, **kwargs)
@@ -192,21 +190,20 @@ class VectorStore(ABC):
ids: list[str] | None = None,
**kwargs: Any,
) -> list[str]:
- """Async run more texts through the embeddings and add to the vectorstore.
+ """Async run more texts through the embeddings and add to the `VectorStore`.
Args:
- texts: Iterable of strings to add to the vectorstore.
+ texts: Iterable of strings to add to the `VectorStore`.
metadatas: Optional list of metadatas associated with the texts.
- Default is None.
ids: Optional list
- **kwargs: vectorstore specific parameters.
+ **kwargs: `VectorStore` specific parameters.
Returns:
- List of ids from adding the texts into the vectorstore.
+ List of IDs from adding the texts into the `VectorStore`.
Raises:
ValueError: If the number of metadatas does not match the number of texts.
- ValueError: If the number of ids does not match the number of texts.
+ ValueError: If the number of IDs does not match the number of texts.
"""
if ids is not None:
# For backward compatibility
@@ -235,13 +232,14 @@ class VectorStore(ABC):
return await run_in_executor(None, self.add_texts, texts, metadatas, **kwargs)
def add_documents(self, documents: list[Document], **kwargs: Any) -> list[str]:
- """Add or update documents in the vectorstore.
+ """Add or update documents in the `VectorStore`.
Args:
- documents: Documents to add to the vectorstore.
+ documents: Documents to add to the `VectorStore`.
**kwargs: Additional keyword arguments.
- if kwargs contains ids and documents contain ids,
- the ids in the kwargs will receive precedence.
+
+ If kwargs contains IDs and documents contain IDs, the IDs in the kwargs
+ will receive precedence.
Returns:
List of IDs of the added texts.
@@ -267,10 +265,10 @@ class VectorStore(ABC):
async def aadd_documents(
self, documents: list[Document], **kwargs: Any
) -> list[str]:
- """Async run more documents through the embeddings and add to the vectorstore.
+ """Async run more documents through the embeddings and add to the `VectorStore`.
Args:
- documents: Documents to add to the vectorstore.
+ documents: Documents to add to the `VectorStore`.
**kwargs: Additional keyword arguments.
Returns:
@@ -296,17 +294,17 @@ class VectorStore(ABC):
"""Return docs most similar to query using a specified search type.
Args:
- query: Input text
- search_type: Type of search to perform. Can be "similarity",
- "mmr", or "similarity_score_threshold".
+ query: Input text.
+ search_type: Type of search to perform. Can be `'similarity'`, `'mmr'`, or
+ `'similarity_score_threshold'`.
**kwargs: Arguments to pass to the search method.
Returns:
- List of Documents most similar to the query.
+ List of `Document` objects most similar to the query.
Raises:
- ValueError: If search_type is not one of "similarity",
- "mmr", or "similarity_score_threshold".
+ ValueError: If `search_type` is not one of `'similarity'`,
+ `'mmr'`, or `'similarity_score_threshold'`.
"""
if search_type == "similarity":
return self.similarity_search(query, **kwargs)
@@ -331,16 +329,16 @@ class VectorStore(ABC):
Args:
query: Input text.
- search_type: Type of search to perform. Can be "similarity",
- "mmr", or "similarity_score_threshold".
+ search_type: Type of search to perform. Can be `'similarity'`, `'mmr'`, or
+ `'similarity_score_threshold'`.
**kwargs: Arguments to pass to the search method.
Returns:
- List of Documents most similar to the query.
+ List of `Document` objects most similar to the query.
Raises:
- ValueError: If search_type is not one of "similarity",
- "mmr", or "similarity_score_threshold".
+ ValueError: If `search_type` is not one of `'similarity'`,
+ `'mmr'`, or `'similarity_score_threshold'`.
"""
if search_type == "similarity":
return await self.asimilarity_search(query, **kwargs)
@@ -365,11 +363,11 @@ class VectorStore(ABC):
Args:
query: Input text.
- k: Number of Documents to return. Defaults to 4.
+ k: Number of `Document` objects to return.
**kwargs: Arguments to pass to the search method.
Returns:
- List of Documents most similar to the query.
+ List of `Document` objects most similar to the query.
"""
@staticmethod
@@ -424,7 +422,7 @@ class VectorStore(ABC):
**kwargs: Arguments to pass to the search method.
Returns:
- List of Tuples of (doc, similarity_score).
+ List of tuples of `(doc, similarity_score)`.
"""
raise NotImplementedError
@@ -438,7 +436,7 @@ class VectorStore(ABC):
**kwargs: Arguments to pass to the search method.
Returns:
- List of Tuples of (doc, similarity_score).
+ List of tuples of `(doc, similarity_score)`.
"""
# This is a temporary workaround to make the similarity search
# asynchronous. The proper solution is to make the similarity search
@@ -456,19 +454,19 @@ class VectorStore(ABC):
"""Default similarity search with relevance scores.
Modify if necessary in subclass.
- Return docs and relevance scores in the range [0, 1].
+ Return docs and relevance scores in the range `[0, 1]`.
- 0 is dissimilar, 1 is most similar.
+ `0` is dissimilar, `1` is most similar.
Args:
query: Input text.
- k: Number of Documents to return. Defaults to 4.
- **kwargs: kwargs to be passed to similarity search. Should include:
- score_threshold: Optional, a floating point value between 0 to 1 to
- filter the resulting set of retrieved docs
+ k: Number of `Document` objects to return.
+ **kwargs: kwargs to be passed to similarity search. Should include
+ `score_threshold`, an optional floating point value between `0` and `1`
+ to filter the resulting set of retrieved docs.
Returns:
- List of Tuples of (doc, similarity_score)
+ List of tuples of `(doc, similarity_score)`
"""
relevance_score_fn = self._select_relevance_score_fn()
docs_and_scores = self.similarity_search_with_score(query, k, **kwargs)
@@ -483,19 +481,19 @@ class VectorStore(ABC):
"""Default similarity search with relevance scores.
Modify if necessary in subclass.
- Return docs and relevance scores in the range [0, 1].
+ Return docs and relevance scores in the range `[0, 1]`.
- 0 is dissimilar, 1 is most similar.
+ `0` is dissimilar, `1` is most similar.
Args:
query: Input text.
- k: Number of Documents to return. Defaults to 4.
- **kwargs: kwargs to be passed to similarity search. Should include:
- score_threshold: Optional, a floating point value between 0 to 1 to
- filter the resulting set of retrieved docs
+ k: Number of `Document` objects to return.
+ **kwargs: kwargs to be passed to similarity search. Should include
+ `score_threshold`, an optional floating point value between `0` and `1`
+ to filter the resulting set of retrieved docs.
Returns:
- List of Tuples of (doc, similarity_score)
+ List of tuples of `(doc, similarity_score)`
"""
relevance_score_fn = self._select_relevance_score_fn()
docs_and_scores = await self.asimilarity_search_with_score(query, k, **kwargs)
@@ -507,19 +505,19 @@ class VectorStore(ABC):
k: int = 4,
**kwargs: Any,
) -> list[tuple[Document, float]]:
- """Return docs and relevance scores in the range [0, 1].
+ """Return docs and relevance scores in the range `[0, 1]`.
- 0 is dissimilar, 1 is most similar.
+ `0` is dissimilar, `1` is most similar.
Args:
query: Input text.
- k: Number of Documents to return. Defaults to 4.
- **kwargs: kwargs to be passed to similarity search. Should include:
- score_threshold: Optional, a floating point value between 0 to 1 to
- filter the resulting set of retrieved docs.
+ k: Number of `Document` objects to return.
+ **kwargs: kwargs to be passed to similarity search. Should include
+ `score_threshold`, an optional floating point value between `0` and `1`
+ to filter the resulting set of retrieved docs.
Returns:
- List of Tuples of (doc, similarity_score).
+ List of tuples of `(doc, similarity_score)`.
"""
score_threshold = kwargs.pop("score_threshold", None)
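A usage sketch, assuming `store` is a `VectorStore` whose integration implements `_select_relevance_score_fn` (required for relevance-score search):
```python
# Scores are normalized to [0, 1]; score_threshold drops weak matches.
docs_and_scores = store.similarity_search_with_relevance_scores(
    "quarterly revenue", k=4, score_threshold=0.5
)
for doc, score in docs_and_scores:
    print(round(score, 2), doc.page_content[:60])
```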
@@ -556,19 +554,19 @@ class VectorStore(ABC):
k: int = 4,
**kwargs: Any,
) -> list[tuple[Document, float]]:
- """Async return docs and relevance scores in the range [0, 1].
+ """Async return docs and relevance scores in the range `[0, 1]`.
- 0 is dissimilar, 1 is most similar.
+ `0` is dissimilar, `1` is most similar.
Args:
query: Input text.
- k: Number of Documents to return. Defaults to 4.
- **kwargs: kwargs to be passed to similarity search. Should include:
- score_threshold: Optional, a floating point value between 0 to 1 to
- filter the resulting set of retrieved docs
+ k: Number of `Document` objects to return.
+ **kwargs: kwargs to be passed to similarity search. Should include
+ `score_threshold`, an optional floating point value between `0` and `1`
+ to filter the resulting set of retrieved docs.
Returns:
- List of Tuples of (doc, similarity_score)
+ List of tuples of `(doc, similarity_score)`
"""
score_threshold = kwargs.pop("score_threshold", None)
@@ -606,11 +604,11 @@ class VectorStore(ABC):
Args:
query: Input text.
- k: Number of Documents to return. Defaults to 4.
+ k: Number of `Document` objects to return.
**kwargs: Arguments to pass to the search method.
Returns:
- List of Documents most similar to the query.
+ List of `Document` objects most similar to the query.
"""
# This is a temporary workaround to make the similarity search
# asynchronous. The proper solution is to make the similarity search
@@ -624,11 +622,11 @@ class VectorStore(ABC):
Args:
embedding: Embedding to look up documents similar to.
- k: Number of Documents to return. Defaults to 4.
+ k: Number of `Document` objects to return.
**kwargs: Arguments to pass to the search method.
Returns:
- List of Documents most similar to the query vector.
+ List of `Document` objects most similar to the query vector.
"""
raise NotImplementedError
@@ -639,11 +637,11 @@ class VectorStore(ABC):
Args:
embedding: Embedding to look up documents similar to.
- k: Number of Documents to return. Defaults to 4.
+ k: Number of `Document` objects to return.
**kwargs: Arguments to pass to the search method.
Returns:
- List of Documents most similar to the query vector.
+ List of `Document` objects most similar to the query vector.
"""
# This is a temporary workaround to make the similarity search
# asynchronous. The proper solution is to make the similarity search
@@ -667,17 +665,15 @@ class VectorStore(ABC):
Args:
query: Text to look up documents similar to.
- k: Number of Documents to return. Defaults to 4.
- fetch_k: Number of Documents to fetch to pass to MMR algorithm.
- Default is 20.
- lambda_mult: Number between 0 and 1 that determines the degree
- of diversity among the results with 0 corresponding
- to maximum diversity and 1 to minimum diversity.
- Defaults to 0.5.
+ k: Number of `Document` objects to return.
+ fetch_k: Number of `Document` objects to fetch and pass to the MMR algorithm.
+ lambda_mult: Number between `0` and `1` that determines the degree
+ of diversity among the results with `0` corresponding
+ to maximum diversity and `1` to minimum diversity.
**kwargs: Arguments to pass to the search method.
Returns:
- List of Documents selected by maximal marginal relevance.
+ List of `Document` objects selected by maximal marginal relevance.
"""
raise NotImplementedError
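A usage sketch, assuming `store` is a `VectorStore` integration that implements MMR search:
```python
# Fetch 20 candidates, then keep the 4 that best balance relevance
# (lambda_mult near 1) against diversity (lambda_mult near 0).
docs = store.max_marginal_relevance_search(
    "transformer architectures", k=4, fetch_k=20, lambda_mult=0.5
)
```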
@@ -696,17 +692,15 @@ class VectorStore(ABC):
Args:
query: Text to look up documents similar to.
- k: Number of Documents to return. Defaults to 4.
- fetch_k: Number of Documents to fetch to pass to MMR algorithm.
- Default is 20.
- lambda_mult: Number between 0 and 1 that determines the degree
- of diversity among the results with 0 corresponding
- to maximum diversity and 1 to minimum diversity.
- Defaults to 0.5.
+ k: Number of `Document` objects to return.
+ fetch_k: Number of `Document` objects to fetch and pass to the MMR algorithm.
+ lambda_mult: Number between `0` and `1` that determines the degree
+ of diversity among the results with `0` corresponding
+ to maximum diversity and `1` to minimum diversity.
**kwargs: Arguments to pass to the search method.
Returns:
- List of Documents selected by maximal marginal relevance.
+ List of `Document` objects selected by maximal marginal relevance.
"""
# This is a temporary workaround to make the similarity search
# asynchronous. The proper solution is to make the similarity search
@@ -736,17 +730,15 @@ class VectorStore(ABC):
Args:
embedding: Embedding to look up documents similar to.
- k: Number of Documents to return. Defaults to 4.
- fetch_k: Number of Documents to fetch to pass to MMR algorithm.
- Default is 20.
- lambda_mult: Number between 0 and 1 that determines the degree
- of diversity among the results with 0 corresponding
- to maximum diversity and 1 to minimum diversity.
- Defaults to 0.5.
+ k: Number of `Document` objects to return.
+ fetch_k: Number of `Document` objects to fetch and pass to the MMR algorithm.
+ lambda_mult: Number between `0` and `1` that determines the degree
+ of diversity among the results with `0` corresponding
+ to maximum diversity and `1` to minimum diversity.
**kwargs: Arguments to pass to the search method.
Returns:
- List of Documents selected by maximal marginal relevance.
+ List of `Document` objects selected by maximal marginal relevance.
"""
raise NotImplementedError
@@ -765,17 +757,15 @@ class VectorStore(ABC):
Args:
embedding: Embedding to look up documents similar to.
- k: Number of Documents to return. Defaults to 4.
- fetch_k: Number of Documents to fetch to pass to MMR algorithm.
- Default is 20.
- lambda_mult: Number between 0 and 1 that determines the degree
- of diversity among the results with 0 corresponding
- to maximum diversity and 1 to minimum diversity.
- Defaults to 0.5.
+ k: Number of `Document` objects to return.
+ fetch_k: Number of `Document` objects to fetch and pass to the MMR algorithm.
+ lambda_mult: Number between `0` and `1` that determines the degree
+ of diversity among the results with `0` corresponding
+ to maximum diversity and `1` to minimum diversity.
**kwargs: Arguments to pass to the search method.
Returns:
- List of Documents selected by maximal marginal relevance.
+ List of `Document` objects selected by maximal marginal relevance.
"""
return await run_in_executor(
None,
@@ -794,15 +784,15 @@ class VectorStore(ABC):
embedding: Embeddings,
**kwargs: Any,
) -> Self:
- """Return VectorStore initialized from documents and embeddings.
+ """Return `VectorStore` initialized from documents and embeddings.
Args:
- documents: List of Documents to add to the vectorstore.
+ documents: List of `Document` objects to add to the `VectorStore`.
embedding: Embedding function to use.
**kwargs: Additional keyword arguments.
Returns:
- VectorStore initialized from documents and embeddings.
+ `VectorStore` initialized from documents and embeddings.
"""
texts = [d.page_content for d in documents]
metadatas = [d.metadata for d in documents]
@@ -824,15 +814,15 @@ class VectorStore(ABC):
embedding: Embeddings,
**kwargs: Any,
) -> Self:
- """Async return VectorStore initialized from documents and embeddings.
+ """Async return `VectorStore` initialized from documents and embeddings.
Args:
- documents: List of Documents to add to the vectorstore.
+ documents: List of `Document` objects to add to the `VectorStore`.
embedding: Embedding function to use.
**kwargs: Additional keyword arguments.
Returns:
- VectorStore initialized from documents and embeddings.
+ `VectorStore` initialized from documents and embeddings.
"""
texts = [d.page_content for d in documents]
metadatas = [d.metadata for d in documents]
@@ -858,18 +848,17 @@ class VectorStore(ABC):
ids: list[str] | None = None,
**kwargs: Any,
) -> VST:
- """Return VectorStore initialized from texts and embeddings.
+ """Return `VectorStore` initialized from texts and embeddings.
Args:
- texts: Texts to add to the vectorstore.
+ texts: Texts to add to the `VectorStore`.
embedding: Embedding function to use.
metadatas: Optional list of metadatas associated with the texts.
- Default is None.
ids: Optional list of IDs associated with the texts.
**kwargs: Additional keyword arguments.
Returns:
- VectorStore initialized from texts and embeddings.
+ `VectorStore` initialized from texts and embeddings.
"""
@classmethod
@@ -882,18 +871,17 @@ class VectorStore(ABC):
ids: list[str] | None = None,
**kwargs: Any,
) -> Self:
- """Async return VectorStore initialized from texts and embeddings.
+ """Async return `VectorStore` initialized from texts and embeddings.
Args:
- texts: Texts to add to the vectorstore.
+ texts: Texts to add to the `VectorStore`.
embedding: Embedding function to use.
metadatas: Optional list of metadatas associated with the texts.
- Default is None.
ids: Optional list of IDs associated with the texts.
**kwargs: Additional keyword arguments.
Returns:
- VectorStore initialized from texts and embeddings.
+ `VectorStore` initialized from texts and embeddings.
"""
if ids is not None:
kwargs["ids"] = ids
@@ -909,27 +897,29 @@ class VectorStore(ABC):
return tags
def as_retriever(self, **kwargs: Any) -> VectorStoreRetriever:
- """Return VectorStoreRetriever initialized from this VectorStore.
+ """Return `VectorStoreRetriever` initialized from this `VectorStore`.
Args:
**kwargs: Keyword arguments to pass to the search function.
Can include:
- search_type: Defines the type of search that the Retriever should
- perform. Can be "similarity" (default), "mmr", or
- "similarity_score_threshold".
- search_kwargs: Keyword arguments to pass to the search function. Can
+
+ * `search_type`: Defines the type of search that the Retriever should
+ perform. Can be `'similarity'` (default), `'mmr'`, or
+ `'similarity_score_threshold'`.
+ * `search_kwargs`: Keyword arguments to pass to the search function. Can
include things like:
- k: Amount of documents to return (Default: 4)
- score_threshold: Minimum relevance threshold
- for similarity_score_threshold
- fetch_k: Amount of documents to pass to MMR algorithm
- (Default: 20)
- lambda_mult: Diversity of results returned by MMR;
- 1 for minimum diversity and 0 for maximum. (Default: 0.5)
- filter: Filter by document metadata
+
+                    * `k`: Number of documents to return (Default: `4`)
+ * `score_threshold`: Minimum relevance threshold
+ for `similarity_score_threshold`
+                    * `fetch_k`: Number of documents to pass to MMR algorithm
+ (Default: `20`)
+ * `lambda_mult`: Diversity of results returned by MMR;
+                      `1` for minimum diversity and `0` for maximum. (Default: `0.5`)
+ * `filter`: Filter by document metadata
Returns:
- Retriever class for VectorStore.
+ Retriever class for `VectorStore`.
Examples:
```python
@@ -969,7 +959,7 @@ class VectorStoreRetriever(BaseRetriever):
vectorstore: VectorStore
"""VectorStore to use for retrieval."""
search_type: str = "similarity"
- """Type of search to perform. Defaults to "similarity"."""
+ """Type of search to perform."""
search_kwargs: dict = Field(default_factory=dict)
"""Keyword arguments to pass to the search function."""
allowed_search_types: ClassVar[Collection[str]] = (
@@ -994,8 +984,8 @@ class VectorStoreRetriever(BaseRetriever):
Validated values.
Raises:
- ValueError: If search_type is not one of the allowed search types.
- ValueError: If score_threshold is not specified with a float value(0~1)
+ ValueError: If `search_type` is not one of the allowed search types.
+            ValueError: If `score_threshold` is not specified with a float value (`0~1`)
"""
search_type = values.get("search_type", "similarity")
if search_type not in cls.allowed_search_types:
@@ -1083,10 +1073,10 @@ class VectorStoreRetriever(BaseRetriever):
return docs
def add_documents(self, documents: list[Document], **kwargs: Any) -> list[str]:
- """Add documents to the vectorstore.
+ """Add documents to the `VectorStore`.
Args:
- documents: Documents to add to the vectorstore.
+ documents: Documents to add to the `VectorStore`.
**kwargs: Other keyword arguments that subclasses might use.
Returns:
@@ -1097,10 +1087,10 @@ class VectorStoreRetriever(BaseRetriever):
async def aadd_documents(
self, documents: list[Document], **kwargs: Any
) -> list[str]:
- """Async add documents to the vectorstore.
+ """Async add documents to the `VectorStore`.
Args:
- documents: Documents to add to the vectorstore.
+ documents: Documents to add to the `VectorStore`.
**kwargs: Other keyword arguments that subclasses might use.
Returns:
diff --git a/libs/core/langchain_core/vectorstores/in_memory.py b/libs/core/langchain_core/vectorstores/in_memory.py
index 91b5c2c243f..cb8a9b19857 100644
--- a/libs/core/langchain_core/vectorstores/in_memory.py
+++ b/libs/core/langchain_core/vectorstores/in_memory.py
@@ -257,10 +257,10 @@ class InMemoryVectorStore(VectorStore):
"""Get documents by their ids.
Args:
- ids: The ids of the documents to get.
+ ids: The IDs of the documents to get.
Returns:
- A list of Document objects.
+ A list of `Document` objects.
"""
documents = []
@@ -281,10 +281,10 @@ class InMemoryVectorStore(VectorStore):
"""Async get documents by their ids.
Args:
- ids: The ids of the documents to get.
+ ids: The IDs of the documents to get.
Returns:
- A list of Document objects.
+ A list of `Document` objects.
"""
return self.get_by_ids(ids)
diff --git a/libs/core/langchain_core/vectorstores/utils.py b/libs/core/langchain_core/vectorstores/utils.py
index ca46e638223..855af9211e7 100644
--- a/libs/core/langchain_core/vectorstores/utils.py
+++ b/libs/core/langchain_core/vectorstores/utils.py
@@ -1,4 +1,4 @@
-"""Internal utilities for the in memory implementation of VectorStore.
+"""Internal utilities for the in memory implementation of `VectorStore`.
These are part of a private API, and users should not use them directly
as they can change without notice.
@@ -112,8 +112,8 @@ def maximal_marginal_relevance(
Args:
query_embedding: The query embedding.
embedding_list: A list of embeddings.
- lambda_mult: The lambda parameter for MMR. Default is 0.5.
- k: The number of embeddings to return. Default is 4.
+ lambda_mult: The lambda parameter for MMR.
+ k: The number of embeddings to return.
Returns:
A list of indices of the embeddings to return.
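For reference, these arguments feed the standard MMR criterion: greedily pick the candidate that maximizes `lambda_mult * sim(query, candidate) - (1 - lambda_mult) * max(sim(candidate, already_selected))`. A self-contained sketch of that selection loop (an illustration, not the library's internal implementation):

```python
import numpy as np


def mmr_select(
    query: np.ndarray,
    candidates: np.ndarray,
    lambda_mult: float = 0.5,
    k: int = 4,
) -> list[int]:
    """Greedy MMR selection over row vectors; returns indices into `candidates`."""

    def _unit(x: np.ndarray) -> np.ndarray:
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    q, cands = _unit(query), _unit(candidates)
    query_sim = cands @ q        # relevance of each candidate to the query
    pairwise = cands @ cands.T   # similarity between candidates

    selected = [int(np.argmax(query_sim))]
    while len(selected) < min(k, len(cands)):
        best, best_score = -1, -np.inf
        for i in range(len(cands)):
            if i in selected:
                continue
            redundancy = max(pairwise[i, j] for j in selected)
            score = lambda_mult * query_sim[i] - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected
```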
diff --git a/libs/core/langchain_core/version.py b/libs/core/langchain_core/version.py
index cf53d6cf6ee..611a108116a 100644
--- a/libs/core/langchain_core/version.py
+++ b/libs/core/langchain_core/version.py
@@ -1,3 +1,3 @@
"""langchain-core version information and utilities."""
-VERSION = "1.0.0a8"
+VERSION = "1.0.3"
diff --git a/libs/core/pyproject.toml b/libs/core/pyproject.toml
index 2fa85a31c84..6fb85621b5b 100644
--- a/libs/core/pyproject.toml
+++ b/libs/core/pyproject.toml
@@ -3,8 +3,13 @@ requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
-authors = []
+name = "langchain-core"
+description = "Building applications with LLMs through composability"
license = {text = "MIT"}
+readme = "README.md"
+authors = []
+
+version = "1.0.3"
requires-python = ">=3.10.0,<4.0.0"
dependencies = [
"langsmith>=0.3.45,<1.0.0",
@@ -15,18 +20,15 @@ dependencies = [
"packaging>=23.2.0,<26.0.0",
"pydantic>=2.7.4,<3.0.0",
]
-name = "langchain-core"
-version = "1.0.0a8"
-description = "Building applications with LLMs through composability"
-readme = "README.md"
[project.urls]
-homepage = "https://docs.langchain.com/"
-repository = "https://github.com/langchain-ai/langchain/tree/master/libs/core"
-changelog = "https://github.com/langchain-ai/langchain/releases?q=%22langchain-core%3D%3D1%22"
-twitter = "https://x.com/LangChainAI"
-slack = "https://www.langchain.com/join-community"
-reddit = "https://www.reddit.com/r/LangChain/"
+Homepage = "https://docs.langchain.com/"
+Documentation = "https://reference.langchain.com/python/langchain_core/"
+Source = "https://github.com/langchain-ai/langchain/tree/master/libs/core"
+Changelog = "https://github.com/langchain-ai/langchain/releases?q=%22langchain-core%3D%3D1%22"
+Twitter = "https://x.com/LangChainAI"
+Slack = "https://www.langchain.com/join-community"
+Reddit = "https://www.reddit.com/r/LangChain/"
[dependency-groups]
lint = ["ruff>=0.13.1,<0.14.0"]
@@ -34,6 +36,7 @@ typing = [
"mypy>=1.18.1,<1.19.0",
"types-pyyaml>=6.0.12.2,<7.0.0.0",
"types-requests>=2.28.11.5,<3.0.0.0",
+ "langchain-model-profiles",
"langchain-text-splitters",
]
dev = [
@@ -55,6 +58,7 @@ test = [
"blockbuster>=1.5.18,<1.6.0",
"numpy>=1.26.4; python_version<'3.13'",
"numpy>=2.1.0; python_version>='3.13'",
+ "langchain-model-profiles",
"langchain-tests",
"pytest-benchmark",
"pytest-codspeed",
@@ -62,6 +66,7 @@ test = [
test_integration = []
[tool.uv.sources]
+langchain-model-profiles = { path = "../model-profiles" }
langchain-tests = { path = "../standard-tests" }
langchain-text-splitters = { path = "../text-splitters" }
@@ -100,7 +105,6 @@ ignore = [
"ANN401", # No Any types
"BLE", # Blind exceptions
"ERA", # No commented-out code
- "PLR2004", # Comparison to magic number
]
unfixable = [
"B028", # People should intentionally tune the stacklevel
@@ -121,7 +125,7 @@ ignore-var-parameters = true # ignore missing documentation for *args and **kwa
"langchain_core/utils/mustache.py" = [ "PLW0603",]
"langchain_core/sys_info.py" = [ "T201",]
"tests/unit_tests/test_tools.py" = [ "ARG",]
-"tests/**" = [ "D1", "S", "SLF",]
+"tests/**" = [ "D1", "PLR2004", "S", "SLF",]
"scripts/**" = [ "INP", "S",]
[tool.coverage.run]
@@ -129,8 +133,10 @@ omit = [ "tests/*",]
[tool.pytest.ini_options]
addopts = "--snapshot-warn-unused --strict-markers --strict-config --durations=5"
-markers = [ "requires: mark tests as requiring a specific library", "compile: mark placeholder test used to compile integration tests without running them", ]
+markers = [
+ "requires: mark tests as requiring a specific library",
+ "compile: mark placeholder test used to compile integration tests without running them",
+]
asyncio_mode = "auto"
-filterwarnings = [ "ignore::langchain_core._api.beta_decorator.LangChainBetaWarning",]
asyncio_default_fixture_loop_scope = "function"
-
+filterwarnings = [ "ignore::langchain_core._api.beta_decorator.LangChainBetaWarning",]
diff --git a/libs/core/tests/unit_tests/callbacks/test_async_callback_manager.py b/libs/core/tests/unit_tests/callbacks/test_async_callback_manager.py
index e2cbaa9c723..32658f0f904 100644
--- a/libs/core/tests/unit_tests/callbacks/test_async_callback_manager.py
+++ b/libs/core/tests/unit_tests/callbacks/test_async_callback_manager.py
@@ -148,4 +148,65 @@ async def test_inline_handlers_share_parent_context_multiple() -> None:
2,
3,
3,
- ], f"Expected order of states was broken due to context loss. Got {states}"
+ ]
+
+
+async def test_shielded_callback_context_preservation() -> None:
+ """Verify that shielded callbacks preserve context variables.
+
+ This test specifically addresses the issue where async callbacks decorated
+ with @shielded do not properly preserve context variables, breaking
+ instrumentation and other context-dependent functionality.
+
+ The issue manifests in callbacks that use the @shielded decorator:
+ * on_llm_end
+ * on_llm_error
+ * on_chain_end
+ * on_chain_error
+ * And other shielded callback methods
+ """
+ context_var: contextvars.ContextVar[str] = contextvars.ContextVar("test_context")
+
+ class ContextTestHandler(AsyncCallbackHandler):
+ """Handler that reads context variables in shielded callbacks."""
+
+ def __init__(self) -> None:
+ self.run_inline = False
+ self.context_values: list[str] = []
+
+ @override
+ async def on_llm_end(self, response: Any, **kwargs: Any) -> None:
+ """This method is decorated with @shielded in the run manager."""
+ # This should preserve the context variable value
+ self.context_values.append(context_var.get("not_found"))
+
+ @override
+ async def on_chain_end(self, outputs: Any, **kwargs: Any) -> None:
+ """This method is decorated with @shielded in the run manager."""
+ # This should preserve the context variable value
+ self.context_values.append(context_var.get("not_found"))
+
+ # Set up the test context
+ context_var.set("test_value")
+ handler = ContextTestHandler()
+ manager = AsyncCallbackManager(handlers=[handler])
+
+ # Create run managers that have the shielded methods
+ llm_managers = await manager.on_llm_start({}, ["test prompt"])
+ llm_run_manager = llm_managers[0]
+
+ chain_run_manager = await manager.on_chain_start({}, {"test": "input"})
+
+ # Test LLM end callback (which is shielded)
+ await llm_run_manager.on_llm_end({"response": "test"}) # type: ignore[arg-type]
+
+ # Test Chain end callback (which is shielded)
+ await chain_run_manager.on_chain_end({"output": "test"})
+
+ # The context should be preserved in shielded callbacks
+ # This was the main issue - shielded decorators were not preserving context
+ assert handler.context_values == ["test_value", "test_value"], (
+        "Expected context values ['test_value', 'test_value'], "
+        f"but got {handler.context_values}. "
+        "This indicates the shielded decorator is not preserving context variables."
+ )
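As background for the test above, one way a cancellation shield can keep context variables visible is to create the shielded task inside the caller's context, since asyncio tasks copy the current `contextvars` context at creation time. A rough sketch (hypothetical, not the exact decorator used in `langchain_core`):

```python
import asyncio
import functools
from collections.abc import Awaitable, Callable
from typing import Any, TypeVar

T = TypeVar("T")


def shielded(func: Callable[..., Awaitable[T]]) -> Callable[..., Awaitable[T]]:
    """Shield a coroutine method from cancellation without losing contextvars.

    asyncio.shield wraps the coroutine in a Task created in the *current*
    context, so context variables set by the caller stay readable inside.
    """

    @functools.wraps(func)
    async def wrapper(*args: Any, **kwargs: Any) -> T:
        return await asyncio.shield(func(*args, **kwargs))

    return wrapper
```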
diff --git a/libs/core/tests/unit_tests/indexing/test_hashed_document.py b/libs/core/tests/unit_tests/indexing/test_hashed_document.py
index fd88391aa3a..0cabb28bbe1 100644
--- a/libs/core/tests/unit_tests/indexing/test_hashed_document.py
+++ b/libs/core/tests/unit_tests/indexing/test_hashed_document.py
@@ -33,7 +33,7 @@ def test_hashing() -> None:
# hash should be deterministic
assert hashed_document.id == "fd1dc827-051b-537d-a1fe-1fa043e8b276"
- # Verify that hashing with sha1 is determinstic
+ # Verify that hashing with sha1 is deterministic
another_hashed_document = _get_document_with_hash(document, key_encoder="sha1")
assert another_hashed_document.id == hashed_document.id
diff --git a/libs/core/tests/unit_tests/indexing/test_indexing.py b/libs/core/tests/unit_tests/indexing/test_indexing.py
index a4baef198d7..f598048a98f 100644
--- a/libs/core/tests/unit_tests/indexing/test_indexing.py
+++ b/libs/core/tests/unit_tests/indexing/test_indexing.py
@@ -604,7 +604,7 @@ def test_incremental_fails_with_bad_source_ids(
with pytest.raises(
ValueError,
- match="Source ids are required when cleanup mode is incremental or scoped_full",
+ match="Source IDs are required when cleanup mode is incremental or scoped_full",
):
# Should raise an error because no source id function was specified
index(
@@ -654,7 +654,7 @@ async def test_aincremental_fails_with_bad_source_ids(
with pytest.raises(
ValueError,
- match="Source ids are required when cleanup mode is incremental or scoped_full",
+ match="Source IDs are required when cleanup mode is incremental or scoped_full",
):
# Should raise an error because no source id function was specified
await aindex(
@@ -956,7 +956,7 @@ def test_scoped_full_fails_with_bad_source_ids(
with pytest.raises(
ValueError,
- match="Source ids are required when cleanup mode is incremental or scoped_full",
+ match="Source IDs are required when cleanup mode is incremental or scoped_full",
):
# Should raise an error because no source id function was specified
index(
@@ -1006,7 +1006,7 @@ async def test_ascoped_full_fails_with_bad_source_ids(
with pytest.raises(
ValueError,
- match="Source ids are required when cleanup mode is incremental or scoped_full",
+ match="Source IDs are required when cleanup mode is incremental or scoped_full",
):
# Should raise an error because no source id function was specified
await aindex(
@@ -2801,7 +2801,7 @@ def test_index_with_upsert_kwargs(
]
assert [doc.metadata for doc in args[0]] == [{"source": "1"}, {"source": "2"}]
- # Check that ids are present
+ # Check that IDs are present
assert "ids" in kwargs
assert isinstance(kwargs["ids"], list)
assert len(kwargs["ids"]) == 2
@@ -2932,7 +2932,7 @@ async def test_aindex_with_upsert_kwargs(
]
assert [doc.metadata for doc in args[0]] == [{"source": "1"}, {"source": "2"}]
- # Check that ids are present
+ # Check that IDs are present
assert "ids" in kwargs
assert isinstance(kwargs["ids"], list)
assert len(kwargs["ids"]) == 2
diff --git a/libs/core/tests/unit_tests/language_models/chat_models/test_base.py b/libs/core/tests/unit_tests/language_models/chat_models/test_base.py
index 3d8e03b5e75..b20623ad6fb 100644
--- a/libs/core/tests/unit_tests/language_models/chat_models/test_base.py
+++ b/libs/core/tests/unit_tests/language_models/chat_models/test_base.py
@@ -57,7 +57,7 @@ def _content_blocks_equal_ignore_id(
expected: Expected content to compare against (string or list of blocks).
Returns:
- True if content matches (excluding `id` fields), False otherwise.
+ True if content matches (excluding `id` fields), `False` otherwise.
"""
if isinstance(actual, str) or isinstance(expected, str):
@@ -1217,3 +1217,20 @@ def test_get_ls_params() -> None:
ls_params = llm._get_ls_params(stop=["stop"])
assert ls_params["ls_stop"] == ["stop"]
+
+
+def test_model_profiles() -> None:
+ model = GenericFakeChatModel(messages=iter([]))
+ profile = model.profile
+ assert profile == {}
+
+ class MyModel(GenericFakeChatModel):
+ model: str = "gpt-5"
+
+ @property
+ def _llm_type(self) -> str:
+ return "openai-chat"
+
+ model = MyModel(messages=iter([]))
+ profile = model.profile
+ assert profile
diff --git a/libs/core/tests/unit_tests/language_models/chat_models/test_cache.py b/libs/core/tests/unit_tests/language_models/chat_models/test_cache.py
index b246da593fa..667b2fba29b 100644
--- a/libs/core/tests/unit_tests/language_models/chat_models/test_cache.py
+++ b/libs/core/tests/unit_tests/language_models/chat_models/test_cache.py
@@ -26,11 +26,11 @@ class InMemoryCache(BaseCache):
self._cache: dict[tuple[str, str], RETURN_VAL_TYPE] = {}
def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
- """Look up based on prompt and llm_string."""
+ """Look up based on `prompt` and `llm_string`."""
return self._cache.get((prompt, llm_string), None)
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
- """Update cache based on prompt and llm_string."""
+ """Update cache based on `prompt` and `llm_string`."""
self._cache[prompt, llm_string] = return_val
@override
diff --git a/libs/core/tests/unit_tests/language_models/llms/test_cache.py b/libs/core/tests/unit_tests/language_models/llms/test_cache.py
index a0bd8a34b33..720c247a9c9 100644
--- a/libs/core/tests/unit_tests/language_models/llms/test_cache.py
+++ b/libs/core/tests/unit_tests/language_models/llms/test_cache.py
@@ -15,11 +15,11 @@ class InMemoryCache(BaseCache):
self._cache: dict[tuple[str, str], RETURN_VAL_TYPE] = {}
def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
- """Look up based on prompt and llm_string."""
+ """Look up based on `prompt` and `llm_string`."""
return self._cache.get((prompt, llm_string), None)
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
- """Update cache based on prompt and llm_string."""
+ """Update cache based on `prompt` and `llm_string`."""
self._cache[prompt, llm_string] = return_val
@override
@@ -68,12 +68,12 @@ class InMemoryCacheBad(BaseCache):
self._cache: dict[tuple[str, str], RETURN_VAL_TYPE] = {}
def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
- """Look up based on prompt and llm_string."""
+ """Look up based on `prompt` and `llm_string`."""
msg = "This code should not be triggered"
raise NotImplementedError(msg)
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
- """Update cache based on prompt and llm_string."""
+ """Update cache based on `prompt` and `llm_string`."""
msg = "This code should not be triggered"
raise NotImplementedError(msg)
diff --git a/libs/core/tests/unit_tests/messages/test_ai.py b/libs/core/tests/unit_tests/messages/test_ai.py
index d18a1e9b0a3..742f23b68be 100644
--- a/libs/core/tests/unit_tests/messages/test_ai.py
+++ b/libs/core/tests/unit_tests/messages/test_ai.py
@@ -1,7 +1,5 @@
from typing import cast
-import pytest
-
from langchain_core.load import dumpd, load
from langchain_core.messages import AIMessage, AIMessageChunk
from langchain_core.messages import content as types
@@ -358,6 +356,8 @@ def test_content_blocks() -> None:
# test v1 content
chunk_1.content = cast("str | list[str | dict]", chunk_1.content_blocks)
+ assert len(chunk_1.content) == 1
+ chunk_1.content[0]["extras"] = {"baz": "qux"} # type: ignore[index]
chunk_1.response_metadata["output_version"] = "v1"
chunk_2.content = cast("str | list[str | dict]", chunk_2.content_blocks)
@@ -368,6 +368,7 @@ def test_content_blocks() -> None:
"name": "foo",
"args": {"foo": "bar"},
"id": "abc_123",
+ "extras": {"baz": "qux"},
}
]
@@ -481,18 +482,6 @@ def test_content_blocks() -> None:
]
-def test_provider_warns() -> None:
- # Test that major providers warn if content block standardization is not yet
- # implemented.
- # This test should be removed when all major providers support content block
- # standardization.
- message = AIMessage("Hello.", response_metadata={"model_provider": "groq"})
- with pytest.warns(match="not yet fully supported for Groq"):
- content_blocks = message.content_blocks
-
- assert content_blocks == [{"type": "text", "text": "Hello."}]
-
-
def test_content_blocks_reasoning_extraction() -> None:
"""Test best-effort reasoning extraction from `additional_kwargs`."""
message = AIMessage(
diff --git a/libs/core/tests/unit_tests/messages/test_imports.py b/libs/core/tests/unit_tests/messages/test_imports.py
index 4999c74cdc7..263be79bfd3 100644
--- a/libs/core/tests/unit_tests/messages/test_imports.py
+++ b/libs/core/tests/unit_tests/messages/test_imports.py
@@ -55,6 +55,9 @@ EXPECTED_ALL = [
"convert_to_openai_data_block",
"convert_to_openai_image_block",
"convert_to_openai_messages",
+ "UsageMetadata",
+ "InputTokenDetails",
+ "OutputTokenDetails",
]
diff --git a/libs/core/tests/unit_tests/output_parsers/test_base_parsers.py b/libs/core/tests/unit_tests/output_parsers/test_base_parsers.py
index fa5e9c9c9c0..2013790aa05 100644
--- a/libs/core/tests/unit_tests/output_parsers/test_base_parsers.py
+++ b/libs/core/tests/unit_tests/output_parsers/test_base_parsers.py
@@ -25,7 +25,7 @@ def test_base_generation_parser() -> None:
"""Parse a list of model Generations into a specific format.
Args:
- result: A list of Generations to be parsed. The Generations are assumed
+                result: A list of `Generation` objects to be parsed. The `Generation` objects are assumed
to be different candidate outputs for a single model input.
Many parsers assume that only a single generation is passed it in.
We will assert for that
@@ -67,7 +67,7 @@ def test_base_transform_output_parser() -> None:
"""Parse a list of model Generations into a specific format.
Args:
- result: A list of Generations to be parsed. The Generations are assumed
+                result: A list of `Generation` objects to be parsed. The `Generation` objects are assumed
to be different candidate outputs for a single model input.
Many parsers assume that only a single generation is passed it in.
We will assert for that
diff --git a/libs/core/tests/unit_tests/output_parsers/test_pydantic_parser.py b/libs/core/tests/unit_tests/output_parsers/test_pydantic_parser.py
index 9bf3bd19489..2df7de35756 100644
--- a/libs/core/tests/unit_tests/output_parsers/test_pydantic_parser.py
+++ b/libs/core/tests/unit_tests/output_parsers/test_pydantic_parser.py
@@ -1,5 +1,6 @@
"""Test PydanticOutputParser."""
+import sys
from enum import Enum
from typing import Literal
@@ -13,7 +14,7 @@ from langchain_core.language_models import ParrotFakeChatModel
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.output_parsers.json import JsonOutputParser
from langchain_core.prompts.prompt import PromptTemplate
-from langchain_core.utils.pydantic import PydanticBaseModel, TBaseModel
+from langchain_core.utils.pydantic import PydanticBaseModel, TypeBaseModel
class ForecastV2(pydantic.BaseModel):
@@ -22,15 +23,23 @@ class ForecastV2(pydantic.BaseModel):
forecast: str
-class ForecastV1(V1BaseModel):
- temperature: int
- f_or_c: Literal["F", "C"]
- forecast: str
+if sys.version_info < (3, 14):
+
+ class ForecastV1(V1BaseModel):
+ temperature: int
+ f_or_c: Literal["F", "C"]
+ forecast: str
+
+ _FORECAST_MODELS_TYPES = type[ForecastV2] | type[ForecastV1]
+ _FORECAST_MODELS = [ForecastV2, ForecastV1]
+else:
+ _FORECAST_MODELS_TYPES = type[ForecastV2]
+ _FORECAST_MODELS = [ForecastV2]
-@pytest.mark.parametrize("pydantic_object", [ForecastV2, ForecastV1])
+@pytest.mark.parametrize("pydantic_object", _FORECAST_MODELS)
def test_pydantic_parser_chaining(
- pydantic_object: type[ForecastV2] | type[ForecastV1],
+ pydantic_object: _FORECAST_MODELS_TYPES,
) -> None:
prompt = PromptTemplate(
template="""{{
@@ -53,8 +62,8 @@ def test_pydantic_parser_chaining(
assert res.forecast == "Sunny"
-@pytest.mark.parametrize("pydantic_object", [ForecastV2, ForecastV1])
-def test_pydantic_parser_validation(pydantic_object: TBaseModel) -> None:
+@pytest.mark.parametrize("pydantic_object", _FORECAST_MODELS)
+def test_pydantic_parser_validation(pydantic_object: TypeBaseModel) -> None:
bad_prompt = PromptTemplate(
template="""{{
"temperature": "oof",
@@ -66,18 +75,16 @@ def test_pydantic_parser_validation(pydantic_object: TBaseModel) -> None:
model = ParrotFakeChatModel()
- parser: PydanticOutputParser[PydanticBaseModel] = PydanticOutputParser(
- pydantic_object=pydantic_object
- )
+ parser = PydanticOutputParser[PydanticBaseModel](pydantic_object=pydantic_object)
chain = bad_prompt | model | parser
with pytest.raises(OutputParserException):
chain.invoke({})
# JSON output parser tests
-@pytest.mark.parametrize("pydantic_object", [ForecastV2, ForecastV1])
+@pytest.mark.parametrize("pydantic_object", _FORECAST_MODELS)
def test_json_parser_chaining(
- pydantic_object: TBaseModel,
+ pydantic_object: TypeBaseModel,
) -> None:
prompt = PromptTemplate(
template="""{{
@@ -185,6 +192,14 @@ def test_pydantic_output_parser_type_inference() -> None:
}
+@pytest.mark.parametrize("pydantic_object", _FORECAST_MODELS)
+def test_format_instructions(pydantic_object: TypeBaseModel) -> None:
+ """Test format instructions."""
+ parser = PydanticOutputParser[PydanticBaseModel](pydantic_object=pydantic_object)
+ instructions = parser.get_format_instructions()
+ assert "temperature" in instructions
+
+
def test_format_instructions_preserves_language() -> None:
"""Test format instructions does not attempt to encode into ascii."""
description = (
diff --git a/libs/core/tests/unit_tests/prompts/__snapshots__/test_chat.ambr b/libs/core/tests/unit_tests/prompts/__snapshots__/test_chat.ambr
index 282713e6f33..964aa8817b9 100644
--- a/libs/core/tests/unit_tests/prompts/__snapshots__/test_chat.ambr
+++ b/libs/core/tests/unit_tests/prompts/__snapshots__/test_chat.ambr
@@ -3,14 +3,13 @@
dict({
'$defs': dict({
'AIMessage': dict({
- 'additionalProperties': True,
'description': '''
Message from an AI.
- AIMessage is returned from a chat model as a response to a prompt.
+ An `AIMessage` is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both
- the raw output as returned by the model together standardized fields
+ the raw output as returned by the model and standardized fields
(e.g., tool calls, usage metadata) added by the LangChain framework.
''',
'properties': dict({
@@ -110,8 +109,7 @@
'type': 'object',
}),
'AIMessageChunk': dict({
- 'additionalProperties': True,
- 'description': 'Message chunk from an AI.',
+ 'description': 'Message chunk from an AI (yielded when streaming).',
'properties': dict({
'additional_kwargs': dict({
'title': 'Additional Kwargs',
@@ -231,7 +229,6 @@
'type': 'object',
}),
'ChatMessage': dict({
- 'additionalProperties': True,
'description': 'Message that can be assigned an arbitrary speaker (i.e. role).',
'properties': dict({
'additional_kwargs': dict({
@@ -306,7 +303,6 @@
'type': 'object',
}),
'ChatMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Chat Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -381,7 +377,6 @@
'type': 'object',
}),
'FunctionMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -389,7 +384,7 @@
do not contain the `tool_call_id` field.
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -453,7 +448,6 @@
'type': 'object',
}),
'FunctionMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Function Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -516,11 +510,10 @@
'type': 'object',
}),
'HumanMessage': dict({
- 'additionalProperties': True,
'description': '''
- Message from a human.
+ Message from the user.
- `HumanMessage`s are messages that are passed in from a human to the model.
+ A `HumanMessage` is a message that is passed in from a user to the model.
Example:
```python
@@ -604,7 +597,6 @@
'type': 'object',
}),
'HumanMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Human Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -688,9 +680,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
-
May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -806,7 +798,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
+ May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -822,7 +816,6 @@
'type': 'object',
}),
'SystemMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for priming AI behavior.
@@ -910,7 +903,6 @@
'type': 'object',
}),
'SystemMessageChunk': dict({
- 'additionalProperties': True,
'description': 'System Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -981,7 +973,7 @@
}),
'ToolCall': dict({
'description': '''
- Represents a request to call a tool.
+ Represents an AI's request to call a tool.
Example:
```python
@@ -1027,7 +1019,7 @@
}),
'ToolCallChunk': dict({
'description': '''
- A chunk of a tool call (e.g., as part of a stream).
+ A chunk of a tool call (yielded when streaming).
When merging `ToolCallChunk`s (e.g., via `AIMessageChunk.__add__`),
all string attributes are concatenated. Chunks are only merged if their
@@ -1105,7 +1097,6 @@
'type': 'object',
}),
'ToolMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -1114,36 +1105,34 @@
Example: A `ToolMessage` representing a result of `42` from a tool call with id
- ```python
- from langchain_core.messages import ToolMessage
+ ```python
+ from langchain_core.messages import ToolMessage
- ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
- ```
+ ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
+ ```
Example: A `ToolMessage` where only part of the tool output is sent to the model
- and the full output is passed in to artifact.
+ and the full output is passed in to artifact.
- !!! version-added "Added in version 0.2.17"
+ ```python
+ from langchain_core.messages import ToolMessage
- ```python
- from langchain_core.messages import ToolMessage
+ tool_output = {
+ "stdout": "From the graph we can see that the correlation between "
+ "x and y is ...",
+ "stderr": None,
+ "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
+ }
- tool_output = {
- "stdout": "From the graph we can see that the correlation between "
- "x and y is ...",
- "stderr": None,
- "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
- }
-
- ToolMessage(
- content=tool_output["stdout"],
- artifact=tool_output,
- tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
- )
- ```
+ ToolMessage(
+ content=tool_output["stdout"],
+ artifact=tool_output,
+ tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
+ )
+ ```
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -1226,7 +1215,6 @@
'type': 'object',
}),
'ToolMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Tool Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -1331,8 +1319,13 @@
}
```
- !!! warning "Behavior changed in 0.3.9"
+ !!! warning "Behavior changed in `langchain-core` 0.3.9"
Added `input_token_details` and `output_token_details`.
+
+ !!! note "LangSmith SDK"
+ The LangSmith SDK also has a `UsageMetadata` class. While the two share fields,
+ LangSmith's `UsageMetadata` has additional fields to capture cost information
+ used by the LangSmith platform.
''',
'properties': dict({
'input_token_details': dict({
@@ -1424,14 +1417,13 @@
dict({
'$defs': dict({
'AIMessage': dict({
- 'additionalProperties': True,
'description': '''
Message from an AI.
- AIMessage is returned from a chat model as a response to a prompt.
+ An `AIMessage` is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both
- the raw output as returned by the model together standardized fields
+ the raw output as returned by the model and standardized fields
(e.g., tool calls, usage metadata) added by the LangChain framework.
''',
'properties': dict({
@@ -1531,8 +1523,7 @@
'type': 'object',
}),
'AIMessageChunk': dict({
- 'additionalProperties': True,
- 'description': 'Message chunk from an AI.',
+ 'description': 'Message chunk from an AI (yielded when streaming).',
'properties': dict({
'additional_kwargs': dict({
'title': 'Additional Kwargs',
@@ -1652,7 +1643,6 @@
'type': 'object',
}),
'ChatMessage': dict({
- 'additionalProperties': True,
'description': 'Message that can be assigned an arbitrary speaker (i.e. role).',
'properties': dict({
'additional_kwargs': dict({
@@ -1727,7 +1717,6 @@
'type': 'object',
}),
'ChatMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Chat Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -1802,7 +1791,6 @@
'type': 'object',
}),
'FunctionMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -1810,7 +1798,7 @@
do not contain the `tool_call_id` field.
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -1874,7 +1862,6 @@
'type': 'object',
}),
'FunctionMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Function Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -1937,11 +1924,10 @@
'type': 'object',
}),
'HumanMessage': dict({
- 'additionalProperties': True,
'description': '''
- Message from a human.
+ Message from the user.
- `HumanMessage`s are messages that are passed in from a human to the model.
+ A `HumanMessage` is a message that is passed in from a user to the model.
Example:
```python
@@ -2025,7 +2011,6 @@
'type': 'object',
}),
'HumanMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Human Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -2109,9 +2094,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
-
May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -2227,7 +2212,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
+ May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -2243,7 +2230,6 @@
'type': 'object',
}),
'SystemMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for priming AI behavior.
@@ -2331,7 +2317,6 @@
'type': 'object',
}),
'SystemMessageChunk': dict({
- 'additionalProperties': True,
'description': 'System Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -2402,7 +2387,7 @@
}),
'ToolCall': dict({
'description': '''
- Represents a request to call a tool.
+ Represents an AI's request to call a tool.
Example:
```python
@@ -2448,7 +2433,7 @@
}),
'ToolCallChunk': dict({
'description': '''
- A chunk of a tool call (e.g., as part of a stream).
+ A chunk of a tool call (yielded when streaming).
When merging `ToolCallChunk`s (e.g., via `AIMessageChunk.__add__`),
all string attributes are concatenated. Chunks are only merged if their
@@ -2526,7 +2511,6 @@
'type': 'object',
}),
'ToolMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -2535,36 +2519,34 @@
Example: A `ToolMessage` representing a result of `42` from a tool call with id
- ```python
- from langchain_core.messages import ToolMessage
+ ```python
+ from langchain_core.messages import ToolMessage
- ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
- ```
+ ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
+ ```
Example: A `ToolMessage` where only part of the tool output is sent to the model
- and the full output is passed in to artifact.
+ and the full output is passed in to artifact.
- !!! version-added "Added in version 0.2.17"
+ ```python
+ from langchain_core.messages import ToolMessage
- ```python
- from langchain_core.messages import ToolMessage
+ tool_output = {
+ "stdout": "From the graph we can see that the correlation between "
+ "x and y is ...",
+ "stderr": None,
+ "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
+ }
- tool_output = {
- "stdout": "From the graph we can see that the correlation between "
- "x and y is ...",
- "stderr": None,
- "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
- }
-
- ToolMessage(
- content=tool_output["stdout"],
- artifact=tool_output,
- tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
- )
- ```
+ ToolMessage(
+ content=tool_output["stdout"],
+ artifact=tool_output,
+ tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
+ )
+ ```
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -2647,7 +2629,6 @@
'type': 'object',
}),
'ToolMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Tool Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -2752,8 +2733,13 @@
}
```
- !!! warning "Behavior changed in 0.3.9"
+ !!! warning "Behavior changed in `langchain-core` 0.3.9"
Added `input_token_details` and `output_token_details`.
+
+ !!! note "LangSmith SDK"
+ The LangSmith SDK also has a `UsageMetadata` class. While the two share fields,
+ LangSmith's `UsageMetadata` has additional fields to capture cost information
+ used by the LangSmith platform.
''',
'properties': dict({
'input_token_details': dict({
diff --git a/libs/core/tests/unit_tests/prompts/test_string.py b/libs/core/tests/unit_tests/prompts/test_string.py
new file mode 100644
index 00000000000..96c573c72f2
--- /dev/null
+++ b/libs/core/tests/unit_tests/prompts/test_string.py
@@ -0,0 +1,32 @@
+import pytest
+from packaging import version
+
+from langchain_core.prompts.string import mustache_schema
+from langchain_core.utils.pydantic import PYDANTIC_VERSION
+
+PYDANTIC_VERSION_AT_LEAST_29 = version.parse("2.9") <= PYDANTIC_VERSION
+
+
+@pytest.mark.skipif(
+ not PYDANTIC_VERSION_AT_LEAST_29,
+ reason=(
+ "Only test with most recent version of pydantic. "
+ "Pydantic introduced small fixes to generated JSONSchema on minor versions."
+ ),
+)
+def test_mustache_schema_parent_child() -> None:
+ template = "{{x.y}} {{x}}"
+ expected = {
+ "$defs": {
+ "x": {
+ "properties": {"y": {"default": None, "title": "Y", "type": "string"}},
+ "title": "x",
+ "type": "object",
+ }
+ },
+ "properties": {"x": {"$ref": "#/$defs/x", "default": None}},
+ "title": "PromptInput",
+ "type": "object",
+ }
+ actual = mustache_schema(template).model_json_schema()
+ assert expected == actual
diff --git a/libs/core/tests/unit_tests/prompts/test_structured.py b/libs/core/tests/unit_tests/prompts/test_structured.py
index 44c6a215ba9..a3568bd380f 100644
--- a/libs/core/tests/unit_tests/prompts/test_structured.py
+++ b/libs/core/tests/unit_tests/prompts/test_structured.py
@@ -26,7 +26,7 @@ def _fake_runnable(
class FakeStructuredChatModel(FakeListChatModel):
- """Fake ChatModel for testing purposes."""
+ """Fake chat model for testing purposes."""
@override
def with_structured_output(
diff --git a/libs/core/tests/unit_tests/pydantic_utils.py b/libs/core/tests/unit_tests/pydantic_utils.py
index 2c8036a129d..b8235415343 100644
--- a/libs/core/tests/unit_tests/pydantic_utils.py
+++ b/libs/core/tests/unit_tests/pydantic_utils.py
@@ -92,43 +92,36 @@ def _schema(obj: Any) -> dict:
replace_all_of_with_ref(schema_)
remove_all_none_default(schema_)
+ _remove_additionalproperties(schema_)
_remove_enum(schema_)
return schema_
-def _remove_additionalproperties_from_untyped_dicts(schema: dict) -> dict[str, Any]:
+def _remove_additionalproperties(schema: dict) -> dict[str, Any]:
"""Remove `"additionalProperties": True` from dicts in the schema.
Pydantic 2.11 and later versions include `"additionalProperties": True` when
generating JSON schemas for dict properties with `Any` or `object` values.
+
+ Pydantic 2.12 and later versions include `"additionalProperties": True` when
+ generating JSON schemas for `TypedDict`.
"""
+ if isinstance(schema, dict):
+ if (
+ schema.get("type") == "object"
+ and schema.get("additionalProperties") is True
+ ):
+ schema.pop("additionalProperties", None)
- def _remove_dict_additional_props(
- obj: dict[str, Any] | list[Any], *, inside_properties: bool = False
- ) -> None:
- if isinstance(obj, dict):
- if (
- inside_properties
- and obj.get("type") == "object"
- and obj.get("additionalProperties") is True
- ):
- obj.pop("additionalProperties", None)
+ # Recursively scan children
+ for value in schema.values():
+ _remove_additionalproperties(value)
- # Recursively scan children
- for key, value in obj.items():
- # We are "inside_properties" if the *current* key is "properties",
- # or if we were already inside properties in the caller.
- next_inside_properties = inside_properties or (key == "properties")
- _remove_dict_additional_props(
- value, inside_properties=next_inside_properties
- )
+ elif isinstance(schema, list):
+ for item in schema:
+ _remove_additionalproperties(item)
- elif isinstance(obj, list):
- for item in obj:
- _remove_dict_additional_props(item, inside_properties=inside_properties)
-
- _remove_dict_additional_props(schema, inside_properties=False)
return schema
@@ -152,5 +145,5 @@ def _normalize_schema(obj: Any) -> dict[str, Any]:
remove_all_none_default(data)
replace_all_of_with_ref(data)
_remove_enum(data)
- _remove_additionalproperties_from_untyped_dicts(data)
+ _remove_additionalproperties(data)
return data
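A small illustration of what the rewritten `_remove_additionalproperties` helper above does to a generated schema (the schema values below are made up):

```python
# Hypothetical input, for illustration of the helper defined above.
schema = {
    "type": "object",
    "additionalProperties": True,
    "properties": {
        "payload": {"type": "object", "additionalProperties": True},
        "name": {"type": "string"},
    },
}
_remove_additionalproperties(schema)
assert schema == {
    "type": "object",
    "properties": {
        "payload": {"type": "object"},
        "name": {"type": "string"},
    },
}
```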
diff --git a/libs/core/tests/unit_tests/runnables/__snapshots__/test_graph.ambr b/libs/core/tests/unit_tests/runnables/__snapshots__/test_graph.ambr
index b2d8068bfd8..0f788aeef95 100644
--- a/libs/core/tests/unit_tests/runnables/__snapshots__/test_graph.ambr
+++ b/libs/core/tests/unit_tests/runnables/__snapshots__/test_graph.ambr
@@ -427,14 +427,13 @@
'data': dict({
'$defs': dict({
'AIMessage': dict({
- 'additionalProperties': True,
'description': '''
Message from an AI.
- AIMessage is returned from a chat model as a response to a prompt.
+ An `AIMessage` is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both
- the raw output as returned by the model together standardized fields
+ the raw output as returned by the model and standardized fields
(e.g., tool calls, usage metadata) added by the LangChain framework.
''',
'properties': dict({
@@ -534,8 +533,7 @@
'type': 'object',
}),
'AIMessageChunk': dict({
- 'additionalProperties': True,
- 'description': 'Message chunk from an AI.',
+ 'description': 'Message chunk from an AI (yielded when streaming).',
'properties': dict({
'additional_kwargs': dict({
'title': 'Additional Kwargs',
@@ -655,7 +653,6 @@
'type': 'object',
}),
'ChatMessage': dict({
- 'additionalProperties': True,
'description': 'Message that can be assigned an arbitrary speaker (i.e. role).',
'properties': dict({
'additional_kwargs': dict({
@@ -730,7 +727,6 @@
'type': 'object',
}),
'ChatMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Chat Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -805,7 +801,6 @@
'type': 'object',
}),
'FunctionMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -813,7 +808,7 @@
do not contain the `tool_call_id` field.
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -877,7 +872,6 @@
'type': 'object',
}),
'FunctionMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Function Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -940,11 +934,10 @@
'type': 'object',
}),
'HumanMessage': dict({
- 'additionalProperties': True,
'description': '''
- Message from a human.
+ Message from the user.
- `HumanMessage`s are messages that are passed in from a human to the model.
+ A `HumanMessage` is a message that is passed in from a user to the model.
Example:
```python
@@ -1028,7 +1021,6 @@
'type': 'object',
}),
'HumanMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Human Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -1112,9 +1104,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
-
May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -1230,7 +1222,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
+ May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -1246,7 +1240,6 @@
'type': 'object',
}),
'SystemMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for priming AI behavior.
@@ -1334,7 +1327,6 @@
'type': 'object',
}),
'SystemMessageChunk': dict({
- 'additionalProperties': True,
'description': 'System Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -1405,7 +1397,7 @@
}),
'ToolCall': dict({
'description': '''
- Represents a request to call a tool.
+ Represents an AI's request to call a tool.
Example:
```python
@@ -1451,7 +1443,7 @@
}),
'ToolCallChunk': dict({
'description': '''
- A chunk of a tool call (e.g., as part of a stream).
+ A chunk of a tool call (yielded when streaming).
When merging `ToolCallChunk`s (e.g., via `AIMessageChunk.__add__`),
all string attributes are concatenated. Chunks are only merged if their
@@ -1529,7 +1521,6 @@
'type': 'object',
}),
'ToolMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -1538,36 +1529,34 @@
Example: A `ToolMessage` representing a result of `42` from a tool call with id
- ```python
- from langchain_core.messages import ToolMessage
+ ```python
+ from langchain_core.messages import ToolMessage
- ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
- ```
+ ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
+ ```
Example: A `ToolMessage` where only part of the tool output is sent to the model
- and the full output is passed in to artifact.
+ and the full output is passed in to artifact.
- !!! version-added "Added in version 0.2.17"
+ ```python
+ from langchain_core.messages import ToolMessage
- ```python
- from langchain_core.messages import ToolMessage
+ tool_output = {
+ "stdout": "From the graph we can see that the correlation between "
+ "x and y is ...",
+ "stderr": None,
+ "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
+ }
- tool_output = {
- "stdout": "From the graph we can see that the correlation between "
- "x and y is ...",
- "stderr": None,
- "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
- }
-
- ToolMessage(
- content=tool_output["stdout"],
- artifact=tool_output,
- tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
- )
- ```
+ ToolMessage(
+ content=tool_output["stdout"],
+ artifact=tool_output,
+ tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
+ )
+ ```
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -1650,7 +1639,6 @@
'type': 'object',
}),
'ToolMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Tool Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -1755,8 +1743,13 @@
}
```
- !!! warning "Behavior changed in 0.3.9"
+ !!! warning "Behavior changed in `langchain-core` 0.3.9"
Added `input_token_details` and `output_token_details`.
+
+ !!! note "LangSmith SDK"
+ The LangSmith SDK also has a `UsageMetadata` class. While the two share fields,
+ LangSmith's `UsageMetadata` has additional fields to capture cost information
+ used by the LangSmith platform.
''',
'properties': dict({
'input_token_details': dict({
diff --git a/libs/core/tests/unit_tests/runnables/__snapshots__/test_runnable.ambr b/libs/core/tests/unit_tests/runnables/__snapshots__/test_runnable.ambr
index 599ef032e39..ef6f6f8089b 100644
--- a/libs/core/tests/unit_tests/runnables/__snapshots__/test_runnable.ambr
+++ b/libs/core/tests/unit_tests/runnables/__snapshots__/test_runnable.ambr
@@ -1959,14 +1959,13 @@
dict({
'$defs': dict({
'AIMessage': dict({
- 'additionalProperties': True,
'description': '''
Message from an AI.
- AIMessage is returned from a chat model as a response to a prompt.
+ An `AIMessage` is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both
- the raw output as returned by the model together standardized fields
+ the raw output as returned by the model and standardized fields
(e.g., tool calls, usage metadata) added by the LangChain framework.
''',
'properties': dict({
@@ -2065,8 +2064,7 @@
'type': 'object',
}),
'AIMessageChunk': dict({
- 'additionalProperties': True,
- 'description': 'Message chunk from an AI.',
+ 'description': 'Message chunk from an AI (yielded when streaming).',
'properties': dict({
'additional_kwargs': dict({
'title': 'Additional Kwargs',
@@ -2184,7 +2182,6 @@
'type': 'object',
}),
'ChatMessage': dict({
- 'additionalProperties': True,
'description': 'Message that can be assigned an arbitrary speaker (i.e. role).',
'properties': dict({
'additional_kwargs': dict({
@@ -2258,7 +2255,6 @@
'type': 'object',
}),
'ChatMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Chat Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -2332,7 +2328,6 @@
'type': 'object',
}),
'FunctionMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -2340,7 +2335,7 @@
do not contain the `tool_call_id` field.
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -2403,7 +2398,6 @@
'type': 'object',
}),
'FunctionMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Function Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -2465,11 +2459,10 @@
'type': 'object',
}),
'HumanMessage': dict({
- 'additionalProperties': True,
'description': '''
- Message from a human.
+ Message from the user.
- `HumanMessage`s are messages that are passed in from a human to the model.
+ A `HumanMessage` is a message that is passed in from a user to the model.
Example:
```python
@@ -2552,7 +2545,6 @@
'type': 'object',
}),
'HumanMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Human Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -2635,9 +2627,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
-
May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -2752,7 +2744,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
+ May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -2768,7 +2762,6 @@
'type': 'object',
}),
'SystemMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for priming AI behavior.
@@ -2855,7 +2848,6 @@
'type': 'object',
}),
'SystemMessageChunk': dict({
- 'additionalProperties': True,
'description': 'System Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -2925,7 +2917,7 @@
}),
'ToolCall': dict({
'description': '''
- Represents a request to call a tool.
+ Represents an AI's request to call a tool.
Example:
```python
@@ -2970,7 +2962,7 @@
}),
'ToolCallChunk': dict({
'description': '''
- A chunk of a tool call (e.g., as part of a stream).
+ A chunk of a tool call (yielded when streaming).
When merging `ToolCallChunk`s (e.g., via `AIMessageChunk.__add__`),
all string attributes are concatenated. Chunks are only merged if their
@@ -3047,7 +3039,6 @@
'type': 'object',
}),
'ToolMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -3056,36 +3047,34 @@
Example: A `ToolMessage` representing a result of `42` from a tool call with id
- ```python
- from langchain_core.messages import ToolMessage
+ ```python
+ from langchain_core.messages import ToolMessage
- ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
- ```
+ ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
+ ```
Example: A `ToolMessage` where only part of the tool output is sent to the model
- and the full output is passed in to artifact.
+ and the full output is passed in to artifact.
- !!! version-added "Added in version 0.2.17"
+ ```python
+ from langchain_core.messages import ToolMessage
- ```python
- from langchain_core.messages import ToolMessage
+ tool_output = {
+ "stdout": "From the graph we can see that the correlation between "
+ "x and y is ...",
+ "stderr": None,
+ "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
+ }
- tool_output = {
- "stdout": "From the graph we can see that the correlation between "
- "x and y is ...",
- "stderr": None,
- "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
- }
-
- ToolMessage(
- content=tool_output["stdout"],
- artifact=tool_output,
- tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
- )
- ```
+ ToolMessage(
+ content=tool_output["stdout"],
+ artifact=tool_output,
+ tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
+ )
+ ```
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -3167,7 +3156,6 @@
'type': 'object',
}),
'ToolMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Tool Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -3271,8 +3259,13 @@
}
```
- !!! warning "Behavior changed in 0.3.9"
+ !!! warning "Behavior changed in `langchain-core` 0.3.9"
Added `input_token_details` and `output_token_details`.
+
+ !!! note "LangSmith SDK"
+ The LangSmith SDK also has a `UsageMetadata` class. While the two share fields,
+ LangSmith's `UsageMetadata` has additional fields to capture cost information
+ used by the LangSmith platform.
''',
'properties': dict({
'input_token_details': dict({
@@ -3360,14 +3353,13 @@
dict({
'$defs': dict({
'AIMessage': dict({
- 'additionalProperties': True,
'description': '''
Message from an AI.
- AIMessage is returned from a chat model as a response to a prompt.
+ An `AIMessage` is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both
- the raw output as returned by the model together standardized fields
+ the raw output as returned by the model and standardized fields
(e.g., tool calls, usage metadata) added by the LangChain framework.
''',
'properties': dict({
@@ -3466,8 +3458,7 @@
'type': 'object',
}),
'AIMessageChunk': dict({
- 'additionalProperties': True,
- 'description': 'Message chunk from an AI.',
+ 'description': 'Message chunk from an AI (yielded when streaming).',
'properties': dict({
'additional_kwargs': dict({
'title': 'Additional Kwargs',
@@ -3585,7 +3576,6 @@
'type': 'object',
}),
'ChatMessage': dict({
- 'additionalProperties': True,
'description': 'Message that can be assigned an arbitrary speaker (i.e. role).',
'properties': dict({
'additional_kwargs': dict({
@@ -3659,7 +3649,6 @@
'type': 'object',
}),
'ChatMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Chat Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -3796,7 +3785,6 @@
'type': 'object',
}),
'FunctionMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -3804,7 +3792,7 @@
do not contain the `tool_call_id` field.
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -3867,7 +3855,6 @@
'type': 'object',
}),
'FunctionMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Function Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -3929,11 +3916,10 @@
'type': 'object',
}),
'HumanMessage': dict({
- 'additionalProperties': True,
'description': '''
- Message from a human.
+ Message from the user.
- `HumanMessage`s are messages that are passed in from a human to the model.
+ A `HumanMessage` is a message that is passed in from a user to the model.
Example:
```python
@@ -4016,7 +4002,6 @@
'type': 'object',
}),
'HumanMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Human Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -4099,9 +4084,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
-
May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -4216,7 +4201,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
+ May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -4251,7 +4238,6 @@
'type': 'object',
}),
'SystemMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for priming AI behavior.
@@ -4338,7 +4324,6 @@
'type': 'object',
}),
'SystemMessageChunk': dict({
- 'additionalProperties': True,
'description': 'System Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -4408,7 +4393,7 @@
}),
'ToolCall': dict({
'description': '''
- Represents a request to call a tool.
+ Represents an AI's request to call a tool.
Example:
```python
@@ -4453,7 +4438,7 @@
}),
'ToolCallChunk': dict({
'description': '''
- A chunk of a tool call (e.g., as part of a stream).
+ A chunk of a tool call (yielded when streaming).
When merging `ToolCallChunk`s (e.g., via `AIMessageChunk.__add__`),
all string attributes are concatenated. Chunks are only merged if their
@@ -4530,7 +4515,6 @@
'type': 'object',
}),
'ToolMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -4539,36 +4523,34 @@
Example: A `ToolMessage` representing a result of `42` from a tool call with id
- ```python
- from langchain_core.messages import ToolMessage
+ ```python
+ from langchain_core.messages import ToolMessage
- ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
- ```
+ ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
+ ```
Example: A `ToolMessage` where only part of the tool output is sent to the model
- and the full output is passed in to artifact.
+ and the full output is passed in to artifact.
- !!! version-added "Added in version 0.2.17"
+ ```python
+ from langchain_core.messages import ToolMessage
- ```python
- from langchain_core.messages import ToolMessage
+ tool_output = {
+ "stdout": "From the graph we can see that the correlation between "
+ "x and y is ...",
+ "stderr": None,
+ "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
+ }
- tool_output = {
- "stdout": "From the graph we can see that the correlation between "
- "x and y is ...",
- "stderr": None,
- "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
- }
-
- ToolMessage(
- content=tool_output["stdout"],
- artifact=tool_output,
- tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
- )
- ```
+ ToolMessage(
+ content=tool_output["stdout"],
+ artifact=tool_output,
+ tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
+ )
+ ```
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -4650,7 +4632,6 @@
'type': 'object',
}),
'ToolMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Tool Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -4754,8 +4735,13 @@
}
```
- !!! warning "Behavior changed in 0.3.9"
+ !!! warning "Behavior changed in `langchain-core` 0.3.9"
Added `input_token_details` and `output_token_details`.
+
+ !!! note "LangSmith SDK"
+ The LangSmith SDK also has a `UsageMetadata` class. While the two share fields,
+ LangSmith's `UsageMetadata` has additional fields to capture cost information
+ used by the LangSmith platform.
''',
'properties': dict({
'input_token_details': dict({
@@ -4855,14 +4841,13 @@
]),
'definitions': dict({
'AIMessage': dict({
- 'additionalProperties': True,
'description': '''
Message from an AI.
- AIMessage is returned from a chat model as a response to a prompt.
+ An `AIMessage` is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both
- the raw output as returned by the model together standardized fields
+ the raw output as returned by the model and standardized fields
(e.g., tool calls, usage metadata) added by the LangChain framework.
''',
'properties': dict({
@@ -4961,8 +4946,7 @@
'type': 'object',
}),
'AIMessageChunk': dict({
- 'additionalProperties': True,
- 'description': 'Message chunk from an AI.',
+ 'description': 'Message chunk from an AI (yielded when streaming).',
'properties': dict({
'additional_kwargs': dict({
'title': 'Additional Kwargs',
@@ -5080,7 +5064,6 @@
'type': 'object',
}),
'ChatMessage': dict({
- 'additionalProperties': True,
'description': 'Message that can be assigned an arbitrary speaker (i.e. role).',
'properties': dict({
'additional_kwargs': dict({
@@ -5154,7 +5137,6 @@
'type': 'object',
}),
'ChatMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Chat Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -5291,7 +5273,6 @@
'type': 'object',
}),
'FunctionMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -5299,7 +5280,7 @@
do not contain the `tool_call_id` field.
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -5362,7 +5343,6 @@
'type': 'object',
}),
'FunctionMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Function Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -5424,11 +5404,10 @@
'type': 'object',
}),
'HumanMessage': dict({
- 'additionalProperties': True,
'description': '''
- Message from a human.
+ Message from the user.
- `HumanMessage`s are messages that are passed in from a human to the model.
+ A `HumanMessage` is a message that is passed in from a user to the model.
Example:
```python
@@ -5511,7 +5490,6 @@
'type': 'object',
}),
'HumanMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Human Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -5594,9 +5572,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
-
May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -5711,7 +5689,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
+ May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -5746,7 +5726,6 @@
'type': 'object',
}),
'SystemMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for priming AI behavior.
@@ -5833,7 +5812,6 @@
'type': 'object',
}),
'SystemMessageChunk': dict({
- 'additionalProperties': True,
'description': 'System Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -5903,7 +5881,7 @@
}),
'ToolCall': dict({
'description': '''
- Represents a request to call a tool.
+ Represents an AI's request to call a tool.
Example:
```python
@@ -5948,7 +5926,7 @@
}),
'ToolCallChunk': dict({
'description': '''
- A chunk of a tool call (e.g., as part of a stream).
+ A chunk of a tool call (yielded when streaming).
When merging `ToolCallChunk`s (e.g., via `AIMessageChunk.__add__`),
all string attributes are concatenated. Chunks are only merged if their
@@ -6025,7 +6003,6 @@
'type': 'object',
}),
'ToolMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -6034,36 +6011,34 @@
Example: A `ToolMessage` representing a result of `42` from a tool call with id
- ```python
- from langchain_core.messages import ToolMessage
+ ```python
+ from langchain_core.messages import ToolMessage
- ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
- ```
+ ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
+ ```
Example: A `ToolMessage` where only part of the tool output is sent to the model
- and the full output is passed in to artifact.
+ and the full output is passed in to artifact.
- !!! version-added "Added in version 0.2.17"
+ ```python
+ from langchain_core.messages import ToolMessage
- ```python
- from langchain_core.messages import ToolMessage
+ tool_output = {
+ "stdout": "From the graph we can see that the correlation between "
+ "x and y is ...",
+ "stderr": None,
+ "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
+ }
- tool_output = {
- "stdout": "From the graph we can see that the correlation between "
- "x and y is ...",
- "stderr": None,
- "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
- }
-
- ToolMessage(
- content=tool_output["stdout"],
- artifact=tool_output,
- tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
- )
- ```
+ ToolMessage(
+ content=tool_output["stdout"],
+ artifact=tool_output,
+ tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
+ )
+ ```
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -6145,7 +6120,6 @@
'type': 'object',
}),
'ToolMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Tool Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -6249,8 +6223,13 @@
}
```
- !!! warning "Behavior changed in 0.3.9"
+ !!! warning "Behavior changed in `langchain-core` 0.3.9"
Added `input_token_details` and `output_token_details`.
+
+ !!! note "LangSmith SDK"
+ The LangSmith SDK also has a `UsageMetadata` class. While the two share fields,
+ LangSmith's `UsageMetadata` has additional fields to capture cost information
+ used by the LangSmith platform.
''',
'properties': dict({
'input_token_details': dict({
@@ -6288,14 +6267,13 @@
dict({
'definitions': dict({
'AIMessage': dict({
- 'additionalProperties': True,
'description': '''
Message from an AI.
- AIMessage is returned from a chat model as a response to a prompt.
+ An `AIMessage` is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both
- the raw output as returned by the model together standardized fields
+ the raw output as returned by the model and standardized fields
(e.g., tool calls, usage metadata) added by the LangChain framework.
''',
'properties': dict({
@@ -6394,8 +6372,7 @@
'type': 'object',
}),
'AIMessageChunk': dict({
- 'additionalProperties': True,
- 'description': 'Message chunk from an AI.',
+ 'description': 'Message chunk from an AI (yielded when streaming).',
'properties': dict({
'additional_kwargs': dict({
'title': 'Additional Kwargs',
@@ -6513,7 +6490,6 @@
'type': 'object',
}),
'ChatMessage': dict({
- 'additionalProperties': True,
'description': 'Message that can be assigned an arbitrary speaker (i.e. role).',
'properties': dict({
'additional_kwargs': dict({
@@ -6587,7 +6563,6 @@
'type': 'object',
}),
'ChatMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Chat Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -6661,7 +6636,6 @@
'type': 'object',
}),
'FunctionMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -6669,7 +6643,7 @@
do not contain the `tool_call_id` field.
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -6732,7 +6706,6 @@
'type': 'object',
}),
'FunctionMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Function Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -6794,11 +6767,10 @@
'type': 'object',
}),
'HumanMessage': dict({
- 'additionalProperties': True,
'description': '''
- Message from a human.
+ Message from the user.
- `HumanMessage`s are messages that are passed in from a human to the model.
+ A `HumanMessage` is a message that is passed in from a user to the model.
Example:
```python
@@ -6881,7 +6853,6 @@
'type': 'object',
}),
'HumanMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Human Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -6964,9 +6935,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
-
May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -7081,7 +7052,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
+ May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -7097,7 +7070,6 @@
'type': 'object',
}),
'SystemMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for priming AI behavior.
@@ -7184,7 +7156,6 @@
'type': 'object',
}),
'SystemMessageChunk': dict({
- 'additionalProperties': True,
'description': 'System Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -7254,7 +7225,7 @@
}),
'ToolCall': dict({
'description': '''
- Represents a request to call a tool.
+ Represents an AI's request to call a tool.
Example:
```python
@@ -7299,7 +7270,7 @@
}),
'ToolCallChunk': dict({
'description': '''
- A chunk of a tool call (e.g., as part of a stream).
+ A chunk of a tool call (yielded when streaming).
When merging `ToolCallChunk`s (e.g., via `AIMessageChunk.__add__`),
all string attributes are concatenated. Chunks are only merged if their
@@ -7376,7 +7347,6 @@
'type': 'object',
}),
'ToolMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -7385,36 +7355,34 @@
Example: A `ToolMessage` representing a result of `42` from a tool call with id
- ```python
- from langchain_core.messages import ToolMessage
+ ```python
+ from langchain_core.messages import ToolMessage
- ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
- ```
+ ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
+ ```
Example: A `ToolMessage` where only part of the tool output is sent to the model
- and the full output is passed in to artifact.
+ and the full output is passed in to artifact.
- !!! version-added "Added in version 0.2.17"
+ ```python
+ from langchain_core.messages import ToolMessage
- ```python
- from langchain_core.messages import ToolMessage
+ tool_output = {
+ "stdout": "From the graph we can see that the correlation between "
+ "x and y is ...",
+ "stderr": None,
+ "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
+ }
- tool_output = {
- "stdout": "From the graph we can see that the correlation between "
- "x and y is ...",
- "stderr": None,
- "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
- }
-
- ToolMessage(
- content=tool_output["stdout"],
- artifact=tool_output,
- tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
- )
- ```
+ ToolMessage(
+ content=tool_output["stdout"],
+ artifact=tool_output,
+ tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
+ )
+ ```
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -7496,7 +7464,6 @@
'type': 'object',
}),
'ToolMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Tool Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -7600,8 +7567,13 @@
}
```
- !!! warning "Behavior changed in 0.3.9"
+ !!! warning "Behavior changed in `langchain-core` 0.3.9"
Added `input_token_details` and `output_token_details`.
+
+ !!! note "LangSmith SDK"
+ The LangSmith SDK also has a `UsageMetadata` class. While the two share fields,
+ LangSmith's `UsageMetadata` has additional fields to capture cost information
+ used by the LangSmith platform.
''',
'properties': dict({
'input_token_details': dict({
@@ -7731,14 +7703,13 @@
]),
'definitions': dict({
'AIMessage': dict({
- 'additionalProperties': True,
'description': '''
Message from an AI.
- AIMessage is returned from a chat model as a response to a prompt.
+ An `AIMessage` is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both
- the raw output as returned by the model together standardized fields
+ the raw output as returned by the model and standardized fields
(e.g., tool calls, usage metadata) added by the LangChain framework.
''',
'properties': dict({
@@ -7837,8 +7808,7 @@
'type': 'object',
}),
'AIMessageChunk': dict({
- 'additionalProperties': True,
- 'description': 'Message chunk from an AI.',
+ 'description': 'Message chunk from an AI (yielded when streaming).',
'properties': dict({
'additional_kwargs': dict({
'title': 'Additional Kwargs',
@@ -7956,7 +7926,6 @@
'type': 'object',
}),
'ChatMessage': dict({
- 'additionalProperties': True,
'description': 'Message that can be assigned an arbitrary speaker (i.e. role).',
'properties': dict({
'additional_kwargs': dict({
@@ -8030,7 +7999,6 @@
'type': 'object',
}),
'ChatMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Chat Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -8167,7 +8135,6 @@
'type': 'object',
}),
'FunctionMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -8175,7 +8142,7 @@
do not contain the `tool_call_id` field.
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -8238,7 +8205,6 @@
'type': 'object',
}),
'FunctionMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Function Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -8300,11 +8266,10 @@
'type': 'object',
}),
'HumanMessage': dict({
- 'additionalProperties': True,
'description': '''
- Message from a human.
+ Message from the user.
- `HumanMessage`s are messages that are passed in from a human to the model.
+ A `HumanMessage` is a message that is passed in from a user to the model.
Example:
```python
@@ -8387,7 +8352,6 @@
'type': 'object',
}),
'HumanMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Human Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -8470,9 +8434,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
-
May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -8587,7 +8551,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
+ May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -8622,7 +8588,6 @@
'type': 'object',
}),
'SystemMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for priming AI behavior.
@@ -8709,7 +8674,6 @@
'type': 'object',
}),
'SystemMessageChunk': dict({
- 'additionalProperties': True,
'description': 'System Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -8779,7 +8743,7 @@
}),
'ToolCall': dict({
'description': '''
- Represents a request to call a tool.
+ Represents an AI's request to call a tool.
Example:
```python
@@ -8824,7 +8788,7 @@
}),
'ToolCallChunk': dict({
'description': '''
- A chunk of a tool call (e.g., as part of a stream).
+ A chunk of a tool call (yielded when streaming).
When merging `ToolCallChunk`s (e.g., via `AIMessageChunk.__add__`),
all string attributes are concatenated. Chunks are only merged if their
@@ -8901,7 +8865,6 @@
'type': 'object',
}),
'ToolMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -8910,36 +8873,34 @@
Example: A `ToolMessage` representing a result of `42` from a tool call with id
- ```python
- from langchain_core.messages import ToolMessage
+ ```python
+ from langchain_core.messages import ToolMessage
- ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
- ```
+ ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
+ ```
Example: A `ToolMessage` where only part of the tool output is sent to the model
- and the full output is passed in to artifact.
+ and the full output is passed in to artifact.
- !!! version-added "Added in version 0.2.17"
+ ```python
+ from langchain_core.messages import ToolMessage
- ```python
- from langchain_core.messages import ToolMessage
+ tool_output = {
+ "stdout": "From the graph we can see that the correlation between "
+ "x and y is ...",
+ "stderr": None,
+ "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
+ }
- tool_output = {
- "stdout": "From the graph we can see that the correlation between "
- "x and y is ...",
- "stderr": None,
- "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
- }
-
- ToolMessage(
- content=tool_output["stdout"],
- artifact=tool_output,
- tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
- )
- ```
+ ToolMessage(
+ content=tool_output["stdout"],
+ artifact=tool_output,
+ tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
+ )
+ ```
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -9021,7 +8982,6 @@
'type': 'object',
}),
'ToolMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Tool Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -9125,8 +9085,13 @@
}
```
- !!! warning "Behavior changed in 0.3.9"
+ !!! warning "Behavior changed in `langchain-core` 0.3.9"
Added `input_token_details` and `output_token_details`.
+
+ !!! note "LangSmith SDK"
+ The LangSmith SDK also has a `UsageMetadata` class. While the two share fields,
+ LangSmith's `UsageMetadata` has additional fields to capture cost information
+ used by the LangSmith platform.
''',
'properties': dict({
'input_token_details': dict({
@@ -9209,14 +9174,13 @@
]),
'definitions': dict({
'AIMessage': dict({
- 'additionalProperties': True,
'description': '''
Message from an AI.
- AIMessage is returned from a chat model as a response to a prompt.
+ An `AIMessage` is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both
- the raw output as returned by the model together standardized fields
+ the raw output as returned by the model and standardized fields
(e.g., tool calls, usage metadata) added by the LangChain framework.
''',
'properties': dict({
@@ -9315,8 +9279,7 @@
'type': 'object',
}),
'AIMessageChunk': dict({
- 'additionalProperties': True,
- 'description': 'Message chunk from an AI.',
+ 'description': 'Message chunk from an AI (yielded when streaming).',
'properties': dict({
'additional_kwargs': dict({
'title': 'Additional Kwargs',
@@ -9434,7 +9397,6 @@
'type': 'object',
}),
'ChatMessage': dict({
- 'additionalProperties': True,
'description': 'Message that can be assigned an arbitrary speaker (i.e. role).',
'properties': dict({
'additional_kwargs': dict({
@@ -9508,7 +9470,6 @@
'type': 'object',
}),
'ChatMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Chat Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -9582,7 +9543,6 @@
'type': 'object',
}),
'FunctionMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -9590,7 +9550,7 @@
do not contain the `tool_call_id` field.
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -9653,7 +9613,6 @@
'type': 'object',
}),
'FunctionMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Function Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -9715,11 +9674,10 @@
'type': 'object',
}),
'HumanMessage': dict({
- 'additionalProperties': True,
'description': '''
- Message from a human.
+ Message from the user.
- `HumanMessage`s are messages that are passed in from a human to the model.
+ A `HumanMessage` is a message that is passed in from a user to the model.
Example:
```python
@@ -9802,7 +9760,6 @@
'type': 'object',
}),
'HumanMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Human Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -9885,9 +9842,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
-
May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -10002,7 +9959,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
+ May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -10018,7 +9977,6 @@
'type': 'object',
}),
'SystemMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for priming AI behavior.
@@ -10105,7 +10063,6 @@
'type': 'object',
}),
'SystemMessageChunk': dict({
- 'additionalProperties': True,
'description': 'System Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -10175,7 +10132,7 @@
}),
'ToolCall': dict({
'description': '''
- Represents a request to call a tool.
+ Represents an AI's request to call a tool.
Example:
```python
@@ -10220,7 +10177,7 @@
}),
'ToolCallChunk': dict({
'description': '''
- A chunk of a tool call (e.g., as part of a stream).
+ A chunk of a tool call (yielded when streaming).
When merging `ToolCallChunk`s (e.g., via `AIMessageChunk.__add__`),
all string attributes are concatenated. Chunks are only merged if their
@@ -10297,7 +10254,6 @@
'type': 'object',
}),
'ToolMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -10306,36 +10262,34 @@
Example: A `ToolMessage` representing a result of `42` from a tool call with id
- ```python
- from langchain_core.messages import ToolMessage
+ ```python
+ from langchain_core.messages import ToolMessage
- ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
- ```
+ ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
+ ```
Example: A `ToolMessage` where only part of the tool output is sent to the model
- and the full output is passed in to artifact.
+ and the full output is passed in to artifact.
- !!! version-added "Added in version 0.2.17"
+ ```python
+ from langchain_core.messages import ToolMessage
- ```python
- from langchain_core.messages import ToolMessage
+ tool_output = {
+ "stdout": "From the graph we can see that the correlation between "
+ "x and y is ...",
+ "stderr": None,
+ "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
+ }
- tool_output = {
- "stdout": "From the graph we can see that the correlation between "
- "x and y is ...",
- "stderr": None,
- "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
- }
-
- ToolMessage(
- content=tool_output["stdout"],
- artifact=tool_output,
- tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
- )
- ```
+ ToolMessage(
+ content=tool_output["stdout"],
+ artifact=tool_output,
+ tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
+ )
+ ```
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -10417,7 +10371,6 @@
'type': 'object',
}),
'ToolMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Tool Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -10521,8 +10474,13 @@
}
```
- !!! warning "Behavior changed in 0.3.9"
+ !!! warning "Behavior changed in `langchain-core` 0.3.9"
Added `input_token_details` and `output_token_details`.
+
+ !!! note "LangSmith SDK"
+ The LangSmith SDK also has a `UsageMetadata` class. While the two share fields,
+ LangSmith's `UsageMetadata` has additional fields to capture cost information
+ used by the LangSmith platform.
''',
'properties': dict({
'input_token_details': dict({
@@ -10560,14 +10518,13 @@
dict({
'definitions': dict({
'AIMessage': dict({
- 'additionalProperties': True,
'description': '''
Message from an AI.
- AIMessage is returned from a chat model as a response to a prompt.
+ An `AIMessage` is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both
- the raw output as returned by the model together standardized fields
+ the raw output as returned by the model and standardized fields
(e.g., tool calls, usage metadata) added by the LangChain framework.
''',
'properties': dict({
@@ -10666,8 +10623,7 @@
'type': 'object',
}),
'AIMessageChunk': dict({
- 'additionalProperties': True,
- 'description': 'Message chunk from an AI.',
+ 'description': 'Message chunk from an AI (yielded when streaming).',
'properties': dict({
'additional_kwargs': dict({
'title': 'Additional Kwargs',
@@ -10785,7 +10741,6 @@
'type': 'object',
}),
'ChatMessage': dict({
- 'additionalProperties': True,
'description': 'Message that can be assigned an arbitrary speaker (i.e. role).',
'properties': dict({
'additional_kwargs': dict({
@@ -10859,7 +10814,6 @@
'type': 'object',
}),
'ChatMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Chat Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -10996,7 +10950,6 @@
'type': 'object',
}),
'FunctionMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -11004,7 +10957,7 @@
do not contain the `tool_call_id` field.
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -11067,7 +11020,6 @@
'type': 'object',
}),
'FunctionMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Function Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -11129,11 +11081,10 @@
'type': 'object',
}),
'HumanMessage': dict({
- 'additionalProperties': True,
'description': '''
- Message from a human.
+ Message from the user.
- `HumanMessage`s are messages that are passed in from a human to the model.
+ A `HumanMessage` is a message that is passed in from a user to the model.
Example:
```python
@@ -11216,7 +11167,6 @@
'type': 'object',
}),
'HumanMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Human Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -11299,9 +11249,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
-
May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -11416,7 +11366,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
+ May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -11462,7 +11414,6 @@
'type': 'object',
}),
'SystemMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for priming AI behavior.
@@ -11549,7 +11500,6 @@
'type': 'object',
}),
'SystemMessageChunk': dict({
- 'additionalProperties': True,
'description': 'System Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -11619,7 +11569,7 @@
}),
'ToolCall': dict({
'description': '''
- Represents a request to call a tool.
+ Represents an AI's request to call a tool.
Example:
```python
@@ -11664,7 +11614,7 @@
}),
'ToolCallChunk': dict({
'description': '''
- A chunk of a tool call (e.g., as part of a stream).
+ A chunk of a tool call (yielded when streaming).
When merging `ToolCallChunk`s (e.g., via `AIMessageChunk.__add__`),
all string attributes are concatenated. Chunks are only merged if their
@@ -11741,7 +11691,6 @@
'type': 'object',
}),
'ToolMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -11750,36 +11699,34 @@
Example: A `ToolMessage` representing a result of `42` from a tool call with id
- ```python
- from langchain_core.messages import ToolMessage
+ ```python
+ from langchain_core.messages import ToolMessage
- ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
- ```
+ ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
+ ```
Example: A `ToolMessage` where only part of the tool output is sent to the model
- and the full output is passed in to artifact.
+ and the full output is passed in to artifact.
- !!! version-added "Added in version 0.2.17"
+ ```python
+ from langchain_core.messages import ToolMessage
- ```python
- from langchain_core.messages import ToolMessage
+ tool_output = {
+ "stdout": "From the graph we can see that the correlation between "
+ "x and y is ...",
+ "stderr": None,
+ "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
+ }
- tool_output = {
- "stdout": "From the graph we can see that the correlation between "
- "x and y is ...",
- "stderr": None,
- "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
- }
-
- ToolMessage(
- content=tool_output["stdout"],
- artifact=tool_output,
- tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
- )
- ```
+ ToolMessage(
+ content=tool_output["stdout"],
+ artifact=tool_output,
+ tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
+ )
+ ```
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -11861,7 +11808,6 @@
'type': 'object',
}),
'ToolMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Tool Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -11965,8 +11911,13 @@
}
```
- !!! warning "Behavior changed in 0.3.9"
+ !!! warning "Behavior changed in `langchain-core` 0.3.9"
Added `input_token_details` and `output_token_details`.
+
+ !!! note "LangSmith SDK"
+ The LangSmith SDK also has a `UsageMetadata` class. While the two share fields,
+ LangSmith's `UsageMetadata` has additional fields to capture cost information
+ used by the LangSmith platform.
''',
'properties': dict({
'input_token_details': dict({
@@ -12016,14 +11967,13 @@
]),
'definitions': dict({
'AIMessage': dict({
- 'additionalProperties': True,
'description': '''
Message from an AI.
- AIMessage is returned from a chat model as a response to a prompt.
+ An `AIMessage` is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both
- the raw output as returned by the model together standardized fields
+ the raw output as returned by the model and standardized fields
(e.g., tool calls, usage metadata) added by the LangChain framework.
''',
'properties': dict({
@@ -12122,8 +12072,7 @@
'type': 'object',
}),
'AIMessageChunk': dict({
- 'additionalProperties': True,
- 'description': 'Message chunk from an AI.',
+ 'description': 'Message chunk from an AI (yielded when streaming).',
'properties': dict({
'additional_kwargs': dict({
'title': 'Additional Kwargs',
@@ -12241,7 +12190,6 @@
'type': 'object',
}),
'ChatMessage': dict({
- 'additionalProperties': True,
'description': 'Message that can be assigned an arbitrary speaker (i.e. role).',
'properties': dict({
'additional_kwargs': dict({
@@ -12315,7 +12263,6 @@
'type': 'object',
}),
'ChatMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Chat Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -12452,7 +12399,6 @@
'type': 'object',
}),
'FunctionMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -12460,7 +12406,7 @@
do not contain the `tool_call_id` field.
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -12523,7 +12469,6 @@
'type': 'object',
}),
'FunctionMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Function Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -12585,11 +12530,10 @@
'type': 'object',
}),
'HumanMessage': dict({
- 'additionalProperties': True,
'description': '''
- Message from a human.
+ Message from the user.
- `HumanMessage`s are messages that are passed in from a human to the model.
+ A `HumanMessage` is a message that is passed in from a user to the model.
Example:
```python
@@ -12672,7 +12616,6 @@
'type': 'object',
}),
'HumanMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Human Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -12755,9 +12698,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
-
May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -12872,7 +12815,9 @@
}
```
- !!! version-added "Added in version 0.3.9"
+ May also hold extra provider-specific keys.
+
+ !!! version-added "Added in `langchain-core` 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -12907,7 +12852,6 @@
'type': 'object',
}),
'SystemMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for priming AI behavior.
@@ -12994,7 +12938,6 @@
'type': 'object',
}),
'SystemMessageChunk': dict({
- 'additionalProperties': True,
'description': 'System Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -13064,7 +13007,7 @@
}),
'ToolCall': dict({
'description': '''
- Represents a request to call a tool.
+ Represents an AI's request to call a tool.
Example:
```python
@@ -13109,7 +13052,7 @@
}),
'ToolCallChunk': dict({
'description': '''
- A chunk of a tool call (e.g., as part of a stream).
+ A chunk of a tool call (yielded when streaming).
When merging `ToolCallChunk`s (e.g., via `AIMessageChunk.__add__`),
all string attributes are concatenated. Chunks are only merged if their
@@ -13186,7 +13129,6 @@
'type': 'object',
}),
'ToolMessage': dict({
- 'additionalProperties': True,
'description': '''
Message for passing the result of executing a tool back to a model.
@@ -13195,36 +13137,34 @@
Example: A `ToolMessage` representing a result of `42` from a tool call with id
- ```python
- from langchain_core.messages import ToolMessage
+ ```python
+ from langchain_core.messages import ToolMessage
- ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
- ```
+ ToolMessage(content="42", tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL")
+ ```
Example: A `ToolMessage` where only part of the tool output is sent to the model
- and the full output is passed in to artifact.
+ and the full output is passed in to artifact.
- !!! version-added "Added in version 0.2.17"
+ ```python
+ from langchain_core.messages import ToolMessage
- ```python
- from langchain_core.messages import ToolMessage
+ tool_output = {
+ "stdout": "From the graph we can see that the correlation between "
+ "x and y is ...",
+ "stderr": None,
+ "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
+ }
- tool_output = {
- "stdout": "From the graph we can see that the correlation between "
- "x and y is ...",
- "stderr": None,
- "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
- }
-
- ToolMessage(
- content=tool_output["stdout"],
- artifact=tool_output,
- tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
- )
- ```
+ ToolMessage(
+ content=tool_output["stdout"],
+ artifact=tool_output,
+ tool_call_id="call_Jja7J89XsjrOLA5r!MEOW!SL",
+ )
+ ```
The `tool_call_id` field is used to associate the tool call request with the
- tool call response. This is useful in situations where a chat model is able
+ tool call response. Useful in situations where a chat model is able
to request multiple tool calls in parallel.
''',
'properties': dict({
@@ -13306,7 +13246,6 @@
'type': 'object',
}),
'ToolMessageChunk': dict({
- 'additionalProperties': True,
'description': 'Tool Message chunk.',
'properties': dict({
'additional_kwargs': dict({
@@ -13410,8 +13349,13 @@
}
```
- !!! warning "Behavior changed in 0.3.9"
+ !!! warning "Behavior changed in `langchain-core` 0.3.9"
Added `input_token_details` and `output_token_details`.
+
+ !!! note "LangSmith SDK"
+ The LangSmith SDK also has a `UsageMetadata` class. While the two share fields,
+ LangSmith's `UsageMetadata` has additional fields to capture cost information
+ used by the LangSmith platform.
''',
'properties': dict({
'input_token_details': dict({
diff --git a/libs/core/tests/unit_tests/runnables/test_fallbacks.py b/libs/core/tests/unit_tests/runnables/test_fallbacks.py
index 0a6b82540e2..1d10887c725 100644
--- a/libs/core/tests/unit_tests/runnables/test_fallbacks.py
+++ b/libs/core/tests/unit_tests/runnables/test_fallbacks.py
@@ -266,7 +266,7 @@ def _error(msg: str) -> None:
def _generate_immediate_error(_: Iterator) -> Iterator[str]:
- _error("immmediate error")
+ _error("immediate error")
yield ""
diff --git a/libs/core/tests/unit_tests/runnables/test_history.py b/libs/core/tests/unit_tests/runnables/test_history.py
index 9ede918fa2d..5a1c3f827aa 100644
--- a/libs/core/tests/unit_tests/runnables/test_history.py
+++ b/libs/core/tests/unit_tests/runnables/test_history.py
@@ -3,7 +3,6 @@ from collections.abc import Callable, Sequence
from typing import Any
import pytest
-from packaging import version
from pydantic import BaseModel, RootModel
from typing_extensions import override
@@ -25,7 +24,6 @@ from langchain_core.tracers.root_listeners import (
AsyncRootListenersTracer,
RootListenersTracer,
)
-from langchain_core.utils.pydantic import PYDANTIC_VERSION
from tests.unit_tests.pydantic_utils import _schema
@@ -497,8 +495,6 @@ def test_get_output_schema() -> None:
"title": "RunnableWithChatHistoryOutput",
"type": "object",
}
- if version.parse("2.11") <= PYDANTIC_VERSION:
- expected_schema["additionalProperties"] = True
assert _schema(output_type) == expected_schema
diff --git a/libs/core/tests/unit_tests/runnables/test_runnable.py b/libs/core/tests/unit_tests/runnables/test_runnable.py
index c8afa1eea5d..4e82358d9bf 100644
--- a/libs/core/tests/unit_tests/runnables/test_runnable.py
+++ b/libs/core/tests/unit_tests/runnables/test_runnable.py
@@ -94,7 +94,7 @@ PYDANTIC_VERSION_AT_LEAST_210 = version.parse("2.10") <= PYDANTIC_VERSION
class FakeTracer(BaseTracer):
"""Fake tracer that records LangChain execution.
- It replaces run ids with deterministic UUIDs for snapshotting.
+ It replaces run IDs with deterministic UUIDs for snapshotting.
"""
def __init__(self) -> None:
@@ -313,6 +313,12 @@ def test_schemas(snapshot: SnapshotAssertion) -> None:
"description": "Class for storing a piece of text and "
"associated metadata.\n"
"\n"
+ "!!! note\n"
+ " `Document` is for **retrieval workflows**, not chat I/O. For "
+ "sending text\n"
+ " to an LLM in a conversation, use message types from "
+ "`langchain.messages`.\n"
+ "\n"
"Example:\n"
" ```python\n"
" from langchain_core.documents import Document\n"
@@ -2073,7 +2079,7 @@ async def test_prompt_with_llm(
part async for part in chain.astream_log({"question": "What is your name?"})
]
- # remove ids from logs
+ # Remove IDs from logs
for part in stream_log:
for op in part.ops:
if (
@@ -2284,7 +2290,7 @@ async def test_prompt_with_llm_parser(
part async for part in chain.astream_log({"question": "What is your name?"})
]
- # remove ids from logs
+ # Remove IDs from logs
for part in stream_log:
for op in part.ops:
if (
@@ -2472,7 +2478,7 @@ async def test_stream_log_retriever() -> None:
part async for part in chain.astream_log({"question": "What is your name?"})
]
- # remove ids from logs
+ # Remove IDs from logs
for part in stream_log:
for op in part.ops:
if (
@@ -2505,7 +2511,7 @@ async def test_stream_log_lists() -> None:
part async for part in chain.astream_log({"question": "What is your name?"})
]
- # remove ids from logs
+ # Remove IDs from logs
for part in stream_log:
for op in part.ops:
if (
@@ -5719,3 +5725,37 @@ def test_runnable_assign() -> None:
result = runnable_assign.invoke({"input": 5})
assert result == {"input": 5, "add_step": {"added": 15}}
+
+
+def test_runnable_typed_dict_schema() -> None:
+ """Testing that the schema is generated properly(not empty) when using TypedDict.
+
+ subclasses to annotate the arguments of a RunnableParallel children.
+ """
+
+ class Foo(TypedDict):
+ foo: str
+
+ class InputData(Foo):
+ bar: str
+
+ def forward_foo(input_data: InputData) -> str:
+ return input_data["foo"]
+
+ def transform_input(input_data: InputData) -> dict[str, str]:
+ foo = input_data["foo"]
+ bar = input_data["bar"]
+
+ return {"transformed": foo + bar}
+
+ foo_runnable = RunnableLambda(forward_foo)
+ other_runnable = RunnableLambda(transform_input)
+
+ parallel = RunnableParallel(
+ foo=foo_runnable,
+ other=other_runnable,
+ )
+ assert (
+ repr(parallel.input_schema.model_validate({"foo": "Y", "bar": "Z"}))
+ == "RunnableParallelInput(root={'foo': 'Y', 'bar': 'Z'})"
+ )
diff --git a/libs/core/tests/unit_tests/runnables/test_runnable_events_v1.py b/libs/core/tests/unit_tests/runnables/test_runnable_events_v1.py
index 8efd5aea9a6..0b30aa58be5 100644
--- a/libs/core/tests/unit_tests/runnables/test_runnable_events_v1.py
+++ b/libs/core/tests/unit_tests/runnables/test_runnable_events_v1.py
@@ -36,10 +36,10 @@ from tests.unit_tests.stubs import _any_id_ai_message, _any_id_ai_message_chunk
def _with_nulled_run_id(events: Sequence[StreamEvent]) -> list[StreamEvent]:
- """Removes the run ids from events."""
+ """Removes the run IDs from events."""
for event in events:
- assert "parent_ids" in event, "Parent ids should be present in the event."
- assert event["parent_ids"] == [], "Parent ids should be empty."
+ assert "parent_ids" in event, "Parent IDs should be present in the event."
+ assert event["parent_ids"] == [], "Parent IDs should be empty."
return cast("list[StreamEvent]", [{**event, "run_id": ""} for event in events])
diff --git a/libs/core/tests/unit_tests/runnables/test_runnable_events_v2.py b/libs/core/tests/unit_tests/runnables/test_runnable_events_v2.py
index de6c4030cac..42e95d6b160 100644
--- a/libs/core/tests/unit_tests/runnables/test_runnable_events_v2.py
+++ b/libs/core/tests/unit_tests/runnables/test_runnable_events_v2.py
@@ -56,7 +56,7 @@ from tests.unit_tests.stubs import _any_id_ai_message, _any_id_ai_message_chunk
def _with_nulled_run_id(events: Sequence[StreamEvent]) -> list[StreamEvent]:
- """Removes the run ids from events."""
+ """Removes the run IDs from events."""
for event in events:
assert "run_id" in event, f"Event {event} does not have a run_id."
assert "parent_ids" in event, f"Event {event} does not have parent_ids."
diff --git a/libs/core/tests/unit_tests/test_tools.py b/libs/core/tests/unit_tests/test_tools.py
index adc737e8fce..6c7dc25bba7 100644
--- a/libs/core/tests/unit_tests/test_tools.py
+++ b/libs/core/tests/unit_tests/test_tools.py
@@ -68,7 +68,14 @@ from langchain_core.utils.pydantic import (
create_model_v2,
)
from tests.unit_tests.fake.callbacks import FakeCallbackHandler
-from tests.unit_tests.pydantic_utils import _schema
+from tests.unit_tests.pydantic_utils import _normalize_schema, _schema
+
+try:
+ from langgraph.prebuilt import ToolRuntime # type: ignore[import-not-found]
+
+ HAS_LANGGRAPH = True
+except ImportError:
+ HAS_LANGGRAPH = False
def _get_tool_call_json_schema(tool: BaseTool) -> dict:
@@ -105,14 +112,6 @@ class _MockSchema(BaseModel):
arg3: dict | None = None
-class _MockSchemaV1(BaseModelV1):
- """Return the arguments directly."""
-
- arg1: int
- arg2: bool
- arg3: dict | None = None
-
-
class _MockStructuredTool(BaseTool):
name: str = "structured_api"
args_schema: type[BaseModel] = _MockSchema
@@ -206,6 +205,21 @@ def test_decorator_with_specified_schema() -> None:
assert isinstance(tool_func, BaseTool)
assert tool_func.args_schema == _MockSchema
+
+@pytest.mark.skipif(
+ sys.version_info >= (3, 14),
+ reason="pydantic.v1 namespace not supported with Python 3.14+",
+)
+def test_decorator_with_specified_schema_pydantic_v1() -> None:
+ """Test that manually specified schemata are passed through to the tool."""
+
+ class _MockSchemaV1(BaseModelV1):
+ """Return the arguments directly."""
+
+ arg1: int
+ arg2: bool
+ arg3: dict | None = None
+
@tool(args_schema=cast("ArgsSchema", _MockSchemaV1))
def tool_func_v1(*, arg1: int, arg2: bool, arg3: dict | None = None) -> str:
return f"{arg1} {arg2} {arg3}"
@@ -229,7 +243,7 @@ def test_decorated_function_schema_equivalent() -> None:
assert (
_schema(structured_tool_input.args_schema)["properties"]
== _schema(_MockSchema)["properties"]
- == structured_tool_input.args
+ == _normalize_schema(structured_tool_input.args)
)
@@ -348,6 +362,10 @@ def test_structured_tool_types_parsed() -> None:
assert result == expected
+@pytest.mark.skipif(
+ sys.version_info >= (3, 14),
+ reason="pydantic.v1 namespace not supported with Python 3.14+",
+)
def test_structured_tool_types_parsed_pydantic_v1() -> None:
"""Test the non-primitive types are correctly passed to structured tools."""
@@ -1287,7 +1305,7 @@ def test_docstring_parsing() -> None:
assert args_schema2["description"] == "The foo. Additional description here."
assert args_schema2["properties"] == expected["properties"]
- # Multi-line wth Returns block
+ # Multi-line with Returns block
def foo3(bar: str, baz: int) -> str:
"""The foo.
@@ -1880,7 +1898,10 @@ def generate_backwards_compatible_v1() -> list[Any]:
# behave well with either pydantic 1 proper,
# pydantic v1 from pydantic 2,
# or pydantic 2 proper.
-TEST_MODELS = generate_models() + generate_backwards_compatible_v1()
+TEST_MODELS = generate_models()
+
+if sys.version_info < (3, 14):
+ TEST_MODELS += generate_backwards_compatible_v1()
@pytest.mark.parametrize("pydantic_model", TEST_MODELS)
@@ -1934,11 +1955,10 @@ def test_args_schema_as_pydantic(pydantic_model: Any) -> None:
def test_args_schema_explicitly_typed() -> None:
- """This should test that one can type the args schema as a pydantic model.
-
- Please note that this will test using pydantic 2 even though BaseTool
- is a pydantic 1 model!
+ """This should test that one can type the args schema as a Pydantic model.
+ Please note that this will test using pydantic 2 even though `BaseTool`
+ is a Pydantic 1 model!
"""
class Foo(BaseModel):
@@ -1981,7 +2001,7 @@ def test_args_schema_explicitly_typed() -> None:
@pytest.mark.parametrize("pydantic_model", TEST_MODELS)
def test_structured_tool_with_different_pydantic_versions(pydantic_model: Any) -> None:
- """This should test that one can type the args schema as a pydantic model."""
+ """This should test that one can type the args schema as a Pydantic model."""
def foo(a: int, b: str) -> str:
"""Hahaha."""
@@ -2080,6 +2100,8 @@ def test__get_all_basemodel_annotations_v2(*, use_v1_namespace: bool) -> None:
A = TypeVar("A")
if use_v1_namespace:
+ if sys.version_info >= (3, 14):
+ pytest.skip("pydantic.v1 namespace not supported with Python 3.14+")
class ModelA(BaseModelV1, Generic[A], extra="allow"):
a: A
@@ -2758,3 +2780,249 @@ def test_tool_args_schema_with_annotated_type() -> None:
"type": "array",
}
}
+
+
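+# The tests below exercise filtering of injected / runtime-only tool arguments from
+# the `inputs` payload forwarded to callback handlers via on_tool_start.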
+class CallbackHandlerWithInputCapture(FakeCallbackHandler):
+ """Callback handler that captures inputs passed to on_tool_start."""
+
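+    # One captured entry per on_tool_start call; None when the raw tool input was a plain string.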
+ captured_inputs: list[dict | None] = []
+
+ def on_tool_start(
+ self,
+ serialized: dict[str, Any],
+ input_str: str,
+ *,
+ run_id: Any,
+ parent_run_id: Any | None = None,
+ tags: list[str] | None = None,
+ metadata: dict[str, Any] | None = None,
+ inputs: dict[str, Any] | None = None,
+ **kwargs: Any,
+ ) -> Any:
+ """Capture the inputs passed to on_tool_start."""
+ self.captured_inputs.append(inputs)
+ return super().on_tool_start(
+ serialized,
+ input_str,
+ run_id=run_id,
+ parent_run_id=parent_run_id,
+ tags=tags,
+ metadata=metadata,
+ inputs=inputs,
+ **kwargs,
+ )
+
+
+def test_filter_injected_args_from_callbacks() -> None:
+ """Test that injected tool arguments are filtered from callback inputs."""
+
+ @tool
+ def search_tool(
+ query: str,
+ state: Annotated[dict, InjectedToolArg()],
+ ) -> str:
+ """Search with injected state.
+
+ Args:
+ query: The search query.
+ state: Injected state context.
+ """
+ return f"Results for: {query}"
+
+ handler = CallbackHandlerWithInputCapture(captured_inputs=[])
+ result = search_tool.invoke(
+ {"query": "test query", "state": {"user_id": 123}},
+ config={"callbacks": [handler]},
+ )
+
+ assert result == "Results for: test query"
+ assert handler.tool_starts == 1
+ assert len(handler.captured_inputs) == 1
+
+ # Verify that injected 'state' arg is filtered out
+ captured = handler.captured_inputs[0]
+ assert captured is not None
+ assert "query" in captured
+ assert "state" not in captured
+ assert captured["query"] == "test query"
+
+
+def test_filter_run_manager_from_callbacks() -> None:
+ """Test that run_manager is filtered from callback inputs."""
+
+ @tool
+ def tool_with_run_manager(
+ message: str,
+ run_manager: CallbackManagerForToolRun | None = None,
+ ) -> str:
+ """Tool with run_manager parameter.
+
+ Args:
+ message: The message to process.
+ run_manager: The callback manager.
+ """
+ return f"Processed: {message}"
+
+ handler = CallbackHandlerWithInputCapture(captured_inputs=[])
+ result = tool_with_run_manager.invoke(
+ {"message": "hello"},
+ config={"callbacks": [handler]},
+ )
+
+ assert result == "Processed: hello"
+ assert handler.tool_starts == 1
+ assert len(handler.captured_inputs) == 1
+
+ # Verify that run_manager is filtered out
+ captured = handler.captured_inputs[0]
+ assert captured is not None
+ assert "message" in captured
+ assert "run_manager" not in captured
+
+
+def test_filter_multiple_injected_args() -> None:
+ """Test filtering multiple injected arguments from callback inputs."""
+
+ @tool
+ def complex_tool(
+ query: str,
+ limit: int,
+ state: Annotated[dict, InjectedToolArg()],
+ context: Annotated[str, InjectedToolArg()],
+ run_manager: CallbackManagerForToolRun | None = None,
+ ) -> str:
+ """Complex tool with multiple injected args.
+
+ Args:
+ query: The search query.
+ limit: Maximum number of results.
+ state: Injected state.
+ context: Injected context.
+ run_manager: The callback manager.
+ """
+ return f"Query: {query}, Limit: {limit}"
+
+ handler = CallbackHandlerWithInputCapture(captured_inputs=[])
+ result = complex_tool.invoke(
+ {
+ "query": "test",
+ "limit": 10,
+ "state": {"foo": "bar"},
+ "context": "some context",
+ },
+ config={"callbacks": [handler]},
+ )
+
+ assert result == "Query: test, Limit: 10"
+ assert handler.tool_starts == 1
+ assert len(handler.captured_inputs) == 1
+
+ # Verify that only non-injected args remain
+ captured = handler.captured_inputs[0]
+ assert captured is not None
+ assert captured == {"query": "test", "limit": 10}
+ assert "state" not in captured
+ assert "context" not in captured
+ assert "run_manager" not in captured
+
+
+def test_no_filtering_for_string_input() -> None:
+ """Test that string inputs are not filtered (passed as None)."""
+
+ @tool
+ def simple_tool(query: str) -> str:
+ """Simple tool with string input.
+
+ Args:
+ query: The query string.
+ """
+ return f"Result: {query}"
+
+ handler = CallbackHandlerWithInputCapture(captured_inputs=[])
+ result = simple_tool.invoke("test query", config={"callbacks": [handler]})
+
+ assert result == "Result: test query"
+ assert handler.tool_starts == 1
+ assert len(handler.captured_inputs) == 1
+
+ # String inputs should result in None for the inputs parameter
+ assert handler.captured_inputs[0] is None
+
+
+async def test_filter_injected_args_async() -> None:
+ """Test that injected args are filtered in async tool execution."""
+
+ @tool
+ async def async_search_tool(
+ query: str,
+ state: Annotated[dict, InjectedToolArg()],
+ ) -> str:
+ """Async search with injected state.
+
+ Args:
+ query: The search query.
+ state: Injected state context.
+ """
+ return f"Async results for: {query}"
+
+ handler = CallbackHandlerWithInputCapture(captured_inputs=[])
+ result = await async_search_tool.ainvoke(
+ {"query": "async test", "state": {"user_id": 456}},
+ config={"callbacks": [handler]},
+ )
+
+ assert result == "Async results for: async test"
+ assert handler.tool_starts == 1
+ assert len(handler.captured_inputs) == 1
+
+ # Verify filtering in async execution
+ captured = handler.captured_inputs[0]
+ assert captured is not None
+ assert "query" in captured
+ assert "state" not in captured
+ assert captured["query"] == "async test"
+
+
+@pytest.mark.skipif(not HAS_LANGGRAPH, reason="langgraph not installed")
+def test_filter_tool_runtime_directly_injected_arg() -> None:
+ """Test that ToolRuntime (a _DirectlyInjectedToolArg) is filtered."""
+
+ @tool
+ def tool_with_runtime(query: str, limit: int, runtime: ToolRuntime) -> str:
+ """Tool with ToolRuntime parameter.
+
+ Args:
+ query: The search query.
+ limit: Max results.
+ runtime: The tool runtime (directly injected).
+ """
+ return f"Query: {query}, Limit: {limit}"
+
+ handler = CallbackHandlerWithInputCapture(captured_inputs=[])
+
+ # Create a mock ToolRuntime instance
+ class MockRuntime:
+ """Mock ToolRuntime for testing."""
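+        # Deliberately minimal: the tool under test never reads these attributes.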
+
+ agent_name = "test_agent"
+ context: dict[str, Any] = {}
+ state: dict[str, Any] = {}
+
+ result = tool_with_runtime.invoke(
+ {
+ "query": "test",
+ "limit": 5,
+ "runtime": MockRuntime(),
+ },
+ config={"callbacks": [handler]},
+ )
+
+ assert result == "Query: test, Limit: 5"
+ assert handler.tool_starts == 1
+ assert len(handler.captured_inputs) == 1
+
+ # Verify that ToolRuntime is filtered out
+ captured = handler.captured_inputs[0]
+ assert captured is not None
+ assert captured == {"query": "test", "limit": 5}
+ assert "runtime" not in captured
diff --git a/libs/core/tests/unit_tests/tracers/test_memory_stream.py b/libs/core/tests/unit_tests/tracers/test_memory_stream.py
index 284e7035299..da96e6d7bb3 100644
--- a/libs/core/tests/unit_tests/tracers/test_memory_stream.py
+++ b/libs/core/tests/unit_tests/tracers/test_memory_stream.py
@@ -120,7 +120,7 @@ def test_send_to_closed_stream() -> None:
We may want to handle this in a better way in the future.
"""
- event_loop = asyncio.get_event_loop()
+ event_loop = asyncio.new_event_loop()
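+    # Create the loop explicitly: calling get_event_loop() with no running loop is
+    # deprecated on newer Python versions.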
channel = _MemoryStream[str](event_loop)
writer = channel.get_send_stream()
    # send with an open event loop
diff --git a/libs/core/tests/unit_tests/tracers/test_run_collector.py b/libs/core/tests/unit_tests/tracers/test_run_collector.py
index 95c0052cf21..17f6a973fce 100644
--- a/libs/core/tests/unit_tests/tracers/test_run_collector.py
+++ b/libs/core/tests/unit_tests/tracers/test_run_collector.py
@@ -7,9 +7,9 @@ from langchain_core.tracers.context import collect_runs
def test_collect_runs() -> None:
- llm = FakeListLLM(responses=["hello"])
+ model = FakeListLLM(responses=["hello"])
with collect_runs() as cb:
- llm.invoke("hi")
+ model.invoke("hi")
assert cb.traced_runs
assert len(cb.traced_runs) == 1
assert isinstance(cb.traced_runs[0].id, uuid.UUID)
diff --git a/libs/core/tests/unit_tests/tracers/test_schemas.py b/libs/core/tests/unit_tests/tracers/test_schemas.py
index 4f86d0fe541..97f476059f5 100644
--- a/libs/core/tests/unit_tests/tracers/test_schemas.py
+++ b/libs/core/tests/unit_tests/tracers/test_schemas.py
@@ -5,17 +5,7 @@ from langchain_core.tracers.schemas import __all__ as schemas_all
def test_public_api() -> None:
"""Test for changes in the public API."""
expected_all = [
- "BaseRun",
- "ChainRun",
- "LLMRun",
"Run",
- "RunTypeEnum",
- "ToolRun",
- "TracerSession",
- "TracerSessionBase",
- "TracerSessionV1",
- "TracerSessionV1Base",
- "TracerSessionV1Create",
]
assert sorted(schemas_all) == expected_all
diff --git a/libs/core/tests/unit_tests/utils/test_function_calling.py b/libs/core/tests/unit_tests/utils/test_function_calling.py
index 3fdfd63e087..c4edce261be 100644
--- a/libs/core/tests/unit_tests/utils/test_function_calling.py
+++ b/libs/core/tests/unit_tests/utils/test_function_calling.py
@@ -1155,3 +1155,16 @@ def test_convert_to_openai_function_nested_strict_2() -> None:
actual = convert_to_openai_function(my_function, strict=True)
assert actual == expected
+
+
+def test_convert_to_openai_function_strict_required() -> None:
+ class MyModel(BaseModel):
+ """Dummy schema."""
+
+ arg1: int = Field(..., description="foo")
+ arg2: str | None = Field(None, description="bar")
+
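+    # strict=True follows the OpenAI structured-outputs convention: every property,
+    # including ones with defaults, must be listed under "required".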
+ expected = ["arg1", "arg2"]
+ func = convert_to_openai_function(MyModel, strict=True)
+ actual = func["parameters"]["required"]
+ assert actual == expected
diff --git a/libs/core/tests/unit_tests/utils/test_pydantic.py b/libs/core/tests/unit_tests/utils/test_pydantic.py
index 0793c752108..ef4775c0e41 100644
--- a/libs/core/tests/unit_tests/utils/test_pydantic.py
+++ b/libs/core/tests/unit_tests/utils/test_pydantic.py
@@ -1,8 +1,10 @@
"""Test for some custom pydantic decorators."""
+import sys
import warnings
from typing import Any
+import pytest
from pydantic import BaseModel, ConfigDict, Field
from pydantic.v1 import BaseModel as BaseModelV1
@@ -139,6 +141,10 @@ def test_fields_pydantic_v2_proper() -> None:
assert fields == {"x": Foo.model_fields["x"]}
+@pytest.mark.skipif(
+ sys.version_info >= (3, 14),
+ reason="pydantic.v1 namespace not supported with Python 3.14+",
+)
def test_fields_pydantic_v1_from_2() -> None:
class Foo(BaseModelV1):
x: int
diff --git a/libs/core/tests/unit_tests/utils/test_strings.py b/libs/core/tests/unit_tests/utils/test_strings.py
index 2cf8aa14b4c..9e94517cec9 100644
--- a/libs/core/tests/unit_tests/utils/test_strings.py
+++ b/libs/core/tests/unit_tests/utils/test_strings.py
@@ -62,7 +62,7 @@ def test_stringify_value_nested_structures() -> None:
result = stringify_value(nested_data)
- # Shoudl contain all the nested values
+ # Should contain all the nested values
assert "users:" in result
assert "name: Alice" in result
assert "name: Bob" in result
diff --git a/libs/core/tests/unit_tests/utils/test_utils.py b/libs/core/tests/unit_tests/utils/test_utils.py
index 8cc1c559ecc..f70063c421e 100644
--- a/libs/core/tests/unit_tests/utils/test_utils.py
+++ b/libs/core/tests/unit_tests/utils/test_utils.py
@@ -1,5 +1,6 @@
import os
import re
+import sys
from collections.abc import Callable
from contextlib import AbstractContextManager, nullcontext
from copy import deepcopy
@@ -214,6 +215,10 @@ def test_guard_import_failure(
guard_import(module_name, pip_name=pip_name, package=package)
+@pytest.mark.skipif(
+ sys.version_info >= (3, 14),
+ reason="pydantic.v1 namespace not supported with Python 3.14+",
+)
def test_get_pydantic_field_names_v1_in_2() -> None:
class PydanticV1Model(PydanticV1BaseModel):
field1: str
diff --git a/libs/core/tests/unit_tests/vectorstores/test_vectorstore.py b/libs/core/tests/unit_tests/vectorstores/test_vectorstore.py
index e3904b7ad69..f8af00ee79b 100644
--- a/libs/core/tests/unit_tests/vectorstores/test_vectorstore.py
+++ b/libs/core/tests/unit_tests/vectorstores/test_vectorstore.py
@@ -21,7 +21,7 @@ if TYPE_CHECKING:
class CustomAddTextsVectorstore(VectorStore):
- """A vectorstore that only implements add texts."""
+ """A VectorStore that only implements add texts."""
def __init__(self) -> None:
self.store: dict[str, Document] = {}
@@ -72,7 +72,7 @@ class CustomAddTextsVectorstore(VectorStore):
class CustomAddDocumentsVectorstore(VectorStore):
- """A vectorstore that only implements add documents."""
+ """A VectorStore that only implements add documents."""
def __init__(self) -> None:
self.store: dict[str, Document] = {}
@@ -249,7 +249,7 @@ def test_default_from_documents(vs_class: type[VectorStore]) -> None:
Document(id="1", page_content="hello", metadata={"foo": "bar"})
]
- # from_documents with ids in args
+ # from_documents with IDs in args
store = vs_class.from_documents(
[Document(page_content="hello", metadata={"foo": "bar"})], embeddings, ids=["1"]
)
@@ -278,7 +278,7 @@ async def test_default_afrom_documents(vs_class: type[VectorStore]) -> None:
Document(id="1", page_content="hello", metadata={"foo": "bar"})
]
- # from_documents with ids in args
+ # from_documents with IDs in args
store = await vs_class.afrom_documents(
[Document(page_content="hello", metadata={"foo": "bar"})], embeddings, ids=["1"]
)
@@ -287,7 +287,7 @@ async def test_default_afrom_documents(vs_class: type[VectorStore]) -> None:
Document(id="1", page_content="hello", metadata={"foo": "bar"})
]
- # Test afrom_documents with id specified in both document and ids
+ # Test afrom_documents with id specified in both document and IDs
original_document = Document(id="7", page_content="baz")
store = await vs_class.afrom_documents([original_document], embeddings, ids=["6"])
assert original_document.id == "7" # original document should not be modified
diff --git a/libs/core/uv.lock b/libs/core/uv.lock
index 85d27ad99bd..736a129bdd7 100644
--- a/libs/core/uv.lock
+++ b/libs/core/uv.lock
@@ -1,5 +1,5 @@
version = 1
-revision = 2
+revision = 3
requires-python = ">=3.10.0, <4.0.0"
resolution-markers = [
"python_full_version >= '3.14' and platform_python_implementation == 'PyPy'",
@@ -960,7 +960,7 @@ wheels = [
[[package]]
name = "langchain-core"
-version = "1.0.0a8"
+version = "1.0.3"
source = { editable = "." }
dependencies = [
{ name = "jsonpatch" },
@@ -985,6 +985,7 @@ test = [
{ name = "blockbuster" },
{ name = "freezegun" },
{ name = "grandalf" },
+ { name = "langchain-model-profiles" },
{ name = "langchain-tests" },
{ name = "numpy", version = "2.2.6", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.11'" },
{ name = "numpy", version = "2.3.3", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
@@ -1000,6 +1001,7 @@ test = [
{ name = "syrupy" },
]
typing = [
+ { name = "langchain-model-profiles" },
{ name = "langchain-text-splitters" },
{ name = "mypy" },
{ name = "types-pyyaml" },
@@ -1029,6 +1031,7 @@ test = [
{ name = "blockbuster", specifier = ">=1.5.18,<1.6.0" },
{ name = "freezegun", specifier = ">=1.2.2,<2.0.0" },
{ name = "grandalf", specifier = ">=0.8.0,<1.0.0" },
+ { name = "langchain-model-profiles", directory = "../model-profiles" },
{ name = "langchain-tests", directory = "../standard-tests" },
{ name = "numpy", marker = "python_full_version < '3.13'", specifier = ">=1.26.4" },
{ name = "numpy", marker = "python_full_version >= '3.13'", specifier = ">=2.1.0" },
@@ -1045,15 +1048,56 @@ test = [
]
test-integration = []
typing = [
+ { name = "langchain-model-profiles", directory = "../model-profiles" },
{ name = "langchain-text-splitters", directory = "../text-splitters" },
{ name = "mypy", specifier = ">=1.18.1,<1.19.0" },
{ name = "types-pyyaml", specifier = ">=6.0.12.2,<7.0.0.0" },
{ name = "types-requests", specifier = ">=2.28.11.5,<3.0.0.0" },
]
+[[package]]
+name = "langchain-model-profiles"
+version = "0.0.3"
+source = { directory = "../model-profiles" }
+dependencies = [
+ { name = "tomli", marker = "python_full_version < '3.11'" },
+ { name = "typing-extensions" },
+]
+
+[package.metadata]
+requires-dist = [
+ { name = "tomli", marker = "python_full_version < '3.11'", specifier = ">=2.0.0,<3.0.0" },
+ { name = "typing-extensions", specifier = ">=4.7.0,<5.0.0" },
+]
+
+[package.metadata.requires-dev]
+dev = [{ name = "httpx", specifier = ">=0.23.0,<1" }]
+lint = [
+ { name = "langchain", editable = "../langchain_v1" },
+ { name = "ruff", specifier = ">=0.12.2,<0.13.0" },
+]
+test = [
+ { name = "langchain", extras = ["openai"], editable = "../langchain_v1" },
+ { name = "langchain-core", editable = "." },
+ { name = "pytest", specifier = ">=8.0.0,<9.0.0" },
+ { name = "pytest-asyncio", specifier = ">=0.23.2,<2.0.0" },
+ { name = "pytest-cov", specifier = ">=4.0.0,<8.0.0" },
+ { name = "pytest-mock" },
+ { name = "pytest-socket", specifier = ">=0.6.0,<1.0.0" },
+ { name = "pytest-watcher", specifier = ">=0.2.6,<1.0.0" },
+ { name = "pytest-xdist", specifier = ">=3.6.1,<4.0.0" },
+ { name = "syrupy", specifier = ">=4.0.2,<5.0.0" },
+ { name = "toml", specifier = ">=0.10.2,<1.0.0" },
+]
+test-integration = [{ name = "langchain-core", editable = "." }]
+typing = [
+ { name = "mypy", specifier = ">=1.18.1,<1.19.0" },
+ { name = "types-toml", specifier = ">=0.10.8.20240310,<1.0.0.0" },
+]
+
[[package]]
name = "langchain-tests"
-version = "1.0.0a2"
+version = "1.0.1"
source = { directory = "../standard-tests" }
dependencies = [
{ name = "httpx" },
@@ -1098,7 +1142,7 @@ typing = [
[[package]]
name = "langchain-text-splitters"
-version = "1.0.0a1"
+version = "1.0.0"
source = { directory = "../text-splitters" }
dependencies = [
{ name = "langchain-core" },
@@ -1131,8 +1175,8 @@ test-integration = [
{ name = "nltk", specifier = ">=3.9.1,<4.0.0" },
{ name = "scipy", marker = "python_full_version == '3.12.*'", specifier = ">=1.7.0,<2.0.0" },
{ name = "scipy", marker = "python_full_version >= '3.13'", specifier = ">=1.14.1,<2.0.0" },
- { name = "sentence-transformers", specifier = ">=3.0.1,<4.0.0" },
- { name = "spacy", specifier = ">=3.8.7,<4.0.0" },
+ { name = "sentence-transformers", marker = "python_full_version < '3.14'", specifier = ">=3.0.1,<4.0.0" },
+ { name = "spacy", marker = "python_full_version < '3.14'", specifier = ">=3.8.7,<4.0.0" },
{ name = "thinc", specifier = ">=8.3.6,<9.0.0" },
{ name = "tiktoken", specifier = ">=0.8.0,<1.0.0" },
{ name = "transformers", specifier = ">=4.51.3,<5.0.0" },
@@ -2021,7 +2065,7 @@ wheels = [
[[package]]
name = "pydantic"
-version = "2.11.9"
+version = "2.12.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "annotated-types" },
@@ -2029,96 +2073,119 @@ dependencies = [
{ name = "typing-extensions" },
{ name = "typing-inspection" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/ff/5d/09a551ba512d7ca404d785072700d3f6727a02f6f3c24ecfd081c7cf0aa8/pydantic-2.11.9.tar.gz", hash = "sha256:6b8ffda597a14812a7975c90b82a8a2e777d9257aba3453f973acd3c032a18e2", size = 788495, upload-time = "2025-09-13T11:26:39.325Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/c3/da/b8a7ee04378a53f6fefefc0c5e05570a3ebfdfa0523a878bcd3b475683ee/pydantic-2.12.0.tar.gz", hash = "sha256:c1a077e6270dbfb37bfd8b498b3981e2bb18f68103720e51fa6c306a5a9af563", size = 814760, upload-time = "2025-10-07T15:58:03.467Z" }
wheels = [
- { url = "https://files.pythonhosted.org/packages/3e/d3/108f2006987c58e76691d5ae5d200dd3e0f532cb4e5fa3560751c3a1feba/pydantic-2.11.9-py3-none-any.whl", hash = "sha256:c42dd626f5cfc1c6950ce6205ea58c93efa406da65f479dcb4029d5934857da2", size = 444855, upload-time = "2025-09-13T11:26:36.909Z" },
+ { url = "https://files.pythonhosted.org/packages/f4/9d/d5c855424e2e5b6b626fbc6ec514d8e655a600377ce283008b115abb7445/pydantic-2.12.0-py3-none-any.whl", hash = "sha256:f6a1da352d42790537e95e83a8bdfb91c7efbae63ffd0b86fa823899e807116f", size = 459730, upload-time = "2025-10-07T15:58:01.576Z" },
]
[[package]]
name = "pydantic-core"
-version = "2.33.2"
+version = "2.41.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/ad/88/5f2260bdfae97aabf98f1778d43f69574390ad787afb646292a638c923d4/pydantic_core-2.33.2.tar.gz", hash = "sha256:7cb8bc3605c29176e1b105350d2e6474142d7c1bd1d9327c4a9bdb46bf827acc", size = 435195, upload-time = "2025-04-23T18:33:52.104Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/7d/14/12b4a0d2b0b10d8e1d9a24ad94e7bbb43335eaf29c0c4e57860e8a30734a/pydantic_core-2.41.1.tar.gz", hash = "sha256:1ad375859a6d8c356b7704ec0f547a58e82ee80bb41baa811ad710e124bc8f2f", size = 454870, upload-time = "2025-10-07T10:50:45.974Z" }
wheels = [
- { url = "https://files.pythonhosted.org/packages/e5/92/b31726561b5dae176c2d2c2dc43a9c5bfba5d32f96f8b4c0a600dd492447/pydantic_core-2.33.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2b3d326aaef0c0399d9afffeb6367d5e26ddc24d351dbc9c636840ac355dc5d8", size = 2028817, upload-time = "2025-04-23T18:30:43.919Z" },
- { url = "https://files.pythonhosted.org/packages/a3/44/3f0b95fafdaca04a483c4e685fe437c6891001bf3ce8b2fded82b9ea3aa1/pydantic_core-2.33.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0e5b2671f05ba48b94cb90ce55d8bdcaaedb8ba00cc5359f6810fc918713983d", size = 1861357, upload-time = "2025-04-23T18:30:46.372Z" },
- { url = "https://files.pythonhosted.org/packages/30/97/e8f13b55766234caae05372826e8e4b3b96e7b248be3157f53237682e43c/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0069c9acc3f3981b9ff4cdfaf088e98d83440a4c7ea1bc07460af3d4dc22e72d", size = 1898011, upload-time = "2025-04-23T18:30:47.591Z" },
- { url = "https://files.pythonhosted.org/packages/9b/a3/99c48cf7bafc991cc3ee66fd544c0aae8dc907b752f1dad2d79b1b5a471f/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d53b22f2032c42eaaf025f7c40c2e3b94568ae077a606f006d206a463bc69572", size = 1982730, upload-time = "2025-04-23T18:30:49.328Z" },
- { url = "https://files.pythonhosted.org/packages/de/8e/a5b882ec4307010a840fb8b58bd9bf65d1840c92eae7534c7441709bf54b/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0405262705a123b7ce9f0b92f123334d67b70fd1f20a9372b907ce1080c7ba02", size = 2136178, upload-time = "2025-04-23T18:30:50.907Z" },
- { url = "https://files.pythonhosted.org/packages/e4/bb/71e35fc3ed05af6834e890edb75968e2802fe98778971ab5cba20a162315/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4b25d91e288e2c4e0662b8038a28c6a07eaac3e196cfc4ff69de4ea3db992a1b", size = 2736462, upload-time = "2025-04-23T18:30:52.083Z" },
- { url = "https://files.pythonhosted.org/packages/31/0d/c8f7593e6bc7066289bbc366f2235701dcbebcd1ff0ef8e64f6f239fb47d/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bdfe4b3789761f3bcb4b1ddf33355a71079858958e3a552f16d5af19768fef2", size = 2005652, upload-time = "2025-04-23T18:30:53.389Z" },
- { url = "https://files.pythonhosted.org/packages/d2/7a/996d8bd75f3eda405e3dd219ff5ff0a283cd8e34add39d8ef9157e722867/pydantic_core-2.33.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:efec8db3266b76ef9607c2c4c419bdb06bf335ae433b80816089ea7585816f6a", size = 2113306, upload-time = "2025-04-23T18:30:54.661Z" },
- { url = "https://files.pythonhosted.org/packages/ff/84/daf2a6fb2db40ffda6578a7e8c5a6e9c8affb251a05c233ae37098118788/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:031c57d67ca86902726e0fae2214ce6770bbe2f710dc33063187a68744a5ecac", size = 2073720, upload-time = "2025-04-23T18:30:56.11Z" },
- { url = "https://files.pythonhosted.org/packages/77/fb/2258da019f4825128445ae79456a5499c032b55849dbd5bed78c95ccf163/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:f8de619080e944347f5f20de29a975c2d815d9ddd8be9b9b7268e2e3ef68605a", size = 2244915, upload-time = "2025-04-23T18:30:57.501Z" },
- { url = "https://files.pythonhosted.org/packages/d8/7a/925ff73756031289468326e355b6fa8316960d0d65f8b5d6b3a3e7866de7/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:73662edf539e72a9440129f231ed3757faab89630d291b784ca99237fb94db2b", size = 2241884, upload-time = "2025-04-23T18:30:58.867Z" },
- { url = "https://files.pythonhosted.org/packages/0b/b0/249ee6d2646f1cdadcb813805fe76265745c4010cf20a8eba7b0e639d9b2/pydantic_core-2.33.2-cp310-cp310-win32.whl", hash = "sha256:0a39979dcbb70998b0e505fb1556a1d550a0781463ce84ebf915ba293ccb7e22", size = 1910496, upload-time = "2025-04-23T18:31:00.078Z" },
- { url = "https://files.pythonhosted.org/packages/66/ff/172ba8f12a42d4b552917aa65d1f2328990d3ccfc01d5b7c943ec084299f/pydantic_core-2.33.2-cp310-cp310-win_amd64.whl", hash = "sha256:b0379a2b24882fef529ec3b4987cb5d003b9cda32256024e6fe1586ac45fc640", size = 1955019, upload-time = "2025-04-23T18:31:01.335Z" },
- { url = "https://files.pythonhosted.org/packages/3f/8d/71db63483d518cbbf290261a1fc2839d17ff89fce7089e08cad07ccfce67/pydantic_core-2.33.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4c5b0a576fb381edd6d27f0a85915c6daf2f8138dc5c267a57c08a62900758c7", size = 2028584, upload-time = "2025-04-23T18:31:03.106Z" },
- { url = "https://files.pythonhosted.org/packages/24/2f/3cfa7244ae292dd850989f328722d2aef313f74ffc471184dc509e1e4e5a/pydantic_core-2.33.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246", size = 1855071, upload-time = "2025-04-23T18:31:04.621Z" },
- { url = "https://files.pythonhosted.org/packages/b3/d3/4ae42d33f5e3f50dd467761304be2fa0a9417fbf09735bc2cce003480f2a/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc46a01bf8d62f227d5ecee74178ffc448ff4e5197c756331f71efcc66dc980f", size = 1897823, upload-time = "2025-04-23T18:31:06.377Z" },
- { url = "https://files.pythonhosted.org/packages/f4/f3/aa5976e8352b7695ff808599794b1fba2a9ae2ee954a3426855935799488/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a144d4f717285c6d9234a66778059f33a89096dfb9b39117663fd8413d582dcc", size = 1983792, upload-time = "2025-04-23T18:31:07.93Z" },
- { url = "https://files.pythonhosted.org/packages/d5/7a/cda9b5a23c552037717f2b2a5257e9b2bfe45e687386df9591eff7b46d28/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:73cf6373c21bc80b2e0dc88444f41ae60b2f070ed02095754eb5a01df12256de", size = 2136338, upload-time = "2025-04-23T18:31:09.283Z" },
- { url = "https://files.pythonhosted.org/packages/2b/9f/b8f9ec8dd1417eb9da784e91e1667d58a2a4a7b7b34cf4af765ef663a7e5/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3dc625f4aa79713512d1976fe9f0bc99f706a9dee21dfd1810b4bbbf228d0e8a", size = 2730998, upload-time = "2025-04-23T18:31:11.7Z" },
- { url = "https://files.pythonhosted.org/packages/47/bc/cd720e078576bdb8255d5032c5d63ee5c0bf4b7173dd955185a1d658c456/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:881b21b5549499972441da4758d662aeea93f1923f953e9cbaff14b8b9565aef", size = 2003200, upload-time = "2025-04-23T18:31:13.536Z" },
- { url = "https://files.pythonhosted.org/packages/ca/22/3602b895ee2cd29d11a2b349372446ae9727c32e78a94b3d588a40fdf187/pydantic_core-2.33.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bdc25f3681f7b78572699569514036afe3c243bc3059d3942624e936ec93450e", size = 2113890, upload-time = "2025-04-23T18:31:15.011Z" },
- { url = "https://files.pythonhosted.org/packages/ff/e6/e3c5908c03cf00d629eb38393a98fccc38ee0ce8ecce32f69fc7d7b558a7/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d", size = 2073359, upload-time = "2025-04-23T18:31:16.393Z" },
- { url = "https://files.pythonhosted.org/packages/12/e7/6a36a07c59ebefc8777d1ffdaf5ae71b06b21952582e4b07eba88a421c79/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:bc7aee6f634a6f4a95676fcb5d6559a2c2a390330098dba5e5a5f28a2e4ada30", size = 2245883, upload-time = "2025-04-23T18:31:17.892Z" },
- { url = "https://files.pythonhosted.org/packages/16/3f/59b3187aaa6cc0c1e6616e8045b284de2b6a87b027cce2ffcea073adf1d2/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:235f45e5dbcccf6bd99f9f472858849f73d11120d76ea8707115415f8e5ebebf", size = 2241074, upload-time = "2025-04-23T18:31:19.205Z" },
- { url = "https://files.pythonhosted.org/packages/e0/ed/55532bb88f674d5d8f67ab121a2a13c385df382de2a1677f30ad385f7438/pydantic_core-2.33.2-cp311-cp311-win32.whl", hash = "sha256:6368900c2d3ef09b69cb0b913f9f8263b03786e5b2a387706c5afb66800efd51", size = 1910538, upload-time = "2025-04-23T18:31:20.541Z" },
- { url = "https://files.pythonhosted.org/packages/fe/1b/25b7cccd4519c0b23c2dd636ad39d381abf113085ce4f7bec2b0dc755eb1/pydantic_core-2.33.2-cp311-cp311-win_amd64.whl", hash = "sha256:1e063337ef9e9820c77acc768546325ebe04ee38b08703244c1309cccc4f1bab", size = 1952909, upload-time = "2025-04-23T18:31:22.371Z" },
- { url = "https://files.pythonhosted.org/packages/49/a9/d809358e49126438055884c4366a1f6227f0f84f635a9014e2deb9b9de54/pydantic_core-2.33.2-cp311-cp311-win_arm64.whl", hash = "sha256:6b99022f1d19bc32a4c2a0d544fc9a76e3be90f0b3f4af413f87d38749300e65", size = 1897786, upload-time = "2025-04-23T18:31:24.161Z" },
- { url = "https://files.pythonhosted.org/packages/18/8a/2b41c97f554ec8c71f2a8a5f85cb56a8b0956addfe8b0efb5b3d77e8bdc3/pydantic_core-2.33.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a7ec89dc587667f22b6a0b6579c249fca9026ce7c333fc142ba42411fa243cdc", size = 2009000, upload-time = "2025-04-23T18:31:25.863Z" },
- { url = "https://files.pythonhosted.org/packages/a1/02/6224312aacb3c8ecbaa959897af57181fb6cf3a3d7917fd44d0f2917e6f2/pydantic_core-2.33.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3c6db6e52c6d70aa0d00d45cdb9b40f0433b96380071ea80b09277dba021ddf7", size = 1847996, upload-time = "2025-04-23T18:31:27.341Z" },
- { url = "https://files.pythonhosted.org/packages/d6/46/6dcdf084a523dbe0a0be59d054734b86a981726f221f4562aed313dbcb49/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e61206137cbc65e6d5256e1166f88331d3b6238e082d9f74613b9b765fb9025", size = 1880957, upload-time = "2025-04-23T18:31:28.956Z" },
- { url = "https://files.pythonhosted.org/packages/ec/6b/1ec2c03837ac00886ba8160ce041ce4e325b41d06a034adbef11339ae422/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eb8c529b2819c37140eb51b914153063d27ed88e3bdc31b71198a198e921e011", size = 1964199, upload-time = "2025-04-23T18:31:31.025Z" },
- { url = "https://files.pythonhosted.org/packages/2d/1d/6bf34d6adb9debd9136bd197ca72642203ce9aaaa85cfcbfcf20f9696e83/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c52b02ad8b4e2cf14ca7b3d918f3eb0ee91e63b3167c32591e57c4317e134f8f", size = 2120296, upload-time = "2025-04-23T18:31:32.514Z" },
- { url = "https://files.pythonhosted.org/packages/e0/94/2bd0aaf5a591e974b32a9f7123f16637776c304471a0ab33cf263cf5591a/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:96081f1605125ba0855dfda83f6f3df5ec90c61195421ba72223de35ccfb2f88", size = 2676109, upload-time = "2025-04-23T18:31:33.958Z" },
- { url = "https://files.pythonhosted.org/packages/f9/41/4b043778cf9c4285d59742281a769eac371b9e47e35f98ad321349cc5d61/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f57a69461af2a5fa6e6bbd7a5f60d3b7e6cebb687f55106933188e79ad155c1", size = 2002028, upload-time = "2025-04-23T18:31:39.095Z" },
- { url = "https://files.pythonhosted.org/packages/cb/d5/7bb781bf2748ce3d03af04d5c969fa1308880e1dca35a9bd94e1a96a922e/pydantic_core-2.33.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:572c7e6c8bb4774d2ac88929e3d1f12bc45714ae5ee6d9a788a9fb35e60bb04b", size = 2100044, upload-time = "2025-04-23T18:31:41.034Z" },
- { url = "https://files.pythonhosted.org/packages/fe/36/def5e53e1eb0ad896785702a5bbfd25eed546cdcf4087ad285021a90ed53/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:db4b41f9bd95fbe5acd76d89920336ba96f03e149097365afe1cb092fceb89a1", size = 2058881, upload-time = "2025-04-23T18:31:42.757Z" },
- { url = "https://files.pythonhosted.org/packages/01/6c/57f8d70b2ee57fc3dc8b9610315949837fa8c11d86927b9bb044f8705419/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6", size = 2227034, upload-time = "2025-04-23T18:31:44.304Z" },
- { url = "https://files.pythonhosted.org/packages/27/b9/9c17f0396a82b3d5cbea4c24d742083422639e7bb1d5bf600e12cb176a13/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:5f483cfb75ff703095c59e365360cb73e00185e01aaea067cd19acffd2ab20ea", size = 2234187, upload-time = "2025-04-23T18:31:45.891Z" },
- { url = "https://files.pythonhosted.org/packages/b0/6a/adf5734ffd52bf86d865093ad70b2ce543415e0e356f6cacabbc0d9ad910/pydantic_core-2.33.2-cp312-cp312-win32.whl", hash = "sha256:9cb1da0f5a471435a7bc7e439b8a728e8b61e59784b2af70d7c169f8dd8ae290", size = 1892628, upload-time = "2025-04-23T18:31:47.819Z" },
- { url = "https://files.pythonhosted.org/packages/43/e4/5479fecb3606c1368d496a825d8411e126133c41224c1e7238be58b87d7e/pydantic_core-2.33.2-cp312-cp312-win_amd64.whl", hash = "sha256:f941635f2a3d96b2973e867144fde513665c87f13fe0e193c158ac51bfaaa7b2", size = 1955866, upload-time = "2025-04-23T18:31:49.635Z" },
- { url = "https://files.pythonhosted.org/packages/0d/24/8b11e8b3e2be9dd82df4b11408a67c61bb4dc4f8e11b5b0fc888b38118b5/pydantic_core-2.33.2-cp312-cp312-win_arm64.whl", hash = "sha256:cca3868ddfaccfbc4bfb1d608e2ccaaebe0ae628e1416aeb9c4d88c001bb45ab", size = 1888894, upload-time = "2025-04-23T18:31:51.609Z" },
- { url = "https://files.pythonhosted.org/packages/46/8c/99040727b41f56616573a28771b1bfa08a3d3fe74d3d513f01251f79f172/pydantic_core-2.33.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:1082dd3e2d7109ad8b7da48e1d4710c8d06c253cbc4a27c1cff4fbcaa97a9e3f", size = 2015688, upload-time = "2025-04-23T18:31:53.175Z" },
- { url = "https://files.pythonhosted.org/packages/3a/cc/5999d1eb705a6cefc31f0b4a90e9f7fc400539b1a1030529700cc1b51838/pydantic_core-2.33.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f517ca031dfc037a9c07e748cefd8d96235088b83b4f4ba8939105d20fa1dcd6", size = 1844808, upload-time = "2025-04-23T18:31:54.79Z" },
- { url = "https://files.pythonhosted.org/packages/6f/5e/a0a7b8885c98889a18b6e376f344da1ef323d270b44edf8174d6bce4d622/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a9f2c9dd19656823cb8250b0724ee9c60a82f3cdf68a080979d13092a3b0fef", size = 1885580, upload-time = "2025-04-23T18:31:57.393Z" },
- { url = "https://files.pythonhosted.org/packages/3b/2a/953581f343c7d11a304581156618c3f592435523dd9d79865903272c256a/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2b0a451c263b01acebe51895bfb0e1cc842a5c666efe06cdf13846c7418caa9a", size = 1973859, upload-time = "2025-04-23T18:31:59.065Z" },
- { url = "https://files.pythonhosted.org/packages/e6/55/f1a813904771c03a3f97f676c62cca0c0a4138654107c1b61f19c644868b/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ea40a64d23faa25e62a70ad163571c0b342b8bf66d5fa612ac0dec4f069d916", size = 2120810, upload-time = "2025-04-23T18:32:00.78Z" },
- { url = "https://files.pythonhosted.org/packages/aa/c3/053389835a996e18853ba107a63caae0b9deb4a276c6b472931ea9ae6e48/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0fb2d542b4d66f9470e8065c5469ec676978d625a8b7a363f07d9a501a9cb36a", size = 2676498, upload-time = "2025-04-23T18:32:02.418Z" },
- { url = "https://files.pythonhosted.org/packages/eb/3c/f4abd740877a35abade05e437245b192f9d0ffb48bbbbd708df33d3cda37/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdac5d6ffa1b5a83bca06ffe7583f5576555e6c8b3a91fbd25ea7780f825f7d", size = 2000611, upload-time = "2025-04-23T18:32:04.152Z" },
- { url = "https://files.pythonhosted.org/packages/59/a7/63ef2fed1837d1121a894d0ce88439fe3e3b3e48c7543b2a4479eb99c2bd/pydantic_core-2.33.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:04a1a413977ab517154eebb2d326da71638271477d6ad87a769102f7c2488c56", size = 2107924, upload-time = "2025-04-23T18:32:06.129Z" },
- { url = "https://files.pythonhosted.org/packages/04/8f/2551964ef045669801675f1cfc3b0d74147f4901c3ffa42be2ddb1f0efc4/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c8e7af2f4e0194c22b5b37205bfb293d166a7344a5b0d0eaccebc376546d77d5", size = 2063196, upload-time = "2025-04-23T18:32:08.178Z" },
- { url = "https://files.pythonhosted.org/packages/26/bd/d9602777e77fc6dbb0c7db9ad356e9a985825547dce5ad1d30ee04903918/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:5c92edd15cd58b3c2d34873597a1e20f13094f59cf88068adb18947df5455b4e", size = 2236389, upload-time = "2025-04-23T18:32:10.242Z" },
- { url = "https://files.pythonhosted.org/packages/42/db/0e950daa7e2230423ab342ae918a794964b053bec24ba8af013fc7c94846/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:65132b7b4a1c0beded5e057324b7e16e10910c106d43675d9bd87d4f38dde162", size = 2239223, upload-time = "2025-04-23T18:32:12.382Z" },
- { url = "https://files.pythonhosted.org/packages/58/4d/4f937099c545a8a17eb52cb67fe0447fd9a373b348ccfa9a87f141eeb00f/pydantic_core-2.33.2-cp313-cp313-win32.whl", hash = "sha256:52fb90784e0a242bb96ec53f42196a17278855b0f31ac7c3cc6f5c1ec4811849", size = 1900473, upload-time = "2025-04-23T18:32:14.034Z" },
- { url = "https://files.pythonhosted.org/packages/a0/75/4a0a9bac998d78d889def5e4ef2b065acba8cae8c93696906c3a91f310ca/pydantic_core-2.33.2-cp313-cp313-win_amd64.whl", hash = "sha256:c083a3bdd5a93dfe480f1125926afcdbf2917ae714bdb80b36d34318b2bec5d9", size = 1955269, upload-time = "2025-04-23T18:32:15.783Z" },
- { url = "https://files.pythonhosted.org/packages/f9/86/1beda0576969592f1497b4ce8e7bc8cbdf614c352426271b1b10d5f0aa64/pydantic_core-2.33.2-cp313-cp313-win_arm64.whl", hash = "sha256:e80b087132752f6b3d714f041ccf74403799d3b23a72722ea2e6ba2e892555b9", size = 1893921, upload-time = "2025-04-23T18:32:18.473Z" },
- { url = "https://files.pythonhosted.org/packages/a4/7d/e09391c2eebeab681df2b74bfe6c43422fffede8dc74187b2b0bf6fd7571/pydantic_core-2.33.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:61c18fba8e5e9db3ab908620af374db0ac1baa69f0f32df4f61ae23f15e586ac", size = 1806162, upload-time = "2025-04-23T18:32:20.188Z" },
- { url = "https://files.pythonhosted.org/packages/f1/3d/847b6b1fed9f8ed3bb95a9ad04fbd0b212e832d4f0f50ff4d9ee5a9f15cf/pydantic_core-2.33.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95237e53bb015f67b63c91af7518a62a8660376a6a0db19b89acc77a4d6199f5", size = 1981560, upload-time = "2025-04-23T18:32:22.354Z" },
- { url = "https://files.pythonhosted.org/packages/6f/9a/e73262f6c6656262b5fdd723ad90f518f579b7bc8622e43a942eec53c938/pydantic_core-2.33.2-cp313-cp313t-win_amd64.whl", hash = "sha256:c2fc0a768ef76c15ab9238afa6da7f69895bb5d1ee83aeea2e3509af4472d0b9", size = 1935777, upload-time = "2025-04-23T18:32:25.088Z" },
- { url = "https://files.pythonhosted.org/packages/30/68/373d55e58b7e83ce371691f6eaa7175e3a24b956c44628eb25d7da007917/pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5c4aa4e82353f65e548c476b37e64189783aa5384903bfea4f41580f255fddfa", size = 2023982, upload-time = "2025-04-23T18:32:53.14Z" },
- { url = "https://files.pythonhosted.org/packages/a4/16/145f54ac08c96a63d8ed6442f9dec17b2773d19920b627b18d4f10a061ea/pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d946c8bf0d5c24bf4fe333af284c59a19358aa3ec18cb3dc4370080da1e8ad29", size = 1858412, upload-time = "2025-04-23T18:32:55.52Z" },
- { url = "https://files.pythonhosted.org/packages/41/b1/c6dc6c3e2de4516c0bb2c46f6a373b91b5660312342a0cf5826e38ad82fa/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:87b31b6846e361ef83fedb187bb5b4372d0da3f7e28d85415efa92d6125d6e6d", size = 1892749, upload-time = "2025-04-23T18:32:57.546Z" },
- { url = "https://files.pythonhosted.org/packages/12/73/8cd57e20afba760b21b742106f9dbdfa6697f1570b189c7457a1af4cd8a0/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aa9d91b338f2df0508606f7009fde642391425189bba6d8c653afd80fd6bb64e", size = 2067527, upload-time = "2025-04-23T18:32:59.771Z" },
- { url = "https://files.pythonhosted.org/packages/e3/d5/0bb5d988cc019b3cba4a78f2d4b3854427fc47ee8ec8e9eaabf787da239c/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2058a32994f1fde4ca0480ab9d1e75a0e8c87c22b53a3ae66554f9af78f2fe8c", size = 2108225, upload-time = "2025-04-23T18:33:04.51Z" },
- { url = "https://files.pythonhosted.org/packages/f1/c5/00c02d1571913d496aabf146106ad8239dc132485ee22efe08085084ff7c/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:0e03262ab796d986f978f79c943fc5f620381be7287148b8010b4097f79a39ec", size = 2069490, upload-time = "2025-04-23T18:33:06.391Z" },
- { url = "https://files.pythonhosted.org/packages/22/a8/dccc38768274d3ed3a59b5d06f59ccb845778687652daa71df0cab4040d7/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:1a8695a8d00c73e50bff9dfda4d540b7dee29ff9b8053e38380426a85ef10052", size = 2237525, upload-time = "2025-04-23T18:33:08.44Z" },
- { url = "https://files.pythonhosted.org/packages/d4/e7/4f98c0b125dda7cf7ccd14ba936218397b44f50a56dd8c16a3091df116c3/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:fa754d1850735a0b0e03bcffd9d4b4343eb417e47196e4485d9cca326073a42c", size = 2238446, upload-time = "2025-04-23T18:33:10.313Z" },
- { url = "https://files.pythonhosted.org/packages/ce/91/2ec36480fdb0b783cd9ef6795753c1dea13882f2e68e73bce76ae8c21e6a/pydantic_core-2.33.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:a11c8d26a50bfab49002947d3d237abe4d9e4b5bdc8846a63537b6488e197808", size = 2066678, upload-time = "2025-04-23T18:33:12.224Z" },
- { url = "https://files.pythonhosted.org/packages/7b/27/d4ae6487d73948d6f20dddcd94be4ea43e74349b56eba82e9bdee2d7494c/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:dd14041875d09cc0f9308e37a6f8b65f5585cf2598a53aa0123df8b129d481f8", size = 2025200, upload-time = "2025-04-23T18:33:14.199Z" },
- { url = "https://files.pythonhosted.org/packages/f1/b8/b3cb95375f05d33801024079b9392a5ab45267a63400bf1866e7ce0f0de4/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d87c561733f66531dced0da6e864f44ebf89a8fba55f31407b00c2f7f9449593", size = 1859123, upload-time = "2025-04-23T18:33:16.555Z" },
- { url = "https://files.pythonhosted.org/packages/05/bc/0d0b5adeda59a261cd30a1235a445bf55c7e46ae44aea28f7bd6ed46e091/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2f82865531efd18d6e07a04a17331af02cb7a651583c418df8266f17a63c6612", size = 1892852, upload-time = "2025-04-23T18:33:18.513Z" },
- { url = "https://files.pythonhosted.org/packages/3e/11/d37bdebbda2e449cb3f519f6ce950927b56d62f0b84fd9cb9e372a26a3d5/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bfb5112df54209d820d7bf9317c7a6c9025ea52e49f46b6a2060104bba37de7", size = 2067484, upload-time = "2025-04-23T18:33:20.475Z" },
- { url = "https://files.pythonhosted.org/packages/8c/55/1f95f0a05ce72ecb02a8a8a1c3be0579bbc29b1d5ab68f1378b7bebc5057/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:64632ff9d614e5eecfb495796ad51b0ed98c453e447a76bcbeeb69615079fc7e", size = 2108896, upload-time = "2025-04-23T18:33:22.501Z" },
- { url = "https://files.pythonhosted.org/packages/53/89/2b2de6c81fa131f423246a9109d7b2a375e83968ad0800d6e57d0574629b/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8", size = 2069475, upload-time = "2025-04-23T18:33:24.528Z" },
- { url = "https://files.pythonhosted.org/packages/b8/e9/1f7efbe20d0b2b10f6718944b5d8ece9152390904f29a78e68d4e7961159/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:de4b83bb311557e439b9e186f733f6c645b9417c84e2eb8203f3f820a4b988bf", size = 2239013, upload-time = "2025-04-23T18:33:26.621Z" },
- { url = "https://files.pythonhosted.org/packages/3c/b2/5309c905a93811524a49b4e031e9851a6b00ff0fb668794472ea7746b448/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:82f68293f055f51b51ea42fafc74b6aad03e70e191799430b90c13d643059ebb", size = 2238715, upload-time = "2025-04-23T18:33:28.656Z" },
- { url = "https://files.pythonhosted.org/packages/32/56/8a7ca5d2cd2cda1d245d34b1c9a942920a718082ae8e54e5f3e5a58b7add/pydantic_core-2.33.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:329467cecfb529c925cf2bbd4d60d2c509bc2fb52a20c1045bf09bb70971a9c1", size = 2066757, upload-time = "2025-04-23T18:33:30.645Z" },
+ { url = "https://files.pythonhosted.org/packages/b3/2c/a5c4640dc7132540109f67fe83b566fbc7512ccf2a068cfa22a243df70c7/pydantic_core-2.41.1-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:e63036298322e9aea1c8b7c0a6c1204d615dbf6ec0668ce5b83ff27f07404a61", size = 2113814, upload-time = "2025-10-06T21:09:50.892Z" },
+ { url = "https://files.pythonhosted.org/packages/e3/e7/a8694c3454a57842095d69c7a4ab3cf81c3c7b590f052738eabfdfc2e234/pydantic_core-2.41.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:241299ca91fc77ef64f11ed909d2d9220a01834e8e6f8de61275c4dd16b7c936", size = 1916660, upload-time = "2025-10-06T21:09:52.783Z" },
+ { url = "https://files.pythonhosted.org/packages/9c/58/29f12e65b19c1877a0269eb4f23c5d2267eded6120a7d6762501ab843dc9/pydantic_core-2.41.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1ab7e594a2a5c24ab8013a7dc8cfe5f2260e80e490685814122081705c2cf2b0", size = 1975071, upload-time = "2025-10-06T21:09:54.009Z" },
+ { url = "https://files.pythonhosted.org/packages/98/26/4e677f2b7ec3fbdd10be6b586a82a814c8ebe3e474024c8df2d4260e564e/pydantic_core-2.41.1-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:b054ef1a78519cb934b58e9c90c09e93b837c935dcd907b891f2b265b129eb6e", size = 2067271, upload-time = "2025-10-06T21:09:55.175Z" },
+ { url = "https://files.pythonhosted.org/packages/29/50/50614bd906089904d7ca1be3b9ecf08c00a327143d48f1decfdc21b3c302/pydantic_core-2.41.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f2ab7d10d0ab2ed6da54c757233eb0f48ebfb4f86e9b88ccecb3f92bbd61a538", size = 2253207, upload-time = "2025-10-06T21:09:56.709Z" },
+ { url = "https://files.pythonhosted.org/packages/ea/58/b1e640b4ca559273cca7c28e0fe8891d5d8e9a600f5ab4882670ec107549/pydantic_core-2.41.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2757606b7948bb853a27e4040820306eaa0ccb9e8f9f8a0fa40cb674e170f350", size = 2375052, upload-time = "2025-10-06T21:09:57.97Z" },
+ { url = "https://files.pythonhosted.org/packages/53/25/cd47df3bfb24350e03835f0950288d1054f1cc9a8023401dabe6d4ff2834/pydantic_core-2.41.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cec0e75eb61f606bad0a32f2be87507087514e26e8c73db6cbdb8371ccd27917", size = 2076834, upload-time = "2025-10-06T21:09:59.58Z" },
+ { url = "https://files.pythonhosted.org/packages/ec/b4/71b2c77e5df527fbbc1a03e72c3fd96c44cd10d4241a81befef8c12b9fc4/pydantic_core-2.41.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0234236514f44a5bf552105cfe2543a12f48203397d9d0f866affa569345a5b5", size = 2195374, upload-time = "2025-10-06T21:10:01.18Z" },
+ { url = "https://files.pythonhosted.org/packages/aa/08/4b8a50733005865efde284fec45da75fe16a258f706e16323c5ace4004eb/pydantic_core-2.41.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:1b974e41adfbb4ebb0f65fc4ca951347b17463d60893ba7d5f7b9bb087c83897", size = 2156060, upload-time = "2025-10-06T21:10:02.74Z" },
+ { url = "https://files.pythonhosted.org/packages/83/c3/1037cb603ef2130c210150a51b1710d86825b5c28df54a55750099f91196/pydantic_core-2.41.1-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:248dafb3204136113c383e91a4d815269f51562b6659b756cf3df14eefc7d0bb", size = 2331640, upload-time = "2025-10-06T21:10:04.39Z" },
+ { url = "https://files.pythonhosted.org/packages/56/4c/52d111869610e6b1a46e1f1035abcdc94d0655587e39104433a290e9f377/pydantic_core-2.41.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:678f9d76a91d6bcedd7568bbf6beb77ae8447f85d1aeebaab7e2f0829cfc3a13", size = 2329844, upload-time = "2025-10-06T21:10:05.68Z" },
+ { url = "https://files.pythonhosted.org/packages/32/5d/4b435f0b52ab543967761aca66b84ad3f0026e491e57de47693d15d0a8db/pydantic_core-2.41.1-cp310-cp310-win32.whl", hash = "sha256:dff5bee1d21ee58277900692a641925d2dddfde65182c972569b1a276d2ac8fb", size = 1991289, upload-time = "2025-10-06T21:10:07.199Z" },
+ { url = "https://files.pythonhosted.org/packages/88/52/31b4deafc1d3cb96d0e7c0af70f0dc05454982d135d07f5117e6336153e8/pydantic_core-2.41.1-cp310-cp310-win_amd64.whl", hash = "sha256:5042da12e5d97d215f91567110fdfa2e2595a25f17c19b9ff024f31c34f9b53e", size = 2027747, upload-time = "2025-10-06T21:10:08.503Z" },
+ { url = "https://files.pythonhosted.org/packages/f6/a9/ec440f02e57beabdfd804725ef1e38ac1ba00c49854d298447562e119513/pydantic_core-2.41.1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4f276a6134fe1fc1daa692642a3eaa2b7b858599c49a7610816388f5e37566a1", size = 2111456, upload-time = "2025-10-06T21:10:09.824Z" },
+ { url = "https://files.pythonhosted.org/packages/f0/f9/6bc15bacfd8dcfc073a1820a564516d9c12a435a9a332d4cbbfd48828ddd/pydantic_core-2.41.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:07588570a805296ece009c59d9a679dc08fab72fb337365afb4f3a14cfbfc176", size = 1915012, upload-time = "2025-10-06T21:10:11.599Z" },
+ { url = "https://files.pythonhosted.org/packages/38/8a/d9edcdcdfe80bade17bed424284427c08bea892aaec11438fa52eaeaf79c/pydantic_core-2.41.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:28527e4b53400cd60ffbd9812ccb2b5135d042129716d71afd7e45bf42b855c0", size = 1973762, upload-time = "2025-10-06T21:10:13.154Z" },
+ { url = "https://files.pythonhosted.org/packages/d5/b3/ff225c6d49fba4279de04677c1c876fc3dc6562fd0c53e9bfd66f58c51a8/pydantic_core-2.41.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:46a1c935c9228bad738c8a41de06478770927baedf581d172494ab36a6b96575", size = 2065386, upload-time = "2025-10-06T21:10:14.436Z" },
+ { url = "https://files.pythonhosted.org/packages/47/ba/183e8c0be4321314af3fd1ae6bfc7eafdd7a49bdea5da81c56044a207316/pydantic_core-2.41.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:447ddf56e2b7d28d200d3e9eafa936fe40485744b5a824b67039937580b3cb20", size = 2252317, upload-time = "2025-10-06T21:10:15.719Z" },
+ { url = "https://files.pythonhosted.org/packages/57/c5/aab61e94fd02f45c65f1f8c9ec38bb3b33fbf001a1837c74870e97462572/pydantic_core-2.41.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:63892ead40c1160ac860b5debcc95c95c5a0035e543a8b5a4eac70dd22e995f4", size = 2373405, upload-time = "2025-10-06T21:10:17.017Z" },
+ { url = "https://files.pythonhosted.org/packages/e5/4f/3aaa3bd1ea420a15acc42d7d3ccb3b0bbc5444ae2f9dbc1959f8173e16b8/pydantic_core-2.41.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f4a9543ca355e6df8fbe9c83e9faab707701e9103ae857ecb40f1c0cf8b0e94d", size = 2073794, upload-time = "2025-10-06T21:10:18.383Z" },
+ { url = "https://files.pythonhosted.org/packages/58/bd/e3975cdebe03ec080ef881648de316c73f2a6be95c14fc4efb2f7bdd0d41/pydantic_core-2.41.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f2611bdb694116c31e551ed82e20e39a90bea9b7ad9e54aaf2d045ad621aa7a1", size = 2194430, upload-time = "2025-10-06T21:10:19.638Z" },
+ { url = "https://files.pythonhosted.org/packages/2b/b8/6b7e7217f147d3b3105b57fb1caec3c4f667581affdfaab6d1d277e1f749/pydantic_core-2.41.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fecc130893a9b5f7bfe230be1bb8c61fe66a19db8ab704f808cb25a82aad0bc9", size = 2154611, upload-time = "2025-10-06T21:10:21.28Z" },
+ { url = "https://files.pythonhosted.org/packages/fe/7b/239c2fe76bd8b7eef9ae2140d737368a3c6fea4fd27f8f6b4cde6baa3ce9/pydantic_core-2.41.1-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:1e2df5f8344c99b6ea5219f00fdc8950b8e6f2c422fbc1cc122ec8641fac85a1", size = 2329809, upload-time = "2025-10-06T21:10:22.678Z" },
+ { url = "https://files.pythonhosted.org/packages/bd/2e/77a821a67ff0786f2f14856d6bd1348992f695ee90136a145d7a445c1ff6/pydantic_core-2.41.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:35291331e9d8ed94c257bab6be1cb3a380b5eee570a2784bffc055e18040a2ea", size = 2327907, upload-time = "2025-10-06T21:10:24.447Z" },
+ { url = "https://files.pythonhosted.org/packages/fd/9a/b54512bb9df7f64c586b369328c30481229b70ca6a5fcbb90b715e15facf/pydantic_core-2.41.1-cp311-cp311-win32.whl", hash = "sha256:2876a095292668d753f1a868c4a57c4ac9f6acbd8edda8debe4218d5848cf42f", size = 1989964, upload-time = "2025-10-06T21:10:25.676Z" },
+ { url = "https://files.pythonhosted.org/packages/9d/72/63c9a4f1a5c950e65dd522d7dd67f167681f9d4f6ece3b80085a0329f08f/pydantic_core-2.41.1-cp311-cp311-win_amd64.whl", hash = "sha256:b92d6c628e9a338846a28dfe3fcdc1a3279388624597898b105e078cdfc59298", size = 2025158, upload-time = "2025-10-06T21:10:27.522Z" },
+ { url = "https://files.pythonhosted.org/packages/d8/16/4e2706184209f61b50c231529257c12eb6bd9eb36e99ea1272e4815d2200/pydantic_core-2.41.1-cp311-cp311-win_arm64.whl", hash = "sha256:7d82ae99409eb69d507a89835488fb657faa03ff9968a9379567b0d2e2e56bc5", size = 1972297, upload-time = "2025-10-06T21:10:28.814Z" },
+ { url = "https://files.pythonhosted.org/packages/ee/bc/5f520319ee1c9e25010412fac4154a72e0a40d0a19eb00281b1f200c0947/pydantic_core-2.41.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:db2f82c0ccbce8f021ad304ce35cbe02aa2f95f215cac388eed542b03b4d5eb4", size = 2099300, upload-time = "2025-10-06T21:10:30.463Z" },
+ { url = "https://files.pythonhosted.org/packages/31/14/010cd64c5c3814fb6064786837ec12604be0dd46df3327cf8474e38abbbd/pydantic_core-2.41.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:47694a31c710ced9205d5f1e7e8af3ca57cbb8a503d98cb9e33e27c97a501601", size = 1910179, upload-time = "2025-10-06T21:10:31.782Z" },
+ { url = "https://files.pythonhosted.org/packages/8e/2e/23fc2a8a93efad52df302fdade0a60f471ecc0c7aac889801ac24b4c07d6/pydantic_core-2.41.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:93e9decce94daf47baf9e9d392f5f2557e783085f7c5e522011545d9d6858e00", size = 1957225, upload-time = "2025-10-06T21:10:33.11Z" },
+ { url = "https://files.pythonhosted.org/packages/b9/b6/6db08b2725b2432b9390844852e11d320281e5cea8a859c52c68001975fa/pydantic_core-2.41.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ab0adafdf2b89c8b84f847780a119437a0931eca469f7b44d356f2b426dd9741", size = 2053315, upload-time = "2025-10-06T21:10:34.87Z" },
+ { url = "https://files.pythonhosted.org/packages/61/d9/4de44600f2d4514b44f3f3aeeda2e14931214b6b5bf52479339e801ce748/pydantic_core-2.41.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5da98cc81873f39fd56882e1569c4677940fbc12bce6213fad1ead784192d7c8", size = 2224298, upload-time = "2025-10-06T21:10:36.233Z" },
+ { url = "https://files.pythonhosted.org/packages/7a/ae/dbe51187a7f35fc21b283c5250571a94e36373eb557c1cba9f29a9806dcf/pydantic_core-2.41.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:209910e88afb01fd0fd403947b809ba8dba0e08a095e1f703294fda0a8fdca51", size = 2351797, upload-time = "2025-10-06T21:10:37.601Z" },
+ { url = "https://files.pythonhosted.org/packages/b5/a7/975585147457c2e9fb951c7c8dab56deeb6aa313f3aa72c2fc0df3f74a49/pydantic_core-2.41.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:365109d1165d78d98e33c5bfd815a9b5d7d070f578caefaabcc5771825b4ecb5", size = 2074921, upload-time = "2025-10-06T21:10:38.927Z" },
+ { url = "https://files.pythonhosted.org/packages/62/37/ea94d1d0c01dec1b7d236c7cec9103baab0021f42500975de3d42522104b/pydantic_core-2.41.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:706abf21e60a2857acdb09502bc853ee5bce732955e7b723b10311114f033115", size = 2187767, upload-time = "2025-10-06T21:10:40.651Z" },
+ { url = "https://files.pythonhosted.org/packages/d3/fe/694cf9fdd3a777a618c3afd210dba7b414cb8a72b1bd29b199c2e5765fee/pydantic_core-2.41.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:bf0bd5417acf7f6a7ec3b53f2109f587be176cb35f9cf016da87e6017437a72d", size = 2136062, upload-time = "2025-10-06T21:10:42.09Z" },
+ { url = "https://files.pythonhosted.org/packages/0f/ae/174aeabd89916fbd2988cc37b81a59e1186e952afd2a7ed92018c22f31ca/pydantic_core-2.41.1-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:2e71b1c6ceb9c78424ae9f63a07292fb769fb890a4e7efca5554c47f33a60ea5", size = 2317819, upload-time = "2025-10-06T21:10:43.974Z" },
+ { url = "https://files.pythonhosted.org/packages/65/e8/e9aecafaebf53fc456314f72886068725d6fba66f11b013532dc21259343/pydantic_core-2.41.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:80745b9770b4a38c25015b517451c817799bfb9d6499b0d13d8227ec941cb513", size = 2312267, upload-time = "2025-10-06T21:10:45.34Z" },
+ { url = "https://files.pythonhosted.org/packages/35/2f/1c2e71d2a052f9bb2f2df5a6a05464a0eb800f9e8d9dd800202fe31219e1/pydantic_core-2.41.1-cp312-cp312-win32.whl", hash = "sha256:83b64d70520e7890453f1aa21d66fda44e7b35f1cfea95adf7b4289a51e2b479", size = 1990927, upload-time = "2025-10-06T21:10:46.738Z" },
+ { url = "https://files.pythonhosted.org/packages/b1/78/562998301ff2588b9c6dcc5cb21f52fa919d6e1decc75a35055feb973594/pydantic_core-2.41.1-cp312-cp312-win_amd64.whl", hash = "sha256:377defd66ee2003748ee93c52bcef2d14fde48fe28a0b156f88c3dbf9bc49a50", size = 2034703, upload-time = "2025-10-06T21:10:48.524Z" },
+ { url = "https://files.pythonhosted.org/packages/b2/53/d95699ce5a5cdb44bb470bd818b848b9beadf51459fd4ea06667e8ede862/pydantic_core-2.41.1-cp312-cp312-win_arm64.whl", hash = "sha256:c95caff279d49c1d6cdfe2996e6c2ad712571d3b9caaa209a404426c326c4bde", size = 1972719, upload-time = "2025-10-06T21:10:50.256Z" },
+ { url = "https://files.pythonhosted.org/packages/27/8a/6d54198536a90a37807d31a156642aae7a8e1263ed9fe6fc6245defe9332/pydantic_core-2.41.1-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:70e790fce5f05204ef4403159857bfcd587779da78627b0babb3654f75361ebf", size = 2105825, upload-time = "2025-10-06T21:10:51.719Z" },
+ { url = "https://files.pythonhosted.org/packages/4f/2e/4784fd7b22ac9c8439db25bf98ffed6853d01e7e560a346e8af821776ccc/pydantic_core-2.41.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:9cebf1ca35f10930612d60bd0f78adfacee824c30a880e3534ba02c207cceceb", size = 1910126, upload-time = "2025-10-06T21:10:53.145Z" },
+ { url = "https://files.pythonhosted.org/packages/f3/92/31eb0748059ba5bd0aa708fb4bab9fcb211461ddcf9e90702a6542f22d0d/pydantic_core-2.41.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:170406a37a5bc82c22c3274616bf6f17cc7df9c4a0a0a50449e559cb755db669", size = 1961472, upload-time = "2025-10-06T21:10:55.754Z" },
+ { url = "https://files.pythonhosted.org/packages/ab/91/946527792275b5c4c7dde4cfa3e81241bf6900e9fee74fb1ba43e0c0f1ab/pydantic_core-2.41.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:12d4257fc9187a0ccd41b8b327d6a4e57281ab75e11dda66a9148ef2e1fb712f", size = 2063230, upload-time = "2025-10-06T21:10:57.179Z" },
+ { url = "https://files.pythonhosted.org/packages/31/5d/a35c5d7b414e5c0749f1d9f0d159ee2ef4bab313f499692896b918014ee3/pydantic_core-2.41.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a75a33b4db105dd1c8d57839e17ee12db8d5ad18209e792fa325dbb4baeb00f4", size = 2229469, upload-time = "2025-10-06T21:10:59.409Z" },
+ { url = "https://files.pythonhosted.org/packages/21/4d/8713737c689afa57ecfefe38db78259d4484c97aa494979e6a9d19662584/pydantic_core-2.41.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:08a589f850803a74e0fcb16a72081cafb0d72a3cdda500106942b07e76b7bf62", size = 2347986, upload-time = "2025-10-06T21:11:00.847Z" },
+ { url = "https://files.pythonhosted.org/packages/f6/ec/929f9a3a5ed5cda767081494bacd32f783e707a690ce6eeb5e0730ec4986/pydantic_core-2.41.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7a97939d6ea44763c456bd8a617ceada2c9b96bb5b8ab3dfa0d0827df7619014", size = 2072216, upload-time = "2025-10-06T21:11:02.43Z" },
+ { url = "https://files.pythonhosted.org/packages/26/55/a33f459d4f9cc8786d9db42795dbecc84fa724b290d7d71ddc3d7155d46a/pydantic_core-2.41.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d2ae423c65c556f09569524b80ffd11babff61f33055ef9773d7c9fabc11ed8d", size = 2193047, upload-time = "2025-10-06T21:11:03.787Z" },
+ { url = "https://files.pythonhosted.org/packages/77/af/d5c6959f8b089f2185760a2779079e3c2c411bfc70ea6111f58367851629/pydantic_core-2.41.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:4dc703015fbf8764d6a8001c327a87f1823b7328d40b47ce6000c65918ad2b4f", size = 2140613, upload-time = "2025-10-06T21:11:05.607Z" },
+ { url = "https://files.pythonhosted.org/packages/58/e5/2c19bd2a14bffe7fabcf00efbfbd3ac430aaec5271b504a938ff019ac7be/pydantic_core-2.41.1-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:968e4ffdfd35698a5fe659e5e44c508b53664870a8e61c8f9d24d3d145d30257", size = 2327641, upload-time = "2025-10-06T21:11:07.143Z" },
+ { url = "https://files.pythonhosted.org/packages/93/ef/e0870ccda798c54e6b100aff3c4d49df5458fd64217e860cb9c3b0a403f4/pydantic_core-2.41.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:fff2b76c8e172d34771cd4d4f0ade08072385310f214f823b5a6ad4006890d32", size = 2318229, upload-time = "2025-10-06T21:11:08.73Z" },
+ { url = "https://files.pythonhosted.org/packages/b1/4b/c3b991d95f5deb24d0bd52e47bcf716098fa1afe0ce2d4bd3125b38566ba/pydantic_core-2.41.1-cp313-cp313-win32.whl", hash = "sha256:a38a5263185407ceb599f2f035faf4589d57e73c7146d64f10577f6449e8171d", size = 1997911, upload-time = "2025-10-06T21:11:10.329Z" },
+ { url = "https://files.pythonhosted.org/packages/a7/ce/5c316fd62e01f8d6be1b7ee6b54273214e871772997dc2c95e204997a055/pydantic_core-2.41.1-cp313-cp313-win_amd64.whl", hash = "sha256:b42ae7fd6760782c975897e1fdc810f483b021b32245b0105d40f6e7a3803e4b", size = 2034301, upload-time = "2025-10-06T21:11:12.113Z" },
+ { url = "https://files.pythonhosted.org/packages/29/41/902640cfd6a6523194123e2c3373c60f19006447f2fb06f76de4e8466c5b/pydantic_core-2.41.1-cp313-cp313-win_arm64.whl", hash = "sha256:ad4111acc63b7384e205c27a2f15e23ac0ee21a9d77ad6f2e9cb516ec90965fb", size = 1977238, upload-time = "2025-10-06T21:11:14.1Z" },
+ { url = "https://files.pythonhosted.org/packages/04/04/28b040e88c1b89d851278478842f0bdf39c7a05da9e850333c6c8cbe7dfa/pydantic_core-2.41.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:440d0df7415b50084a4ba9d870480c16c5f67c0d1d4d5119e3f70925533a0edc", size = 1875626, upload-time = "2025-10-06T21:11:15.69Z" },
+ { url = "https://files.pythonhosted.org/packages/d6/58/b41dd3087505220bb58bc81be8c3e8cbc037f5710cd3c838f44f90bdd704/pydantic_core-2.41.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:71eaa38d342099405dae6484216dcf1e8e4b0bebd9b44a4e08c9b43db6a2ab67", size = 2045708, upload-time = "2025-10-06T21:11:17.258Z" },
+ { url = "https://files.pythonhosted.org/packages/d7/b8/760f23754e40bf6c65b94a69b22c394c24058a0ef7e2aa471d2e39219c1a/pydantic_core-2.41.1-cp313-cp313t-win_amd64.whl", hash = "sha256:555ecf7e50f1161d3f693bc49f23c82cf6cdeafc71fa37a06120772a09a38795", size = 1997171, upload-time = "2025-10-06T21:11:18.822Z" },
+ { url = "https://files.pythonhosted.org/packages/41/12/cec246429ddfa2778d2d6301eca5362194dc8749ecb19e621f2f65b5090f/pydantic_core-2.41.1-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:05226894a26f6f27e1deb735d7308f74ef5fa3a6de3e0135bb66cdcaee88f64b", size = 2107836, upload-time = "2025-10-06T21:11:20.432Z" },
+ { url = "https://files.pythonhosted.org/packages/20/39/baba47f8d8b87081302498e610aefc37142ce6a1cc98b2ab6b931a162562/pydantic_core-2.41.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:85ff7911c6c3e2fd8d3779c50925f6406d770ea58ea6dde9c230d35b52b16b4a", size = 1904449, upload-time = "2025-10-06T21:11:22.185Z" },
+ { url = "https://files.pythonhosted.org/packages/50/32/9a3d87cae2c75a5178334b10358d631bd094b916a00a5993382222dbfd92/pydantic_core-2.41.1-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:47f1f642a205687d59b52dc1a9a607f45e588f5a2e9eeae05edd80c7a8c47674", size = 1961750, upload-time = "2025-10-06T21:11:24.348Z" },
+ { url = "https://files.pythonhosted.org/packages/27/42/a96c9d793a04cf2a9773bff98003bb154087b94f5530a2ce6063ecfec583/pydantic_core-2.41.1-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:df11c24e138876ace5ec6043e5cae925e34cf38af1a1b3d63589e8f7b5f5cdc4", size = 2063305, upload-time = "2025-10-06T21:11:26.556Z" },
+ { url = "https://files.pythonhosted.org/packages/3e/8d/028c4b7d157a005b1f52c086e2d4b0067886b213c86220c1153398dbdf8f/pydantic_core-2.41.1-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7f0bf7f5c8f7bf345c527e8a0d72d6b26eda99c1227b0c34e7e59e181260de31", size = 2228959, upload-time = "2025-10-06T21:11:28.426Z" },
+ { url = "https://files.pythonhosted.org/packages/08/f7/ee64cda8fcc9ca3f4716e6357144f9ee71166775df582a1b6b738bf6da57/pydantic_core-2.41.1-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:82b887a711d341c2c47352375d73b029418f55b20bd7815446d175a70effa706", size = 2345421, upload-time = "2025-10-06T21:11:30.226Z" },
+ { url = "https://files.pythonhosted.org/packages/13/c0/e8ec05f0f5ee7a3656973ad9cd3bc73204af99f6512c1a4562f6fb4b3f7d/pydantic_core-2.41.1-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b5f1d5d6bbba484bdf220c72d8ecd0be460f4bd4c5e534a541bb2cd57589fb8b", size = 2065288, upload-time = "2025-10-06T21:11:32.019Z" },
+ { url = "https://files.pythonhosted.org/packages/0a/25/d77a73ff24e2e4fcea64472f5e39b0402d836da9b08b5361a734d0153023/pydantic_core-2.41.1-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2bf1917385ebe0f968dc5c6ab1375886d56992b93ddfe6bf52bff575d03662be", size = 2189759, upload-time = "2025-10-06T21:11:33.753Z" },
+ { url = "https://files.pythonhosted.org/packages/66/45/4a4ebaaae12a740552278d06fe71418c0f2869537a369a89c0e6723b341d/pydantic_core-2.41.1-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:4f94f3ab188f44b9a73f7295663f3ecb8f2e2dd03a69c8f2ead50d37785ecb04", size = 2140747, upload-time = "2025-10-06T21:11:35.781Z" },
+ { url = "https://files.pythonhosted.org/packages/da/6d/b727ce1022f143194a36593243ff244ed5a1eb3c9122296bf7e716aa37ba/pydantic_core-2.41.1-cp314-cp314-musllinux_1_1_armv7l.whl", hash = "sha256:3925446673641d37c30bd84a9d597e49f72eacee8b43322c8999fa17d5ae5bc4", size = 2327416, upload-time = "2025-10-06T21:11:37.75Z" },
+ { url = "https://files.pythonhosted.org/packages/6f/8c/02df9d8506c427787059f87c6c7253435c6895e12472a652d9616ee0fc95/pydantic_core-2.41.1-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:49bd51cc27adb980c7b97357ae036ce9b3c4d0bb406e84fbe16fb2d368b602a8", size = 2318138, upload-time = "2025-10-06T21:11:39.463Z" },
+ { url = "https://files.pythonhosted.org/packages/98/67/0cf429a7d6802536941f430e6e3243f6d4b68f41eeea4b242372f1901794/pydantic_core-2.41.1-cp314-cp314-win32.whl", hash = "sha256:a31ca0cd0e4d12ea0df0077df2d487fc3eb9d7f96bbb13c3c5b88dcc21d05159", size = 1998429, upload-time = "2025-10-06T21:11:41.989Z" },
+ { url = "https://files.pythonhosted.org/packages/38/60/742fef93de5d085022d2302a6317a2b34dbfe15258e9396a535c8a100ae7/pydantic_core-2.41.1-cp314-cp314-win_amd64.whl", hash = "sha256:1b5c4374a152e10a22175d7790e644fbd8ff58418890e07e2073ff9d4414efae", size = 2028870, upload-time = "2025-10-06T21:11:43.66Z" },
+ { url = "https://files.pythonhosted.org/packages/31/38/cdd8ccb8555ef7720bd7715899bd6cfbe3c29198332710e1b61b8f5dd8b8/pydantic_core-2.41.1-cp314-cp314-win_arm64.whl", hash = "sha256:4fee76d757639b493eb600fba668f1e17475af34c17dd61db7a47e824d464ca9", size = 1974275, upload-time = "2025-10-06T21:11:45.476Z" },
+ { url = "https://files.pythonhosted.org/packages/e7/7e/8ac10ccb047dc0221aa2530ec3c7c05ab4656d4d4bd984ee85da7f3d5525/pydantic_core-2.41.1-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:f9b9c968cfe5cd576fdd7361f47f27adeb120517e637d1b189eea1c3ece573f4", size = 1875124, upload-time = "2025-10-06T21:11:47.591Z" },
+ { url = "https://files.pythonhosted.org/packages/c3/e4/7d9791efeb9c7d97e7268f8d20e0da24d03438a7fa7163ab58f1073ba968/pydantic_core-2.41.1-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f1ebc7ab67b856384aba09ed74e3e977dded40e693de18a4f197c67d0d4e6d8e", size = 2043075, upload-time = "2025-10-06T21:11:49.542Z" },
+ { url = "https://files.pythonhosted.org/packages/2d/c3/3f6e6b2342ac11ac8cd5cb56e24c7b14afa27c010e82a765ffa5f771884a/pydantic_core-2.41.1-cp314-cp314t-win_amd64.whl", hash = "sha256:8ae0dc57b62a762985bc7fbf636be3412394acc0ddb4ade07fe104230f1b9762", size = 1995341, upload-time = "2025-10-06T21:11:51.497Z" },
+ { url = "https://files.pythonhosted.org/packages/16/89/d0afad37ba25f5801735af1472e650b86baad9fe807a42076508e4824a2a/pydantic_core-2.41.1-graalpy311-graalpy242_311_native-macosx_10_12_x86_64.whl", hash = "sha256:68f2251559b8efa99041bb63571ec7cdd2d715ba74cc82b3bc9eff824ebc8bf0", size = 2124001, upload-time = "2025-10-07T10:49:54.369Z" },
+ { url = "https://files.pythonhosted.org/packages/8e/c4/08609134b34520568ddebb084d9ed0a2a3f5f52b45739e6e22cb3a7112eb/pydantic_core-2.41.1-graalpy311-graalpy242_311_native-macosx_11_0_arm64.whl", hash = "sha256:c7bc140c596097cb53b30546ca257dbe3f19282283190b1b5142928e5d5d3a20", size = 1941841, upload-time = "2025-10-07T10:49:56.248Z" },
+ { url = "https://files.pythonhosted.org/packages/2a/43/94a4877094e5fe19a3f37e7e817772263e2c573c94f1e3fa2b1eee56ef3b/pydantic_core-2.41.1-graalpy311-graalpy242_311_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2896510fce8f4725ec518f8b9d7f015a00db249d2fd40788f442af303480063d", size = 1961129, upload-time = "2025-10-07T10:49:58.298Z" },
+ { url = "https://files.pythonhosted.org/packages/a2/30/23a224d7e25260eb5f69783a63667453037e07eb91ff0e62dabaadd47128/pydantic_core-2.41.1-graalpy311-graalpy242_311_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ced20e62cfa0f496ba68fa5d6c7ee71114ea67e2a5da3114d6450d7f4683572a", size = 2148770, upload-time = "2025-10-07T10:49:59.959Z" },
+ { url = "https://files.pythonhosted.org/packages/2b/3e/a51c5f5d37b9288ba30683d6e96f10fa8f1defad1623ff09f1020973b577/pydantic_core-2.41.1-graalpy312-graalpy250_312_native-macosx_10_12_x86_64.whl", hash = "sha256:b04fa9ed049461a7398138c604b00550bc89e3e1151d84b81ad6dc93e39c4c06", size = 2115344, upload-time = "2025-10-07T10:50:02.466Z" },
+ { url = "https://files.pythonhosted.org/packages/5a/bd/389504c9e0600ef4502cd5238396b527afe6ef8981a6a15cd1814fc7b434/pydantic_core-2.41.1-graalpy312-graalpy250_312_native-macosx_11_0_arm64.whl", hash = "sha256:b3b7d9cfbfdc43c80a16638c6dc2768e3956e73031fca64e8e1a3ae744d1faeb", size = 1927994, upload-time = "2025-10-07T10:50:04.379Z" },
+ { url = "https://files.pythonhosted.org/packages/ff/9c/5111c6b128861cb792a4c082677e90dac4f2e090bb2e2fe06aa5b2d39027/pydantic_core-2.41.1-graalpy312-graalpy250_312_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eec83fc6abef04c7f9bec616e2d76ee9a6a4ae2a359b10c21d0f680e24a247ca", size = 1959394, upload-time = "2025-10-07T10:50:06.335Z" },
+ { url = "https://files.pythonhosted.org/packages/14/3f/cfec8b9a0c48ce5d64409ec5e1903cb0b7363da38f14b41de2fcb3712700/pydantic_core-2.41.1-graalpy312-graalpy250_312_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6771a2d9f83c4038dfad5970a3eef215940682b2175e32bcc817bdc639019b28", size = 2147365, upload-time = "2025-10-07T10:50:07.978Z" },
+ { url = "https://files.pythonhosted.org/packages/d4/31/f403d7ca8352e3e4df352ccacd200f5f7f7fe81cef8e458515f015091625/pydantic_core-2.41.1-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:fabcbdb12de6eada8d6e9a759097adb3c15440fafc675b3e94ae5c9cb8d678a0", size = 2114268, upload-time = "2025-10-07T10:50:10.257Z" },
+ { url = "https://files.pythonhosted.org/packages/6e/b5/334473b6d2810df84db67f03d4f666acacfc538512c2d2a254074fee0889/pydantic_core-2.41.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:80e97ccfaf0aaf67d55de5085b0ed0d994f57747d9d03f2de5cc9847ca737b08", size = 1935786, upload-time = "2025-10-07T10:50:12.333Z" },
+ { url = "https://files.pythonhosted.org/packages/ea/5e/45513e4dc621f47397cfa5fef12ba8fa5e8b1c4c07f2ff2a5fef8ff81b25/pydantic_core-2.41.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:34df1fe8fea5d332484a763702e8b6a54048a9d4fe6ccf41e34a128238e01f52", size = 1971995, upload-time = "2025-10-07T10:50:14.071Z" },
+ { url = "https://files.pythonhosted.org/packages/22/e3/f1797c168e5f52b973bed1c585e99827a22d5e579d1ed57d51bc15b14633/pydantic_core-2.41.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:421b5595f845842fc093f7250e24ee395f54ca62d494fdde96f43ecf9228ae01", size = 2191264, upload-time = "2025-10-07T10:50:15.788Z" },
+ { url = "https://files.pythonhosted.org/packages/bb/e1/24ef4c3b4ab91c21c3a09a966c7d2cffe101058a7bfe5cc8b2c7c7d574e2/pydantic_core-2.41.1-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:dce8b22663c134583aaad24827863306a933f576c79da450be3984924e2031d1", size = 2152430, upload-time = "2025-10-07T10:50:18.018Z" },
+ { url = "https://files.pythonhosted.org/packages/35/74/70c1e225d67f7ef3fdba02c506d9011efaf734020914920b2aa3d1a45e61/pydantic_core-2.41.1-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:300a9c162fea9906cc5c103893ca2602afd84f0ec90d3be36f4cc360125d22e1", size = 2324691, upload-time = "2025-10-07T10:50:19.801Z" },
+ { url = "https://files.pythonhosted.org/packages/c8/bf/dd4d21037c8bef0d8cce90a86a3f2dcb011c30086db2a10113c3eea23eba/pydantic_core-2.41.1-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:e019167628f6e6161ae7ab9fb70f6d076a0bf0d55aa9b20833f86a320c70dd65", size = 2324493, upload-time = "2025-10-07T10:50:21.568Z" },
+ { url = "https://files.pythonhosted.org/packages/7e/78/3093b334e9c9796c8236a4701cd2ddef1c56fb0928fe282a10c797644380/pydantic_core-2.41.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:13ab9cc2de6f9d4ab645a050ae5aee61a2424ac4d3a16ba23d4c2027705e0301", size = 2146156, upload-time = "2025-10-07T10:50:23.475Z" },
+ { url = "https://files.pythonhosted.org/packages/e6/6c/fa3e45c2b054a1e627a89a364917f12cbe3abc3e91b9004edaae16e7b3c5/pydantic_core-2.41.1-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:af2385d3f98243fb733862f806c5bb9122e5fba05b373e3af40e3c82d711cef1", size = 2112094, upload-time = "2025-10-07T10:50:25.513Z" },
+ { url = "https://files.pythonhosted.org/packages/e5/17/7eebc38b4658cc8e6902d0befc26388e4c2a5f2e179c561eeb43e1922c7b/pydantic_core-2.41.1-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:6550617a0c2115be56f90c31a5370261d8ce9dbf051c3ed53b51172dd34da696", size = 1935300, upload-time = "2025-10-07T10:50:27.715Z" },
+ { url = "https://files.pythonhosted.org/packages/2b/00/9fe640194a1717a464ab861d43595c268830f98cb1e2705aa134b3544b70/pydantic_core-2.41.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc17b6ecf4983d298686014c92ebc955a9f9baf9f57dad4065e7906e7bee6222", size = 1970417, upload-time = "2025-10-07T10:50:29.573Z" },
+ { url = "https://files.pythonhosted.org/packages/b2/ad/f4cdfaf483b78ee65362363e73b6b40c48e067078d7b146e8816d5945ad6/pydantic_core-2.41.1-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:42ae9352cf211f08b04ea110563d6b1e415878eea5b4c70f6bdb17dca3b932d2", size = 2190745, upload-time = "2025-10-07T10:50:31.48Z" },
+ { url = "https://files.pythonhosted.org/packages/cb/c1/18f416d40a10f44e9387497ba449f40fdb1478c61ba05c4b6bdb82300362/pydantic_core-2.41.1-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:e82947de92068b0a21681a13dd2102387197092fbe7defcfb8453e0913866506", size = 2150888, upload-time = "2025-10-07T10:50:33.477Z" },
+ { url = "https://files.pythonhosted.org/packages/42/30/134c8a921630d8a88d6f905a562495a6421e959a23c19b0f49b660801d67/pydantic_core-2.41.1-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:e244c37d5471c9acdcd282890c6c4c83747b77238bfa19429b8473586c907656", size = 2324489, upload-time = "2025-10-07T10:50:36.48Z" },
+ { url = "https://files.pythonhosted.org/packages/9c/48/a9263aeaebdec81e941198525b43edb3b44f27cfa4cb8005b8d3eb8dec72/pydantic_core-2.41.1-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:1e798b4b304a995110d41ec93653e57975620ccb2842ba9420037985e7d7284e", size = 2322763, upload-time = "2025-10-07T10:50:38.751Z" },
+ { url = "https://files.pythonhosted.org/packages/1d/62/755d2bd2593f701c5839fc084e9c2c5e2418f460383ad04e3b5d0befc3ca/pydantic_core-2.41.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:f1fc716c0eb1663c59699b024428ad5ec2bcc6b928527b8fe28de6cb89f47efb", size = 2144046, upload-time = "2025-10-07T10:50:40.686Z" },
]
[[package]]
diff --git a/libs/langchain/README.md b/libs/langchain/README.md
index 90d6129b784..32c6b513f75 100644
--- a/libs/langchain/README.md
+++ b/libs/langchain/README.md
@@ -1,9 +1,8 @@
-# 🦜️🔗 LangChain
+# 🦜️🔗 LangChain Classic
-⚡ Building applications with LLMs through composability ⚡
-
-[](https://opensource.org/licenses/MIT)
-[](https://pypistats.org/packages/langchain)
+[](https://pypi.org/project/langchain-classic/#history)
+[](https://opensource.org/licenses/MIT)
+[](https://pypistats.org/packages/langchain-classic)
[](https://twitter.com/langchainai)
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
@@ -13,67 +12,26 @@ To help you ship LangChain apps to production faster, check out [LangSmith](http
## Quick Install
-`pip install langchain-classic`
+```bash
+pip install langchain-classic
+```
## 🤔 What is this?
-Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. However, using these LLMs in isolation is often insufficient for creating a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.
+This package contains legacy chains, `langchain-community` re-exports, the indexing API, deprecated functionality, and more.
-This library aims to assist in the development of those types of applications. Common examples of these applications include:
-
-**❓ Question answering with RAG**
-
-- [Documentation](https://python.langchain.com/docs/tutorials/rag/)
-- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
-
-**🧱 Extracting structured output**
-
-- [Documentation](https://python.langchain.com/docs/tutorials/extraction/)
-- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
-
-**🤖 Chatbots**
-
-- [Documentation](https://python.langchain.com/docs/tutorials/chatbot/)
-- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)
+In most cases, you should be using the main [`langchain`](https://pypi.org/project/langchain/) package.
## 📖 Documentation
-Please see [our full documentation](https://python.langchain.com) on:
+For full documentation, see the [API reference](https://reference.langchain.com/python/langchain_classic).
-- Getting started (installation, setting up the environment, simple examples)
-- How-To examples (demos, integrations, helper functions)
-- Reference (full API docs)
-- Resources (high-level explanation of core concepts)
+## 📕 Releases & Versioning
-## 🚀 What can this help with?
-
-There are five main areas that LangChain is designed to help with.
-These are, in increasing order of complexity:
-
-**🤖 Agents:**
-
-Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
-
-**📚 Retrieval Augmented Generation:**
-
-Retrieval Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.
-
-**🧐 Evaluation:**
-
-Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
-
-**📃 Models and Prompts:**
-
-This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with chat models and LLMs.
-
-**🔗 Chains:**
-
-Chains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
-
-For more information on these concepts, please see our [full documentation](https://python.langchain.com).
+See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
-For detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/).
+For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).
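To make the "legacy" scope of this renamed package concrete, here is a small illustrative sketch of what pointing existing code at `langchain-classic` might look like; the module paths are assumptions for illustration and are not taken from this diff.

```python
# Illustrative only: code that relied on pre-1.0 `langchain` entry points installs
# `langchain-classic` and imports from the `langchain_classic` namespace instead
# (paths assumed for illustration).
from langchain_classic.agents import AgentExecutor  # legacy agent runtime
from langchain_classic.chains import LLMChain       # legacy chain abstraction

# New projects should build on the main `langchain` package instead.
```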
diff --git a/libs/langchain/langchain_classic/_api/module_import.py b/libs/langchain/langchain_classic/_api/module_import.py
index b19ef5d3590..90520bb1a6c 100644
--- a/libs/langchain/langchain_classic/_api/module_import.py
+++ b/libs/langchain/langchain_classic/_api/module_import.py
@@ -26,7 +26,7 @@ def create_importer(
imports to new imports.
The function will raise deprecation warning on loops using
- deprecated_lookups or fallback_module.
+ `deprecated_lookups` or `fallback_module`.
Module lookups will import without deprecation warnings (used to speed
up imports from large namespaces like llms or chat models).
@@ -37,18 +37,20 @@ def create_importer(
loss of type information, IDE support for going to definition etc).
Args:
- package: current package. Use __package__
- module_lookup: maps name of object to the module where it is defined.
+        package: Current package. Use `__package__`.
+ module_lookup: Maps name of object to the module where it is defined.
e.g.,
+            ```python
{
"MyDocumentLoader": (
"langchain_community.document_loaders.my_document_loader"
)
}
- deprecated_lookups: same as module look up, but will raise
+ ```
+        deprecated_lookups: Same as `module_lookup`, but will raise
deprecation warnings.
- fallback_module: module to import from if the object is not found in
- module_lookup or if module_lookup is not provided.
+ fallback_module: Module to import from if the object is not found in
+ `module_lookup` or if `module_lookup` is not provided.
Returns:
A function that imports objects from the specified modules.
@@ -56,7 +58,7 @@ def create_importer(
all_module_lookup = {**(deprecated_lookups or {}), **(module_lookup or {})}
def import_by_name(name: str) -> Any:
- """Import stores from langchain_community."""
+ """Import stores from `langchain_community`."""
# If not in interactive env, raise warning.
if all_module_lookup and name in all_module_lookup:
new_module = all_module_lookup[name]
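The docstring above describes a mapping-based lazy importer. For context, here is a rough sketch, under stated assumptions, of how such an importer is typically wired into a module's `__getattr__`; the lookup entries and names are illustrative, not part of this change.

```python
from typing import Any

from langchain_classic._api.module_import import create_importer

# Illustrative mapping: attribute name -> module that actually defines it.
DEPRECATED_LOOKUPS = {
    "MyDocumentLoader": "langchain_community.document_loaders.my_document_loader",
}

# The returned callable resolves a name to the object in the target module.
_import_attribute = create_importer(__package__, deprecated_lookups=DEPRECATED_LOOKUPS)


def __getattr__(name: str) -> Any:
    """Resolve deprecated attributes lazily, emitting a deprecation warning."""
    return _import_attribute(name)
```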
diff --git a/libs/langchain/langchain_classic/agents/agent.py b/libs/langchain/langchain_classic/agents/agent.py
index 4984573fa08..6c98fa13809 100644
--- a/libs/langchain/langchain_classic/agents/agent.py
+++ b/libs/langchain/langchain_classic/agents/agent.py
@@ -105,10 +105,7 @@ class BaseSingleActionAgent(BaseModel):
@property
@abstractmethod
def input_keys(self) -> list[str]:
- """Return the input keys.
-
- :meta private:
- """
+ """Return the input keys."""
def return_stopped_response(
self,
@@ -124,7 +121,7 @@ class BaseSingleActionAgent(BaseModel):
along with observations.
Returns:
- AgentFinish: Agent finish object.
+ Agent finish object.
Raises:
ValueError: If `early_stopping_method` is not supported.
@@ -155,7 +152,7 @@ class BaseSingleActionAgent(BaseModel):
kwargs: Additional arguments.
Returns:
- BaseSingleActionAgent: Agent object.
+ Agent object.
"""
raise NotImplementedError
@@ -169,7 +166,7 @@ class BaseSingleActionAgent(BaseModel):
"""Return dictionary representation of agent.
Returns:
- Dict: Dictionary representation of agent.
+ Dictionary representation of agent.
"""
_dict = super().model_dump()
try:
@@ -233,7 +230,7 @@ class BaseMultiActionAgent(BaseModel):
"""Get allowed tools.
Returns:
- list[str] | None: Allowed tools.
+ Allowed tools.
"""
return None
@@ -278,10 +275,7 @@ class BaseMultiActionAgent(BaseModel):
@property
@abstractmethod
def input_keys(self) -> list[str]:
- """Return the input keys.
-
- :meta private:
- """
+ """Return the input keys."""
def return_stopped_response(
self,
@@ -297,7 +291,7 @@ class BaseMultiActionAgent(BaseModel):
along with observations.
Returns:
- AgentFinish: Agent finish object.
+ Agent finish object.
Raises:
ValueError: If `early_stopping_method` is not supported.
@@ -329,7 +323,7 @@ class BaseMultiActionAgent(BaseModel):
Raises:
NotImplementedError: If agent does not support saving.
- ValueError: If file_path is not json or yaml.
+ ValueError: If `file_path` is not json or yaml.
Example:
```python
@@ -388,8 +382,7 @@ class MultiActionAgentOutputParser(
text: Text to parse.
Returns:
- Union[List[AgentAction], AgentFinish]:
- List of agent actions or agent finish.
+ List of agent actions or agent finish.
"""
@@ -404,8 +397,8 @@ class RunnableAgent(BaseSingleActionAgent):
"""Whether to stream from the runnable or not.
If `True` then underlying LLM is invoked in a streaming fashion to make it possible
- to get access to the individual LLM tokens when using stream_log with the Agent
- Executor. If `False` then LLM is invoked in a non-streaming fashion and
+ to get access to the individual LLM tokens when using stream_log with the
+ `AgentExecutor`. If `False` then LLM is invoked in a non-streaming fashion and
individual LLM tokens will not be available in stream_log.
"""
@@ -446,7 +439,7 @@ class RunnableAgent(BaseSingleActionAgent):
# Use streaming to make sure that the underlying LLM is invoked in a
# streaming
# fashion to make it possible to get access to the individual LLM tokens
- # when using stream_log with the Agent Executor.
+ # when using stream_log with the AgentExecutor.
# Because the response from the plan is not a generator, we need to
# accumulate the output into final output and return that.
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
@@ -482,7 +475,7 @@ class RunnableAgent(BaseSingleActionAgent):
# Use streaming to make sure that the underlying LLM is invoked in a
# streaming
# fashion to make it possible to get access to the individual LLM tokens
- # when using stream_log with the Agent Executor.
+ # when using stream_log with the AgentExecutor.
# Because the response from the plan is not a generator, we need to
# accumulate the output into final output and return that.
async for chunk in self.runnable.astream(
@@ -512,8 +505,8 @@ class RunnableMultiActionAgent(BaseMultiActionAgent):
"""Whether to stream from the runnable or not.
If `True` then underlying LLM is invoked in a streaming fashion to make it possible
- to get access to the individual LLM tokens when using stream_log with the Agent
- Executor. If `False` then LLM is invoked in a non-streaming fashion and
+ to get access to the individual LLM tokens when using stream_log with the
+ `AgentExecutor`. If `False` then LLM is invoked in a non-streaming fashion and
individual LLM tokens will not be available in stream_log.
"""
@@ -558,7 +551,7 @@ class RunnableMultiActionAgent(BaseMultiActionAgent):
# Use streaming to make sure that the underlying LLM is invoked in a
# streaming
# fashion to make it possible to get access to the individual LLM tokens
- # when using stream_log with the Agent Executor.
+ # when using stream_log with the AgentExecutor.
# Because the response from the plan is not a generator, we need to
# accumulate the output into final output and return that.
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
@@ -594,7 +587,7 @@ class RunnableMultiActionAgent(BaseMultiActionAgent):
# Use streaming to make sure that the underlying LLM is invoked in a
# streaming
# fashion to make it possible to get access to the individual LLM tokens
- # when using stream_log with the Agent Executor.
+ # when using stream_log with the AgentExecutor.
# Because the response from the plan is not a generator, we need to
# accumulate the output into final output and return that.
async for chunk in self.runnable.astream(
@@ -812,7 +805,7 @@ class Agent(BaseSingleActionAgent):
**kwargs: User inputs.
Returns:
- Dict[str, Any]: Full inputs for the LLMChain.
+ Full inputs for the LLMChain.
"""
thoughts = self._construct_scratchpad(intermediate_steps)
new_inputs = {"agent_scratchpad": thoughts, "stop": self._stop}
@@ -820,10 +813,7 @@ class Agent(BaseSingleActionAgent):
@property
def input_keys(self) -> list[str]:
- """Return the input keys.
-
- :meta private:
- """
+ """Return the input keys."""
return list(set(self.llm_chain.input_keys) - {"agent_scratchpad"})
@model_validator(mode="after")
@@ -834,11 +824,11 @@ class Agent(BaseSingleActionAgent):
values: Values to validate.
Returns:
- Dict: Validated values.
+ Validated values.
Raises:
ValueError: If `agent_scratchpad` is not in prompt.input_variables
- and prompt is not a FewShotPromptTemplate or a PromptTemplate.
+ and prompt is not a FewShotPromptTemplate or a PromptTemplate.
"""
prompt = self.llm_chain.prompt
if "agent_scratchpad" not in prompt.input_variables:
@@ -875,7 +865,7 @@ class Agent(BaseSingleActionAgent):
tools: Tools to use.
Returns:
- BasePromptTemplate: Prompt template.
+ Prompt template.
"""
@classmethod
@@ -910,7 +900,7 @@ class Agent(BaseSingleActionAgent):
kwargs: Additional arguments.
Returns:
- Agent: Agent object.
+ Agent object.
"""
cls._validate_tools(tools)
llm_chain = LLMChain(
@@ -942,7 +932,7 @@ class Agent(BaseSingleActionAgent):
**kwargs: User inputs.
Returns:
- AgentFinish: Agent finish object.
+ Agent finish object.
Raises:
ValueError: If `early_stopping_method` is not in ['force', 'generate'].
@@ -1054,9 +1044,9 @@ class AgentExecutor(Chain):
Defaults to `False`, which raises the error.
If `true`, the error will be sent back to the LLM as an observation.
If a string, the string itself will be sent to the LLM as an observation.
- If a callable function, the function will be called with the exception
- as an argument, and the result of that function will be passed to the agent
- as an observation.
+ If a callable function, the function will be called with the exception as an
+ argument, and the result of that function will be passed to the agent as an
+ observation.
"""
trim_intermediate_steps: (
int | Callable[[list[tuple[AgentAction, str]]], list[tuple[AgentAction, str]]]
@@ -1082,7 +1072,7 @@ class AgentExecutor(Chain):
kwargs: Additional arguments.
Returns:
- AgentExecutor: Agent executor object.
+ Agent executor object.
"""
return cls(
agent=agent,
@@ -1099,7 +1089,7 @@ class AgentExecutor(Chain):
values: Values to validate.
Returns:
- Dict: Validated values.
+ Validated values.
Raises:
ValueError: If allowed tools are different than provided tools.
@@ -1126,7 +1116,7 @@ class AgentExecutor(Chain):
values: Values to validate.
Returns:
- Dict: Validated values.
+ Validated values.
"""
agent = values.get("agent")
if agent and isinstance(agent, Runnable):
@@ -1209,7 +1199,7 @@ class AgentExecutor(Chain):
async_: Whether to run async. (Ignored)
Returns:
- AgentExecutorIterator: Agent executor iterator object.
+ Agent executor iterator object.
"""
return AgentExecutorIterator(
self,
@@ -1221,18 +1211,12 @@ class AgentExecutor(Chain):
@property
def input_keys(self) -> list[str]:
- """Return the input keys.
-
- :meta private:
- """
+ """Return the input keys."""
return self._action_agent.input_keys
@property
def output_keys(self) -> list[str]:
- """Return the singular output key.
-
- :meta private:
- """
+ """Return the singular output key."""
if self.return_intermediate_steps:
return [*self._action_agent.return_values, "intermediate_steps"]
return self._action_agent.return_values
@@ -1244,7 +1228,7 @@ class AgentExecutor(Chain):
name: Name of tool.
Returns:
- BaseTool: Tool object.
+ Tool object.
"""
return {tool.name: tool for tool in self.tools}[name]
@@ -1759,7 +1743,7 @@ class AgentExecutor(Chain):
kwargs: Additional arguments.
Yields:
- AddableDict: Addable dictionary.
+ Addable dictionary.
"""
config = ensure_config(config)
iterator = AgentExecutorIterator(
@@ -1790,7 +1774,7 @@ class AgentExecutor(Chain):
kwargs: Additional arguments.
Yields:
- AddableDict: Addable dictionary.
+ Addable dictionary.
"""
config = ensure_config(config)
iterator = AgentExecutorIterator(
diff --git a/libs/langchain/langchain_classic/agents/agent_iterator.py b/libs/langchain/langchain_classic/agents/agent_iterator.py
index f9ed911c91e..138b58f69c2 100644
--- a/libs/langchain/langchain_classic/agents/agent_iterator.py
+++ b/libs/langchain/langchain_classic/agents/agent_iterator.py
@@ -53,9 +53,9 @@ class AgentExecutorIterator:
include_run_info: bool = False,
yield_actions: bool = False,
):
- """Initialize the AgentExecutorIterator.
+ """Initialize the `AgentExecutorIterator`.
- Initialize the AgentExecutorIterator with the given AgentExecutor,
+ Initialize the `AgentExecutorIterator` with the given `AgentExecutor`,
inputs, and optional callbacks.
Args:
@@ -91,7 +91,7 @@ class AgentExecutorIterator:
@property
def inputs(self) -> dict[str, str]:
- """The inputs to the AgentExecutor."""
+ """The inputs to the `AgentExecutor`."""
return self._inputs
@inputs.setter
@@ -100,7 +100,7 @@ class AgentExecutorIterator:
@property
def agent_executor(self) -> AgentExecutor:
- """The AgentExecutor to iterate over."""
+ """The `AgentExecutor` to iterate over."""
return self._agent_executor
@agent_executor.setter
@@ -171,7 +171,7 @@ class AgentExecutorIterator:
return prepared_outputs
def __iter__(self: AgentExecutorIterator) -> Iterator[AddableDict]:
- """Create an async iterator for the AgentExecutor."""
+ """Create an async iterator for the `AgentExecutor`."""
logger.debug("Initialising AgentExecutorIterator")
self.reset()
callback_manager = CallbackManager.configure(
@@ -235,7 +235,7 @@ class AgentExecutorIterator:
yield self._stop(run_manager)
async def __aiter__(self) -> AsyncIterator[AddableDict]:
- """Create an async iterator for the AgentExecutor.
+ """Create an async iterator for the `AgentExecutor`.
N.B. __aiter__ must be a normal method, so need to initialize async run manager
on first __anext__ call where we can await it.
diff --git a/libs/langchain/langchain_classic/agents/agent_toolkits/__init__.py b/libs/langchain/langchain_classic/agents/agent_toolkits/__init__.py
index 6275d049256..e09e993933b 100644
--- a/libs/langchain/langchain_classic/agents/agent_toolkits/__init__.py
+++ b/libs/langchain/langchain_classic/agents/agent_toolkits/__init__.py
@@ -11,7 +11,7 @@ When developing an application, developers should inspect the capabilities and
permissions of the tools that underlie the given agent toolkit, and determine
whether permissions of the given toolkit are appropriate for the application.
-See [Security](https://python.langchain.com/docs/security) for more information.
+See https://docs.langchain.com/oss/python/security-policy for more information.
"""
from pathlib import Path
diff --git a/libs/langchain/langchain_classic/agents/agent_toolkits/conversational_retrieval/openai_functions.py b/libs/langchain/langchain_classic/agents/agent_toolkits/conversational_retrieval/openai_functions.py
index e95a233ec2a..3c054a3ddb8 100644
--- a/libs/langchain/langchain_classic/agents/agent_toolkits/conversational_retrieval/openai_functions.py
+++ b/libs/langchain/langchain_classic/agents/agent_toolkits/conversational_retrieval/openai_functions.py
@@ -1,7 +1,6 @@
from typing import Any
from langchain_core.language_models import BaseLanguageModel
-from langchain_core.memory import BaseMemory
from langchain_core.messages import SystemMessage
from langchain_core.prompts.chat import MessagesPlaceholder
from langchain_core.tools import BaseTool
@@ -11,6 +10,7 @@ from langchain_classic.agents.openai_functions_agent.agent_token_buffer_memory i
AgentTokenBufferMemory,
)
from langchain_classic.agents.openai_functions_agent.base import OpenAIFunctionsAgent
+from langchain_classic.base_memory import BaseMemory
from langchain_classic.memory.token_buffer import ConversationTokenBufferMemory
@@ -37,7 +37,7 @@ def create_conversational_retrieval_agent(
"""A convenience method for creating a conversational retrieval agent.
Args:
- llm: The language model to use, should be ChatOpenAI
+ llm: The language model to use, should be `ChatOpenAI`
tools: A list of tools the agent has access to
remember_intermediate_steps: Whether the agent should remember intermediate
steps or not. Intermediate steps refer to prior action/observation
@@ -47,11 +47,9 @@ def create_conversational_retrieval_agent(
memory_key: The name of the memory key in the prompt.
system_message: The system message to use. By default, a basic one will
be used.
- verbose: Whether or not the final AgentExecutor should be verbose or not,
- defaults to False.
+        verbose: Whether or not the final `AgentExecutor` should be verbose.
max_token_limit: The max number of tokens to keep around in memory.
- Defaults to 2000.
- **kwargs: Additional keyword arguments to pass to the AgentExecutor.
+ **kwargs: Additional keyword arguments to pass to the `AgentExecutor`.
Returns:
An agent executor initialized appropriately
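Since this hunk documents the arguments of the convenience constructor, a minimal hedged sketch of calling it may help; the stub tool, model choice, and full import path are assumptions for illustration.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

from langchain_classic.agents.agent_toolkits.conversational_retrieval.openai_functions import (
    create_conversational_retrieval_agent,
)


@tool
def lookup_policy(query: str) -> str:
    """Look up an internal policy document (stub for illustration)."""
    return "Employees get 25 vacation days per year."


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Builds an AgentExecutor with token-buffer memory around the given tools.
agent_executor = create_conversational_retrieval_agent(llm, [lookup_policy], verbose=True)
agent_executor.invoke({"input": "How many vacation days do employees get?"})
```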
diff --git a/libs/langchain/langchain_classic/agents/agent_toolkits/vectorstore/base.py b/libs/langchain/langchain_classic/agents/agent_toolkits/vectorstore/base.py
index 95587613af6..9a97eb32beb 100644
--- a/libs/langchain/langchain_classic/agents/agent_toolkits/vectorstore/base.py
+++ b/libs/langchain/langchain_classic/agents/agent_toolkits/vectorstore/base.py
@@ -58,7 +58,7 @@ def create_vectorstore_agent(
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langgraph.prebuilt import create_react_agent
- llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
+ model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
vector_store = InMemoryVectorStore.from_texts(
[
@@ -74,7 +74,7 @@ def create_vectorstore_agent(
"Fetches information about pets.",
)
- agent = create_react_agent(llm, [tool])
+ agent = create_react_agent(model, [tool])
for step in agent.stream(
{"messages": [("human", "What are dogs known for?")]},
@@ -86,13 +86,12 @@ def create_vectorstore_agent(
Args:
llm: LLM that will be used by the agent
toolkit: Set of tools for the agent
- callback_manager: Object to handle the callback [ Defaults to `None`. ]
- prefix: The prefix prompt for the agent. If not provided uses default PREFIX.
+ callback_manager: Object to handle the callback
+ prefix: The prefix prompt for the agent.
verbose: If you want to see the content of the scratchpad.
- [ Defaults to `False` ]
agent_executor_kwargs: If there is any other parameter you want to send to the
- agent. [ Defaults to `None` ]
- kwargs: Additional named parameters to pass to the ZeroShotAgent.
+ agent.
+ kwargs: Additional named parameters to pass to the `ZeroShotAgent`.
Returns:
Returns a callable AgentExecutor object.
@@ -156,7 +155,7 @@ def create_vectorstore_router_agent(
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langgraph.prebuilt import create_react_agent
- llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
+ model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
pet_vector_store = InMemoryVectorStore.from_texts(
[
@@ -187,7 +186,7 @@ def create_vectorstore_router_agent(
),
]
- agent = create_react_agent(llm, tools)
+ agent = create_react_agent(model, tools)
for step in agent.stream(
{"messages": [("human", "Tell me about carrots.")]},
@@ -200,17 +199,16 @@ def create_vectorstore_router_agent(
llm: LLM that will be used by the agent
toolkit: Set of tools for the agent which have routing capability with multiple
vector stores
- callback_manager: Object to handle the callback [ Defaults to `None`. ]
+ callback_manager: Object to handle the callback
prefix: The prefix prompt for the router agent.
- If not provided uses default ROUTER_PREFIX.
+ If not provided uses default `ROUTER_PREFIX`.
verbose: If you want to see the content of the scratchpad.
- [ Defaults to `False` ]
agent_executor_kwargs: If there is any other parameter you want to send to the
- agent. [ Defaults to `None` ]
- kwargs: Additional named parameters to pass to the ZeroShotAgent.
+ agent.
+ kwargs: Additional named parameters to pass to the `ZeroShotAgent`.
Returns:
- Returns a callable AgentExecutor object.
+ Returns a callable `AgentExecutor` object.
Either you can call it or use run method with the query to get the response.
"""
diff --git a/libs/langchain/langchain_classic/agents/agent_toolkits/vectorstore/toolkit.py b/libs/langchain/langchain_classic/agents/agent_toolkits/vectorstore/toolkit.py
index 3d2df64ffcc..56940d8564e 100644
--- a/libs/langchain/langchain_classic/agents/agent_toolkits/vectorstore/toolkit.py
+++ b/libs/langchain/langchain_classic/agents/agent_toolkits/vectorstore/toolkit.py
@@ -8,7 +8,7 @@ from pydantic import BaseModel, ConfigDict, Field
class VectorStoreInfo(BaseModel):
- """Information about a VectorStore."""
+ """Information about a `VectorStore`."""
vectorstore: VectorStore = Field(exclude=True)
name: str
@@ -20,7 +20,7 @@ class VectorStoreInfo(BaseModel):
class VectorStoreToolkit(BaseToolkit):
- """Toolkit for interacting with a Vector Store."""
+ """Toolkit for interacting with a `VectorStore`."""
vectorstore_info: VectorStoreInfo = Field(exclude=True)
llm: BaseLanguageModel
diff --git a/libs/langchain/langchain_classic/agents/agent_types.py b/libs/langchain/langchain_classic/agents/agent_types.py
index 073cb62cd3d..5bff78f3054 100644
--- a/libs/langchain/langchain_classic/agents/agent_types.py
+++ b/libs/langchain/langchain_classic/agents/agent_types.py
@@ -13,10 +13,7 @@ from langchain_classic._api.deprecation import AGENT_DEPRECATION_WARNING
removal="1.0",
)
class AgentType(str, Enum):
- """An enum for agent types.
-
- See documentation: https://python.langchain.com/api_reference/langchain/agents/langchain.agents.agent_types.AgentType.html
- """
+ """An enum for agent types."""
ZERO_SHOT_REACT_DESCRIPTION = "zero-shot-react-description"
"""A zero shot agent that does a reasoning step before acting."""
diff --git a/libs/langchain/langchain_classic/agents/chat/base.py b/libs/langchain/langchain_classic/agents/chat/base.py
index 136d4f1f4f0..c2f5f9576fa 100644
--- a/libs/langchain/langchain_classic/agents/chat/base.py
+++ b/libs/langchain/langchain_classic/agents/chat/base.py
@@ -94,13 +94,10 @@ class ChatAgent(Agent):
Args:
tools: A list of tools.
system_message_prefix: The system message prefix.
- Default is SYSTEM_MESSAGE_PREFIX.
system_message_suffix: The system message suffix.
- Default is SYSTEM_MESSAGE_SUFFIX.
- human_message: The human message. Default is HUMAN_MESSAGE.
+ human_message: The `HumanMessage`.
format_instructions: The format instructions.
- Default is FORMAT_INSTRUCTIONS.
- input_variables: The input variables. Default is None.
+ input_variables: The input variables.
Returns:
A prompt template.
@@ -141,16 +138,13 @@ class ChatAgent(Agent):
Args:
llm: The language model.
tools: A list of tools.
- callback_manager: The callback manager. Default is None.
- output_parser: The output parser. Default is None.
+ callback_manager: The callback manager.
+ output_parser: The output parser.
system_message_prefix: The system message prefix.
- Default is SYSTEM_MESSAGE_PREFIX.
system_message_suffix: The system message suffix.
- Default is SYSTEM_MESSAGE_SUFFIX.
- human_message: The human message. Default is HUMAN_MESSAGE.
+ human_message: The `HumanMessage`.
format_instructions: The format instructions.
- Default is FORMAT_INSTRUCTIONS.
- input_variables: The input variables. Default is None.
+ input_variables: The input variables.
kwargs: Additional keyword arguments.
Returns:
diff --git a/libs/langchain/langchain_classic/agents/conversational/base.py b/libs/langchain/langchain_classic/agents/conversational/base.py
index f198bb733df..84afe3313d6 100644
--- a/libs/langchain/langchain_classic/agents/conversational/base.py
+++ b/libs/langchain/langchain_classic/agents/conversational/base.py
@@ -87,15 +87,13 @@ class ConversationalAgent(Agent):
Args:
tools: List of tools the agent will have access to, used to format the
prompt.
- prefix: String to put before the list of tools. Defaults to PREFIX.
- suffix: String to put after the list of tools. Defaults to SUFFIX.
- format_instructions: Instructions on how to use the tools. Defaults to
- FORMAT_INSTRUCTIONS
- ai_prefix: String to use before AI output. Defaults to "AI".
+ prefix: String to put before the list of tools.
+ suffix: String to put after the list of tools.
+ format_instructions: Instructions on how to use the tools.
+ ai_prefix: String to use before AI output.
human_prefix: String to use before human output.
- Defaults to "Human".
input_variables: List of input variables the final prompt will expect.
- Defaults to ["input", "chat_history", "agent_scratchpad"].
+ Defaults to `["input", "chat_history", "agent_scratchpad"]`.
Returns:
A PromptTemplate with the template assembled from the pieces here.
@@ -139,16 +137,14 @@ class ConversationalAgent(Agent):
Args:
llm: The language model to use.
tools: A list of tools to use.
- callback_manager: The callback manager to use. Default is None.
- output_parser: The output parser to use. Default is None.
- prefix: The prefix to use in the prompt. Default is PREFIX.
- suffix: The suffix to use in the prompt. Default is SUFFIX.
+ callback_manager: The callback manager to use.
+ output_parser: The output parser to use.
+ prefix: The prefix to use in the prompt.
+ suffix: The suffix to use in the prompt.
format_instructions: The format instructions to use.
- Default is FORMAT_INSTRUCTIONS.
- ai_prefix: The prefix to use before AI output. Default is "AI".
+ ai_prefix: The prefix to use before AI output.
human_prefix: The prefix to use before human output.
- Default is "Human".
- input_variables: The input variables to use. Default is None.
+ input_variables: The input variables to use.
**kwargs: Any additional keyword arguments to pass to the agent.
Returns:
diff --git a/libs/langchain/langchain_classic/agents/conversational_chat/base.py b/libs/langchain/langchain_classic/agents/conversational_chat/base.py
index b8943bdb356..d0f17707018 100644
--- a/libs/langchain/langchain_classic/agents/conversational_chat/base.py
+++ b/libs/langchain/langchain_classic/agents/conversational_chat/base.py
@@ -87,15 +87,13 @@ class ConversationalChatAgent(Agent):
Args:
tools: The tools to use.
- system_message: The system message to use.
- Defaults to the PREFIX.
- human_message: The human message to use.
- Defaults to the SUFFIX.
- input_variables: The input variables to use. Defaults to `None`.
- output_parser: The output parser to use. Defaults to `None`.
+ system_message: The `SystemMessage` to use.
+ human_message: The `HumanMessage` to use.
+ input_variables: The input variables to use.
+ output_parser: The output parser to use.
Returns:
- A PromptTemplate.
+ A `PromptTemplate`.
"""
tool_strings = "\n".join(
[f"> {tool.name}: {tool.description}" for tool in tools],
@@ -150,11 +148,11 @@ class ConversationalChatAgent(Agent):
Args:
llm: The language model to use.
tools: A list of tools to use.
- callback_manager: The callback manager to use. Default is None.
- output_parser: The output parser to use. Default is None.
- system_message: The system message to use. Default is PREFIX.
- human_message: The human message to use. Default is SUFFIX.
- input_variables: The input variables to use. Default is None.
+ callback_manager: The callback manager to use.
+ output_parser: The output parser to use.
+ system_message: The `SystemMessage` to use.
+ human_message: The `HumanMessage` to use.
+ input_variables: The input variables to use.
**kwargs: Any additional arguments.
Returns:
diff --git a/libs/langchain/langchain_classic/agents/format_scratchpad/log.py b/libs/langchain/langchain_classic/agents/format_scratchpad/log.py
index bf24a96a67a..5bef3f0cec2 100644
--- a/libs/langchain/langchain_classic/agents/format_scratchpad/log.py
+++ b/libs/langchain/langchain_classic/agents/format_scratchpad/log.py
@@ -11,12 +11,10 @@ def format_log_to_str(
Args:
intermediate_steps: List of tuples of AgentAction and observation strings.
observation_prefix: Prefix to append the observation with.
- Defaults to "Observation: ".
llm_prefix: Prefix to append the llm call with.
- Defaults to "Thought: ".
Returns:
- str: The scratchpad.
+ The scratchpad.
"""
thoughts = ""
for action, observation in intermediate_steps:
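As a quick illustration of the behaviour documented above, assuming the default `observation_prefix` and `llm_prefix`, a sketch of formatting a single intermediate step:

```python
from langchain_core.agents import AgentAction

from langchain_classic.agents.format_scratchpad.log import format_log_to_str

# One illustrative (action, observation) pair from an agent run.
steps = [
    (
        AgentAction(tool="search", tool_input="weather in SF", log="I should look this up."),
        "It is sunny today.",
    )
]

scratchpad = format_log_to_str(steps)
# With the default prefixes this yields roughly:
# "I should look this up.\nObservation: It is sunny today.\nThought: "
```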
diff --git a/libs/langchain/langchain_classic/agents/format_scratchpad/log_to_messages.py b/libs/langchain/langchain_classic/agents/format_scratchpad/log_to_messages.py
index a193e37ae38..5bc338831e9 100644
--- a/libs/langchain/langchain_classic/agents/format_scratchpad/log_to_messages.py
+++ b/libs/langchain/langchain_classic/agents/format_scratchpad/log_to_messages.py
@@ -11,10 +11,10 @@ def format_log_to_messages(
Args:
intermediate_steps: List of tuples of AgentAction and observation strings.
template_tool_response: Template to format the observation with.
- Defaults to "{observation}".
+ Defaults to `"{observation}"`.
Returns:
- List[BaseMessage]: The scratchpad.
+ The scratchpad.
"""
thoughts: list[BaseMessage] = []
for action, observation in intermediate_steps:
diff --git a/libs/langchain/langchain_classic/agents/initialize.py b/libs/langchain/langchain_classic/agents/initialize.py
index c1df318ab60..c01e0e441ea 100644
--- a/libs/langchain/langchain_classic/agents/initialize.py
+++ b/libs/langchain/langchain_classic/agents/initialize.py
@@ -38,14 +38,14 @@ def initialize_agent(
tools: List of tools this agent has access to.
llm: Language model to use as the agent.
agent: Agent type to use. If `None` and agent_path is also None, will default
- to AgentType.ZERO_SHOT_REACT_DESCRIPTION. Defaults to `None`.
+ to AgentType.ZERO_SHOT_REACT_DESCRIPTION.
callback_manager: CallbackManager to use. Global callback manager is used if
- not provided. Defaults to `None`.
+ not provided.
agent_path: Path to serialized agent to use. If `None` and agent is also None,
- will default to AgentType.ZERO_SHOT_REACT_DESCRIPTION. Defaults to `None`.
+ will default to AgentType.ZERO_SHOT_REACT_DESCRIPTION.
agent_kwargs: Additional keyword arguments to pass to the underlying agent.
- Defaults to `None`.
- tags: Tags to apply to the traced runs. Defaults to `None`.
+        tags: Tags to apply to the traced runs.
kwargs: Additional keyword arguments passed to the agent executor.
Returns:
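For context on the arguments documented above, a minimal sketch of calling the (deprecated) `initialize_agent` helper follows; the import path, tool, and model choice are assumptions for illustration.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

from langchain_classic.agents import AgentType, initialize_agent


@tool
def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    return len(text.split())


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# With agent=None and agent_path=None the helper would fall back to
# AgentType.ZERO_SHOT_REACT_DESCRIPTION; it is passed explicitly here.
agent_executor = initialize_agent(
    tools=[word_count],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent_executor.invoke({"input": "How many words are in 'to be or not to be'?"})
```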
diff --git a/libs/langchain/langchain_classic/agents/json_chat/base.py b/libs/langchain/langchain_classic/agents/json_chat/base.py
index 2c8091601e8..90dd72ad753 100644
--- a/libs/langchain/langchain_classic/agents/json_chat/base.py
+++ b/libs/langchain/langchain_classic/agents/json_chat/base.py
@@ -30,13 +30,12 @@ def create_json_chat_agent(
If `False`, does not add a stop token.
If a list of str, uses the provided list as the stop tokens.
- Default is True. You may to set this to False if the LLM you are using
- does not support stop sequences.
+        You may want to set this to `False` if the LLM you are using does not support
+        stop sequences.
tools_renderer: This controls how the tools are converted into a string and
- then passed into the LLM. Default is `render_text_description`.
+ then passed into the LLM.
template_tool_response: Template prompt that uses the tool response
(observation) to make the LLM generate the next action to take.
- Default is TEMPLATE_TOOL_RESPONSE.
Returns:
A Runnable sequence representing an agent. It takes as input all the same input
@@ -51,7 +50,7 @@ def create_json_chat_agent(
Example:
```python
from langchain_classic import hub
- from langchain_community.chat_models import ChatOpenAI
+ from langchain_openai import ChatOpenAI
from langchain_classic.agents import AgentExecutor, create_json_chat_agent
prompt = hub.pull("hwchase17/react-chat-json")
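The example hunks in this file (and in several files below) swap the chat-model import from `langchain_community.chat_models` to the dedicated `langchain_openai` package. A minimal sketch of the setup the updated examples assume (model name and temperature are illustrative, not part of the diff):

```python
# pip install -U langchain-openai
from langchain_openai import ChatOpenAI

# Any chat model exposed by the provider package works; these values are examples.
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
```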
diff --git a/libs/langchain/langchain_classic/agents/mrkl/base.py b/libs/langchain/langchain_classic/agents/mrkl/base.py
index 799b7337022..6ed82f50e72 100644
--- a/libs/langchain/langchain_classic/agents/mrkl/base.py
+++ b/libs/langchain/langchain_classic/agents/mrkl/base.py
@@ -93,12 +93,11 @@ class ZeroShotAgent(Agent):
Args:
tools: List of tools the agent will have access to, used to format the
prompt.
- prefix: String to put before the list of tools. Defaults to PREFIX.
- suffix: String to put after the list of tools. Defaults to SUFFIX.
+ prefix: String to put before the list of tools.
+ suffix: String to put after the list of tools.
format_instructions: Instructions on how to use the tools.
- Defaults to FORMAT_INSTRUCTIONS
input_variables: List of input variables the final prompt will expect.
- Defaults to `None`.
Returns:
A PromptTemplate with the template assembled from the pieces here.
@@ -129,13 +128,12 @@ class ZeroShotAgent(Agent):
Args:
llm: The LLM to use as the agent LLM.
tools: The tools to use.
- callback_manager: The callback manager to use. Defaults to `None`.
- output_parser: The output parser to use. Defaults to `None`.
- prefix: The prefix to use. Defaults to PREFIX.
- suffix: The suffix to use. Defaults to SUFFIX.
+ callback_manager: The callback manager to use.
+ output_parser: The output parser to use.
+ prefix: The prefix to use.
+ suffix: The suffix to use.
format_instructions: The format instructions to use.
- Defaults to FORMAT_INSTRUCTIONS.
- input_variables: The input variables to use. Defaults to `None`.
+ input_variables: The input variables to use.
kwargs: Additional parameters to pass to the agent.
"""
cls._validate_tools(tools)
diff --git a/libs/langchain/langchain_classic/agents/openai_assistant/base.py b/libs/langchain/langchain_classic/agents/openai_assistant/base.py
index ed428e4ed16..472c3ad27a1 100644
--- a/libs/langchain/langchain_classic/agents/openai_assistant/base.py
+++ b/libs/langchain/langchain_classic/agents/openai_assistant/base.py
@@ -231,15 +231,15 @@ class OpenAIAssistantRunnable(RunnableSerializable[dict, OutputType]):
"""
client: Any = Field(default_factory=_get_openai_client)
- """OpenAI or AzureOpenAI client."""
+ """`OpenAI` or `AzureOpenAI` client."""
async_client: Any = None
- """OpenAI or AzureOpenAI async client."""
+ """`OpenAI` or `AzureOpenAI` async client."""
assistant_id: str
"""OpenAI assistant id."""
check_every_ms: float = 1_000.0
"""Frequency with which to check run progress in ms."""
as_agent: bool = False
- """Use as a LangChain agent, compatible with the AgentExecutor."""
+ """Use as a LangChain agent, compatible with the `AgentExecutor`."""
@model_validator(mode="after")
def _validate_async_client(self) -> Self:
@@ -314,7 +314,7 @@ class OpenAIAssistantRunnable(RunnableSerializable[dict, OutputType]):
run_metadata: Metadata to associate with new run.
attachments: A list of files attached to the message, and the
tools they should be added to.
- config: Runnable config. Defaults to `None`.
+ config: Runnable config.
**kwargs: Additional arguments.
Returns:
@@ -446,7 +446,7 @@ class OpenAIAssistantRunnable(RunnableSerializable[dict, OutputType]):
max_completion_tokens: Allow setting max_completion_tokens for this run.
max_prompt_tokens: Allow setting max_prompt_tokens for this run.
run_metadata: Metadata to associate with new run.
- config: Runnable config. Defaults to `None`.
+ config: Runnable config.
kwargs: Additional arguments.
Returns:
diff --git a/libs/langchain/langchain_classic/agents/openai_functions_agent/agent_token_buffer_memory.py b/libs/langchain/langchain_classic/agents/openai_functions_agent/agent_token_buffer_memory.py
index 4f8e49f0deb..f6f1307cfc9 100644
--- a/libs/langchain/langchain_classic/agents/openai_functions_agent/agent_token_buffer_memory.py
+++ b/libs/langchain/langchain_classic/agents/openai_functions_agent/agent_token_buffer_memory.py
@@ -17,18 +17,17 @@ class AgentTokenBufferMemory(BaseChatMemory):
"""Memory used to save agent output AND intermediate steps.
Args:
- human_prefix: Prefix for human messages. Default is "Human".
- ai_prefix: Prefix for AI messages. Default is "AI".
+ human_prefix: Prefix for human messages.
+ ai_prefix: Prefix for AI messages.
llm: Language model.
- memory_key: Key to save memory under. Default is "history".
+ memory_key: Key to save memory under.
max_token_limit: Maximum number of tokens to keep in the buffer.
Once the buffer exceeds this many tokens, the oldest
- messages will be pruned. Default is 12000.
- return_messages: Whether to return messages. Default is True.
- output_key: Key to save output under. Default is "output".
+ messages will be pruned.
+ return_messages: Whether to return messages.
+ output_key: Key to save output under.
intermediate_steps_key: Key to save intermediate steps under.
- Default is "intermediate_steps".
- format_as_tools: Whether to format as tools. Default is False.
+ format_as_tools: Whether to format as tools.
"""
human_prefix: str = "Human"
@@ -50,10 +49,7 @@ class AgentTokenBufferMemory(BaseChatMemory):
@property
def memory_variables(self) -> list[str]:
- """Always return list of memory variables.
-
- :meta private:
- """
+ """Always return list of memory variables."""
return [self.memory_key]
@override
diff --git a/libs/langchain/langchain_classic/agents/openai_functions_agent/base.py b/libs/langchain/langchain_classic/agents/openai_functions_agent/base.py
index c6907b7eeba..6a4cb091df8 100644
--- a/libs/langchain/langchain_classic/agents/openai_functions_agent/base.py
+++ b/libs/langchain/langchain_classic/agents/openai_functions_agent/base.py
@@ -40,15 +40,14 @@ class OpenAIFunctionsAgent(BaseSingleActionAgent):
"""An Agent driven by OpenAIs function powered API.
Args:
- llm: This should be an instance of ChatOpenAI, specifically a model
+ llm: This should be an instance of `ChatOpenAI`, specifically a model
that supports using `functions`.
tools: The tools this agent has access to.
prompt: The prompt for this agent, should support agent_scratchpad as one
of the variables. For an easy way to construct this prompt, use
`OpenAIFunctionsAgent.create_prompt(...)`
output_parser: The output parser for this agent. Should be an instance of
- OpenAIFunctionsAgentOutputParser.
- Defaults to OpenAIFunctionsAgentOutputParser.
+ `OpenAIFunctionsAgentOutputParser`.
"""
llm: BaseLanguageModel
@@ -106,14 +105,14 @@ class OpenAIFunctionsAgent(BaseSingleActionAgent):
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations.
- callbacks: Callbacks to use. Defaults to `None`.
- with_functions: Whether to use functions. Defaults to `True`.
+ callbacks: Callbacks to use.
+ with_functions: Whether to use functions.
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
- If the agent is finished, returns an AgentFinish.
- If the agent is not finished, returns an AgentAction.
+ If the agent is finished, returns an `AgentFinish`.
+ If the agent is not finished, returns an `AgentAction`.
"""
agent_scratchpad = format_to_openai_function_messages(intermediate_steps)
selected_inputs = {
@@ -146,7 +145,7 @@ class OpenAIFunctionsAgent(BaseSingleActionAgent):
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations.
- callbacks: Callbacks to use. Defaults to `None`.
+ callbacks: Callbacks to use.
**kwargs: User inputs.
Returns:
@@ -261,8 +260,8 @@ class OpenAIFunctionsAgent(BaseSingleActionAgent):
Args:
llm: The LLM to use as the agent.
tools: The tools to use.
- callback_manager: The callback manager to use. Defaults to `None`.
- extra_prompt_messages: Extra prompt messages to use. Defaults to `None`.
+ callback_manager: The callback manager to use.
+ extra_prompt_messages: Extra prompt messages to use.
system_message: The system message to use.
Defaults to a default system message.
kwargs: Additional parameters to pass to the agent.
@@ -311,7 +310,7 @@ def create_openai_functions_agent(
Creating an agent with no memory
```python
- from langchain_community.chat_models import ChatOpenAI
+ from langchain_openai import ChatOpenAI
from langchain_classic.agents import (
AgentExecutor,
create_openai_functions_agent,
diff --git a/libs/langchain/langchain_classic/agents/openai_functions_multi_agent/base.py b/libs/langchain/langchain_classic/agents/openai_functions_multi_agent/base.py
index 3c3383dd451..340a71c21cb 100644
--- a/libs/langchain/langchain_classic/agents/openai_functions_multi_agent/base.py
+++ b/libs/langchain/langchain_classic/agents/openai_functions_multi_agent/base.py
@@ -212,7 +212,7 @@ class OpenAIMultiFunctionsAgent(BaseMultiActionAgent):
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations.
- callbacks: Callbacks to use. Default is None.
+ callbacks: Callbacks to use.
**kwargs: User inputs.
Returns:
@@ -243,7 +243,7 @@ class OpenAIMultiFunctionsAgent(BaseMultiActionAgent):
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations.
- callbacks: Callbacks to use. Default is None.
+ callbacks: Callbacks to use.
**kwargs: User inputs.
Returns:
@@ -275,7 +275,7 @@ class OpenAIMultiFunctionsAgent(BaseMultiActionAgent):
system_message: Message to use as the system message that will be the
first in the prompt.
extra_prompt_messages: Prompt messages that will be placed between the
- system message and the new human input. Default is None.
+ system message and the new human input.
Returns:
A prompt template to pass into this agent.
@@ -313,10 +313,10 @@ class OpenAIMultiFunctionsAgent(BaseMultiActionAgent):
Args:
llm: The language model to use.
tools: A list of tools to use.
- callback_manager: The callback manager to use. Default is None.
- extra_prompt_messages: Extra prompt messages to use. Default is None.
- system_message: The system message to use.
- Default is a default system message.
+ callback_manager: The callback manager to use.
+ extra_prompt_messages: Extra prompt messages to use.
+ system_message: The system message to use. Default is a default system
+ message.
kwargs: Additional arguments.
"""
system_message_ = (
diff --git a/libs/langchain/langchain_classic/agents/openai_tools/base.py b/libs/langchain/langchain_classic/agents/openai_tools/base.py
index 891dc97d647..5fa0cf58575 100644
--- a/libs/langchain/langchain_classic/agents/openai_tools/base.py
+++ b/libs/langchain/langchain_classic/agents/openai_tools/base.py
@@ -40,7 +40,7 @@ def create_openai_tools_agent(
Example:
```python
from langchain_classic import hub
- from langchain_community.chat_models import ChatOpenAI
+ from langchain_openai import ChatOpenAI
from langchain_classic.agents import (
AgentExecutor,
create_openai_tools_agent,
diff --git a/libs/langchain/langchain_classic/agents/react/agent.py b/libs/langchain/langchain_classic/agents/react/agent.py
index ce56d6ed667..9dce04145c5 100644
--- a/libs/langchain/langchain_classic/agents/react/agent.py
+++ b/libs/langchain/langchain_classic/agents/react/agent.py
@@ -42,13 +42,13 @@ def create_react_agent(
prompt: The prompt to use. See Prompt section below for more.
output_parser: AgentOutputParser for parse the LLM output.
tools_renderer: This controls how the tools are converted into a string and
- then passed into the LLM. Default is `render_text_description`.
+ then passed into the LLM.
stop_sequence: bool or list of str.
If `True`, adds a stop token of "Observation:" to avoid hallucinates.
If `False`, does not add a stop token.
If a list of str, uses the provided list as the stop tokens.
- Default is True. You may to set this to False if the LLM you are using
+            You may want to set this to `False` if the LLM you are using
does not support stop sequences.
Returns:
@@ -59,7 +59,7 @@ def create_react_agent(
Examples:
```python
from langchain_classic import hub
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
from langchain_classic.agents import AgentExecutor, create_react_agent
prompt = hub.pull("hwchase17/react")
diff --git a/libs/langchain/langchain_classic/agents/self_ask_with_search/base.py b/libs/langchain/langchain_classic/agents/self_ask_with_search/base.py
index caf513a40ba..cdbff7d590e 100644
--- a/libs/langchain/langchain_classic/agents/self_ask_with_search/base.py
+++ b/libs/langchain/langchain_classic/agents/self_ask_with_search/base.py
@@ -116,7 +116,7 @@ def create_self_ask_with_search_agent(
Examples:
```python
from langchain_classic import hub
- from langchain_community.chat_models import ChatAnthropic
+ from langchain_anthropic import ChatAnthropic
from langchain_classic.agents import (
AgentExecutor,
create_self_ask_with_search_agent,
diff --git a/libs/langchain/langchain_classic/agents/structured_chat/base.py b/libs/langchain/langchain_classic/agents/structured_chat/base.py
index 2a2d9f25aeb..d76de1bc7d9 100644
--- a/libs/langchain/langchain_classic/agents/structured_chat/base.py
+++ b/libs/langchain/langchain_classic/agents/structured_chat/base.py
@@ -182,10 +182,10 @@ def create_structured_chat_agent(
If `False`, does not add a stop token.
If a list of str, uses the provided list as the stop tokens.
- Default is True. You may to set this to False if the LLM you are using
+            You may want to set this to `False` if the LLM you are using
does not support stop sequences.
tools_renderer: This controls how the tools are converted into a string and
- then passed into the LLM. Default is `render_text_description`.
+ then passed into the LLM.
Returns:
A Runnable sequence representing an agent. It takes as input all the same input
@@ -195,7 +195,7 @@ def create_structured_chat_agent(
Examples:
```python
from langchain_classic import hub
- from langchain_community.chat_models import ChatOpenAI
+ from langchain_openai import ChatOpenAI
from langchain_classic.agents import (
AgentExecutor,
create_structured_chat_agent,
diff --git a/libs/langchain/langchain_classic/agents/tool_calling_agent/base.py b/libs/langchain/langchain_classic/agents/tool_calling_agent/base.py
index 1c8b23ae669..ad7ec028e4e 100644
--- a/libs/langchain/langchain_classic/agents/tool_calling_agent/base.py
+++ b/libs/langchain/langchain_classic/agents/tool_calling_agent/base.py
@@ -55,7 +55,7 @@ def create_tool_calling_agent(
("placeholder", "{agent_scratchpad}"),
]
)
- model = ChatAnthropic(model="claude-3-opus-20240229")
+ model = ChatAnthropic(model="claude-opus-4-1-20250805")
@tool
def magic_function(input: int) -> int:
@@ -83,11 +83,15 @@ def create_tool_calling_agent(
```
Prompt:
-
The agent prompt must have an `agent_scratchpad` key that is a
`MessagesPlaceholder`. Intermediate agent actions and tool output
messages will be passed in here.
+ Troubleshooting:
+ - If you encounter `invalid_tool_calls` errors, ensure that your tool
+ functions return properly formatted responses. Tool outputs should be
+          serializable to JSON. For custom objects, implement proper `__str__` or
+          `to_dict` methods.
"""
missing_vars = {"agent_scratchpad"}.difference(
prompt.input_variables + list(prompt.partial_variables),
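The troubleshooting note added above recommends keeping tool outputs JSON-serializable. A minimal sketch of a tool that follows that advice (the tool name and return payload are hypothetical):

```python
from langchain_core.tools import tool


@tool
def lookup_user(user_id: str) -> dict:
    """Look up a user record by id (hypothetical example tool)."""
    # A plain dict keeps the tool output JSON-serializable, avoiding the
    # `invalid_tool_calls`-style formatting issues described above.
    return {"id": user_id, "name": "Ada", "active": True}
```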
diff --git a/libs/langchain/langchain_classic/agents/xml/base.py b/libs/langchain/langchain_classic/agents/xml/base.py
index d06ff88c1ed..5cd81ea4015 100644
--- a/libs/langchain/langchain_classic/agents/xml/base.py
+++ b/libs/langchain/langchain_classic/agents/xml/base.py
@@ -129,13 +129,13 @@ def create_xml_agent(
`tools`: contains descriptions for each tool.
`agent_scratchpad`: contains previous agent actions and tool outputs.
tools_renderer: This controls how the tools are converted into a string and
- then passed into the LLM. Default is `render_text_description`.
+ then passed into the LLM.
stop_sequence: bool or list of str.
        If `True`, adds a stop token of "</tool_input>" to avoid hallucinates.
If `False`, does not add a stop token.
If a list of str, uses the provided list as the stop tokens.
- Default is True. You may to set this to False if the LLM you are using
+            You may want to set this to `False` if the LLM you are using
does not support stop sequences.
Returns:
@@ -146,7 +146,7 @@ def create_xml_agent(
Example:
```python
from langchain_classic import hub
- from langchain_community.chat_models import ChatAnthropic
+ from langchain_anthropic import ChatAnthropic
from langchain_classic.agents import AgentExecutor, create_xml_agent
prompt = hub.pull("hwchase17/xml-agent-convo")
diff --git a/libs/core/langchain_core/memory.py b/libs/langchain/langchain_classic/base_memory.py
similarity index 99%
rename from libs/core/langchain_core/memory.py
rename to libs/langchain/langchain_classic/base_memory.py
index ec50cc01e07..eb178bb888c 100644
--- a/libs/core/langchain_core/memory.py
+++ b/libs/langchain/langchain_classic/base_memory.py
@@ -10,11 +10,10 @@ from __future__ import annotations
from abc import ABC, abstractmethod
from typing import Any
-from pydantic import ConfigDict
-
from langchain_core._api import deprecated
from langchain_core.load.serializable import Serializable
from langchain_core.runnables import run_in_executor
+from pydantic import ConfigDict
@deprecated(
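This hunk renames the `BaseMemory` module out of `langchain_core` and into `langchain_classic`. Downstream imports move with it, as the `chains/base.py` hunk further below does; a minimal before/after sketch:

```python
# Old path (module renamed by this change):
# from langchain_core.memory import BaseMemory

# New path introduced by the rename:
from langchain_classic.base_memory import BaseMemory
```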
diff --git a/libs/langchain/langchain_classic/callbacks/streamlit/__init__.py b/libs/langchain/langchain_classic/callbacks/streamlit/__init__.py
index a17acc3c319..a3b68eb384f 100644
--- a/libs/langchain/langchain_classic/callbacks/streamlit/__init__.py
+++ b/libs/langchain/langchain_classic/callbacks/streamlit/__init__.py
@@ -31,16 +31,15 @@ def StreamlitCallbackHandler( # noqa: N802
max_thought_containers
The max number of completed LLM thought containers to show at once. When this
threshold is reached, a new thought will cause the oldest thoughts to be
- collapsed into a "History" expander. Defaults to 4.
+ collapsed into a "History" expander.
expand_new_thoughts
Each LLM "thought" gets its own `st.expander`. This param controls whether that
- expander is expanded by default. Defaults to `True`.
+ expander is expanded by default.
collapse_completed_thoughts
If `True`, LLM thought expanders will be collapsed when completed.
- Defaults to `True`.
thought_labeler
An optional custom LLMThoughtLabeler instance. If unspecified, the handler
- will use the default thought labeling logic. Defaults to `None`.
+ will use the default thought labeling logic.
Returns:
-------
diff --git a/libs/langchain/langchain_classic/callbacks/tracers/schemas.py b/libs/langchain/langchain_classic/callbacks/tracers/schemas.py
index e8f34027d34..32e6b2e4f13 100644
--- a/libs/langchain/langchain_classic/callbacks/tracers/schemas.py
+++ b/libs/langchain/langchain_classic/callbacks/tracers/schemas.py
@@ -1,27 +1,5 @@
-from langchain_core.tracers.schemas import (
- BaseRun,
- ChainRun,
- LLMRun,
- Run,
- RunTypeEnum,
- ToolRun,
- TracerSession,
- TracerSessionBase,
- TracerSessionV1,
- TracerSessionV1Base,
- TracerSessionV1Create,
-)
+from langchain_core.tracers.schemas import Run
__all__ = [
- "BaseRun",
- "ChainRun",
- "LLMRun",
"Run",
- "RunTypeEnum",
- "ToolRun",
- "TracerSession",
- "TracerSessionBase",
- "TracerSessionV1",
- "TracerSessionV1Base",
- "TracerSessionV1Create",
]
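After this change the shim module keeps only the `Run` re-export; the other tracer schema names are no longer importable from here. A quick sanity-check sketch:

```python
# Only `Run` remains re-exported from the classic shim module.
from langchain_classic.callbacks.tracers.schemas import Run
```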
diff --git a/libs/langchain/langchain_classic/chains/api/base.py b/libs/langchain/langchain_classic/chains/api/base.py
index 955e52fd5ee..d0009dca6dd 100644
--- a/libs/langchain/langchain_classic/chains/api/base.py
+++ b/libs/langchain/langchain_classic/chains/api/base.py
@@ -42,7 +42,7 @@ def _check_in_allowed_domain(url: str, limit_to_domains: Sequence[str]) -> bool:
limit_to_domains: The allowed domains.
Returns:
- True if the URL is in the allowed domains, False otherwise.
+ `True` if the URL is in the allowed domains, `False` otherwise.
"""
scheme, domain = _extract_scheme_and_domain(url)
@@ -68,8 +68,8 @@ try:
class APIChain(Chain):
"""Chain that makes API calls and summarizes the responses to answer a question.
- *Security Note*: This API chain uses the requests toolkit
- to make GET, POST, PATCH, PUT, and DELETE requests to an API.
+ **Security Note**: This API chain uses the requests toolkit
+ to make `GET`, `POST`, `PATCH`, `PUT`, and `DELETE` requests to an API.
Exercise care in who is allowed to use this chain. If exposing
to end users, consider that users will be able to make arbitrary
@@ -80,7 +80,8 @@ try:
Control access to who can submit issue requests using this toolkit and
what network access it has.
- See https://python.langchain.com/docs/security for more information.
+ See https://docs.langchain.com/oss/python/security-policy for more
+ information.
!!! note
This class is deprecated. See below for a replacement implementation using
@@ -90,7 +91,7 @@ try:
- Support for both token-by-token and step-by-step streaming;
- Support for checkpointing and memory of chat history;
- Easier to modify or extend
- (e.g., with additional tools, structured responses, etc.)
+ (e.g., with additional tools, structured responses, etc.)
Install LangGraph with:
@@ -143,7 +144,7 @@ try:
description: Limit the number of results
\"\"\"
- llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
+ model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
toolkit = RequestsToolkit(
requests_wrapper=TextRequestsWrapper(headers={}), # no auth required
allow_dangerous_requests=ALLOW_DANGEROUS_REQUESTS,
@@ -152,7 +153,7 @@ try:
api_request_chain = (
API_URL_PROMPT.partial(api_docs=api_spec)
- | llm.bind_tools(tools, tool_choice="any")
+ | model.bind_tools(tools, tool_choice="any")
)
class ChainState(TypedDict):
@@ -169,7 +170,7 @@ try:
return {"messages": [response]}
async def acall_model(state: ChainState, config: RunnableConfig):
- response = await llm.ainvoke(state["messages"], config)
+ response = await model.ainvoke(state["messages"], config)
return {"messages": [response]}
graph_builder = StateGraph(ChainState)
@@ -196,17 +197,22 @@ try:
"""
api_request_chain: LLMChain
+
api_answer_chain: LLMChain
+
requests_wrapper: TextRequestsWrapper = Field(exclude=True)
+
api_docs: str
- question_key: str = "question" #: :meta private:
- output_key: str = "output" #: :meta private:
+
+ question_key: str = "question"
+
+ output_key: str = "output"
+
limit_to_domains: Sequence[str] | None = Field(default_factory=list)
"""Use to limit the domains that can be accessed by the API chain.
* For example, to limit to just the domain `https://www.example.com`, set
`limit_to_domains=["https://www.example.com"]`.
-
* The default value is an empty tuple, which means that no domains are
allowed by default. By design this will raise an error on instantiation.
* Use a None if you want to allow all domains by default -- this is not
@@ -217,18 +223,12 @@ try:
@property
def input_keys(self) -> list[str]:
- """Expect input key.
-
- :meta private:
- """
+ """Expect input key."""
return [self.question_key]
@property
def output_keys(self) -> list[str]:
- """Expect output key.
-
- :meta private:
- """
+ """Expect output key."""
return [self.output_key]
@model_validator(mode="after")
diff --git a/libs/langchain/langchain_classic/chains/base.py b/libs/langchain/langchain_classic/chains/base.py
index 2d1c0b1e028..c91ccf6c688 100644
--- a/libs/langchain/langchain_classic/chains/base.py
+++ b/libs/langchain/langchain_classic/chains/base.py
@@ -20,7 +20,6 @@ from langchain_core.callbacks import (
CallbackManagerForChainRun,
Callbacks,
)
-from langchain_core.memory import BaseMemory
from langchain_core.outputs import RunInfo
from langchain_core.runnables import (
RunnableConfig,
@@ -38,6 +37,7 @@ from pydantic import (
)
from typing_extensions import override
+from langchain_classic.base_memory import BaseMemory
from langchain_classic.schema import RUN_KEY
logger = logging.getLogger(__name__)
@@ -73,14 +73,14 @@ class Chain(RunnableSerializable[dict[str, Any], dict[str, Any]], ABC):
"""
memory: BaseMemory | None = None
- """Optional memory object. Defaults to `None`.
+ """Optional memory object.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog."""
callbacks: Callbacks = Field(default=None, exclude=True)
- """Optional list of callback handlers (or callback manager). Defaults to `None`.
+ """Optional list of callback handlers (or callback manager).
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
@@ -90,13 +90,13 @@ class Chain(RunnableSerializable[dict[str, Any], dict[str, Any]], ABC):
will be printed to the console. Defaults to the global `verbose` value,
accessible via `langchain.globals.get_verbose()`."""
tags: list[str] | None = None
- """Optional list of tags associated with the chain. Defaults to `None`.
+ """Optional list of tags associated with the chain.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to eg identify a specific instance of a chain with its use case.
"""
- metadata: dict[str, Any] | None = None
- """Optional metadata associated with the chain. Defaults to `None`.
+ metadata: builtins.dict[str, Any] | None = None
+ """Optional metadata associated with the chain.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to eg identify a specific instance of a chain with its use case.
@@ -317,9 +317,9 @@ class Chain(RunnableSerializable[dict[str, Any], dict[str, Any]], ABC):
@abstractmethod
def _call(
self,
- inputs: dict[str, Any],
+ inputs: builtins.dict[str, Any],
run_manager: CallbackManagerForChainRun | None = None,
- ) -> dict[str, Any]:
+ ) -> builtins.dict[str, Any]:
"""Execute the chain.
This is a private method that is not user-facing. It is only called within
@@ -339,9 +339,9 @@ class Chain(RunnableSerializable[dict[str, Any], dict[str, Any]], ABC):
async def _acall(
self,
- inputs: dict[str, Any],
+ inputs: builtins.dict[str, Any],
run_manager: AsyncCallbackManagerForChainRun | None = None,
- ) -> dict[str, Any]:
+ ) -> builtins.dict[str, Any]:
"""Asynchronously execute the chain.
This is a private method that is not user-facing. It is only called within
@@ -387,14 +387,14 @@ class Chain(RunnableSerializable[dict[str, Any], dict[str, Any]], ABC):
return_only_outputs: Whether to return only outputs in the
response. If `True`, only new keys generated by this chain will be
returned. If `False`, both input keys and new keys generated by this
- chain will be returned. Defaults to `False`.
+ chain will be returned.
callbacks: Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags: List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
- metadata: Optional metadata associated with the chain. Defaults to `None`.
+ metadata: Optional metadata associated with the chain.
run_name: Optional name for this run of the chain.
include_run_info: Whether to include run info in the response. Defaults
to False.
@@ -439,14 +439,14 @@ class Chain(RunnableSerializable[dict[str, Any], dict[str, Any]], ABC):
return_only_outputs: Whether to return only outputs in the
response. If `True`, only new keys generated by this chain will be
returned. If `False`, both input keys and new keys generated by this
- chain will be returned. Defaults to `False`.
+ chain will be returned.
callbacks: Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags: List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
- metadata: Optional metadata associated with the chain. Defaults to `None`.
+ metadata: Optional metadata associated with the chain.
run_name: Optional name for this run of the chain.
include_run_info: Whether to include run info in the response. Defaults
to False.
diff --git a/libs/langchain/langchain_classic/chains/combine_documents/base.py b/libs/langchain/langchain_classic/chains/combine_documents/base.py
index ee672ea1001..96b6fdf46ed 100644
--- a/libs/langchain/langchain_classic/chains/combine_documents/base.py
+++ b/libs/langchain/langchain_classic/chains/combine_documents/base.py
@@ -44,8 +44,8 @@ class BaseCombineDocumentsChain(Chain, ABC):
that will be longer than the context length).
"""
- input_key: str = "input_documents" #: :meta private:
- output_key: str = "output_text" #: :meta private:
+ input_key: str = "input_documents"
+ output_key: str = "output_text"
@override
def get_input_schema(
@@ -69,18 +69,12 @@ class BaseCombineDocumentsChain(Chain, ABC):
@property
def input_keys(self) -> list[str]:
- """Expect input key.
-
- :meta private:
- """
+ """Expect input key."""
return [self.input_key]
@property
def output_keys(self) -> list[str]:
- """Return output key.
-
- :meta private:
- """
+ """Return output key."""
return [self.output_key]
def prompt_length(self, docs: list[Document], **kwargs: Any) -> int | None: # noqa: ARG002
@@ -234,24 +228,18 @@ class AnalyzeDocumentChain(Chain):
```
"""
- input_key: str = "input_document" #: :meta private:
+ input_key: str = "input_document"
text_splitter: TextSplitter = Field(default_factory=RecursiveCharacterTextSplitter)
combine_docs_chain: BaseCombineDocumentsChain
@property
def input_keys(self) -> list[str]:
- """Expect input key.
-
- :meta private:
- """
+ """Expect input key."""
return [self.input_key]
@property
def output_keys(self) -> list[str]:
- """Return output key.
-
- :meta private:
- """
+ """Return output key."""
return self.combine_docs_chain.output_keys
@override
diff --git a/libs/langchain/langchain_classic/chains/combine_documents/map_reduce.py b/libs/langchain/langchain_classic/chains/combine_documents/map_reduce.py
index 99fdf7bc1be..4c5cf66913b 100644
--- a/libs/langchain/langchain_classic/chains/combine_documents/map_reduce.py
+++ b/libs/langchain/langchain_classic/chains/combine_documents/map_reduce.py
@@ -44,7 +44,7 @@ class MapReduceDocumentsChain(BaseCombineDocumentsChain):
MapReduceDocumentsChain,
)
from langchain_core.prompts import PromptTemplate
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
# This controls how each document will be formatted. Specifically,
# it will be passed to `format_document` - see that function for more
@@ -53,16 +53,16 @@ class MapReduceDocumentsChain(BaseCombineDocumentsChain):
input_variables=["page_content"], template="{page_content}"
)
document_variable_name = "context"
- llm = OpenAI()
+ model = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
prompt = PromptTemplate.from_template("Summarize this content: {context}")
- llm_chain = LLMChain(llm=llm, prompt=prompt)
+ llm_chain = LLMChain(llm=model, prompt=prompt)
# We now define how to combine these summaries
reduce_prompt = PromptTemplate.from_template(
"Combine these summaries: {context}"
)
- reduce_llm_chain = LLMChain(llm=llm, prompt=reduce_prompt)
+ reduce_llm_chain = LLMChain(llm=model, prompt=reduce_prompt)
combine_documents_chain = StuffDocumentsChain(
llm_chain=reduce_llm_chain,
document_prompt=document_prompt,
@@ -79,7 +79,7 @@ class MapReduceDocumentsChain(BaseCombineDocumentsChain):
# which is specifically aimed at collapsing documents BEFORE
# the final call.
prompt = PromptTemplate.from_template("Collapse this content: {context}")
- llm_chain = LLMChain(llm=llm, prompt=prompt)
+ llm_chain = LLMChain(llm=model, prompt=prompt)
collapse_documents_chain = StuffDocumentsChain(
llm_chain=llm_chain,
document_prompt=document_prompt,
@@ -125,10 +125,7 @@ class MapReduceDocumentsChain(BaseCombineDocumentsChain):
@property
def output_keys(self) -> list[str]:
- """Expect input key.
-
- :meta private:
- """
+ """Expect input key."""
_output_keys = super().output_keys
if self.return_intermediate_steps:
_output_keys = [*_output_keys, "intermediate_steps"]
diff --git a/libs/langchain/langchain_classic/chains/combine_documents/map_rerank.py b/libs/langchain/langchain_classic/chains/combine_documents/map_rerank.py
index a712ab6f06b..d7889c72cf7 100644
--- a/libs/langchain/langchain_classic/chains/combine_documents/map_rerank.py
+++ b/libs/langchain/langchain_classic/chains/combine_documents/map_rerank.py
@@ -38,11 +38,11 @@ class MapRerankDocumentsChain(BaseCombineDocumentsChain):
```python
from langchain_classic.chains import MapRerankDocumentsChain, LLMChain
from langchain_core.prompts import PromptTemplate
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
from langchain_classic.output_parsers.regex import RegexParser
document_variable_name = "context"
- llm = OpenAI()
+ model = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
# The actual prompt will need to be a lot more complex, this is just
@@ -61,7 +61,7 @@ class MapRerankDocumentsChain(BaseCombineDocumentsChain):
input_variables=["context"],
output_parser=output_parser,
)
- llm_chain = LLMChain(llm=llm, prompt=prompt)
+ llm_chain = LLMChain(llm=model, prompt=prompt)
chain = MapRerankDocumentsChain(
llm_chain=llm_chain,
document_variable_name=document_variable_name,
@@ -108,10 +108,7 @@ class MapRerankDocumentsChain(BaseCombineDocumentsChain):
@property
def output_keys(self) -> list[str]:
- """Expect input key.
-
- :meta private:
- """
+ """Expect input key."""
_output_keys = super().output_keys
if self.return_intermediate_steps:
_output_keys = [*_output_keys, "intermediate_steps"]
diff --git a/libs/langchain/langchain_classic/chains/combine_documents/reduce.py b/libs/langchain/langchain_classic/chains/combine_documents/reduce.py
index 0f86c389ff0..d8a09dd1344 100644
--- a/libs/langchain/langchain_classic/chains/combine_documents/reduce.py
+++ b/libs/langchain/langchain_classic/chains/combine_documents/reduce.py
@@ -33,17 +33,18 @@ def split_list_of_docs(
token_max: int,
**kwargs: Any,
) -> list[list[Document]]:
- """Split Documents into subsets that each meet a cumulative length constraint.
+    """Split `Document` objects into subsets meeting a cumulative length constraint.
Args:
- docs: The full list of Documents.
- length_func: Function for computing the cumulative length of a set of Documents.
- token_max: The maximum cumulative length of any subset of Documents.
+ docs: The full list of `Document` objects.
+ length_func: Function for computing the cumulative length of a set of `Document`
+ objects.
+ token_max: The maximum cumulative length of any subset of `Document` objects.
**kwargs: Arbitrary additional keyword params to pass to each call of the
- length_func.
+ `length_func`.
Returns:
- A List[List[Document]].
+ A `list[list[Document]]`.
"""
new_result_doc_list = []
_sub_result_docs = []
@@ -71,18 +72,18 @@ def collapse_docs(
"""Execute a collapse function on a set of documents and merge their metadatas.
Args:
- docs: A list of Documents to combine.
- combine_document_func: A function that takes in a list of Documents and
+ docs: A list of `Document` objects to combine.
+ combine_document_func: A function that takes in a list of `Document` objects and
optionally addition keyword parameters and combines them into a single
string.
**kwargs: Arbitrary additional keyword params to pass to the
- combine_document_func.
+ `combine_document_func`.
Returns:
- A single Document with the output of combine_document_func for the page content
- and the combined metadata's of all the input documents. All metadata values
- are strings, and where there are overlapping keys across documents the
- values are joined by ", ".
+ A single `Document` with the output of `combine_document_func` for the page
+        content and the combined metadata of all the input documents. All metadata
+ values are strings, and where there are overlapping keys across documents
+ the values are joined by `', '`.
"""
result = combine_document_func(docs, **kwargs)
combined_metadata = {k: str(v) for k, v in docs[0].metadata.items()}
@@ -103,18 +104,18 @@ async def acollapse_docs(
"""Execute a collapse function on a set of documents and merge their metadatas.
Args:
- docs: A list of Documents to combine.
- combine_document_func: A function that takes in a list of Documents and
+ docs: A list of `Document` objects to combine.
+ combine_document_func: A function that takes in a list of `Document` objects and
optionally addition keyword parameters and combines them into a single
string.
**kwargs: Arbitrary additional keyword params to pass to the
- combine_document_func.
+ `combine_document_func`.
Returns:
- A single Document with the output of combine_document_func for the page content
- and the combined metadata's of all the input documents. All metadata values
- are strings, and where there are overlapping keys across documents the
- values are joined by ", ".
+ A single `Document` with the output of `combine_document_func` for the page
+        content and the combined metadata of all the input documents. All metadata
+ values are strings, and where there are overlapping keys across documents
+ the values are joined by `', '`.
"""
result = await combine_document_func(docs, **kwargs)
combined_metadata = {k: str(v) for k, v in docs[0].metadata.items()}
@@ -141,11 +142,11 @@ class ReduceDocumentsChain(BaseCombineDocumentsChain):
This involves
- - combine_documents_chain
-
- - collapse_documents_chain
+ - `combine_documents_chain`
+ - `collapse_documents_chain`
`combine_documents_chain` is ALWAYS provided. This is final chain that is called.
+
We pass all previous results to this chain, and the output of this chain is
returned as a final result.
@@ -162,7 +163,7 @@ class ReduceDocumentsChain(BaseCombineDocumentsChain):
ReduceDocumentsChain,
)
from langchain_core.prompts import PromptTemplate
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
# This controls how each document will be formatted. Specifically,
# it will be passed to `format_document` - see that function for more
@@ -171,11 +172,11 @@ class ReduceDocumentsChain(BaseCombineDocumentsChain):
input_variables=["page_content"], template="{page_content}"
)
document_variable_name = "context"
- llm = OpenAI()
+ model = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
prompt = PromptTemplate.from_template("Summarize this content: {context}")
- llm_chain = LLMChain(llm=llm, prompt=prompt)
+ llm_chain = LLMChain(llm=model, prompt=prompt)
combine_documents_chain = StuffDocumentsChain(
llm_chain=llm_chain,
document_prompt=document_prompt,
@@ -188,7 +189,7 @@ class ReduceDocumentsChain(BaseCombineDocumentsChain):
# which is specifically aimed at collapsing documents BEFORE
# the final call.
prompt = PromptTemplate.from_template("Collapse this content: {context}")
- llm_chain = LLMChain(llm=llm, prompt=prompt)
+ llm_chain = LLMChain(llm=model, prompt=prompt)
collapse_documents_chain = StuffDocumentsChain(
llm_chain=llm_chain,
document_prompt=document_prompt,
@@ -203,19 +204,28 @@ class ReduceDocumentsChain(BaseCombineDocumentsChain):
combine_documents_chain: BaseCombineDocumentsChain
"""Final chain to call to combine documents.
- This is typically a StuffDocumentsChain."""
+
+ This is typically a `StuffDocumentsChain`.
+ """
collapse_documents_chain: BaseCombineDocumentsChain | None = None
"""Chain to use to collapse documents if needed until they can all fit.
- If `None`, will use the combine_documents_chain.
- This is typically a StuffDocumentsChain."""
+ If `None`, will use the `combine_documents_chain`.
+
+ This is typically a `StuffDocumentsChain`.
+ """
token_max: int = 3000
- """The maximum number of tokens to group documents into. For example, if
- set to 3000 then documents will be grouped into chunks of no greater than
- 3000 tokens before trying to combine them into a smaller chunk."""
+ """The maximum number of tokens to group documents into.
+
+ For example, if set to 3000 then documents will be grouped into chunks of no greater
+ than 3000 tokens before trying to combine them into a smaller chunk.
+ """
collapse_max_retries: int | None = None
- """The maximum number of retries to collapse documents to fit token_max.
- If `None`, it will keep trying to collapse documents to fit token_max.
- Otherwise, after it reaches the max number, it will throw an error"""
+ """The maximum number of retries to collapse documents to fit `token_max`.
+
+ If `None`, it will keep trying to collapse documents to fit `token_max`.
+
+ Otherwise, after it reaches the max number, it will throw an error.
+ """
model_config = ConfigDict(
arbitrary_types_allowed=True,
@@ -248,7 +258,7 @@ class ReduceDocumentsChain(BaseCombineDocumentsChain):
Returns:
The first element returned is the single string output. The second
- element returned is a dictionary of other keys to return.
+ element returned is a dictionary of other keys to return.
"""
result_docs, _ = self._collapse(
docs,
@@ -282,7 +292,7 @@ class ReduceDocumentsChain(BaseCombineDocumentsChain):
Returns:
The first element returned is the single string output. The second
- element returned is a dictionary of other keys to return.
+ element returned is a dictionary of other keys to return.
"""
result_docs, _ = await self._acollapse(
docs,
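The reworked `split_list_of_docs` docstring above describes the API without an example. A small sketch under the signature shown in this hunk (`docs`, `length_func`, `token_max`), using a character count as a stand-in length function:

```python
from langchain_classic.chains.combine_documents.reduce import split_list_of_docs
from langchain_core.documents import Document

docs = [Document(page_content="a" * 60), Document(page_content="b" * 70)]


def char_length(docs: list[Document]) -> int:
    # Stand-in for a token counter: cumulative character count of the subset.
    return sum(len(d.page_content) for d in docs)


# 60 + 70 exceeds token_max=100, so the two documents land in separate subsets.
groups = split_list_of_docs(docs, char_length, token_max=100)
assert groups == [[docs[0]], [docs[1]]]
```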
diff --git a/libs/langchain/langchain_classic/chains/combine_documents/refine.py b/libs/langchain/langchain_classic/chains/combine_documents/refine.py
index a6d694c63d1..0e66b5690d1 100644
--- a/libs/langchain/langchain_classic/chains/combine_documents/refine.py
+++ b/libs/langchain/langchain_classic/chains/combine_documents/refine.py
@@ -46,7 +46,7 @@ class RefineDocumentsChain(BaseCombineDocumentsChain):
```python
from langchain_classic.chains import RefineDocumentsChain, LLMChain
from langchain_core.prompts import PromptTemplate
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
# This controls how each document will be formatted. Specifically,
# it will be passed to `format_document` - see that function for more
@@ -55,11 +55,11 @@ class RefineDocumentsChain(BaseCombineDocumentsChain):
input_variables=["page_content"], template="{page_content}"
)
document_variable_name = "context"
- llm = OpenAI()
+ model = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
prompt = PromptTemplate.from_template("Summarize this content: {context}")
- initial_llm_chain = LLMChain(llm=llm, prompt=prompt)
+ initial_llm_chain = LLMChain(llm=model, prompt=prompt)
initial_response_name = "prev_response"
# The prompt here should take as an input variable the
# `document_variable_name` as well as `initial_response_name`
@@ -67,7 +67,7 @@ class RefineDocumentsChain(BaseCombineDocumentsChain):
"Here's your first summary: {prev_response}. "
"Now add to it based on the following context: {context}"
)
- refine_llm_chain = LLMChain(llm=llm, prompt=prompt_refine)
+ refine_llm_chain = LLMChain(llm=model, prompt=prompt_refine)
chain = RefineDocumentsChain(
initial_llm_chain=initial_llm_chain,
refine_llm_chain=refine_llm_chain,
@@ -96,10 +96,7 @@ class RefineDocumentsChain(BaseCombineDocumentsChain):
@property
def output_keys(self) -> list[str]:
- """Expect input key.
-
- :meta private:
- """
+ """Expect input key."""
_output_keys = super().output_keys
if self.return_intermediate_steps:
_output_keys = [*_output_keys, "intermediate_steps"]
diff --git a/libs/langchain/langchain_classic/chains/combine_documents/stuff.py b/libs/langchain/langchain_classic/chains/combine_documents/stuff.py
index 5e05d01114e..0556154fe1b 100644
--- a/libs/langchain/langchain_classic/chains/combine_documents/stuff.py
+++ b/libs/langchain/langchain_classic/chains/combine_documents/stuff.py
@@ -35,10 +35,10 @@ def create_stuff_documents_chain(
Args:
llm: Language model.
- prompt: Prompt template. Must contain input variable "context" (override by
+ prompt: Prompt template. Must contain input variable `"context"` (override by
setting document_variable), which will be used for passing in the formatted
documents.
- output_parser: Output parser. Defaults to StrOutputParser.
+ output_parser: Output parser. Defaults to `StrOutputParser`.
document_prompt: Prompt used for formatting each document into a string. Input
variables can be "page_content" or any metadata keys that are in all
documents. "page_content" will automatically retrieve the
@@ -47,18 +47,18 @@ def create_stuff_documents_chain(
a prompt that only contains `Document.page_content`.
document_separator: String separator to use between formatted document strings.
document_variable_name: Variable name to use for the formatted documents in the
- prompt. Defaults to "context".
+ prompt. Defaults to `"context"`.
Returns:
- An LCEL Runnable. The input is a dictionary that must have a "context" key that
- maps to a List[Document], and any other input variables expected in the prompt.
- The Runnable return type depends on output_parser used.
+ An LCEL Runnable. The input is a dictionary that must have a `"context"` key
+ that maps to a `list[Document]`, and any other input variables expected in the
+ prompt. The `Runnable` return type depends on `output_parser` used.
Example:
```python
- # pip install -U langchain langchain-community
+ # pip install -U langchain langchain-openai
- from langchain_community.chat_models import ChatOpenAI
+ from langchain_openai import ChatOpenAI
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_classic.chains.combine_documents import (
@@ -68,8 +68,8 @@ def create_stuff_documents_chain(
prompt = ChatPromptTemplate.from_messages(
[("system", "What are everyone's favorite colors:\n\n{context}")]
)
- llm = ChatOpenAI(model="gpt-3.5-turbo")
- chain = create_stuff_documents_chain(llm, prompt)
+ model = ChatOpenAI(model="gpt-3.5-turbo")
+ chain = create_stuff_documents_chain(model, prompt)
docs = [
Document(page_content="Jesse loves red but not yellow"),
@@ -123,7 +123,7 @@ class StuffDocumentsChain(BaseCombineDocumentsChain):
```python
from langchain_classic.chains import StuffDocumentsChain, LLMChain
from langchain_core.prompts import PromptTemplate
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
# This controls how each document will be formatted. Specifically,
# it will be passed to `format_document` - see that function for more
@@ -132,11 +132,11 @@ class StuffDocumentsChain(BaseCombineDocumentsChain):
input_variables=["page_content"], template="{page_content}"
)
document_variable_name = "context"
- llm = OpenAI()
+ model = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
prompt = PromptTemplate.from_template("Summarize this content: {context}")
- llm_chain = LLMChain(llm=llm, prompt=prompt)
+ llm_chain = LLMChain(llm=model, prompt=prompt)
chain = StuffDocumentsChain(
llm_chain=llm_chain,
document_prompt=document_prompt,
diff --git a/libs/langchain/langchain_classic/chains/constitutional_ai/base.py b/libs/langchain/langchain_classic/chains/constitutional_ai/base.py
index 72bbcbb9323..4eaa74269bc 100644
--- a/libs/langchain/langchain_classic/chains/constitutional_ai/base.py
+++ b/libs/langchain/langchain_classic/chains/constitutional_ai/base.py
@@ -58,7 +58,7 @@ class ConstitutionalChain(Chain):
from langgraph.graph import END, START, StateGraph
from typing_extensions import Annotated, TypedDict
- llm = ChatOpenAI(model="gpt-4o-mini")
+ model = ChatOpenAI(model="gpt-4o-mini")
class Critique(TypedDict):
"""Generate a critique, if needed."""
@@ -86,9 +86,9 @@ class ConstitutionalChain(Chain):
"Revision Request: {revision_request}"
)
- chain = llm | StrOutputParser()
- critique_chain = critique_prompt | llm.with_structured_output(Critique)
- revision_chain = revision_prompt | llm | StrOutputParser()
+ chain = model | StrOutputParser()
+ critique_chain = critique_prompt | model.with_structured_output(Critique)
+ revision_chain = revision_prompt | model | StrOutputParser()
class State(TypedDict):
@@ -165,21 +165,21 @@ class ConstitutionalChain(Chain):
Example:
```python
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
from langchain_classic.chains import LLMChain, ConstitutionalChain
from langchain_classic.chains.constitutional_ai.models \
import ConstitutionalPrinciple
- llm = OpenAI()
+            model = OpenAI()
qa_prompt = PromptTemplate(
template="Q: {question} A:",
input_variables=["question"],
)
- qa_chain = LLMChain(llm=llm, prompt=qa_prompt)
+ qa_chain = LLMChain(llm=model, prompt=qa_prompt)
constitutional_chain = ConstitutionalChain.from_llm(
- llm=llm,
+ llm=model,
chain=qa_chain,
constitutional_principles=[
ConstitutionalPrinciple(
diff --git a/libs/langchain/langchain_classic/chains/conversation/base.py b/libs/langchain/langchain_classic/chains/conversation/base.py
index 07c1c98ee08..98a73a97299 100644
--- a/libs/langchain/langchain_classic/chains/conversation/base.py
+++ b/libs/langchain/langchain_classic/chains/conversation/base.py
@@ -1,11 +1,11 @@
"""Chain that carries on a conversation and calls an LLM."""
from langchain_core._api import deprecated
-from langchain_core.memory import BaseMemory
from langchain_core.prompts import BasePromptTemplate
from pydantic import ConfigDict, Field, model_validator
from typing_extensions import Self, override
+from langchain_classic.base_memory import BaseMemory
from langchain_classic.chains.conversation.prompt import PROMPT
from langchain_classic.chains.llm import LLMChain
from langchain_classic.memory.buffer import ConversationBufferMemory
@@ -47,9 +47,9 @@ class ConversationChain(LLMChain):
return store[session_id]
- llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
+ model = ChatOpenAI(model="gpt-3.5-turbo-0125")
- chain = RunnableWithMessageHistory(llm, get_session_history)
+ chain = RunnableWithMessageHistory(model, get_session_history)
chain.invoke(
"Hi I'm Bob.",
config={"configurable": {"session_id": "1"}},
@@ -85,9 +85,9 @@ class ConversationChain(LLMChain):
return store[session_id]
- llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
+ model = ChatOpenAI(model="gpt-3.5-turbo-0125")
- chain = RunnableWithMessageHistory(llm, get_session_history)
+ chain = RunnableWithMessageHistory(model, get_session_history)
chain.invoke(
"Hi I'm Bob.",
config={"configurable": {"session_id": "1"}},
@@ -97,7 +97,7 @@ class ConversationChain(LLMChain):
Example:
```python
from langchain_classic.chains import ConversationChain
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
conversation = ConversationChain(llm=OpenAI())
```
@@ -108,8 +108,8 @@ class ConversationChain(LLMChain):
prompt: BasePromptTemplate = PROMPT
"""Default conversation prompt to use."""
- input_key: str = "input" #: :meta private:
- output_key: str = "response" #: :meta private:
+ input_key: str = "input"
+ output_key: str = "response"
model_config = ConfigDict(
arbitrary_types_allowed=True,
diff --git a/libs/langchain/langchain_classic/chains/conversational_retrieval/base.py b/libs/langchain/langchain_classic/chains/conversational_retrieval/base.py
index d1424619ec4..db8bd9daf31 100644
--- a/libs/langchain/langchain_classic/chains/conversational_retrieval/base.py
+++ b/libs/langchain/langchain_classic/chains/conversational_retrieval/base.py
@@ -122,10 +122,7 @@ class BaseConversationalRetrievalChain(Chain):
@property
def output_keys(self) -> list[str]:
- """Return the output keys.
-
- :meta private:
- """
+ """Return the output keys."""
_output_keys = [self.output_key]
if self.return_source_documents:
_output_keys = [*_output_keys, "source_documents"]
@@ -283,7 +280,7 @@ class ConversationalRetrievalChain(BaseConversationalRetrievalChain):
retriever = ... # Your retriever
- llm = ChatOpenAI()
+ model = ChatOpenAI()
# Contextualize question
contextualize_q_system_prompt = (
@@ -301,7 +298,7 @@ class ConversationalRetrievalChain(BaseConversationalRetrievalChain):
]
)
history_aware_retriever = create_history_aware_retriever(
- llm, retriever, contextualize_q_prompt
+ model, retriever, contextualize_q_prompt
)
# Answer question
@@ -324,7 +321,7 @@ class ConversationalRetrievalChain(BaseConversationalRetrievalChain):
# Below we use create_stuff_documents_chain to feed all retrieved context
# into the LLM. Note that we can also use StuffDocumentsChain and other
# instances of BaseCombineDocumentsChain.
- question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)
+ question_answer_chain = create_stuff_documents_chain(model, qa_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
# Usage:
@@ -337,17 +334,17 @@ class ConversationalRetrievalChain(BaseConversationalRetrievalChain):
The algorithm for this chain consists of three parts:
1. Use the chat history and the new question to create a "standalone question".
- This is done so that this question can be passed into the retrieval step to fetch
- relevant documents. If only the new question was passed in, then relevant context
- may be lacking. If the whole conversation was passed into retrieval, there may
- be unnecessary information there that would distract from retrieval.
+ This is done so that this question can be passed into the retrieval step to
+ fetch relevant documents. If only the new question was passed in, then relevant
+ context may be lacking. If the whole conversation was passed into retrieval,
+ there may be unnecessary information there that would distract from retrieval.
2. This new question is passed to the retriever and relevant documents are
- returned.
+ returned.
3. The retrieved documents are passed to an LLM along with either the new question
- (default behavior) or the original question and chat history to generate a final
- response.
+ (default behavior) or the original question and chat history to generate a final
+ response.
Example:
```python
@@ -357,7 +354,7 @@ class ConversationalRetrievalChain(BaseConversationalRetrievalChain):
ConversationalRetrievalChain,
)
from langchain_core.prompts import PromptTemplate
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
combine_docs_chain = StuffDocumentsChain(...)
vectorstore = ...
@@ -371,8 +368,8 @@ class ConversationalRetrievalChain(BaseConversationalRetrievalChain):
"Follow up question: {question}"
)
prompt = PromptTemplate.from_template(template)
- llm = OpenAI()
- question_generator_chain = LLMChain(llm=llm, prompt=prompt)
+ model = OpenAI()
+ question_generator_chain = LLMChain(llm=model, prompt=prompt)
chain = ConversationalRetrievalChain(
combine_docs_chain=combine_docs_chain,
retriever=retriever,
diff --git a/libs/langchain/langchain_classic/chains/elasticsearch_database/base.py b/libs/langchain/langchain_classic/chains/elasticsearch_database/base.py
index 589cccbb832..e041de38351 100644
--- a/libs/langchain/langchain_classic/chains/elasticsearch_database/base.py
+++ b/libs/langchain/langchain_classic/chains/elasticsearch_database/base.py
@@ -31,7 +31,7 @@ class ElasticsearchDatabaseChain(Chain):
Example:
```python
from langchain_classic.chains import ElasticsearchDatabaseChain
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
from elasticsearch import Elasticsearch
database = Elasticsearch("http://localhost:9200")
@@ -49,8 +49,8 @@ class ElasticsearchDatabaseChain(Chain):
"""Number of results to return from the query"""
ignore_indices: list[str] | None = None
include_indices: list[str] | None = None
- input_key: str = "question" #: :meta private:
- output_key: str = "result" #: :meta private:
+ input_key: str = "question"
+ output_key: str = "result"
sample_documents_in_index_info: int = 3
return_intermediate_steps: bool = False
"""Whether or not to return the intermediate steps along with the final answer."""
@@ -69,18 +69,12 @@ class ElasticsearchDatabaseChain(Chain):
@property
def input_keys(self) -> list[str]:
- """Return the singular input key.
-
- :meta private:
- """
+ """Return the singular input key."""
return [self.input_key]
@property
def output_keys(self) -> list[str]:
- """Return the singular output key.
-
- :meta private:
- """
+ """Return the singular output key."""
if not self.return_intermediate_steps:
return [self.output_key]
return [self.output_key, INTERMEDIATE_STEPS_KEY]
@@ -198,7 +192,7 @@ class ElasticsearchDatabaseChain(Chain):
query_prompt: The prompt to use for query construction.
answer_prompt: The prompt to use for answering user question given data.
query_output_parser: The output parser to use for parsing model-generated
- ES query. Defaults to SimpleJsonOutputParser.
+ ES query. Defaults to `SimpleJsonOutputParser`.
kwargs: Additional arguments to pass to the constructor.
"""
query_prompt = query_prompt or DSL_PROMPT
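For orientation, a sketch of how `from_llm` is typically wired up; the index contents and question are hypothetical, and this assumes a local Elasticsearch instance plus an OpenAI key (the defaults described above apply: `DSL_PROMPT` for query construction, `SimpleJsonOutputParser` for parsing):

```python
from elasticsearch import Elasticsearch
from langchain_openai import ChatOpenAI

from langchain_classic.chains import ElasticsearchDatabaseChain

database = Elasticsearch("http://localhost:9200")
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

chain = ElasticsearchDatabaseChain.from_llm(llm=model, database=database, verbose=True)

# input_key is "question"; the answer comes back under output_key "result".
chain.invoke({"question": "How many documents are in the customers index?"})
```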
diff --git a/libs/langchain/langchain_classic/chains/history_aware_retriever.py b/libs/langchain/langchain_classic/chains/history_aware_retriever.py
index ed16c85560b..0926cdeaedf 100644
--- a/libs/langchain/langchain_classic/chains/history_aware_retriever.py
+++ b/libs/langchain/langchain_classic/chains/history_aware_retriever.py
@@ -20,28 +20,28 @@ def create_history_aware_retriever(
Args:
llm: Language model to use for generating a search term given chat history
- retriever: RetrieverLike object that takes a string as input and outputs
- a list of Documents.
+ retriever: `RetrieverLike` object that takes a string as input and outputs
+ a list of `Document` objects.
prompt: The prompt used to generate the search query for the retriever.
Returns:
An LCEL Runnable. The runnable input must take in `input`, and if there
is chat history should take it in the form of `chat_history`.
- The Runnable output is a list of Documents
+ The `Runnable` output is a list of `Document` objects
Example:
```python
- # pip install -U langchain langchain-community
+ # pip install -U langchain langchain-openai
- from langchain_community.chat_models import ChatOpenAI
+ from langchain_openai import ChatOpenAI
from langchain_classic.chains import create_history_aware_retriever
from langchain_classic import hub
rephrase_prompt = hub.pull("langchain-ai/chat-langchain-rephrase")
- llm = ChatOpenAI()
+ model = ChatOpenAI()
retriever = ...
chat_retriever_chain = create_history_aware_retriever(
- llm, retriever, rephrase_prompt
+ model, retriever, rephrase_prompt
)
chain.invoke({"input": "...", "chat_history": []})
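The example above invokes the chain with an empty history; on later turns `chat_history` is a list of message objects, which is what lets the model rewrite a follow-up into a standalone search query. A small sketch reusing the `chat_retriever_chain` defined above:

```python
from langchain_core.messages import AIMessage, HumanMessage

# First turn: no history, so the input is passed straight to the retriever.
chat_retriever_chain.invoke({"input": "What is LangChain?", "chat_history": []})

# Later turns: prior messages give the model context to rephrase the follow-up.
chat_retriever_chain.invoke(
    {
        "input": "How do I install it?",
        "chat_history": [
            HumanMessage("What is LangChain?"),
            AIMessage("LangChain is a framework for building LLM applications."),
        ],
    }
)
```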
diff --git a/libs/langchain/langchain_classic/chains/llm.py b/libs/langchain/langchain_classic/chains/llm.py
index 534cd97794c..274fd1d0d83 100644
--- a/libs/langchain/langchain_classic/chains/llm.py
+++ b/libs/langchain/langchain_classic/chains/llm.py
@@ -55,8 +55,8 @@ class LLMChain(Chain):
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
- llm = OpenAI()
- chain = prompt | llm | StrOutputParser()
+ model = OpenAI()
+ chain = prompt | model | StrOutputParser()
chain.invoke("your adjective here")
```
@@ -64,12 +64,12 @@ class LLMChain(Chain):
Example:
```python
from langchain_classic.chains import LLMChain
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
from langchain_core.prompts import PromptTemplate
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
- llm = LLMChain(llm=OpenAI(), prompt=prompt)
+ chain = LLMChain(llm=OpenAI(), prompt=prompt)
```
"""
@@ -82,14 +82,14 @@ class LLMChain(Chain):
"""Prompt object to use."""
llm: Runnable[LanguageModelInput, str] | Runnable[LanguageModelInput, BaseMessage]
"""Language model to call."""
- output_key: str = "text" #: :meta private:
+ output_key: str = "text"
output_parser: BaseLLMOutputParser = Field(default_factory=StrOutputParser)
"""Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise."""
return_final_only: bool = True
- """Whether to return only the final parsed result. Defaults to `True`.
- If false, will return a bunch of extra information about the generation."""
+ """Whether to return only the final parsed result.
+ If `False`, will return a bunch of extra information about the generation."""
llm_kwargs: dict = Field(default_factory=dict)
model_config = ConfigDict(
@@ -99,18 +99,12 @@ class LLMChain(Chain):
@property
def input_keys(self) -> list[str]:
- """Will be whatever keys the prompt expects.
-
- :meta private:
- """
+ """Will be whatever keys the prompt expects."""
return self.prompt.input_variables
@property
def output_keys(self) -> list[str]:
- """Will always return text key.
-
- :meta private:
- """
+ """Will always return text key."""
if self.return_final_only:
return [self.output_key]
return [self.output_key, "full_generation"]
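One behavioral difference worth keeping in mind when migrating from `LLMChain` to the pipeline shown above: the legacy chain returns a dict keyed by `output_key` (default `"text"`), while the LCEL pipeline returns the parser output directly. A sketch, assuming an OpenAI key:

```python
from langchain_classic.chains import LLMChain
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

prompt = PromptTemplate.from_template("Tell me a {adjective} joke")
model = OpenAI()

# LCEL pipeline: returns the parsed string directly.
lcel_chain = prompt | model | StrOutputParser()
lcel_chain.invoke({"adjective": "corny"})  # -> "..."

# Legacy LLMChain: returns the inputs merged with the "text" output key.
legacy_chain = LLMChain(llm=model, prompt=prompt)
legacy_chain.invoke({"adjective": "corny"})  # -> {"adjective": "corny", "text": "..."}
```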
diff --git a/libs/langchain/langchain_classic/chains/llm_checker/base.py b/libs/langchain/langchain_classic/chains/llm_checker/base.py
index 5bd53322e9c..9db3194fb3d 100644
--- a/libs/langchain/langchain_classic/chains/llm_checker/base.py
+++ b/libs/langchain/langchain_classic/chains/llm_checker/base.py
@@ -77,11 +77,11 @@ class LLMCheckerChain(Chain):
Example:
```python
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
from langchain_classic.chains import LLMCheckerChain
- llm = OpenAI(temperature=0.7)
- checker_chain = LLMCheckerChain.from_llm(llm)
+ model = OpenAI(temperature=0.7)
+ checker_chain = LLMCheckerChain.from_llm(model)
```
"""
@@ -97,8 +97,8 @@ class LLMCheckerChain(Chain):
"""[Deprecated]"""
revised_answer_prompt: PromptTemplate = REVISED_ANSWER_PROMPT
"""[Deprecated] Prompt to use when questioning the documents."""
- input_key: str = "query" #: :meta private:
- output_key: str = "result" #: :meta private:
+ input_key: str = "query"
+ output_key: str = "result"
model_config = ConfigDict(
arbitrary_types_allowed=True,
@@ -138,18 +138,12 @@ class LLMCheckerChain(Chain):
@property
def input_keys(self) -> list[str]:
- """Return the singular input key.
-
- :meta private:
- """
+ """Return the singular input key."""
return [self.input_key]
@property
def output_keys(self) -> list[str]:
- """Return the singular output key.
-
- :meta private:
- """
+ """Return the singular output key."""
return [self.output_key]
def _call(
diff --git a/libs/langchain/langchain_classic/chains/llm_math/base.py b/libs/langchain/langchain_classic/chains/llm_math/base.py
index 86eea13e1b7..b8b038d5201 100644
--- a/libs/langchain/langchain_classic/chains/llm_math/base.py
+++ b/libs/langchain/langchain_classic/chains/llm_math/base.py
@@ -84,9 +84,9 @@ class LLMMathChain(Chain):
)
)
- llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
+ model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [calculator]
- llm_with_tools = llm.bind_tools(tools, tool_choice="any")
+ model_with_tools = model.bind_tools(tools, tool_choice="any")
class ChainState(TypedDict):
\"\"\"LangGraph state.\"\"\"
@@ -95,11 +95,11 @@ class LLMMathChain(Chain):
async def acall_chain(state: ChainState, config: RunnableConfig):
last_message = state["messages"][-1]
- response = await llm_with_tools.ainvoke(state["messages"], config)
+ response = await model_with_tools.ainvoke(state["messages"], config)
return {"messages": [response]}
async def acall_model(state: ChainState, config: RunnableConfig):
- response = await llm.ainvoke(state["messages"], config)
+ response = await model.ainvoke(state["messages"], config)
return {"messages": [response]}
graph_builder = StateGraph(ChainState)
@@ -145,7 +145,7 @@ class LLMMathChain(Chain):
Example:
```python
from langchain_classic.chains import LLMMathChain
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
llm_math = LLMMathChain.from_llm(OpenAI())
```
@@ -156,8 +156,8 @@ class LLMMathChain(Chain):
"""[Deprecated] LLM wrapper to use."""
prompt: BasePromptTemplate = PROMPT
"""[Deprecated] Prompt to use to translate to python if necessary."""
- input_key: str = "question" #: :meta private:
- output_key: str = "answer" #: :meta private:
+ input_key: str = "question"
+ output_key: str = "answer"
model_config = ConfigDict(
arbitrary_types_allowed=True,
@@ -189,18 +189,12 @@ class LLMMathChain(Chain):
@property
def input_keys(self) -> list[str]:
- """Expect input key.
-
- :meta private:
- """
+ """Expect input key."""
return [self.input_key]
@property
def output_keys(self) -> list[str]:
- """Expect output key.
-
- :meta private:
- """
+ """Expect output key."""
return [self.output_key]
def _evaluate_expression(self, expression: str) -> str:
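The LangGraph migration example in this docstring assumes a `calculator` tool defined earlier in the example (outside the hunk shown here). One way to define it, sketched on `numexpr`, which is also what `_evaluate_expression` relies on; the exposed constants are an assumption:

```python
import math

import numexpr
from langchain_core.tools import tool


@tool
def calculator(expression: str) -> str:
    """Evaluate a single-line mathematical expression, e.g. "37593 * 67"."""
    return str(
        numexpr.evaluate(
            expression.strip(),
            global_dict={},  # restrict globals for safety
            local_dict={"pi": math.pi, "e": math.e},  # common constants
        )
    )
```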
diff --git a/libs/langchain/langchain_classic/chains/llm_summarization_checker/base.py b/libs/langchain/langchain_classic/chains/llm_summarization_checker/base.py
index 28e745772a7..e5d86dd2f03 100644
--- a/libs/langchain/langchain_classic/chains/llm_summarization_checker/base.py
+++ b/libs/langchain/langchain_classic/chains/llm_summarization_checker/base.py
@@ -80,11 +80,11 @@ class LLMSummarizationCheckerChain(Chain):
Example:
```python
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
from langchain_classic.chains import LLMSummarizationCheckerChain
- llm = OpenAI(temperature=0.0)
- checker_chain = LLMSummarizationCheckerChain.from_llm(llm)
+ model = OpenAI(temperature=0.0)
+ checker_chain = LLMSummarizationCheckerChain.from_llm(model)
```
"""
@@ -101,8 +101,8 @@ class LLMSummarizationCheckerChain(Chain):
are_all_true_prompt: PromptTemplate = ARE_ALL_TRUE_PROMPT
"""[Deprecated]"""
- input_key: str = "query" #: :meta private:
- output_key: str = "result" #: :meta private:
+ input_key: str = "query"
+ output_key: str = "result"
max_checks: int = 2
"""Maximum number of times to check the assertions. Default to double-checking."""
@@ -134,18 +134,12 @@ class LLMSummarizationCheckerChain(Chain):
@property
def input_keys(self) -> list[str]:
- """Return the singular input key.
-
- :meta private:
- """
+ """Return the singular input key."""
return [self.input_key]
@property
def output_keys(self) -> list[str]:
- """Return the singular output key.
-
- :meta private:
- """
+ """Return the singular output key."""
return [self.output_key]
def _call(
diff --git a/libs/langchain/langchain_classic/chains/mapreduce.py b/libs/langchain/langchain_classic/chains/mapreduce.py
index 81dc9731906..1845439f7a6 100644
--- a/libs/langchain/langchain_classic/chains/mapreduce.py
+++ b/libs/langchain/langchain_classic/chains/mapreduce.py
@@ -44,8 +44,8 @@ class MapReduceChain(Chain):
"""Chain to use to combine documents."""
text_splitter: TextSplitter
"""Text splitter to use."""
- input_key: str = "input_text" #: :meta private:
- output_key: str = "output_text" #: :meta private:
+ input_key: str = "input_text"
+ output_key: str = "output_text"
@classmethod
def from_params(
@@ -88,18 +88,12 @@ class MapReduceChain(Chain):
@property
def input_keys(self) -> list[str]:
- """Expect input key.
-
- :meta private:
- """
+ """Expect input key."""
return [self.input_key]
@property
def output_keys(self) -> list[str]:
- """Return output key.
-
- :meta private:
- """
+ """Return output key."""
return [self.output_key]
def _call(
diff --git a/libs/langchain/langchain_classic/chains/moderation.py b/libs/langchain/langchain_classic/chains/moderation.py
index 1a7e9aa5002..f4c80e4cdb3 100644
--- a/libs/langchain/langchain_classic/chains/moderation.py
+++ b/libs/langchain/langchain_classic/chains/moderation.py
@@ -30,14 +30,14 @@ class OpenAIModerationChain(Chain):
```
"""
- client: Any = None #: :meta private:
- async_client: Any = None #: :meta private:
+ client: Any = None
+ async_client: Any = None
model_name: str | None = None
"""Moderation model name to use."""
error: bool = False
"""Whether or not to error if bad content was found."""
- input_key: str = "input" #: :meta private:
- output_key: str = "output" #: :meta private:
+ input_key: str = "input"
+ output_key: str = "output"
openai_api_key: str | None = None
openai_organization: str | None = None
openai_pre_1_0: bool = Field(default=False)
@@ -84,18 +84,12 @@ class OpenAIModerationChain(Chain):
@property
def input_keys(self) -> list[str]:
- """Expect input key.
-
- :meta private:
- """
+ """Expect input key."""
return [self.input_key]
@property
def output_keys(self) -> list[str]:
- """Return output key.
-
- :meta private:
- """
+ """Return output key."""
return [self.output_key]
def _moderate(self, text: str, results: Any) -> str:
diff --git a/libs/langchain/langchain_classic/chains/natbot/base.py b/libs/langchain/langchain_classic/chains/natbot/base.py
index 6c482bccd86..d36033525c3 100644
--- a/libs/langchain/langchain_classic/chains/natbot/base.py
+++ b/libs/langchain/langchain_classic/chains/natbot/base.py
@@ -40,7 +40,7 @@ class NatBotChain(Chain):
access and use this chain, and isolate the network access of the server
that hosts this chain.
- See https://python.langchain.com/docs/security for more information.
+ See https://docs.langchain.com/oss/python/security-policy for more information.
Example:
```python
@@ -55,10 +55,10 @@ class NatBotChain(Chain):
"""Objective that NatBot is tasked with completing."""
llm: BaseLanguageModel | None = None
"""[Deprecated] LLM wrapper to use."""
- input_url_key: str = "url" #: :meta private:
- input_browser_content_key: str = "browser_content" #: :meta private:
- previous_command: str = "" #: :meta private:
- output_key: str = "command" #: :meta private:
+ input_url_key: str = "url"
+ input_browser_content_key: str = "browser_content"
+ previous_command: str = ""
+ output_key: str = "command"
model_config = ConfigDict(
arbitrary_types_allowed=True,
@@ -84,8 +84,8 @@ class NatBotChain(Chain):
"""Load with default LLMChain."""
msg = (
"This method is no longer implemented. Please use from_llm."
- "llm = OpenAI(temperature=0.5, best_of=10, n=3, max_tokens=50)"
- "For example, NatBotChain.from_llm(llm, objective)"
+ "model = OpenAI(temperature=0.5, best_of=10, n=3, max_tokens=50)"
+ "For example, NatBotChain.from_llm(model, objective)"
)
raise NotImplementedError(msg)
@@ -102,18 +102,12 @@ class NatBotChain(Chain):
@property
def input_keys(self) -> list[str]:
- """Expect url and browser content.
-
- :meta private:
- """
+ """Expect url and browser content."""
return [self.input_url_key, self.input_browser_content_key]
@property
def output_keys(self) -> list[str]:
- """Return command.
-
- :meta private:
- """
+ """Return command."""
return [self.output_key]
def _call(
diff --git a/libs/langchain/langchain_classic/chains/natbot/crawler.py b/libs/langchain/langchain_classic/chains/natbot/crawler.py
index 692d90fad39..037977038ce 100644
--- a/libs/langchain/langchain_classic/chains/natbot/crawler.py
+++ b/libs/langchain/langchain_classic/chains/natbot/crawler.py
@@ -58,7 +58,7 @@ class Crawler:
Make sure to scope permissions to the minimal permissions necessary for
the application.
- See https://python.langchain.com/docs/security for more information.
+ See https://docs.langchain.com/oss/python/security-policy for more information.
"""
def __init__(self) -> None:
diff --git a/libs/langchain/langchain_classic/chains/openai_functions/base.py b/libs/langchain/langchain_classic/chains/openai_functions/base.py
index 417ae1e2a64..0a3e89fc304 100644
--- a/libs/langchain/langchain_classic/chains/openai_functions/base.py
+++ b/libs/langchain/langchain_classic/chains/openai_functions/base.py
@@ -85,7 +85,7 @@ def create_openai_fn_chain(
from typing import Optional
from langchain_classic.chains.openai_functions import create_openai_fn_chain
- from langchain_community.chat_models import ChatOpenAI
+ from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field
@@ -107,7 +107,7 @@ def create_openai_fn_chain(
fav_food: str | None = Field(None, description="The dog's favorite food")
- llm = ChatOpenAI(model="gpt-4", temperature=0)
+ model = ChatOpenAI(model="gpt-4", temperature=0)
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a world class algorithm for recording entities."),
@@ -115,7 +115,7 @@ def create_openai_fn_chain(
("human", "Tip: Make sure to answer in the correct format"),
]
)
- chain = create_openai_fn_chain([RecordPerson, RecordDog], llm, prompt)
+ chain = create_openai_fn_chain([RecordPerson, RecordDog], model, prompt)
chain.run("Harry was a chubby brown beagle who loved chicken")
# -> RecordDog(name="Harry", color="brown", fav_food="chicken")
@@ -179,7 +179,7 @@ def create_structured_output_chain(
from typing import Optional
from langchain_classic.chains.openai_functions import create_structured_output_chain
- from langchain_community.chat_models import ChatOpenAI
+ from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field
@@ -191,7 +191,7 @@ def create_structured_output_chain(
color: str = Field(..., description="The dog's color")
fav_food: str | None = Field(None, description="The dog's favorite food")
- llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
+ model = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a world class algorithm for extracting information in structured formats."),
@@ -199,7 +199,7 @@ def create_structured_output_chain(
("human", "Tip: Make sure to answer in the correct format"),
]
)
- chain = create_structured_output_chain(Dog, llm, prompt)
+ chain = create_structured_output_chain(Dog, model, prompt)
chain.run("Harry was a chubby brown beagle who loved chicken")
# -> Dog(name="Harry", color="brown", fav_food="chicken")
diff --git a/libs/langchain/langchain_classic/chains/openai_functions/citation_fuzzy_match.py b/libs/langchain/langchain_classic/chains/openai_functions/citation_fuzzy_match.py
index 0a8cc3bed02..4b3fd369d39 100644
--- a/libs/langchain/langchain_classic/chains/openai_functions/citation_fuzzy_match.py
+++ b/libs/langchain/langchain_classic/chains/openai_functions/citation_fuzzy_match.py
@@ -83,12 +83,12 @@ def create_citation_fuzzy_match_runnable(llm: BaseChatModel) -> Runnable:
from langchain_classic.chains import create_citation_fuzzy_match_runnable
from langchain_openai import ChatOpenAI
- llm = ChatOpenAI(model="gpt-4o-mini")
+ model = ChatOpenAI(model="gpt-4o-mini")
context = "Alice has blue eyes. Bob has brown eyes. Charlie has green eyes."
question = "What color are Bob's eyes?"
- chain = create_citation_fuzzy_match_runnable(llm)
+ chain = create_citation_fuzzy_match_runnable(model)
chain.invoke({"question": question, "context": context})
```
diff --git a/libs/langchain/langchain_classic/chains/openai_functions/extraction.py b/libs/langchain/langchain_classic/chains/openai_functions/extraction.py
index 1230bc769f7..467ec638708 100644
--- a/libs/langchain/langchain_classic/chains/openai_functions/extraction.py
+++ b/libs/langchain/langchain_classic/chains/openai_functions/extraction.py
@@ -50,13 +50,7 @@ Passage:
"LangChain has introduced a method called `with_structured_output` that"
"is available on ChatModels capable of tool calling."
"You can read more about the method here: "
- ". "
- "Please follow our extraction use case documentation for more guidelines"
- "on how to do information extraction with LLMs."
- ". "
- "If you notice other issues, please provide "
- "feedback here:"
- ""
+ "."
),
removal="1.0",
alternative=(
@@ -69,12 +63,12 @@ Passage:
punchline: str = Field(description="The punchline to the joke")
# Or any other chat model that supports tools.
- # Please reference to to the documentation of structured_output
+ # Please refer to the documentation of structured_output
# to see an up to date list of which models support
# with_structured_output.
- model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0)
- structured_llm = model.with_structured_output(Joke)
- structured_llm.invoke("Tell me a joke about cats.
+ model = ChatAnthropic(model="claude-opus-4-1-20250805", temperature=0)
+ structured_model = model.with_structured_output(Joke)
+ structured_model.invoke("Tell me a joke about cats.
Make sure to call the Joke function.")
"""
),
@@ -94,8 +88,7 @@ def create_extraction_chain(
prompt: The prompt to use for extraction.
tags: Optional list of tags to associate with the chain.
verbose: Whether to run in verbose mode. In verbose mode, some intermediate
- logs will be printed to the console. Defaults to the global `verbose` value,
- accessible via `langchain.globals.get_verbose()`.
+ logs will be printed to the console.
Returns:
Chain that can be used to extract information from a passage.
@@ -120,7 +113,7 @@ def create_extraction_chain(
"LangChain has introduced a method called `with_structured_output` that"
"is available on ChatModels capable of tool calling."
"You can read more about the method here: "
- ". "
+ ". "
"Please follow our extraction use case documentation for more guidelines"
"on how to do information extraction with LLMs."
". "
@@ -139,12 +132,12 @@ def create_extraction_chain(
punchline: str = Field(description="The punchline to the joke")
# Or any other chat model that supports tools.
- # Please reference to to the documentation of structured_output
+ # Please refer to the documentation of structured_output
# to see an up to date list of which models support
# with_structured_output.
- model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0)
- structured_llm = model.with_structured_output(Joke)
- structured_llm.invoke("Tell me a joke about cats.
+ model = ChatAnthropic(model="claude-opus-4-1-20250805", temperature=0)
+ structured_model = model.with_structured_output(Joke)
+ structured_model.invoke("Tell me a joke about cats.
Make sure to call the Joke function.")
"""
),
@@ -155,15 +148,14 @@ def create_extraction_chain_pydantic(
prompt: BasePromptTemplate | None = None,
verbose: bool = False, # noqa: FBT001,FBT002
) -> Chain:
- """Creates a chain that extracts information from a passage using pydantic schema.
+ """Creates a chain that extracts information from a passage using Pydantic schema.
Args:
- pydantic_schema: The pydantic schema of the entities to extract.
+ pydantic_schema: The Pydantic schema of the entities to extract.
llm: The language model to use.
prompt: The prompt to use for extraction.
verbose: Whether to run in verbose mode. In verbose mode, some intermediate
- logs will be printed to the console. Defaults to the global `verbose` value,
- accessible via `langchain.globals.get_verbose()`
+ logs will be printed to the console.
Returns:
Chain that can be used to extract information from a passage.
diff --git a/libs/langchain/langchain_classic/chains/openai_functions/openapi.py b/libs/langchain/langchain_classic/chains/openai_functions/openapi.py
index 2cfbfc31522..c29c111e388 100644
--- a/libs/langchain/langchain_classic/chains/openai_functions/openapi.py
+++ b/libs/langchain/langchain_classic/chains/openai_functions/openapi.py
@@ -330,7 +330,7 @@ def get_openapi_chain(
prompt = ChatPromptTemplate.from_template(
"Use the provided APIs to respond to this user query:\\n\\n{query}"
)
- llm = ChatOpenAI(model="gpt-4o-mini", temperature=0).bind_tools(tools)
+ model = ChatOpenAI(model="gpt-4o-mini", temperature=0).bind_tools(tools)
def _execute_tool(message) -> Any:
if tool_calls := message.tool_calls:
@@ -341,7 +341,7 @@ def get_openapi_chain(
else:
return message.content
- chain = prompt | llm | _execute_tool
+ chain = prompt | model | _execute_tool
```
```python
@@ -394,7 +394,7 @@ def get_openapi_chain(
msg = (
"Must provide an LLM for this chain.For example,\n"
"from langchain_openai import ChatOpenAI\n"
- "llm = ChatOpenAI()\n"
+ "model = ChatOpenAI()\n"
)
raise ValueError(msg)
prompt = prompt or ChatPromptTemplate.from_template(
diff --git a/libs/langchain/langchain_classic/chains/openai_functions/qa_with_structure.py b/libs/langchain/langchain_classic/chains/openai_functions/qa_with_structure.py
index 8e7bede26c6..b36a9c49a22 100644
--- a/libs/langchain/langchain_classic/chains/openai_functions/qa_with_structure.py
+++ b/libs/langchain/langchain_classic/chains/openai_functions/qa_with_structure.py
@@ -51,8 +51,7 @@ def create_qa_with_structure_chain(
Args:
llm: Language model to use for the chain.
schema: Pydantic schema to use for the output.
- output_parser: Output parser to use. Should be one of `pydantic` or `base`.
- Default to `base`.
+ output_parser: Output parser to use. Should be one of `'pydantic'` or `'base'`.
prompt: Optional prompt to use for the chain.
verbose: Whether to run the chain in verbose mode.
diff --git a/libs/langchain/langchain_classic/chains/openai_functions/tagging.py b/libs/langchain/langchain_classic/chains/openai_functions/tagging.py
index 81021bc842d..6c30b21cd0c 100644
--- a/libs/langchain/langchain_classic/chains/openai_functions/tagging.py
+++ b/libs/langchain/langchain_classic/chains/openai_functions/tagging.py
@@ -41,7 +41,7 @@ Passage:
"See API reference for this function for replacement: <"
"https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.tagging.create_tagging_chain.html"
"> You can read more about `with_structured_output` here: "
- ". "
+ ". "
"If you notice other issues, please provide "
"feedback here: "
""
@@ -73,18 +73,18 @@ def create_tagging_chain(
punchline: Annotated[str, ..., "The punchline of the joke"]
# Or any other chat model that supports tools.
- # Please reference to to the documentation of structured_output
+ # Please refer to the documentation of structured_output
# to see an up to date list of which models support
# with_structured_output.
model = ChatAnthropic(model="claude-3-haiku-20240307", temperature=0)
- structured_llm = model.with_structured_output(Joke)
- structured_llm.invoke(
+ structured_model = model.with_structured_output(Joke)
+ structured_model.invoke(
"Why did the cat cross the road? To get to the other "
"side... and then lay down in the middle of it!"
)
```
- Read more here: https://python.langchain.com/docs/how_to/structured_output/
+ Read more here: https://docs.langchain.com/oss/python/langchain/models#structured-outputs
Args:
schema: The schema of the entities to extract.
@@ -93,7 +93,7 @@ def create_tagging_chain(
kwargs: Additional keyword arguments to pass to the chain.
Returns:
- Chain (LLMChain) that can be used to extract information from a passage.
+ Chain (`LLMChain`) that can be used to extract information from a passage.
"""
function = _get_tagging_function(schema)
@@ -117,7 +117,7 @@ def create_tagging_chain(
"See API reference for this function for replacement: <"
"https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.tagging.create_tagging_chain_pydantic.html"
"> You can read more about `with_structured_output` here: "
- ". "
+ ". "
"If you notice other issues, please provide "
"feedback here: "
""
@@ -130,10 +130,10 @@ def create_tagging_chain_pydantic(
prompt: ChatPromptTemplate | None = None,
**kwargs: Any,
) -> Chain:
- """Create tagging chain from pydantic schema.
+ """Create tagging chain from Pydantic schema.
Create a chain that extracts information from a passage
- based on a pydantic schema.
+ based on a Pydantic schema.
This function is deprecated. Please use `with_structured_output` instead.
See example usage below:
@@ -149,27 +149,27 @@ def create_tagging_chain_pydantic(
# Or any other chat model that supports tools.
- # Please reference to to the documentation of structured_output
+ # Please refer to the documentation of structured_output
# to see an up to date list of which models support
# with_structured_output.
- model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0)
- structured_llm = model.with_structured_output(Joke)
- structured_llm.invoke(
+ model = ChatAnthropic(model="claude-opus-4-1-20250805", temperature=0)
+ structured_model = model.with_structured_output(Joke)
+ structured_model.invoke(
"Why did the cat cross the road? To get to the other "
"side... and then lay down in the middle of it!"
)
```
- Read more here: https://python.langchain.com/docs/how_to/structured_output/
+ Read more here: https://docs.langchain.com/oss/python/langchain/models#structured-outputs
Args:
- pydantic_schema: The pydantic schema of the entities to extract.
+ pydantic_schema: The Pydantic schema of the entities to extract.
llm: The language model to use.
prompt: The prompt template to use for the chain.
kwargs: Additional keyword arguments to pass to the chain.
Returns:
- Chain (LLMChain) that can be used to extract information from a passage.
+ Chain (`LLMChain`) that can be used to extract information from a passage.
"""
if hasattr(pydantic_schema, "model_json_schema"):
diff --git a/libs/langchain/langchain_classic/chains/openai_tools/extraction.py b/libs/langchain/langchain_classic/chains/openai_tools/extraction.py
index 36c5729cdbc..31460f401d5 100644
--- a/libs/langchain/langchain_classic/chains/openai_tools/extraction.py
+++ b/libs/langchain/langchain_classic/chains/openai_tools/extraction.py
@@ -3,7 +3,9 @@ from langchain_core.language_models import BaseLanguageModel
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import Runnable
-from langchain_core.utils.function_calling import convert_pydantic_to_openai_function
+from langchain_core.utils.function_calling import (
+ convert_to_openai_function as convert_pydantic_to_openai_function,
+)
from pydantic import BaseModel
_EXTRACTION_TEMPLATE = """Extract and save the relevant entities mentioned \
@@ -18,7 +20,7 @@ If a property is not present and is not required in the function parameters, do
"LangChain has introduced a method called `with_structured_output` that"
"is available on ChatModels capable of tool calling."
"You can read more about the method here: "
- ". "
+ ". "
"Please follow our extraction use case documentation for more guidelines"
"on how to do information extraction with LLMs."
". "
@@ -38,12 +40,12 @@ If a property is not present and is not required in the function parameters, do
punchline: str = Field(description="The punchline to the joke")
# Or any other chat model that supports tools.
- # Please reference to to the documentation of structured_output
+ # Please refer to the documentation of structured_output
# to see an up to date list of which models support
# with_structured_output.
- model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0)
- structured_llm = model.with_structured_output(Joke)
- structured_llm.invoke("Tell me a joke about cats.
+ model = ChatAnthropic(model="claude-opus-4-1-20250805", temperature=0)
+ structured_model = model.with_structured_output(Joke)
+ structured_model.invoke("Tell me a joke about cats.
Make sure to call the Joke function.")
"""
),
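The import aliasing above works because `convert_to_openai_function` accepts Pydantic classes (as well as dicts, callables, and tools), so it can stand in for the removed Pydantic-specific helper without changing call sites. A quick sketch of what the call returns:

```python
from langchain_core.utils.function_calling import convert_to_openai_function
from pydantic import BaseModel, Field


class Joke(BaseModel):
    """Joke to tell the user."""

    setup: str = Field(description="The setup for the joke")
    punchline: str = Field(description="The punchline to the joke")


schema = convert_to_openai_function(Joke)
# -> {"name": "Joke", "description": "Joke to tell the user.", "parameters": {...}}
```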
diff --git a/libs/langchain/langchain_classic/chains/prompt_selector.py b/libs/langchain/langchain_classic/chains/prompt_selector.py
index bb8665b9fcf..431b8df33fd 100644
--- a/libs/langchain/langchain_classic/chains/prompt_selector.py
+++ b/libs/langchain/langchain_classic/chains/prompt_selector.py
@@ -48,7 +48,7 @@ def is_llm(llm: BaseLanguageModel) -> bool:
llm: Language model to check.
Returns:
- True if the language model is a BaseLLM model, False otherwise.
+ `True` if the language model is a `BaseLLM`, `False` otherwise.
"""
return isinstance(llm, BaseLLM)
@@ -60,6 +60,6 @@ def is_chat_model(llm: BaseLanguageModel) -> bool:
llm: Language model to check.
Returns:
- True if the language model is a BaseChatModel model, False otherwise.
+ `True` if the language model is a `BaseChatModel`, `False` otherwise.
"""
return isinstance(llm, BaseChatModel)
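These predicates are typically used with `ConditionalPromptSelector`, which picks a prompt based on the model type at runtime. A minimal sketch; the prompts here are placeholders:

```python
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate
from langchain_openai import ChatOpenAI

from langchain_classic.chains.prompt_selector import (
    ConditionalPromptSelector,
    is_chat_model,
)

completion_prompt = PromptTemplate.from_template("Summarize: {text}")
chat_prompt = ChatPromptTemplate.from_messages([("human", "Summarize: {text}")])

selector = ConditionalPromptSelector(
    default_prompt=completion_prompt,
    conditionals=[(is_chat_model, chat_prompt)],
)

# Returns chat_prompt here, since ChatOpenAI is a BaseChatModel subclass.
prompt = selector.get_prompt(ChatOpenAI())
```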
diff --git a/libs/langchain/langchain_classic/chains/qa_generation/base.py b/libs/langchain/langchain_classic/chains/qa_generation/base.py
index df74669d045..fc2ca8824b8 100644
--- a/libs/langchain/langchain_classic/chains/qa_generation/base.py
+++ b/libs/langchain/langchain_classic/chains/qa_generation/base.py
@@ -34,7 +34,7 @@ class QAGenerationChain(Chain):
- Supports async and streaming;
- Surfaces prompt and text splitter for easier customization;
- Use of JsonOutputParser supports JSONPatch operations in streaming mode,
- as well as robustness to markdown.
+ as well as robustness to markdown.
```python
from langchain_classic.chains.qa_generation.prompt import (
@@ -52,14 +52,14 @@ class QAGenerationChain(Chain):
from langchain_openai import ChatOpenAI
from langchain_text_splitters import RecursiveCharacterTextSplitter
- llm = ChatOpenAI()
+ model = ChatOpenAI()
text_splitter = RecursiveCharacterTextSplitter(chunk_overlap=500)
split_text = RunnableLambda(lambda x: text_splitter.create_documents([x]))
chain = RunnableParallel(
text=RunnablePassthrough(),
questions=(
- split_text | RunnableEach(bound=prompt | llm | JsonOutputParser())
+ split_text | RunnableEach(bound=prompt | model | JsonOutputParser())
),
)
```
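Given the pieces above, the replacement chain is invoked with the raw text; by construction of the `RunnableParallel`, the result is a dict with the passthrough input under `text` and the per-chunk parsed output under `questions`. A small usage sketch:

```python
some_text = "..."  # any long document text

result = chain.invoke(some_text)
result["text"]       # the original input, passed through unchanged
result["questions"]  # parsed JSON output for each chunk produced by the splitter
```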
diff --git a/libs/langchain/langchain_classic/chains/qa_with_sources/base.py b/libs/langchain/langchain_classic/chains/qa_with_sources/base.py
index 30966c177c2..3edd695157d 100644
--- a/libs/langchain/langchain_classic/chains/qa_with_sources/base.py
+++ b/libs/langchain/langchain_classic/chains/qa_with_sources/base.py
@@ -48,10 +48,10 @@ class BaseQAWithSourcesChain(Chain, ABC):
combine_documents_chain: BaseCombineDocumentsChain
"""Chain to use to combine documents."""
- question_key: str = "question" #: :meta private:
- input_docs_key: str = "docs" #: :meta private:
- answer_key: str = "answer" #: :meta private:
- sources_answer_key: str = "sources" #: :meta private:
+ question_key: str = "question"
+ input_docs_key: str = "docs"
+ answer_key: str = "answer"
+ sources_answer_key: str = "sources"
return_source_documents: bool = False
"""Return the source documents."""
@@ -109,18 +109,12 @@ class BaseQAWithSourcesChain(Chain, ABC):
@property
def input_keys(self) -> list[str]:
- """Expect input key.
-
- :meta private:
- """
+ """Expect input key."""
return [self.question_key]
@property
def output_keys(self) -> list[str]:
- """Return output key.
-
- :meta private:
- """
+ """Return output key."""
_output_keys = [self.answer_key, self.sources_answer_key]
if self.return_source_documents:
_output_keys = [*_output_keys, "source_documents"]
@@ -233,14 +227,11 @@ class BaseQAWithSourcesChain(Chain, ABC):
class QAWithSourcesChain(BaseQAWithSourcesChain):
"""Question answering with sources over documents."""
- input_docs_key: str = "docs" #: :meta private:
+ input_docs_key: str = "docs"
@property
def input_keys(self) -> list[str]:
- """Expect input key.
-
- :meta private:
- """
+ """Expect input key."""
return [self.input_docs_key, self.question_key]
@override
diff --git a/libs/langchain/langchain_classic/chains/query_constructor/base.py b/libs/langchain/langchain_classic/chains/query_constructor/base.py
index 902ea26b8b3..ed42df6cb29 100644
--- a/libs/langchain/langchain_classic/chains/query_constructor/base.py
+++ b/libs/langchain/langchain_classic/chains/query_constructor/base.py
@@ -218,7 +218,7 @@ def get_query_constructor_prompt(
examples: Optional list of examples to use for the chain.
allowed_comparators: Sequence of allowed comparators.
allowed_operators: Sequence of allowed operators.
- enable_limit: Whether to enable the limit operator. Defaults to `False`.
+ enable_limit: Whether to enable the limit operator.
schema_prompt: Prompt for describing query schema. Should have string input
variables allowed_comparators and allowed_operators.
kwargs: Additional named params to pass to FewShotPromptTemplate init.
@@ -289,9 +289,10 @@ def load_query_constructor_chain(
attribute_info: Sequence of attributes in the document.
examples: Optional list of examples to use for the chain.
allowed_comparators: Sequence of allowed comparators. Defaults to all
- Comparators.
- allowed_operators: Sequence of allowed operators. Defaults to all Operators.
- enable_limit: Whether to enable the limit operator. Defaults to `False`.
+ `Comparator` objects.
+ allowed_operators: Sequence of allowed operators. Defaults to all `Operator`
+ objects.
+ enable_limit: Whether to enable the limit operator.
schema_prompt: Prompt for describing query schema. Should have string input
variables allowed_comparators and allowed_operators.
**kwargs: Arbitrary named params to pass to LLMChain.
@@ -344,9 +345,10 @@ def load_query_constructor_runnable(
attribute_info: Sequence of attributes in the document.
examples: Optional list of examples to use for the chain.
allowed_comparators: Sequence of allowed comparators. Defaults to all
- Comparators.
- allowed_operators: Sequence of allowed operators. Defaults to all Operators.
- enable_limit: Whether to enable the limit operator. Defaults to `False`.
+ `Comparator` objects.
+ allowed_operators: Sequence of allowed operators. Defaults to all `Operator`
+ objects.
+ enable_limit: Whether to enable the limit operator.
schema_prompt: Prompt for describing query schema. Should have string input
variables allowed_comparators and allowed_operators.
fix_invalid: Whether to fix invalid filter directives by ignoring invalid
diff --git a/libs/langchain/langchain_classic/chains/query_constructor/parser.py b/libs/langchain/langchain_classic/chains/query_constructor/parser.py
index d7e4de680e6..2bc4d3a05c1 100644
--- a/libs/langchain/langchain_classic/chains/query_constructor/parser.py
+++ b/libs/langchain/langchain_classic/chains/query_constructor/parser.py
@@ -112,7 +112,7 @@ class QueryTransformer(Transformer):
args: The arguments passed to the function.
Returns:
- FilterDirective: The filter directive.
+ The filter directive.
Raises:
ValueError: If the function is a comparator and the first arg is not in the
diff --git a/libs/langchain/langchain_classic/chains/retrieval.py b/libs/langchain/langchain_classic/chains/retrieval.py
index 8eb4ff47768..5d635265cc9 100644
--- a/libs/langchain/langchain_classic/chains/retrieval.py
+++ b/libs/langchain/langchain_classic/chains/retrieval.py
@@ -36,9 +36,9 @@ def create_retrieval_chain(
Example:
```python
- # pip install -U langchain langchain-community
+ # pip install -U langchain langchain-openai
- from langchain_community.chat_models import ChatOpenAI
+ from langchain_openai import ChatOpenAI
from langchain_classic.chains.combine_documents import (
create_stuff_documents_chain,
)
@@ -46,9 +46,11 @@ def create_retrieval_chain(
from langchain_classic import hub
retrieval_qa_chat_prompt = hub.pull("langchain-ai/retrieval-qa-chat")
- llm = ChatOpenAI()
+ model = ChatOpenAI()
retriever = ...
- combine_docs_chain = create_stuff_documents_chain(llm, retrieval_qa_chat_prompt)
+ combine_docs_chain = create_stuff_documents_chain(
+ model, retrieval_qa_chat_prompt
+ )
retrieval_chain = create_retrieval_chain(retriever, combine_docs_chain)
retrieval_chain.invoke({"input": "..."})
diff --git a/libs/langchain/langchain_classic/chains/retrieval_qa/base.py b/libs/langchain/langchain_classic/chains/retrieval_qa/base.py
index 39fdf46ff60..ec62a77eaa2 100644
--- a/libs/langchain/langchain_classic/chains/retrieval_qa/base.py
+++ b/libs/langchain/langchain_classic/chains/retrieval_qa/base.py
@@ -42,8 +42,8 @@ class BaseRetrievalQA(Chain):
combine_documents_chain: BaseCombineDocumentsChain
"""Chain to use to combine the documents."""
- input_key: str = "query" #: :meta private:
- output_key: str = "result" #: :meta private:
+ input_key: str = "query"
+ output_key: str = "result"
return_source_documents: bool = False
"""Return the source documents or not."""
@@ -55,18 +55,12 @@ class BaseRetrievalQA(Chain):
@property
def input_keys(self) -> list[str]:
- """Input keys.
-
- :meta private:
- """
+ """Input keys."""
return [self.input_key]
@property
def output_keys(self) -> list[str]:
- """Output keys.
-
- :meta private:
- """
+ """Output keys."""
_output_keys = [self.output_key]
if self.return_source_documents:
_output_keys = [*_output_keys, "source_documents"]
@@ -237,7 +231,7 @@ class RetrievalQA(BaseRetrievalQA):
retriever = ... # Your retriever
- llm = ChatOpenAI()
+ model = ChatOpenAI()
system_prompt = (
"Use the given context to answer the question. "
@@ -251,7 +245,7 @@ class RetrievalQA(BaseRetrievalQA):
("human", "{input}"),
]
)
- question_answer_chain = create_stuff_documents_chain(llm, prompt)
+ question_answer_chain = create_stuff_documents_chain(model, prompt)
chain = create_retrieval_chain(retriever, question_answer_chain)
chain.invoke({"input": query})
@@ -259,7 +253,7 @@ class RetrievalQA(BaseRetrievalQA):
Example:
```python
- from langchain_community.llms import OpenAI
+ from langchain_openai import OpenAI
from langchain_classic.chains import RetrievalQA
from langchain_community.vectorstores import FAISS
from langchain_core.vectorstores import VectorStoreRetriever
diff --git a/libs/langchain/langchain_classic/chains/router/base.py b/libs/langchain/langchain_classic/chains/router/base.py
index ea078cb037b..0a0ca8cc024 100644
--- a/libs/langchain/langchain_classic/chains/router/base.py
+++ b/libs/langchain/langchain_classic/chains/router/base.py
@@ -73,8 +73,7 @@ class MultiRouteChain(Chain):
default_chain: Chain
"""Default chain to use when none of the destination chains are suitable."""
silent_errors: bool = False
- """If `True`, use default_chain when an invalid destination name is provided.
- Defaults to `False`."""
+ """If `True`, use default_chain when an invalid destination name is provided."""
model_config = ConfigDict(
arbitrary_types_allowed=True,
@@ -83,18 +82,12 @@ class MultiRouteChain(Chain):
@property
def input_keys(self) -> list[str]:
- """Will be whatever keys the router chain prompt expects.
-
- :meta private:
- """
+ """Will be whatever keys the router chain prompt expects."""
return self.router_chain.input_keys
@property
def output_keys(self) -> list[str]:
- """Will always return text key.
-
- :meta private:
- """
+ """Will always return text key."""
return []
def _call(
diff --git a/libs/langchain/langchain_classic/chains/router/embedding_router.py b/libs/langchain/langchain_classic/chains/router/embedding_router.py
index a519fabc2a3..60d986c1468 100644
--- a/libs/langchain/langchain_classic/chains/router/embedding_router.py
+++ b/libs/langchain/langchain_classic/chains/router/embedding_router.py
@@ -29,10 +29,7 @@ class EmbeddingRouterChain(RouterChain):
@property
def input_keys(self) -> list[str]:
- """Will be whatever keys the LLM chain prompt expects.
-
- :meta private:
- """
+ """Will be whatever keys the LLM chain prompt expects."""
return self.routing_keys
@override
diff --git a/libs/langchain/langchain_classic/chains/router/llm_router.py b/libs/langchain/langchain_classic/chains/router/llm_router.py
index 6fa390a2f12..8d93789a1b6 100644
--- a/libs/langchain/langchain_classic/chains/router/llm_router.py
+++ b/libs/langchain/langchain_classic/chains/router/llm_router.py
@@ -48,7 +48,7 @@ class LLMRouterChain(RouterChain):
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI
- llm = ChatOpenAI(model="gpt-4o-mini")
+ model = ChatOpenAI(model="gpt-4o-mini")
prompt_1 = ChatPromptTemplate.from_messages(
[
@@ -63,8 +63,8 @@ class LLMRouterChain(RouterChain):
]
)
- chain_1 = prompt_1 | llm | StrOutputParser()
- chain_2 = prompt_2 | llm | StrOutputParser()
+ chain_1 = prompt_1 | model | StrOutputParser()
+ chain_2 = prompt_2 | model | StrOutputParser()
route_system = (
    "Route the user's query to either the animal or vegetable expert."
)
@@ -83,7 +83,7 @@ class LLMRouterChain(RouterChain):
route_chain = (
route_prompt
- | llm.with_structured_output(RouteQuery)
+ | model.with_structured_output(RouteQuery)
| itemgetter("destination")
)
@@ -118,10 +118,7 @@ class LLMRouterChain(RouterChain):
@property
def input_keys(self) -> list[str]:
- """Will be whatever keys the LLM chain prompt expects.
-
- :meta private:
- """
+ """Will be whatever keys the LLM chain prompt expects."""
return self.llm_chain.input_keys
def _validate_outputs(self, outputs: dict[str, Any]) -> None:
diff --git a/libs/langchain/langchain_classic/chains/router/multi_prompt.py b/libs/langchain/langchain_classic/chains/router/multi_prompt.py
index 2c13ec476d4..52c3eac6a1e 100644
--- a/libs/langchain/langchain_classic/chains/router/multi_prompt.py
+++ b/libs/langchain/langchain_classic/chains/router/multi_prompt.py
@@ -49,7 +49,7 @@ class MultiPromptChain(MultiRouteChain):
from langgraph.graph import END, START, StateGraph
from typing_extensions import TypedDict
- llm = ChatOpenAI(model="gpt-4o-mini")
+ model = ChatOpenAI(model="gpt-4o-mini")
# Define the prompts we will route to
prompt_1 = ChatPromptTemplate.from_messages(
@@ -68,8 +68,8 @@ class MultiPromptChain(MultiRouteChain):
# Construct the chains we will route to. These format the input query
# into the respective prompt, run it through a chat model, and cast
# the result to a string.
- chain_1 = prompt_1 | llm | StrOutputParser()
- chain_2 = prompt_2 | llm | StrOutputParser()
+ chain_1 = prompt_1 | model | StrOutputParser()
+ chain_2 = prompt_2 | model | StrOutputParser()
# Next: define the chain that selects which branch to route to.
@@ -92,7 +92,7 @@ class MultiPromptChain(MultiRouteChain):
destination: Literal["animal", "vegetable"]
- route_chain = route_prompt | llm.with_structured_output(RouteQuery)
+ route_chain = route_prompt | model.with_structured_output(RouteQuery)
# For LangGraph, we will define the state of the graph to hold the query,
diff --git a/libs/langchain/langchain_classic/chains/router/multi_retrieval_qa.py b/libs/langchain/langchain_classic/chains/router/multi_retrieval_qa.py
index bf710cd8040..7f9d743f0f8 100644
--- a/libs/langchain/langchain_classic/chains/router/multi_retrieval_qa.py
+++ b/libs/langchain/langchain_classic/chains/router/multi_retrieval_qa.py
@@ -117,7 +117,7 @@ class MultiRetrievalQAChain(MultiRouteChain):
"default LLMs on behalf of users."
"You can provide a conversation LLM like so:\n"
"from langchain_openai import ChatOpenAI\n"
- "llm = ChatOpenAI()"
+ "model = ChatOpenAI()"
)
raise NotImplementedError(msg)
_default_chain = ConversationChain(
diff --git a/libs/langchain/langchain_classic/chains/sequential.py b/libs/langchain/langchain_classic/chains/sequential.py
index 78b576fc6a1..3d333b52b53 100644
--- a/libs/langchain/langchain_classic/chains/sequential.py
+++ b/libs/langchain/langchain_classic/chains/sequential.py
@@ -18,7 +18,7 @@ class SequentialChain(Chain):
chains: list[Chain]
input_variables: list[str]
- output_variables: list[str] #: :meta private:
+ output_variables: list[str]
return_all: bool = False
model_config = ConfigDict(
@@ -28,18 +28,12 @@ class SequentialChain(Chain):
@property
def input_keys(self) -> list[str]:
- """Return expected input keys to the chain.
-
- :meta private:
- """
+ """Return expected input keys to the chain."""
return self.input_variables
@property
def output_keys(self) -> list[str]:
- """Return output key.
-
- :meta private:
- """
+ """Return output key."""
return self.output_variables
@model_validator(mode="before")
@@ -131,8 +125,8 @@ class SimpleSequentialChain(Chain):
chains: list[Chain]
strip_outputs: bool = False
- input_key: str = "input" #: :meta private:
- output_key: str = "output" #: :meta private:
+ input_key: str = "input"
+ output_key: str = "output"
model_config = ConfigDict(
arbitrary_types_allowed=True,
@@ -141,18 +135,12 @@ class SimpleSequentialChain(Chain):
@property
def input_keys(self) -> list[str]:
- """Expect input key.
-
- :meta private:
- """
+ """Expect input key."""
return [self.input_key]
@property
def output_keys(self) -> list[str]:
- """Return output key.
-
- :meta private:
- """
+ """Return output key."""
return [self.output_key]
@model_validator(mode="after")
diff --git a/libs/langchain/langchain_classic/chains/sql_database/query.py b/libs/langchain/langchain_classic/chains/sql_database/query.py
index 9b10b704b62..05667e8dba1 100644
--- a/libs/langchain/langchain_classic/chains/sql_database/query.py
+++ b/libs/langchain/langchain_classic/chains/sql_database/query.py
@@ -53,16 +53,15 @@ def create_sql_query_chain(
Control access to who can submit requests to this chain.
- See https://python.langchain.com/docs/security for more information.
+ See https://docs.langchain.com/oss/python/security-policy for more information.
Args:
llm: The language model to use.
db: The SQLDatabase to generate the query for.
prompt: The prompt to use. If none is provided, will choose one
- based on dialect. Defaults to `None`. See Prompt section below for more.
- k: The number of results per select statement to return. Defaults to 5.
+ based on dialect. See Prompt section below for more.
+ k: The number of results per select statement to return.
get_col_comments: Whether to retrieve column comments along with table info.
- Defaults to `False`.
Returns:
A chain that takes in a question and generates a SQL query that answers
@@ -76,8 +75,8 @@ def create_sql_query_chain(
from langchain_community.utilities import SQLDatabase
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
- llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
- chain = create_sql_query_chain(llm, db)
+ model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
+ chain = create_sql_query_chain(model, db)
response = chain.invoke({"question": "How many employees are there"})
```
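The chain only writes SQL; executing it is a separate, deliberate step, which is the point of the security note above. A sketch of running the generated query against the same database (import paths follow the renamed package layout used elsewhere in this diff):

```python
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

from langchain_classic.chains import create_sql_query_chain

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

chain = create_sql_query_chain(model, db)
query = chain.invoke({"question": "How many employees are there"})

# Execute only after validating the query and scoping database permissions tightly.
db.run(query)
```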
diff --git a/libs/langchain/langchain_classic/chains/structured_output/base.py b/libs/langchain/langchain_classic/chains/structured_output/base.py
index 9d99cb33f9c..93bd6156d55 100644
--- a/libs/langchain/langchain_classic/chains/structured_output/base.py
+++ b/libs/langchain/langchain_classic/chains/structured_output/base.py
@@ -34,7 +34,7 @@ from pydantic import BaseModel
"LangChain has introduced a method called `with_structured_output` that "
"is available on ChatModels capable of tool calling. "
"You can read more about the method here: "
- ". "
+ ". "
"Please follow our extraction use case documentation for more guidelines "
"on how to do information extraction with LLMs. "
". "
@@ -53,12 +53,12 @@ from pydantic import BaseModel
punchline: str = Field(description="The punchline to the joke")
# Or any other chat model that supports tools.
- # Please reference to to the documentation of structured_output
+ # Please refer to the documentation of structured_output
# to see an up to date list of which models support
# with_structured_output.
- model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0)
- structured_llm = model.with_structured_output(Joke)
- structured_llm.invoke("Tell me a joke about cats.
+ model = ChatAnthropic(model="claude-opus-4-1-20250805", temperature=0)
+ structured_model = model.with_structured_output(Joke)
+ structured_model.invoke("Tell me a joke about cats.
Make sure to call the Joke function.")
"""
),
@@ -127,9 +127,9 @@ def create_openai_fn_runnable(
fav_food: str | None = Field(None, description="The dog's favorite food")
- llm = ChatOpenAI(model="gpt-4", temperature=0)
- structured_llm = create_openai_fn_runnable([RecordPerson, RecordDog], llm)
- structured_llm.invoke("Harry was a chubby brown beagle who loved chicken)
+ model = ChatOpenAI(model="gpt-4", temperature=0)
+ structured_model = create_openai_fn_runnable([RecordPerson, RecordDog], model)
+ structured_model.invoke("Harry was a chubby brown beagle who loved chicken)
# -> RecordDog(name="Harry", color="brown", fav_food="chicken")
```
@@ -153,7 +153,7 @@ def create_openai_fn_runnable(
"LangChain has introduced a method called `with_structured_output` that "
"is available on ChatModels capable of tool calling. "
"You can read more about the method here: "
- "."
+ "."
"Please follow our extraction use case documentation for more guidelines "
"on how to do information extraction with LLMs. "
". "
@@ -172,12 +172,12 @@ def create_openai_fn_runnable(
punchline: str = Field(description="The punchline to the joke")
# Or any other chat model that supports tools.
- # Please reference to to the documentation of structured_output
+ # Please refer to the documentation of structured_output
# to see an up to date list of which models support
# with_structured_output.
- model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0)
- structured_llm = model.with_structured_output(Joke)
- structured_llm.invoke("Tell me a joke about cats.
+ model = ChatAnthropic(model="claude-opus-4-1-20250805", temperature=0)
+ structured_model = model.with_structured_output(Joke)
+ structured_model.invoke("Tell me a joke about cats.
Make sure to call the Joke function.")
"""
),
@@ -250,21 +250,21 @@ def create_structured_output_runnable(
color: str = Field(..., description="The dog's color")
fav_food: str | None = Field(None, description="The dog's favorite food")
- llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
+ model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are an extraction algorithm. Please extract every possible instance"),
('human', '{input}')
]
)
- structured_llm = create_structured_output_runnable(
+ structured_model = create_structured_output_runnable(
RecordDog,
- llm,
+ model,
mode="openai-tools",
enforce_function_usage=True,
return_single=True
)
- structured_llm.invoke({"input": "Harry was a chubby brown beagle who loved chicken"})
+ structured_model.invoke({"input": "Harry was a chubby brown beagle who loved chicken"})
# -> RecordDog(name="Harry", color="brown", fav_food="chicken")
```
@@ -303,15 +303,15 @@ def create_structured_output_runnable(
}
- llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
- structured_llm = create_structured_output_runnable(
+ model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
+ structured_model = create_structured_output_runnable(
dog_schema,
- llm,
+ model,
mode="openai-tools",
enforce_function_usage=True,
return_single=True
)
- structured_llm.invoke("Harry was a chubby brown beagle who loved chicken")
+ structured_model.invoke("Harry was a chubby brown beagle who loved chicken")
# -> {'name': 'Harry', 'color': 'brown', 'fav_food': 'chicken'}
```
@@ -330,9 +330,9 @@ def create_structured_output_runnable(
color: str = Field(..., description="The dog's color")
fav_food: str | None = Field(None, description="The dog's favorite food")
- llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
- structured_llm = create_structured_output_runnable(Dog, llm, mode="openai-functions")
- structured_llm.invoke("Harry was a chubby brown beagle who loved chicken")
+ model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
+ structured_model = create_structured_output_runnable(Dog, model, mode="openai-functions")
+ structured_model.invoke("Harry was a chubby brown beagle who loved chicken")
# -> Dog(name="Harry", color="brown", fav_food="chicken")
```
@@ -352,13 +352,13 @@ def create_structured_output_runnable(
color: str = Field(..., description="The dog's color")
fav_food: str | None = Field(None, description="The dog's favorite food")
- llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
- structured_llm = create_structured_output_runnable(Dog, llm, mode="openai-functions")
+ model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
+ structured_model = create_structured_output_runnable(Dog, model, mode="openai-functions")
system = '''Extract information about any dogs mentioned in the user input.'''
prompt = ChatPromptTemplate.from_messages(
[("system", system), ("human", "{input}"),]
)
- chain = prompt | structured_llm
+ chain = prompt | structured_model
chain.invoke({"input": "Harry was a chubby brown beagle who loved chicken"})
# -> Dog(name="Harry", color="brown", fav_food="chicken")
```
@@ -379,8 +379,8 @@ def create_structured_output_runnable(
color: str = Field(..., description="The dog's color")
fav_food: str | None = Field(None, description="The dog's favorite food")
- llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
- structured_llm = create_structured_output_runnable(Dog, llm, mode="openai-json")
+ model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
+ structured_model = create_structured_output_runnable(Dog, model, mode="openai-json")
system = '''You are a world class assistant for extracting information in structured JSON formats. \
Extract a valid JSON blob from the user input that matches the following JSON Schema:
@@ -389,7 +389,7 @@ def create_structured_output_runnable(
prompt = ChatPromptTemplate.from_messages(
[("system", system), ("human", "{input}"),]
)
- chain = prompt | structured_llm
+ chain = prompt | structured_model
chain.invoke({"input": "Harry was a chubby brown beagle who loved chicken"})
```
diff --git a/libs/langchain/langchain_classic/chains/summarize/chain.py b/libs/langchain/langchain_classic/chains/summarize/chain.py
index 8b712d06cb9..08f82329bee 100644
--- a/libs/langchain/langchain_classic/chains/summarize/chain.py
+++ b/libs/langchain/langchain_classic/chains/summarize/chain.py
@@ -47,10 +47,10 @@ def _load_stuff_chain(
Args:
llm: Language Model to use in the chain.
prompt: Prompt template that controls how the documents are formatted and
- passed into the LLM. Defaults to `stuff_prompt.PROMPT`.
+ passed into the LLM.
document_variable_name: Variable name in the prompt template where the
- document text will be inserted. Defaults to "text".
- verbose: Whether to log progress and intermediate steps. Defaults to `None`.
+ document text will be inserted.
+ verbose: Whether to log progress and intermediate steps.
**kwargs: Additional keyword arguments passed to the StuffDocumentsChain.
Returns:
@@ -103,26 +103,23 @@ def _load_map_reduce_chain(
Args:
llm: Language Model to use for map and reduce steps.
- map_prompt: Prompt used to summarize each documnet in the map step.
- Defaults to `map_reduce_prompt.PROMPT`.
+ map_prompt: Prompt used to summarize each document in the map step.
combine_prompt: Prompt used to combine summaries in the reduce step.
- Defaults to `map_reduce_prompt.PROMPT`.
combine_document_variable_name: Variable name in the `combine_prompt` where
- the mapped summaries are inserted. Defaults to "text".
+ the mapped summaries are inserted.
map_reduce_document_variable_name: Variable name in the `map_prompt`
- where document text is inserted. Defaults to "text".
+ where document text is inserted.
collapse_prompt: Optional prompt used to collapse intermediate summaries
- if they exceed the token limit (`token_max`). Defaults to `None`.
-        reduce_llm: Optional separate LLM for the reduce step. Defaults to `None`,
-            which uses the same model as the map step.
+            if they exceed the token limit (`token_max`).
+        reduce_llm: Optional separate LLM for the reduce step. If not provided,
+            uses the same model as the map step.
-        collapse_llm: Optional separate LLM for the collapse step. Defaults to `None`,
-            which uses the same model as the map step.
+        collapse_llm: Optional separate LLM for the collapse step. If not provided,
+            uses the same model as the map step.
- verbose: Whether to log progess and intermediate steps. Defaults to `None`.
+ verbose: Whether to log progress and intermediate steps.
token_max: Token threshold that triggers the collapse step during reduction.
- Defaults to 3000.
- callbacks: Optional callbacks for logging and tracing. Defaults to `None`.
+ callbacks: Optional callbacks for logging and tracing.
collapse_max_retries: Maximum retries for the collapse step if it fails.
- Defaults to `None`.
+
**kwargs: Additional keyword arguments passed to the MapReduceDocumentsChain.
Returns:
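
For orientation, the parameters documented in the hunk above are typically exercised through `load_summarize_chain`. A minimal sketch, assuming the function is re-exported from `langchain_classic.chains.summarize` (as it was from `langchain.chains.summarize`) and that an OpenAI API key is configured:

```python
# Sketch: map-reduce summarization using the parameters described above.
from langchain_classic.chains.summarize import load_summarize_chain
from langchain_core.documents import Document
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# token_max controls when intermediate summaries are collapsed before the
# final reduce step.
chain = load_summarize_chain(model, chain_type="map_reduce", token_max=3000)

docs = [
    Document(page_content="LangChain is a framework for building LLM applications."),
    Document(page_content="It chains interoperable components and integrations."),
]
result = chain.invoke({"input_documents": docs})
print(result["output_text"])
```
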
diff --git a/libs/langchain/langchain_classic/chains/transform.py b/libs/langchain/langchain_classic/chains/transform.py
index 709ec348dde..a56273f1e50 100644
--- a/libs/langchain/langchain_classic/chains/transform.py
+++ b/libs/langchain/langchain_classic/chains/transform.py
@@ -43,26 +43,17 @@ class TransformChain(Chain):
@staticmethod
@functools.lru_cache
def _log_once(msg: str) -> None:
- """Log a message once.
-
- :meta private:
- """
+ """Log a message once."""
logger.warning(msg)
@property
def input_keys(self) -> list[str]:
- """Expect input keys.
-
- :meta private:
- """
+ """Expect input keys."""
return self.input_variables
@property
def output_keys(self) -> list[str]:
- """Return output keys.
-
- :meta private:
- """
+ """Return output keys."""
return self.output_variables
@override
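
The `input_keys`/`output_keys` properties above simply surface the chain's declared variables. A minimal sketch of a `TransformChain`, assuming the class is still re-exported from `langchain_classic.chains`; the transform function and variable names are invented for illustration:

```python
# Sketch: a TransformChain whose declared variables back input_keys/output_keys.
from langchain_classic.chains import TransformChain


def to_upper(inputs: dict) -> dict:
    # Receives a dict keyed by input_variables; returns one keyed by output_variables.
    return {"shout": inputs["text"].upper()}


chain = TransformChain(
    input_variables=["text"],    # surfaced via the input_keys property
    output_variables=["shout"],  # surfaced via the output_keys property
    transform=to_upper,
)
print(chain.invoke({"text": "hello"}))  # -> {'text': 'hello', 'shout': 'HELLO'}
```
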
diff --git a/libs/langchain/langchain_classic/chat_models/base.py b/libs/langchain/langchain_classic/chat_models/base.py
index 20a888e9166..b22b74ae306 100644
--- a/libs/langchain/langchain_classic/chat_models/base.py
+++ b/libs/langchain/langchain_classic/chat_models/base.py
@@ -76,173 +76,204 @@ def init_chat_model(
config_prefix: str | None = None,
**kwargs: Any,
) -> BaseChatModel | _ConfigurableModel:
- """Initialize a ChatModel in a single line using the model's name and provider.
+ """Initialize a chat model from any supported provider using a unified interface.
+
+ **Two main use cases:**
+
+    1. **Fixed model**: specify the model upfront and get back a ready-to-use chat
+       model.
+    2. **Configurable model**: specify parameters (including the model name) at
+       runtime via `config`, making it easy to switch between models/providers
+       without changing your code.
!!! note
- Must have the integration package corresponding to the model provider installed.
- You should look at the [provider integration's API reference](https://docs.langchain.com/oss/python/integrations/providers)
- to see what parameters are supported by the model.
+ Requires the integration package for the chosen model provider to be installed.
+
+ See the `model_provider` parameter below for specific package names
+ (e.g., `pip install langchain-openai`).
+
+ Refer to the [provider integration's API reference](https://docs.langchain.com/oss/python/integrations/providers)
+ for supported model parameters to use as `**kwargs`.
Args:
- model: The name of the model, e.g. `'o3-mini'`, `'claude-3-5-sonnet-latest'`. You can
- also specify model and model provider in a single argument using
+ model: The name or ID of the model, e.g. `'o3-mini'`, `'claude-sonnet-4-5-20250929'`.
+
+ You can also specify model and model provider in a single argument using
`'{model_provider}:{model}'` format, e.g. `'openai:o1'`.
- model_provider: The model provider if not specified as part of model arg (see
- above). Supported model_provider values and the corresponding integration
- package are:
+ model_provider: The model provider if not specified as part of the model arg
+ (see above).
- - `openai` -> `langchain-openai`
- - `anthropic` -> `langchain-anthropic`
- - `azure_openai` -> `langchain-openai`
- - `azure_ai` -> `langchain-azure-ai`
- - `google_vertexai` -> `langchain-google-vertexai`
- - `google_genai` -> `langchain-google-genai`
- - `bedrock` -> `langchain-aws`
- - `bedrock_converse` -> `langchain-aws`
- - `cohere` -> `langchain-cohere`
- - `fireworks` -> `langchain-fireworks`
- - `together` -> `langchain-together`
- - `mistralai` -> `langchain-mistralai`
- - `huggingface` -> `langchain-huggingface`
- - `groq` -> `langchain-groq`
- - `ollama` -> `langchain-ollama`
- - `google_anthropic_vertex` -> `langchain-google-vertexai`
- - `deepseek` -> `langchain-deepseek`
- - `ibm` -> `langchain-ibm`
- - `nvidia` -> `langchain-nvidia-ai-endpoints`
- - `xai` -> `langchain-xai`
- - `perplexity` -> `langchain-perplexity`
+ Supported `model_provider` values and the corresponding integration package
+ are:
- Will attempt to infer model_provider from model if not specified. The
+ - `openai` -> [`langchain-openai`](https://docs.langchain.com/oss/python/integrations/providers/openai)
+ - `anthropic` -> [`langchain-anthropic`](https://docs.langchain.com/oss/python/integrations/providers/anthropic)
+ - `azure_openai` -> [`langchain-openai`](https://docs.langchain.com/oss/python/integrations/providers/openai)
+ - `azure_ai` -> [`langchain-azure-ai`](https://docs.langchain.com/oss/python/integrations/providers/microsoft)
+ - `google_vertexai` -> [`langchain-google-vertexai`](https://docs.langchain.com/oss/python/integrations/providers/google)
+ - `google_genai` -> [`langchain-google-genai`](https://docs.langchain.com/oss/python/integrations/providers/google)
+ - `bedrock` -> [`langchain-aws`](https://docs.langchain.com/oss/python/integrations/providers/aws)
+ - `bedrock_converse` -> [`langchain-aws`](https://docs.langchain.com/oss/python/integrations/providers/aws)
+ - `cohere` -> [`langchain-cohere`](https://docs.langchain.com/oss/python/integrations/providers/cohere)
+ - `fireworks` -> [`langchain-fireworks`](https://docs.langchain.com/oss/python/integrations/providers/fireworks)
+ - `together` -> [`langchain-together`](https://docs.langchain.com/oss/python/integrations/providers/together)
+ - `mistralai` -> [`langchain-mistralai`](https://docs.langchain.com/oss/python/integrations/providers/mistralai)
+ - `huggingface` -> [`langchain-huggingface`](https://docs.langchain.com/oss/python/integrations/providers/huggingface)
+ - `groq` -> [`langchain-groq`](https://docs.langchain.com/oss/python/integrations/providers/groq)
+ - `ollama` -> [`langchain-ollama`](https://docs.langchain.com/oss/python/integrations/providers/ollama)
+ - `google_anthropic_vertex` -> [`langchain-google-vertexai`](https://docs.langchain.com/oss/python/integrations/providers/google)
+ - `deepseek` -> [`langchain-deepseek`](https://docs.langchain.com/oss/python/integrations/providers/deepseek)
+            - `ibm` -> [`langchain-ibm`](https://docs.langchain.com/oss/python/integrations/providers/ibm)
+ - `nvidia` -> [`langchain-nvidia-ai-endpoints`](https://docs.langchain.com/oss/python/integrations/providers/nvidia)
+ - `xai` -> [`langchain-xai`](https://docs.langchain.com/oss/python/integrations/providers/xai)
+ - `perplexity` -> [`langchain-perplexity`](https://docs.langchain.com/oss/python/integrations/providers/perplexity)
+
+ Will attempt to infer `model_provider` from model if not specified. The
following providers will be inferred based on these model prefixes:
- `gpt-...` | `o1...` | `o3...` -> `openai`
- - `claude...` -> `anthropic`
- - `amazon...` -> `bedrock`
- - `gemini...` -> `google_vertexai`
- - `command...` -> `cohere`
- - `accounts/fireworks...` -> `fireworks`
- - `mistral...` -> `mistralai`
- - `deepseek...` -> `deepseek`
- - `grok...` -> `xai`
- - `sonar...` -> `perplexity`
- configurable_fields: Which model parameters are configurable:
+ - `claude...` -> `anthropic`
+ - `amazon...` -> `bedrock`
+ - `gemini...` -> `google_vertexai`
+ - `command...` -> `cohere`
+ - `accounts/fireworks...` -> `fireworks`
+ - `mistral...` -> `mistralai`
+ - `deepseek...` -> `deepseek`
+ - `grok...` -> `xai`
+ - `sonar...` -> `perplexity`
+ configurable_fields: Which model parameters are configurable at runtime:
- - None: No configurable fields.
- - `'any'`: All fields are configurable. **See Security Note below.**
- - Union[List[str], Tuple[str, ...]]: Specified fields are configurable.
+ - `None`: No configurable fields (i.e., a fixed model).
+ - `'any'`: All fields are configurable. **See security note below.**
+            - `list[str] | tuple[str, ...]`: Specified fields are configurable.
- Fields are assumed to have config_prefix stripped if there is a
- config_prefix. If model is specified, then defaults to None. If model is
- not specified, then defaults to `("model", "model_provider")`.
+ Fields are assumed to have `config_prefix` stripped if a `config_prefix` is
+ specified.
- ***Security Note***: Setting `configurable_fields="any"` means fields like
- `api_key`, `base_url`, etc. can be altered at runtime, potentially redirecting
- model requests to a different service/user. Make sure that if you're
- accepting untrusted configurations that you enumerate the
- `configurable_fields=(...)` explicitly.
+ If `model` is specified, then defaults to `None`.
- config_prefix: If `'config_prefix'` is a non-empty string then model will be
- configurable at runtime via the
- `config["configurable"]["{config_prefix}_{param}"]` keys. If
- `'config_prefix'` is an empty string then model will be configurable via
+ If `model` is not specified, then defaults to `("model", "model_provider")`.
+
+ !!! warning "Security note"
+ Setting `configurable_fields="any"` means fields like `api_key`,
+ `base_url`, etc., can be altered at runtime, potentially redirecting
+ model requests to a different service/user.
+
+ Make sure that if you're accepting untrusted configurations that you
+ enumerate the `configurable_fields=(...)` explicitly.
+
+ config_prefix: Optional prefix for configuration keys.
+
+ Useful when you have multiple configurable models in the same application.
+
+ If `'config_prefix'` is a non-empty string then `model` will be configurable
+ at runtime via the `config["configurable"]["{config_prefix}_{param}"]` keys.
+ See examples below.
+
+ If `'config_prefix'` is an empty string then model will be configurable via
`config["configurable"]["{param}"]`.
- temperature: Model temperature.
- max_tokens: Max output tokens.
- timeout: The maximum time (in seconds) to wait for a response from the model
- before canceling the request.
- max_retries: The maximum number of attempts the system will make to resend a
- request if it fails due to issues like network timeouts or rate limits.
- base_url: The URL of the API endpoint where requests are sent.
- rate_limiter: A `BaseRateLimiter` to space out requests to avoid exceeding
- rate limits.
- kwargs: Additional model-specific keyword args to pass to
- `<>.__init__(model=model_name, **kwargs)`.
+ **kwargs: Additional model-specific keyword args to pass to the underlying
+ chat model's `__init__` method. Common parameters include:
+
+ - `temperature`: Model temperature for controlling randomness.
+ - `max_tokens`: Maximum number of output tokens.
+ - `timeout`: Maximum time (in seconds) to wait for a response.
+ - `max_retries`: Maximum number of retry attempts for failed requests.
+ - `base_url`: Custom API endpoint URL.
+ - `rate_limiter`: A
+ [`BaseRateLimiter`][langchain_core.rate_limiters.BaseRateLimiter]
+ instance to control request rate.
+
+ Refer to the specific model provider's
+ [integration reference](https://reference.langchain.com/python/integrations/)
+ for all available parameters.
Returns:
- A BaseChatModel corresponding to the model_name and model_provider specified if
- configurability is inferred to be False. If configurable, a chat model emulator
- that initializes the underlying model at runtime once a config is passed in.
+ A [`BaseChatModel`][langchain_core.language_models.BaseChatModel] corresponding
+ to the `model_name` and `model_provider` specified if configurability is
+ inferred to be `False`. If configurable, a chat model emulator that
+ initializes the underlying model at runtime once a config is passed in.
Raises:
- ValueError: If model_provider cannot be inferred or isn't supported.
+ ValueError: If `model_provider` cannot be inferred or isn't supported.
ImportError: If the model provider integration package is not installed.
- ???+ note "Init non-configurable model"
+ ???+ example "Initialize a non-configurable model"
```python
# pip install langchain langchain-openai langchain-anthropic langchain-google-vertexai
+
from langchain_classic.chat_models import init_chat_model
o3_mini = init_chat_model("openai:o3-mini", temperature=0)
- claude_sonnet = init_chat_model(
- "anthropic:claude-3-5-sonnet-latest", temperature=0
- )
- gemini_2_flash = init_chat_model(
+ claude_sonnet = init_chat_model("anthropic:claude-sonnet-4-5-20250929", temperature=0)
+    gemini_2_5_flash = init_chat_model(
"google_vertexai:gemini-2.5-flash", temperature=0
)
o3_mini.invoke("what's your name")
claude_sonnet.invoke("what's your name")
- gemini_2_flash.invoke("what's your name")
+    gemini_2_5_flash.invoke("what's your name")
```
- ??? note "Partially configurable model with no default"
+ ??? example "Partially configurable model with no default"
```python
# pip install langchain langchain-openai langchain-anthropic
+
from langchain_classic.chat_models import init_chat_model
- # We don't need to specify configurable=True if a model isn't specified.
+ # (We don't need to specify configurable=True if a model isn't specified.)
configurable_model = init_chat_model(temperature=0)
configurable_model.invoke(
"what's your name", config={"configurable": {"model": "gpt-4o"}}
)
- # GPT-4o response
+ # Use GPT-4o to generate the response
configurable_model.invoke(
"what's your name",
- config={"configurable": {"model": "claude-3-5-sonnet-latest"}},
+ config={"configurable": {"model": "claude-sonnet-4-5-20250929"}},
)
- # claude-3.5 sonnet response
```
- ??? note "Fully configurable model with a default"
+ ??? example "Fully configurable model with a default"
```python
# pip install langchain langchain-openai langchain-anthropic
+
from langchain_classic.chat_models import init_chat_model
configurable_model_with_default = init_chat_model(
"openai:gpt-4o",
- configurable_fields="any", # this allows us to configure other params like temperature, max_tokens, etc at runtime.
+        configurable_fields="any",  # This allows us to configure other params like temperature, max_tokens, etc. at runtime.
config_prefix="foo",
temperature=0,
)
configurable_model_with_default.invoke("what's your name")
- # GPT-4o response with temperature 0
+ # GPT-4o response with temperature 0 (as set in default)
configurable_model_with_default.invoke(
"what's your name",
config={
"configurable": {
- "foo_model": "anthropic:claude-3-5-sonnet-latest",
+ "foo_model": "anthropic:claude-sonnet-4-5-20250929",
"foo_temperature": 0.6,
}
},
)
- # Claude-3.5 sonnet response with temperature 0.6
+ # Override default to use Sonnet 4.5 with temperature 0.6 to generate response
```
- ??? note "Bind tools to a configurable model"
+ ??? example "Bind tools to a configurable model"
- You can call any ChatModel declarative methods on a configurable model in the
- same way that you would with a normal model.
+    You can call any of a chat model's declarative methods on a configurable model
+    in the same way that you would with a normal model:
```python
# pip install langchain langchain-openai langchain-anthropic
+
from langchain_classic.chat_models import init_chat_model
from pydantic import BaseModel, Field
@@ -276,33 +307,31 @@ def init_chat_model(
configurable_model_with_tools.invoke(
"Which city is hotter today and which is bigger: LA or NY?"
)
- # GPT-4o response with tool calls
+ # Use GPT-4o
configurable_model_with_tools.invoke(
"Which city is hotter today and which is bigger: LA or NY?",
- config={"configurable": {"model": "claude-3-5-sonnet-latest"}},
+ config={"configurable": {"model": "claude-sonnet-4-5-20250929"}},
)
- # Claude-3.5 sonnet response with tools
+ # Use Sonnet 4.5
```
- !!! version-added "Added in version 0.2.7"
-
- !!! warning "Behavior changed in 0.2.8"
+ !!! warning "Behavior changed in `langchain` 0.2.8"
Support for `configurable_fields` and `config_prefix` added.
- !!! warning "Behavior changed in 0.2.12"
+ !!! warning "Behavior changed in `langchain` 0.2.12"
Support for Ollama via langchain-ollama package added
- (langchain_ollama.ChatOllama). Previously,
+ (`langchain_ollama.ChatOllama`). Previously,
the now-deprecated langchain-community version of Ollama was imported
- (langchain_community.chat_models.ChatOllama).
+ (`langchain_community.chat_models.ChatOllama`).
Support for AWS Bedrock models via the Converse API added
- (model_provider="bedrock_converse").
+ (`model_provider="bedrock_converse"`).
- !!! warning "Behavior changed in 0.3.5"
+ !!! warning "Behavior changed in `langchain` 0.3.5"
Out of beta.
- !!! warning "Behavior changed in 0.3.19"
+ !!! warning "Behavior changed in `langchain` 0.3.19"
Support for Deepseek, IBM, Nvidia, and xAI models added.
""" # noqa: E501
@@ -406,11 +435,14 @@ def _init_chat_model_helper(
from langchain_mistralai import ChatMistralAI
return ChatMistralAI(model=model, **kwargs) # type: ignore[call-arg,unused-ignore]
+
if model_provider == "huggingface":
_check_pkg("langchain_huggingface")
- from langchain_huggingface import ChatHuggingFace
+ from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline
+
+ llm = HuggingFacePipeline.from_model_id(model_id=model, **kwargs)
+ return ChatHuggingFace(llm=llm)
- return ChatHuggingFace(model_id=model, **kwargs)
if model_provider == "groq":
_check_pkg("langchain_groq")
from langchain_groq import ChatGroq
@@ -622,7 +654,7 @@ class _ConfigurableModel(Runnable[LanguageModelInput, Any]):
config: RunnableConfig | None = None,
**kwargs: Any,
) -> _ConfigurableModel:
- """Bind config to a Runnable, returning a new Runnable."""
+ """Bind config to a `Runnable`, returning a new `Runnable`."""
config = RunnableConfig(**(config or {}), **cast("RunnableConfig", kwargs))
model_params = self._model_params(config)
remaining_config = {k: v for k, v in config.items() if k != "configurable"}
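
For the HuggingFace branch changed earlier in this file's diff, the helper now builds a local pipeline and wraps it in `ChatHuggingFace`. A hedged sketch of the equivalent manual construction; the model ID and `task` value are illustrative, not prescribed by this change:

```python
# Sketch: what the updated huggingface branch does, written out by hand.
# The model is downloaded and run locally via a transformers pipeline.
from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="microsoft/Phi-3-mini-4k-instruct",
    task="text-generation",
)
chat = ChatHuggingFace(llm=llm)
chat.invoke("what's your name")
```
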
diff --git a/libs/langchain/langchain_classic/embeddings/__init__.py b/libs/langchain/langchain_classic/embeddings/__init__.py
index c10293c4574..c76d24b0e97 100644
--- a/libs/langchain/langchain_classic/embeddings/__init__.py
+++ b/libs/langchain/langchain_classic/embeddings/__init__.py
@@ -70,34 +70,12 @@ if TYPE_CHECKING:
XinferenceEmbeddings,
)
+ from langchain_classic.chains.hyde.base import HypotheticalDocumentEmbedder
+
logger = logging.getLogger(__name__)
-# TODO: this is in here to maintain backwards compatibility
-class HypotheticalDocumentEmbedder:
- def __init__(self, *args: Any, **kwargs: Any):
- logger.warning(
- "Using a deprecated class. Please use "
- "`from langchain_classic.chains import "
- "HypotheticalDocumentEmbedder` instead",
- )
- from langchain_classic.chains.hyde.base import HypotheticalDocumentEmbedder as H
-
- return H(*args, **kwargs) # type: ignore[return-value] # noqa: PLE0101
-
- @classmethod
- def from_llm(cls, *args: Any, **kwargs: Any) -> Any:
- logger.warning(
- "Using a deprecated class. Please use "
- "`from langchain_classic.chains import "
- "HypotheticalDocumentEmbedder` instead",
- )
- from langchain_classic.chains.hyde.base import HypotheticalDocumentEmbedder as H
-
- return H.from_llm(*args, **kwargs)
-
-
# Create a way to dynamically look up deprecated imports.
# Used to consolidate logic for raising deprecation warnings and
# handling optional imports.
@@ -128,6 +106,7 @@ DEPRECATED_LOOKUP = {
"HuggingFaceHubEmbeddings": "langchain_community.embeddings",
"HuggingFaceInferenceAPIEmbeddings": "langchain_community.embeddings",
"HuggingFaceInstructEmbeddings": "langchain_community.embeddings",
+ "HypotheticalDocumentEmbedder": "langchain_classic.chains.hyde.base",
"InfinityEmbeddings": "langchain_community.embeddings",
"JavelinAIGatewayEmbeddings": "langchain_community.embeddings",
"JinaEmbeddings": "langchain_community.embeddings",
@@ -193,6 +172,7 @@ __all__ = [
"HuggingFaceHubEmbeddings",
"HuggingFaceInferenceAPIEmbeddings",
"HuggingFaceInstructEmbeddings",
+ "HypotheticalDocumentEmbedder",
"InfinityEmbeddings",
"JavelinAIGatewayEmbeddings",
"JinaEmbeddings",
diff --git a/libs/langchain/langchain_classic/embeddings/base.py b/libs/langchain/langchain_classic/embeddings/base.py
index d7f4f8021eb..7fc32abc98e 100644
--- a/libs/langchain/langchain_classic/embeddings/base.py
+++ b/libs/langchain/langchain_classic/embeddings/base.py
@@ -137,20 +137,34 @@ def init_embeddings(
installed.
Args:
- model: Name of the model to use. Can be either:
- - A model string like "openai:text-embedding-3-small"
- - Just the model name if provider is specified
- provider: Optional explicit provider name. If not specified,
- will attempt to parse from the model string. Supported providers
- and their required packages:
+ model: Name of the model to use.
- {_get_provider_list()}
+ Can be either:
+
+ - A model string like `"openai:text-embedding-3-small"`
+ - Just the model name if the provider is specified separately or can be
+ inferred.
+
+ See supported providers under the `provider` arg description.
+ provider: Optional explicit provider name. If not specified, will attempt to
+ parse from the model string in the `model` arg.
+
+ Supported providers:
+
+ - `openai` -> [`langchain-openai`](https://docs.langchain.com/oss/python/integrations/providers/openai)
+ - `azure_openai` -> [`langchain-openai`](https://docs.langchain.com/oss/python/integrations/providers/openai)
+ - `bedrock` -> [`langchain-aws`](https://docs.langchain.com/oss/python/integrations/providers/aws)
+ - `cohere` -> [`langchain-cohere`](https://docs.langchain.com/oss/python/integrations/providers/cohere)
+ - `google_vertexai` -> [`langchain-google-vertexai`](https://docs.langchain.com/oss/python/integrations/providers/google)
+ - `huggingface` -> [`langchain-huggingface`](https://docs.langchain.com/oss/python/integrations/providers/huggingface)
+            - `mistralai` -> [`langchain-mistralai`](https://docs.langchain.com/oss/python/integrations/providers/mistralai)
+ - `ollama` -> [`langchain-ollama`](https://docs.langchain.com/oss/python/integrations/providers/ollama)
**kwargs: Additional model-specific parameters passed to the embedding model.
These vary by provider, see the provider-specific documentation for details.
Returns:
- An Embeddings instance that can generate embeddings for text.
+ An `Embeddings` instance that can generate embeddings for text.
Raises:
ValueError: If the model provider is not supported or cannot be determined
@@ -171,7 +185,7 @@ def init_embeddings(
model = init_embeddings("openai:text-embedding-3-small", api_key="sk-...")
```
- !!! version-added "Added in version 0.3.9"
+ !!! version-added "Added in `langchain` 0.3.9"
"""
if not model:
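
Both model-string forms described above resolve to the same embedding model. A minimal sketch, assuming `init_embeddings` is re-exported from `langchain_classic.embeddings` and an OpenAI key is set:

```python
# Sketch: the provider can be embedded in the model string or passed separately.
from langchain_classic.embeddings import init_embeddings

emb_a = init_embeddings("openai:text-embedding-3-small")
emb_b = init_embeddings("text-embedding-3-small", provider="openai")

vector = emb_a.embed_query("hello world")  # list[float]
```
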
diff --git a/libs/langchain/langchain_classic/embeddings/cache.py b/libs/langchain/langchain_classic/embeddings/cache.py
index 98f6cf7376e..08a900a45f5 100644
--- a/libs/langchain/langchain_classic/embeddings/cache.py
+++ b/libs/langchain/langchain_classic/embeddings/cache.py
@@ -122,7 +122,7 @@ class CacheBackedEmbeddings(Embeddings):
```python
from langchain_classic.embeddings import CacheBackedEmbeddings
from langchain_classic.storage import LocalFileStore
- from langchain_community.embeddings import OpenAIEmbeddings
+ from langchain_openai import OpenAIEmbeddings
store = LocalFileStore("./my_cache")
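
To round out the corrected import above, a hedged usage sketch of the cache-backed wrapper; the cache path and namespace choice are illustrative:

```python
# Sketch: wrap an embedder so repeated texts are served from a local cache.
from langchain_classic.embeddings import CacheBackedEmbeddings
from langchain_classic.storage import LocalFileStore
from langchain_openai import OpenAIEmbeddings

underlying = OpenAIEmbeddings(model="text-embedding-3-small")
store = LocalFileStore("./my_cache")

cached = CacheBackedEmbeddings.from_bytes_store(
    underlying,
    store,
    namespace=underlying.model,  # keep caches for different models separate
)
vectors = cached.embed_documents(["hello", "world", "hello"])
```
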
diff --git a/libs/langchain/langchain_classic/evaluation/agents/trajectory_eval_chain.py b/libs/langchain/langchain_classic/evaluation/agents/trajectory_eval_chain.py
index e4cd9410d1f..f3828257785 100644
--- a/libs/langchain/langchain_classic/evaluation/agents/trajectory_eval_chain.py
+++ b/libs/langchain/langchain_classic/evaluation/agents/trajectory_eval_chain.py
@@ -104,7 +104,7 @@ class TrajectoryEvalChain(AgentTrajectoryEvaluator, LLMEvalChain):
Example:
```python
from langchain_classic.agents import AgentType, initialize_agent
- from langchain_community.chat_models import ChatOpenAI
+ from langchain_openai import ChatOpenAI
from langchain_classic.evaluation import TrajectoryEvalChain
from langchain_classic.tools import tool
@@ -113,10 +113,10 @@ class TrajectoryEvalChain(AgentTrajectoryEvaluator, LLMEvalChain):
\"\"\"Very helpful answers to geography questions.\"\"\"
return f"{country}? IDK - We may never know {question}."
- llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
+ model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = initialize_agent(
tools=[geography_answers],
- llm=llm,
+ llm=model,
agent=AgentType.OPENAI_FUNCTIONS,
return_intermediate_steps=True,
)
@@ -125,7 +125,7 @@ class TrajectoryEvalChain(AgentTrajectoryEvaluator, LLMEvalChain):
response = agent(question)
eval_chain = TrajectoryEvalChain.from_llm(
- llm=llm, agent_tools=[geography_answers], return_reasoning=True
+ llm=model, agent_tools=[geography_answers], return_reasoning=True
)
result = eval_chain.evaluate_agent_trajectory(
@@ -148,7 +148,7 @@ class TrajectoryEvalChain(AgentTrajectoryEvaluator, LLMEvalChain):
default_factory=TrajectoryOutputParser,
)
"""The output parser used to parse the output."""
- return_reasoning: bool = False # :meta private:
+ return_reasoning: bool = False
"""DEPRECATED. Reasoning always returned."""
model_config = ConfigDict(
@@ -165,7 +165,7 @@ class TrajectoryEvalChain(AgentTrajectoryEvaluator, LLMEvalChain):
"""Get the description of the agent tools.
Returns:
- str: The description of the agent tools.
+ The description of the agent tools.
"""
if self.agent_tools is None:
return ""
@@ -184,10 +184,10 @@ Description: {tool.description}"""
"""Get the agent trajectory as a formatted string.
Args:
- steps (Union[str, List[Tuple[AgentAction, str]]]): The agent trajectory.
+ steps: The agent trajectory.
Returns:
- str: The formatted agent trajectory.
+ The formatted agent trajectory.
"""
if isinstance(steps, str):
return steps
@@ -240,7 +240,7 @@ The following is the expected answer. Use this to measure correctness:
**kwargs: Additional keyword arguments.
Returns:
- TrajectoryEvalChain: The TrajectoryEvalChain object.
+ The `TrajectoryEvalChain` object.
"""
if not isinstance(llm, BaseChatModel):
msg = "Only chat models supported by the current trajectory eval"
@@ -259,7 +259,7 @@ The following is the expected answer. Use this to measure correctness:
"""Get the input keys for the chain.
Returns:
- List[str]: The input keys.
+ The input keys.
"""
return ["question", "agent_trajectory", "answer", "reference"]
@@ -268,7 +268,7 @@ The following is the expected answer. Use this to measure correctness:
"""Get the output keys for the chain.
Returns:
- List[str]: The output keys.
+ The output keys.
"""
return ["score", "reasoning"]
@@ -289,7 +289,7 @@ The following is the expected answer. Use this to measure correctness:
run_manager: The callback manager for the chain run.
Returns:
- Dict[str, Any]: The output values of the chain.
+ The output values of the chain.
"""
chain_input = {**inputs}
if self.agent_tools:
@@ -313,7 +313,7 @@ The following is the expected answer. Use this to measure correctness:
run_manager: The callback manager for the chain run.
Returns:
- Dict[str, Any]: The output values of the chain.
+ The output values of the chain.
"""
chain_input = {**inputs}
if self.agent_tools:
diff --git a/libs/langchain/langchain_classic/evaluation/comparison/__init__.py b/libs/langchain/langchain_classic/evaluation/comparison/__init__.py
index 790290a9e21..62148c2677b 100644
--- a/libs/langchain/langchain_classic/evaluation/comparison/__init__.py
+++ b/libs/langchain/langchain_classic/evaluation/comparison/__init__.py
@@ -6,7 +6,7 @@ preferences, measuring similarity / semantic equivalence between outputs,
or any other comparison task.
Example:
- >>> from langchain_community.chat_models import ChatOpenAI
+ >>> from langchain_openai import ChatOpenAI
>>> from langchain_classic.evaluation.comparison import PairwiseStringEvalChain
>>> llm = ChatOpenAI(temperature=0)
>>> chain = PairwiseStringEvalChain.from_llm(llm=llm)
diff --git a/libs/langchain/langchain_classic/evaluation/comparison/eval_chain.py b/libs/langchain/langchain_classic/evaluation/comparison/eval_chain.py
index 51ef0838cb9..03599b59b41 100644
--- a/libs/langchain/langchain_classic/evaluation/comparison/eval_chain.py
+++ b/libs/langchain/langchain_classic/evaluation/comparison/eval_chain.py
@@ -163,12 +163,12 @@ class PairwiseStringEvalChain(PairwiseStringEvaluator, LLMEvalChain, LLMChain):
output_parser (BaseOutputParser): The output parser for the chain.
Example:
- >>> from langchain_community.chat_models import ChatOpenAI
+ >>> from langchain_openai import ChatOpenAI
>>> from langchain_classic.evaluation.comparison import PairwiseStringEvalChain
- >>> llm = ChatOpenAI(
+ >>> model = ChatOpenAI(
... temperature=0, model_name="gpt-4", model_kwargs={"random_seed": 42}
... )
- >>> chain = PairwiseStringEvalChain.from_llm(llm=llm)
+ >>> chain = PairwiseStringEvalChain.from_llm(llm=model)
>>> result = chain.evaluate_string_pairs(
... input = "What is the chemical formula for water?",
... prediction = "H2O",
@@ -188,7 +188,7 @@ class PairwiseStringEvalChain(PairwiseStringEvaluator, LLMEvalChain, LLMChain):
"""
- output_key: str = "results" #: :meta private:
+ output_key: str = "results"
output_parser: BaseOutputParser = Field(
default_factory=PairwiseStringResultOutputParser,
)
@@ -207,7 +207,7 @@ class PairwiseStringEvalChain(PairwiseStringEvaluator, LLMEvalChain, LLMChain):
"""Return whether the chain requires a reference.
Returns:
- True if the chain requires a reference, False otherwise.
+ `True` if the chain requires a reference, `False` otherwise.
"""
return False
@@ -217,7 +217,7 @@ class PairwiseStringEvalChain(PairwiseStringEvaluator, LLMEvalChain, LLMChain):
"""Return whether the chain requires an input.
Returns:
- bool: True if the chain requires an input, False otherwise.
+ `True` if the chain requires an input, `False` otherwise.
"""
return True
@@ -227,7 +227,7 @@ class PairwiseStringEvalChain(PairwiseStringEvaluator, LLMEvalChain, LLMChain):
"""Return the warning to show when reference is ignored.
Returns:
- str: The warning to show when reference is ignored.
+ The warning to show when reference is ignored.
"""
return (
@@ -343,7 +343,7 @@ Performance may be significantly worse with other models.",
**kwargs: Additional keyword arguments.
Returns:
- A dictionary containing:
+ `dict` containing:
- reasoning: The reasoning for the preference.
- value: The preference value, which is either 'A', 'B', or None
for no preference.
@@ -389,7 +389,7 @@ Performance may be significantly worse with other models.",
**kwargs: Additional keyword arguments.
Returns:
- A dictionary containing:
+ `dict` containing:
- reasoning: The reasoning for the preference.
- value: The preference value, which is either 'A', 'B', or None
for no preference.
@@ -425,7 +425,7 @@ class LabeledPairwiseStringEvalChain(PairwiseStringEvalChain):
"""Return whether the chain requires a reference.
Returns:
- bool: True if the chain requires a reference, False otherwise.
+ `True` if the chain requires a reference, `False` otherwise.
"""
return True
@@ -442,18 +442,18 @@ class LabeledPairwiseStringEvalChain(PairwiseStringEvalChain):
"""Initialize the LabeledPairwiseStringEvalChain from an LLM.
Args:
- llm (BaseLanguageModel): The LLM to use.
- prompt (PromptTemplate, optional): The prompt to use.
- criteria (Union[CRITERIA_TYPE, str], optional): The criteria to use.
- **kwargs (Any): Additional keyword arguments.
+ llm: The LLM to use.
+ prompt: The prompt to use.
+ criteria: The criteria to use.
+ **kwargs: Additional keyword arguments.
Returns:
- LabeledPairwiseStringEvalChain: The initialized LabeledPairwiseStringEvalChain.
+ The initialized `LabeledPairwiseStringEvalChain`.
Raises:
ValueError: If the input variables are not as expected.
- """ # noqa: E501
+ """
expected_input_vars = {
"prediction",
"prediction_b",
diff --git a/libs/langchain/langchain_classic/evaluation/criteria/__init__.py b/libs/langchain/langchain_classic/evaluation/criteria/__init__.py
index c275541a06c..a551f59dbe0 100644
--- a/libs/langchain/langchain_classic/evaluation/criteria/__init__.py
+++ b/libs/langchain/langchain_classic/evaluation/criteria/__init__.py
@@ -12,12 +12,12 @@ chain against specified criteria.
Examples:
--------
Using a predefined criterion:
->>> from langchain_community.llms import OpenAI
+>>> from langchain_openai import OpenAI
>>> from langchain_classic.evaluation.criteria import CriteriaEvalChain
->>> llm = OpenAI()
+>>> model = OpenAI()
>>> criteria = "conciseness"
->>> chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria)
+>>> chain = CriteriaEvalChain.from_llm(llm=model, criteria=criteria)
>>> chain.evaluate_strings(
prediction="The answer is 42.",
reference="42",
@@ -26,10 +26,10 @@ Using a predefined criterion:
Using a custom criterion:
->>> from langchain_community.llms import OpenAI
+>>> from langchain_openai import OpenAI
>>> from langchain_classic.evaluation.criteria import LabeledCriteriaEvalChain
->>> llm = OpenAI()
+>>> model = OpenAI()
>>> criteria = {
"hallucination": (
"Does this submission contain information"
@@ -37,7 +37,7 @@ Using a custom criterion:
),
}
>>> chain = LabeledCriteriaEvalChain.from_llm(
- llm=llm,
+ llm=model,
criteria=criteria,
)
>>> chain.evaluate_strings(
diff --git a/libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py b/libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py
index 80d964c451f..5d90659caa7 100644
--- a/libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py
+++ b/libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py
@@ -190,9 +190,9 @@ class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
--------
>>> from langchain_anthropic import ChatAnthropic
>>> from langchain_classic.evaluation.criteria import CriteriaEvalChain
- >>> llm = ChatAnthropic(temperature=0)
+ >>> model = ChatAnthropic(temperature=0)
>>> criteria = {"my-custom-criterion": "Is the submission the most amazing ever?"}
- >>> evaluator = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria)
+ >>> evaluator = CriteriaEvalChain.from_llm(llm=model, criteria=criteria)
>>> evaluator.evaluate_strings(
... prediction="Imagine an ice cream flavor for the color aquamarine",
... input="Tell me an idea",
@@ -205,10 +205,10 @@ class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
>>> from langchain_openai import ChatOpenAI
>>> from langchain_classic.evaluation.criteria import LabeledCriteriaEvalChain
- >>> llm = ChatOpenAI(model="gpt-4", temperature=0)
+ >>> model = ChatOpenAI(model="gpt-4", temperature=0)
>>> criteria = "correctness"
>>> evaluator = LabeledCriteriaEvalChain.from_llm(
- ... llm=llm,
+ ... llm=model,
... criteria=criteria,
... )
>>> evaluator.evaluate_strings(
@@ -228,7 +228,7 @@ class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
"""The parser to use to map the output to a structured result."""
criterion_name: str
"""The name of the criterion being evaluated."""
- output_key: str = "results" #: :meta private:
+ output_key: str = "results"
@classmethod
@override
@@ -347,7 +347,7 @@ class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
--------
>>> from langchain_openai import OpenAI
>>> from langchain_classic.evaluation.criteria import LabeledCriteriaEvalChain
- >>> llm = OpenAI()
+ >>> model = OpenAI()
>>> criteria = {
"hallucination": (
"Does this submission contain information"
@@ -355,7 +355,7 @@ class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
),
}
>>> chain = LabeledCriteriaEvalChain.from_llm(
- llm=llm,
+ llm=model,
criteria=criteria,
)
"""
@@ -433,9 +433,9 @@ class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
Examples:
>>> from langchain_openai import OpenAI
>>> from langchain_classic.evaluation.criteria import CriteriaEvalChain
- >>> llm = OpenAI()
+ >>> model = OpenAI()
>>> criteria = "conciseness"
- >>> chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria)
+ >>> chain = CriteriaEvalChain.from_llm(llm=model, criteria=criteria)
>>> chain.evaluate_strings(
prediction="The answer is 42.",
reference="42",
@@ -485,9 +485,9 @@ class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
Examples:
>>> from langchain_openai import OpenAI
>>> from langchain_classic.evaluation.criteria import CriteriaEvalChain
- >>> llm = OpenAI()
+ >>> model = OpenAI()
>>> criteria = "conciseness"
- >>> chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria)
+ >>> chain = CriteriaEvalChain.from_llm(llm=model, criteria=criteria)
>>> await chain.aevaluate_strings(
prediction="The answer is 42.",
reference="42",
@@ -569,7 +569,7 @@ class LabeledCriteriaEvalChain(CriteriaEvalChain):
--------
>>> from langchain_openai import OpenAI
>>> from langchain_classic.evaluation.criteria import LabeledCriteriaEvalChain
- >>> llm = OpenAI()
+ >>> model = OpenAI()
>>> criteria = {
"hallucination": (
"Does this submission contain information"
@@ -577,7 +577,7 @@ class LabeledCriteriaEvalChain(CriteriaEvalChain):
),
}
>>> chain = LabeledCriteriaEvalChain.from_llm(
- llm=llm,
+ llm=model,
criteria=criteria,
)
"""
diff --git a/libs/langchain/langchain_classic/evaluation/embedding_distance/base.py b/libs/langchain/langchain_classic/evaluation/embedding_distance/base.py
index 58b7f032ff2..c7c0e320dbe 100644
--- a/libs/langchain/langchain_classic/evaluation/embedding_distance/base.py
+++ b/libs/langchain/langchain_classic/evaluation/embedding_distance/base.py
@@ -48,10 +48,10 @@ def _check_numpy() -> bool:
def _embedding_factory() -> Embeddings:
- """Create an Embeddings object.
+ """Create an `Embeddings` object.
Returns:
- Embeddings: The created Embeddings object.
+ The created `Embeddings` object.
"""
# Here for backwards compatibility.
# Generally, we do not want to be seeing imports from langchain community
@@ -94,9 +94,8 @@ class _EmbeddingDistanceChainMixin(Chain):
"""Shared functionality for embedding distance evaluators.
Attributes:
- embeddings (Embeddings): The embedding objects to vectorize the outputs.
- distance_metric (EmbeddingDistance): The distance metric to use
- for comparing the embeddings.
+ embeddings: The embedding objects to vectorize the outputs.
+ distance_metric: The distance metric to use for comparing the embeddings.
"""
embeddings: Embeddings = Field(default_factory=_embedding_factory)
@@ -107,10 +106,10 @@ class _EmbeddingDistanceChainMixin(Chain):
"""Validate that the TikTok library is installed.
Args:
- values (Dict[str, Any]): The values to validate.
+ values: The values to validate.
Returns:
- Dict[str, Any]: The validated values.
+ The validated values.
"""
embeddings = values.get("embeddings")
types_ = []
@@ -159,7 +158,7 @@ class _EmbeddingDistanceChainMixin(Chain):
"""Return the output keys of the chain.
Returns:
- List[str]: The output keys.
+ The output keys.
"""
return ["score"]
@@ -173,10 +172,10 @@ class _EmbeddingDistanceChainMixin(Chain):
"""Get the metric function for the given metric name.
Args:
- metric (EmbeddingDistance): The metric name.
+ metric: The metric name.
Returns:
- Any: The metric function.
+ The metric function.
"""
metrics = {
EmbeddingDistance.COSINE: self._cosine_distance,
@@ -334,7 +333,7 @@ class _EmbeddingDistanceChainMixin(Chain):
vectors (np.ndarray): The input vectors.
Returns:
- float: The computed score.
+ The computed score.
"""
metric = self._get_metric(self.distance_metric)
if _check_numpy() and isinstance(vectors, _import_numpy().ndarray):
@@ -362,7 +361,7 @@ class EmbeddingDistanceEvalChain(_EmbeddingDistanceChainMixin, StringEvaluator):
"""Return whether the chain requires a reference.
Returns:
- bool: True if a reference is required, False otherwise.
+            `True` if a reference is required, `False` otherwise.
"""
return True
@@ -376,7 +375,7 @@ class EmbeddingDistanceEvalChain(_EmbeddingDistanceChainMixin, StringEvaluator):
"""Return the input keys of the chain.
Returns:
- List[str]: The input keys.
+ The input keys.
"""
return ["prediction", "reference"]
@@ -393,7 +392,7 @@ class EmbeddingDistanceEvalChain(_EmbeddingDistanceChainMixin, StringEvaluator):
run_manager: The callback manager.
Returns:
- Dict[str, Any]: The computed score.
+ The computed score.
"""
vectors = self.embeddings.embed_documents(
[inputs["prediction"], inputs["reference"]],
@@ -413,12 +412,11 @@ class EmbeddingDistanceEvalChain(_EmbeddingDistanceChainMixin, StringEvaluator):
"""Asynchronously compute the score for a prediction and reference.
Args:
- inputs (Dict[str, Any]): The input data.
- run_manager (AsyncCallbackManagerForChainRun, optional):
- The callback manager.
+ inputs: The input data.
+ run_manager: The callback manager.
Returns:
- Dict[str, Any]: The computed score.
+ The computed score.
"""
vectors = await self.embeddings.aembed_documents(
[
@@ -456,7 +454,7 @@ class EmbeddingDistanceEvalChain(_EmbeddingDistanceChainMixin, StringEvaluator):
**kwargs: Additional keyword arguments.
Returns:
- A dictionary containing:
+ `dict` containing:
- score: The embedding distance between the two predictions.
"""
result = self(
@@ -492,7 +490,7 @@ class EmbeddingDistanceEvalChain(_EmbeddingDistanceChainMixin, StringEvaluator):
**kwargs: Additional keyword arguments.
Returns:
- A dictionary containing:
+ `dict` containing:
- score: The embedding distance between the two predictions.
"""
result = await self.acall(
@@ -523,7 +521,7 @@ class PairwiseEmbeddingDistanceEvalChain(
"""Return the input keys of the chain.
Returns:
- List[str]: The input keys.
+ The input keys.
"""
return ["prediction", "prediction_b"]
@@ -541,12 +539,11 @@ class PairwiseEmbeddingDistanceEvalChain(
"""Compute the score for two predictions.
Args:
- inputs (Dict[str, Any]): The input data.
- run_manager (CallbackManagerForChainRun, optional):
- The callback manager.
+ inputs: The input data.
+ run_manager: The callback manager.
Returns:
- Dict[str, Any]: The computed score.
+ The computed score.
"""
vectors = self.embeddings.embed_documents(
[
@@ -569,12 +566,11 @@ class PairwiseEmbeddingDistanceEvalChain(
"""Asynchronously compute the score for two predictions.
Args:
- inputs (Dict[str, Any]): The input data.
- run_manager (AsyncCallbackManagerForChainRun, optional):
- The callback manager.
+ inputs: The input data.
+ run_manager: The callback manager.
Returns:
- Dict[str, Any]: The computed score.
+ The computed score.
"""
vectors = await self.embeddings.aembed_documents(
[
@@ -612,7 +608,7 @@ class PairwiseEmbeddingDistanceEvalChain(
**kwargs: Additional keyword arguments.
Returns:
- A dictionary containing:
+ `dict` containing:
- score: The embedding distance between the two predictions.
"""
result = self(
@@ -648,8 +644,8 @@ class PairwiseEmbeddingDistanceEvalChain(
**kwargs: Additional keyword arguments.
Returns:
- A dictionary containing:
- - score: The embedding distance between the two predictions.
+ `dict` containing:
+ - score: The embedding distance between the two predictions.
"""
result = await self.acall(
inputs={"prediction": prediction, "prediction_b": prediction_b},
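
To make the returned score dictionaries above concrete, a hedged sketch of the pairwise embedding-distance evaluator; the choice of `OpenAIEmbeddings` is an assumption, and any `Embeddings` implementation works:

```python
# Sketch: pairwise embedding-distance evaluation (the default metric is cosine).
from langchain_classic.evaluation.embedding_distance.base import (
    PairwiseEmbeddingDistanceEvalChain,
)
from langchain_openai import OpenAIEmbeddings

chain = PairwiseEmbeddingDistanceEvalChain(embeddings=OpenAIEmbeddings())
result = chain.evaluate_string_pairs(
    prediction="Seattle is rainy.",
    prediction_b="It rains a lot in Seattle.",
)
print(result)  # -> {'score': <small distance for semantically similar texts>}
```
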
diff --git a/libs/langchain/langchain_classic/evaluation/exact_match/base.py b/libs/langchain/langchain_classic/evaluation/exact_match/base.py
index b555d932b78..05f0202a316 100644
--- a/libs/langchain/langchain_classic/evaluation/exact_match/base.py
+++ b/libs/langchain/langchain_classic/evaluation/exact_match/base.py
@@ -31,15 +31,12 @@ class ExactMatchStringEvaluator(StringEvaluator):
ignore_numbers: bool = False,
**_: Any,
):
- """Initialize the ExactMatchStringEvaluator.
+ """Initialize the `ExactMatchStringEvaluator`.
Args:
ignore_case: Whether to ignore case when comparing strings.
- Defaults to `False`.
ignore_punctuation: Whether to ignore punctuation when comparing strings.
- Defaults to `False`.
ignore_numbers: Whether to ignore numbers when comparing strings.
- Defaults to `False`.
"""
super().__init__()
self.ignore_case = ignore_case
@@ -61,7 +58,7 @@ class ExactMatchStringEvaluator(StringEvaluator):
"""Get the input keys.
Returns:
- List[str]: The input keys.
+ The input keys.
"""
return ["reference", "prediction"]
@@ -70,7 +67,7 @@ class ExactMatchStringEvaluator(StringEvaluator):
"""Get the evaluation name.
Returns:
- str: The evaluation name.
+ The evaluation name.
"""
return "exact_match"
diff --git a/libs/langchain/langchain_classic/evaluation/parsing/base.py b/libs/langchain/langchain_classic/evaluation/parsing/base.py
index 98141790096..bed2e6e3789 100644
--- a/libs/langchain/langchain_classic/evaluation/parsing/base.py
+++ b/libs/langchain/langchain_classic/evaluation/parsing/base.py
@@ -71,10 +71,11 @@ class JsonValidityEvaluator(StringEvaluator):
**kwargs: Additional keyword arguments (not used).
Returns:
- dict: A dictionary containing the evaluation score. The score is 1 if
- the prediction is valid JSON, and 0 otherwise.
+ `dict` containing the evaluation score. The score is `1` if
+ the prediction is valid JSON, and `0` otherwise.
+
If the prediction is not valid JSON, the dictionary also contains
- a "reasoning" field with the error message.
+ a `reasoning` field with the error message.
"""
try:
@@ -168,7 +169,7 @@ class JsonEqualityEvaluator(StringEvaluator):
**kwargs: Additional keyword arguments (not used).
Returns:
- A dictionary containing the evaluation score.
+ `dict` containing the evaluation score.
"""
parsed = self._parse_json(prediction)
label = self._parse_json(cast("str", reference))
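
To make the documented return values concrete, a small sketch of both JSON evaluators (the expected outputs in the comments are approximate):

```python
# Sketch: validity checks that the prediction parses; equality compares the
# parsed structure against a reference.
from langchain_classic.evaluation.parsing.base import (
    JsonEqualityEvaluator,
    JsonValidityEvaluator,
)

validity = JsonValidityEvaluator()
print(validity.evaluate_strings(prediction='{"a": 1}'))  # -> {'score': 1}
print(validity.evaluate_strings(prediction='{"a": 1'))   # -> {'score': 0, 'reasoning': '...'}

equality = JsonEqualityEvaluator()
print(equality.evaluate_strings(prediction='{"a": 1}', reference='{"a": 1}'))
# -> {'score': True}
```
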
diff --git a/libs/langchain/langchain_classic/evaluation/parsing/json_distance.py b/libs/langchain/langchain_classic/evaluation/parsing/json_distance.py
index b1167cd40cb..057b8d4e1f4 100644
--- a/libs/langchain/langchain_classic/evaluation/parsing/json_distance.py
+++ b/libs/langchain/langchain_classic/evaluation/parsing/json_distance.py
@@ -50,7 +50,7 @@ class JsonEditDistanceEvaluator(StringEvaluator):
Raises:
ImportError: If the `rapidfuzz` package is not installed and no
- `string_distance` function is provided.
+ `string_distance` function is provided.
"""
super().__init__()
if string_distance is not None:
diff --git a/libs/langchain/langchain_classic/evaluation/qa/eval_chain.py b/libs/langchain/langchain_classic/evaluation/qa/eval_chain.py
index 189a2f53212..f87cb1b7488 100644
--- a/libs/langchain/langchain_classic/evaluation/qa/eval_chain.py
+++ b/libs/langchain/langchain_classic/evaluation/qa/eval_chain.py
@@ -77,7 +77,7 @@ def _parse_string_eval_output(text: str) -> dict:
class QAEvalChain(LLMChain, StringEvaluator, LLMEvalChain):
"""LLM Chain for evaluating question answering."""
- output_key: str = "results" #: :meta private:
+ output_key: str = "results"
model_config = ConfigDict(
extra="ignore",
@@ -113,17 +113,16 @@ class QAEvalChain(LLMChain, StringEvaluator, LLMEvalChain):
"""Load QA Eval Chain from LLM.
Args:
- llm (BaseLanguageModel): the base language model to use.
+ llm: The base language model to use.
+ prompt: A prompt template containing the input_variables:
+                `'query'`, `'answer'` and `'result'` that will be used as the prompt
+ for evaluation.
- prompt (PromptTemplate): A prompt template containing the input_variables:
- 'input', 'answer' and 'result' that will be used as the prompt
- for evaluation.
- Defaults to PROMPT.
-
- **kwargs: additional keyword arguments.
+ Defaults to `PROMPT`.
+ **kwargs: Additional keyword arguments.
Returns:
- QAEvalChain: the loaded QA eval chain.
+ The loaded QA eval chain.
"""
prompt = prompt or PROMPT
expected_input_vars = {"query", "answer", "result"}
@@ -264,17 +263,16 @@ class ContextQAEvalChain(LLMChain, StringEvaluator, LLMEvalChain):
"""Load QA Eval Chain from LLM.
Args:
- llm (BaseLanguageModel): the base language model to use.
+ llm: The base language model to use.
+ prompt: A prompt template containing the `input_variables`:
+ `'query'`, `'context'` and `'result'` that will be used as the prompt
+ for evaluation.
- prompt (PromptTemplate): A prompt template containing the input_variables:
- 'query', 'context' and 'result' that will be used as the prompt
- for evaluation.
- Defaults to PROMPT.
-
- **kwargs: additional keyword arguments.
+ Defaults to `PROMPT`.
+ **kwargs: Additional keyword arguments.
Returns:
- ContextQAEvalChain: the loaded QA eval chain.
+ The loaded QA eval chain.
"""
prompt = prompt or CONTEXT_PROMPT
cls._validate_input_vars(prompt)
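
A hedged sketch of the QA eval chain documented above; the grading model is an assumption, and the string-evaluator interface maps `input`/`reference`/`prediction` onto the prompt's `query`/`answer`/`result` variables:

```python
# Sketch: grade a predicted answer against a reference answer.
from langchain_classic.evaluation.qa.eval_chain import QAEvalChain
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
eval_chain = QAEvalChain.from_llm(llm=model)

result = eval_chain.evaluate_strings(
    input="What is the chemical formula for water?",
    prediction="H2O",
    reference="H2O",
)
print(result)  # e.g. {'reasoning': ..., 'value': 'CORRECT', 'score': 1}
```
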
diff --git a/libs/langchain/langchain_classic/evaluation/regex_match/base.py b/libs/langchain/langchain_classic/evaluation/regex_match/base.py
index 23afa56ae4d..cda1ecee143 100644
--- a/libs/langchain/langchain_classic/evaluation/regex_match/base.py
+++ b/libs/langchain/langchain_classic/evaluation/regex_match/base.py
@@ -33,7 +33,7 @@ class RegexMatchStringEvaluator(StringEvaluator):
"""Initialize the RegexMatchStringEvaluator.
Args:
- flags: Flags to use for the regex match. Defaults to 0 (no flags).
+ flags: Flags to use for the regex match. Defaults to no flags.
"""
super().__init__()
self.flags = flags
@@ -53,7 +53,7 @@ class RegexMatchStringEvaluator(StringEvaluator):
"""Get the input keys.
Returns:
- List[str]: The input keys.
+ The input keys.
"""
return ["reference", "prediction"]
@@ -62,7 +62,7 @@ class RegexMatchStringEvaluator(StringEvaluator):
"""Get the evaluation name.
Returns:
- str: The evaluation name.
+ The evaluation name.
"""
return "regex_match"
diff --git a/libs/langchain/langchain_classic/evaluation/schema.py b/libs/langchain/langchain_classic/evaluation/schema.py
index 25d2b0b028f..d8e598846d4 100644
--- a/libs/langchain/langchain_classic/evaluation/schema.py
+++ b/libs/langchain/langchain_classic/evaluation/schema.py
@@ -114,8 +114,8 @@ class _EvalArgsMixin:
"""Check if the evaluation arguments are valid.
Args:
- reference (str | None, optional): The reference label.
- input_ (str | None, optional): The input string.
+ reference: The reference label.
+ input_: The input string.
Raises:
ValueError: If the evaluator requires an input string but none is provided,
@@ -162,17 +162,17 @@ class StringEvaluator(_EvalArgsMixin, ABC):
"""Evaluate Chain or LLM output, based on optional input and label.
Args:
- prediction (str): The LLM or chain prediction to evaluate.
- reference (str | None, optional): The reference label to evaluate against.
- input (str | None, optional): The input to consider during evaluation.
- kwargs: Additional keyword arguments, including callbacks, tags, etc.
+ prediction: The LLM or chain prediction to evaluate.
+ reference: The reference label to evaluate against.
+ input: The input to consider during evaluation.
+ **kwargs: Additional keyword arguments, including callbacks, tags, etc.
Returns:
- dict: The evaluation results containing the score or value.
- It is recommended that the dictionary contain the following keys:
- - score: the score of the evaluation, if applicable.
- - value: the string value of the evaluation, if applicable.
- - reasoning: the reasoning for the evaluation, if applicable.
+ The evaluation results containing the score or value.
+ It is recommended that the dictionary contain the following keys:
+ - score: the score of the evaluation, if applicable.
+ - value: the string value of the evaluation, if applicable.
+ - reasoning: the reasoning for the evaluation, if applicable.
"""
async def _aevaluate_strings(
@@ -186,17 +186,17 @@ class StringEvaluator(_EvalArgsMixin, ABC):
"""Asynchronously evaluate Chain or LLM output, based on optional input and label.
Args:
- prediction (str): The LLM or chain prediction to evaluate.
- reference (str | None, optional): The reference label to evaluate against.
- input (str | None, optional): The input to consider during evaluation.
- kwargs: Additional keyword arguments, including callbacks, tags, etc.
+ prediction: The LLM or chain prediction to evaluate.
+ reference: The reference label to evaluate against.
+ input: The input to consider during evaluation.
+ **kwargs: Additional keyword arguments, including callbacks, tags, etc.
Returns:
- dict: The evaluation results containing the score or value.
- It is recommended that the dictionary contain the following keys:
- - score: the score of the evaluation, if applicable.
- - value: the string value of the evaluation, if applicable.
- - reasoning: the reasoning for the evaluation, if applicable.
+ The evaluation results containing the score or value.
+ It is recommended that the dictionary contain the following keys:
+ - score: the score of the evaluation, if applicable.
+ - value: the string value of the evaluation, if applicable.
+ - reasoning: the reasoning for the evaluation, if applicable.
""" # noqa: E501
return await run_in_executor(
None,
@@ -218,13 +218,13 @@ class StringEvaluator(_EvalArgsMixin, ABC):
"""Evaluate Chain or LLM output, based on optional input and label.
Args:
- prediction (str): The LLM or chain prediction to evaluate.
- reference (str | None, optional): The reference label to evaluate against.
- input (str | None, optional): The input to consider during evaluation.
- kwargs: Additional keyword arguments, including callbacks, tags, etc.
+ prediction: The LLM or chain prediction to evaluate.
+ reference: The reference label to evaluate against.
+ input: The input to consider during evaluation.
+ **kwargs: Additional keyword arguments, including callbacks, tags, etc.
Returns:
- dict: The evaluation results containing the score or value.
+ The evaluation results containing the score or value.
"""
self._check_evaluation_args(reference=reference, input_=input)
return self._evaluate_strings(
@@ -245,13 +245,13 @@ class StringEvaluator(_EvalArgsMixin, ABC):
"""Asynchronously evaluate Chain or LLM output, based on optional input and label.
Args:
- prediction (str): The LLM or chain prediction to evaluate.
- reference (str | None, optional): The reference label to evaluate against.
- input (str | None, optional): The input to consider during evaluation.
- kwargs: Additional keyword arguments, including callbacks, tags, etc.
+ prediction: The LLM or chain prediction to evaluate.
+ reference: The reference label to evaluate against.
+ input: The input to consider during evaluation.
+ **kwargs: Additional keyword arguments, including callbacks, tags, etc.
Returns:
- dict: The evaluation results containing the score or value.
+ The evaluation results containing the score or value.
""" # noqa: E501
self._check_evaluation_args(reference=reference, input_=input)
return await self._aevaluate_strings(
@@ -278,14 +278,14 @@ class PairwiseStringEvaluator(_EvalArgsMixin, ABC):
"""Evaluate the output string pairs.
Args:
- prediction (str): The output string from the first model.
- prediction_b (str): The output string from the second model.
- reference (str | None, optional): The expected output / reference string.
- input (str | None, optional): The input string.
- kwargs: Additional keyword arguments, such as callbacks and optional reference strings.
+ prediction: The output string from the first model.
+ prediction_b: The output string from the second model.
+ reference: The expected output / reference string.
+ input: The input string.
+ **kwargs: Additional keyword arguments, such as callbacks and optional reference strings.
Returns:
- dict: A dictionary containing the preference, scores, and/or other information.
+ `dict` containing the preference, scores, and/or other information.
""" # noqa: E501
async def _aevaluate_string_pairs(
@@ -300,14 +300,14 @@ class PairwiseStringEvaluator(_EvalArgsMixin, ABC):
"""Asynchronously evaluate the output string pairs.
Args:
- prediction (str): The output string from the first model.
- prediction_b (str): The output string from the second model.
- reference (str | None, optional): The expected output / reference string.
- input (str | None, optional): The input string.
- kwargs: Additional keyword arguments, such as callbacks and optional reference strings.
+ prediction: The output string from the first model.
+ prediction_b: The output string from the second model.
+ reference: The expected output / reference string.
+ input: The input string.
+ **kwargs: Additional keyword arguments, such as callbacks and optional reference strings.
Returns:
- dict: A dictionary containing the preference, scores, and/or other information.
+ `dict` containing the preference, scores, and/or other information.
""" # noqa: E501
return await run_in_executor(
None,
@@ -331,14 +331,14 @@ class PairwiseStringEvaluator(_EvalArgsMixin, ABC):
"""Evaluate the output string pairs.
Args:
- prediction (str): The output string from the first model.
- prediction_b (str): The output string from the second model.
- reference (str | None, optional): The expected output / reference string.
- input (str | None, optional): The input string.
- kwargs: Additional keyword arguments, such as callbacks and optional reference strings.
+ prediction: The output string from the first model.
+ prediction_b: The output string from the second model.
+ reference: The expected output / reference string.
+ input: The input string.
+ **kwargs: Additional keyword arguments, such as callbacks and optional reference strings.
Returns:
- dict: A dictionary containing the preference, scores, and/or other information.
+ `dict` containing the preference, scores, and/or other information.
""" # noqa: E501
self._check_evaluation_args(reference=reference, input_=input)
return self._evaluate_string_pairs(
@@ -361,14 +361,14 @@ class PairwiseStringEvaluator(_EvalArgsMixin, ABC):
"""Asynchronously evaluate the output string pairs.
Args:
- prediction (str): The output string from the first model.
- prediction_b (str): The output string from the second model.
- reference (str | None, optional): The expected output / reference string.
- input (str | None, optional): The input string.
- kwargs: Additional keyword arguments, such as callbacks and optional reference strings.
+ prediction: The output string from the first model.
+ prediction_b: The output string from the second model.
+ reference: The expected output / reference string.
+ input: The input string.
+ **kwargs: Additional keyword arguments, such as callbacks and optional reference strings.
Returns:
- dict: A dictionary containing the preference, scores, and/or other information.
+ `dict` containing the preference, scores, and/or other information.
""" # noqa: E501
self._check_evaluation_args(reference=reference, input_=input)
return await self._aevaluate_string_pairs(
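
A minimal sketch (not part of the diff) of the `PairwiseStringEvaluator` interface documented above, using `PairwiseStringDistanceEvalChain` — touched further down in this diff — as the concrete implementation; it assumes the optional `rapidfuzz` dependency is installed:

```python
# Hedged sketch: exercising evaluate_string_pairs() on a concrete pairwise evaluator.
from langchain_classic.evaluation.string_distance.base import (
    PairwiseStringDistanceEvalChain,
)

evaluator = PairwiseStringDistanceEvalChain()
result = evaluator.evaluate_string_pairs(prediction="H2O", prediction_b="H20")
print(result)  # a dict with a "score" key, per the Returns sections above
```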
diff --git a/libs/langchain/langchain_classic/evaluation/scoring/__init__.py b/libs/langchain/langchain_classic/evaluation/scoring/__init__.py
index 4faaefb278a..5527d530e4a 100644
--- a/libs/langchain/langchain_classic/evaluation/scoring/__init__.py
+++ b/libs/langchain/langchain_classic/evaluation/scoring/__init__.py
@@ -5,10 +5,10 @@ be they LLMs, Chains, or otherwise. This can be based on a variety of
criteria and or a reference answer.
Example:
- >>> from langchain_community.chat_models import ChatOpenAI
+ >>> from langchain_openai import ChatOpenAI
>>> from langchain_classic.evaluation.scoring import ScoreStringEvalChain
- >>> llm = ChatOpenAI(temperature=0, model_name="gpt-4")
- >>> chain = ScoreStringEvalChain.from_llm(llm=llm)
+ >>> model = ChatOpenAI(temperature=0, model_name="gpt-4")
+ >>> chain = ScoreStringEvalChain.from_llm(llm=model)
>>> result = chain.evaluate_strings(
... input="What is the chemical formula for water?",
... prediction="H2O",
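
A rough completion of the docstring example above (it assumes an OpenAI API key is configured and `langchain-openai` is installed):

```python
from langchain_openai import ChatOpenAI

from langchain_classic.evaluation.scoring import ScoreStringEvalChain

model = ChatOpenAI(temperature=0, model_name="gpt-4")
chain = ScoreStringEvalChain.from_llm(llm=model)
result = chain.evaluate_strings(
    input="What is the chemical formula for water?",
    prediction="H2O",
)
# Per the Returns sections above, the result typically includes "reasoning"
# and a numeric "score" between 1 and 10.
print(result.get("score"), result.get("reasoning"))
```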
diff --git a/libs/langchain/langchain_classic/evaluation/scoring/eval_chain.py b/libs/langchain/langchain_classic/evaluation/scoring/eval_chain.py
index cbc01d75bdf..f5ac405ac83 100644
--- a/libs/langchain/langchain_classic/evaluation/scoring/eval_chain.py
+++ b/libs/langchain/langchain_classic/evaluation/scoring/eval_chain.py
@@ -56,10 +56,10 @@ def resolve_criteria(
"""Resolve the criteria for the pairwise evaluator.
Args:
- criteria (Union[CRITERIA_TYPE, str], optional): The criteria to use.
+ criteria: The criteria to use.
Returns:
- dict: The resolved criteria.
+ The resolved criteria.
"""
if criteria is None:
@@ -154,10 +154,10 @@ class ScoreStringEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
output_parser (BaseOutputParser): The output parser for the chain.
Example:
- >>> from langchain_community.chat_models import ChatOpenAI
+ >>> from langchain_openai import ChatOpenAI
>>> from langchain_classic.evaluation.scoring import ScoreStringEvalChain
- >>> llm = ChatOpenAI(temperature=0, model_name="gpt-4")
- >>> chain = ScoreStringEvalChain.from_llm(llm=llm)
+ >>> model = ChatOpenAI(temperature=0, model_name="gpt-4")
+ >>> chain = ScoreStringEvalChain.from_llm(llm=model)
>>> result = chain.evaluate_strings(
... input="What is the chemical formula for water?",
... prediction="H2O",
@@ -173,7 +173,7 @@ class ScoreStringEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
"""
- output_key: str = "results" #: :meta private:
+ output_key: str = "results"
output_parser: BaseOutputParser = Field(
default_factory=ScoreStringResultOutputParser,
)
@@ -196,7 +196,7 @@ class ScoreStringEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
"""Return whether the chain requires a reference.
Returns:
- bool: True if the chain requires a reference, False otherwise.
+ `True` if the chain requires a reference, `False` otherwise.
"""
return False
@@ -206,7 +206,7 @@ class ScoreStringEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
"""Return whether the chain requires an input.
Returns:
- bool: True if the chain requires an input, False otherwise.
+ `True` if the chain requires an input, `False` otherwise.
"""
return True
@@ -227,7 +227,7 @@ class ScoreStringEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
"""Return the warning to show when reference is ignored.
Returns:
- str: The warning to show when reference is ignored.
+ The warning to show when reference is ignored.
"""
return (
@@ -354,7 +354,7 @@ Performance may be significantly worse with other models.",
**kwargs: Additional keyword arguments.
Returns:
- A dictionary containing:
+ `dict` containing:
- reasoning: The reasoning for the preference.
- score: A score between 1 and 10.
@@ -395,7 +395,7 @@ Performance may be significantly worse with other models.",
**kwargs: Additional keyword arguments.
Returns:
- A dictionary containing:
+ `dict` containing:
- reasoning: The reasoning for the preference.
- score: A score between 1 and 10.
@@ -424,7 +424,7 @@ class LabeledScoreStringEvalChain(ScoreStringEvalChain):
"""Return whether the chain requires a reference.
Returns:
- bool: True if the chain requires a reference, False otherwise.
+ `True` if the chain requires a reference, `False` otherwise.
"""
return True
@@ -442,14 +442,14 @@ class LabeledScoreStringEvalChain(ScoreStringEvalChain):
"""Initialize the LabeledScoreStringEvalChain from an LLM.
Args:
- llm (BaseLanguageModel): The LLM to use.
- prompt (PromptTemplate, optional): The prompt to use.
- criteria (Union[CRITERIA_TYPE, str], optional): The criteria to use.
- normalize_by (float, optional): The value to normalize the score by.
- **kwargs (Any): Additional keyword arguments.
+ llm: The LLM to use.
+ prompt: The prompt to use.
+ criteria: The criteria to use.
+ normalize_by: The value to normalize the score by.
+ **kwargs: Additional keyword arguments.
Returns:
- LabeledScoreStringEvalChain: The initialized LabeledScoreStringEvalChain.
+ The initialized LabeledScoreStringEvalChain.
Raises:
ValueError: If the input variables are not as expected.
diff --git a/libs/langchain/langchain_classic/evaluation/string_distance/base.py b/libs/langchain/langchain_classic/evaluation/string_distance/base.py
index e8d5c06fa9a..88e91cffefe 100644
--- a/libs/langchain/langchain_classic/evaluation/string_distance/base.py
+++ b/libs/langchain/langchain_classic/evaluation/string_distance/base.py
@@ -25,7 +25,7 @@ def _load_rapidfuzz() -> Any:
ImportError: If the rapidfuzz library is not installed.
Returns:
- Any: The rapidfuzz.distance module.
+ The `rapidfuzz.distance` module.
"""
try:
import rapidfuzz
@@ -42,12 +42,12 @@ class StringDistance(str, Enum):
"""Distance metric to use.
Attributes:
- DAMERAU_LEVENSHTEIN: The Damerau-Levenshtein distance.
- LEVENSHTEIN: The Levenshtein distance.
- JARO: The Jaro distance.
- JARO_WINKLER: The Jaro-Winkler distance.
- HAMMING: The Hamming distance.
- INDEL: The Indel distance.
+ `DAMERAU_LEVENSHTEIN`: The Damerau-Levenshtein distance.
+ `LEVENSHTEIN`: The Levenshtein distance.
+ `JARO`: The Jaro distance.
+ `JARO_WINKLER`: The Jaro-Winkler distance.
+ `HAMMING`: The Hamming distance.
+ `INDEL`: The Indel distance.
"""
DAMERAU_LEVENSHTEIN = "damerau_levenshtein"
@@ -63,7 +63,7 @@ class _RapidFuzzChainMixin(Chain):
distance: StringDistance = Field(default=StringDistance.JARO_WINKLER)
normalize_score: bool = Field(default=True)
- """Whether to normalize the score to a value between 0 and 1.
+ """Whether to normalize the score to a value between `0` and `1`.
Applies only to the Levenshtein and Damerau-Levenshtein distances."""
@pre_init
@@ -71,10 +71,10 @@ class _RapidFuzzChainMixin(Chain):
"""Validate that the rapidfuzz library is installed.
Args:
- values (Dict[str, Any]): The input values.
+ values: The input values.
Returns:
- Dict[str, Any]: The validated values.
+ The validated values.
"""
_load_rapidfuzz()
return values
@@ -84,7 +84,7 @@ class _RapidFuzzChainMixin(Chain):
"""Get the output keys.
Returns:
- List[str]: The output keys.
+ The output keys.
"""
return ["score"]
@@ -92,10 +92,10 @@ class _RapidFuzzChainMixin(Chain):
"""Prepare the output dictionary.
Args:
- result (Dict[str, Any]): The evaluation results.
+ result: The evaluation results.
Returns:
- Dict[str, Any]: The prepared output dictionary.
+ The prepared output dictionary.
"""
result = {"score": result["score"]}
if RUN_KEY in result:
@@ -111,7 +111,7 @@ class _RapidFuzzChainMixin(Chain):
normalize_score: Whether to normalize the score.
Returns:
- Callable: The distance metric function.
+ The distance metric function.
Raises:
ValueError: If the distance metric is invalid.
@@ -142,7 +142,7 @@ class _RapidFuzzChainMixin(Chain):
"""Get the distance metric function.
Returns:
- Callable: The distance metric function.
+ The distance metric function.
"""
return _RapidFuzzChainMixin._get_metric(
self.distance,
@@ -199,7 +199,7 @@ class StringDistanceEvalChain(StringEvaluator, _RapidFuzzChainMixin):
"""Get the input keys.
Returns:
- List[str]: The input keys.
+ The input keys.
"""
return ["reference", "prediction"]
@@ -208,7 +208,7 @@ class StringDistanceEvalChain(StringEvaluator, _RapidFuzzChainMixin):
"""Get the evaluation name.
Returns:
- str: The evaluation name.
+ The evaluation name.
"""
return f"{self.distance.value}_distance"
@@ -330,7 +330,7 @@ class PairwiseStringDistanceEvalChain(PairwiseStringEvaluator, _RapidFuzzChainMi
"""Get the input keys.
Returns:
- List[str]: The input keys.
+ The input keys.
"""
return ["prediction", "prediction_b"]
@@ -339,7 +339,7 @@ class PairwiseStringDistanceEvalChain(PairwiseStringEvaluator, _RapidFuzzChainMi
"""Get the evaluation name.
Returns:
- str: The evaluation name.
+ The evaluation name.
"""
return f"pairwise_{self.distance.value}_distance"
@@ -352,12 +352,11 @@ class PairwiseStringDistanceEvalChain(PairwiseStringEvaluator, _RapidFuzzChainMi
"""Compute the string distance between two predictions.
Args:
- inputs (Dict[str, Any]): The input values.
- run_manager (CallbackManagerForChainRun , optional):
- The callback manager.
+ inputs: The input values.
+ run_manager: The callback manager.
Returns:
- Dict[str, Any]: The evaluation results containing the score.
+ The evaluation results containing the score.
"""
return {
"score": self.compute_metric(inputs["prediction"], inputs["prediction_b"]),
@@ -372,12 +371,11 @@ class PairwiseStringDistanceEvalChain(PairwiseStringEvaluator, _RapidFuzzChainMi
"""Asynchronously compute the string distance between two predictions.
Args:
- inputs (Dict[str, Any]): The input values.
- run_manager (AsyncCallbackManagerForChainRun , optional):
- The callback manager.
+ inputs: The input values.
+ run_manager: The callback manager.
Returns:
- Dict[str, Any]: The evaluation results containing the score.
+ The evaluation results containing the score.
"""
return {
"score": self.compute_metric(inputs["prediction"], inputs["prediction_b"]),
diff --git a/libs/langchain/langchain_classic/hub.py b/libs/langchain/langchain_classic/hub.py
index 7abc310ab45..f23887a4546 100644
--- a/libs/langchain/langchain_classic/hub.py
+++ b/libs/langchain/langchain_classic/hub.py
@@ -15,6 +15,21 @@ def _get_client(
api_key: str | None = None,
api_url: str | None = None,
) -> Any:
+ """Get a client for interacting with the LangChain Hub.
+
+ Attempts to use LangSmith client if available, otherwise falls back to
+ the legacy `langchainhub` client.
+
+ Args:
+ api_key: API key to authenticate with the LangChain Hub API.
+ api_url: URL of the LangChain Hub API.
+
+ Returns:
+ Client instance for interacting with the hub.
+
+ Raises:
+ ImportError: If neither `langsmith` nor `langchainhub` can be imported.
+ """
try:
from langsmith import Client as LangSmithClient
@@ -51,18 +66,22 @@ def push(
) -> str:
"""Push an object to the hub and returns the URL it can be viewed at in a browser.
- :param repo_full_name: The full name of the prompt to push to in the format of
- `owner/prompt_name` or `prompt_name`.
- :param object: The LangChain to serialize and push to the hub.
- :param api_url: The URL of the LangChain Hub API. Defaults to the hosted API service
- if you have an api key set, or a localhost instance if not.
- :param api_key: The API key to use to authenticate with the LangChain Hub API.
- :param parent_commit_hash: The commit hash of the parent commit to push to. Defaults
- to the latest commit automatically.
- :param new_repo_is_public: Whether the prompt should be public. Defaults to
- False (Private by default).
- :param new_repo_description: The description of the prompt. Defaults to an empty
- string.
+ Args:
+ repo_full_name: The full name of the prompt to push to in the format of
+ `owner/prompt_name` or `prompt_name`.
+ object: The LangChain object to serialize and push to the hub.
+ api_url: The URL of the LangChain Hub API. Defaults to the hosted API service
+ if you have an API key set, or a localhost instance if not.
+ api_key: The API key to use to authenticate with the LangChain Hub API.
+ parent_commit_hash: The commit hash of the parent commit to push to. Defaults
+ to the latest commit automatically.
+ new_repo_is_public: Whether the prompt should be public.
+ new_repo_description: The description of the prompt.
+ readme: README content for the repository.
+ tags: Tags to associate with the prompt.
+
+ Returns:
+ URL where the pushed object can be viewed in a browser.
"""
client = _get_client(api_key=api_key, api_url=api_url)
@@ -98,12 +117,17 @@ def pull(
) -> Any:
"""Pull an object from the hub and returns it as a LangChain object.
- :param owner_repo_commit: The full name of the prompt to pull from in the format of
- `owner/prompt_name:commit_hash` or `owner/prompt_name`
- or just `prompt_name` if it's your own prompt.
- :param api_url: The URL of the LangChain Hub API. Defaults to the hosted API service
- if you have an api key set, or a localhost instance if not.
- :param api_key: The API key to use to authenticate with the LangChain Hub API.
+ Args:
+ owner_repo_commit: The full name of the prompt to pull from in the format of
+ `owner/prompt_name:commit_hash` or `owner/prompt_name`
+ or just `prompt_name` if it's your own prompt.
+ include_model: Whether to include the model configuration in the pulled prompt.
+ api_url: The URL of the LangChain Hub API. Defaults to the hosted API service
+ if you have an API key set, or a localhost instance if not.
+ api_key: The API key to use to authenticate with the LangChain Hub API.
+
+ Returns:
+ The pulled LangChain object.
"""
client = _get_client(api_key=api_key, api_url=api_url)
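
A rough usage sketch based on the `push`/`pull` docstrings above; the prompt name is a placeholder and the client resolution follows whatever `_get_client` finds installed:

```python
from langchain_classic import hub

prompt = hub.pull("my-org/my-prompt")        # owner/prompt_name, optionally :commit_hash
url = hub.push("my-org/my-prompt", prompt)   # returns the URL it can be viewed at
print(url)
```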
diff --git a/libs/langchain/langchain_classic/indexes/_sql_record_manager.py b/libs/langchain/langchain_classic/indexes/_sql_record_manager.py
index c5dbcfd8a67..e9ab8413ebb 100644
--- a/libs/langchain/langchain_classic/indexes/_sql_record_manager.py
+++ b/libs/langchain/langchain_classic/indexes/_sql_record_manager.py
@@ -97,21 +97,17 @@ class SQLRecordManager(RecordManager):
"""Initialize the SQLRecordManager.
This class serves as a manager persistence layer that uses an SQL
- backend to track upserted records. You should specify either a db_url
+ backend to track upserted records. You should specify either a `db_url`
to create an engine or provide an existing engine.
Args:
namespace: The namespace associated with this record manager.
engine: An already existing SQL Alchemy engine.
- Default is None.
- db_url: A database connection string used to create
- an SQL Alchemy engine. Default is None.
- engine_kwargs: Additional keyword arguments
- to be passed when creating the engine. Default is an empty dictionary.
- async_mode: Whether to create an async engine.
- Driver should support async operations.
- It only applies if db_url is provided.
- Default is False.
+ db_url: A database connection string used to create an SQL Alchemy engine.
+ engine_kwargs: Additional keyword arguments to be passed when creating the
+ engine.
+ async_mode: Whether to create an async engine. Driver should support async
+ operations. It only applies if `db_url` is provided.
Raises:
ValueError: If both db_url and engine are provided or neither.
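
A minimal sketch assuming a local SQLite database; the namespace string is arbitrary, and the `create_schema()` call is assumed to follow the usual `RecordManager` setup pattern:

```python
from langchain_classic.indexes._sql_record_manager import SQLRecordManager

record_manager = SQLRecordManager(
    "my_docs/my_index",                     # namespace for this record manager
    db_url="sqlite:///record_manager.db",   # pass either db_url or engine, not both
)
record_manager.create_schema()
```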
diff --git a/libs/langchain/langchain_classic/indexes/vectorstore.py b/libs/langchain/langchain_classic/indexes/vectorstore.py
index d23951d0daf..073091c2b2a 100644
--- a/libs/langchain/langchain_classic/indexes/vectorstore.py
+++ b/libs/langchain/langchain_classic/indexes/vectorstore.py
@@ -22,7 +22,7 @@ def _get_default_text_splitter() -> TextSplitter:
class VectorStoreIndexWrapper(BaseModel):
- """Wrapper around a vectorstore for easy access."""
+ """Wrapper around a `VectorStore` for easy access."""
vectorstore: VectorStore
@@ -38,11 +38,11 @@ class VectorStoreIndexWrapper(BaseModel):
retriever_kwargs: dict[str, Any] | None = None,
**kwargs: Any,
) -> str:
- """Query the vectorstore using the provided LLM.
+ """Query the `VectorStore` using the provided LLM.
Args:
question: The question or prompt to query.
- llm: The language model to use. Must not be None.
+ llm: The language model to use. Must not be `None`.
retriever_kwargs: Optional keyword arguments for the retriever.
**kwargs: Additional keyword arguments forwarded to the chain.
@@ -55,7 +55,7 @@ class VectorStoreIndexWrapper(BaseModel):
"Please provide an llm to use for querying the vectorstore.\n"
"For example,\n"
"from langchain_openai import OpenAI\n"
- "llm = OpenAI(temperature=0)"
+ "model = OpenAI(temperature=0)"
)
raise NotImplementedError(msg)
retriever_kwargs = retriever_kwargs or {}
@@ -73,11 +73,11 @@ class VectorStoreIndexWrapper(BaseModel):
retriever_kwargs: dict[str, Any] | None = None,
**kwargs: Any,
) -> str:
- """Asynchronously query the vectorstore using the provided LLM.
+ """Asynchronously query the `VectorStore` using the provided LLM.
Args:
question: The question or prompt to query.
- llm: The language model to use. Must not be None.
+ llm: The language model to use. Must not be `None`.
retriever_kwargs: Optional keyword arguments for the retriever.
**kwargs: Additional keyword arguments forwarded to the chain.
@@ -90,7 +90,7 @@ class VectorStoreIndexWrapper(BaseModel):
"Please provide an llm to use for querying the vectorstore.\n"
"For example,\n"
"from langchain_openai import OpenAI\n"
- "llm = OpenAI(temperature=0)"
+ "model = OpenAI(temperature=0)"
)
raise NotImplementedError(msg)
retriever_kwargs = retriever_kwargs or {}
@@ -108,16 +108,16 @@ class VectorStoreIndexWrapper(BaseModel):
retriever_kwargs: dict[str, Any] | None = None,
**kwargs: Any,
) -> dict:
- """Query the vectorstore and retrieve the answer along with sources.
+ """Query the `VectorStore` and retrieve the answer along with sources.
Args:
question: The question or prompt to query.
- llm: The language model to use. Must not be None.
+ llm: The language model to use. Must not be `None`.
retriever_kwargs: Optional keyword arguments for the retriever.
**kwargs: Additional keyword arguments forwarded to the chain.
Returns:
- A dictionary containing the answer and source documents.
+ `dict` containing the answer and source documents.
"""
if llm is None:
msg = (
@@ -125,7 +125,7 @@ class VectorStoreIndexWrapper(BaseModel):
"Please provide an llm to use for querying the vectorstore.\n"
"For example,\n"
"from langchain_openai import OpenAI\n"
- "llm = OpenAI(temperature=0)"
+ "model = OpenAI(temperature=0)"
)
raise NotImplementedError(msg)
retriever_kwargs = retriever_kwargs or {}
@@ -143,16 +143,16 @@ class VectorStoreIndexWrapper(BaseModel):
retriever_kwargs: dict[str, Any] | None = None,
**kwargs: Any,
) -> dict:
- """Asynchronously query the vectorstore and retrieve the answer and sources.
+ """Asynchronously query the `VectorStore` and retrieve the answer and sources.
Args:
question: The question or prompt to query.
- llm: The language model to use. Must not be None.
+ llm: The language model to use. Must not be `None`.
retriever_kwargs: Optional keyword arguments for the retriever.
**kwargs: Additional keyword arguments forwarded to the chain.
Returns:
- A dictionary containing the answer and source documents.
+ `dict` containing the answer and source documents.
"""
if llm is None:
msg = (
@@ -160,7 +160,7 @@ class VectorStoreIndexWrapper(BaseModel):
"Please provide an llm to use for querying the vectorstore.\n"
"For example,\n"
"from langchain_openai import OpenAI\n"
- "llm = OpenAI(temperature=0)"
+ "model = OpenAI(temperature=0)"
)
raise NotImplementedError(msg)
retriever_kwargs = retriever_kwargs or {}
@@ -173,7 +173,7 @@ class VectorStoreIndexWrapper(BaseModel):
def _get_in_memory_vectorstore() -> type[VectorStore]:
- """Get the InMemoryVectorStore."""
+ """Get the `InMemoryVectorStore`."""
import warnings
try:
@@ -184,7 +184,7 @@ def _get_in_memory_vectorstore() -> type[VectorStore]:
warnings.warn(
"Using InMemoryVectorStore as the default vectorstore."
"This memory store won't persist data. You should explicitly"
- "specify a vectorstore when using VectorstoreIndexCreator",
+ "specify a VectorStore when using VectorstoreIndexCreator",
stacklevel=3,
)
return InMemoryVectorStore
@@ -206,7 +206,7 @@ class VectorstoreIndexCreator(BaseModel):
)
def from_loaders(self, loaders: list[BaseLoader]) -> VectorStoreIndexWrapper:
- """Create a vectorstore index from a list of loaders.
+ """Create a `VectorStore` index from a list of loaders.
Args:
loaders: A list of `BaseLoader` instances to load documents.
@@ -220,7 +220,7 @@ class VectorstoreIndexCreator(BaseModel):
return self.from_documents(docs)
async def afrom_loaders(self, loaders: list[BaseLoader]) -> VectorStoreIndexWrapper:
- """Asynchronously create a vectorstore index from a list of loaders.
+ """Asynchronously create a `VectorStore` index from a list of loaders.
Args:
loaders: A list of `BaseLoader` instances to load documents.
@@ -234,7 +234,7 @@ class VectorstoreIndexCreator(BaseModel):
return await self.afrom_documents(docs)
def from_documents(self, documents: list[Document]) -> VectorStoreIndexWrapper:
- """Create a vectorstore index from a list of documents.
+ """Create a `VectorStore` index from a list of documents.
Args:
documents: A list of `Document` objects.
@@ -254,7 +254,7 @@ class VectorstoreIndexCreator(BaseModel):
self,
documents: list[Document],
) -> VectorStoreIndexWrapper:
- """Asynchronously create a vectorstore index from a list of documents.
+ """Asynchronously create a `VectorStore` index from a list of documents.
Args:
documents: A list of `Document` objects.
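
A hedged sketch of the creator/wrapper flow described above; the loader, embeddings, and file path are placeholders, and `langchain-openai` is assumed to be installed:

```python
from langchain_community.document_loaders import TextLoader
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAI, OpenAIEmbeddings

from langchain_classic.indexes.vectorstore import VectorstoreIndexCreator

creator = VectorstoreIndexCreator(
    vectorstore_cls=InMemoryVectorStore,
    embedding=OpenAIEmbeddings(),
)
index = creator.from_loaders([TextLoader("my_notes.txt")])

model = OpenAI(temperature=0)  # query() raises if no llm is provided, per the code above
print(index.query("What are these notes about?", llm=model))
```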
diff --git a/libs/langchain/langchain_classic/memory/buffer.py b/libs/langchain/langchain_classic/memory/buffer.py
index c356b70da08..ab177fe3837 100644
--- a/libs/langchain/langchain_classic/memory/buffer.py
+++ b/libs/langchain/langchain_classic/memory/buffer.py
@@ -1,11 +1,11 @@
from typing import Any
from langchain_core._api import deprecated
-from langchain_core.memory import BaseMemory
from langchain_core.messages import BaseMessage, get_buffer_string
from langchain_core.utils import pre_init
from typing_extensions import override
+from langchain_classic.base_memory import BaseMemory
from langchain_classic.memory.chat_memory import BaseChatMemory
from langchain_classic.memory.utils import get_prompt_input_key
@@ -30,7 +30,7 @@ class ConversationBufferMemory(BaseChatMemory):
human_prefix: str = "Human"
ai_prefix: str = "AI"
- memory_key: str = "history" #: :meta private:
+ memory_key: str = "history"
@property
def buffer(self) -> Any:
@@ -73,10 +73,7 @@ class ConversationBufferMemory(BaseChatMemory):
@property
def memory_variables(self) -> list[str]:
- """Will always return list of memory variables.
-
- :meta private:
- """
+ """Will always return list of memory variables."""
return [self.memory_key]
@override
@@ -118,7 +115,7 @@ class ConversationStringBufferMemory(BaseMemory):
buffer: str = ""
output_key: str | None = None
input_key: str | None = None
- memory_key: str = "history" #: :meta private:
+ memory_key: str = "history"
@pre_init
def validate_chains(cls, values: dict) -> dict:
@@ -130,10 +127,7 @@ class ConversationStringBufferMemory(BaseMemory):
@property
def memory_variables(self) -> list[str]:
- """Will always return list of memory variables.
-
- :meta private:
- """
+ """Will always return list of memory variables."""
return [self.memory_key]
@override
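
A quick sketch of the `memory_key` / `memory_variables` behavior shown above:

```python
from langchain_classic.memory.buffer import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="history")
memory.save_context({"input": "Hi there"}, {"output": "Hello! How can I help?"})
print(memory.memory_variables)            # ["history"]
print(memory.load_memory_variables({}))   # {"history": "Human: Hi there\nAI: ..."}
```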
diff --git a/libs/langchain/langchain_classic/memory/buffer_window.py b/libs/langchain/langchain_classic/memory/buffer_window.py
index 264a836caa4..97e0d9cbb7c 100644
--- a/libs/langchain/langchain_classic/memory/buffer_window.py
+++ b/libs/langchain/langchain_classic/memory/buffer_window.py
@@ -24,7 +24,7 @@ class ConversationBufferWindowMemory(BaseChatMemory):
human_prefix: str = "Human"
ai_prefix: str = "AI"
- memory_key: str = "history" #: :meta private:
+ memory_key: str = "history"
k: int = 5
"""Number of messages to store in buffer."""
@@ -50,10 +50,7 @@ class ConversationBufferWindowMemory(BaseChatMemory):
@property
def memory_variables(self) -> list[str]:
- """Will always return list of memory variables.
-
- :meta private:
- """
+ """Will always return list of memory variables."""
return [self.memory_key]
@override
diff --git a/libs/langchain/langchain_classic/memory/chat_memory.py b/libs/langchain/langchain_classic/memory/chat_memory.py
index 5a86a78024e..c775c6ba317 100644
--- a/libs/langchain/langchain_classic/memory/chat_memory.py
+++ b/libs/langchain/langchain_classic/memory/chat_memory.py
@@ -7,10 +7,10 @@ from langchain_core.chat_history import (
BaseChatMessageHistory,
InMemoryChatMessageHistory,
)
-from langchain_core.memory import BaseMemory
from langchain_core.messages import AIMessage, HumanMessage
from pydantic import Field
+from langchain_classic.base_memory import BaseMemory
from langchain_classic.memory.utils import get_prompt_input_key
diff --git a/libs/langchain/langchain_classic/memory/combined.py b/libs/langchain/langchain_classic/memory/combined.py
index b19c97edec1..3a5781ce01a 100644
--- a/libs/langchain/langchain_classic/memory/combined.py
+++ b/libs/langchain/langchain_classic/memory/combined.py
@@ -1,9 +1,9 @@
import warnings
from typing import Any
-from langchain_core.memory import BaseMemory
from pydantic import field_validator
+from langchain_classic.base_memory import BaseMemory
from langchain_classic.memory.chat_memory import BaseChatMemory
diff --git a/libs/langchain/langchain_classic/memory/entity.py b/libs/langchain/langchain_classic/memory/entity.py
index 3b45140792d..be0679e4557 100644
--- a/libs/langchain/langchain_classic/memory/entity.py
+++ b/libs/langchain/langchain_classic/memory/entity.py
@@ -496,10 +496,7 @@ class ConversationEntityMemory(BaseChatMemory):
@property
def memory_variables(self) -> list[str]:
- """Will always return list of memory variables.
-
- :meta private:
- """
+ """Will always return list of memory variables."""
return ["entities", self.chat_history_key]
def load_memory_variables(self, inputs: dict[str, Any]) -> dict[str, Any]:
diff --git a/libs/langchain/langchain_classic/memory/readonly.py b/libs/langchain/langchain_classic/memory/readonly.py
index 42206123160..85ac1c626e1 100644
--- a/libs/langchain/langchain_classic/memory/readonly.py
+++ b/libs/langchain/langchain_classic/memory/readonly.py
@@ -1,6 +1,6 @@
from typing import Any
-from langchain_core.memory import BaseMemory
+from langchain_classic.base_memory import BaseMemory
class ReadOnlySharedMemory(BaseMemory):
diff --git a/libs/langchain/langchain_classic/memory/simple.py b/libs/langchain/langchain_classic/memory/simple.py
index 61fb2b27301..c9163c396f3 100644
--- a/libs/langchain/langchain_classic/memory/simple.py
+++ b/libs/langchain/langchain_classic/memory/simple.py
@@ -1,8 +1,9 @@
from typing import Any
-from langchain_core.memory import BaseMemory
from typing_extensions import override
+from langchain_classic.base_memory import BaseMemory
+
class SimpleMemory(BaseMemory):
"""Simple Memory.
diff --git a/libs/langchain/langchain_classic/memory/summary.py b/libs/langchain/langchain_classic/memory/summary.py
index 5b2ed54e56a..bd3d6e86895 100644
--- a/libs/langchain/langchain_classic/memory/summary.py
+++ b/libs/langchain/langchain_classic/memory/summary.py
@@ -97,7 +97,7 @@ class ConversationSummaryMemory(BaseChatMemory, SummarizerMixin):
"""
buffer: str = ""
- memory_key: str = "history" #: :meta private:
+ memory_key: str = "history"
@classmethod
def from_messages(
@@ -129,10 +129,7 @@ class ConversationSummaryMemory(BaseChatMemory, SummarizerMixin):
@property
def memory_variables(self) -> list[str]:
- """Will always return list of memory variables.
-
- :meta private:
- """
+ """Will always return list of memory variables."""
return [self.memory_key]
@override
diff --git a/libs/langchain/langchain_classic/memory/summary_buffer.py b/libs/langchain/langchain_classic/memory/summary_buffer.py
index fffcceb27bb..40c8ba168cf 100644
--- a/libs/langchain/langchain_classic/memory/summary_buffer.py
+++ b/libs/langchain/langchain_classic/memory/summary_buffer.py
@@ -41,10 +41,7 @@ class ConversationSummaryBufferMemory(BaseChatMemory, SummarizerMixin):
@property
def memory_variables(self) -> list[str]:
- """Will always return list of memory variables.
-
- :meta private:
- """
+ """Will always return list of memory variables."""
return [self.memory_key]
@override
diff --git a/libs/langchain/langchain_classic/memory/token_buffer.py b/libs/langchain/langchain_classic/memory/token_buffer.py
index caa0f78bcef..665da3c23dd 100644
--- a/libs/langchain/langchain_classic/memory/token_buffer.py
+++ b/libs/langchain/langchain_classic/memory/token_buffer.py
@@ -50,10 +50,7 @@ class ConversationTokenBufferMemory(BaseChatMemory):
@property
def memory_variables(self) -> list[str]:
- """Will always return list of memory variables.
-
- :meta private:
- """
+ """Will always return list of memory variables."""
return [self.memory_key]
@override
diff --git a/libs/langchain/langchain_classic/memory/vectorstore.py b/libs/langchain/langchain_classic/memory/vectorstore.py
index ce09cb3a3aa..fc6e07fe929 100644
--- a/libs/langchain/langchain_classic/memory/vectorstore.py
+++ b/libs/langchain/langchain_classic/memory/vectorstore.py
@@ -5,10 +5,10 @@ from typing import Any
from langchain_core._api import deprecated
from langchain_core.documents import Document
-from langchain_core.memory import BaseMemory
from langchain_core.vectorstores import VectorStoreRetriever
from pydantic import Field
+from langchain_classic.base_memory import BaseMemory
from langchain_classic.memory.utils import get_prompt_input_key
@@ -30,7 +30,7 @@ class VectorStoreRetrieverMemory(BaseMemory):
retriever: VectorStoreRetriever = Field(exclude=True)
"""VectorStoreRetriever object to connect to."""
- memory_key: str = "history" #: :meta private:
+ memory_key: str = "history"
"""Key name to locate the memories in the result of load_memory_variables."""
input_key: str | None = None
diff --git a/libs/langchain/langchain_classic/memory/vectorstore_token_buffer_memory.py b/libs/langchain/langchain_classic/memory/vectorstore_token_buffer_memory.py
index ede7673f9a5..dcf97ac4fe2 100644
--- a/libs/langchain/langchain_classic/memory/vectorstore_token_buffer_memory.py
+++ b/libs/langchain/langchain_classic/memory/vectorstore_token_buffer_memory.py
@@ -2,8 +2,8 @@
This implements a conversation memory in which the messages are stored in a memory
buffer up to a specified token limit. When the limit is exceeded, older messages are
-saved to a vectorstore backing database. The vectorstore can be made persistent across
-sessions.
+saved to a `VectorStore` backing database. The `VectorStore` can be made persistent
+across sessions.
"""
import warnings
@@ -53,13 +53,13 @@ class ConversationVectorStoreTokenBufferMemory(ConversationTokenBufferMemory):
accepts the following additional arguments
retriever: (required) A VectorStoreRetriever object to use
- as the vector backing store
+ as the vector backing store
split_chunk_size: (optional, 1000) Token chunk split size
- for long messages generated by the AI
+ for long messages generated by the AI
previous_history_template: (optional) Template used to format
- the contents of the prompt history
+ the contents of the prompt history
Example using ChromaDB:
@@ -157,7 +157,7 @@ class ConversationVectorStoreTokenBufferMemory(ConversationTokenBufferMemory):
def save_remainder(self) -> None:
"""Save the remainder of the conversation buffer to the vector store.
- This is useful if you have made the vectorstore persistent, in which
+ Useful if you have made the VectorStore persistent, in which
case this can be called before the end of the session to store the
remainder of the conversation.
"""
diff --git a/libs/langchain/langchain_classic/model_laboratory.py b/libs/langchain/langchain_classic/model_laboratory.py
index 81f903a51be..31700b5bafc 100644
--- a/libs/langchain/langchain_classic/model_laboratory.py
+++ b/libs/langchain/langchain_classic/model_laboratory.py
@@ -21,8 +21,8 @@ class ModelLaboratory:
Args:
chains: A sequence of chains to experiment with.
Each chain must have exactly one input and one output variable.
- names (list[str] | None): Optional list of names corresponding to each
- chain. If provided, its length must match the number of chains.
+ names: Optional list of names corresponding to each chain.
+ If provided, its length must match the number of chains.
Raises:
@@ -72,7 +72,7 @@ class ModelLaboratory:
If provided, the prompt must contain exactly one input variable.
Returns:
- ModelLaboratory: An instance of `ModelLaboratory` initialized with LLMs.
+ An instance of `ModelLaboratory` initialized with LLMs.
"""
if prompt is None:
prompt = PromptTemplate(input_variables=["_input"], template="{_input}")
diff --git a/libs/langchain/langchain_classic/output_parsers/regex_dict.py b/libs/langchain/langchain_classic/output_parsers/regex_dict.py
index 5bcafb37156..f0052958c33 100644
--- a/libs/langchain/langchain_classic/output_parsers/regex_dict.py
+++ b/libs/langchain/langchain_classic/output_parsers/regex_dict.py
@@ -8,7 +8,7 @@ from langchain_core.output_parsers import BaseOutputParser
class RegexDictParser(BaseOutputParser[dict[str, str]]):
"""Parse the output of an LLM call into a Dictionary using a regex."""
- regex_pattern: str = r"{}:\s?([^.'\n']*)\.?" # : :meta private:
+ regex_pattern: str = r"{}:\s?([^.'\n']*)\.?"
"""The regex pattern to use to parse the output."""
output_key_to_format: dict[str, str]
"""The keys to use for the output."""
diff --git a/libs/langchain/langchain_classic/output_parsers/structured.py b/libs/langchain/langchain_classic/output_parsers/structured.py
index b5efae9060c..346b0ff9f19 100644
--- a/libs/langchain/langchain_classic/output_parsers/structured.py
+++ b/libs/langchain/langchain_classic/output_parsers/structured.py
@@ -96,8 +96,8 @@ class StructuredOutputParser(BaseOutputParser[dict[str, Any]]):
# ```
Args:
- only_json (bool): If `True`, only the json in the Markdown code snippet
- will be returned, without the introducing text. Defaults to `False`.
+ only_json: If `True`, only the json in the Markdown code snippet
+ will be returned, without the introducing text.
"""
schema_str = "\n".join(
[_get_sub_string(schema) for schema in self.response_schemas],
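
A hedged sketch of the `only_json` flag documented above:

```python
from langchain_classic.output_parsers.structured import (
    ResponseSchema,
    StructuredOutputParser,
)

parser = StructuredOutputParser.from_response_schemas(
    [ResponseSchema(name="answer", description="The answer to the user's question")],
)
# With only_json=True, only the fenced JSON snippet is returned, without the
# introductory text.
print(parser.get_format_instructions(only_json=True))
```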
diff --git a/libs/langchain/langchain_classic/output_parsers/yaml.py b/libs/langchain/langchain_classic/output_parsers/yaml.py
index d6c7bfebae9..b286cf5c875 100644
--- a/libs/langchain/langchain_classic/output_parsers/yaml.py
+++ b/libs/langchain/langchain_classic/output_parsers/yaml.py
@@ -16,10 +16,10 @@ T = TypeVar("T", bound=BaseModel)
class YamlOutputParser(BaseOutputParser[T]):
- """Parse YAML output using a pydantic model."""
+ """Parse YAML output using a Pydantic model."""
pydantic_object: type[T]
- """The pydantic model to parse."""
+ """The Pydantic model to parse."""
pattern: re.Pattern = re.compile(
r"^```(?:ya?ml)?(?P[^`]*)",
re.MULTILINE | re.DOTALL,
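
A hedged sketch (not from the diff) of parsing YAML output into a Pydantic model with this parser:

```python
from pydantic import BaseModel

from langchain_classic.output_parsers.yaml import YamlOutputParser

class Joke(BaseModel):
    setup: str
    punchline: str

parser = YamlOutputParser(pydantic_object=Joke)
text = "setup: Why did the chicken cross the road?\npunchline: To get to the other side."
print(parser.parse(text))  # -> Joke(setup=..., punchline=...)
```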
diff --git a/libs/langchain/langchain_classic/pydantic_v1/__init__.py b/libs/langchain/langchain_classic/pydantic_v1/__init__.py
deleted file mode 100644
index 8fceb791cc2..00000000000
--- a/libs/langchain/langchain_classic/pydantic_v1/__init__.py
+++ /dev/null
@@ -1,38 +0,0 @@
-from importlib import metadata
-
-from langchain_core._api import warn_deprecated
-
-## Create namespaces for pydantic v1 and v2.
-# This code must stay at the top of the file before other modules may
-# attempt to import pydantic since it adds pydantic_v1 and pydantic_v2 to sys.modules.
-#
-# This hack is done for the following reasons:
-# * LangChain will attempt to remain compatible with both pydantic v1 and v2 since
-# both dependencies and dependents may be stuck on either version of v1 or v2.
-# * Creating namespaces for pydantic v1 and v2 should allow us to write code that
-# unambiguously uses either v1 or v2 API.
-# * This change is easier to roll out and roll back.
-from pydantic.v1 import * # noqa: F403
-
-try:
- _PYDANTIC_MAJOR_VERSION: int = int(metadata.version("pydantic").split(".")[0])
-except metadata.PackageNotFoundError:
- _PYDANTIC_MAJOR_VERSION = 0
-
-warn_deprecated(
- "0.3.0",
- removal="1.0.0",
- alternative="pydantic.v1 or pydantic",
- message=(
- "As of langchain-core 0.3.0, LangChain uses pydantic v2 internally. "
- "The langchain.pydantic_v1 module was a "
- "compatibility shim for pydantic v1, and should no longer be used. "
- "Please update the code to import from Pydantic directly.\n\n"
- "For example, replace imports like: "
- "`from langchain_classic.pydantic_v1 import BaseModel`\n"
- "with: `from pydantic import BaseModel`\n"
- "or the v1 compatibility namespace if you are working in a code base "
- "that has not been fully upgraded to pydantic 2 yet. "
- "\tfrom pydantic.v1 import BaseModel\n"
- ),
-)
diff --git a/libs/langchain/langchain_classic/pydantic_v1/dataclasses.py b/libs/langchain/langchain_classic/pydantic_v1/dataclasses.py
deleted file mode 100644
index 78788016bb7..00000000000
--- a/libs/langchain/langchain_classic/pydantic_v1/dataclasses.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from langchain_core._api import warn_deprecated
-from pydantic.v1.dataclasses import * # noqa: F403
-
-warn_deprecated(
- "0.3.0",
- removal="1.0.0",
- alternative="pydantic.v1 or pydantic",
- message=(
- "As of langchain-core 0.3.0, LangChain uses pydantic v2 internally. "
- "The langchain.pydantic_v1 module was a "
- "compatibility shim for pydantic v1, and should no longer be used. "
- "Please update the code to import from Pydantic directly.\n\n"
- "For example, replace imports like: "
- "`from langchain_classic.pydantic_v1 import BaseModel`\n"
- "with: `from pydantic import BaseModel`\n"
- "or the v1 compatibility namespace if you are working in a code base "
- "that has not been fully upgraded to pydantic 2 yet. "
- "\tfrom pydantic.v1 import BaseModel\n"
- ),
-)
diff --git a/libs/langchain/langchain_classic/pydantic_v1/main.py b/libs/langchain/langchain_classic/pydantic_v1/main.py
deleted file mode 100644
index 1895c08fb69..00000000000
--- a/libs/langchain/langchain_classic/pydantic_v1/main.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from langchain_core._api import warn_deprecated
-from pydantic.v1.main import * # noqa: F403
-
-warn_deprecated(
- "0.3.0",
- removal="1.0.0",
- alternative="pydantic.v1 or pydantic",
- message=(
- "As of langchain-core 0.3.0, LangChain uses pydantic v2 internally. "
- "The langchain.pydantic_v1 module was a "
- "compatibility shim for pydantic v1, and should no longer be used. "
- "Please update the code to import from Pydantic directly.\n\n"
- "For example, replace imports like: "
- "`from langchain_classic.pydantic_v1 import BaseModel`\n"
- "with: `from pydantic import BaseModel`\n"
- "or the v1 compatibility namespace if you are working in a code base "
- "that has not been fully upgraded to pydantic 2 yet. "
- "\tfrom pydantic.v1 import BaseModel\n"
- ),
-)
diff --git a/libs/langchain/langchain_classic/retrievers/document_compressors/cohere_rerank.py b/libs/langchain/langchain_classic/retrievers/document_compressors/cohere_rerank.py
index 8930f052c15..9ceb69d8b6f 100644
--- a/libs/langchain/langchain_classic/retrievers/document_compressors/cohere_rerank.py
+++ b/libs/langchain/langchain_classic/retrievers/document_compressors/cohere_rerank.py
@@ -75,7 +75,6 @@ class CohereRerank(BaseDocumentCompressor):
documents: A sequence of documents to rerank.
model: The model to use for re-ranking. Default to self.model.
top_n : The number of results to return. If `None` returns all results.
- Defaults to self.top_n.
max_chunks_per_doc : The maximum number of chunks derived from a document.
""" # noqa: E501
if len(documents) == 0: # to avoid empty api call
diff --git a/libs/langchain/langchain_classic/retrievers/document_compressors/embeddings_filter.py b/libs/langchain/langchain_classic/retrievers/document_compressors/embeddings_filter.py
index 3bb99c00946..885596ddae3 100644
--- a/libs/langchain/langchain_classic/retrievers/document_compressors/embeddings_filter.py
+++ b/libs/langchain/langchain_classic/retrievers/document_compressors/embeddings_filter.py
@@ -33,8 +33,8 @@ class EmbeddingsFilter(BaseDocumentCompressor):
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity."""
k: int | None = 20
- """The number of relevant documents to return. Can be set to None, in which case
- `similarity_threshold` must be specified. Defaults to 20."""
+ """The number of relevant documents to return. Can be set to `None`, in which case
+ `similarity_threshold` must be specified."""
similarity_threshold: float | None = None
"""Threshold for determining when two documents are similar enough
to be considered redundant. Defaults to `None`, must be specified if `k` is set
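
A hedged sketch: per the docstring above, either `k` or `similarity_threshold` must be set; the embeddings class is a stand-in and assumes `langchain-openai`:

```python
from langchain_openai import OpenAIEmbeddings

from langchain_classic.retrievers.document_compressors.embeddings_filter import (
    EmbeddingsFilter,
)

embeddings_filter = EmbeddingsFilter(
    embeddings=OpenAIEmbeddings(),
    k=None,                     # return everything above the threshold...
    similarity_threshold=0.76,  # ...rather than a fixed top-k
)
```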
diff --git a/libs/langchain/langchain_classic/retrievers/ensemble.py b/libs/langchain/langchain_classic/retrievers/ensemble.py
index bdce26c6e2e..2e0ea312f15 100644
--- a/libs/langchain/langchain_classic/retrievers/ensemble.py
+++ b/libs/langchain/langchain_classic/retrievers/ensemble.py
@@ -61,7 +61,6 @@ class EnsembleRetriever(BaseRetriever):
weighting for all retrievers.
c: A constant added to the rank, controlling the balance between the importance
of high-ranked items and the consideration given to lower-ranked items.
- Default is 60.
id_key: The key in the document's metadata used to determine unique documents.
If not specified, page_content is used.
"""
@@ -299,8 +298,8 @@ class EnsembleRetriever(BaseRetriever):
doc_lists: A list of rank lists, where each rank list contains unique items.
Returns:
- list: The final aggregated list of items sorted by their weighted RRF
- scores in descending order.
+ The final aggregated list of items sorted by their weighted RRF
+ scores in descending order.
"""
if len(doc_lists) != len(self.weights):
msg = "Number of rank lists must be equal to the number of weights."
diff --git a/libs/langchain/langchain_classic/retrievers/multi_vector.py b/libs/langchain/langchain_classic/retrievers/multi_vector.py
index d603424f7a3..1a569b7e13b 100644
--- a/libs/langchain/langchain_classic/retrievers/multi_vector.py
+++ b/libs/langchain/langchain_classic/retrievers/multi_vector.py
@@ -30,7 +30,7 @@ class MultiVectorRetriever(BaseRetriever):
"""Retrieve from a set of multiple embeddings for the same document."""
vectorstore: VectorStore
- """The underlying vectorstore to use to store small chunks
+ """The underlying `VectorStore` to use to store small chunks
and their embedding vectors"""
byte_store: ByteStore | None = None
"""The lower-level backing storage layer for the parent documents"""
@@ -86,7 +86,7 @@ class MultiVectorRetriever(BaseRetriever):
else:
sub_docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
- # We do this to maintain the order of the ids that are returned
+ # We do this to maintain the order of the IDs that are returned
ids = []
for d in sub_docs:
if self.id_key in d.metadata and d.metadata[self.id_key] not in ids:
@@ -128,7 +128,7 @@ class MultiVectorRetriever(BaseRetriever):
**self.search_kwargs,
)
- # We do this to maintain the order of the ids that are returned
+ # We do this to maintain the order of the IDs that are returned
ids = []
for d in sub_docs:
if self.id_key in d.metadata and d.metadata[self.id_key] not in ids:
diff --git a/libs/langchain/langchain_classic/retrievers/parent_document_retriever.py b/libs/langchain/langchain_classic/retrievers/parent_document_retriever.py
index 9d4e47d2bae..635f320f21f 100644
--- a/libs/langchain/langchain_classic/retrievers/parent_document_retriever.py
+++ b/libs/langchain/langchain_classic/retrievers/parent_document_retriever.py
@@ -21,7 +21,7 @@ class ParentDocumentRetriever(MultiVectorRetriever):
The ParentDocumentRetriever strikes that balance by splitting and storing
small chunks of data. During retrieval, it first fetches the small chunks
- but then looks up the parent ids for those chunks and returns those larger
+ but then looks up the parent IDs for those chunks and returns those larger
documents.
Note that "parent document" refers to the document that a small chunk
@@ -44,7 +44,7 @@ class ParentDocumentRetriever(MultiVectorRetriever):
child_splitter = RecursiveCharacterTextSplitter(
chunk_size=400, add_start_index=True
)
- # The vectorstore to use to index the child chunks
+ # The VectorStore to use to index the child chunks
vectorstore = Chroma(embedding_function=OpenAIEmbeddings())
# The storage layer for the parent documents
store = InMemoryStore()
@@ -85,7 +85,7 @@ class ParentDocumentRetriever(MultiVectorRetriever):
if ids is None:
doc_ids = [str(uuid.uuid4()) for _ in documents]
if not add_to_docstore:
- msg = "If ids are not passed in, `add_to_docstore` MUST be True"
+ msg = "If IDs are not passed in, `add_to_docstore` MUST be True"
raise ValueError(msg)
else:
if len(documents) != len(ids):
@@ -124,16 +124,16 @@ class ParentDocumentRetriever(MultiVectorRetriever):
Args:
documents: List of documents to add
- ids: Optional list of ids for documents. If provided should be the same
+ ids: Optional list of IDs for documents. If provided should be the same
length as the list of documents. Can be provided if parent documents
are already in the document store and you don't want to re-add
to the docstore. If not provided, random UUIDs will be used as
- ids.
+ IDs.
add_to_docstore: Boolean of whether to add documents to docstore.
This can be false if and only if `ids` are provided. You may want
to set this to False if the documents are already in the docstore
and you don't want to re-add them.
- **kwargs: additional keyword arguments passed to the vectorstore.
+ **kwargs: additional keyword arguments passed to the `VectorStore`.
"""
docs, full_docs = self._split_docs_for_adding(
documents,
@@ -155,16 +155,16 @@ class ParentDocumentRetriever(MultiVectorRetriever):
Args:
documents: List of documents to add
- ids: Optional list of ids for documents. If provided should be the same
+ ids: Optional list of IDs for documents. If provided should be the same
length as the list of documents. Can be provided if parent documents
are already in the document store and you don't want to re-add
to the docstore. If not provided, random UUIDs will be used as
- ids.
+            IDs.
add_to_docstore: Boolean of whether to add documents to docstore.
This can be false if and only if `ids` are provided. You may want
to set this to False if the documents are already in the docstore
and you don't want to re-add them.
- **kwargs: additional keyword arguments passed to the vectorstore.
+ **kwargs: additional keyword arguments passed to the `VectorStore`.
"""
docs, full_docs = self._split_docs_for_adding(
documents,
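
A hedged sketch mirroring the class docstring example earlier in this hunk; the in-memory vector store, embeddings, and import path are stand-ins for your own choices:

```python
from langchain_core.documents import Document
from langchain_core.stores import InMemoryStore
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

from langchain_classic.retrievers import ParentDocumentRetriever

retriever = ParentDocumentRetriever(
    vectorstore=InMemoryVectorStore(embedding=OpenAIEmbeddings()),
    docstore=InMemoryStore(),
    child_splitter=RecursiveCharacterTextSplitter(chunk_size=400, add_start_index=True),
)
docs = [Document(page_content="LangChain helps developers build LLM applications.")]
# With ids=None and add_to_docstore=True, random UUIDs are used as the IDs.
retriever.add_documents(docs, ids=None, add_to_docstore=True)
```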
diff --git a/libs/langchain/langchain_classic/retrievers/self_query/base.py b/libs/langchain/langchain_classic/retrievers/self_query/base.py
index 624143dd188..152a7de6949 100644
--- a/libs/langchain/langchain_classic/retrievers/self_query/base.py
+++ b/libs/langchain/langchain_classic/retrievers/self_query/base.py
@@ -251,7 +251,7 @@ class SelfQueryRetriever(BaseRetriever):
search_kwargs: dict = Field(default_factory=dict)
"""Keyword arguments to pass in to the vector store search."""
structured_query_translator: Visitor
- """Translator for turning internal query language into vectorstore search params."""
+ """Translator for turning internal query language into `VectorStore` search params.""" # noqa: E501
verbose: bool = False
use_original_query: bool = False
@@ -360,7 +360,7 @@ class SelfQueryRetriever(BaseRetriever):
queried.
metadata_field_info: Metadata field information for the documents.
structured_query_translator: Optional translator for turning internal query
- language into vectorstore search params.
+ language into `VectorStore` search params.
chain_kwargs: Additional keyword arguments for the query constructor.
enable_limit: Whether to enable the limit operator.
use_original_query: Whether to use the original query instead of the revised
diff --git a/libs/langchain/langchain_classic/retrievers/time_weighted_retriever.py b/libs/langchain/langchain_classic/retrievers/time_weighted_retriever.py
index 9823318baa8..7cba23e4bd3 100644
--- a/libs/langchain/langchain_classic/retrievers/time_weighted_retriever.py
+++ b/libs/langchain/langchain_classic/retrievers/time_weighted_retriever.py
@@ -25,17 +25,17 @@ class TimeWeightedVectorStoreRetriever(BaseRetriever):
"""
vectorstore: VectorStore
- """The vectorstore to store documents and determine salience."""
+ """The `VectorStore` to store documents and determine salience."""
search_kwargs: dict = Field(default_factory=lambda: {"k": 100})
- """Keyword arguments to pass to the vectorstore similarity search."""
+ """Keyword arguments to pass to the `VectorStore` similarity search."""
# TODO: abstract as a queue
memory_stream: list[Document] = Field(default_factory=list)
"""The memory_stream of documents to search through."""
decay_rate: float = Field(default=0.01)
- """The exponential decay factor used as (1.0-decay_rate)**(hrs_passed)."""
+ """The exponential decay factor used as `(1.0-decay_rate)**(hrs_passed)`."""
k: int = 4
"""The maximum number of documents to retrieve in a given call."""
diff --git a/libs/langchain/langchain_classic/runnables/hub.py b/libs/langchain/langchain_classic/runnables/hub.py
index 9d0bdbf56c0..4c6a358fce9 100644
--- a/libs/langchain/langchain_classic/runnables/hub.py
+++ b/libs/langchain/langchain_classic/runnables/hub.py
@@ -17,7 +17,7 @@ class HubRunnable(RunnableBindingBase[Input, Output]): # type: ignore[no-redef]
api_key: str | None = None,
**kwargs: Any,
) -> None:
- """Initialize the HubRunnable.
+ """Initialize the `HubRunnable`.
Args:
owner_repo_commit: The full name of the prompt to pull from in the format of
diff --git a/libs/langchain/langchain_classic/runnables/openai_functions.py b/libs/langchain/langchain_classic/runnables/openai_functions.py
index 38434559a98..29f2194c855 100644
--- a/libs/langchain/langchain_classic/runnables/openai_functions.py
+++ b/libs/langchain/langchain_classic/runnables/openai_functions.py
@@ -10,7 +10,7 @@ from typing_extensions import TypedDict
class OpenAIFunction(TypedDict):
- """A function description for ChatOpenAI."""
+ """A function description for `ChatOpenAI`."""
name: str
"""The name of the function."""
@@ -33,7 +33,7 @@ class OpenAIFunctionsRouter(RunnableBindingBase[BaseMessage, Any]): # type: ign
],
functions: list[OpenAIFunction] | None = None,
):
- """Initialize the OpenAIFunctionsRouter.
+ """Initialize the `OpenAIFunctionsRouter`.
Args:
runnables: A mapping of function names to runnables.
diff --git a/libs/langchain/langchain_classic/schema/__init__.py b/libs/langchain/langchain_classic/schema/__init__.py
index 6c8c4f61a0a..c957fe8b9a5 100644
--- a/libs/langchain/langchain_classic/schema/__init__.py
+++ b/libs/langchain/langchain_classic/schema/__init__.py
@@ -5,7 +5,6 @@ from langchain_core.caches import BaseCache
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.documents import BaseDocumentTransformer, Document
from langchain_core.exceptions import LangChainException, OutputParserException
-from langchain_core.memory import BaseMemory
from langchain_core.messages import (
AIMessage,
BaseMessage,
@@ -36,6 +35,8 @@ from langchain_core.prompts import BasePromptTemplate, format_document
from langchain_core.retrievers import BaseRetriever
from langchain_core.stores import BaseStore
+from langchain_classic.base_memory import BaseMemory
+
RUN_KEY = "__run"
# Backwards compatibility.
diff --git a/libs/langchain/langchain_classic/schema/callbacks/tracers/schemas.py b/libs/langchain/langchain_classic/schema/callbacks/tracers/schemas.py
index e8f34027d34..32e6b2e4f13 100644
--- a/libs/langchain/langchain_classic/schema/callbacks/tracers/schemas.py
+++ b/libs/langchain/langchain_classic/schema/callbacks/tracers/schemas.py
@@ -1,27 +1,5 @@
-from langchain_core.tracers.schemas import (
- BaseRun,
- ChainRun,
- LLMRun,
- Run,
- RunTypeEnum,
- ToolRun,
- TracerSession,
- TracerSessionBase,
- TracerSessionV1,
- TracerSessionV1Base,
- TracerSessionV1Create,
-)
+from langchain_core.tracers.schemas import Run
__all__ = [
- "BaseRun",
- "ChainRun",
- "LLMRun",
"Run",
- "RunTypeEnum",
- "ToolRun",
- "TracerSession",
- "TracerSessionBase",
- "TracerSessionV1",
- "TracerSessionV1Base",
- "TracerSessionV1Create",
]
diff --git a/libs/langchain/langchain_classic/schema/memory.py b/libs/langchain/langchain_classic/schema/memory.py
index d2f3d73e613..238d3283936 100644
--- a/libs/langchain/langchain_classic/schema/memory.py
+++ b/libs/langchain/langchain_classic/schema/memory.py
@@ -1,3 +1,3 @@
-from langchain_core.memory import BaseMemory
+from langchain_classic.base_memory import BaseMemory
__all__ = ["BaseMemory"]
diff --git a/libs/langchain/langchain_classic/smith/__init__.py b/libs/langchain/langchain_classic/smith/__init__.py
index 4f06ae3af34..a744151182c 100644
--- a/libs/langchain/langchain_classic/smith/__init__.py
+++ b/libs/langchain/langchain_classic/smith/__init__.py
@@ -1,9 +1,7 @@
"""**LangSmith** utilities.
This module provides utilities for connecting to
-[LangSmith](https://smith.langchain.com/).
-For more information on LangSmith,
-see the [LangSmith documentation](https://docs.smith.langchain.com/).
+[LangSmith](https://docs.langchain.com/langsmith/home).
**Evaluation**
@@ -22,8 +20,8 @@ from langchain_classic.smith import RunEvalConfig, run_on_dataset
# Chains may have memory. Passing in a constructor function lets the
# evaluation framework avoid cross-contamination between runs.
def construct_chain():
- llm = ChatOpenAI(temperature=0)
- chain = LLMChain.from_string(llm, "What's the answer to {your_input_key}")
+ model = ChatOpenAI(temperature=0)
+ chain = LLMChain.from_string(model, "What's the answer to {your_input_key}")
return chain
diff --git a/libs/langchain/langchain_classic/smith/evaluation/__init__.py b/libs/langchain/langchain_classic/smith/evaluation/__init__.py
index 78e7fc70ab5..755b3c14d17 100644
--- a/libs/langchain/langchain_classic/smith/evaluation/__init__.py
+++ b/libs/langchain/langchain_classic/smith/evaluation/__init__.py
@@ -4,7 +4,7 @@ This module provides utilities for evaluating Chains and other language model
applications using LangChain evaluators and LangSmith.
For more information on the LangSmith API, see the
-[LangSmith API documentation](https://docs.smith.langchain.com/docs/).
+[LangSmith API documentation](https://docs.langchain.com/langsmith/home).
**Example**
@@ -16,8 +16,8 @@ from langchain_classic.smith import EvaluatorType, RunEvalConfig, run_on_dataset
def construct_chain():
- llm = ChatOpenAI(temperature=0)
- chain = LLMChain.from_string(llm, "What's the answer to {your_input_key}")
+ model = ChatOpenAI(temperature=0)
+ chain = LLMChain.from_string(model, "What's the answer to {your_input_key}")
return chain
diff --git a/libs/langchain/langchain_classic/smith/evaluation/config.py b/libs/langchain/langchain_classic/smith/evaluation/config.py
index 6f3063ce6f2..6d1df6232b3 100644
--- a/libs/langchain/langchain_classic/smith/evaluation/config.py
+++ b/libs/langchain/langchain_classic/smith/evaluation/config.py
@@ -34,28 +34,17 @@ BATCH_EVALUATOR_LIKE = Callable[
class EvalConfig(BaseModel):
"""Configuration for a given run evaluator.
- Parameters
- ----------
- evaluator_type : EvaluatorType
- The type of evaluator to use.
-
- Methods:
- -------
- get_kwargs()
- Get the keyword arguments for the evaluator configuration.
-
+ Attributes:
+ evaluator_type: The type of evaluator to use.
"""
evaluator_type: EvaluatorType
def get_kwargs(self) -> dict[str, Any]:
- """Get the keyword arguments for the load_evaluator call.
+ """Get the keyword arguments for the `load_evaluator` call.
Returns:
- -------
- Dict[str, Any]
- The keyword arguments for the load_evaluator call.
-
+ The keyword arguments for the `load_evaluator` call.
"""
kwargs = {}
for field, val in self:
@@ -110,7 +99,7 @@ class RunEvalConfig(BaseModel):
batch_evaluators: list[BATCH_EVALUATOR_LIKE] | None = None
"""Evaluators that run on an aggregate/batch level.
- These generate 1 or more metrics that are assigned to the full test run.
+ These generate one or more metrics that are assigned to the full test run.
As a result, they are not associated with individual traces.
"""
@@ -134,13 +123,9 @@ class RunEvalConfig(BaseModel):
class Criteria(SingleKeyEvalConfig):
"""Configuration for a reference-free criteria evaluator.
- Parameters
- ----------
- criteria : CRITERIA_TYPE | None
- The criteria to evaluate.
- llm : BaseLanguageModel | None
- The language model to use for the evaluation chain.
-
+ Attributes:
+ criteria: The criteria to evaluate.
+ llm: The language model to use for the evaluation chain.
"""
criteria: CRITERIA_TYPE | None = None
@@ -150,12 +135,9 @@ class RunEvalConfig(BaseModel):
class LabeledCriteria(SingleKeyEvalConfig):
"""Configuration for a labeled (with references) criteria evaluator.
- Parameters
- ----------
- criteria : CRITERIA_TYPE | None
- The criteria to evaluate.
- llm : BaseLanguageModel | None
- The language model to use for the evaluation chain.
+ Attributes:
+ criteria: The criteria to evaluate.
+ llm: The language model to use for the evaluation chain.
"""
criteria: CRITERIA_TYPE | None = None
@@ -165,14 +147,9 @@ class RunEvalConfig(BaseModel):
class EmbeddingDistance(SingleKeyEvalConfig):
"""Configuration for an embedding distance evaluator.
- Parameters
- ----------
- embeddings : Optional[Embeddings]
- The embeddings to use for computing the distance.
-
- distance_metric : Optional[EmbeddingDistanceEnum]
- The distance metric to use for computing the distance.
-
+ Attributes:
+ embeddings: The embeddings to use for computing the distance.
+ distance_metric: The distance metric to use for computing the distance.
"""
evaluator_type: EvaluatorType = EvaluatorType.EMBEDDING_DISTANCE
@@ -186,34 +163,23 @@ class RunEvalConfig(BaseModel):
class StringDistance(SingleKeyEvalConfig):
"""Configuration for a string distance evaluator.
- Parameters
- ----------
- distance : Optional[StringDistanceEnum]
- The string distance metric to use.
-
+ Attributes:
+ distance: The string distance metric to use (`damerau_levenshtein`,
+ `levenshtein`, `jaro`, or `jaro_winkler`).
+ normalize_score: Whether to normalize the distance to between 0 and 1.
+ Applies only to the Levenshtein and Damerau-Levenshtein distances.
"""
evaluator_type: EvaluatorType = EvaluatorType.STRING_DISTANCE
distance: StringDistanceEnum | None = None
- """The string distance metric to use.
- damerau_levenshtein: The Damerau-Levenshtein distance.
- levenshtein: The Levenshtein distance.
- jaro: The Jaro distance.
- jaro_winkler: The Jaro-Winkler distance.
- """
normalize_score: bool = True
- """Whether to normalize the distance to between 0 and 1.
- Applies only to the Levenshtein and Damerau-Levenshtein distances."""
class QA(SingleKeyEvalConfig):
"""Configuration for a QA evaluator.
- Parameters
- ----------
- prompt : Optional[BasePromptTemplate]
- The prompt template to use for generating the question.
- llm : BaseLanguageModel | None
- The language model to use for the evaluation chain.
+ Attributes:
+ prompt: The prompt template to use for generating the question.
+ llm: The language model to use for the evaluation chain.
"""
evaluator_type: EvaluatorType = EvaluatorType.QA
@@ -223,13 +189,9 @@ class RunEvalConfig(BaseModel):
class ContextQA(SingleKeyEvalConfig):
"""Configuration for a context-based QA evaluator.
- Parameters
- ----------
- prompt : Optional[BasePromptTemplate]
- The prompt template to use for generating the question.
- llm : BaseLanguageModel | None
- The language model to use for the evaluation chain.
-
+ Attributes:
+ prompt: The prompt template to use for generating the question.
+ llm: The language model to use for the evaluation chain.
"""
evaluator_type: EvaluatorType = EvaluatorType.CONTEXT_QA
@@ -239,13 +201,9 @@ class RunEvalConfig(BaseModel):
class CoTQA(SingleKeyEvalConfig):
"""Configuration for a context-based QA evaluator.
- Parameters
- ----------
- prompt : Optional[BasePromptTemplate]
- The prompt template to use for generating the question.
- llm : BaseLanguageModel | None
- The language model to use for the evaluation chain.
-
+ Attributes:
+ prompt: The prompt template to use for generating the question.
+ llm: The language model to use for the evaluation chain.
"""
evaluator_type: EvaluatorType = EvaluatorType.CONTEXT_QA
@@ -253,34 +211,22 @@ class RunEvalConfig(BaseModel):
prompt: BasePromptTemplate | None = None
class JsonValidity(SingleKeyEvalConfig):
- """Configuration for a json validity evaluator.
-
- Parameters
- ----------
- """
+ """Configuration for a json validity evaluator."""
evaluator_type: EvaluatorType = EvaluatorType.JSON_VALIDITY
class JsonEqualityEvaluator(EvalConfig):
- """Configuration for a json equality evaluator.
-
- Parameters
- ----------
- """
+ """Configuration for a json equality evaluator."""
evaluator_type: EvaluatorType = EvaluatorType.JSON_EQUALITY
class ExactMatch(SingleKeyEvalConfig):
"""Configuration for an exact match string evaluator.
- Parameters
- ----------
- ignore_case : bool
- Whether to ignore case when comparing strings.
- ignore_punctuation : bool
- Whether to ignore punctuation when comparing strings.
- ignore_numbers : bool
- Whether to ignore numbers when comparing strings.
+ Attributes:
+ ignore_case: Whether to ignore case when comparing strings.
+ ignore_punctuation: Whether to ignore punctuation when comparing strings.
+ ignore_numbers: Whether to ignore numbers when comparing strings.
"""
evaluator_type: EvaluatorType = EvaluatorType.EXACT_MATCH
@@ -291,10 +237,8 @@ class RunEvalConfig(BaseModel):
class RegexMatch(SingleKeyEvalConfig):
"""Configuration for a regex match string evaluator.
- Parameters
- ----------
- flags : int
- The flags to pass to the regex. Example: re.IGNORECASE.
+ Attributes:
+ flags: The flags to pass to the regex. Example: `re.IGNORECASE`.
"""
evaluator_type: EvaluatorType = EvaluatorType.REGEX_MATCH
@@ -309,17 +253,12 @@ class RunEvalConfig(BaseModel):
It is recommended to normalize these scores
by setting `normalize_by` to 10.
- Parameters
- ----------
- criteria : CRITERIA_TYPE | None
- The criteria to evaluate.
- llm : BaseLanguageModel | None
- The language model to use for the evaluation chain.
- normalize_by: int | None = None
- If you want to normalize the score, the denominator to use.
- If not provided, the score will be between 1 and 10 (by default).
- prompt : Optional[BasePromptTemplate]
-
+ Attributes:
+ criteria: The criteria to evaluate.
+ llm: The language model to use for the evaluation chain.
+ normalize_by: If you want to normalize the score, the denominator to use.
+ If not provided, the score will be between 1 and 10.
+ prompt: The prompt template to use for evaluation.
"""
evaluator_type: EvaluatorType = EvaluatorType.SCORE_STRING
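The evaluator configs rewritten above are typically combined into a single `RunEvalConfig`. A hedged sketch mirroring the docstring examples in this module (the criteria string and flag values are just example inputs):

```python
from langchain_classic.smith import RunEvalConfig

evaluation_config = RunEvalConfig(
    evaluators=[
        # Reference-free criteria evaluator documented above.
        RunEvalConfig.Criteria(criteria="helpfulness"),
        # Exact-match evaluator, ignoring case and punctuation.
        RunEvalConfig.ExactMatch(ignore_case=True, ignore_punctuation=True),
    ],
)
```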
diff --git a/libs/langchain/langchain_classic/smith/evaluation/progress.py b/libs/langchain/langchain_classic/smith/evaluation/progress.py
index 42862e3d1af..d0d2e750aa6 100644
--- a/libs/langchain/langchain_classic/smith/evaluation/progress.py
+++ b/libs/langchain/langchain_classic/smith/evaluation/progress.py
@@ -23,10 +23,10 @@ class ProgressBarCallback(base_callbacks.BaseCallbackHandler):
"""Initialize the progress bar.
Args:
- total: int, the total number of items to be processed.
- ncols: int, the character width of the progress bar.
- end_with: str, last string to print after progress bar reaches end.
- **kwargs: additional keyword arguments.
+ total: The total number of items to be processed.
+ ncols: The character width of the progress bar.
+ end_with: Last string to print after progress bar reaches end.
+ **kwargs: Additional keyword arguments.
"""
self.total = total
self.ncols = ncols
diff --git a/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py b/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py
index d52e037eb8b..e8f25cb2be7 100644
--- a/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py
+++ b/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py
@@ -153,7 +153,7 @@ class EvalError(dict):
"""Your architecture raised an error."""
def __init__(self, Error: BaseException, **kwargs: Any) -> None: # noqa: N803
- """Initialize the EvalError with an error and additional attributes.
+ """Initialize the `EvalError` with an error and additional attributes.
Args:
Error: The error that occurred.
@@ -162,7 +162,7 @@ class EvalError(dict):
super().__init__(Error=Error, **kwargs)
def __getattr__(self, name: str) -> Any:
- """Get an attribute from the EvalError.
+ """Get an attribute from the `EvalError`.
Args:
name: The name of the attribute to get.
@@ -982,8 +982,7 @@ def _run_llm_or_chain(
input_mapper: Optional function to map the input to the expected format.
Returns:
- Union[List[dict], List[str], List[LLMResult], List[ChatResult]]:
- The outputs of the model or chain.
+ The outputs of the model or chain.
"""
chain_or_llm = (
"LLM" if isinstance(llm_or_chain_factory, BaseLanguageModel) else "Chain"
@@ -1372,7 +1371,7 @@ async def arun_on_dataset(
dataset_version: Optional version of the dataset.
concurrency_level: The number of async tasks to run concurrently.
project_name: Name of the project to store the traces in.
- Defaults to {dataset_name}-{chain class name}-{datetime}.
+ Defaults to `{dataset_name}-{chain class name}-{datetime}`.
project_metadata: Optional metadata to add to the project.
Useful for storing information the test variant.
(prompt version, model version, etc.)
@@ -1384,7 +1383,7 @@ async def arun_on_dataset(
**kwargs: Should not be used, but is provided for backwards compatibility.
Returns:
- A dictionary containing the run's project name and the resulting model outputs.
+ `dict` containing the run's project name and the resulting model outputs.
Examples:
```python
@@ -1396,9 +1395,9 @@ async def arun_on_dataset(
# Chains may have memory. Passing in a constructor function lets the
# evaluation framework avoid cross-contamination between runs.
def construct_chain():
- llm = ChatOpenAI(temperature=0)
+ model = ChatOpenAI(temperature=0)
chain = LLMChain.from_string(
- llm,
+ model,
"What's the answer to {your_input_key}"
)
return chain
@@ -1424,9 +1423,8 @@ async def arun_on_dataset(
evaluation=evaluation_config,
)
```
- You can also create custom evaluators by subclassing the
- `StringEvaluator `
- or LangSmith's `RunEvaluator` classes.
+    You can also create custom evaluators by subclassing the `StringEvaluator` or
+ LangSmith's `RunEvaluator` classes.
```python
from typing import Optional
@@ -1547,7 +1545,7 @@ def run_on_dataset(
dataset_version: Optional version of the dataset.
concurrency_level: The number of async tasks to run concurrently.
project_name: Name of the project to store the traces in.
- Defaults to {dataset_name}-{chain class name}-{datetime}.
+ Defaults to `{dataset_name}-{chain class name}-{datetime}`.
project_metadata: Optional metadata to add to the project.
Useful for storing information the test variant.
(prompt version, model version, etc.)
@@ -1559,7 +1557,7 @@ def run_on_dataset(
**kwargs: Should not be used, but is provided for backwards compatibility.
Returns:
- A dictionary containing the run's project name and the resulting model outputs.
+ `dict` containing the run's project name and the resulting model outputs.
Examples:
```python
@@ -1571,9 +1569,9 @@ def run_on_dataset(
# Chains may have memory. Passing in a constructor function lets the
# evaluation framework avoid cross-contamination between runs.
def construct_chain():
- llm = ChatOpenAI(temperature=0)
+ model = ChatOpenAI(temperature=0)
chain = LLMChain.from_string(
- llm,
+ model,
"What's the answer to {your_input_key}"
)
return chain
@@ -1600,9 +1598,8 @@ def run_on_dataset(
)
```
- You can also create custom evaluators by subclassing the
- `StringEvaluator `
- or LangSmith's `RunEvaluator` classes.
+ You can also create custom evaluators by subclassing the `StringEvaluator` or
+ LangSmith's `RunEvaluator` classes.
```python
from typing import Optional
diff --git a/libs/langchain/langchain_classic/storage/_lc_store.py b/libs/langchain/langchain_classic/storage/_lc_store.py
index f4f64939ea9..00a50b15e47 100644
--- a/libs/langchain/langchain_classic/storage/_lc_store.py
+++ b/libs/langchain/langchain_classic/storage/_lc_store.py
@@ -11,12 +11,12 @@ from langchain_classic.storage.encoder_backed import EncoderBackedStore
def _dump_as_bytes(obj: Serializable) -> bytes:
- """Return a bytes representation of a document."""
+ """Return a bytes representation of a `Document`."""
return dumps(obj).encode("utf-8")
def _dump_document_as_bytes(obj: Any) -> bytes:
- """Return a bytes representation of a document."""
+ """Return a bytes representation of a `Document`."""
if not isinstance(obj, Document):
msg = "Expected a Document instance"
raise TypeError(msg)
@@ -50,14 +50,14 @@ def create_lc_store(
*,
key_encoder: Callable[[str], str] | None = None,
) -> BaseStore[str, Serializable]:
- """Create a store for langchain serializable objects from a bytes store.
+ """Create a store for LangChain serializable objects from a bytes store.
Args:
store: A bytes store to use as the underlying store.
- key_encoder: A function to encode keys; if None uses identity function.
+        key_encoder: A function to encode keys; if `None`, uses identity function.
Returns:
- A key-value store for documents.
+ A key-value store for `Document` objects.
"""
return EncoderBackedStore(
store,
@@ -72,17 +72,17 @@ def create_kv_docstore(
*,
key_encoder: Callable[[str], str] | None = None,
) -> BaseStore[str, Document]:
- """Create a store for langchain Document objects from a bytes store.
+ """Create a store for langchain `Document` objects from a bytes store.
This store does run time type checking to ensure that the values are
- Document objects.
+ `Document` objects.
Args:
store: A bytes store to use as the underlying store.
- key_encoder: A function to encode keys; if None uses identity function.
+ key_encoder: A function to encode keys; if `None`, uses identity function.
Returns:
- A key-value store for documents.
+ A key-value store for `Document` objects.
"""
return EncoderBackedStore(
store,
diff --git a/libs/langchain/langchain_classic/storage/encoder_backed.py b/libs/langchain/langchain_classic/storage/encoder_backed.py
index c3a02acadff..0e5cefb7a82 100644
--- a/libs/langchain/langchain_classic/storage/encoder_backed.py
+++ b/libs/langchain/langchain_classic/storage/encoder_backed.py
@@ -56,14 +56,29 @@ class EncoderBackedStore(BaseStore[K, V]):
value_serializer: Callable[[V], bytes],
value_deserializer: Callable[[Any], V],
) -> None:
- """Initialize an EncodedStore."""
+ """Initialize an `EncodedStore`.
+
+ Args:
+ store: The underlying byte store to wrap.
+ key_encoder: Function to encode keys from type `K` to strings.
+ value_serializer: Function to serialize values from type `V` to bytes.
+            value_deserializer: Function to deserialize bytes back to type `V`.
+ """
self.store = store
self.key_encoder = key_encoder
self.value_serializer = value_serializer
self.value_deserializer = value_deserializer
def mget(self, keys: Sequence[K]) -> list[V | None]:
- """Get the values associated with the given keys."""
+ """Get the values associated with the given keys.
+
+ Args:
+ keys: A sequence of keys.
+
+ Returns:
+ A sequence of optional values associated with the keys.
+ If a key is not found, the corresponding value will be `None`.
+ """
encoded_keys: list[str] = [self.key_encoder(key) for key in keys]
values = self.store.mget(encoded_keys)
return [
@@ -72,7 +87,15 @@ class EncoderBackedStore(BaseStore[K, V]):
]
async def amget(self, keys: Sequence[K]) -> list[V | None]:
- """Get the values associated with the given keys."""
+ """Async get the values associated with the given keys.
+
+ Args:
+ keys: A sequence of keys.
+
+ Returns:
+ A sequence of optional values associated with the keys.
+ If a key is not found, the corresponding value will be `None`.
+ """
encoded_keys: list[str] = [self.key_encoder(key) for key in keys]
values = await self.store.amget(encoded_keys)
return [
@@ -81,7 +104,11 @@ class EncoderBackedStore(BaseStore[K, V]):
]
def mset(self, key_value_pairs: Sequence[tuple[K, V]]) -> None:
- """Set the values for the given keys."""
+ """Set the values for the given keys.
+
+ Args:
+ key_value_pairs: A sequence of key-value pairs.
+ """
encoded_pairs = [
(self.key_encoder(key), self.value_serializer(value))
for key, value in key_value_pairs
@@ -89,7 +116,11 @@ class EncoderBackedStore(BaseStore[K, V]):
self.store.mset(encoded_pairs)
async def amset(self, key_value_pairs: Sequence[tuple[K, V]]) -> None:
- """Set the values for the given keys."""
+ """Async set the values for the given keys.
+
+ Args:
+ key_value_pairs: A sequence of key-value pairs.
+ """
encoded_pairs = [
(self.key_encoder(key), self.value_serializer(value))
for key, value in key_value_pairs
@@ -97,12 +128,20 @@ class EncoderBackedStore(BaseStore[K, V]):
await self.store.amset(encoded_pairs)
def mdelete(self, keys: Sequence[K]) -> None:
- """Delete the given keys and their associated values."""
+ """Delete the given keys and their associated values.
+
+ Args:
+ keys: A sequence of keys to delete.
+ """
encoded_keys = [self.key_encoder(key) for key in keys]
self.store.mdelete(encoded_keys)
async def amdelete(self, keys: Sequence[K]) -> None:
- """Delete the given keys and their associated values."""
+ """Async delete the given keys and their associated values.
+
+ Args:
+ keys: A sequence of keys to delete.
+ """
encoded_keys = [self.key_encoder(key) for key in keys]
await self.store.amdelete(encoded_keys)
@@ -111,7 +150,14 @@ class EncoderBackedStore(BaseStore[K, V]):
*,
prefix: str | None = None,
) -> Iterator[K] | Iterator[str]:
- """Get an iterator over keys that match the given prefix."""
+ """Get an iterator over keys that match the given prefix.
+
+ Args:
+ prefix: The prefix to match.
+
+ Yields:
+ Keys that match the given prefix.
+ """
# For the time being this does not return K, but str
# it's for debugging purposes. Should fix this.
yield from self.store.yield_keys(prefix=prefix)
@@ -121,7 +167,14 @@ class EncoderBackedStore(BaseStore[K, V]):
*,
prefix: str | None = None,
) -> AsyncIterator[K] | AsyncIterator[str]:
- """Get an iterator over keys that match the given prefix."""
+ """Async get an iterator over keys that match the given prefix.
+
+ Args:
+ prefix: The prefix to match.
+
+ Yields:
+ Keys that match the given prefix.
+ """
# For the time being this does not return K, but str
# it's for debugging purposes. Should fix this.
async for key in self.store.ayield_keys(prefix=prefix):
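The newly documented constructor arguments are easiest to see wired together. A minimal sketch, assuming an in-memory byte store from `langchain_core` and JSON for the value round-trip (neither comes from this diff):

```python
import json

from langchain_core.stores import InMemoryByteStore

from langchain_classic.storage.encoder_backed import EncoderBackedStore

store = EncoderBackedStore(
    store=InMemoryByteStore(),
    key_encoder=lambda key: key,  # identity: keys are already strings
    value_serializer=lambda value: json.dumps(value).encode("utf-8"),
    value_deserializer=lambda data: json.loads(data),
)

store.mset([("settings", {"temperature": 0})])
print(store.mget(["settings", "missing"]))  # -> [{'temperature': 0}, None]
```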
diff --git a/libs/langchain/langchain_classic/storage/file_system.py b/libs/langchain/langchain_classic/storage/file_system.py
index 226768451ef..67a0a76f5a2 100644
--- a/libs/langchain/langchain_classic/storage/file_system.py
+++ b/libs/langchain/langchain_classic/storage/file_system.py
@@ -10,10 +10,10 @@ from langchain_classic.storage.exceptions import InvalidKeyException
class LocalFileStore(ByteStore):
- """BaseStore interface that works on the local file system.
+ """`BaseStore` interface that works on the local file system.
Examples:
- Create a LocalFileStore instance and perform operations on it:
+ Create a `LocalFileStore` instance and perform operations on it:
```python
from langchain_classic.storage import LocalFileStore
@@ -44,19 +44,18 @@ class LocalFileStore(ByteStore):
chmod_dir: int | None = None,
update_atime: bool = False,
) -> None:
- """Implement the BaseStore interface for the local file system.
+ """Implement the `BaseStore` interface for the local file system.
Args:
- root_path (Union[str, Path]): The root path of the file store. All keys are
- interpreted as paths relative to this root.
- chmod_file: (optional, defaults to `None`) If specified, sets permissions
- for newly created files, overriding the current `umask` if needed.
- chmod_dir: (optional, defaults to `None`) If specified, sets permissions
- for newly created dirs, overriding the current `umask` if needed.
- update_atime: (optional, defaults to `False`) If `True`, updates the
- filesystem access time (but not the modified time) when a file is read.
- This allows MRU/LRU cache policies to be implemented for filesystems
- where access time updates are disabled.
+ root_path: The root path of the file store. All keys are interpreted as
+ paths relative to this root.
+ chmod_file: Sets permissions for newly created files, overriding the
+ current `umask` if needed.
+ chmod_dir: Sets permissions for newly created dirs, overriding the
+ current `umask` if needed.
+ update_atime: Updates the filesystem access time (but not the modified
+ time) when a file is read. This allows MRU/LRU cache policies to be
+ implemented for filesystems where access time updates are disabled.
"""
self.root_path = Path(root_path).absolute()
self.chmod_file = chmod_file
@@ -67,10 +66,10 @@ class LocalFileStore(ByteStore):
"""Get the full path for a given key relative to the root path.
Args:
- key (str): The key relative to the root path.
+ key: The key relative to the root path.
Returns:
- Path: The full path for the given key.
+ The full path for the given key.
"""
if not re.match(r"^[a-zA-Z0-9_.\-/]+$", key):
msg = f"Invalid characters in key: {key}"
@@ -94,10 +93,7 @@ class LocalFileStore(ByteStore):
whereas the explicit `os.chmod()` used here is not.
Args:
- dir_path: (Path) The store directory to make
-
- Returns:
- None
+ dir_path: The store directory to make.
"""
if not dir_path.exists():
self._mkdir_for_store(dir_path.parent)
@@ -113,7 +109,7 @@ class LocalFileStore(ByteStore):
Returns:
A sequence of optional values associated with the keys.
- If a key is not found, the corresponding value will be None.
+ If a key is not found, the corresponding value will be `None`.
"""
values: list[bytes | None] = []
for key in keys:
@@ -133,9 +129,6 @@ class LocalFileStore(ByteStore):
Args:
key_value_pairs: A sequence of key-value pairs.
-
- Returns:
- None
"""
for key, value in key_value_pairs:
full_path = self._get_full_path(key)
@@ -148,10 +141,7 @@ class LocalFileStore(ByteStore):
"""Delete the given keys and their associated values.
Args:
- keys (Sequence[str]): A sequence of keys to delete.
-
- Returns:
- None
+ keys: A sequence of keys to delete.
"""
for key in keys:
full_path = self._get_full_path(key)
@@ -162,10 +152,10 @@ class LocalFileStore(ByteStore):
"""Get an iterator over keys that match the given prefix.
Args:
- prefix (str | None): The prefix to match.
+ prefix: The prefix to match.
- Returns:
- Iterator[str]: An iterator over keys that match the given prefix.
+ Yields:
+ Keys that match the given prefix.
"""
prefix_path = self._get_full_path(prefix) if prefix else self.root_path
for file in prefix_path.rglob("*"):
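To make the reworked argument docs above concrete, a hedged sketch of constructing the store with those options (the path and permission bits are placeholder values):

```python
from langchain_classic.storage import LocalFileStore

store = LocalFileStore(
    "/tmp/lc_file_store",  # root_path: keys resolve relative to this directory
    chmod_file=0o640,      # permissions for newly created files
    chmod_dir=0o750,       # permissions for newly created directories
    update_atime=True,     # refresh access time on reads (enables LRU policies)
)

store.mset([("prompts/system", b"You are a helpful assistant.")])
print(store.mget(["prompts/system"]))
print(list(store.yield_keys(prefix="prompts")))
```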
diff --git a/libs/langchain/langchain_classic/tools/convert_to_openai.py b/libs/langchain/langchain_classic/tools/convert_to_openai.py
index d9f639a382f..1e185e3d248 100644
--- a/libs/langchain/langchain_classic/tools/convert_to_openai.py
+++ b/libs/langchain/langchain_classic/tools/convert_to_openai.py
@@ -1,4 +1,6 @@
-from langchain_core.utils.function_calling import format_tool_to_openai_function
+from langchain_core.utils.function_calling import (
+ convert_to_openai_function as format_tool_to_openai_function,
+)
# For backwards compatibility
__all__ = ["format_tool_to_openai_function"]
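The shim above keeps the old name importable while delegating to the consolidated converter in `langchain_core`. A hedged sketch of what that means for callers (the `add` tool is illustrative):

```python
from langchain_core.tools import tool
from langchain_core.utils.function_calling import convert_to_openai_function

from langchain_classic.tools.convert_to_openai import format_tool_to_openai_function


@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


# Old and new names resolve to the same function, so existing call sites keep working.
assert format_tool_to_openai_function is convert_to_openai_function
print(convert_to_openai_function(add)["name"])  # -> "add"
```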
diff --git a/libs/langchain/langchain_classic/tools/jira/tool.py b/libs/langchain/langchain_classic/tools/jira/tool.py
index f9216f08a42..e5f70b14187 100644
--- a/libs/langchain/langchain_classic/tools/jira/tool.py
+++ b/libs/langchain/langchain_classic/tools/jira/tool.py
@@ -27,10 +27,10 @@ def __getattr__(name: str) -> Any:
"""Dynamically retrieve attributes from the updated module path.
Args:
- name (str): The name of the attribute to import.
+ name: The name of the attribute to import.
Returns:
- Any: The resolved attribute from the updated path.
+ The resolved attribute from the updated path.
"""
return _import_attribute(name)
diff --git a/libs/langchain/langchain_classic/tools/json/tool.py b/libs/langchain/langchain_classic/tools/json/tool.py
index ca85daed0a5..2ff1e4b8444 100644
--- a/libs/langchain/langchain_classic/tools/json/tool.py
+++ b/libs/langchain/langchain_classic/tools/json/tool.py
@@ -35,10 +35,10 @@ def __getattr__(name: str) -> Any:
at runtime and forward them to their new locations.
Args:
- name (str): The name of the attribute to import.
+ name: The name of the attribute to import.
Returns:
- Any: The resolved attribute from the appropriate updated module.
+ The resolved attribute from the appropriate updated module.
"""
return _import_attribute(name)
diff --git a/libs/langchain/langchain_classic/tools/render.py b/libs/langchain/langchain_classic/tools/render.py
index cbe6e2bb0e9..50604080499 100644
--- a/libs/langchain/langchain_classic/tools/render.py
+++ b/libs/langchain/langchain_classic/tools/render.py
@@ -11,8 +11,10 @@ from langchain_core.tools import (
render_text_description_and_args,
)
from langchain_core.utils.function_calling import (
- format_tool_to_openai_function,
- format_tool_to_openai_tool,
+ convert_to_openai_function as format_tool_to_openai_function,
+)
+from langchain_core.utils.function_calling import (
+ convert_to_openai_tool as format_tool_to_openai_tool,
)
__all__ = [
diff --git a/libs/langchain/langchain_classic/tools/zapier/tool.py b/libs/langchain/langchain_classic/tools/zapier/tool.py
index 1249bb8bcf3..d04673abe6b 100644
--- a/libs/langchain/langchain_classic/tools/zapier/tool.py
+++ b/libs/langchain/langchain_classic/tools/zapier/tool.py
@@ -33,10 +33,10 @@ def __getattr__(name: str) -> Any:
at runtime and forward them to their new locations.
Args:
- name (str): The name of the attribute to import.
+ name: The name of the attribute to import.
Returns:
- Any: The resolved attribute from the appropriate updated module.
+ The resolved attribute from the appropriate updated module.
"""
return _import_attribute(name)
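These tool modules all lean on the same dynamic-forwarding idea: a module-level `__getattr__` (PEP 562) resolves deprecated names from their new home on first access. A stripped-down sketch of the pattern, with a placeholder lookup table rather than the real one (the actual modules build theirs via `create_importer`):

```python
import importlib
from typing import Any

# Placeholder mapping; entries name the module that now owns each attribute.
DEPRECATED_LOOKUP = {"SomeTool": "langchain_community.tools"}


def __getattr__(name: str) -> Any:
    """Resolve deprecated attributes from their new module path on demand."""
    if name in DEPRECATED_LOOKUP:
        module = importlib.import_module(DEPRECATED_LOOKUP[name])
        return getattr(module, name)
    msg = f"module {__name__!r} has no attribute {name!r}"
    raise AttributeError(msg)
```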
diff --git a/libs/langchain/langchain_classic/utils/__init__.py b/libs/langchain/langchain_classic/utils/__init__.py
index 533d8db4ed7..0777d743a35 100644
--- a/libs/langchain/langchain_classic/utils/__init__.py
+++ b/libs/langchain/langchain_classic/utils/__init__.py
@@ -1,4 +1,4 @@
-"""**Utility functions** for LangChain.
+"""Utility functions for LangChain.
These functions do not depend on any other LangChain module.
"""
diff --git a/libs/langchain/langchain_classic/utils/openai_functions.py b/libs/langchain/langchain_classic/utils/openai_functions.py
index 6e093c35d6f..0e21c36857f 100644
--- a/libs/langchain/langchain_classic/utils/openai_functions.py
+++ b/libs/langchain/langchain_classic/utils/openai_functions.py
@@ -1,8 +1,9 @@
+from langchain_core.utils.function_calling import FunctionDescription, ToolDescription
from langchain_core.utils.function_calling import (
- FunctionDescription,
- ToolDescription,
- convert_pydantic_to_openai_function,
- convert_pydantic_to_openai_tool,
+ convert_to_openai_function as convert_pydantic_to_openai_function,
+)
+from langchain_core.utils.function_calling import (
+ convert_to_openai_tool as convert_pydantic_to_openai_tool,
)
__all__ = [
diff --git a/libs/langchain/langchain_classic/vectorstores/__init__.py b/libs/langchain/langchain_classic/vectorstores/__init__.py
index 6b61ff01cb1..74882a4987b 100644
--- a/libs/langchain/langchain_classic/vectorstores/__init__.py
+++ b/libs/langchain/langchain_classic/vectorstores/__init__.py
@@ -71,7 +71,6 @@ if TYPE_CHECKING:
SupabaseVectorStore,
Tair,
TencentVectorDB,
- Tigris,
TileDB,
TimescaleVector,
Typesense,
@@ -149,7 +148,6 @@ DEPRECATED_LOOKUP = {
"SupabaseVectorStore": "langchain_community.vectorstores",
"Tair": "langchain_community.vectorstores",
"TencentVectorDB": "langchain_community.vectorstores",
- "Tigris": "langchain_community.vectorstores",
"TileDB": "langchain_community.vectorstores",
"TimescaleVector": "langchain_community.vectorstores",
"Typesense": "langchain_community.vectorstores",
@@ -231,7 +229,6 @@ __all__ = [
"SupabaseVectorStore",
"Tair",
"TencentVectorDB",
- "Tigris",
"TileDB",
"TimescaleVector",
"Typesense",
diff --git a/libs/langchain/langchain_classic/vectorstores/tigris.py b/libs/langchain/langchain_classic/vectorstores/tigris.py
deleted file mode 100644
index 8ba9f5af3cc..00000000000
--- a/libs/langchain/langchain_classic/vectorstores/tigris.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from typing import TYPE_CHECKING, Any
-
-from langchain_classic._api import create_importer
-
-if TYPE_CHECKING:
- from langchain_community.vectorstores import Tigris
-
-# Create a way to dynamically look up deprecated imports.
-# Used to consolidate logic for raising deprecation warnings and
-# handling optional imports.
-DEPRECATED_LOOKUP = {"Tigris": "langchain_community.vectorstores"}
-
-_import_attribute = create_importer(__package__, deprecated_lookups=DEPRECATED_LOOKUP)
-
-
-def __getattr__(name: str) -> Any:
- """Look up attributes dynamically."""
- return _import_attribute(name)
-
-
-__all__ = [
- "Tigris",
-]
diff --git a/libs/langchain/pyproject.toml b/libs/langchain/pyproject.toml
index ddd7e2963cd..f259e7d7044 100644
--- a/libs/langchain/pyproject.toml
+++ b/libs/langchain/pyproject.toml
@@ -3,12 +3,17 @@ requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
-authors = []
+name = "langchain-classic"
+description = "Building applications with LLMs through composability"
license = { text = "MIT" }
+readme = "README.md"
+authors = []
+
+version = "1.0.0"
requires-python = ">=3.10.0,<4.0.0"
dependencies = [
- "langchain-core>=1.0.0a7,<2.0.0",
- "langchain-text-splitters>=1.0.0a1,<2.0.0",
+ "langchain-core>=1.0.0,<2.0.0",
+ "langchain-text-splitters>=1.0.0,<2.0.0",
"langsmith>=0.1.17,<1.0.0",
"pydantic>=2.7.4,<3.0.0",
"SQLAlchemy>=1.4.0,<3.0.0",
@@ -16,10 +21,6 @@ dependencies = [
"PyYAML>=5.3.0,<7.0.0",
"async-timeout>=4.0.0,<5.0.0; python_version < \"3.11\"",
]
-name = "langchain-classic"
-version = "1.0.0a1"
-description = "Building applications with LLMs through composability"
-readme = "README.md"
[project.optional-dependencies]
#community = ["langchain-community"]
@@ -33,7 +34,7 @@ fireworks = ["langchain-fireworks"]
ollama = ["langchain-ollama"]
together = ["langchain-together"]
mistralai = ["langchain-mistralai"]
-#huggingface = ["langchain-huggingface"]
+huggingface = ["langchain-huggingface"]
groq = ["langchain-groq"]
aws = ["langchain-aws"]
deepseek = ["langchain-deepseek"]
@@ -41,12 +42,13 @@ xai = ["langchain-xai"]
perplexity = ["langchain-perplexity"]
[project.urls]
-homepage = "https://docs.langchain.com/"
-repository = "https://github.com/langchain-ai/langchain/tree/master/libs/langchain"
-changelog = "https://github.com/langchain-ai/langchain/releases?q=tag%3A%22langchain-classic%3D%3D1%22"
-twitter = "https://x.com/LangChainAI"
-slack = "https://www.langchain.com/join-community"
-reddit = "https://www.reddit.com/r/LangChain/"
+Homepage = "https://docs.langchain.com/"
+Documentation = "https://reference.langchain.com/python/langchain_classic/"
+Source = "https://github.com/langchain-ai/langchain/tree/master/libs/langchain"
+Changelog = "https://github.com/langchain-ai/langchain/releases?q=tag%3A%22langchain-classic%3D%3D1%22"
+Twitter = "https://x.com/LangChainAI"
+Slack = "https://www.langchain.com/join-community"
+Reddit = "https://www.reddit.com/r/LangChain/"
[dependency-groups]
test = [
@@ -62,7 +64,6 @@ test = [
"numpy>=2.1.0; python_version>='3.13'",
"cffi<1.17.1; python_version < \"3.10\"",
"cffi; python_version >= \"3.10\"",
- "duckdb-engine>=0.9.2,<1.0.0",
"freezegun>=1.2.2,<2.0.0",
"responses>=0.22.0,<1.0.0",
"lark>=1.1.5,<2.0.0",
@@ -153,6 +154,7 @@ ignore = [
"TC003", # Doesn't play well with Pydantic
"TD002", # Missing author in TODO
"TD003", # Missing issue link in TODO
+ "RUF002", # Em-dash in docstring
# TODO rules
"ANN401", # No type Any
diff --git a/libs/langchain/scripts/lint_imports.sh b/libs/langchain/scripts/lint_imports.sh
index 1ab9dd7a733..b8cb03b69bc 100755
--- a/libs/langchain/scripts/lint_imports.sh
+++ b/libs/langchain/scripts/lint_imports.sh
@@ -6,23 +6,23 @@ set -eu
errors=0
# Check the conditions
-git grep '^from langchain import' langchain | grep -vE 'from langchain import (__version__|hub)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/pydantic_v1 | grep -vE 'from langchain.(pydantic_v1|_api)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/load | grep -vE 'from langchain.(pydantic_v1|load|_api)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/utils | grep -vE 'from langchain.(pydantic_v1|utils|_api)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/schema | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|env|_api)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/adapters | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|_api)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/callbacks | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|_api)' && errors=$((errors+1))
+git grep '^from langchain import' langchain_classic | grep -vE 'from langchain import (__version__|hub)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/pydantic_v1 | grep -vE 'from langchain.(pydantic_v1|_api)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/load | grep -vE 'from langchain.(pydantic_v1|load|_api)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/utils | grep -vE 'from langchain.(pydantic_v1|utils|_api)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/schema | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|env|_api)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/adapters | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|_api)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/callbacks | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|_api)' && errors=$((errors+1))
# TODO: it's probably not amazing so that so many other modules depend on `langchain_community.utilities`, because there can be a lot of imports there
-git grep '^from langchain\.' langchain/utilities | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|utilities|_api)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/storage | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|storage|utilities|_api)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/prompts | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|prompts|_api)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/output_parsers | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|prompts|_api|output_parsers|_api)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/llms | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|prompts|llms|utilities|globals|_api)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/chat_models | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|llms|prompts|adapters|chat_models|utilities|globals|_api)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/embeddings | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|storage|llms|embeddings|utilities|_api)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/docstore | grep -vE 'from langchain.(pydantic_v1|utils|schema|docstore|_api)' && errors=$((errors+1))
-git grep '^from langchain\.' langchain/vectorstores | grep -vE 'from
+git grep '^from langchain\.' langchain_classic/utilities | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|utilities|_api)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/storage | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|storage|utilities|_api)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/prompts | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|prompts|_api)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/output_parsers | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|prompts|_api|output_parsers|_api)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/llms | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|prompts|llms|utilities|globals|_api)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/chat_models | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|llms|prompts|adapters|chat_models|utilities|globals|_api)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/embeddings | grep -vE 'from langchain.(pydantic_v1|utils|schema|load|callbacks|env|storage|llms|embeddings|utilities|_api)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/docstore | grep -vE 'from langchain.(pydantic_v1|utils|schema|docstore|_api)' && errors=$((errors+1))
+git grep '^from langchain\.' langchain_classic/vectorstores | grep -vE 'from
langchain.(pydantic_v1|utils|schema|load|callbacks|env|_api|storage|llms|docstore|vectorstores|utilities|_api)' && errors=$((errors+1))
# make sure not importing from langchain_experimental
git --no-pager grep '^from langchain_experimental\.' . && errors=$((errors+1))
diff --git a/libs/langchain/tests/integration_tests/chat_models/test_base.py b/libs/langchain/tests/integration_tests/chat_models/test_base.py
index eddc6cb45bb..07c03b725ef 100644
--- a/libs/langchain/tests/integration_tests/chat_models/test_base.py
+++ b/libs/langchain/tests/integration_tests/chat_models/test_base.py
@@ -25,7 +25,7 @@ async def test_init_chat_model_chain() -> None:
model_with_config = model_with_tools.with_config(
RunnableConfig(tags=["foo"]),
- configurable={"bar_model": "claude-3-7-sonnet-20250219"},
+ configurable={"bar_model": "claude-sonnet-4-5-20250929"},
)
prompt = ChatPromptTemplate.from_messages([("system", "foo"), ("human", "{input}")])
chain = prompt | model_with_config
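For readers unfamiliar with the pattern the updated test exercises: the model is left configurable and selected at call time via prefixed config keys. A hedged sketch, assuming `init_chat_model` is exposed from `langchain_classic.chat_models` (the import path is an assumption, not taken from this diff):

```python
from langchain_classic.chat_models import init_chat_model

# No fixed model: "bar_"-prefixed configurable keys choose one at runtime.
model = init_chat_model(
    configurable_fields=("model", "model_provider"),
    config_prefix="bar",
)

configured = model.with_config(
    configurable={"bar_model": "claude-sonnet-4-5-20250929"},
)
```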
diff --git a/libs/langchain/tests/unit_tests/agents/test_agent.py b/libs/langchain/tests/unit_tests/agents/test_agent.py
index 072a3c7faf7..8cbcb198560 100644
--- a/libs/langchain/tests/unit_tests/agents/test_agent.py
+++ b/libs/langchain/tests/unit_tests/agents/test_agent.py
@@ -450,7 +450,7 @@ def test_agent_invalid_tool() -> None:
async def test_runnable_agent() -> None:
- """Simple test to verify that an agent built with LCEL works."""
+ """Simple test to verify that an agent built via composition works."""
# Will alternate between responding with hello and goodbye
infinite_cycle = cycle([AIMessage(content="hello world!")])
# When streaming GenericFakeChatModel breaks AIMessage into chunks based on spaces
diff --git a/libs/langchain/tests/unit_tests/chains/test_base.py b/libs/langchain/tests/unit_tests/chains/test_base.py
index a0e21fab2b6..a607eada5b9 100644
--- a/libs/langchain/tests/unit_tests/chains/test_base.py
+++ b/libs/langchain/tests/unit_tests/chains/test_base.py
@@ -6,10 +6,10 @@ from typing import Any
import pytest
from langchain_core.callbacks.manager import CallbackManagerForChainRun
-from langchain_core.memory import BaseMemory
from langchain_core.tracers.context import collect_runs
from typing_extensions import override
+from langchain_classic.base_memory import BaseMemory
from langchain_classic.chains.base import Chain
from langchain_classic.schema import RUN_KEY
from tests.unit_tests.callbacks.fake_callback_handler import FakeCallbackHandler
diff --git a/libs/langchain/tests/unit_tests/chains/test_conversation.py b/libs/langchain/tests/unit_tests/chains/test_conversation.py
index 0913204b77e..7ff07f45da6 100644
--- a/libs/langchain/tests/unit_tests/chains/test_conversation.py
+++ b/libs/langchain/tests/unit_tests/chains/test_conversation.py
@@ -6,10 +6,10 @@ from typing import Any
import pytest
from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import LLM
-from langchain_core.memory import BaseMemory
from langchain_core.prompts.prompt import PromptTemplate
from typing_extensions import override
+from langchain_classic.base_memory import BaseMemory
from langchain_classic.chains.conversation.base import ConversationChain
from langchain_classic.memory.buffer import ConversationBufferMemory
from langchain_classic.memory.buffer_window import ConversationBufferWindowMemory
diff --git a/libs/langchain/tests/unit_tests/chains/test_memory.py b/libs/langchain/tests/unit_tests/chains/test_memory.py
index 2be4384c397..0894d5f92df 100644
--- a/libs/langchain/tests/unit_tests/chains/test_memory.py
+++ b/libs/langchain/tests/unit_tests/chains/test_memory.py
@@ -1,6 +1,6 @@
import pytest
-from langchain_core.memory import BaseMemory
+from langchain_classic.base_memory import BaseMemory
from langchain_classic.chains.conversation.memory import (
ConversationBufferMemory,
ConversationBufferWindowMemory,
diff --git a/libs/langchain/tests/unit_tests/chat_models/test_base.py b/libs/langchain/tests/unit_tests/chat_models/test_base.py
index 400deabe9c6..2b769aa4a38 100644
--- a/libs/langchain/tests/unit_tests/chat_models/test_base.py
+++ b/libs/langchain/tests/unit_tests/chat_models/test_base.py
@@ -32,7 +32,7 @@ def test_all_imports() -> None:
("model_name", "model_provider"),
[
("gpt-4o", "openai"),
- ("claude-3-opus-20240229", "anthropic"),
+ ("claude-opus-4-1", "anthropic"),
("accounts/fireworks/models/mixtral-8x7b-instruct", "fireworks"),
("mixtral-8x7b-32768", "groq"),
],
@@ -241,17 +241,17 @@ def test_configurable_with_default() -> None:
model_with_config = model_with_tools.with_config(
RunnableConfig(tags=["foo"]),
- configurable={"bar_model": "claude-3-7-sonnet-20250219"},
+ configurable={"bar_model": "claude-sonnet-4-5-20250929"},
)
- assert model_with_config.model == "claude-3-7-sonnet-20250219" # type: ignore[attr-defined]
+ assert model_with_config.model == "claude-sonnet-4-5-20250929" # type: ignore[attr-defined]
assert model_with_config.model_dump() == { # type: ignore[attr-defined]
"name": None,
"bound": {
"name": None,
"disable_streaming": False,
- "model": "claude-3-7-sonnet-20250219",
+ "model": "claude-sonnet-4-5-20250929",
"mcp_servers": None,
"max_tokens": 64000,
"temperature": None,
diff --git a/libs/langchain/tests/unit_tests/embeddings/test_imports.py b/libs/langchain/tests/unit_tests/embeddings/test_imports.py
index 1a2d86854a6..0b65208e43d 100644
--- a/libs/langchain/tests/unit_tests/embeddings/test_imports.py
+++ b/libs/langchain/tests/unit_tests/embeddings/test_imports.py
@@ -11,6 +11,7 @@ EXPECTED_ALL = [
"FastEmbedEmbeddings",
"HuggingFaceEmbeddings",
"HuggingFaceInferenceAPIEmbeddings",
+ "HypotheticalDocumentEmbedder",
"InfinityEmbeddings",
"GradientEmbeddings",
"JinaEmbeddings",
diff --git a/libs/langchain/tests/unit_tests/indexes/test_indexing.py b/libs/langchain/tests/unit_tests/indexes/test_indexing.py
index ebc87aee00f..e3d5c85c37d 100644
--- a/libs/langchain/tests/unit_tests/indexes/test_indexing.py
+++ b/libs/langchain/tests/unit_tests/indexes/test_indexing.py
@@ -446,7 +446,7 @@ def test_incremental_fails_with_bad_source_ids(
with pytest.raises(
ValueError,
- match="Source ids are required when cleanup mode is incremental or scoped_full",
+ match="Source IDs are required when cleanup mode is incremental or scoped_full",
):
# Should raise an error because no source id function was specified
index(
@@ -496,7 +496,7 @@ async def test_aincremental_fails_with_bad_source_ids(
with pytest.raises(
ValueError,
- match="Source ids are required when cleanup mode is incremental or scoped_full",
+ match="Source IDs are required when cleanup mode is incremental or scoped_full",
):
# Should raise an error because no source id function was specified
await aindex(
diff --git a/libs/langchain/tests/unit_tests/test_dependencies.py b/libs/langchain/tests/unit_tests/test_dependencies.py
index 7b9e12f0e4d..9bf5dcf95f8 100644
--- a/libs/langchain/tests/unit_tests/test_dependencies.py
+++ b/libs/langchain/tests/unit_tests/test_dependencies.py
@@ -57,7 +57,6 @@ def test_test_group_dependencies(uv_conf: Mapping[str, Any]) -> None:
assert sorted(test_group_deps) == sorted(
[
- "duckdb-engine",
"freezegun",
"langchain-core",
"langchain-tests",
diff --git a/libs/langchain/tests/unit_tests/test_imports.py b/libs/langchain/tests/unit_tests/test_imports.py
index f13d1e18f32..dc964846f01 100644
--- a/libs/langchain/tests/unit_tests/test_imports.py
+++ b/libs/langchain/tests/unit_tests/test_imports.py
@@ -96,7 +96,7 @@ def test_no_more_changes_to_proxy_community() -> None:
# most cases.
hash_ += len(str(sorted(deprecated_lookup.items())))
- evil_magic_number = 38620
+ evil_magic_number = 38644
assert hash_ == evil_magic_number, (
"If you're triggering this test, you're likely adding a new import "
@@ -108,15 +108,15 @@ def test_no_more_changes_to_proxy_community() -> None:
def extract_deprecated_lookup(file_path: str) -> dict[str, Any] | None:
- """Detect and extracts the value of a dictionary named DEPRECATED_LOOKUP.
+ """Detect and extracts the value of a dictionary named `DEPRECATED_LOOKUP`.
This variable is located in the global namespace of a Python file.
Args:
- file_path (str): The path to the Python file.
+ file_path: The path to the Python file.
Returns:
- dict or None: The value of DEPRECATED_LOOKUP if it exists, None otherwise.
+ The value of `DEPRECATED_LOOKUP` if it exists, `None` otherwise.
"""
tree = ast.parse(Path(file_path).read_text(encoding="utf-8"), filename=file_path)
@@ -136,10 +136,10 @@ def _dict_from_ast(node: ast.Dict) -> dict[str, str]:
"""Convert an AST dict node to a Python dictionary, assuming str to str format.
Args:
- node (ast.Dict): The AST node representing a dictionary.
+ node: The AST node representing a dictionary.
Returns:
- dict: The corresponding Python dictionary.
+ The corresponding Python dictionary.
"""
result: dict[str, str] = {}
for key, value in zip(node.keys, node.values, strict=False):
@@ -153,10 +153,10 @@ def _literal_eval_str(node: ast.AST) -> str:
"""Evaluate an AST literal node to its corresponding string value.
Args:
- node (ast.AST): The AST node representing a literal value.
+ node: The AST node representing a literal value.
Returns:
- str: The corresponding string value.
+ The corresponding string value.
"""
if isinstance(node, ast.Constant) and isinstance(node.value, str):
return node.value
diff --git a/libs/langchain/tests/unit_tests/utils/test_openai_functions.py b/libs/langchain/tests/unit_tests/utils/test_openai_functions.py
index ffe68e64ffa..570cd4d351d 100644
--- a/libs/langchain/tests/unit_tests/utils/test_openai_functions.py
+++ b/libs/langchain/tests/unit_tests/utils/test_openai_functions.py
@@ -1,4 +1,4 @@
-from langchain_core.utils.function_calling import convert_pydantic_to_openai_function
+from langchain_core.utils.function_calling import convert_to_openai_function
from pydantic import BaseModel, Field
@@ -9,7 +9,7 @@ def test_convert_pydantic_to_openai_function() -> None:
key: str = Field(..., description="API key")
days: int = Field(default=0, description="Number of days to forecast")
- actual = convert_pydantic_to_openai_function(Data)
+ actual = convert_to_openai_function(Data)
expected = {
"name": "Data",
"description": "The data to return.",
@@ -41,7 +41,7 @@ def test_convert_pydantic_to_openai_function_nested() -> None:
data: Data
- actual = convert_pydantic_to_openai_function(Model)
+ actual = convert_to_openai_function(Model)
expected = {
"name": "Model",
"description": "The model to return.",
diff --git a/libs/langchain/tests/unit_tests/vectorstores/test_public_api.py b/libs/langchain/tests/unit_tests/vectorstores/test_public_api.py
index c05bd2a95bd..d639bba807b 100644
--- a/libs/langchain/tests/unit_tests/vectorstores/test_public_api.py
+++ b/libs/langchain/tests/unit_tests/vectorstores/test_public_api.py
@@ -61,7 +61,6 @@ _EXPECTED = [
"SupabaseVectorStore",
"Tair",
"TencentVectorDB",
- "Tigris",
"TileDB",
"TimescaleVector",
"Typesense",
diff --git a/libs/langchain/uv.lock b/libs/langchain/uv.lock
index 933859e051c..1b9d095dce7 100644
--- a/libs/langchain/uv.lock
+++ b/libs/langchain/uv.lock
@@ -950,52 +950,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/55/e2/2537ebcff11c1ee1ff17d8d0b6f4db75873e3b0fb32c2d4a2ee31ecb310a/docstring_parser-0.17.0-py3-none-any.whl", hash = "sha256:cf2569abd23dce8099b300f9b4fa8191e9582dda731fd533daf54c4551658708", size = 36896, upload-time = "2025-07-21T07:35:00.684Z" },
]
-[[package]]
-name = "duckdb"
-version = "1.4.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/82/93/adc0d183642fc9a602ca9b97cb16754c84b8c1d92e5b99aec412e0c419a8/duckdb-1.4.0.tar.gz", hash = "sha256:bd5edee8bd5a73b5822f2b390668597b5fcdc2d3292c244d8d933bb87ad6ac4c", size = 18453175, upload-time = "2025-09-16T10:22:41.509Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/0f/4a/b2e17dbe2953481b084f355f162ed319a67ef760e28794c6870058583aec/duckdb-1.4.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e24e981a6c87e299201694b9bb24fff0beb04ccad399fca6f13072a59814488f", size = 31293005, upload-time = "2025-09-16T10:21:28.296Z" },
- { url = "https://files.pythonhosted.org/packages/a9/89/e34ed03cce7e35b83c1f056126aa4e8e8097eb93e7324463020f85d5cbfa/duckdb-1.4.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:db500ef2c8cb7dc1ca078740ecf1dceaa20d3f5dc5bce269be45d5cff4170c0f", size = 17288207, upload-time = "2025-09-16T10:21:31.129Z" },
- { url = "https://files.pythonhosted.org/packages/f8/17/7ff24799ee98c4dbb177c3ec6c93e38e9513828785c31757c727b47ad71e/duckdb-1.4.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a65739b8a7106634e6e77d0e110fc5e057b88edc9df6cb1683d499a1e5aa3177", size = 14817523, upload-time = "2025-09-16T10:21:33.397Z" },
- { url = "https://files.pythonhosted.org/packages/fc/ab/7a482a76ff75212b5cf4f2172a802f2a59b4ab096416e5821aa62a305bc4/duckdb-1.4.0-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1d59f7be24862adb803a1ddfc9c3b8cb09e6005bca0c9c6f7c631a1da1c3aa0c", size = 18410654, upload-time = "2025-09-16T10:21:35.864Z" },
- { url = "https://files.pythonhosted.org/packages/1e/f6/a235233b973652b31448b6d600604620d02fc552b90ab94ca7f645fd5ac0/duckdb-1.4.0-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7d052a87e9edf4eb3bab0b7a6ac995676018c6083b8049421628dfa3b983a2d4", size = 20399121, upload-time = "2025-09-16T10:21:38.524Z" },
- { url = "https://files.pythonhosted.org/packages/b1/cf/63fedb74d00d7c4e19ffc73a1d8d98ee8d3d6498cf2865509c104aa8e799/duckdb-1.4.0-cp310-cp310-win_amd64.whl", hash = "sha256:0329b81e587f745b2fc6f3a488ea3188b0f029c3b5feef43792a25eaac84ac01", size = 12283288, upload-time = "2025-09-16T10:21:40.732Z" },
- { url = "https://files.pythonhosted.org/packages/60/e9/b29cc5bceac52e049b20d613551a2171a092df07f26d4315f3f9651c80d4/duckdb-1.4.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6505fed1ccae8df9f574e744c48fa32ee2feaeebe5346c2daf4d4d10a8dac5aa", size = 31290878, upload-time = "2025-09-16T10:21:43.256Z" },
- { url = "https://files.pythonhosted.org/packages/1f/68/d88a15dba48bf6a4b33f1be5097ef45c83f7b9e97c854cc638a85bb07d70/duckdb-1.4.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:36974a04b29c74ac2143457e95420a7422016d050e28573060b89a90b9cf2b57", size = 17288823, upload-time = "2025-09-16T10:21:45.716Z" },
- { url = "https://files.pythonhosted.org/packages/8c/7e/e3d2101dc6bbd60f2b3c1d748351ff541fc8c48790ac1218c0199cb930f6/duckdb-1.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:90484b896e5059f145d1facfabea38e22c54a2dcc2bd62dd6c290423f0aee258", size = 14819684, upload-time = "2025-09-16T10:21:48.117Z" },
- { url = "https://files.pythonhosted.org/packages/c4/bb/4ec8e4d03cb5b77d75b9ee0057c2c714cffaa9bda1e55ffec833458af0a3/duckdb-1.4.0-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a969d624b385853b31a43b0a23089683297da2f14846243921c6dbec8382d659", size = 18410075, upload-time = "2025-09-16T10:21:50.517Z" },
- { url = "https://files.pythonhosted.org/packages/ec/21/e896616d892d50dc1e0c142428e9359b483d4dd6e339231d822e57834ad3/duckdb-1.4.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5935644f96a75e9f6f3c3eeb3da14cdcaf7bad14d1199c08439103decb29466a", size = 20402984, upload-time = "2025-09-16T10:21:52.808Z" },
- { url = "https://files.pythonhosted.org/packages/c4/c0/b5eb9497e4a9167d23fbad745969eaa36e28d346648e17565471892d1b33/duckdb-1.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:300aa0e963af97969c38440877fffd576fc1f49c1f5914789a9d01f2fe7def91", size = 12282971, upload-time = "2025-09-16T10:21:55.314Z" },
- { url = "https://files.pythonhosted.org/packages/e8/6d/0c774d6af1aed82dbe855d266cb000a1c09ea31ed7d6c3a79e2167a38e7a/duckdb-1.4.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:18b3a048fca6cc7bafe08b10e1b0ab1509d7a0381ffb2c70359e7dc56d8a705d", size = 31307425, upload-time = "2025-09-16T10:21:57.83Z" },
- { url = "https://files.pythonhosted.org/packages/d3/c0/1fd7b7b2c0c53d8d748d2f28ea9096df5ee9dc39fa736cca68acabe69656/duckdb-1.4.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:2c1271cb85aeacccfd0b1284e816280a7450df1dd4dd85ccb2848563cfdf90e9", size = 17295727, upload-time = "2025-09-16T10:22:02.242Z" },
- { url = "https://files.pythonhosted.org/packages/98/d3/4d4c4bd667b7ada5f6c207c2f127591ebb8468333f207f8f10ff0532578e/duckdb-1.4.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:55064dd2e25711eeaa6a72c25405bdd7994c81a3221657e94309a2faf65d25a6", size = 14826879, upload-time = "2025-09-16T10:22:05.162Z" },
- { url = "https://files.pythonhosted.org/packages/b0/48/e0c1b97d76fb7567c53db5739931323238fad54a642707008104f501db37/duckdb-1.4.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0536d7c81bc506532daccf373ddbc8c6add46aeb70ef3cd5ee70ad5c2b3165ea", size = 18417856, upload-time = "2025-09-16T10:22:07.919Z" },
- { url = "https://files.pythonhosted.org/packages/12/78/297b838f3b9511589badc8f472f70b31cf3bbf9eb99fa0a4d6e911d3114a/duckdb-1.4.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:784554e3ddfcfc5c5c7b1aa1f9925fedb7938f6628729adba48f7ea37554598f", size = 20427154, upload-time = "2025-09-16T10:22:10.216Z" },
- { url = "https://files.pythonhosted.org/packages/ea/57/500d251b886494f6c52d56eeab8a1860572ee62aed05d7d50c71ba2320f3/duckdb-1.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:c5d2aa4d6981f525ada95e6db41bb929403632bb5ff24bd6d6dd551662b1b613", size = 12290108, upload-time = "2025-09-16T10:22:12.668Z" },
- { url = "https://files.pythonhosted.org/packages/2f/64/ee22b2b8572746e1523143b9f28d606575782e0204de5020656a1d15dd14/duckdb-1.4.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:1d94d010a09b1a62d9021a2a71cf266188750f3c9b1912ccd6afe104a6ce8010", size = 31307662, upload-time = "2025-09-16T10:22:14.9Z" },
- { url = "https://files.pythonhosted.org/packages/76/2e/4241cd00046ca6b781bd1d9002e8223af061e85d1cc21830aa63e7a7db7c/duckdb-1.4.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:c61756fa8b3374627e5fa964b8e0d5b58e364dce59b87dba7fb7bc6ede196b26", size = 17295617, upload-time = "2025-09-16T10:22:17.239Z" },
- { url = "https://files.pythonhosted.org/packages/f7/98/5ab136bc7b12ac18580350a220db7c00606be9eac2d89de259cce733f64c/duckdb-1.4.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:e70d7d9881ea2c0836695de70ea68c970e18a2856ba3d6502e276c85bd414ae7", size = 14826727, upload-time = "2025-09-16T10:22:19.415Z" },
- { url = "https://files.pythonhosted.org/packages/23/32/57866cf8881288b3dfb9212720221fb890daaa534dbdc6fe3fff3979ecd1/duckdb-1.4.0-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2de258a93435c977a0ec3a74ec8f60c2f215ddc73d427ee49adc4119558facd3", size = 18421289, upload-time = "2025-09-16T10:22:21.564Z" },
- { url = "https://files.pythonhosted.org/packages/a0/83/7438fb43be451a7d4a04650aaaf662b2ff2d95895bbffe3e0e28cbe030c9/duckdb-1.4.0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a6d3659641d517dd9ed1ab66f110cdbdaa6900106f116effaf2dbedd83c38de3", size = 20426547, upload-time = "2025-09-16T10:22:23.759Z" },
- { url = "https://files.pythonhosted.org/packages/21/b2/98fb89ae81611855f35984e96f648d871f3967bb3f524b51d1372d052f0c/duckdb-1.4.0-cp313-cp313-win_amd64.whl", hash = "sha256:07fcc612ea5f0fe6032b92bcc93693034eb00e7a23eb9146576911d5326af4f7", size = 12290467, upload-time = "2025-09-16T10:22:25.923Z" },
-]
-
-[[package]]
-name = "duckdb-engine"
-version = "0.17.0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "duckdb" },
- { name = "packaging" },
- { name = "sqlalchemy" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/89/d5/c0d8d0a4ca3ffea92266f33d92a375e2794820ad89f9be97cf0c9a9697d0/duckdb_engine-0.17.0.tar.gz", hash = "sha256:396b23869754e536aa80881a92622b8b488015cf711c5a40032d05d2cf08f3cf", size = 48054, upload-time = "2025-03-29T09:49:17.663Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/2a/a2/e90242f53f7ae41554419b1695b4820b364df87c8350aa420b60b20cab92/duckdb_engine-0.17.0-py3-none-any.whl", hash = "sha256:3aa72085e536b43faab635f487baf77ddc5750069c16a2f8d9c6c3cb6083e979", size = 49676, upload-time = "2025-03-29T09:49:15.564Z" },
-]
-
[[package]]
name = "exceptiongroup"
version = "1.3.0"
@@ -1467,6 +1421,8 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/7f/91/ae2eb6b7979e2f9b035a9f612cf70f1bf54aad4e1d125129bef1eae96f19/greenlet-3.2.4-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c2ca18a03a8cfb5b25bc1cbe20f3d9a4c80d8c3b13ba3df49ac3961af0b1018d", size = 584358, upload-time = "2025-08-07T13:18:23.708Z" },
{ url = "https://files.pythonhosted.org/packages/f7/85/433de0c9c0252b22b16d413c9407e6cb3b41df7389afc366ca204dbc1393/greenlet-3.2.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9fe0a28a7b952a21e2c062cd5756d34354117796c6d9215a87f55e38d15402c5", size = 1113550, upload-time = "2025-08-07T13:42:37.467Z" },
{ url = "https://files.pythonhosted.org/packages/a1/8d/88f3ebd2bc96bf7747093696f4335a0a8a4c5acfcf1b757717c0d2474ba3/greenlet-3.2.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8854167e06950ca75b898b104b63cc646573aa5fef1353d4508ecdd1ee76254f", size = 1137126, upload-time = "2025-08-07T13:18:20.239Z" },
+ { url = "https://files.pythonhosted.org/packages/f1/29/74242b7d72385e29bcc5563fba67dad94943d7cd03552bac320d597f29b2/greenlet-3.2.4-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:f47617f698838ba98f4ff4189aef02e7343952df3a615f847bb575c3feb177a7", size = 1544904, upload-time = "2025-11-04T12:42:04.763Z" },
+ { url = "https://files.pythonhosted.org/packages/c8/e2/1572b8eeab0f77df5f6729d6ab6b141e4a84ee8eb9bc8c1e7918f94eda6d/greenlet-3.2.4-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:af41be48a4f60429d5cad9d22175217805098a9ef7c40bfef44f7669fb9d74d8", size = 1611228, upload-time = "2025-11-04T12:42:08.423Z" },
{ url = "https://files.pythonhosted.org/packages/d6/6f/b60b0291d9623c496638c582297ead61f43c4b72eef5e9c926ef4565ec13/greenlet-3.2.4-cp310-cp310-win_amd64.whl", hash = "sha256:73f49b5368b5359d04e18d15828eecc1806033db5233397748f4ca813ff1056c", size = 298654, upload-time = "2025-08-07T13:50:00.469Z" },
{ url = "https://files.pythonhosted.org/packages/a4/de/f28ced0a67749cac23fecb02b694f6473f47686dff6afaa211d186e2ef9c/greenlet-3.2.4-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:96378df1de302bc38e99c3a9aa311967b7dc80ced1dcc6f171e99842987882a2", size = 272305, upload-time = "2025-08-07T13:15:41.288Z" },
{ url = "https://files.pythonhosted.org/packages/09/16/2c3792cba130000bf2a31c5272999113f4764fd9d874fb257ff588ac779a/greenlet-3.2.4-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1ee8fae0519a337f2329cb78bd7a8e128ec0f881073d43f023c7b8d4831d5246", size = 632472, upload-time = "2025-08-07T13:42:55.044Z" },
@@ -1476,6 +1432,8 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/1f/8e/abdd3f14d735b2929290a018ecf133c901be4874b858dd1c604b9319f064/greenlet-3.2.4-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2523e5246274f54fdadbce8494458a2ebdcdbc7b802318466ac5606d3cded1f8", size = 587684, upload-time = "2025-08-07T13:18:25.164Z" },
{ url = "https://files.pythonhosted.org/packages/5d/65/deb2a69c3e5996439b0176f6651e0052542bb6c8f8ec2e3fba97c9768805/greenlet-3.2.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:1987de92fec508535687fb807a5cea1560f6196285a4cde35c100b8cd632cc52", size = 1116647, upload-time = "2025-08-07T13:42:38.655Z" },
{ url = "https://files.pythonhosted.org/packages/3f/cc/b07000438a29ac5cfb2194bfc128151d52f333cee74dd7dfe3fb733fc16c/greenlet-3.2.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:55e9c5affaa6775e2c6b67659f3a71684de4c549b3dd9afca3bc773533d284fa", size = 1142073, upload-time = "2025-08-07T13:18:21.737Z" },
+ { url = "https://files.pythonhosted.org/packages/67/24/28a5b2fa42d12b3d7e5614145f0bd89714c34c08be6aabe39c14dd52db34/greenlet-3.2.4-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:c9c6de1940a7d828635fbd254d69db79e54619f165ee7ce32fda763a9cb6a58c", size = 1548385, upload-time = "2025-11-04T12:42:11.067Z" },
+ { url = "https://files.pythonhosted.org/packages/6a/05/03f2f0bdd0b0ff9a4f7b99333d57b53a7709c27723ec8123056b084e69cd/greenlet-3.2.4-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:03c5136e7be905045160b1b9fdca93dd6727b180feeafda6818e6496434ed8c5", size = 1613329, upload-time = "2025-11-04T12:42:12.928Z" },
{ url = "https://files.pythonhosted.org/packages/d8/0f/30aef242fcab550b0b3520b8e3561156857c94288f0332a79928c31a52cf/greenlet-3.2.4-cp311-cp311-win_amd64.whl", hash = "sha256:9c40adce87eaa9ddb593ccb0fa6a07caf34015a29bf8d344811665b573138db9", size = 299100, upload-time = "2025-08-07T13:44:12.287Z" },
{ url = "https://files.pythonhosted.org/packages/44/69/9b804adb5fd0671f367781560eb5eb586c4d495277c93bde4307b9e28068/greenlet-3.2.4-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:3b67ca49f54cede0186854a008109d6ee71f66bd57bb36abd6d0a0267b540cdd", size = 274079, upload-time = "2025-08-07T13:15:45.033Z" },
{ url = "https://files.pythonhosted.org/packages/46/e9/d2a80c99f19a153eff70bc451ab78615583b8dac0754cfb942223d2c1a0d/greenlet-3.2.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ddf9164e7a5b08e9d22511526865780a576f19ddd00d62f8a665949327fde8bb", size = 640997, upload-time = "2025-08-07T13:42:56.234Z" },
@@ -1485,6 +1443,8 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/19/0d/6660d55f7373b2ff8152401a83e02084956da23ae58cddbfb0b330978fe9/greenlet-3.2.4-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3b3812d8d0c9579967815af437d96623f45c0f2ae5f04e366de62a12d83a8fb0", size = 607586, upload-time = "2025-08-07T13:18:28.544Z" },
{ url = "https://files.pythonhosted.org/packages/8e/1a/c953fdedd22d81ee4629afbb38d2f9d71e37d23caace44775a3a969147d4/greenlet-3.2.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:abbf57b5a870d30c4675928c37278493044d7c14378350b3aa5d484fa65575f0", size = 1123281, upload-time = "2025-08-07T13:42:39.858Z" },
{ url = "https://files.pythonhosted.org/packages/3f/c7/12381b18e21aef2c6bd3a636da1088b888b97b7a0362fac2e4de92405f97/greenlet-3.2.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:20fb936b4652b6e307b8f347665e2c615540d4b42b3b4c8a321d8286da7e520f", size = 1151142, upload-time = "2025-08-07T13:18:22.981Z" },
+ { url = "https://files.pythonhosted.org/packages/27/45/80935968b53cfd3f33cf99ea5f08227f2646e044568c9b1555b58ffd61c2/greenlet-3.2.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:ee7a6ec486883397d70eec05059353b8e83eca9168b9f3f9a361971e77e0bcd0", size = 1564846, upload-time = "2025-11-04T12:42:15.191Z" },
+ { url = "https://files.pythonhosted.org/packages/69/02/b7c30e5e04752cb4db6202a3858b149c0710e5453b71a3b2aec5d78a1aab/greenlet-3.2.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:326d234cbf337c9c3def0676412eb7040a35a768efc92504b947b3e9cfc7543d", size = 1633814, upload-time = "2025-11-04T12:42:17.175Z" },
{ url = "https://files.pythonhosted.org/packages/e9/08/b0814846b79399e585f974bbeebf5580fbe59e258ea7be64d9dfb253c84f/greenlet-3.2.4-cp312-cp312-win_amd64.whl", hash = "sha256:a7d4e128405eea3814a12cc2605e0e6aedb4035bf32697f72deca74de4105e02", size = 299899, upload-time = "2025-08-07T13:38:53.448Z" },
{ url = "https://files.pythonhosted.org/packages/49/e8/58c7f85958bda41dafea50497cbd59738c5c43dbbea5ee83d651234398f4/greenlet-3.2.4-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:1a921e542453fe531144e91e1feedf12e07351b1cf6c9e8a3325ea600a715a31", size = 272814, upload-time = "2025-08-07T13:15:50.011Z" },
{ url = "https://files.pythonhosted.org/packages/62/dd/b9f59862e9e257a16e4e610480cfffd29e3fae018a68c2332090b53aac3d/greenlet-3.2.4-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cd3c8e693bff0fff6ba55f140bf390fa92c994083f838fece0f63be121334945", size = 641073, upload-time = "2025-08-07T13:42:57.23Z" },
@@ -1494,6 +1454,8 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/ee/43/3cecdc0349359e1a527cbf2e3e28e5f8f06d3343aaf82ca13437a9aa290f/greenlet-3.2.4-cp313-cp313-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:23768528f2911bcd7e475210822ffb5254ed10d71f4028387e5a99b4c6699671", size = 610497, upload-time = "2025-08-07T13:18:31.636Z" },
{ url = "https://files.pythonhosted.org/packages/b8/19/06b6cf5d604e2c382a6f31cafafd6f33d5dea706f4db7bdab184bad2b21d/greenlet-3.2.4-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:00fadb3fedccc447f517ee0d3fd8fe49eae949e1cd0f6a611818f4f6fb7dc83b", size = 1121662, upload-time = "2025-08-07T13:42:41.117Z" },
{ url = "https://files.pythonhosted.org/packages/a2/15/0d5e4e1a66fab130d98168fe984c509249c833c1a3c16806b90f253ce7b9/greenlet-3.2.4-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:d25c5091190f2dc0eaa3f950252122edbbadbb682aa7b1ef2f8af0f8c0afefae", size = 1149210, upload-time = "2025-08-07T13:18:24.072Z" },
+ { url = "https://files.pythonhosted.org/packages/1c/53/f9c440463b3057485b8594d7a638bed53ba531165ef0ca0e6c364b5cc807/greenlet-3.2.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:6e343822feb58ac4d0a1211bd9399de2b3a04963ddeec21530fc426cc121f19b", size = 1564759, upload-time = "2025-11-04T12:42:19.395Z" },
+ { url = "https://files.pythonhosted.org/packages/47/e4/3bb4240abdd0a8d23f4f88adec746a3099f0d86bfedb623f063b2e3b4df0/greenlet-3.2.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:ca7f6f1f2649b89ce02f6f229d7c19f680a6238af656f61e0115b24857917929", size = 1634288, upload-time = "2025-11-04T12:42:21.174Z" },
{ url = "https://files.pythonhosted.org/packages/0b/55/2321e43595e6801e105fcfdee02b34c0f996eb71e6ddffca6b10b7e1d771/greenlet-3.2.4-cp313-cp313-win_amd64.whl", hash = "sha256:554b03b6e73aaabec3745364d6239e9e012d64c68ccd0b8430c64ccc14939a8b", size = 299685, upload-time = "2025-08-07T13:24:38.824Z" },
{ url = "https://files.pythonhosted.org/packages/22/5c/85273fd7cc388285632b0498dbbab97596e04b154933dfe0f3e68156c68c/greenlet-3.2.4-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:49a30d5fda2507ae77be16479bdb62a660fa51b1eb4928b524975b3bde77b3c0", size = 273586, upload-time = "2025-08-07T13:16:08.004Z" },
{ url = "https://files.pythonhosted.org/packages/d1/75/10aeeaa3da9332c2e761e4c50d4c3556c21113ee3f0afa2cf5769946f7a3/greenlet-3.2.4-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:299fd615cd8fc86267b47597123e3f43ad79c9d8a22bebdce535e53550763e2f", size = 686346, upload-time = "2025-08-07T13:42:59.944Z" },
@@ -1501,6 +1463,8 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/dc/8b/29aae55436521f1d6f8ff4e12fb676f3400de7fcf27fccd1d4d17fd8fecd/greenlet-3.2.4-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b4a1870c51720687af7fa3e7cda6d08d801dae660f75a76f3845b642b4da6ee1", size = 694659, upload-time = "2025-08-07T13:53:17.759Z" },
{ url = "https://files.pythonhosted.org/packages/92/2e/ea25914b1ebfde93b6fc4ff46d6864564fba59024e928bdc7de475affc25/greenlet-3.2.4-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:061dc4cf2c34852b052a8620d40f36324554bc192be474b9e9770e8c042fd735", size = 695355, upload-time = "2025-08-07T13:18:34.517Z" },
{ url = "https://files.pythonhosted.org/packages/72/60/fc56c62046ec17f6b0d3060564562c64c862948c9d4bc8aa807cf5bd74f4/greenlet-3.2.4-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:44358b9bf66c8576a9f57a590d5f5d6e72fa4228b763d0e43fee6d3b06d3a337", size = 657512, upload-time = "2025-08-07T13:18:33.969Z" },
+ { url = "https://files.pythonhosted.org/packages/23/6e/74407aed965a4ab6ddd93a7ded3180b730d281c77b765788419484cdfeef/greenlet-3.2.4-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:2917bdf657f5859fbf3386b12d68ede4cf1f04c90c3a6bc1f013dd68a22e2269", size = 1612508, upload-time = "2025-11-04T12:42:23.427Z" },
+ { url = "https://files.pythonhosted.org/packages/0d/da/343cd760ab2f92bac1845ca07ee3faea9fe52bee65f7bcb19f16ad7de08b/greenlet-3.2.4-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:015d48959d4add5d6c9f6c5210ee3803a830dce46356e3bc326d6776bde54681", size = 1680760, upload-time = "2025-11-04T12:42:25.341Z" },
{ url = "https://files.pythonhosted.org/packages/e3/a5/6ddab2b4c112be95601c13428db1d8b6608a8b6039816f2ba09c346c08fc/greenlet-3.2.4-cp314-cp314-win_amd64.whl", hash = "sha256:e37ab26028f12dbb0ff65f29a8d3d44a765c61e729647bf2ddfbbed621726f01", size = 303425, upload-time = "2025-08-07T13:32:27.59Z" },
]
@@ -2299,7 +2263,7 @@ wheels = [
[[package]]
name = "langchain-classic"
-version = "1.0.0a1"
+version = "1.0.0"
source = { editable = "." }
dependencies = [
{ name = "async-timeout", marker = "python_full_version < '3.11'" },
@@ -2334,6 +2298,9 @@ google-vertexai = [
groq = [
{ name = "langchain-groq" },
]
+huggingface = [
+ { name = "langchain-huggingface" },
+]
mistralai = [
{ name = "langchain-mistralai" },
]
@@ -2368,7 +2335,6 @@ lint = [
test = [
{ name = "blockbuster" },
{ name = "cffi" },
- { name = "duckdb-engine" },
{ name = "freezegun" },
{ name = "langchain-core" },
{ name = "langchain-openai" },
@@ -2428,6 +2394,7 @@ requires-dist = [
{ name = "langchain-google-genai", marker = "extra == 'google-genai'" },
{ name = "langchain-google-vertexai", marker = "extra == 'google-vertexai'" },
{ name = "langchain-groq", marker = "extra == 'groq'" },
+ { name = "langchain-huggingface", marker = "extra == 'huggingface'" },
{ name = "langchain-mistralai", marker = "extra == 'mistralai'" },
{ name = "langchain-ollama", marker = "extra == 'ollama'" },
{ name = "langchain-openai", marker = "extra == 'openai'", editable = "../partners/openai" },
@@ -2441,7 +2408,7 @@ requires-dist = [
{ name = "requests", specifier = ">=2.0.0,<3.0.0" },
{ name = "sqlalchemy", specifier = ">=1.4.0,<3.0.0" },
]
-provides-extras = ["anthropic", "openai", "google-vertexai", "google-genai", "fireworks", "ollama", "together", "mistralai", "groq", "aws", "deepseek", "xai", "perplexity"]
+provides-extras = ["anthropic", "openai", "google-vertexai", "google-genai", "fireworks", "ollama", "together", "mistralai", "huggingface", "groq", "aws", "deepseek", "xai", "perplexity"]
[package.metadata.requires-dev]
dev = [
@@ -2460,7 +2427,6 @@ test = [
{ name = "blockbuster", specifier = ">=1.5.18,<1.6.0" },
{ name = "cffi", marker = "python_full_version < '3.10'", specifier = "<1.17.1" },
{ name = "cffi", marker = "python_full_version >= '3.10'" },
- { name = "duckdb-engine", specifier = ">=0.9.2,<1.0.0" },
{ name = "freezegun", specifier = ">=1.2.2,<2.0.0" },
{ name = "langchain-core", editable = "../core" },
{ name = "langchain-openai", editable = "../partners/openai" },
@@ -2512,7 +2478,7 @@ typing = [
[[package]]
name = "langchain-core"
-version = "1.0.0a8"
+version = "1.0.3"
source = { editable = "../core" }
dependencies = [
{ name = "jsonpatch" },
@@ -2546,6 +2512,7 @@ test = [
{ name = "blockbuster", specifier = ">=1.5.18,<1.6.0" },
{ name = "freezegun", specifier = ">=1.2.2,<2.0.0" },
{ name = "grandalf", specifier = ">=0.8.0,<1.0.0" },
+ { name = "langchain-model-profiles", directory = "../model-profiles" },
{ name = "langchain-tests", directory = "../standard-tests" },
{ name = "numpy", marker = "python_full_version < '3.13'", specifier = ">=1.26.4" },
{ name = "numpy", marker = "python_full_version >= '3.13'", specifier = ">=2.1.0" },
@@ -2562,6 +2529,7 @@ test = [
]
test-integration = []
typing = [
+ { name = "langchain-model-profiles", directory = "../model-profiles" },
{ name = "langchain-text-splitters", directory = "../text-splitters" },
{ name = "mypy", specifier = ">=1.18.1,<1.19.0" },
{ name = "types-pyyaml", specifier = ">=6.0.12.2,<7.0.0.0" },
@@ -2646,6 +2614,20 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/d7/7c/3bbe58c4da4fd2e86ac35d7305027e1fe9cba49e0d7a9271ae90ea21b47a/langchain_groq-1.0.0a1-py3-none-any.whl", hash = "sha256:2067e25f2be394a5cde6270d047757c2feb622f9f419d704778c355ce8d9d084", size = 17516, upload-time = "2025-10-02T23:21:33.892Z" },
]
+[[package]]
+name = "langchain-huggingface"
+version = "1.0.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "huggingface-hub" },
+ { name = "langchain-core" },
+ { name = "tokenizers" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/1d/0d/2a22659534cc70410573a53d32756cf7971de7923ba2ccf940c03ecbe12a/langchain_huggingface-1.0.0.tar.gz", hash = "sha256:0d2eb924ff77dc08bb7dd340ab10d47c1b71372e4297bc12e84c4a66df9a4414", size = 247750, upload-time = "2025-10-17T15:30:35.179Z" }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/ac/30/7476a926e31498ebc4be3131e06650c5136ffb8ba12067f56212e187dd7c/langchain_huggingface-1.0.0-py3-none-any.whl", hash = "sha256:06d6ac57951c6a1c47d329c38f2b32472a839eab2fa14883be784916ea075da0", size = 27493, upload-time = "2025-10-17T15:30:34.023Z" },
+]
+
[[package]]
name = "langchain-mistralai"
version = "0.2.12"
@@ -2677,7 +2659,7 @@ wheels = [
[[package]]
name = "langchain-openai"
-version = "1.0.0a4"
+version = "1.0.2"
source = { editable = "../partners/openai" }
dependencies = [
{ name = "langchain-core" },
@@ -2697,6 +2679,7 @@ dev = [{ name = "langchain-core", editable = "../core" }]
lint = [{ name = "ruff", specifier = ">=0.13.1,<0.14.0" }]
test = [
{ name = "freezegun", specifier = ">=1.2.2,<2.0.0" },
+ { name = "langchain", editable = "../langchain_v1" },
{ name = "langchain-core", editable = "../core" },
{ name = "langchain-tests", editable = "../standard-tests" },
{ name = "numpy", marker = "python_full_version < '3.13'", specifier = ">=1.26.4" },
@@ -2716,7 +2699,7 @@ test-integration = [
{ name = "httpx", specifier = ">=0.27.0,<1.0.0" },
{ name = "numpy", marker = "python_full_version < '3.13'", specifier = ">=1.26.4" },
{ name = "numpy", marker = "python_full_version >= '3.13'", specifier = ">=2.1.0" },
- { name = "pillow", specifier = ">=10.3.0,<11.0.0" },
+ { name = "pillow", specifier = ">=10.3.0,<12.0.0" },
]
typing = [
{ name = "langchain-core", editable = "../core" },
@@ -2739,7 +2722,7 @@ wheels = [
[[package]]
name = "langchain-tests"
-version = "1.0.0a2"
+version = "1.0.1"
source = { editable = "../standard-tests" }
dependencies = [
{ name = "httpx" },
@@ -2784,7 +2767,7 @@ typing = [
[[package]]
name = "langchain-text-splitters"
-version = "1.0.0a1"
+version = "1.0.0"
source = { editable = "../text-splitters" }
dependencies = [
{ name = "langchain-core" },
@@ -2817,8 +2800,8 @@ test-integration = [
{ name = "nltk", specifier = ">=3.9.1,<4.0.0" },
{ name = "scipy", marker = "python_full_version == '3.12.*'", specifier = ">=1.7.0,<2.0.0" },
{ name = "scipy", marker = "python_full_version >= '3.13'", specifier = ">=1.14.1,<2.0.0" },
- { name = "sentence-transformers", specifier = ">=3.0.1,<4.0.0" },
- { name = "spacy", specifier = ">=3.8.7,<4.0.0" },
+ { name = "sentence-transformers", marker = "python_full_version < '3.14'", specifier = ">=3.0.1,<4.0.0" },
+ { name = "spacy", marker = "python_full_version < '3.14'", specifier = ">=3.8.7,<4.0.0" },
{ name = "thinc", specifier = ">=8.3.6,<9.0.0" },
{ name = "tiktoken", specifier = ">=0.8.0,<1.0.0" },
{ name = "transformers", specifier = ">=4.51.3,<5.0.0" },
@@ -3701,7 +3684,7 @@ wheels = [
[[package]]
name = "pandas"
-version = "2.3.2"
+version = "2.3.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "numpy", version = "1.26.4", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.12'" },
@@ -3710,42 +3693,55 @@ dependencies = [
{ name = "pytz" },
{ name = "tzdata" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/79/8e/0e90233ac205ad182bd6b422532695d2b9414944a280488105d598c70023/pandas-2.3.2.tar.gz", hash = "sha256:ab7b58f8f82706890924ccdfb5f48002b83d2b5a3845976a9fb705d36c34dcdb", size = 4488684, upload-time = "2025-08-21T10:28:29.257Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/33/01/d40b85317f86cf08d853a4f495195c73815fdf205eef3993821720274518/pandas-2.3.3.tar.gz", hash = "sha256:e05e1af93b977f7eafa636d043f9f94c7ee3ac81af99c13508215942e64c993b", size = 4495223, upload-time = "2025-09-29T23:34:51.853Z" }
wheels = [
- { url = "https://files.pythonhosted.org/packages/2e/16/a8eeb70aad84ccbf14076793f90e0031eded63c1899aeae9fdfbf37881f4/pandas-2.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:52bc29a946304c360561974c6542d1dd628ddafa69134a7131fdfd6a5d7a1a35", size = 11539648, upload-time = "2025-08-21T10:26:36.236Z" },
- { url = "https://files.pythonhosted.org/packages/47/f1/c5bdaea13bf3708554d93e948b7ea74121ce6e0d59537ca4c4f77731072b/pandas-2.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:220cc5c35ffaa764dd5bb17cf42df283b5cb7fdf49e10a7b053a06c9cb48ee2b", size = 10786923, upload-time = "2025-08-21T10:26:40.518Z" },
- { url = "https://files.pythonhosted.org/packages/bb/10/811fa01476d29ffed692e735825516ad0e56d925961819e6126b4ba32147/pandas-2.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42c05e15111221384019897df20c6fe893b2f697d03c811ee67ec9e0bb5a3424", size = 11726241, upload-time = "2025-08-21T10:26:43.175Z" },
- { url = "https://files.pythonhosted.org/packages/c4/6a/40b043b06e08df1ea1b6d20f0e0c2f2c4ec8c4f07d1c92948273d943a50b/pandas-2.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cc03acc273c5515ab69f898df99d9d4f12c4d70dbfc24c3acc6203751d0804cf", size = 12349533, upload-time = "2025-08-21T10:26:46.611Z" },
- { url = "https://files.pythonhosted.org/packages/e2/ea/2e081a2302e41a9bca7056659fdd2b85ef94923723e41665b42d65afd347/pandas-2.3.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:d25c20a03e8870f6339bcf67281b946bd20b86f1a544ebbebb87e66a8d642cba", size = 13202407, upload-time = "2025-08-21T10:26:49.068Z" },
- { url = "https://files.pythonhosted.org/packages/f4/12/7ff9f6a79e2ee8869dcf70741ef998b97ea20050fe25f83dc759764c1e32/pandas-2.3.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:21bb612d148bb5860b7eb2c10faacf1a810799245afd342cf297d7551513fbb6", size = 13837212, upload-time = "2025-08-21T10:26:51.832Z" },
- { url = "https://files.pythonhosted.org/packages/d8/df/5ab92fcd76455a632b3db34a746e1074d432c0cdbbd28d7cd1daba46a75d/pandas-2.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:b62d586eb25cb8cb70a5746a378fc3194cb7f11ea77170d59f889f5dfe3cec7a", size = 11338099, upload-time = "2025-08-21T10:26:54.382Z" },
- { url = "https://files.pythonhosted.org/packages/7a/59/f3e010879f118c2d400902d2d871c2226cef29b08c09fb8dc41111730400/pandas-2.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:1333e9c299adcbb68ee89a9bb568fc3f20f9cbb419f1dd5225071e6cddb2a743", size = 11563308, upload-time = "2025-08-21T10:26:56.656Z" },
- { url = "https://files.pythonhosted.org/packages/38/18/48f10f1cc5c397af59571d638d211f494dba481f449c19adbd282aa8f4ca/pandas-2.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:76972bcbd7de8e91ad5f0ca884a9f2c477a2125354af624e022c49e5bd0dfff4", size = 10820319, upload-time = "2025-08-21T10:26:59.162Z" },
- { url = "https://files.pythonhosted.org/packages/95/3b/1e9b69632898b048e223834cd9702052bcf06b15e1ae716eda3196fb972e/pandas-2.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b98bdd7c456a05eef7cd21fd6b29e3ca243591fe531c62be94a2cc987efb5ac2", size = 11790097, upload-time = "2025-08-21T10:27:02.204Z" },
- { url = "https://files.pythonhosted.org/packages/8b/ef/0e2ffb30b1f7fbc9a588bd01e3c14a0d96854d09a887e15e30cc19961227/pandas-2.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1d81573b3f7db40d020983f78721e9bfc425f411e616ef019a10ebf597aedb2e", size = 12397958, upload-time = "2025-08-21T10:27:05.409Z" },
- { url = "https://files.pythonhosted.org/packages/23/82/e6b85f0d92e9afb0e7f705a51d1399b79c7380c19687bfbf3d2837743249/pandas-2.3.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:e190b738675a73b581736cc8ec71ae113d6c3768d0bd18bffa5b9a0927b0b6ea", size = 13225600, upload-time = "2025-08-21T10:27:07.791Z" },
- { url = "https://files.pythonhosted.org/packages/e8/f1/f682015893d9ed51611948bd83683670842286a8edd4f68c2c1c3b231eef/pandas-2.3.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:c253828cb08f47488d60f43c5fc95114c771bbfff085da54bfc79cb4f9e3a372", size = 13879433, upload-time = "2025-08-21T10:27:10.347Z" },
- { url = "https://files.pythonhosted.org/packages/a7/e7/ae86261695b6c8a36d6a4c8d5f9b9ede8248510d689a2f379a18354b37d7/pandas-2.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:9467697b8083f9667b212633ad6aa4ab32436dcbaf4cd57325debb0ddef2012f", size = 11336557, upload-time = "2025-08-21T10:27:12.983Z" },
- { url = "https://files.pythonhosted.org/packages/ec/db/614c20fb7a85a14828edd23f1c02db58a30abf3ce76f38806155d160313c/pandas-2.3.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:3fbb977f802156e7a3f829e9d1d5398f6192375a3e2d1a9ee0803e35fe70a2b9", size = 11587652, upload-time = "2025-08-21T10:27:15.888Z" },
- { url = "https://files.pythonhosted.org/packages/99/b0/756e52f6582cade5e746f19bad0517ff27ba9c73404607c0306585c201b3/pandas-2.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1b9b52693123dd234b7c985c68b709b0b009f4521000d0525f2b95c22f15944b", size = 10717686, upload-time = "2025-08-21T10:27:18.486Z" },
- { url = "https://files.pythonhosted.org/packages/37/4c/dd5ccc1e357abfeee8353123282de17997f90ff67855f86154e5a13b81e5/pandas-2.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bd281310d4f412733f319a5bc552f86d62cddc5f51d2e392c8787335c994175", size = 11278722, upload-time = "2025-08-21T10:27:21.149Z" },
- { url = "https://files.pythonhosted.org/packages/d3/a4/f7edcfa47e0a88cda0be8b068a5bae710bf264f867edfdf7b71584ace362/pandas-2.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96d31a6b4354e3b9b8a2c848af75d31da390657e3ac6f30c05c82068b9ed79b9", size = 11987803, upload-time = "2025-08-21T10:27:23.767Z" },
- { url = "https://files.pythonhosted.org/packages/f6/61/1bce4129f93ab66f1c68b7ed1c12bac6a70b1b56c5dab359c6bbcd480b52/pandas-2.3.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:df4df0b9d02bb873a106971bb85d448378ef14b86ba96f035f50bbd3688456b4", size = 12766345, upload-time = "2025-08-21T10:27:26.6Z" },
- { url = "https://files.pythonhosted.org/packages/8e/46/80d53de70fee835531da3a1dae827a1e76e77a43ad22a8cd0f8142b61587/pandas-2.3.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:213a5adf93d020b74327cb2c1b842884dbdd37f895f42dcc2f09d451d949f811", size = 13439314, upload-time = "2025-08-21T10:27:29.213Z" },
- { url = "https://files.pythonhosted.org/packages/28/30/8114832daff7489f179971dbc1d854109b7f4365a546e3ea75b6516cea95/pandas-2.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:8c13b81a9347eb8c7548f53fd9a4f08d4dfe996836543f805c987bafa03317ae", size = 10983326, upload-time = "2025-08-21T10:27:31.901Z" },
- { url = "https://files.pythonhosted.org/packages/27/64/a2f7bf678af502e16b472527735d168b22b7824e45a4d7e96a4fbb634b59/pandas-2.3.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:0c6ecbac99a354a051ef21c5307601093cb9e0f4b1855984a084bfec9302699e", size = 11531061, upload-time = "2025-08-21T10:27:34.647Z" },
- { url = "https://files.pythonhosted.org/packages/54/4c/c3d21b2b7769ef2f4c2b9299fcadd601efa6729f1357a8dbce8dd949ed70/pandas-2.3.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:c6f048aa0fd080d6a06cc7e7537c09b53be6642d330ac6f54a600c3ace857ee9", size = 10668666, upload-time = "2025-08-21T10:27:37.203Z" },
- { url = "https://files.pythonhosted.org/packages/50/e2/f775ba76ecfb3424d7f5862620841cf0edb592e9abd2d2a5387d305fe7a8/pandas-2.3.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0064187b80a5be6f2f9c9d6bdde29372468751dfa89f4211a3c5871854cfbf7a", size = 11332835, upload-time = "2025-08-21T10:27:40.188Z" },
- { url = "https://files.pythonhosted.org/packages/8f/52/0634adaace9be2d8cac9ef78f05c47f3a675882e068438b9d7ec7ef0c13f/pandas-2.3.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4ac8c320bded4718b298281339c1a50fb00a6ba78cb2a63521c39bec95b0209b", size = 12057211, upload-time = "2025-08-21T10:27:43.117Z" },
- { url = "https://files.pythonhosted.org/packages/0b/9d/2df913f14b2deb9c748975fdb2491da1a78773debb25abbc7cbc67c6b549/pandas-2.3.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:114c2fe4f4328cf98ce5716d1532f3ab79c5919f95a9cfee81d9140064a2e4d6", size = 12749277, upload-time = "2025-08-21T10:27:45.474Z" },
- { url = "https://files.pythonhosted.org/packages/87/af/da1a2417026bd14d98c236dba88e39837182459d29dcfcea510b2ac9e8a1/pandas-2.3.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:48fa91c4dfb3b2b9bfdb5c24cd3567575f4e13f9636810462ffed8925352be5a", size = 13415256, upload-time = "2025-08-21T10:27:49.885Z" },
- { url = "https://files.pythonhosted.org/packages/22/3c/f2af1ce8840ef648584a6156489636b5692c162771918aa95707c165ad2b/pandas-2.3.2-cp313-cp313-win_amd64.whl", hash = "sha256:12d039facec710f7ba305786837d0225a3444af7bbd9c15c32ca2d40d157ed8b", size = 10982579, upload-time = "2025-08-21T10:28:08.435Z" },
- { url = "https://files.pythonhosted.org/packages/f3/98/8df69c4097a6719e357dc249bf437b8efbde808038268e584421696cbddf/pandas-2.3.2-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:c624b615ce97864eb588779ed4046186f967374185c047070545253a52ab2d57", size = 12028163, upload-time = "2025-08-21T10:27:52.232Z" },
- { url = "https://files.pythonhosted.org/packages/0e/23/f95cbcbea319f349e10ff90db488b905c6883f03cbabd34f6b03cbc3c044/pandas-2.3.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:0cee69d583b9b128823d9514171cabb6861e09409af805b54459bd0c821a35c2", size = 11391860, upload-time = "2025-08-21T10:27:54.673Z" },
- { url = "https://files.pythonhosted.org/packages/ad/1b/6a984e98c4abee22058aa75bfb8eb90dce58cf8d7296f8bc56c14bc330b0/pandas-2.3.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2319656ed81124982900b4c37f0e0c58c015af9a7bbc62342ba5ad07ace82ba9", size = 11309830, upload-time = "2025-08-21T10:27:56.957Z" },
- { url = "https://files.pythonhosted.org/packages/15/d5/f0486090eb18dd8710bf60afeaf638ba6817047c0c8ae5c6a25598665609/pandas-2.3.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b37205ad6f00d52f16b6d09f406434ba928c1a1966e2771006a9033c736d30d2", size = 11883216, upload-time = "2025-08-21T10:27:59.302Z" },
- { url = "https://files.pythonhosted.org/packages/10/86/692050c119696da19e20245bbd650d8dfca6ceb577da027c3a73c62a047e/pandas-2.3.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:837248b4fc3a9b83b9c6214699a13f069dc13510a6a6d7f9ba33145d2841a012", size = 12699743, upload-time = "2025-08-21T10:28:02.447Z" },
- { url = "https://files.pythonhosted.org/packages/cd/d7/612123674d7b17cf345aad0a10289b2a384bff404e0463a83c4a3a59d205/pandas-2.3.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:d2c3554bd31b731cd6490d94a28f3abb8dd770634a9e06eb6d2911b9827db370", size = 13186141, upload-time = "2025-08-21T10:28:05.377Z" },
+ { url = "https://files.pythonhosted.org/packages/3d/f7/f425a00df4fcc22b292c6895c6831c0c8ae1d9fac1e024d16f98a9ce8749/pandas-2.3.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:376c6446ae31770764215a6c937f72d917f214b43560603cd60da6408f183b6c", size = 11555763, upload-time = "2025-09-29T23:16:53.287Z" },
+ { url = "https://files.pythonhosted.org/packages/13/4f/66d99628ff8ce7857aca52fed8f0066ce209f96be2fede6cef9f84e8d04f/pandas-2.3.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e19d192383eab2f4ceb30b412b22ea30690c9e618f78870357ae1d682912015a", size = 10801217, upload-time = "2025-09-29T23:17:04.522Z" },
+ { url = "https://files.pythonhosted.org/packages/1d/03/3fc4a529a7710f890a239cc496fc6d50ad4a0995657dccc1d64695adb9f4/pandas-2.3.3-cp310-cp310-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5caf26f64126b6c7aec964f74266f435afef1c1b13da3b0636c7518a1fa3e2b1", size = 12148791, upload-time = "2025-09-29T23:17:18.444Z" },
+ { url = "https://files.pythonhosted.org/packages/40/a8/4dac1f8f8235e5d25b9955d02ff6f29396191d4e665d71122c3722ca83c5/pandas-2.3.3-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:dd7478f1463441ae4ca7308a70e90b33470fa593429f9d4c578dd00d1fa78838", size = 12769373, upload-time = "2025-09-29T23:17:35.846Z" },
+ { url = "https://files.pythonhosted.org/packages/df/91/82cc5169b6b25440a7fc0ef3a694582418d875c8e3ebf796a6d6470aa578/pandas-2.3.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:4793891684806ae50d1288c9bae9330293ab4e083ccd1c5e383c34549c6e4250", size = 13200444, upload-time = "2025-09-29T23:17:49.341Z" },
+ { url = "https://files.pythonhosted.org/packages/10/ae/89b3283800ab58f7af2952704078555fa60c807fff764395bb57ea0b0dbd/pandas-2.3.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:28083c648d9a99a5dd035ec125d42439c6c1c525098c58af0fc38dd1a7a1b3d4", size = 13858459, upload-time = "2025-09-29T23:18:03.722Z" },
+ { url = "https://files.pythonhosted.org/packages/85/72/530900610650f54a35a19476eca5104f38555afccda1aa11a92ee14cb21d/pandas-2.3.3-cp310-cp310-win_amd64.whl", hash = "sha256:503cf027cf9940d2ceaa1a93cfb5f8c8c7e6e90720a2850378f0b3f3b1e06826", size = 11346086, upload-time = "2025-09-29T23:18:18.505Z" },
+ { url = "https://files.pythonhosted.org/packages/c1/fa/7ac648108144a095b4fb6aa3de1954689f7af60a14cf25583f4960ecb878/pandas-2.3.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:602b8615ebcc4a0c1751e71840428ddebeb142ec02c786e8ad6b1ce3c8dec523", size = 11578790, upload-time = "2025-09-29T23:18:30.065Z" },
+ { url = "https://files.pythonhosted.org/packages/9b/35/74442388c6cf008882d4d4bdfc4109be87e9b8b7ccd097ad1e7f006e2e95/pandas-2.3.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:8fe25fc7b623b0ef6b5009149627e34d2a4657e880948ec3c840e9402e5c1b45", size = 10833831, upload-time = "2025-09-29T23:38:56.071Z" },
+ { url = "https://files.pythonhosted.org/packages/fe/e4/de154cbfeee13383ad58d23017da99390b91d73f8c11856f2095e813201b/pandas-2.3.3-cp311-cp311-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b468d3dad6ff947df92dcb32ede5b7bd41a9b3cceef0a30ed925f6d01fb8fa66", size = 12199267, upload-time = "2025-09-29T23:18:41.627Z" },
+ { url = "https://files.pythonhosted.org/packages/bf/c9/63f8d545568d9ab91476b1818b4741f521646cbdd151c6efebf40d6de6f7/pandas-2.3.3-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b98560e98cb334799c0b07ca7967ac361a47326e9b4e5a7dfb5ab2b1c9d35a1b", size = 12789281, upload-time = "2025-09-29T23:18:56.834Z" },
+ { url = "https://files.pythonhosted.org/packages/f2/00/a5ac8c7a0e67fd1a6059e40aa08fa1c52cc00709077d2300e210c3ce0322/pandas-2.3.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1d37b5848ba49824e5c30bedb9c830ab9b7751fd049bc7914533e01c65f79791", size = 13240453, upload-time = "2025-09-29T23:19:09.247Z" },
+ { url = "https://files.pythonhosted.org/packages/27/4d/5c23a5bc7bd209231618dd9e606ce076272c9bc4f12023a70e03a86b4067/pandas-2.3.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:db4301b2d1f926ae677a751eb2bd0e8c5f5319c9cb3f88b0becbbb0b07b34151", size = 13890361, upload-time = "2025-09-29T23:19:25.342Z" },
+ { url = "https://files.pythonhosted.org/packages/8e/59/712db1d7040520de7a4965df15b774348980e6df45c129b8c64d0dbe74ef/pandas-2.3.3-cp311-cp311-win_amd64.whl", hash = "sha256:f086f6fe114e19d92014a1966f43a3e62285109afe874f067f5abbdcbb10e59c", size = 11348702, upload-time = "2025-09-29T23:19:38.296Z" },
+ { url = "https://files.pythonhosted.org/packages/9c/fb/231d89e8637c808b997d172b18e9d4a4bc7bf31296196c260526055d1ea0/pandas-2.3.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:6d21f6d74eb1725c2efaa71a2bfc661a0689579b58e9c0ca58a739ff0b002b53", size = 11597846, upload-time = "2025-09-29T23:19:48.856Z" },
+ { url = "https://files.pythonhosted.org/packages/5c/bd/bf8064d9cfa214294356c2d6702b716d3cf3bb24be59287a6a21e24cae6b/pandas-2.3.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3fd2f887589c7aa868e02632612ba39acb0b8948faf5cc58f0850e165bd46f35", size = 10729618, upload-time = "2025-09-29T23:39:08.659Z" },
+ { url = "https://files.pythonhosted.org/packages/57/56/cf2dbe1a3f5271370669475ead12ce77c61726ffd19a35546e31aa8edf4e/pandas-2.3.3-cp312-cp312-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ecaf1e12bdc03c86ad4a7ea848d66c685cb6851d807a26aa245ca3d2017a1908", size = 11737212, upload-time = "2025-09-29T23:19:59.765Z" },
+ { url = "https://files.pythonhosted.org/packages/e5/63/cd7d615331b328e287d8233ba9fdf191a9c2d11b6af0c7a59cfcec23de68/pandas-2.3.3-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b3d11d2fda7eb164ef27ffc14b4fcab16a80e1ce67e9f57e19ec0afaf715ba89", size = 12362693, upload-time = "2025-09-29T23:20:14.098Z" },
+ { url = "https://files.pythonhosted.org/packages/a6/de/8b1895b107277d52f2b42d3a6806e69cfef0d5cf1d0ba343470b9d8e0a04/pandas-2.3.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a68e15f780eddf2b07d242e17a04aa187a7ee12b40b930bfdd78070556550e98", size = 12771002, upload-time = "2025-09-29T23:20:26.76Z" },
+ { url = "https://files.pythonhosted.org/packages/87/21/84072af3187a677c5893b170ba2c8fbe450a6ff911234916da889b698220/pandas-2.3.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:371a4ab48e950033bcf52b6527eccb564f52dc826c02afd9a1bc0ab731bba084", size = 13450971, upload-time = "2025-09-29T23:20:41.344Z" },
+ { url = "https://files.pythonhosted.org/packages/86/41/585a168330ff063014880a80d744219dbf1dd7a1c706e75ab3425a987384/pandas-2.3.3-cp312-cp312-win_amd64.whl", hash = "sha256:a16dcec078a01eeef8ee61bf64074b4e524a2a3f4b3be9326420cabe59c4778b", size = 10992722, upload-time = "2025-09-29T23:20:54.139Z" },
+ { url = "https://files.pythonhosted.org/packages/cd/4b/18b035ee18f97c1040d94debd8f2e737000ad70ccc8f5513f4eefad75f4b/pandas-2.3.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:56851a737e3470de7fa88e6131f41281ed440d29a9268dcbf0002da5ac366713", size = 11544671, upload-time = "2025-09-29T23:21:05.024Z" },
+ { url = "https://files.pythonhosted.org/packages/31/94/72fac03573102779920099bcac1c3b05975c2cb5f01eac609faf34bed1ca/pandas-2.3.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:bdcd9d1167f4885211e401b3036c0c8d9e274eee67ea8d0758a256d60704cfe8", size = 10680807, upload-time = "2025-09-29T23:21:15.979Z" },
+ { url = "https://files.pythonhosted.org/packages/16/87/9472cf4a487d848476865321de18cc8c920b8cab98453ab79dbbc98db63a/pandas-2.3.3-cp313-cp313-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e32e7cc9af0f1cc15548288a51a3b681cc2a219faa838e995f7dc53dbab1062d", size = 11709872, upload-time = "2025-09-29T23:21:27.165Z" },
+ { url = "https://files.pythonhosted.org/packages/15/07/284f757f63f8a8d69ed4472bfd85122bd086e637bf4ed09de572d575a693/pandas-2.3.3-cp313-cp313-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:318d77e0e42a628c04dc56bcef4b40de67918f7041c2b061af1da41dcff670ac", size = 12306371, upload-time = "2025-09-29T23:21:40.532Z" },
+ { url = "https://files.pythonhosted.org/packages/33/81/a3afc88fca4aa925804a27d2676d22dcd2031c2ebe08aabd0ae55b9ff282/pandas-2.3.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:4e0a175408804d566144e170d0476b15d78458795bb18f1304fb94160cabf40c", size = 12765333, upload-time = "2025-09-29T23:21:55.77Z" },
+ { url = "https://files.pythonhosted.org/packages/8d/0f/b4d4ae743a83742f1153464cf1a8ecfafc3ac59722a0b5c8602310cb7158/pandas-2.3.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:93c2d9ab0fc11822b5eece72ec9587e172f63cff87c00b062f6e37448ced4493", size = 13418120, upload-time = "2025-09-29T23:22:10.109Z" },
+ { url = "https://files.pythonhosted.org/packages/4f/c7/e54682c96a895d0c808453269e0b5928a07a127a15704fedb643e9b0a4c8/pandas-2.3.3-cp313-cp313-win_amd64.whl", hash = "sha256:f8bfc0e12dc78f777f323f55c58649591b2cd0c43534e8355c51d3fede5f4dee", size = 10993991, upload-time = "2025-09-29T23:25:04.889Z" },
+ { url = "https://files.pythonhosted.org/packages/f9/ca/3f8d4f49740799189e1395812f3bf23b5e8fc7c190827d55a610da72ce55/pandas-2.3.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:75ea25f9529fdec2d2e93a42c523962261e567d250b0013b16210e1d40d7c2e5", size = 12048227, upload-time = "2025-09-29T23:22:24.343Z" },
+ { url = "https://files.pythonhosted.org/packages/0e/5a/f43efec3e8c0cc92c4663ccad372dbdff72b60bdb56b2749f04aa1d07d7e/pandas-2.3.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:74ecdf1d301e812db96a465a525952f4dde225fdb6d8e5a521d47e1f42041e21", size = 11411056, upload-time = "2025-09-29T23:22:37.762Z" },
+ { url = "https://files.pythonhosted.org/packages/46/b1/85331edfc591208c9d1a63a06baa67b21d332e63b7a591a5ba42a10bb507/pandas-2.3.3-cp313-cp313t-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6435cb949cb34ec11cc9860246ccb2fdc9ecd742c12d3304989017d53f039a78", size = 11645189, upload-time = "2025-09-29T23:22:51.688Z" },
+ { url = "https://files.pythonhosted.org/packages/44/23/78d645adc35d94d1ac4f2a3c4112ab6f5b8999f4898b8cdf01252f8df4a9/pandas-2.3.3-cp313-cp313t-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:900f47d8f20860de523a1ac881c4c36d65efcb2eb850e6948140fa781736e110", size = 12121912, upload-time = "2025-09-29T23:23:05.042Z" },
+ { url = "https://files.pythonhosted.org/packages/53/da/d10013df5e6aaef6b425aa0c32e1fc1f3e431e4bcabd420517dceadce354/pandas-2.3.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:a45c765238e2ed7d7c608fc5bc4a6f88b642f2f01e70c0c23d2224dd21829d86", size = 12712160, upload-time = "2025-09-29T23:23:28.57Z" },
+ { url = "https://files.pythonhosted.org/packages/bd/17/e756653095a083d8a37cbd816cb87148debcfcd920129b25f99dd8d04271/pandas-2.3.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:c4fc4c21971a1a9f4bdb4c73978c7f7256caa3e62b323f70d6cb80db583350bc", size = 13199233, upload-time = "2025-09-29T23:24:24.876Z" },
+ { url = "https://files.pythonhosted.org/packages/04/fd/74903979833db8390b73b3a8a7d30d146d710bd32703724dd9083950386f/pandas-2.3.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:ee15f284898e7b246df8087fc82b87b01686f98ee67d85a17b7ab44143a3a9a0", size = 11540635, upload-time = "2025-09-29T23:25:52.486Z" },
+ { url = "https://files.pythonhosted.org/packages/21/00/266d6b357ad5e6d3ad55093a7e8efc7dd245f5a842b584db9f30b0f0a287/pandas-2.3.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1611aedd912e1ff81ff41c745822980c49ce4a7907537be8692c8dbc31924593", size = 10759079, upload-time = "2025-09-29T23:26:33.204Z" },
+ { url = "https://files.pythonhosted.org/packages/ca/05/d01ef80a7a3a12b2f8bbf16daba1e17c98a2f039cbc8e2f77a2c5a63d382/pandas-2.3.3-cp314-cp314-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6d2cefc361461662ac48810cb14365a365ce864afe85ef1f447ff5a1e99ea81c", size = 11814049, upload-time = "2025-09-29T23:27:15.384Z" },
+ { url = "https://files.pythonhosted.org/packages/15/b2/0e62f78c0c5ba7e3d2c5945a82456f4fac76c480940f805e0b97fcbc2f65/pandas-2.3.3-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ee67acbbf05014ea6c763beb097e03cd629961c8a632075eeb34247120abcb4b", size = 12332638, upload-time = "2025-09-29T23:27:51.625Z" },
+ { url = "https://files.pythonhosted.org/packages/c5/33/dd70400631b62b9b29c3c93d2feee1d0964dc2bae2e5ad7a6c73a7f25325/pandas-2.3.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:c46467899aaa4da076d5abc11084634e2d197e9460643dd455ac3db5856b24d6", size = 12886834, upload-time = "2025-09-29T23:28:21.289Z" },
+ { url = "https://files.pythonhosted.org/packages/d3/18/b5d48f55821228d0d2692b34fd5034bb185e854bdb592e9c640f6290e012/pandas-2.3.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:6253c72c6a1d990a410bc7de641d34053364ef8bcd3126f7e7450125887dffe3", size = 13409925, upload-time = "2025-09-29T23:28:58.261Z" },
+ { url = "https://files.pythonhosted.org/packages/a6/3d/124ac75fcd0ecc09b8fdccb0246ef65e35b012030defb0e0eba2cbbbe948/pandas-2.3.3-cp314-cp314-win_amd64.whl", hash = "sha256:1b07204a219b3b7350abaae088f451860223a52cfb8a6c53358e7948735158e5", size = 11109071, upload-time = "2025-09-29T23:32:27.484Z" },
+ { url = "https://files.pythonhosted.org/packages/89/9c/0e21c895c38a157e0faa1fb64587a9226d6dd46452cac4532d80c3c4a244/pandas-2.3.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:2462b1a365b6109d275250baaae7b760fd25c726aaca0054649286bcfbb3e8ec", size = 12048504, upload-time = "2025-09-29T23:29:31.47Z" },
+ { url = "https://files.pythonhosted.org/packages/d7/82/b69a1c95df796858777b68fbe6a81d37443a33319761d7c652ce77797475/pandas-2.3.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:0242fe9a49aa8b4d78a4fa03acb397a58833ef6199e9aa40a95f027bb3a1b6e7", size = 11410702, upload-time = "2025-09-29T23:29:54.591Z" },
+ { url = "https://files.pythonhosted.org/packages/f9/88/702bde3ba0a94b8c73a0181e05144b10f13f29ebfc2150c3a79062a8195d/pandas-2.3.3-cp314-cp314t-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a21d830e78df0a515db2b3d2f5570610f5e6bd2e27749770e8bb7b524b89b450", size = 11634535, upload-time = "2025-09-29T23:30:21.003Z" },
+ { url = "https://files.pythonhosted.org/packages/a4/1e/1bac1a839d12e6a82ec6cb40cda2edde64a2013a66963293696bbf31fbbb/pandas-2.3.3-cp314-cp314t-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2e3ebdb170b5ef78f19bfb71b0dc5dc58775032361fa188e814959b74d726dd5", size = 12121582, upload-time = "2025-09-29T23:30:43.391Z" },
+ { url = "https://files.pythonhosted.org/packages/44/91/483de934193e12a3b1d6ae7c8645d083ff88dec75f46e827562f1e4b4da6/pandas-2.3.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:d051c0e065b94b7a3cea50eb1ec32e912cd96dba41647eb24104b6c6c14c5788", size = 12699963, upload-time = "2025-09-29T23:31:10.009Z" },
+ { url = "https://files.pythonhosted.org/packages/70/44/5191d2e4026f86a2a109053e194d3ba7a31a2d10a9c2348368c63ed4e85a/pandas-2.3.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:3869faf4bd07b3b66a9f462417d0ca3a9df29a9f6abd5d0d0dbab15dac7abe87", size = 13202175, upload-time = "2025-09-29T23:31:59.173Z" },
]
[[package]]
@@ -4180,7 +4176,7 @@ wheels = [
[[package]]
name = "pydantic"
-version = "2.11.9"
+version = "2.12.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "annotated-types" },
@@ -4188,96 +4184,123 @@ dependencies = [
{ name = "typing-extensions" },
{ name = "typing-inspection" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/ff/5d/09a551ba512d7ca404d785072700d3f6727a02f6f3c24ecfd081c7cf0aa8/pydantic-2.11.9.tar.gz", hash = "sha256:6b8ffda597a14812a7975c90b82a8a2e777d9257aba3453f973acd3c032a18e2", size = 788495, upload-time = "2025-09-13T11:26:39.325Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/3c/a7/d0d7b3c128948ece6676a6a21b9036e3ca53765d35052dbcc8c303886a44/pydantic-2.12.1.tar.gz", hash = "sha256:0af849d00e1879199babd468ec9db13b956f6608e9250500c1a9d69b6a62824e", size = 815997, upload-time = "2025-10-13T21:00:41.219Z" }
wheels = [
- { url = "https://files.pythonhosted.org/packages/3e/d3/108f2006987c58e76691d5ae5d200dd3e0f532cb4e5fa3560751c3a1feba/pydantic-2.11.9-py3-none-any.whl", hash = "sha256:c42dd626f5cfc1c6950ce6205ea58c93efa406da65f479dcb4029d5934857da2", size = 444855, upload-time = "2025-09-13T11:26:36.909Z" },
+ { url = "https://files.pythonhosted.org/packages/f5/69/ce4e60e5e67aa0c339a5dc3391a02b4036545efb6308c54dc4aa9425386f/pydantic-2.12.1-py3-none-any.whl", hash = "sha256:665931f5b4ab40c411439e66f99060d631d1acc58c3d481957b9123343d674d1", size = 460511, upload-time = "2025-10-13T21:00:38.935Z" },
]
[[package]]
name = "pydantic-core"
-version = "2.33.2"
+version = "2.41.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/ad/88/5f2260bdfae97aabf98f1778d43f69574390ad787afb646292a638c923d4/pydantic_core-2.33.2.tar.gz", hash = "sha256:7cb8bc3605c29176e1b105350d2e6474142d7c1bd1d9327c4a9bdb46bf827acc", size = 435195, upload-time = "2025-04-23T18:33:52.104Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/00/e9/3916abb671bffb00845408c604ff03480dc8dc273310d8268547a37be0fb/pydantic_core-2.41.3.tar.gz", hash = "sha256:cdebb34b36ad05e8d77b4e797ad38a2a775c2a07a8fa386d4f6943b7778dcd39", size = 457489, upload-time = "2025-10-13T19:34:51.666Z" }
wheels = [
- { url = "https://files.pythonhosted.org/packages/e5/92/b31726561b5dae176c2d2c2dc43a9c5bfba5d32f96f8b4c0a600dd492447/pydantic_core-2.33.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2b3d326aaef0c0399d9afffeb6367d5e26ddc24d351dbc9c636840ac355dc5d8", size = 2028817, upload-time = "2025-04-23T18:30:43.919Z" },
- { url = "https://files.pythonhosted.org/packages/a3/44/3f0b95fafdaca04a483c4e685fe437c6891001bf3ce8b2fded82b9ea3aa1/pydantic_core-2.33.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0e5b2671f05ba48b94cb90ce55d8bdcaaedb8ba00cc5359f6810fc918713983d", size = 1861357, upload-time = "2025-04-23T18:30:46.372Z" },
- { url = "https://files.pythonhosted.org/packages/30/97/e8f13b55766234caae05372826e8e4b3b96e7b248be3157f53237682e43c/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0069c9acc3f3981b9ff4cdfaf088e98d83440a4c7ea1bc07460af3d4dc22e72d", size = 1898011, upload-time = "2025-04-23T18:30:47.591Z" },
- { url = "https://files.pythonhosted.org/packages/9b/a3/99c48cf7bafc991cc3ee66fd544c0aae8dc907b752f1dad2d79b1b5a471f/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d53b22f2032c42eaaf025f7c40c2e3b94568ae077a606f006d206a463bc69572", size = 1982730, upload-time = "2025-04-23T18:30:49.328Z" },
- { url = "https://files.pythonhosted.org/packages/de/8e/a5b882ec4307010a840fb8b58bd9bf65d1840c92eae7534c7441709bf54b/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0405262705a123b7ce9f0b92f123334d67b70fd1f20a9372b907ce1080c7ba02", size = 2136178, upload-time = "2025-04-23T18:30:50.907Z" },
- { url = "https://files.pythonhosted.org/packages/e4/bb/71e35fc3ed05af6834e890edb75968e2802fe98778971ab5cba20a162315/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4b25d91e288e2c4e0662b8038a28c6a07eaac3e196cfc4ff69de4ea3db992a1b", size = 2736462, upload-time = "2025-04-23T18:30:52.083Z" },
- { url = "https://files.pythonhosted.org/packages/31/0d/c8f7593e6bc7066289bbc366f2235701dcbebcd1ff0ef8e64f6f239fb47d/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bdfe4b3789761f3bcb4b1ddf33355a71079858958e3a552f16d5af19768fef2", size = 2005652, upload-time = "2025-04-23T18:30:53.389Z" },
- { url = "https://files.pythonhosted.org/packages/d2/7a/996d8bd75f3eda405e3dd219ff5ff0a283cd8e34add39d8ef9157e722867/pydantic_core-2.33.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:efec8db3266b76ef9607c2c4c419bdb06bf335ae433b80816089ea7585816f6a", size = 2113306, upload-time = "2025-04-23T18:30:54.661Z" },
- { url = "https://files.pythonhosted.org/packages/ff/84/daf2a6fb2db40ffda6578a7e8c5a6e9c8affb251a05c233ae37098118788/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:031c57d67ca86902726e0fae2214ce6770bbe2f710dc33063187a68744a5ecac", size = 2073720, upload-time = "2025-04-23T18:30:56.11Z" },
- { url = "https://files.pythonhosted.org/packages/77/fb/2258da019f4825128445ae79456a5499c032b55849dbd5bed78c95ccf163/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:f8de619080e944347f5f20de29a975c2d815d9ddd8be9b9b7268e2e3ef68605a", size = 2244915, upload-time = "2025-04-23T18:30:57.501Z" },
- { url = "https://files.pythonhosted.org/packages/d8/7a/925ff73756031289468326e355b6fa8316960d0d65f8b5d6b3a3e7866de7/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:73662edf539e72a9440129f231ed3757faab89630d291b784ca99237fb94db2b", size = 2241884, upload-time = "2025-04-23T18:30:58.867Z" },
- { url = "https://files.pythonhosted.org/packages/0b/b0/249ee6d2646f1cdadcb813805fe76265745c4010cf20a8eba7b0e639d9b2/pydantic_core-2.33.2-cp310-cp310-win32.whl", hash = "sha256:0a39979dcbb70998b0e505fb1556a1d550a0781463ce84ebf915ba293ccb7e22", size = 1910496, upload-time = "2025-04-23T18:31:00.078Z" },
- { url = "https://files.pythonhosted.org/packages/66/ff/172ba8f12a42d4b552917aa65d1f2328990d3ccfc01d5b7c943ec084299f/pydantic_core-2.33.2-cp310-cp310-win_amd64.whl", hash = "sha256:b0379a2b24882fef529ec3b4987cb5d003b9cda32256024e6fe1586ac45fc640", size = 1955019, upload-time = "2025-04-23T18:31:01.335Z" },
- { url = "https://files.pythonhosted.org/packages/3f/8d/71db63483d518cbbf290261a1fc2839d17ff89fce7089e08cad07ccfce67/pydantic_core-2.33.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4c5b0a576fb381edd6d27f0a85915c6daf2f8138dc5c267a57c08a62900758c7", size = 2028584, upload-time = "2025-04-23T18:31:03.106Z" },
- { url = "https://files.pythonhosted.org/packages/24/2f/3cfa7244ae292dd850989f328722d2aef313f74ffc471184dc509e1e4e5a/pydantic_core-2.33.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246", size = 1855071, upload-time = "2025-04-23T18:31:04.621Z" },
- { url = "https://files.pythonhosted.org/packages/b3/d3/4ae42d33f5e3f50dd467761304be2fa0a9417fbf09735bc2cce003480f2a/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc46a01bf8d62f227d5ecee74178ffc448ff4e5197c756331f71efcc66dc980f", size = 1897823, upload-time = "2025-04-23T18:31:06.377Z" },
- { url = "https://files.pythonhosted.org/packages/f4/f3/aa5976e8352b7695ff808599794b1fba2a9ae2ee954a3426855935799488/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a144d4f717285c6d9234a66778059f33a89096dfb9b39117663fd8413d582dcc", size = 1983792, upload-time = "2025-04-23T18:31:07.93Z" },
- { url = "https://files.pythonhosted.org/packages/d5/7a/cda9b5a23c552037717f2b2a5257e9b2bfe45e687386df9591eff7b46d28/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:73cf6373c21bc80b2e0dc88444f41ae60b2f070ed02095754eb5a01df12256de", size = 2136338, upload-time = "2025-04-23T18:31:09.283Z" },
- { url = "https://files.pythonhosted.org/packages/2b/9f/b8f9ec8dd1417eb9da784e91e1667d58a2a4a7b7b34cf4af765ef663a7e5/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3dc625f4aa79713512d1976fe9f0bc99f706a9dee21dfd1810b4bbbf228d0e8a", size = 2730998, upload-time = "2025-04-23T18:31:11.7Z" },
- { url = "https://files.pythonhosted.org/packages/47/bc/cd720e078576bdb8255d5032c5d63ee5c0bf4b7173dd955185a1d658c456/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:881b21b5549499972441da4758d662aeea93f1923f953e9cbaff14b8b9565aef", size = 2003200, upload-time = "2025-04-23T18:31:13.536Z" },
- { url = "https://files.pythonhosted.org/packages/ca/22/3602b895ee2cd29d11a2b349372446ae9727c32e78a94b3d588a40fdf187/pydantic_core-2.33.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bdc25f3681f7b78572699569514036afe3c243bc3059d3942624e936ec93450e", size = 2113890, upload-time = "2025-04-23T18:31:15.011Z" },
- { url = "https://files.pythonhosted.org/packages/ff/e6/e3c5908c03cf00d629eb38393a98fccc38ee0ce8ecce32f69fc7d7b558a7/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d", size = 2073359, upload-time = "2025-04-23T18:31:16.393Z" },
- { url = "https://files.pythonhosted.org/packages/12/e7/6a36a07c59ebefc8777d1ffdaf5ae71b06b21952582e4b07eba88a421c79/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:bc7aee6f634a6f4a95676fcb5d6559a2c2a390330098dba5e5a5f28a2e4ada30", size = 2245883, upload-time = "2025-04-23T18:31:17.892Z" },
- { url = "https://files.pythonhosted.org/packages/16/3f/59b3187aaa6cc0c1e6616e8045b284de2b6a87b027cce2ffcea073adf1d2/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:235f45e5dbcccf6bd99f9f472858849f73d11120d76ea8707115415f8e5ebebf", size = 2241074, upload-time = "2025-04-23T18:31:19.205Z" },
- { url = "https://files.pythonhosted.org/packages/e0/ed/55532bb88f674d5d8f67ab121a2a13c385df382de2a1677f30ad385f7438/pydantic_core-2.33.2-cp311-cp311-win32.whl", hash = "sha256:6368900c2d3ef09b69cb0b913f9f8263b03786e5b2a387706c5afb66800efd51", size = 1910538, upload-time = "2025-04-23T18:31:20.541Z" },
- { url = "https://files.pythonhosted.org/packages/fe/1b/25b7cccd4519c0b23c2dd636ad39d381abf113085ce4f7bec2b0dc755eb1/pydantic_core-2.33.2-cp311-cp311-win_amd64.whl", hash = "sha256:1e063337ef9e9820c77acc768546325ebe04ee38b08703244c1309cccc4f1bab", size = 1952909, upload-time = "2025-04-23T18:31:22.371Z" },
- { url = "https://files.pythonhosted.org/packages/49/a9/d809358e49126438055884c4366a1f6227f0f84f635a9014e2deb9b9de54/pydantic_core-2.33.2-cp311-cp311-win_arm64.whl", hash = "sha256:6b99022f1d19bc32a4c2a0d544fc9a76e3be90f0b3f4af413f87d38749300e65", size = 1897786, upload-time = "2025-04-23T18:31:24.161Z" },
- { url = "https://files.pythonhosted.org/packages/18/8a/2b41c97f554ec8c71f2a8a5f85cb56a8b0956addfe8b0efb5b3d77e8bdc3/pydantic_core-2.33.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a7ec89dc587667f22b6a0b6579c249fca9026ce7c333fc142ba42411fa243cdc", size = 2009000, upload-time = "2025-04-23T18:31:25.863Z" },
- { url = "https://files.pythonhosted.org/packages/a1/02/6224312aacb3c8ecbaa959897af57181fb6cf3a3d7917fd44d0f2917e6f2/pydantic_core-2.33.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3c6db6e52c6d70aa0d00d45cdb9b40f0433b96380071ea80b09277dba021ddf7", size = 1847996, upload-time = "2025-04-23T18:31:27.341Z" },
- { url = "https://files.pythonhosted.org/packages/d6/46/6dcdf084a523dbe0a0be59d054734b86a981726f221f4562aed313dbcb49/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e61206137cbc65e6d5256e1166f88331d3b6238e082d9f74613b9b765fb9025", size = 1880957, upload-time = "2025-04-23T18:31:28.956Z" },
- { url = "https://files.pythonhosted.org/packages/ec/6b/1ec2c03837ac00886ba8160ce041ce4e325b41d06a034adbef11339ae422/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eb8c529b2819c37140eb51b914153063d27ed88e3bdc31b71198a198e921e011", size = 1964199, upload-time = "2025-04-23T18:31:31.025Z" },
- { url = "https://files.pythonhosted.org/packages/2d/1d/6bf34d6adb9debd9136bd197ca72642203ce9aaaa85cfcbfcf20f9696e83/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c52b02ad8b4e2cf14ca7b3d918f3eb0ee91e63b3167c32591e57c4317e134f8f", size = 2120296, upload-time = "2025-04-23T18:31:32.514Z" },
- { url = "https://files.pythonhosted.org/packages/e0/94/2bd0aaf5a591e974b32a9f7123f16637776c304471a0ab33cf263cf5591a/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:96081f1605125ba0855dfda83f6f3df5ec90c61195421ba72223de35ccfb2f88", size = 2676109, upload-time = "2025-04-23T18:31:33.958Z" },
- { url = "https://files.pythonhosted.org/packages/f9/41/4b043778cf9c4285d59742281a769eac371b9e47e35f98ad321349cc5d61/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f57a69461af2a5fa6e6bbd7a5f60d3b7e6cebb687f55106933188e79ad155c1", size = 2002028, upload-time = "2025-04-23T18:31:39.095Z" },
- { url = "https://files.pythonhosted.org/packages/cb/d5/7bb781bf2748ce3d03af04d5c969fa1308880e1dca35a9bd94e1a96a922e/pydantic_core-2.33.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:572c7e6c8bb4774d2ac88929e3d1f12bc45714ae5ee6d9a788a9fb35e60bb04b", size = 2100044, upload-time = "2025-04-23T18:31:41.034Z" },
- { url = "https://files.pythonhosted.org/packages/fe/36/def5e53e1eb0ad896785702a5bbfd25eed546cdcf4087ad285021a90ed53/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:db4b41f9bd95fbe5acd76d89920336ba96f03e149097365afe1cb092fceb89a1", size = 2058881, upload-time = "2025-04-23T18:31:42.757Z" },
- { url = "https://files.pythonhosted.org/packages/01/6c/57f8d70b2ee57fc3dc8b9610315949837fa8c11d86927b9bb044f8705419/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6", size = 2227034, upload-time = "2025-04-23T18:31:44.304Z" },
- { url = "https://files.pythonhosted.org/packages/27/b9/9c17f0396a82b3d5cbea4c24d742083422639e7bb1d5bf600e12cb176a13/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:5f483cfb75ff703095c59e365360cb73e00185e01aaea067cd19acffd2ab20ea", size = 2234187, upload-time = "2025-04-23T18:31:45.891Z" },
- { url = "https://files.pythonhosted.org/packages/b0/6a/adf5734ffd52bf86d865093ad70b2ce543415e0e356f6cacabbc0d9ad910/pydantic_core-2.33.2-cp312-cp312-win32.whl", hash = "sha256:9cb1da0f5a471435a7bc7e439b8a728e8b61e59784b2af70d7c169f8dd8ae290", size = 1892628, upload-time = "2025-04-23T18:31:47.819Z" },
- { url = "https://files.pythonhosted.org/packages/43/e4/5479fecb3606c1368d496a825d8411e126133c41224c1e7238be58b87d7e/pydantic_core-2.33.2-cp312-cp312-win_amd64.whl", hash = "sha256:f941635f2a3d96b2973e867144fde513665c87f13fe0e193c158ac51bfaaa7b2", size = 1955866, upload-time = "2025-04-23T18:31:49.635Z" },
- { url = "https://files.pythonhosted.org/packages/0d/24/8b11e8b3e2be9dd82df4b11408a67c61bb4dc4f8e11b5b0fc888b38118b5/pydantic_core-2.33.2-cp312-cp312-win_arm64.whl", hash = "sha256:cca3868ddfaccfbc4bfb1d608e2ccaaebe0ae628e1416aeb9c4d88c001bb45ab", size = 1888894, upload-time = "2025-04-23T18:31:51.609Z" },
- { url = "https://files.pythonhosted.org/packages/46/8c/99040727b41f56616573a28771b1bfa08a3d3fe74d3d513f01251f79f172/pydantic_core-2.33.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:1082dd3e2d7109ad8b7da48e1d4710c8d06c253cbc4a27c1cff4fbcaa97a9e3f", size = 2015688, upload-time = "2025-04-23T18:31:53.175Z" },
- { url = "https://files.pythonhosted.org/packages/3a/cc/5999d1eb705a6cefc31f0b4a90e9f7fc400539b1a1030529700cc1b51838/pydantic_core-2.33.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f517ca031dfc037a9c07e748cefd8d96235088b83b4f4ba8939105d20fa1dcd6", size = 1844808, upload-time = "2025-04-23T18:31:54.79Z" },
- { url = "https://files.pythonhosted.org/packages/6f/5e/a0a7b8885c98889a18b6e376f344da1ef323d270b44edf8174d6bce4d622/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a9f2c9dd19656823cb8250b0724ee9c60a82f3cdf68a080979d13092a3b0fef", size = 1885580, upload-time = "2025-04-23T18:31:57.393Z" },
- { url = "https://files.pythonhosted.org/packages/3b/2a/953581f343c7d11a304581156618c3f592435523dd9d79865903272c256a/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2b0a451c263b01acebe51895bfb0e1cc842a5c666efe06cdf13846c7418caa9a", size = 1973859, upload-time = "2025-04-23T18:31:59.065Z" },
- { url = "https://files.pythonhosted.org/packages/e6/55/f1a813904771c03a3f97f676c62cca0c0a4138654107c1b61f19c644868b/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ea40a64d23faa25e62a70ad163571c0b342b8bf66d5fa612ac0dec4f069d916", size = 2120810, upload-time = "2025-04-23T18:32:00.78Z" },
- { url = "https://files.pythonhosted.org/packages/aa/c3/053389835a996e18853ba107a63caae0b9deb4a276c6b472931ea9ae6e48/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0fb2d542b4d66f9470e8065c5469ec676978d625a8b7a363f07d9a501a9cb36a", size = 2676498, upload-time = "2025-04-23T18:32:02.418Z" },
- { url = "https://files.pythonhosted.org/packages/eb/3c/f4abd740877a35abade05e437245b192f9d0ffb48bbbbd708df33d3cda37/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdac5d6ffa1b5a83bca06ffe7583f5576555e6c8b3a91fbd25ea7780f825f7d", size = 2000611, upload-time = "2025-04-23T18:32:04.152Z" },
- { url = "https://files.pythonhosted.org/packages/59/a7/63ef2fed1837d1121a894d0ce88439fe3e3b3e48c7543b2a4479eb99c2bd/pydantic_core-2.33.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:04a1a413977ab517154eebb2d326da71638271477d6ad87a769102f7c2488c56", size = 2107924, upload-time = "2025-04-23T18:32:06.129Z" },
- { url = "https://files.pythonhosted.org/packages/04/8f/2551964ef045669801675f1cfc3b0d74147f4901c3ffa42be2ddb1f0efc4/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c8e7af2f4e0194c22b5b37205bfb293d166a7344a5b0d0eaccebc376546d77d5", size = 2063196, upload-time = "2025-04-23T18:32:08.178Z" },
- { url = "https://files.pythonhosted.org/packages/26/bd/d9602777e77fc6dbb0c7db9ad356e9a985825547dce5ad1d30ee04903918/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:5c92edd15cd58b3c2d34873597a1e20f13094f59cf88068adb18947df5455b4e", size = 2236389, upload-time = "2025-04-23T18:32:10.242Z" },
- { url = "https://files.pythonhosted.org/packages/42/db/0e950daa7e2230423ab342ae918a794964b053bec24ba8af013fc7c94846/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:65132b7b4a1c0beded5e057324b7e16e10910c106d43675d9bd87d4f38dde162", size = 2239223, upload-time = "2025-04-23T18:32:12.382Z" },
- { url = "https://files.pythonhosted.org/packages/58/4d/4f937099c545a8a17eb52cb67fe0447fd9a373b348ccfa9a87f141eeb00f/pydantic_core-2.33.2-cp313-cp313-win32.whl", hash = "sha256:52fb90784e0a242bb96ec53f42196a17278855b0f31ac7c3cc6f5c1ec4811849", size = 1900473, upload-time = "2025-04-23T18:32:14.034Z" },
- { url = "https://files.pythonhosted.org/packages/a0/75/4a0a9bac998d78d889def5e4ef2b065acba8cae8c93696906c3a91f310ca/pydantic_core-2.33.2-cp313-cp313-win_amd64.whl", hash = "sha256:c083a3bdd5a93dfe480f1125926afcdbf2917ae714bdb80b36d34318b2bec5d9", size = 1955269, upload-time = "2025-04-23T18:32:15.783Z" },
- { url = "https://files.pythonhosted.org/packages/f9/86/1beda0576969592f1497b4ce8e7bc8cbdf614c352426271b1b10d5f0aa64/pydantic_core-2.33.2-cp313-cp313-win_arm64.whl", hash = "sha256:e80b087132752f6b3d714f041ccf74403799d3b23a72722ea2e6ba2e892555b9", size = 1893921, upload-time = "2025-04-23T18:32:18.473Z" },
- { url = "https://files.pythonhosted.org/packages/a4/7d/e09391c2eebeab681df2b74bfe6c43422fffede8dc74187b2b0bf6fd7571/pydantic_core-2.33.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:61c18fba8e5e9db3ab908620af374db0ac1baa69f0f32df4f61ae23f15e586ac", size = 1806162, upload-time = "2025-04-23T18:32:20.188Z" },
- { url = "https://files.pythonhosted.org/packages/f1/3d/847b6b1fed9f8ed3bb95a9ad04fbd0b212e832d4f0f50ff4d9ee5a9f15cf/pydantic_core-2.33.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95237e53bb015f67b63c91af7518a62a8660376a6a0db19b89acc77a4d6199f5", size = 1981560, upload-time = "2025-04-23T18:32:22.354Z" },
- { url = "https://files.pythonhosted.org/packages/6f/9a/e73262f6c6656262b5fdd723ad90f518f579b7bc8622e43a942eec53c938/pydantic_core-2.33.2-cp313-cp313t-win_amd64.whl", hash = "sha256:c2fc0a768ef76c15ab9238afa6da7f69895bb5d1ee83aeea2e3509af4472d0b9", size = 1935777, upload-time = "2025-04-23T18:32:25.088Z" },
- { url = "https://files.pythonhosted.org/packages/30/68/373d55e58b7e83ce371691f6eaa7175e3a24b956c44628eb25d7da007917/pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5c4aa4e82353f65e548c476b37e64189783aa5384903bfea4f41580f255fddfa", size = 2023982, upload-time = "2025-04-23T18:32:53.14Z" },
- { url = "https://files.pythonhosted.org/packages/a4/16/145f54ac08c96a63d8ed6442f9dec17b2773d19920b627b18d4f10a061ea/pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d946c8bf0d5c24bf4fe333af284c59a19358aa3ec18cb3dc4370080da1e8ad29", size = 1858412, upload-time = "2025-04-23T18:32:55.52Z" },
- { url = "https://files.pythonhosted.org/packages/41/b1/c6dc6c3e2de4516c0bb2c46f6a373b91b5660312342a0cf5826e38ad82fa/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:87b31b6846e361ef83fedb187bb5b4372d0da3f7e28d85415efa92d6125d6e6d", size = 1892749, upload-time = "2025-04-23T18:32:57.546Z" },
- { url = "https://files.pythonhosted.org/packages/12/73/8cd57e20afba760b21b742106f9dbdfa6697f1570b189c7457a1af4cd8a0/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aa9d91b338f2df0508606f7009fde642391425189bba6d8c653afd80fd6bb64e", size = 2067527, upload-time = "2025-04-23T18:32:59.771Z" },
- { url = "https://files.pythonhosted.org/packages/e3/d5/0bb5d988cc019b3cba4a78f2d4b3854427fc47ee8ec8e9eaabf787da239c/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2058a32994f1fde4ca0480ab9d1e75a0e8c87c22b53a3ae66554f9af78f2fe8c", size = 2108225, upload-time = "2025-04-23T18:33:04.51Z" },
- { url = "https://files.pythonhosted.org/packages/f1/c5/00c02d1571913d496aabf146106ad8239dc132485ee22efe08085084ff7c/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:0e03262ab796d986f978f79c943fc5f620381be7287148b8010b4097f79a39ec", size = 2069490, upload-time = "2025-04-23T18:33:06.391Z" },
- { url = "https://files.pythonhosted.org/packages/22/a8/dccc38768274d3ed3a59b5d06f59ccb845778687652daa71df0cab4040d7/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:1a8695a8d00c73e50bff9dfda4d540b7dee29ff9b8053e38380426a85ef10052", size = 2237525, upload-time = "2025-04-23T18:33:08.44Z" },
- { url = "https://files.pythonhosted.org/packages/d4/e7/4f98c0b125dda7cf7ccd14ba936218397b44f50a56dd8c16a3091df116c3/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:fa754d1850735a0b0e03bcffd9d4b4343eb417e47196e4485d9cca326073a42c", size = 2238446, upload-time = "2025-04-23T18:33:10.313Z" },
- { url = "https://files.pythonhosted.org/packages/ce/91/2ec36480fdb0b783cd9ef6795753c1dea13882f2e68e73bce76ae8c21e6a/pydantic_core-2.33.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:a11c8d26a50bfab49002947d3d237abe4d9e4b5bdc8846a63537b6488e197808", size = 2066678, upload-time = "2025-04-23T18:33:12.224Z" },
- { url = "https://files.pythonhosted.org/packages/7b/27/d4ae6487d73948d6f20dddcd94be4ea43e74349b56eba82e9bdee2d7494c/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:dd14041875d09cc0f9308e37a6f8b65f5585cf2598a53aa0123df8b129d481f8", size = 2025200, upload-time = "2025-04-23T18:33:14.199Z" },
- { url = "https://files.pythonhosted.org/packages/f1/b8/b3cb95375f05d33801024079b9392a5ab45267a63400bf1866e7ce0f0de4/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d87c561733f66531dced0da6e864f44ebf89a8fba55f31407b00c2f7f9449593", size = 1859123, upload-time = "2025-04-23T18:33:16.555Z" },
- { url = "https://files.pythonhosted.org/packages/05/bc/0d0b5adeda59a261cd30a1235a445bf55c7e46ae44aea28f7bd6ed46e091/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2f82865531efd18d6e07a04a17331af02cb7a651583c418df8266f17a63c6612", size = 1892852, upload-time = "2025-04-23T18:33:18.513Z" },
- { url = "https://files.pythonhosted.org/packages/3e/11/d37bdebbda2e449cb3f519f6ce950927b56d62f0b84fd9cb9e372a26a3d5/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bfb5112df54209d820d7bf9317c7a6c9025ea52e49f46b6a2060104bba37de7", size = 2067484, upload-time = "2025-04-23T18:33:20.475Z" },
- { url = "https://files.pythonhosted.org/packages/8c/55/1f95f0a05ce72ecb02a8a8a1c3be0579bbc29b1d5ab68f1378b7bebc5057/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:64632ff9d614e5eecfb495796ad51b0ed98c453e447a76bcbeeb69615079fc7e", size = 2108896, upload-time = "2025-04-23T18:33:22.501Z" },
- { url = "https://files.pythonhosted.org/packages/53/89/2b2de6c81fa131f423246a9109d7b2a375e83968ad0800d6e57d0574629b/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8", size = 2069475, upload-time = "2025-04-23T18:33:24.528Z" },
- { url = "https://files.pythonhosted.org/packages/b8/e9/1f7efbe20d0b2b10f6718944b5d8ece9152390904f29a78e68d4e7961159/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:de4b83bb311557e439b9e186f733f6c645b9417c84e2eb8203f3f820a4b988bf", size = 2239013, upload-time = "2025-04-23T18:33:26.621Z" },
- { url = "https://files.pythonhosted.org/packages/3c/b2/5309c905a93811524a49b4e031e9851a6b00ff0fb668794472ea7746b448/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:82f68293f055f51b51ea42fafc74b6aad03e70e191799430b90c13d643059ebb", size = 2238715, upload-time = "2025-04-23T18:33:28.656Z" },
- { url = "https://files.pythonhosted.org/packages/32/56/8a7ca5d2cd2cda1d245d34b1c9a942920a718082ae8e54e5f3e5a58b7add/pydantic_core-2.33.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:329467cecfb529c925cf2bbd4d60d2c509bc2fb52a20c1045bf09bb70971a9c1", size = 2066757, upload-time = "2025-04-23T18:33:30.645Z" },
+ { url = "https://files.pythonhosted.org/packages/79/01/8346969d4eef68f385a7cf6d9d18a6a82129177f2ac9ea36cc2cec4a7b3a/pydantic_core-2.41.3-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:1a572d7d06b9fa6efeec32fbcd18c73081af66942b345664669867cf8e69c7b0", size = 2110164, upload-time = "2025-10-13T19:30:43.025Z" },
+ { url = "https://files.pythonhosted.org/packages/60/7d/7ac0e48368c67c1ce3b34ceae1949c780381ad45ae3662f4e63a3d9a1a51/pydantic_core-2.41.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:63d787ea760052585c6bfc34310aa379346f2cec363fe178659664f80421804b", size = 1919153, upload-time = "2025-10-13T19:30:44.783Z" },
+ { url = "https://files.pythonhosted.org/packages/62/cb/592daea1d54b935f1f6c335d3c1db3c73207b834ce493fc82042fdb827e8/pydantic_core-2.41.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4aa5a2327538f6b3c040604618cd36a960224ad7c22be96717b444c269f1a8b2", size = 1970141, upload-time = "2025-10-13T19:30:46.569Z" },
+ { url = "https://files.pythonhosted.org/packages/90/5c/59a2a215ef344e08d3366a05171e0acdc33edc8584e5c22cb968f26598bf/pydantic_core-2.41.3-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:947e1c5e79c54e313742c9dc25a439d38c5dcfde14f6a9a9069b3295f190c444", size = 2051479, upload-time = "2025-10-13T19:30:47.966Z" },
+ { url = "https://files.pythonhosted.org/packages/18/8a/6877045de472cc3333c02f5a782fca6440ca0e012bea9a76b06093733979/pydantic_core-2.41.3-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d0a1e90642dd6040cfcf509230fb1c3df257f7420d52b5401b3ce164acb0a342", size = 2245684, upload-time = "2025-10-13T19:30:49.68Z" },
+ { url = "https://files.pythonhosted.org/packages/a5/92/8e65785a723594d4661d559c2d1fca52827f31f32b35b8944794d80da8f0/pydantic_core-2.41.3-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8f7d4504d7bdce582a2700615d52dbe5f9de4ffab4815431f6da7edf5acc1329", size = 2364241, upload-time = "2025-10-13T19:30:51.109Z" },
+ { url = "https://files.pythonhosted.org/packages/f5/b4/5949e8df13a19ecc954a92207204d87fe0af5ccb6a31f7c6308d0c810221/pydantic_core-2.41.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7528ff51a26985072291c4170bd1f16f396a46ef845a428ae97bdb01ebaee7f4", size = 2072847, upload-time = "2025-10-13T19:30:52.778Z" },
+ { url = "https://files.pythonhosted.org/packages/fe/8c/ba844701bf42418dcc9acd0f3e2d239f6f13fa2aba23c5fd3afdbb955a84/pydantic_core-2.41.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:21b3a07248e481c06c4f208c53402fc143e817ce652a114f0c5d2acfd97b8b91", size = 2185990, upload-time = "2025-10-13T19:30:54.35Z" },
+ { url = "https://files.pythonhosted.org/packages/2f/79/beb0030df8526d90667a94bdee5323b9a0063fbf3c5099693fddf478b434/pydantic_core-2.41.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:45b445c09095df0d422e8ef01065f1c0a7424a17b37646b71d857ead6428b084", size = 2150559, upload-time = "2025-10-13T19:30:55.727Z" },
+ { url = "https://files.pythonhosted.org/packages/8a/dd/da4bc82999b9e1c8f650c8b2d223ff343a369fbe3a1bcb574b48093f4e07/pydantic_core-2.41.3-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:c32474bb2324b574dc57aea40cb415c8ca81b73bc103f5644a15095d5552df8f", size = 2316646, upload-time = "2025-10-13T19:30:57.41Z" },
+ { url = "https://files.pythonhosted.org/packages/96/78/714aef0f059922ed3bfedb34befad5049ac78899a7a3bad941b19a28eadf/pydantic_core-2.41.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:91a38e48cdcc17763ac0abcb27c2b5fca47c2bc79ca0821b5211b2adeb06c4d0", size = 2325563, upload-time = "2025-10-13T19:30:59.162Z" },
+ { url = "https://files.pythonhosted.org/packages/36/08/78ad17af3d19fc25e4f0e2fc74ddb858b5c7da3ece394527d857b475791d/pydantic_core-2.41.3-cp310-cp310-win32.whl", hash = "sha256:b0947cd92f782cfc7bb595fd046a5a5c83e9f9524822f071f6b602f08d14b653", size = 1987506, upload-time = "2025-10-13T19:31:01.117Z" },
+ { url = "https://files.pythonhosted.org/packages/37/29/8d16b6f88284fe46392034fd20e08fe1228f5ed63726b8f5068cc73f9b46/pydantic_core-2.41.3-cp310-cp310-win_amd64.whl", hash = "sha256:6d972c97e91e294f1ce4c74034211b5c16d91b925c08704f5786e5e3743d8a20", size = 2025386, upload-time = "2025-10-13T19:31:03.055Z" },
+ { url = "https://files.pythonhosted.org/packages/47/60/f7291e1264831136917e417b1ec9ed70dd64174a4c8ff4d75cad3028aab5/pydantic_core-2.41.3-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:91dfe6a6e02916fd1fb630f1ebe0c18f9fd9d3cbfe84bb2599f195ebbb0edb9b", size = 2107996, upload-time = "2025-10-13T19:31:04.902Z" },
+ { url = "https://files.pythonhosted.org/packages/43/05/362832ea8b890f5821ada95cd72a0da1b2466f88f6ac1a47cf1350136722/pydantic_core-2.41.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e301551c63d46122972ab5523a1438772cdde5d62d34040dac6f11017f18cc5d", size = 1916194, upload-time = "2025-10-13T19:31:06.313Z" },
+ { url = "https://files.pythonhosted.org/packages/90/ca/893c63b84ca961d81ae33e4d1e3e00191e29845a874c7f4cc3ca1aa61157/pydantic_core-2.41.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d986b1defbe27867812dc3d8b3401d72be14449b255081e505046c02687010a", size = 1969065, upload-time = "2025-10-13T19:31:07.719Z" },
+ { url = "https://files.pythonhosted.org/packages/55/b9/fecd085420a500acbf3bfc542d2662f2b37497f740461b5e960277f199f0/pydantic_core-2.41.3-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:351b2c5c073ae8caaa11e4336f8419d844c9b936e123e72dbe2c43fa97e54781", size = 2049849, upload-time = "2025-10-13T19:31:09.166Z" },
+ { url = "https://files.pythonhosted.org/packages/26/55/e351b6f51c6b568a911c672c8e3fd809d10f6deaa475007b54e3c0b89f0f/pydantic_core-2.41.3-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7be34f5217ffc28404fc0ca6f07491a2a6a770faecfcf306384c142bccd2fdb4", size = 2244780, upload-time = "2025-10-13T19:31:11.174Z" },
+ { url = "https://files.pythonhosted.org/packages/e3/17/87873bb56e5055d1aadfd84affa33cbf164e923d674c17ca898ad53db08e/pydantic_core-2.41.3-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3cbcad992c281b4960cb5550e218ff39a679c730a59859faa0bc9b8d87efbe6a", size = 2362221, upload-time = "2025-10-13T19:31:13.183Z" },
+ { url = "https://files.pythonhosted.org/packages/4f/f9/2a3fb1e3b5f47754935a726ff77887246804156a029c5394daf4263a3e88/pydantic_core-2.41.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8741b0ab2acdd20c804432e08052791e66cf797afa5451e7e435367f88474b0b", size = 2070695, upload-time = "2025-10-13T19:31:14.849Z" },
+ { url = "https://files.pythonhosted.org/packages/78/ac/d66c1048fcd60e995913809f9e3fcca1e6890bc3588902eab9ade63aa6d8/pydantic_core-2.41.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1ac3ba94f3be9437da4ad611dacd356f040120668c5b1733b8ae035a13663c48", size = 2185138, upload-time = "2025-10-13T19:31:16.772Z" },
+ { url = "https://files.pythonhosted.org/packages/98/cf/6fbbd67d0629392ccd5eea8a8b4c005f0151c5505ad22f9b1ff74d63d9f1/pydantic_core-2.41.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:971efe83bac3d5db781ee1b4836ac2cdd53cf7f727edfd4bb0a18029f9409ef2", size = 2148858, upload-time = "2025-10-13T19:31:18.311Z" },
+ { url = "https://files.pythonhosted.org/packages/1c/08/453385212db8db39ed0b6a67f2282b825ad491fed46c88329a0b9d0e543e/pydantic_core-2.41.3-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:98c54e5ad0399ac79c0b6b567693d0f8c44b5a0d67539826cc1dd495e47d1307", size = 2315038, upload-time = "2025-10-13T19:31:19.95Z" },
+ { url = "https://files.pythonhosted.org/packages/53/b9/271298376dc561de57679a82bf4777b9cf7df23881d487b17f658ef78eab/pydantic_core-2.41.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:60110fe616b599c6e057142f2d75873e213bc0cbdac88f58dda8afb27a82f978", size = 2324458, upload-time = "2025-10-13T19:31:21.501Z" },
+ { url = "https://files.pythonhosted.org/packages/17/93/126ac22c310a64dc24d833d47bd175098daa3f9eab93043502a2c11348b4/pydantic_core-2.41.3-cp311-cp311-win32.whl", hash = "sha256:75428ae73865ee366f159b68b9281c754df832494419b4eb46b7c3fbdb27756c", size = 1986636, upload-time = "2025-10-13T19:31:23.08Z" },
+ { url = "https://files.pythonhosted.org/packages/1b/a7/703a31dc6ede00b4e394e5b81c14f462fe5654d3064def17dd64d4389a1a/pydantic_core-2.41.3-cp311-cp311-win_amd64.whl", hash = "sha256:c0178ad5e586d3e394f4b642f0bb7a434bcf34d1e9716cc4bd74e34e35283152", size = 2023792, upload-time = "2025-10-13T19:31:25.011Z" },
+ { url = "https://files.pythonhosted.org/packages/f4/e3/2166b56df1bbe92663b8971012bf7dbd28b6a95e1dc9ad1ec9c99511c41e/pydantic_core-2.41.3-cp311-cp311-win_arm64.whl", hash = "sha256:5dd40bb57cdae2a35e20d06910b93b13e8f57ffff5a0b0a45927953bad563a03", size = 1968147, upload-time = "2025-10-13T19:31:26.611Z" },
+ { url = "https://files.pythonhosted.org/packages/20/11/3149cae2a61ddd11c206cde9dab7598a53cfabe8e69850507876988d2047/pydantic_core-2.41.3-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:7bdc8b70bc4b68e4d891b46d018012cac7bbfe3b981a7c874716dde09ff09fd5", size = 2098919, upload-time = "2025-10-13T19:31:28.727Z" },
+ { url = "https://files.pythonhosted.org/packages/53/64/1717c7c5b092c64e5022b0d02b11703c2c94c31d897366b6c8d160b7d1de/pydantic_core-2.41.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:446361e93f4ffe509edae5862fb89a0d24cbc8f2935f05c6584c2f2ca6e7b6df", size = 1910372, upload-time = "2025-10-13T19:31:30.351Z" },
+ { url = "https://files.pythonhosted.org/packages/99/ba/0231b5dde6c1c436e0d58aed7d63f927694d92c51aff739bf692142ce6e6/pydantic_core-2.41.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9af9a9ae24b866ce58462a7de61c33ff035e052b7a9c05c29cf496bd6a16a63f", size = 1952392, upload-time = "2025-10-13T19:31:32.345Z" },
+ { url = "https://files.pythonhosted.org/packages/cd/5d/1adbfa682a56544d70b42931f19de44a4e58a4fc2152da343a2fdfd4cad5/pydantic_core-2.41.3-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fc836eb8561f04fede7b73747463bd08715be0f55c427e0f0198aa2f1d92f913", size = 2041093, upload-time = "2025-10-13T19:31:34.534Z" },
+ { url = "https://files.pythonhosted.org/packages/7f/d3/9d14041f0b125a5d6388957cace43f9dfb80d862e56a0685dde431a20b6a/pydantic_core-2.41.3-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:16f80f366472eb6a3744149289c263e5ef182c8b18422192166b67625fef3c50", size = 2214331, upload-time = "2025-10-13T19:31:36.575Z" },
+ { url = "https://files.pythonhosted.org/packages/5b/cd/384988d065596fafecf9baeab0c66ef31610013b26eec3b305a80ab5f669/pydantic_core-2.41.3-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8d699904cd13d0f509bdbb17f0784abb332d4aa42df4b0a8b65932096fcd4b21", size = 2344450, upload-time = "2025-10-13T19:31:38.905Z" },
+ { url = "https://files.pythonhosted.org/packages/a3/13/1b0dd34fce51a746823a347d7f9e02c6ea09078ec91c5f656594c23d2047/pydantic_core-2.41.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:485398dacc5dddb2be280fd3998367531eccae8631f4985d048c2406a5ee5ecc", size = 2070507, upload-time = "2025-10-13T19:31:41.093Z" },
+ { url = "https://files.pythonhosted.org/packages/29/a6/0f8d6d67d917318d842fe8dba2489b0c5989ce01fc1ed58bf204f80663df/pydantic_core-2.41.3-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6dfe0898272bf675941cd1ea701677341357b77acadacabbd43d71e09763dceb", size = 2185401, upload-time = "2025-10-13T19:31:42.785Z" },
+ { url = "https://files.pythonhosted.org/packages/e9/23/b8a82253736f2efd3b79338dfe53866b341b68868fbce7111ff6b040b680/pydantic_core-2.41.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:86ffbf5291c367a56b5718590dc3452890f2c1ac7b76d8f4a1e66df90bd717f6", size = 2131929, upload-time = "2025-10-13T19:31:46.226Z" },
+ { url = "https://files.pythonhosted.org/packages/7c/16/efe252cbf852ebfcb4978820e7681d83ae45c526cbfc0cf847f70de49850/pydantic_core-2.41.3-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:c58c5acda77802eedde3aaf22be09e37cfec060696da64bf6e6ffb2480fdabd0", size = 2307223, upload-time = "2025-10-13T19:31:48.176Z" },
+ { url = "https://files.pythonhosted.org/packages/e9/ea/7d8eba2c37769d8768871575be449390beb2452a2289b0090ea7fa63f920/pydantic_core-2.41.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:40db5705aec66371ca5792415c3e869137ae2bab48c48608db3f84986ccaf016", size = 2312962, upload-time = "2025-10-13T19:31:50.028Z" },
+ { url = "https://files.pythonhosted.org/packages/02/c4/b617e33c3b6f4a99c7d252cc42df958d14627a09a1a935141fb9abe44189/pydantic_core-2.41.3-cp312-cp312-win32.whl", hash = "sha256:668fcb317a0b3c84781796891128111c32f83458d436b022014ed0ea07f66e1b", size = 1988735, upload-time = "2025-10-13T19:31:51.778Z" },
+ { url = "https://files.pythonhosted.org/packages/24/fc/05bb0249782893b52baa7732393c0bac9422d6aab46770253f57176cddba/pydantic_core-2.41.3-cp312-cp312-win_amd64.whl", hash = "sha256:248a5d1dac5382454927edf32660d0791d2df997b23b06a8cac6e3375bc79cee", size = 2032239, upload-time = "2025-10-13T19:31:53.915Z" },
+ { url = "https://files.pythonhosted.org/packages/75/1d/7637f6aaafdbc27205296bde9843096bd449192986b5523869444f844b82/pydantic_core-2.41.3-cp312-cp312-win_arm64.whl", hash = "sha256:347a23094c98b7ea2ba6fff93b52bd2931a48c9c1790722d9e841f30e4b7afcd", size = 1969072, upload-time = "2025-10-13T19:31:55.7Z" },
+ { url = "https://files.pythonhosted.org/packages/9f/a6/7533cba20b8b66e209d8d2acbb9ccc0bc1b883b0654776d676e02696ef5d/pydantic_core-2.41.3-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:a8596700fdd3ee12b0d9c1f2395f4c32557e7ebfbfacdc08055b0bcbe7d2827e", size = 2105686, upload-time = "2025-10-13T19:31:57.675Z" },
+ { url = "https://files.pythonhosted.org/packages/84/d7/2d15cb9dfb9f94422fb4a8820cbfeb397e3823087c2361ef46df5c172000/pydantic_core-2.41.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:624503f918e472c0eed6935020c01b6a6b4bcdb7955a848da5c8805d40f15c0f", size = 1910554, upload-time = "2025-10-13T19:32:00.037Z" },
+ { url = "https://files.pythonhosted.org/packages/4c/fc/cbd1caa19e88fd64df716a37b49e5864c1ac27dbb9eb870b8977a584fa42/pydantic_core-2.41.3-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:36388958d0c614df9f5de1a5f88f4b79359016b9ecdfc352037788a628616aa2", size = 1957559, upload-time = "2025-10-13T19:32:02.603Z" },
+ { url = "https://files.pythonhosted.org/packages/3b/fe/da942ae51f602173556c627304dc24b9fa8bd04423bce189bf397ba0419e/pydantic_core-2.41.3-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:3c50eba144add9104cf43ef9a3d81c37ebf48bfd0924b584b78ec2e03ec91daf", size = 2051084, upload-time = "2025-10-13T19:32:05.056Z" },
+ { url = "https://files.pythonhosted.org/packages/c8/62/0abd59a7107d1ef502b9cfab68145c6bb87115c2d9e883afbf18b98fe6db/pydantic_core-2.41.3-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c6ea2102958eb5ad560d570c49996e215a6939d9bffd0e9fd3b9e808a55008cc", size = 2218098, upload-time = "2025-10-13T19:32:06.837Z" },
+ { url = "https://files.pythonhosted.org/packages/72/b1/93a36aa119b70126f3f0d06b6f9a81ca864115962669d8a85deb39c82ecc/pydantic_core-2.41.3-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd0d26f1e4335d5f84abfc880da0afa080c8222410482f9ee12043bb05f55ec8", size = 2341954, upload-time = "2025-10-13T19:32:08.583Z" },
+ { url = "https://files.pythonhosted.org/packages/0f/be/7c2563b53b71ff3e41950b0ffa9eeba3d702091c6d59036fff8a39050528/pydantic_core-2.41.3-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:41c38700094045b12c0cff35c8585954de66cf6dd63909fed1c2e6b8f38e1e1e", size = 2069474, upload-time = "2025-10-13T19:32:10.808Z" },
+ { url = "https://files.pythonhosted.org/packages/ba/ac/2394004db9f6e03712c1e52f40f0979750fa87721f6baf5f76ad92b8be46/pydantic_core-2.41.3-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:4061cc82d7177417fdb90e23e67b27425ecde2652cfd2053b5b4661a489ddc19", size = 2190633, upload-time = "2025-10-13T19:32:12.731Z" },
+ { url = "https://files.pythonhosted.org/packages/7d/31/7b70c2d1fe41f450f8022f5523edaaea19c17a2d321fab03efd03aea1fe8/pydantic_core-2.41.3-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:b1d9699a4dae10a7719951cca1e30b591ef1dd9cdda9fec39282a283576c0241", size = 2137097, upload-time = "2025-10-13T19:32:14.634Z" },
+ { url = "https://files.pythonhosted.org/packages/4e/ae/f872198cffc8564f52c4ef83bcd3e324e5ac914e168c6b812f5ce3f80aab/pydantic_core-2.41.3-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:d5099f1b97e79f0e45cb6a236a5bd1a20078ed50b1b28f3d17f6c83ff3585baa", size = 2316771, upload-time = "2025-10-13T19:32:16.586Z" },
+ { url = "https://files.pythonhosted.org/packages/23/50/f0fce3a9a7554ced178d943e1eada58b15fca896e9eb75d50244fc12007c/pydantic_core-2.41.3-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:b5ff0467a8c1b6abb0ab9c9ea80e2e3a9788592e44c726c2db33fdaf1b5e7d0b", size = 2319449, upload-time = "2025-10-13T19:32:18.503Z" },
+ { url = "https://files.pythonhosted.org/packages/15/1f/86a6948408e8388604c02ffde651a2e39b711bd1ab6eeaff376094553a10/pydantic_core-2.41.3-cp313-cp313-win32.whl", hash = "sha256:edfe9b4cee4a91da7247c25732f24504071f3e101c050694d18194b7d2d320bf", size = 1995352, upload-time = "2025-10-13T19:32:20.5Z" },
+ { url = "https://files.pythonhosted.org/packages/1f/4b/6dac37c3f62684dc459a31623d8ae97ee433fd68bb827e5c64dd831a5087/pydantic_core-2.41.3-cp313-cp313-win_amd64.whl", hash = "sha256:44af3276c0c2c14efde6590523e4d7e04bcd0e46e0134f0dbef1be0b64b2d3e3", size = 2031894, upload-time = "2025-10-13T19:32:23.11Z" },
+ { url = "https://files.pythonhosted.org/packages/fd/75/3d9ba041a3fcb147279fbb37d2468efe62606809fec97b8de78174335ef4/pydantic_core-2.41.3-cp313-cp313-win_arm64.whl", hash = "sha256:59aeed341f92440d51fdcc82c8e930cfb234f1843ed1d4ae1074f5fb9789a64b", size = 1974036, upload-time = "2025-10-13T19:32:25.219Z" },
+ { url = "https://files.pythonhosted.org/packages/50/68/45842628ccdb384df029f884ef915306d195c4f08b66ca4d99867edc6338/pydantic_core-2.41.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:ef37228238b3a280170ac43a010835c4a7005742bc8831c2c1a9560de4595dbe", size = 1876856, upload-time = "2025-10-13T19:32:27.504Z" },
+ { url = "https://files.pythonhosted.org/packages/99/73/336a82910c6a482a0ba9a255c08dcc456ebca9735df96d7a82dffe17626a/pydantic_core-2.41.3-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c5cb19f36253152c509abe76c1d1b185436e0c75f392a82934fe37f4a1264449", size = 1884665, upload-time = "2025-10-13T19:32:29.567Z" },
+ { url = "https://files.pythonhosted.org/packages/34/87/ec610a7849561e0ef7c25b74ef934d154454c3aac8fb595b899557f3c6ab/pydantic_core-2.41.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:91be4756e05367ce19a70e1db3b77f01f9e40ca70d26fb4cdfa993e53a08964a", size = 2043067, upload-time = "2025-10-13T19:32:31.506Z" },
+ { url = "https://files.pythonhosted.org/packages/db/b4/5f2b0cf78752f9111177423bd5f2bc0815129e587c13401636b8900a417e/pydantic_core-2.41.3-cp313-cp313t-win_amd64.whl", hash = "sha256:ce7d8f4353f82259b55055bd162bbaf599f6c40cd0c098e989eeb95f9fdc022f", size = 1996799, upload-time = "2025-10-13T19:32:33.612Z" },
+ { url = "https://files.pythonhosted.org/packages/49/7f/07e7f19a6a44a52abd48846e348e11fa1b3de5ed7c0231d53f055ffb365f/pydantic_core-2.41.3-cp313-cp313t-win_arm64.whl", hash = "sha256:f06a9e81da60e5a0ef584f6f4790f925c203880ae391bf363d97126fd1790b21", size = 1969574, upload-time = "2025-10-13T19:32:35.533Z" },
+ { url = "https://files.pythonhosted.org/packages/f1/d8/db32fbced75853c1d8e7ada8cb2b837ade99b2f281de569908de3e29f0bf/pydantic_core-2.41.3-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:0c77e8e72344e34052ea26905fa7551ecb75fc12795ca1a8e44f816918f4c718", size = 2103383, upload-time = "2025-10-13T19:32:37.522Z" },
+ { url = "https://files.pythonhosted.org/packages/de/28/5bcb3327b3777994633f4cb459c5dc34a9cbe6cf0ac449d3e8f1e74bdaaa/pydantic_core-2.41.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:32be442a017e82a6c496a52ef5db5f5ac9abf31c3064f5240ee15a1d27cc599e", size = 1904974, upload-time = "2025-10-13T19:32:39.513Z" },
+ { url = "https://files.pythonhosted.org/packages/71/8d/c9d8cad7c02d63869079fb6fb61b8ab27adbeeda0bf130c684fe43daa126/pydantic_core-2.41.3-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:af10c78f0e9086d2d883ddd5a6482a613ad435eb5739cf1467b1f86169e63d91", size = 1956879, upload-time = "2025-10-13T19:32:41.849Z" },
+ { url = "https://files.pythonhosted.org/packages/15/b1/8a84b55631a45375a467df288d8f905bec0abadb1e75bce3b32402b49733/pydantic_core-2.41.3-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6212874118704e27d177acee5b90b83556b14b2eb88aae01bae51cd9efe27019", size = 2051787, upload-time = "2025-10-13T19:32:43.86Z" },
+ { url = "https://files.pythonhosted.org/packages/c3/97/a84ea9cb7ba4dbfd43865e5dd536b22c78ee763d82d501c6f6a553403c00/pydantic_core-2.41.3-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c6a24c82674a3a8e7f7306e57e98219e5c1cdfc0f57bc70986930dda136230b2", size = 2217830, upload-time = "2025-10-13T19:32:46.053Z" },
+ { url = "https://files.pythonhosted.org/packages/1a/2c/64233c77410e314dbb7f2e8112be7f56de57cf64198a32d8ab3f7b74adf4/pydantic_core-2.41.3-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8e0c81dc047c18059410c959a437540abcefea6a882d6e43b9bf45c291eaacd9", size = 2341131, upload-time = "2025-10-13T19:32:48.402Z" },
+ { url = "https://files.pythonhosted.org/packages/23/3d/915b90eb0de93bd522b293fd1a986289f5d576c72e640f3bb426b496d095/pydantic_core-2.41.3-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0c0d7e1a9f80f00a8180b9194ecef66958eb03f3c3ae2d77195c9d665ac0a61e", size = 2063797, upload-time = "2025-10-13T19:32:50.458Z" },
+ { url = "https://files.pythonhosted.org/packages/4d/25/a65665caa86e496e19feef48e6bd9263c1a46f222e8f9b0818f67bd98dc3/pydantic_core-2.41.3-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2868fabfc35ec0738539ce0d79aab37aeffdcb9682b9b91f0ac4b0ba31abb1eb", size = 2193041, upload-time = "2025-10-13T19:32:52.686Z" },
+ { url = "https://files.pythonhosted.org/packages/cd/46/a7f7e17f99ee691a7d93a53aa41bf7d1b1d425945b6e9bc8020498a413e1/pydantic_core-2.41.3-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:cb4f40c93307e1c50996e4edcddf338e1f3f1fb86fb69b654111c6050ae3b081", size = 2136119, upload-time = "2025-10-13T19:32:54.737Z" },
+ { url = "https://files.pythonhosted.org/packages/5f/92/c27c1f3edd06e04af71358aa8f4d244c8bc6726e3fb47e00157d3dffe66f/pydantic_core-2.41.3-cp314-cp314-musllinux_1_1_armv7l.whl", hash = "sha256:287cbcd3407a875eaf0b1efa2e5288493d5b79bfd3629459cf0b329ad8a9071a", size = 2317223, upload-time = "2025-10-13T19:32:56.927Z" },
+ { url = "https://files.pythonhosted.org/packages/51/6c/20aabe3c32888fb13d4726e405716fed14b1d4d1d4292d585862c1458b7b/pydantic_core-2.41.3-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:5253835aa145049205a67056884555a936f9b3fea7c3ce860bff62be6a1ae4d1", size = 2320425, upload-time = "2025-10-13T19:32:59.454Z" },
+ { url = "https://files.pythonhosted.org/packages/67/d2/476d4bc6b3070e151ae920167f27f26415e12f8fcc6cf5a47a613aba7267/pydantic_core-2.41.3-cp314-cp314-win32.whl", hash = "sha256:69297795efe5349156d18eebea818b75d29a1d3d1d5f26a250f22ab4220aacd6", size = 1994216, upload-time = "2025-10-13T19:33:01.484Z" },
+ { url = "https://files.pythonhosted.org/packages/16/ca/2cd8515584b3d665ca3c4d946364c2a9932d0d5648694c2a10d273cde81c/pydantic_core-2.41.3-cp314-cp314-win_amd64.whl", hash = "sha256:e1c133e3447c2f6d95e47ede58fff0053370758112a1d39117d0af8c93584049", size = 2026522, upload-time = "2025-10-13T19:33:03.546Z" },
+ { url = "https://files.pythonhosted.org/packages/77/61/c9f2791d7188594f0abdc1b7fe8ec3efc123ee2d9c553fd3b6da2d9fd53d/pydantic_core-2.41.3-cp314-cp314-win_arm64.whl", hash = "sha256:54534eecbb7a331521f832e15fc307296f491ee1918dacfd4d5b900da6ee3332", size = 1969070, upload-time = "2025-10-13T19:33:05.604Z" },
+ { url = "https://files.pythonhosted.org/packages/b5/eb/45f9a91f8c09f4cfb62f78dce909b20b6047ce4fd8d89310fcac5ad62e54/pydantic_core-2.41.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:6b4be10152098b43c093a4b5e9e9da1ac7a1c954c1934d4438d07ba7b7bcf293", size = 1876593, upload-time = "2025-10-13T19:33:07.814Z" },
+ { url = "https://files.pythonhosted.org/packages/99/f8/5c9d0959e0e1f260eea297a5ecc1dc29a14e03ee6a533e805407e8403c1a/pydantic_core-2.41.3-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe4ebd676c158a7994253161151b476dbbef2acbd2f547cfcfdf332cf67cc29", size = 1882977, upload-time = "2025-10-13T19:33:10.109Z" },
+ { url = "https://files.pythonhosted.org/packages/8b/f4/7ab918e35f55e7beee471ba8c67dfc4c9c19a8904e4867bfda7f9c76a72e/pydantic_core-2.41.3-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:984ca0113b39dda1d7c358d6db03dd6539ef244d0558351806c1327239e035bf", size = 2041033, upload-time = "2025-10-13T19:33:12.216Z" },
+ { url = "https://files.pythonhosted.org/packages/a6/c8/5b12e5a36410ebcd0082ae5b0258150d72762e306f298cc3fe731b5574ec/pydantic_core-2.41.3-cp314-cp314t-win_amd64.whl", hash = "sha256:2a7dd8a6f5a9a2f8c7f36e4fc0982a985dbc4ac7176ee3df9f63179b7295b626", size = 1994462, upload-time = "2025-10-13T19:33:14.421Z" },
+ { url = "https://files.pythonhosted.org/packages/6b/f6/c6f3b7244a2a0524f4a04052e3d590d3be0ba82eb1a2f0fe5d068237701e/pydantic_core-2.41.3-cp314-cp314t-win_arm64.whl", hash = "sha256:b387f08b378924fa82bd86e03c9d61d6daca1a73ffb3947bdcfe12ea14c41f68", size = 1973551, upload-time = "2025-10-13T19:33:16.87Z" },
+ { url = "https://files.pythonhosted.org/packages/80/7c/837dc1d5f09728590ace987fcaad83ec4539dcd73ce4ea5a0b786ee0a921/pydantic_core-2.41.3-graalpy311-graalpy242_311_native-macosx_10_12_x86_64.whl", hash = "sha256:98ad9402d6cc194b21adb4626ead88fcce8bc287ef434502dbb4d5b71bdb9a47", size = 2122049, upload-time = "2025-10-13T19:33:49.808Z" },
+ { url = "https://files.pythonhosted.org/packages/00/7d/d9c6d70571219d826381049df60188777de0283d7f01077bfb7ec26cb121/pydantic_core-2.41.3-graalpy311-graalpy242_311_native-macosx_11_0_arm64.whl", hash = "sha256:539b1c01251fbc0789ad4e1dccf3e888062dd342b2796f403406855498afbc36", size = 1936957, upload-time = "2025-10-13T19:33:52.768Z" },
+ { url = "https://files.pythonhosted.org/packages/7f/d3/5e69eba2752a47815adcf9ff7fcfdb81c600b7c87823037d8e746db835cf/pydantic_core-2.41.3-graalpy311-graalpy242_311_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:12019e3a4ded7c4e84b11a761be843dfa9837444a1d7f621888ad499f0f72643", size = 1957032, upload-time = "2025-10-13T19:33:55.46Z" },
+ { url = "https://files.pythonhosted.org/packages/4c/98/799db4be56a16fb22152c5473f806c7bb818115f1648bee3ac29a7d5fb9e/pydantic_core-2.41.3-graalpy311-graalpy242_311_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d5e01519c8322a489167abb1aceaab1a9e4c7d3e665dc3f7b0b1355910fcb698", size = 2140010, upload-time = "2025-10-13T19:33:57.881Z" },
+ { url = "https://files.pythonhosted.org/packages/68/e6/a41dec3d50cfbd7445334459e847f97a62c5658d2c6da268886928ffd357/pydantic_core-2.41.3-graalpy312-graalpy250_312_native-macosx_10_12_x86_64.whl", hash = "sha256:a6ded5abbb7391c0db9e002aaa5f0e3a49a024b0a22e2ed09ab69087fd5ab8a8", size = 2112077, upload-time = "2025-10-13T19:34:00.77Z" },
+ { url = "https://files.pythonhosted.org/packages/44/38/e136a52ae85265a07999439cd8dcd24ba4e83e23d61e40000cd74b426f19/pydantic_core-2.41.3-graalpy312-graalpy250_312_native-macosx_11_0_arm64.whl", hash = "sha256:43abc869cce9104ff35cb4eff3028e9a87346c95fe44e0173036bf4d782bdc3d", size = 1920464, upload-time = "2025-10-13T19:34:03.454Z" },
+ { url = "https://files.pythonhosted.org/packages/3e/5d/a3f509f682818ded836bd006adce08d731d81c77694a26a0a1a448f3e351/pydantic_core-2.41.3-graalpy312-graalpy250_312_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb3c63f4014a603caee687cd5c3c63298d2c8951b7acb2ccd0befbf2e1c0b8ad", size = 1951926, upload-time = "2025-10-13T19:34:05.983Z" },
+ { url = "https://files.pythonhosted.org/packages/59/0e/cb30ad2a0147cc7763c0c805ee1c534f6ed5d5db7bc8cf8ebaf34b4c9dab/pydantic_core-2.41.3-graalpy312-graalpy250_312_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:88461e25f62e58db4d8b180e2612684f31b5844db0a8f8c1c421498c97bc197b", size = 2139233, upload-time = "2025-10-13T19:34:08.396Z" },
+ { url = "https://files.pythonhosted.org/packages/61/39/92380b350c0f22ae2c8ca11acc8b45ac39de55b8b750680459527e224d86/pydantic_core-2.41.3-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:219a95d7638c6b3a50de749747afdf1c2bdf027653e4a3e1df2fefa1e238d8eb", size = 2108918, upload-time = "2025-10-13T19:34:10.79Z" },
+ { url = "https://files.pythonhosted.org/packages/bf/94/683a4efcbd1c890b88d6898a46e537b443eaf157bf78fb44f47a2474d47a/pydantic_core-2.41.3-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:21d4e730b75cfc62b3e24261030bd223ed5f867039f971027c551a7ab911f460", size = 1930618, upload-time = "2025-10-13T19:34:13.226Z" },
+ { url = "https://files.pythonhosted.org/packages/38/b4/44a6ce874bc629a0a4a42a0370955ff46b2db302bfcd895d69b28e73372a/pydantic_core-2.41.3-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:79d9a98a80309189a49cffcd507c85032a2df35d005bd12d655f425ca80eec3d", size = 2135930, upload-time = "2025-10-13T19:34:15.592Z" },
+ { url = "https://files.pythonhosted.org/packages/a1/5f/1bf4ad96b1679e0889c21707c767f0b2a5910413b2587ea830eee620c74c/pydantic_core-2.41.3-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:20f7d53153eb2a5c2f7a8cccf1a45022e2b75668cad274f998b43313da03053d", size = 2182112, upload-time = "2025-10-13T19:34:18.209Z" },
+ { url = "https://files.pythonhosted.org/packages/b8/ed/6c39d1ba28b00459baa452629d6cdf3fbbfd40d774655a6c15b8af3b7312/pydantic_core-2.41.3-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:e2135eff48d3b6a2abfe7b26395d350ea76a460d3de3cf2521fe2f15f222fa29", size = 2146549, upload-time = "2025-10-13T19:34:20.652Z" },
+ { url = "https://files.pythonhosted.org/packages/f0/fd/550a234486e69682311f060be25c2355fd28434d4506767a729a7902ee2d/pydantic_core-2.41.3-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:005bf20e48f6272803de8ba0be076e5bd7d015b7f02ebcc989bc24f85636d1d8", size = 2311299, upload-time = "2025-10-13T19:34:23.097Z" },
+ { url = "https://files.pythonhosted.org/packages/cb/5c/61cb3ad96dcba2fe4c5a618c9ad30661077da22fdae190c4aefbee5a1cc3/pydantic_core-2.41.3-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:d4ebfa1864046c44669cd789a613ec39ee194fe73842e369d129d716730216d9", size = 2321969, upload-time = "2025-10-13T19:34:25.52Z" },
+ { url = "https://files.pythonhosted.org/packages/45/99/6b10a391feb74d2ff21b5597a632f7f9ad50afe3a9bfe1de0a1b10aee0cb/pydantic_core-2.41.3-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:cb82cd643a2ad7ebf94bdb7fa6c339801b0fe8c7920610d6da7b691647ef5842", size = 2150346, upload-time = "2025-10-13T19:34:28.101Z" },
+ { url = "https://files.pythonhosted.org/packages/1d/84/14c7ed3428feb718792fc2ecc5d04c12e46cb5c65620717c6826428ee468/pydantic_core-2.41.3-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5e67f86ffb40127851dba662b2d0ab400264ed37cfedeab6100515df41ccb325", size = 2106894, upload-time = "2025-10-13T19:34:30.905Z" },
+ { url = "https://files.pythonhosted.org/packages/ea/5d/d129794fc3990a49b12963d7cc25afc6a458fe85221b8a78cf46c5f22135/pydantic_core-2.41.3-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:ecad4d7d264f6df23db68ca3024919a7aab34b4c44d9a9280952863a7a0c5e81", size = 1929911, upload-time = "2025-10-13T19:34:33.399Z" },
+ { url = "https://files.pythonhosted.org/packages/d3/89/8fe254b1725a48f4da1978fa21268f142846c2d653715161afc394e67486/pydantic_core-2.41.3-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fce6e6505b9807d3c20476fa016d0bd4d54a858fe648d6f5ef065286410c3da7", size = 2133972, upload-time = "2025-10-13T19:34:35.994Z" },
+ { url = "https://files.pythonhosted.org/packages/75/26/eefc7f23167a8060e29fcbb99d15158729ea794ee5b5c11ecc4df73b21c9/pydantic_core-2.41.3-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:05974468cff84ea112ad4992823f1300d822ad51df0eba4c3af3c4a4cbe5eca0", size = 2181777, upload-time = "2025-10-13T19:34:38.762Z" },
+ { url = "https://files.pythonhosted.org/packages/67/ba/03c5a00a9251fc5fe22d5807bc52cf0863b9486f0086a45094adee77fa0b/pydantic_core-2.41.3-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:091d3966dc2379e07b45b4fd9651fbab5b24ea3c62cc40637beaf691695e5f5a", size = 2144699, upload-time = "2025-10-13T19:34:41.29Z" },
+ { url = "https://files.pythonhosted.org/packages/9e/4e/ee90dc6c99c8261c89ce1c2311395e7a0432dfc20db1bd6d9be917a92320/pydantic_core-2.41.3-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:16f216e4371a05ad3baa5aed152eae056c7e724663c2bcbb38edd607c17baa89", size = 2311388, upload-time = "2025-10-13T19:34:43.843Z" },
+ { url = "https://files.pythonhosted.org/packages/f5/01/7f3e4ed3963113e5e9df8077f3015facae0cd3a65ac5688d308010405a0e/pydantic_core-2.41.3-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:2e169371f88113c8e642f7ac42c798109f1270832b577b5144962a7a028bfb0c", size = 2320916, upload-time = "2025-10-13T19:34:46.417Z" },
+ { url = "https://files.pythonhosted.org/packages/eb/d7/91ef73afa5c275962edd708559148e153d95866f8baf96142ab4804da67a/pydantic_core-2.41.3-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:83847aa6026fb7149b9ef06e10c73ff83ac1d2aa478b28caa4f050670c1c9a37", size = 2148327, upload-time = "2025-10-13T19:34:48.929Z" },
]
[[package]]
@@ -5320,38 +5343,63 @@ wheels = [
[[package]]
name = "tiktoken"
-version = "0.11.0"
+version = "0.12.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "regex" },
{ name = "requests" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/a7/86/ad0155a37c4f310935d5ac0b1ccf9bdb635dcb906e0a9a26b616dd55825a/tiktoken-0.11.0.tar.gz", hash = "sha256:3c518641aee1c52247c2b97e74d8d07d780092af79d5911a6ab5e79359d9b06a", size = 37648, upload-time = "2025-08-08T23:58:08.495Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/7d/ab/4d017d0f76ec3171d469d80fc03dfbb4e48a4bcaddaa831b31d526f05edc/tiktoken-0.12.0.tar.gz", hash = "sha256:b18ba7ee2b093863978fcb14f74b3707cdc8d4d4d3836853ce7ec60772139931", size = 37806, upload-time = "2025-10-06T20:22:45.419Z" }
wheels = [
- { url = "https://files.pythonhosted.org/packages/8b/4d/c6a2e7dca2b4f2e9e0bfd62b3fe4f114322e2c028cfba905a72bc76ce479/tiktoken-0.11.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:8a9b517d6331d7103f8bef29ef93b3cca95fa766e293147fe7bacddf310d5917", size = 1059937, upload-time = "2025-08-08T23:57:28.57Z" },
- { url = "https://files.pythonhosted.org/packages/41/54/3739d35b9f94cb8dc7b0db2edca7192d5571606aa2369a664fa27e811804/tiktoken-0.11.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b4ddb1849e6bf0afa6cc1c5d809fb980ca240a5fffe585a04e119519758788c0", size = 999230, upload-time = "2025-08-08T23:57:30.241Z" },
- { url = "https://files.pythonhosted.org/packages/dd/f4/ec8d43338d28d53513004ebf4cd83732a135d11011433c58bf045890cc10/tiktoken-0.11.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:10331d08b5ecf7a780b4fe4d0281328b23ab22cdb4ff65e68d56caeda9940ecc", size = 1130076, upload-time = "2025-08-08T23:57:31.706Z" },
- { url = "https://files.pythonhosted.org/packages/94/80/fb0ada0a882cb453caf519a4bf0d117c2a3ee2e852c88775abff5413c176/tiktoken-0.11.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b062c82300341dc87e0258c69f79bed725f87e753c21887aea90d272816be882", size = 1183942, upload-time = "2025-08-08T23:57:33.142Z" },
- { url = "https://files.pythonhosted.org/packages/2f/e9/6c104355b463601719582823f3ea658bc3aa7c73d1b3b7553ebdc48468ce/tiktoken-0.11.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:195d84bec46169af3b1349a1495c151d37a0ff4cba73fd08282736be7f92cc6c", size = 1244705, upload-time = "2025-08-08T23:57:34.594Z" },
- { url = "https://files.pythonhosted.org/packages/94/75/eaa6068f47e8b3f0aab9e05177cce2cf5aa2cc0ca93981792e620d4d4117/tiktoken-0.11.0-cp310-cp310-win_amd64.whl", hash = "sha256:fe91581b0ecdd8783ce8cb6e3178f2260a3912e8724d2f2d49552b98714641a1", size = 884152, upload-time = "2025-08-08T23:57:36.18Z" },
- { url = "https://files.pythonhosted.org/packages/8a/91/912b459799a025d2842566fe1e902f7f50d54a1ce8a0f236ab36b5bd5846/tiktoken-0.11.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4ae374c46afadad0f501046db3da1b36cd4dfbfa52af23c998773682446097cf", size = 1059743, upload-time = "2025-08-08T23:57:37.516Z" },
- { url = "https://files.pythonhosted.org/packages/8c/e9/6faa6870489ce64f5f75dcf91512bf35af5864583aee8fcb0dcb593121f5/tiktoken-0.11.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:25a512ff25dc6c85b58f5dd4f3d8c674dc05f96b02d66cdacf628d26a4e4866b", size = 999334, upload-time = "2025-08-08T23:57:38.595Z" },
- { url = "https://files.pythonhosted.org/packages/a1/3e/a05d1547cf7db9dc75d1461cfa7b556a3b48e0516ec29dfc81d984a145f6/tiktoken-0.11.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2130127471e293d385179c1f3f9cd445070c0772be73cdafb7cec9a3684c0458", size = 1129402, upload-time = "2025-08-08T23:57:39.627Z" },
- { url = "https://files.pythonhosted.org/packages/34/9a/db7a86b829e05a01fd4daa492086f708e0a8b53952e1dbc9d380d2b03677/tiktoken-0.11.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21e43022bf2c33f733ea9b54f6a3f6b4354b909f5a73388fb1b9347ca54a069c", size = 1184046, upload-time = "2025-08-08T23:57:40.689Z" },
- { url = "https://files.pythonhosted.org/packages/9d/bb/52edc8e078cf062ed749248f1454e9e5cfd09979baadb830b3940e522015/tiktoken-0.11.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:adb4e308eb64380dc70fa30493e21c93475eaa11669dea313b6bbf8210bfd013", size = 1244691, upload-time = "2025-08-08T23:57:42.251Z" },
- { url = "https://files.pythonhosted.org/packages/60/d9/884b6cd7ae2570ecdcaffa02b528522b18fef1cbbfdbcaa73799807d0d3b/tiktoken-0.11.0-cp311-cp311-win_amd64.whl", hash = "sha256:ece6b76bfeeb61a125c44bbefdfccc279b5288e6007fbedc0d32bfec602df2f2", size = 884392, upload-time = "2025-08-08T23:57:43.628Z" },
- { url = "https://files.pythonhosted.org/packages/e7/9e/eceddeffc169fc75fe0fd4f38471309f11cb1906f9b8aa39be4f5817df65/tiktoken-0.11.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:fd9e6b23e860973cf9526544e220b223c60badf5b62e80a33509d6d40e6c8f5d", size = 1055199, upload-time = "2025-08-08T23:57:45.076Z" },
- { url = "https://files.pythonhosted.org/packages/4f/cf/5f02bfefffdc6b54e5094d2897bc80efd43050e5b09b576fd85936ee54bf/tiktoken-0.11.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6a76d53cee2da71ee2731c9caa747398762bda19d7f92665e882fef229cb0b5b", size = 996655, upload-time = "2025-08-08T23:57:46.304Z" },
- { url = "https://files.pythonhosted.org/packages/65/8e/c769b45ef379bc360c9978c4f6914c79fd432400a6733a8afc7ed7b0726a/tiktoken-0.11.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6ef72aab3ea240646e642413cb363b73869fed4e604dcfd69eec63dc54d603e8", size = 1128867, upload-time = "2025-08-08T23:57:47.438Z" },
- { url = "https://files.pythonhosted.org/packages/d5/2d/4d77f6feb9292bfdd23d5813e442b3bba883f42d0ac78ef5fdc56873f756/tiktoken-0.11.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7f929255c705efec7a28bf515e29dc74220b2f07544a8c81b8d69e8efc4578bd", size = 1183308, upload-time = "2025-08-08T23:57:48.566Z" },
- { url = "https://files.pythonhosted.org/packages/7a/65/7ff0a65d3bb0fc5a1fb6cc71b03e0f6e71a68c5eea230d1ff1ba3fd6df49/tiktoken-0.11.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:61f1d15822e4404953d499fd1dcc62817a12ae9fb1e4898033ec8fe3915fdf8e", size = 1244301, upload-time = "2025-08-08T23:57:49.642Z" },
- { url = "https://files.pythonhosted.org/packages/f5/6e/5b71578799b72e5bdcef206a214c3ce860d999d579a3b56e74a6c8989ee2/tiktoken-0.11.0-cp312-cp312-win_amd64.whl", hash = "sha256:45927a71ab6643dfd3ef57d515a5db3d199137adf551f66453be098502838b0f", size = 884282, upload-time = "2025-08-08T23:57:50.759Z" },
- { url = "https://files.pythonhosted.org/packages/cc/cd/a9034bcee638716d9310443818d73c6387a6a96db93cbcb0819b77f5b206/tiktoken-0.11.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:a5f3f25ffb152ee7fec78e90a5e5ea5b03b4ea240beed03305615847f7a6ace2", size = 1055339, upload-time = "2025-08-08T23:57:51.802Z" },
- { url = "https://files.pythonhosted.org/packages/f1/91/9922b345f611b4e92581f234e64e9661e1c524875c8eadd513c4b2088472/tiktoken-0.11.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7dc6e9ad16a2a75b4c4be7208055a1f707c9510541d94d9cc31f7fbdc8db41d8", size = 997080, upload-time = "2025-08-08T23:57:53.442Z" },
- { url = "https://files.pythonhosted.org/packages/d0/9d/49cd047c71336bc4b4af460ac213ec1c457da67712bde59b892e84f1859f/tiktoken-0.11.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5a0517634d67a8a48fd4a4ad73930c3022629a85a217d256a6e9b8b47439d1e4", size = 1128501, upload-time = "2025-08-08T23:57:54.808Z" },
- { url = "https://files.pythonhosted.org/packages/52/d5/a0dcdb40dd2ea357e83cb36258967f0ae96f5dd40c722d6e382ceee6bba9/tiktoken-0.11.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7fb4effe60574675118b73c6fbfd3b5868e5d7a1f570d6cc0d18724b09ecf318", size = 1182743, upload-time = "2025-08-08T23:57:56.307Z" },
- { url = "https://files.pythonhosted.org/packages/3b/17/a0fc51aefb66b7b5261ca1314afa83df0106b033f783f9a7bcbe8e741494/tiktoken-0.11.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:94f984c9831fd32688aef4348803b0905d4ae9c432303087bae370dc1381a2b8", size = 1244057, upload-time = "2025-08-08T23:57:57.628Z" },
- { url = "https://files.pythonhosted.org/packages/50/79/bcf350609f3a10f09fe4fc207f132085e497fdd3612f3925ab24d86a0ca0/tiktoken-0.11.0-cp313-cp313-win_amd64.whl", hash = "sha256:2177ffda31dec4023356a441793fed82f7af5291120751dee4d696414f54db0c", size = 883901, upload-time = "2025-08-08T23:57:59.359Z" },
+ { url = "https://files.pythonhosted.org/packages/89/b3/2cb7c17b6c4cf8ca983204255d3f1d95eda7213e247e6947a0ee2c747a2c/tiktoken-0.12.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:3de02f5a491cfd179aec916eddb70331814bd6bf764075d39e21d5862e533970", size = 1051991, upload-time = "2025-10-06T20:21:34.098Z" },
+ { url = "https://files.pythonhosted.org/packages/27/0f/df139f1df5f6167194ee5ab24634582ba9a1b62c6b996472b0277ec80f66/tiktoken-0.12.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b6cfb6d9b7b54d20af21a912bfe63a2727d9cfa8fbda642fd8322c70340aad16", size = 995798, upload-time = "2025-10-06T20:21:35.579Z" },
+ { url = "https://files.pythonhosted.org/packages/ef/5d/26a691f28ab220d5edc09b9b787399b130f24327ef824de15e5d85ef21aa/tiktoken-0.12.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:cde24cdb1b8a08368f709124f15b36ab5524aac5fa830cc3fdce9c03d4fb8030", size = 1129865, upload-time = "2025-10-06T20:21:36.675Z" },
+ { url = "https://files.pythonhosted.org/packages/b2/94/443fab3d4e5ebecac895712abd3849b8da93b7b7dec61c7db5c9c7ebe40c/tiktoken-0.12.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:6de0da39f605992649b9cfa6f84071e3f9ef2cec458d08c5feb1b6f0ff62e134", size = 1152856, upload-time = "2025-10-06T20:21:37.873Z" },
+ { url = "https://files.pythonhosted.org/packages/54/35/388f941251b2521c70dd4c5958e598ea6d2c88e28445d2fb8189eecc1dfc/tiktoken-0.12.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:6faa0534e0eefbcafaccb75927a4a380463a2eaa7e26000f0173b920e98b720a", size = 1195308, upload-time = "2025-10-06T20:21:39.577Z" },
+ { url = "https://files.pythonhosted.org/packages/f8/00/c6681c7f833dd410576183715a530437a9873fa910265817081f65f9105f/tiktoken-0.12.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:82991e04fc860afb933efb63957affc7ad54f83e2216fe7d319007dab1ba5892", size = 1255697, upload-time = "2025-10-06T20:21:41.154Z" },
+ { url = "https://files.pythonhosted.org/packages/5f/d2/82e795a6a9bafa034bf26a58e68fe9a89eeaaa610d51dbeb22106ba04f0a/tiktoken-0.12.0-cp310-cp310-win_amd64.whl", hash = "sha256:6fb2995b487c2e31acf0a9e17647e3b242235a20832642bb7a9d1a181c0c1bb1", size = 879375, upload-time = "2025-10-06T20:21:43.201Z" },
+ { url = "https://files.pythonhosted.org/packages/de/46/21ea696b21f1d6d1efec8639c204bdf20fde8bafb351e1355c72c5d7de52/tiktoken-0.12.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:6e227c7f96925003487c33b1b32265fad2fbcec2b7cf4817afb76d416f40f6bb", size = 1051565, upload-time = "2025-10-06T20:21:44.566Z" },
+ { url = "https://files.pythonhosted.org/packages/c9/d9/35c5d2d9e22bb2a5f74ba48266fb56c63d76ae6f66e02feb628671c0283e/tiktoken-0.12.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c06cf0fcc24c2cb2adb5e185c7082a82cba29c17575e828518c2f11a01f445aa", size = 995284, upload-time = "2025-10-06T20:21:45.622Z" },
+ { url = "https://files.pythonhosted.org/packages/01/84/961106c37b8e49b9fdcf33fe007bb3a8fdcc380c528b20cc7fbba80578b8/tiktoken-0.12.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:f18f249b041851954217e9fd8e5c00b024ab2315ffda5ed77665a05fa91f42dc", size = 1129201, upload-time = "2025-10-06T20:21:47.074Z" },
+ { url = "https://files.pythonhosted.org/packages/6a/d0/3d9275198e067f8b65076a68894bb52fd253875f3644f0a321a720277b8a/tiktoken-0.12.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:47a5bc270b8c3db00bb46ece01ef34ad050e364b51d406b6f9730b64ac28eded", size = 1152444, upload-time = "2025-10-06T20:21:48.139Z" },
+ { url = "https://files.pythonhosted.org/packages/78/db/a58e09687c1698a7c592e1038e01c206569b86a0377828d51635561f8ebf/tiktoken-0.12.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:508fa71810c0efdcd1b898fda574889ee62852989f7c1667414736bcb2b9a4bd", size = 1195080, upload-time = "2025-10-06T20:21:49.246Z" },
+ { url = "https://files.pythonhosted.org/packages/9e/1b/a9e4d2bf91d515c0f74afc526fd773a812232dd6cda33ebea7f531202325/tiktoken-0.12.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:a1af81a6c44f008cba48494089dd98cccb8b313f55e961a52f5b222d1e507967", size = 1255240, upload-time = "2025-10-06T20:21:50.274Z" },
+ { url = "https://files.pythonhosted.org/packages/9d/15/963819345f1b1fb0809070a79e9dd96938d4ca41297367d471733e79c76c/tiktoken-0.12.0-cp311-cp311-win_amd64.whl", hash = "sha256:3e68e3e593637b53e56f7237be560f7a394451cb8c11079755e80ae64b9e6def", size = 879422, upload-time = "2025-10-06T20:21:51.734Z" },
+ { url = "https://files.pythonhosted.org/packages/a4/85/be65d39d6b647c79800fd9d29241d081d4eeb06271f383bb87200d74cf76/tiktoken-0.12.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:b97f74aca0d78a1ff21b8cd9e9925714c15a9236d6ceacf5c7327c117e6e21e8", size = 1050728, upload-time = "2025-10-06T20:21:52.756Z" },
+ { url = "https://files.pythonhosted.org/packages/4a/42/6573e9129bc55c9bf7300b3a35bef2c6b9117018acca0dc760ac2d93dffe/tiktoken-0.12.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2b90f5ad190a4bb7c3eb30c5fa32e1e182ca1ca79f05e49b448438c3e225a49b", size = 994049, upload-time = "2025-10-06T20:21:53.782Z" },
+ { url = "https://files.pythonhosted.org/packages/66/c5/ed88504d2f4a5fd6856990b230b56d85a777feab84e6129af0822f5d0f70/tiktoken-0.12.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:65b26c7a780e2139e73acc193e5c63ac754021f160df919add909c1492c0fb37", size = 1129008, upload-time = "2025-10-06T20:21:54.832Z" },
+ { url = "https://files.pythonhosted.org/packages/f4/90/3dae6cc5436137ebd38944d396b5849e167896fc2073da643a49f372dc4f/tiktoken-0.12.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:edde1ec917dfd21c1f2f8046b86348b0f54a2c0547f68149d8600859598769ad", size = 1152665, upload-time = "2025-10-06T20:21:56.129Z" },
+ { url = "https://files.pythonhosted.org/packages/a3/fe/26df24ce53ffde419a42f5f53d755b995c9318908288c17ec3f3448313a3/tiktoken-0.12.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:35a2f8ddd3824608b3d650a000c1ef71f730d0c56486845705a8248da00f9fe5", size = 1194230, upload-time = "2025-10-06T20:21:57.546Z" },
+ { url = "https://files.pythonhosted.org/packages/20/cc/b064cae1a0e9fac84b0d2c46b89f4e57051a5f41324e385d10225a984c24/tiktoken-0.12.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:83d16643edb7fa2c99eff2ab7733508aae1eebb03d5dfc46f5565862810f24e3", size = 1254688, upload-time = "2025-10-06T20:21:58.619Z" },
+ { url = "https://files.pythonhosted.org/packages/81/10/b8523105c590c5b8349f2587e2fdfe51a69544bd5a76295fc20f2374f470/tiktoken-0.12.0-cp312-cp312-win_amd64.whl", hash = "sha256:ffc5288f34a8bc02e1ea7047b8d041104791d2ddbf42d1e5fa07822cbffe16bd", size = 878694, upload-time = "2025-10-06T20:21:59.876Z" },
+ { url = "https://files.pythonhosted.org/packages/00/61/441588ee21e6b5cdf59d6870f86beb9789e532ee9718c251b391b70c68d6/tiktoken-0.12.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:775c2c55de2310cc1bc9a3ad8826761cbdc87770e586fd7b6da7d4589e13dab3", size = 1050802, upload-time = "2025-10-06T20:22:00.96Z" },
+ { url = "https://files.pythonhosted.org/packages/1f/05/dcf94486d5c5c8d34496abe271ac76c5b785507c8eae71b3708f1ad9b45a/tiktoken-0.12.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a01b12f69052fbe4b080a2cfb867c4de12c704b56178edf1d1d7b273561db160", size = 993995, upload-time = "2025-10-06T20:22:02.788Z" },
+ { url = "https://files.pythonhosted.org/packages/a0/70/5163fe5359b943f8db9946b62f19be2305de8c3d78a16f629d4165e2f40e/tiktoken-0.12.0-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:01d99484dc93b129cd0964f9d34eee953f2737301f18b3c7257bf368d7615baa", size = 1128948, upload-time = "2025-10-06T20:22:03.814Z" },
+ { url = "https://files.pythonhosted.org/packages/0c/da/c028aa0babf77315e1cef357d4d768800c5f8a6de04d0eac0f377cb619fa/tiktoken-0.12.0-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:4a1a4fcd021f022bfc81904a911d3df0f6543b9e7627b51411da75ff2fe7a1be", size = 1151986, upload-time = "2025-10-06T20:22:05.173Z" },
+ { url = "https://files.pythonhosted.org/packages/a0/5a/886b108b766aa53e295f7216b509be95eb7d60b166049ce2c58416b25f2a/tiktoken-0.12.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:981a81e39812d57031efdc9ec59fa32b2a5a5524d20d4776574c4b4bd2e9014a", size = 1194222, upload-time = "2025-10-06T20:22:06.265Z" },
+ { url = "https://files.pythonhosted.org/packages/f4/f8/4db272048397636ac7a078d22773dd2795b1becee7bc4922fe6207288d57/tiktoken-0.12.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9baf52f84a3f42eef3ff4e754a0db79a13a27921b457ca9832cf944c6be4f8f3", size = 1255097, upload-time = "2025-10-06T20:22:07.403Z" },
+ { url = "https://files.pythonhosted.org/packages/8e/32/45d02e2e0ea2be3a9ed22afc47d93741247e75018aac967b713b2941f8ea/tiktoken-0.12.0-cp313-cp313-win_amd64.whl", hash = "sha256:b8a0cd0c789a61f31bf44851defbd609e8dd1e2c8589c614cc1060940ef1f697", size = 879117, upload-time = "2025-10-06T20:22:08.418Z" },
+ { url = "https://files.pythonhosted.org/packages/ce/76/994fc868f88e016e6d05b0da5ac24582a14c47893f4474c3e9744283f1d5/tiktoken-0.12.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:d5f89ea5680066b68bcb797ae85219c72916c922ef0fcdd3480c7d2315ffff16", size = 1050309, upload-time = "2025-10-06T20:22:10.939Z" },
+ { url = "https://files.pythonhosted.org/packages/f6/b8/57ef1456504c43a849821920d582a738a461b76a047f352f18c0b26c6516/tiktoken-0.12.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:b4e7ed1c6a7a8a60a3230965bdedba8cc58f68926b835e519341413370e0399a", size = 993712, upload-time = "2025-10-06T20:22:12.115Z" },
+ { url = "https://files.pythonhosted.org/packages/72/90/13da56f664286ffbae9dbcfadcc625439142675845baa62715e49b87b68b/tiktoken-0.12.0-cp313-cp313t-manylinux_2_28_aarch64.whl", hash = "sha256:fc530a28591a2d74bce821d10b418b26a094bf33839e69042a6e86ddb7a7fb27", size = 1128725, upload-time = "2025-10-06T20:22:13.541Z" },
+ { url = "https://files.pythonhosted.org/packages/05/df/4f80030d44682235bdaecd7346c90f67ae87ec8f3df4a3442cb53834f7e4/tiktoken-0.12.0-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:06a9f4f49884139013b138920a4c393aa6556b2f8f536345f11819389c703ebb", size = 1151875, upload-time = "2025-10-06T20:22:14.559Z" },
+ { url = "https://files.pythonhosted.org/packages/22/1f/ae535223a8c4ef4c0c1192e3f9b82da660be9eb66b9279e95c99288e9dab/tiktoken-0.12.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:04f0e6a985d95913cabc96a741c5ffec525a2c72e9df086ff17ebe35985c800e", size = 1194451, upload-time = "2025-10-06T20:22:15.545Z" },
+ { url = "https://files.pythonhosted.org/packages/78/a7/f8ead382fce0243cb625c4f266e66c27f65ae65ee9e77f59ea1653b6d730/tiktoken-0.12.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:0ee8f9ae00c41770b5f9b0bb1235474768884ae157de3beb5439ca0fd70f3e25", size = 1253794, upload-time = "2025-10-06T20:22:16.624Z" },
+ { url = "https://files.pythonhosted.org/packages/93/e0/6cc82a562bc6365785a3ff0af27a2a092d57c47d7a81d9e2295d8c36f011/tiktoken-0.12.0-cp313-cp313t-win_amd64.whl", hash = "sha256:dc2dd125a62cb2b3d858484d6c614d136b5b848976794edfb63688d539b8b93f", size = 878777, upload-time = "2025-10-06T20:22:18.036Z" },
+ { url = "https://files.pythonhosted.org/packages/72/05/3abc1db5d2c9aadc4d2c76fa5640134e475e58d9fbb82b5c535dc0de9b01/tiktoken-0.12.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:a90388128df3b3abeb2bfd1895b0681412a8d7dc644142519e6f0a97c2111646", size = 1050188, upload-time = "2025-10-06T20:22:19.563Z" },
+ { url = "https://files.pythonhosted.org/packages/e3/7b/50c2f060412202d6c95f32b20755c7a6273543b125c0985d6fa9465105af/tiktoken-0.12.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:da900aa0ad52247d8794e307d6446bd3cdea8e192769b56276695d34d2c9aa88", size = 993978, upload-time = "2025-10-06T20:22:20.702Z" },
+ { url = "https://files.pythonhosted.org/packages/14/27/bf795595a2b897e271771cd31cb847d479073497344c637966bdf2853da1/tiktoken-0.12.0-cp314-cp314-manylinux_2_28_aarch64.whl", hash = "sha256:285ba9d73ea0d6171e7f9407039a290ca77efcdb026be7769dccc01d2c8d7fff", size = 1129271, upload-time = "2025-10-06T20:22:22.06Z" },
+ { url = "https://files.pythonhosted.org/packages/f5/de/9341a6d7a8f1b448573bbf3425fa57669ac58258a667eb48a25dfe916d70/tiktoken-0.12.0-cp314-cp314-manylinux_2_28_x86_64.whl", hash = "sha256:d186a5c60c6a0213f04a7a802264083dea1bbde92a2d4c7069e1a56630aef830", size = 1151216, upload-time = "2025-10-06T20:22:23.085Z" },
+ { url = "https://files.pythonhosted.org/packages/75/0d/881866647b8d1be4d67cb24e50d0c26f9f807f994aa1510cb9ba2fe5f612/tiktoken-0.12.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:604831189bd05480f2b885ecd2d1986dc7686f609de48208ebbbddeea071fc0b", size = 1194860, upload-time = "2025-10-06T20:22:24.602Z" },
+ { url = "https://files.pythonhosted.org/packages/b3/1e/b651ec3059474dab649b8d5b69f5c65cd8fcd8918568c1935bd4136c9392/tiktoken-0.12.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:8f317e8530bb3a222547b85a58583238c8f74fd7a7408305f9f63246d1a0958b", size = 1254567, upload-time = "2025-10-06T20:22:25.671Z" },
+ { url = "https://files.pythonhosted.org/packages/80/57/ce64fd16ac390fafde001268c364d559447ba09b509181b2808622420eec/tiktoken-0.12.0-cp314-cp314-win_amd64.whl", hash = "sha256:399c3dd672a6406719d84442299a490420b458c44d3ae65516302a99675888f3", size = 921067, upload-time = "2025-10-06T20:22:26.753Z" },
+ { url = "https://files.pythonhosted.org/packages/ac/a4/72eed53e8976a099539cdd5eb36f241987212c29629d0a52c305173e0a68/tiktoken-0.12.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:c2c714c72bc00a38ca969dae79e8266ddec999c7ceccd603cc4f0d04ccd76365", size = 1050473, upload-time = "2025-10-06T20:22:27.775Z" },
+ { url = "https://files.pythonhosted.org/packages/e6/d7/0110b8f54c008466b19672c615f2168896b83706a6611ba6e47313dbc6e9/tiktoken-0.12.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:cbb9a3ba275165a2cb0f9a83f5d7025afe6b9d0ab01a22b50f0e74fee2ad253e", size = 993855, upload-time = "2025-10-06T20:22:28.799Z" },
+ { url = "https://files.pythonhosted.org/packages/5f/77/4f268c41a3957c418b084dd576ea2fad2e95da0d8e1ab705372892c2ca22/tiktoken-0.12.0-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:dfdfaa5ffff8993a3af94d1125870b1d27aed7cb97aa7eb8c1cefdbc87dbee63", size = 1129022, upload-time = "2025-10-06T20:22:29.981Z" },
+ { url = "https://files.pythonhosted.org/packages/4e/2b/fc46c90fe5028bd094cd6ee25a7db321cb91d45dc87531e2bdbb26b4867a/tiktoken-0.12.0-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:584c3ad3d0c74f5269906eb8a659c8bfc6144a52895d9261cdaf90a0ae5f4de0", size = 1150736, upload-time = "2025-10-06T20:22:30.996Z" },
+ { url = "https://files.pythonhosted.org/packages/28/c0/3c7a39ff68022ddfd7d93f3337ad90389a342f761c4d71de99a3ccc57857/tiktoken-0.12.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:54c891b416a0e36b8e2045b12b33dd66fb34a4fe7965565f1b482da50da3e86a", size = 1194908, upload-time = "2025-10-06T20:22:32.073Z" },
+ { url = "https://files.pythonhosted.org/packages/ab/0d/c1ad6f4016a3968c048545f5d9b8ffebf577774b2ede3e2e352553b685fe/tiktoken-0.12.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:5edb8743b88d5be814b1a8a8854494719080c28faaa1ccbef02e87354fe71ef0", size = 1253706, upload-time = "2025-10-06T20:22:33.385Z" },
+ { url = "https://files.pythonhosted.org/packages/af/df/c7891ef9d2712ad774777271d39fdef63941ffba0a9d59b7ad1fd2765e57/tiktoken-0.12.0-cp314-cp314t-win_amd64.whl", hash = "sha256:f61c0aea5565ac82e2ec50a05e02a6c44734e91b51c10510b084ea1b8e633a71", size = 920667, upload-time = "2025-10-06T20:22:34.444Z" },
]
[[package]]
@@ -5643,14 +5691,14 @@ wheels = [
[[package]]
name = "typing-inspection"
-version = "0.4.1"
+version = "0.4.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/f8/b1/0c11f5058406b3af7609f121aaa6b609744687f1d158b3c3a5bf4cc94238/typing_inspection-0.4.1.tar.gz", hash = "sha256:6ae134cc0203c33377d43188d4064e9b357dba58cff3185f22924610e70a9d28", size = 75726, upload-time = "2025-05-21T18:55:23.885Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/55/e3/70399cb7dd41c10ac53367ae42139cf4b1ca5f36bb3dc6c9d33acdb43655/typing_inspection-0.4.2.tar.gz", hash = "sha256:ba561c48a67c5958007083d386c3295464928b01faa735ab8547c5692e87f464", size = 75949, upload-time = "2025-10-01T02:14:41.687Z" }
wheels = [
- { url = "https://files.pythonhosted.org/packages/17/69/cd203477f944c353c31bade965f880aa1061fd6bf05ded0726ca845b6ff7/typing_inspection-0.4.1-py3-none-any.whl", hash = "sha256:389055682238f53b04f7badcb49b989835495a96700ced5dab2d8feae4b26f51", size = 14552, upload-time = "2025-05-21T18:55:22.152Z" },
+ { url = "https://files.pythonhosted.org/packages/dc/9b/47798a6c91d8bdb567fe2698fe81e0c6b7cb7ef4d13da4114b41d239f65d/typing_inspection-0.4.2-py3-none-any.whl", hash = "sha256:4ed1cacbdc298c220f1bd249ed5287caa16f34d44ef4e9c3d0cbad5b521545e7", size = 14611, upload-time = "2025-10-01T02:14:40.154Z" },
]
[[package]]
diff --git a/libs/langchain_v1/Makefile b/libs/langchain_v1/Makefile
index 7df0cec386f..032ffeeee64 100644
--- a/libs/langchain_v1/Makefile
+++ b/libs/langchain_v1/Makefile
@@ -28,7 +28,7 @@ coverage:
$(TEST_FILE)
test:
- make start_services && LANGGRAPH_TEST_FAST=0 uv run --group test pytest -n auto --disable-socket --allow-unix-socket $(TEST_FILE) --cov-report term-missing:skip-covered; \
+ make start_services && LANGGRAPH_TEST_FAST=0 uv run --no-sync --active --group test pytest -n auto --disable-socket --allow-unix-socket $(TEST_FILE) --cov-report term-missing:skip-covered; \
EXIT_CODE=$$?; \
make stop_services; \
exit $$EXIT_CODE
diff --git a/libs/langchain_v1/README.md b/libs/langchain_v1/README.md
index 1960f29e79b..49db352a0f0 100644
--- a/libs/langchain_v1/README.md
+++ b/libs/langchain_v1/README.md
@@ -1,8 +1,7 @@
# π¦οΈπ LangChain
-β‘ Building applications with LLMs through composability β‘
-
-[](https://opensource.org/licenses/MIT)
+[](https://pypi.org/project/langchain/#history)
+[](https://opensource.org/licenses/MIT)
[](https://pypistats.org/packages/langchain)
[](https://twitter.com/langchainai)
@@ -13,67 +12,28 @@ To help you ship LangChain apps to production faster, check out [LangSmith](http
## Quick Install
-`pip install langchain`
+```bash
+pip install langchain
+```
## π€ What is this?
-Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. However, using these LLMs in isolation is often insufficient for creating a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.
+LangChain is the easiest way to start building agents and applications powered by LLMs. With under 10 lines of code, you can connect to OpenAI, Anthropic, Google, and [more](https://docs.langchain.com/oss/python/integrations/providers/overview). LangChain provides a pre-built agent architecture and model integrations to help you get started quickly and seamlessly incorporate LLMs into your agents and applications.
-This library aims to assist in the development of those types of applications. Common examples of these applications include:
+We recommend you use LangChain if you want to quickly build agents and autonomous applications. Use [LangGraph](https://docs.langchain.com/oss/python/langgraph/overview), our low-level agent orchestration framework and runtime, when you have more advanced needs that require a combination of deterministic and agentic workflows, heavy customization, and carefully controlled latency.
-**β Question answering with RAG**
-
-- [Documentation](https://python.langchain.com/docs/tutorials/rag/)
-- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
-
-**π§± Extracting structured output**
-
-- [Documentation](https://python.langchain.com/docs/tutorials/extraction/)
-- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
-
-**π€ Chatbots**
-
-- [Documentation](https://python.langchain.com/docs/tutorials/chatbot/)
-- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)
+LangChain [agents](https://docs.langchain.com/oss/python/langchain/agents) are built on top of LangGraph to provide durable execution, streaming, human-in-the-loop, persistence, and more. (You do not need to know LangGraph for basic LangChain agent usage.)
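To make the "under 10 lines of code" claim above concrete, here is a minimal sketch of a v1-style agent. The model string, tool, and user message are illustrative assumptions; it presumes `langchain` and the OpenAI integration are installed and `OPENAI_API_KEY` is set.

```python
from langchain.agents import create_agent


def get_weather(city: str) -> str:
    """Return a (stubbed) weather report for a city."""
    return f"It's always sunny in {city}!"


# Illustrative model string; any provider supported by init_chat_model works here.
agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)
result = agent.invoke({"messages": [{"role": "user", "content": "Weather in Paris?"}]})
```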
## π Documentation
-Please see [our full documentation](https://python.langchain.com) on:
+For full documentation, see the [API reference](https://reference.langchain.com/python/langchain/langchain/).
-- Getting started (installation, setting up the environment, simple examples)
-- How-To examples (demos, integrations, helper functions)
-- Reference (full API docs)
-- Resources (high-level explanation of core concepts)
+## π Releases & Versioning
-## π What can this help with?
-
-There are five main areas that LangChain is designed to help with.
-These are, in increasing order of complexity:
-
-**π€ Agents:**
-
-Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
-
-**π Retrieval Augmented Generation:**
-
-Retrieval Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.
-
-**π§ Evaluation:**
-
-Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
-
-**π Models and Prompts:**
-
-This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with chat models and LLMs.
-
-**π Chains:**
-
-Chains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
-
-For more information on these concepts, please see our [full documentation](https://python.langchain.com).
+See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.
## π Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
-For detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/).
+For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).
diff --git a/libs/langchain_v1/langchain/__init__.py b/libs/langchain_v1/langchain/__init__.py
index 7be32be5ba2..eaea90f7695 100644
--- a/libs/langchain_v1/langchain/__init__.py
+++ b/libs/langchain_v1/langchain/__init__.py
@@ -1,3 +1,3 @@
"""Main entrypoint into LangChain."""
-__version__ = "1.0.0a12"
+__version__ = "1.0.4"
diff --git a/libs/langchain_v1/langchain/agents/__init__.py b/libs/langchain_v1/langchain/agents/__init__.py
index 72abb7ed627..67ac01b1ac5 100644
--- a/libs/langchain_v1/langchain/agents/__init__.py
+++ b/libs/langchain_v1/langchain/agents/__init__.py
@@ -1,4 +1,10 @@
-"""langgraph.prebuilt exposes a higher-level API for creating and executing agents and tools."""
+"""Entrypoint to building [Agents](https://docs.langchain.com/oss/python/langchain/agents) with LangChain.
+
+!!! warning "Reference docs"
+ This page contains **reference documentation** for Agents. See
+ [the docs](https://docs.langchain.com/oss/python/langchain/agents) for conceptual
+ guides, tutorials, and examples on using Agents.
+""" # noqa: E501
from langchain.agents.factory import create_agent
from langchain.agents.middleware.types import AgentState
diff --git a/libs/langchain_v1/langchain/agents/factory.py b/libs/langchain_v1/langchain/agents/factory.py
index f61d11d067d..c8880b65cd7 100644
--- a/libs/langchain_v1/langchain/agents/factory.py
+++ b/libs/langchain_v1/langchain/agents/factory.py
@@ -3,7 +3,6 @@
from __future__ import annotations
import itertools
-from dataclasses import dataclass
from typing import (
TYPE_CHECKING,
Annotated,
@@ -14,27 +13,29 @@ from typing import (
get_type_hints,
)
-if TYPE_CHECKING:
- from collections.abc import Awaitable
-
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import AIMessage, AnyMessage, SystemMessage, ToolMessage
from langchain_core.tools import BaseTool
from langgraph._internal._runnable import RunnableCallable
from langgraph.constants import END, START
from langgraph.graph.state import StateGraph
+from langgraph.prebuilt.tool_node import ToolCallWithContext, ToolNode
from langgraph.runtime import Runtime # noqa: TC002
from langgraph.types import Command, Send
from langgraph.typing import ContextT # noqa: TC002
-from typing_extensions import NotRequired, Required, TypedDict, TypeVar
+from typing_extensions import NotRequired, Required, TypedDict
from langchain.agents.middleware.types import (
AgentMiddleware,
AgentState,
JumpTo,
ModelRequest,
+ ModelResponse,
OmitFromSchema,
- PublicAgentState,
+ ResponseT,
+ StateT_co,
+ _InputAgentState,
+ _OutputAgentState,
)
from langchain.agents.structured_output import (
AutoStrategy,
@@ -43,15 +44,14 @@ from langchain.agents.structured_output import (
ProviderStrategy,
ProviderStrategyBinding,
ResponseFormat,
+ StructuredOutputError,
StructuredOutputValidationError,
ToolStrategy,
)
from langchain.chat_models import init_chat_model
-from langchain.tools import ToolNode
-from langchain.tools.tool_node import ToolCallWithContext
if TYPE_CHECKING:
- from collections.abc import Callable, Sequence
+ from collections.abc import Awaitable, Callable, Sequence
from langchain_core.runnables import Runnable
from langgraph.cache.base import BaseCache
@@ -59,42 +59,29 @@ if TYPE_CHECKING:
from langgraph.store.base import BaseStore
from langgraph.types import Checkpointer
- from langchain.tools.tool_node import ToolCallHandler, ToolCallRequest
+ from langchain.agents.middleware.types import ToolCallRequest, ToolCallWrapper
STRUCTURED_OUTPUT_ERROR_TEMPLATE = "Error: {error}\n Please fix your mistakes."
-ResponseT = TypeVar("ResponseT")
-
-@dataclass
-class _InternalModelResponse:
- """Internal wrapper for model execution results.
-
- Contains either a successful result or an exception, plus cached metadata.
- Middleware receives either AIMessage via .send() or Exception via .throw().
- """
-
- result: AIMessage | None
- """The AI message result on success."""
-
- exception: Exception | None
- """The exception on error."""
-
- effective_response_format: Any = None
- """Cached response format after auto-detection."""
+def _normalize_to_model_response(result: ModelResponse | AIMessage) -> ModelResponse:
+ """Normalize middleware return value to ModelResponse."""
+ if isinstance(result, AIMessage):
+ return ModelResponse(result=[result], structured_response=None)
+ return result
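A hedged sketch of what this normalization does, using the `ModelResponse` fields referenced in this module; the message content is invented, and the private helper is assumed to be called from within this module.

```python
from langchain_core.messages import AIMessage

from langchain.agents.middleware.types import ModelResponse

msg = AIMessage(content="hi")

# A bare AIMessage from middleware is wrapped into a ModelResponse.
normalized = _normalize_to_model_response(msg)
assert normalized.result[0] is msg
assert normalized.structured_response is None

# An existing ModelResponse passes through unchanged.
already = ModelResponse(result=[msg], structured_response=None)
assert _normalize_to_model_response(already) is already
```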
def _chain_model_call_handlers(
handlers: Sequence[
Callable[
- [ModelRequest, Callable[[ModelRequest], AIMessage]],
- AIMessage,
+ [ModelRequest, Callable[[ModelRequest], ModelResponse]],
+ ModelResponse | AIMessage,
]
],
) -> (
Callable[
- [ModelRequest, Callable[[ModelRequest], AIMessage]],
- AIMessage,
+ [ModelRequest, Callable[[ModelRequest], ModelResponse]],
+ ModelResponse,
]
| None
):
@@ -107,7 +94,7 @@ def _chain_model_call_handlers(
handlers: List of handlers. First handler wraps all others.
Returns:
- Composed handler, or None if handlers empty.
+        Composed handler, or `None` if `handlers` is empty.
Example:
```python
@@ -137,33 +124,45 @@ def _chain_model_call_handlers(
return None
if len(handlers) == 1:
- return handlers[0]
+ # Single handler - wrap to normalize output
+ single_handler = handlers[0]
+
+ def normalized_single(
+ request: ModelRequest,
+ handler: Callable[[ModelRequest], ModelResponse],
+ ) -> ModelResponse:
+ result = single_handler(request, handler)
+ return _normalize_to_model_response(result)
+
+ return normalized_single
def compose_two(
outer: Callable[
- [ModelRequest, Callable[[ModelRequest], AIMessage]],
- AIMessage,
+ [ModelRequest, Callable[[ModelRequest], ModelResponse]],
+ ModelResponse | AIMessage,
],
inner: Callable[
- [ModelRequest, Callable[[ModelRequest], AIMessage]],
- AIMessage,
+ [ModelRequest, Callable[[ModelRequest], ModelResponse]],
+ ModelResponse | AIMessage,
],
) -> Callable[
- [ModelRequest, Callable[[ModelRequest], AIMessage]],
- AIMessage,
+ [ModelRequest, Callable[[ModelRequest], ModelResponse]],
+ ModelResponse,
]:
"""Compose two handlers where outer wraps inner."""
def composed(
request: ModelRequest,
- handler: Callable[[ModelRequest], AIMessage],
- ) -> AIMessage:
- # Create a wrapper that calls inner with the base handler
- def inner_handler(req: ModelRequest) -> AIMessage:
- return inner(req, handler)
+ handler: Callable[[ModelRequest], ModelResponse],
+ ) -> ModelResponse:
+ # Create a wrapper that calls inner with the base handler and normalizes
+ def inner_handler(req: ModelRequest) -> ModelResponse:
+ inner_result = inner(req, handler)
+ return _normalize_to_model_response(inner_result)
- # Call outer with the wrapped inner as its handler
- return outer(request, inner_handler)
+ # Call outer with the wrapped inner as its handler and normalize
+ outer_result = outer(request, inner_handler)
+ return _normalize_to_model_response(outer_result)
return composed
@@ -172,62 +171,83 @@ def _chain_model_call_handlers(
for handler in reversed(handlers[:-1]):
result = compose_two(handler, result)
- return result
+ # Wrap to ensure final return type is exactly ModelResponse
+ def final_normalized(
+ request: ModelRequest,
+ handler: Callable[[ModelRequest], ModelResponse],
+ ) -> ModelResponse:
+ # result here is typed as returning ModelResponse | AIMessage but compose_two normalizes
+ final_result = result(request, handler)
+ return _normalize_to_model_response(final_result)
+
+ return final_normalized
def _chain_async_model_call_handlers(
handlers: Sequence[
Callable[
- [ModelRequest, Callable[[ModelRequest], Awaitable[AIMessage]]],
- Awaitable[AIMessage],
+ [ModelRequest, Callable[[ModelRequest], Awaitable[ModelResponse]]],
+ Awaitable[ModelResponse | AIMessage],
]
],
) -> (
Callable[
- [ModelRequest, Callable[[ModelRequest], Awaitable[AIMessage]]],
- Awaitable[AIMessage],
+ [ModelRequest, Callable[[ModelRequest], Awaitable[ModelResponse]]],
+ Awaitable[ModelResponse],
]
| None
):
- """Compose multiple async wrap_model_call handlers into single middleware stack.
+ """Compose multiple async `wrap_model_call` handlers into single middleware stack.
Args:
handlers: List of async handlers. First handler wraps all others.
Returns:
- Composed async handler, or None if handlers empty.
+        Composed async handler, or `None` if `handlers` is empty.
"""
if not handlers:
return None
if len(handlers) == 1:
- return handlers[0]
+ # Single handler - wrap to normalize output
+ single_handler = handlers[0]
+
+ async def normalized_single(
+ request: ModelRequest,
+ handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
+ ) -> ModelResponse:
+ result = await single_handler(request, handler)
+ return _normalize_to_model_response(result)
+
+ return normalized_single
def compose_two(
outer: Callable[
- [ModelRequest, Callable[[ModelRequest], Awaitable[AIMessage]]],
- Awaitable[AIMessage],
+ [ModelRequest, Callable[[ModelRequest], Awaitable[ModelResponse]]],
+ Awaitable[ModelResponse | AIMessage],
],
inner: Callable[
- [ModelRequest, Callable[[ModelRequest], Awaitable[AIMessage]]],
- Awaitable[AIMessage],
+ [ModelRequest, Callable[[ModelRequest], Awaitable[ModelResponse]]],
+ Awaitable[ModelResponse | AIMessage],
],
) -> Callable[
- [ModelRequest, Callable[[ModelRequest], Awaitable[AIMessage]]],
- Awaitable[AIMessage],
+ [ModelRequest, Callable[[ModelRequest], Awaitable[ModelResponse]]],
+ Awaitable[ModelResponse],
]:
"""Compose two async handlers where outer wraps inner."""
async def composed(
request: ModelRequest,
- handler: Callable[[ModelRequest], Awaitable[AIMessage]],
- ) -> AIMessage:
- # Create a wrapper that calls inner with the base handler
- async def inner_handler(req: ModelRequest) -> AIMessage:
- return await inner(req, handler)
+ handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
+ ) -> ModelResponse:
+ # Create a wrapper that calls inner with the base handler and normalizes
+ async def inner_handler(req: ModelRequest) -> ModelResponse:
+ inner_result = await inner(req, handler)
+ return _normalize_to_model_response(inner_result)
- # Call outer with the wrapped inner as its handler
- return await outer(request, inner_handler)
+ # Call outer with the wrapped inner as its handler and normalize
+ outer_result = await outer(request, inner_handler)
+ return _normalize_to_model_response(outer_result)
return composed
@@ -236,16 +256,26 @@ def _chain_async_model_call_handlers(
for handler in reversed(handlers[:-1]):
result = compose_two(handler, result)
- return result
+ # Wrap to ensure final return type is exactly ModelResponse
+ async def final_normalized(
+ request: ModelRequest,
+ handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
+ ) -> ModelResponse:
+ # result here is typed as returning ModelResponse | AIMessage but compose_two normalizes
+ final_result = await result(request, handler)
+ return _normalize_to_model_response(final_result)
+
+ return final_normalized
def _resolve_schema(schemas: set[type], schema_name: str, omit_flag: str | None = None) -> type:
- """Resolve schema by merging schemas and optionally respecting OmitFromSchema annotations.
+ """Resolve schema by merging schemas and optionally respecting `OmitFromSchema` annotations.
Args:
schemas: List of schema types to merge
- schema_name: Name for the generated TypedDict
- omit_flag: If specified, omit fields with this flag set ('input' or 'output')
+ schema_name: Name for the generated `TypedDict`
+ omit_flag: If specified, omit fields with this flag set (`'input'` or
+ `'output'`)
"""
all_annotations = {}
@@ -285,11 +315,11 @@ def _extract_metadata(type_: type) -> list:
def _get_can_jump_to(middleware: AgentMiddleware[Any, Any], hook_name: str) -> list[JumpTo]:
- """Get the can_jump_to list from either sync or async hook methods.
+ """Get the `can_jump_to` list from either sync or async hook methods.
Args:
middleware: The middleware instance to inspect.
- hook_name: The name of the hook ('before_model' or 'after_model').
+ hook_name: The name of the hook (`'before_model'` or `'after_model'`).
Returns:
List of jump destinations, or empty list if not configured.
@@ -323,7 +353,7 @@ def _supports_provider_strategy(model: str | BaseChatModel) -> bool:
"""Check if a model supports provider-specific structured output.
Args:
- model: Model name string or BaseChatModel instance.
+ model: Model name string or `BaseChatModel` instance.
Returns:
`True` if the model supports provider-specific structured output, `False` otherwise.
@@ -346,7 +376,7 @@ def _handle_structured_output_error(
exception: Exception,
response_format: ResponseFormat,
) -> tuple[bool, str]:
- """Handle structured output error. Returns (should_retry, retry_tool_message)."""
+ """Handle structured output error. Returns `(should_retry, retry_tool_message)`."""
if not isinstance(response_format, ToolStrategy):
return False, ""
@@ -372,30 +402,30 @@ def _handle_structured_output_error(
return False, ""
-def _chain_tool_call_handlers(
- handlers: Sequence[ToolCallHandler],
-) -> ToolCallHandler | None:
- """Compose handlers into middleware stack (first = outermost).
+def _chain_tool_call_wrappers(
+ wrappers: Sequence[ToolCallWrapper],
+) -> ToolCallWrapper | None:
+ """Compose wrappers into middleware stack (first = outermost).
Args:
- handlers: Handlers in middleware order.
+ wrappers: Wrappers in middleware order.
Returns:
- Composed handler, or None if empty.
+ Composed wrapper, or `None` if empty.
Example:
- handler = _chain_tool_call_handlers([auth, cache, retry])
+ wrapper = _chain_tool_call_wrappers([auth, cache, retry])
# Request flows: auth -> cache -> retry -> tool
# Response flows: tool -> retry -> cache -> auth
"""
- if not handlers:
+ if not wrappers:
return None
- if len(handlers) == 1:
- return handlers[0]
+ if len(wrappers) == 1:
+ return wrappers[0]
- def compose_two(outer: ToolCallHandler, inner: ToolCallHandler) -> ToolCallHandler:
- """Compose two handlers where outer wraps inner."""
+ def compose_two(outer: ToolCallWrapper, inner: ToolCallWrapper) -> ToolCallWrapper:
+ """Compose two wrappers where outer wraps inner."""
def composed(
request: ToolCallRequest,
@@ -410,10 +440,74 @@ def _chain_tool_call_handlers(
return composed
- # Chain all handlers: first -> second -> ... -> last
- result = handlers[-1]
- for handler in reversed(handlers[:-1]):
- result = compose_two(handler, result)
+ # Chain all wrappers: first -> second -> ... -> last
+ result = wrappers[-1]
+ for wrapper in reversed(wrappers[:-1]):
+ result = compose_two(wrapper, result)
+
+ return result
+
+
+def _chain_async_tool_call_wrappers(
+ wrappers: Sequence[
+ Callable[
+ [ToolCallRequest, Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]]],
+ Awaitable[ToolMessage | Command],
+ ]
+ ],
+) -> (
+ Callable[
+ [ToolCallRequest, Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]]],
+ Awaitable[ToolMessage | Command],
+ ]
+ | None
+):
+ """Compose async wrappers into middleware stack (first = outermost).
+
+ Args:
+ wrappers: Async wrappers in middleware order.
+
+ Returns:
+ Composed async wrapper, or `None` if empty.
+ """
+ if not wrappers:
+ return None
+
+ if len(wrappers) == 1:
+ return wrappers[0]
+
+ def compose_two(
+ outer: Callable[
+ [ToolCallRequest, Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]]],
+ Awaitable[ToolMessage | Command],
+ ],
+ inner: Callable[
+ [ToolCallRequest, Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]]],
+ Awaitable[ToolMessage | Command],
+ ],
+ ) -> Callable[
+ [ToolCallRequest, Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]]],
+ Awaitable[ToolMessage | Command],
+ ]:
+ """Compose two async wrappers where outer wraps inner."""
+
+ async def composed(
+ request: ToolCallRequest,
+ execute: Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]],
+ ) -> ToolMessage | Command:
+ # Create an async callable that invokes inner with the original execute
+ async def call_inner(req: ToolCallRequest) -> ToolMessage | Command:
+ return await inner(req, execute)
+
+ # Outer can call call_inner multiple times
+ return await outer(request, call_inner)
+
+ return composed
+
+ # Chain all wrappers: first -> second -> ... -> last
+ result = wrappers[-1]
+ for wrapper in reversed(wrappers[:-1]):
+ result = compose_two(wrapper, result)
return result
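The following self-contained sketch mimics the wrapper-chaining pattern above with plain functions, to illustrate the "first wrapper is outermost" ordering; the `auth`/`cache`/`retry` names and the print statements are invented for demonstration.

```python
from typing import Callable

# Stand-ins for ToolCallRequest / ToolMessage in this illustration.
Request = str
Result = str
Wrapper = Callable[[Request, Callable[[Request], Result]], Result]


def make_wrapper(name: str) -> Wrapper:
    def wrapper(request: Request, handler: Callable[[Request], Result]) -> Result:
        print(f"{name}: before")  # request flows outermost -> innermost
        result = handler(request)
        print(f"{name}: after")   # response flows innermost -> outermost
        return result

    return wrapper


def chain(wrappers: list[Wrapper]) -> Wrapper:
    def compose_two(outer: Wrapper, inner: Wrapper) -> Wrapper:
        def composed(request: Request, execute: Callable[[Request], Result]) -> Result:
            return outer(request, lambda req: inner(req, execute))

        return composed

    result = wrappers[-1]
    for wrapper in reversed(wrappers[:-1]):
        result = compose_two(wrapper, result)
    return result


composed = chain([make_wrapper("auth"), make_wrapper("cache"), make_wrapper("retry")])
composed("call", lambda req: "tool result")
# Prints auth/cache/retry "before", then retry/cache/auth "after".
```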
@@ -422,9 +516,10 @@ def create_agent( # noqa: PLR0915
model: str | BaseChatModel,
tools: Sequence[BaseTool | Callable | dict[str, Any]] | None = None,
*,
- system_prompt: str | None = None,
- middleware: Sequence[AgentMiddleware[AgentState[ResponseT], ContextT]] = (),
+ system_prompt: str | SystemMessage | None = None,
+ middleware: Sequence[AgentMiddleware[StateT_co, ContextT]] = (),
response_format: ResponseFormat[ResponseT] | type[ResponseT] | None = None,
+ state_schema: type[AgentState[ResponseT]] | None = None,
context_schema: type[ContextT] | None = None,
checkpointer: Checkpointer | None = None,
store: BaseStore | None = None,
@@ -434,56 +529,87 @@ def create_agent( # noqa: PLR0915
name: str | None = None,
cache: BaseCache | None = None,
) -> CompiledStateGraph[
- AgentState[ResponseT], ContextT, PublicAgentState[ResponseT], PublicAgentState[ResponseT]
+ AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]
]:
"""Creates an agent graph that calls tools in a loop until a stopping condition is met.
For more details on using `create_agent`,
- visit [Agents](https://docs.langchain.com/oss/python/langchain/agents) documentation.
+ visit the [Agents](https://docs.langchain.com/oss/python/langchain/agents) docs.
Args:
model: The language model for the agent. Can be a string identifier
- (e.g., `"openai:gpt-4"`), a chat model instance (e.g., `ChatOpenAI()`).
- tools: A list of tools, dicts, or callables. If `None` or an empty list,
- the agent will consist of a model node without a tool calling loop.
- system_prompt: An optional system prompt for the LLM. If provided as a string,
- it will be converted to a SystemMessage and added to the beginning
- of the message list.
+ (e.g., `"openai:gpt-4"`) or a direct chat model instance (e.g.,
+            [`ChatOpenAI`][langchain_openai.ChatOpenAI] or another
+ [chat model](https://docs.langchain.com/oss/python/integrations/chat)).
+
+ For a full list of supported model strings, see
+ [`init_chat_model`][langchain.chat_models.init_chat_model(model_provider)].
+        tools: A list of tools, `dict` objects, or callables.
+
+ If `None` or an empty list, the agent will consist of a model node without a
+ tool calling loop.
+        system_prompt: An optional system prompt for the LLM, given either as a string
+            or as a [`SystemMessage`][langchain.messages.SystemMessage] object.
+
middleware: A sequence of middleware instances to apply to the agent.
- Middleware can intercept and modify agent behavior at various stages.
+
+ Middleware can intercept and modify agent behavior at various stages. See
+ the [full guide](https://docs.langchain.com/oss/python/langchain/middleware).
response_format: An optional configuration for structured responses.
- Can be a ToolStrategy, ProviderStrategy, or a Pydantic model class.
+
+ Can be a `ToolStrategy`, `ProviderStrategy`, or a Pydantic model class.
+
If provided, the agent will handle structured output during the
conversation flow. Raw schemas will be wrapped in an appropriate strategy
based on model capabilities.
+ state_schema: An optional `TypedDict` schema that extends `AgentState`.
+
+ When provided, this schema is used instead of `AgentState` as the base
+ schema for merging with middleware state schemas. This allows users to
+ add custom state fields without needing to create custom middleware.
+ Generally, it's recommended to use `state_schema` extensions via middleware
+ to keep relevant extensions scoped to corresponding hooks / tools.
+
+ The schema must be a subclass of `AgentState[ResponseT]`.
context_schema: An optional schema for runtime context.
- checkpointer: An optional checkpoint saver object. This is used for persisting
- the state of the graph (e.g., as chat memory) for a single thread
- (e.g., a single conversation).
- store: An optional store object. This is used for persisting data
- across multiple threads (e.g., multiple conversations / users).
+ checkpointer: An optional checkpoint saver object.
+
+ Used for persisting the state of the graph (e.g., as chat memory) for a
+ single thread (e.g., a single conversation).
+ store: An optional store object.
+
+ Used for persisting data across multiple threads (e.g., multiple
+ conversations / users).
interrupt_before: An optional list of node names to interrupt before.
- This is useful if you want to add a user confirmation or other interrupt
+
+ Useful if you want to add a user confirmation or other interrupt
before taking an action.
interrupt_after: An optional list of node names to interrupt after.
- This is useful if you want to return directly or run additional processing
+
+ Useful if you want to return directly or run additional processing
on an output.
- debug: A flag indicating whether to enable debug mode.
- name: An optional name for the CompiledStateGraph.
+ debug: Whether to enable verbose logging for graph execution.
+
+ When enabled, prints detailed information about each node execution, state
+ updates, and transitions during agent runtime. Useful for debugging
+ middleware behavior and understanding agent execution flow.
+ name: An optional name for the `CompiledStateGraph`.
+
This name will be automatically used when adding the agent graph to
another graph as a subgraph node - particularly useful for building
multi-agent systems.
- cache: An optional BaseCache instance to enable caching of graph execution.
+ cache: An optional `BaseCache` instance to enable caching of graph execution.
Returns:
- A compiled StateGraph that can be used for chat interactions.
+ A compiled `StateGraph` that can be used for chat interactions.
The agent node calls the language model with the messages list (after applying
- the system prompt). If the resulting AIMessage contains `tool_calls`, the graph will
- then call the tools. The tools node executes the tools and adds the responses
- to the messages list as `ToolMessage` objects. The agent node then calls the
- language model again. The process repeats until no more `tool_calls` are
- present in the response. The agent then returns the full list of messages.
+ the system prompt). If the resulting [`AIMessage`][langchain.messages.AIMessage]
+ contains `tool_calls`, the graph will then call the tools. The tools node executes
+ the tools and adds the responses to the messages list as
+ [`ToolMessage`][langchain.messages.ToolMessage] objects. The agent node then calls
+ the language model again. The process repeats until no more `tool_calls` are present
+ in the response. The agent then returns the full list of messages.
Example:
```python
@@ -496,7 +622,7 @@ def create_agent( # noqa: PLR0915
graph = create_agent(
- model="anthropic:claude-3-7-sonnet-latest",
+ model="anthropic:claude-sonnet-4-5-20250929",
tools=[check_weather],
system_prompt="You are a helpful assistant",
)
@@ -545,16 +671,37 @@ def create_agent( # noqa: PLR0915
structured_output_tools[structured_tool_info.tool.name] = structured_tool_info
middleware_tools = [t for m in middleware for t in getattr(m, "tools", [])]
- # Collect middleware with wrap_tool_call hooks
+ # Collect middleware with wrap_tool_call or awrap_tool_call hooks
+ # Include middleware with either implementation to ensure NotImplementedError is raised
+ # when middleware doesn't support the execution path
middleware_w_wrap_tool_call = [
- m for m in middleware if m.__class__.wrap_tool_call is not AgentMiddleware.wrap_tool_call
+ m
+ for m in middleware
+ if m.__class__.wrap_tool_call is not AgentMiddleware.wrap_tool_call
+ or m.__class__.awrap_tool_call is not AgentMiddleware.awrap_tool_call
]
# Chain all wrap_tool_call handlers into a single composed handler
- wrap_tool_call_handler = None
+ wrap_tool_call_wrapper = None
if middleware_w_wrap_tool_call:
- handlers = [m.wrap_tool_call for m in middleware_w_wrap_tool_call]
- wrap_tool_call_handler = _chain_tool_call_handlers(handlers)
+ wrappers = [m.wrap_tool_call for m in middleware_w_wrap_tool_call]
+ wrap_tool_call_wrapper = _chain_tool_call_wrappers(wrappers)
+
+ # Collect middleware with awrap_tool_call or wrap_tool_call hooks
+ # Include middleware with either implementation to ensure NotImplementedError is raised
+ # when middleware doesn't support the execution path
+ middleware_w_awrap_tool_call = [
+ m
+ for m in middleware
+ if m.__class__.awrap_tool_call is not AgentMiddleware.awrap_tool_call
+ or m.__class__.wrap_tool_call is not AgentMiddleware.wrap_tool_call
+ ]
+
+ # Chain all awrap_tool_call handlers into a single composed async handler
+ awrap_tool_call_wrapper = None
+ if middleware_w_awrap_tool_call:
+ async_wrappers = [m.awrap_tool_call for m in middleware_w_awrap_tool_call]
+ awrap_tool_call_wrapper = _chain_async_tool_call_wrappers(async_wrappers)
# Setup tools
tool_node: ToolNode | None = None
@@ -567,7 +714,11 @@ def create_agent( # noqa: PLR0915
# Only create ToolNode if we have client-side tools
tool_node = (
- ToolNode(tools=available_tools, on_tool_call=wrap_tool_call_handler)
+ ToolNode(
+ tools=available_tools,
+ wrap_tool_call=wrap_tool_call_wrapper,
+ awrap_tool_call=awrap_tool_call_wrapper,
+ )
if available_tools
else None
)
@@ -609,13 +760,23 @@ def create_agent( # noqa: PLR0915
if m.__class__.after_agent is not AgentMiddleware.after_agent
or m.__class__.aafter_agent is not AgentMiddleware.aafter_agent
]
+ # Collect middleware with wrap_model_call or awrap_model_call hooks
+ # Include middleware with either implementation to ensure NotImplementedError is raised
+ # when middleware doesn't support the execution path
middleware_w_wrap_model_call = [
- m for m in middleware if m.__class__.wrap_model_call is not AgentMiddleware.wrap_model_call
+ m
+ for m in middleware
+ if m.__class__.wrap_model_call is not AgentMiddleware.wrap_model_call
+ or m.__class__.awrap_model_call is not AgentMiddleware.awrap_model_call
]
+ # Collect middleware with awrap_model_call or wrap_model_call hooks
+ # Include middleware with either implementation to ensure NotImplementedError is raised
+ # when middleware doesn't support the execution path
middleware_w_awrap_model_call = [
m
for m in middleware
if m.__class__.awrap_model_call is not AgentMiddleware.awrap_model_call
+ or m.__class__.wrap_model_call is not AgentMiddleware.wrap_model_call
]
# Compose wrap_model_call handlers into a single middleware stack (sync)
@@ -630,18 +791,20 @@ def create_agent( # noqa: PLR0915
async_handlers = [m.awrap_model_call for m in middleware_w_awrap_model_call]
awrap_model_call_handler = _chain_async_model_call_handlers(async_handlers)
- state_schemas = {m.state_schema for m in middleware}
- state_schemas.add(AgentState)
+ state_schemas: set[type] = {m.state_schema for m in middleware}
+ # Use provided state_schema if available, otherwise use base AgentState
+ base_state = state_schema if state_schema is not None else AgentState
+ state_schemas.add(base_state)
- state_schema = _resolve_schema(state_schemas, "StateSchema", None)
+ resolved_state_schema = _resolve_schema(state_schemas, "StateSchema", None)
input_schema = _resolve_schema(state_schemas, "InputSchema", "input")
output_schema = _resolve_schema(state_schemas, "OutputSchema", "output")
# create graph, add nodes
graph: StateGraph[
- AgentState[ResponseT], ContextT, PublicAgentState[ResponseT], PublicAgentState[ResponseT]
+ AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]
] = StateGraph(
- state_schema=state_schema,
+ state_schema=resolved_state_schema,
input_schema=input_schema,
output_schema=output_schema,
context_schema=context_schema,
@@ -663,8 +826,16 @@ def create_agent( # noqa: PLR0915
provider_strategy_binding = ProviderStrategyBinding.from_schema_spec(
effective_response_format.schema_spec
)
- structured_response = provider_strategy_binding.parse(output)
- return {"messages": [output], "structured_response": structured_response}
+ try:
+ structured_response = provider_strategy_binding.parse(output)
+ except Exception as exc: # noqa: BLE001
+ schema_name = getattr(
+ effective_response_format.schema_spec.schema, "__name__", "response_format"
+ )
+ validation_error = StructuredOutputValidationError(schema_name, exc, output)
+ raise validation_error
+ else:
+ return {"messages": [output], "structured_response": structured_response}
return {"messages": [output]}
# Handle structured output with tool strategy
@@ -678,11 +849,11 @@ def create_agent( # noqa: PLR0915
]
if structured_tool_calls:
- exception: Exception | None = None
+ exception: StructuredOutputError | None = None
if len(structured_tool_calls) > 1:
# Handle multiple structured outputs error
tool_names = [tc["name"] for tc in structured_tool_calls]
- exception = MultipleStructuredOutputsError(tool_names)
+ exception = MultipleStructuredOutputsError(tool_names, output)
should_retry, error_message = _handle_structured_output_error(
exception, effective_response_format
)
@@ -724,7 +895,7 @@ def create_agent( # noqa: PLR0915
"structured_response": structured_response,
}
except Exception as exc: # noqa: BLE001
- exception = StructuredOutputValidationError(tool_call["name"], exc)
+ exception = StructuredOutputValidationError(tool_call["name"], exc, output)
should_retry, error_message = _handle_structured_output_error(
exception, effective_response_format
)
@@ -753,8 +924,9 @@ def create_agent( # noqa: PLR0915
request: The model request containing model, tools, and response format.
Returns:
- Tuple of (bound_model, effective_response_format) where `effective_response_format`
- is the actual strategy used (may differ from initial if auto-detected).
+ Tuple of `(bound_model, effective_response_format)` where
+ `effective_response_format` is the actual strategy used (may differ from
+ initial if auto-detected).
"""
# Validate ONLY client-side tools that need to exist in tool_node
# Build map of available client-side tools from the ToolNode
@@ -857,31 +1029,31 @@ def create_agent( # noqa: PLR0915
)
return request.model.bind(**request.model_settings), None
- def _execute_model_sync(request: ModelRequest) -> _InternalModelResponse:
- """Execute model and return result or exception.
+ def _execute_model_sync(request: ModelRequest) -> ModelResponse:
+ """Execute model and return response.
- This is the core model execution logic wrapped by wrap_model_call handlers.
+ This is the core model execution logic wrapped by `wrap_model_call` handlers.
+ Raises any exceptions that occur during model invocation.
"""
- try:
- # Get the bound model (with auto-detection if needed)
- model_, effective_response_format = _get_bound_model(request)
- messages = request.messages
- if request.system_prompt:
- messages = [SystemMessage(request.system_prompt), *messages]
+ # Get the bound model (with auto-detection if needed)
+ model_, effective_response_format = _get_bound_model(request)
+ messages = request.messages
+ if request.system_prompt and not isinstance(request.system_prompt, SystemMessage):
+ messages = [SystemMessage(content=request.system_prompt), *messages]
+ elif request.system_prompt and isinstance(request.system_prompt, SystemMessage):
+ messages = [request.system_prompt, *messages]
- output = model_.invoke(messages)
- return _InternalModelResponse(
- result=output,
- exception=None,
- effective_response_format=effective_response_format,
- )
- except Exception as error: # noqa: BLE001
- # Catch all exceptions from model invocation
- return _InternalModelResponse(
- result=None,
- exception=error,
- effective_response_format=None,
- )
+ output = model_.invoke(messages)
+
+ # Handle model output to get messages and structured_response
+ handled_output = _handle_model_output(output, effective_response_format)
+ messages_list = handled_output["messages"]
+ structured_response = handled_output.get("structured_response")
+
+ return ModelResponse(
+ result=messages_list,
+ structured_response=structured_response,
+ )
def model_node(state: AgentState, runtime: Runtime[ContextT]) -> dict[str, Any]:
"""Sync model request handler with sequential middleware processing."""
@@ -896,58 +1068,47 @@ def create_agent( # noqa: PLR0915
runtime=runtime,
)
- # Execute with or without handler
- effective_response_format: Any = None
-
- # Define base handler that executes the model
- def base_handler(req: ModelRequest) -> AIMessage:
- nonlocal effective_response_format
- internal_response = _execute_model_sync(req)
- if internal_response.exception is not None:
- raise internal_response.exception
- if internal_response.result is None:
- msg = "Model execution succeeded but returned no result"
- raise RuntimeError(msg)
- effective_response_format = internal_response.effective_response_format
- return internal_response.result
-
if wrap_model_call_handler is None:
# No handlers - execute directly
- output = base_handler(request)
+ response = _execute_model_sync(request)
else:
# Call composed handler with base handler
- output = wrap_model_call_handler(request, base_handler)
- return {
- "thread_model_call_count": state.get("thread_model_call_count", 0) + 1,
- "run_model_call_count": state.get("run_model_call_count", 0) + 1,
- **_handle_model_output(output, effective_response_format),
- }
+ response = wrap_model_call_handler(request, _execute_model_sync)
- async def _execute_model_async(request: ModelRequest) -> _InternalModelResponse:
- """Execute model asynchronously and return result or exception.
+ # Extract state updates from ModelResponse
+ state_updates = {"messages": response.result}
+ if response.structured_response is not None:
+ state_updates["structured_response"] = response.structured_response
- This is the core async model execution logic wrapped by wrap_model_call handlers.
+ return state_updates
+
+ async def _execute_model_async(request: ModelRequest) -> ModelResponse:
+ """Execute model asynchronously and return response.
+
+ This is the core async model execution logic wrapped by `wrap_model_call`
+ handlers.
+
+ Raises any exceptions that occur during model invocation.
"""
- try:
- # Get the bound model (with auto-detection if needed)
- model_, effective_response_format = _get_bound_model(request)
- messages = request.messages
- if request.system_prompt:
- messages = [SystemMessage(request.system_prompt), *messages]
+ # Get the bound model (with auto-detection if needed)
+ model_, effective_response_format = _get_bound_model(request)
+ messages = request.messages
+ if request.system_prompt and not isinstance(request.system_prompt, SystemMessage):
+ messages = [SystemMessage(content=request.system_prompt), *messages]
+ elif request.system_prompt and isinstance(request.system_prompt, SystemMessage):
+ messages = [request.system_prompt, *messages]
- output = await model_.ainvoke(messages)
- return _InternalModelResponse(
- result=output,
- exception=None,
- effective_response_format=effective_response_format,
- )
- except Exception as error: # noqa: BLE001
- # Catch all exceptions from model invocation
- return _InternalModelResponse(
- result=None,
- exception=error,
- effective_response_format=None,
- )
+ output = await model_.ainvoke(messages)
+
+ # Handle model output to get messages and structured_response
+ handled_output = _handle_model_output(output, effective_response_format)
+ messages_list = handled_output["messages"]
+ structured_response = handled_output.get("structured_response")
+
+ return ModelResponse(
+ result=messages_list,
+ structured_response=structured_response,
+ )
async def amodel_node(state: AgentState, runtime: Runtime[ContextT]) -> dict[str, Any]:
"""Async model request handler with sequential middleware processing."""
@@ -962,32 +1123,19 @@ def create_agent( # noqa: PLR0915
runtime=runtime,
)
- # Execute with or without handler
- effective_response_format: Any = None
-
- # Define base async handler that executes the model
- async def base_handler(req: ModelRequest) -> AIMessage:
- nonlocal effective_response_format
- internal_response = await _execute_model_async(req)
- if internal_response.exception is not None:
- raise internal_response.exception
- if internal_response.result is None:
- msg = "Model execution succeeded but returned no result"
- raise RuntimeError(msg)
- effective_response_format = internal_response.effective_response_format
- return internal_response.result
-
if awrap_model_call_handler is None:
# No async handlers - execute directly
- output = await base_handler(request)
+ response = await _execute_model_async(request)
else:
# Call composed async handler with base handler
- output = await awrap_model_call_handler(request, base_handler)
- return {
- "thread_model_call_count": state.get("thread_model_call_count", 0) + 1,
- "run_model_call_count": state.get("run_model_call_count", 0) + 1,
- **_handle_model_output(output, effective_response_format),
- }
+ response = await awrap_model_call_handler(request, _execute_model_async)
+
+ # Extract state updates from ModelResponse
+ state_updates = {"messages": response.result}
+ if response.structured_response is not None:
+ state_updates["structured_response"] = response.structured_response
+
+ return state_updates
# Use sync or async based on model capabilities
graph.add_node("model", RunnableCallable(model_node, amodel_node, trace=False))
@@ -1015,7 +1163,9 @@ def create_agent( # noqa: PLR0915
else None
)
before_agent_node = RunnableCallable(sync_before_agent, async_before_agent, trace=False)
- graph.add_node(f"{m.name}.before_agent", before_agent_node, input_schema=state_schema)
+ graph.add_node(
+ f"{m.name}.before_agent", before_agent_node, input_schema=resolved_state_schema
+ )
if (
m.__class__.before_model is not AgentMiddleware.before_model
@@ -1034,7 +1184,9 @@ def create_agent( # noqa: PLR0915
else None
)
before_node = RunnableCallable(sync_before, async_before, trace=False)
- graph.add_node(f"{m.name}.before_model", before_node, input_schema=state_schema)
+ graph.add_node(
+ f"{m.name}.before_model", before_node, input_schema=resolved_state_schema
+ )
if (
m.__class__.after_model is not AgentMiddleware.after_model
@@ -1053,7 +1205,7 @@ def create_agent( # noqa: PLR0915
else None
)
after_node = RunnableCallable(sync_after, async_after, trace=False)
- graph.add_node(f"{m.name}.after_model", after_node, input_schema=state_schema)
+ graph.add_node(f"{m.name}.after_model", after_node, input_schema=resolved_state_schema)
if (
m.__class__.after_agent is not AgentMiddleware.after_agent
@@ -1072,7 +1224,9 @@ def create_agent( # noqa: PLR0915
else None
)
after_agent_node = RunnableCallable(sync_after_agent, async_after_agent, trace=False)
- graph.add_node(f"{m.name}.after_agent", after_agent_node, input_schema=state_schema)
+ graph.add_node(
+ f"{m.name}.after_agent", after_agent_node, input_schema=resolved_state_schema
+ )
# Determine the entry node (runs once at start): before_agent -> before_model -> model
if middleware_w_before_agent:
@@ -1105,15 +1259,27 @@ def create_agent( # noqa: PLR0915
graph.add_edge(START, entry_node)
# add conditional edges only if tools exist
if tool_node is not None:
+ # Only include exit_node in destinations if any tool has return_direct=True
+ # or if there are structured output tools
+ tools_to_model_destinations = [loop_entry_node]
+ if (
+ any(tool.return_direct for tool in tool_node.tools_by_name.values())
+ or structured_output_tools
+ ):
+ tools_to_model_destinations.append(exit_node)
+
graph.add_conditional_edges(
"tools",
- _make_tools_to_model_edge(
- tool_node=tool_node,
- model_destination=loop_entry_node,
- structured_output_tools=structured_output_tools,
- end_destination=exit_node,
+ RunnableCallable(
+ _make_tools_to_model_edge(
+ tool_node=tool_node,
+ model_destination=loop_entry_node,
+ structured_output_tools=structured_output_tools,
+ end_destination=exit_node,
+ ),
+ trace=False,
),
- [loop_entry_node, exit_node],
+ tools_to_model_destinations,
)
# base destinations are tools and exit_node
@@ -1128,19 +1294,25 @@ def create_agent( # noqa: PLR0915
graph.add_conditional_edges(
loop_exit_node,
- _make_model_to_tools_edge(
- model_destination=loop_entry_node,
- structured_output_tools=structured_output_tools,
- end_destination=exit_node,
+ RunnableCallable(
+ _make_model_to_tools_edge(
+ model_destination=loop_entry_node,
+ structured_output_tools=structured_output_tools,
+ end_destination=exit_node,
+ ),
+ trace=False,
),
model_to_tools_destinations,
)
elif len(structured_output_tools) > 0:
graph.add_conditional_edges(
loop_exit_node,
- _make_model_to_model_edge(
- model_destination=loop_entry_node,
- end_destination=exit_node,
+ RunnableCallable(
+ _make_model_to_model_edge(
+ model_destination=loop_entry_node,
+ end_destination=exit_node,
+ ),
+ trace=False,
),
[loop_entry_node, exit_node],
)
@@ -1378,10 +1550,12 @@ def _make_tools_to_model_edge(
last_ai_message, tool_messages = _fetch_last_ai_and_tool_messages(state["messages"])
# 1. Exit condition: All executed tools have return_direct=True
- if all(
- tool_node.tools_by_name[c["name"]].return_direct
- for c in last_ai_message.tool_calls
- if c["name"] in tool_node.tools_by_name
+ # Filter to only client-side tools (provider tools are not in tool_node)
+ client_side_tool_calls = [
+ c for c in last_ai_message.tool_calls if c["name"] in tool_node.tools_by_name
+ ]
+ if client_side_tool_calls and all(
+ tool_node.tools_by_name[c["name"]].return_direct for c in client_side_tool_calls
):
return end_destination
@@ -1398,7 +1572,9 @@ def _make_tools_to_model_edge(
def _add_middleware_edge(
- graph: StateGraph[AgentState, ContextT, PublicAgentState, PublicAgentState],
+ graph: StateGraph[
+ AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]
+ ],
*,
name: str,
default_destination: str,
@@ -1437,7 +1613,7 @@ def _add_middleware_edge(
if "model" in can_jump_to and name != model_destination:
destinations.append(model_destination)
- graph.add_conditional_edges(name, jump_edge, destinations)
+ graph.add_conditional_edges(name, RunnableCallable(jump_edge, trace=False), destinations)
else:
graph.add_edge(name, default_destination)
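
With this refactor, the base handler passed to `wrap_model_call` is the model executor itself, and it returns a `ModelResponse` that the model node unpacks into the `messages` / `structured_response` state updates. A minimal middleware against the new contract might look like the sketch below (an illustration only; it assumes `ModelRequest.system_prompt` can be mutated in place, in the same way other middleware in this PR mutate `request.messages`):

```python
from collections.abc import Callable

from langchain.agents.middleware.types import (
    AgentMiddleware,
    ModelCallResult,
    ModelRequest,
    ModelResponse,
)


class ConciseAnswersMiddleware(AgentMiddleware):
    """Prepend a brief instruction to the system prompt on every model call."""

    def wrap_model_call(
        self,
        request: ModelRequest,
        handler: Callable[[ModelRequest], ModelResponse],
    ) -> ModelCallResult:
        # Adjust the outgoing request, then delegate to the base executor
        # (or to the next middleware in the composed chain).
        request.system_prompt = f"Answer concisely.\n\n{request.system_prompt or ''}"
        return handler(request)
```
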
diff --git a/libs/langchain_v1/langchain/agents/middleware/__init__.py b/libs/langchain_v1/langchain/agents/middleware/__init__.py
index d7066c67920..8ed35aafcd5 100644
--- a/libs/langchain_v1/langchain/agents/middleware/__init__.py
+++ b/libs/langchain_v1/langchain/agents/middleware/__init__.py
@@ -1,22 +1,40 @@
-"""Middleware plugins for agents."""
+"""Entrypoint to using [Middleware](https://docs.langchain.com/oss/python/langchain/middleware) plugins with [Agents](https://docs.langchain.com/oss/python/langchain/agents).
+
+!!! warning "Reference docs"
+ This page contains **reference documentation** for Middleware. See
+ [the docs](https://docs.langchain.com/oss/python/langchain/middleware) for conceptual
+ guides, tutorials, and examples on using Middleware.
+""" # noqa: E501
from .context_editing import (
ClearToolUsesEdit,
ContextEditingMiddleware,
)
-from .human_in_the_loop import HumanInTheLoopMiddleware
+from .human_in_the_loop import (
+ HumanInTheLoopMiddleware,
+ InterruptOnConfig,
+)
from .model_call_limit import ModelCallLimitMiddleware
from .model_fallback import ModelFallbackMiddleware
from .pii import PIIDetectionError, PIIMiddleware
-from .planning import PlanningMiddleware
-from .prompt_caching import AnthropicPromptCachingMiddleware
+from .shell_tool import (
+ CodexSandboxExecutionPolicy,
+ DockerExecutionPolicy,
+ HostExecutionPolicy,
+ RedactionRule,
+ ShellToolMiddleware,
+)
from .summarization import SummarizationMiddleware
+from .todo import TodoListMiddleware
from .tool_call_limit import ToolCallLimitMiddleware
+from .tool_emulator import LLMToolEmulator
+from .tool_retry import ToolRetryMiddleware
from .tool_selection import LLMToolSelectorMiddleware
from .types import (
AgentMiddleware,
AgentState,
ModelRequest,
+ ModelResponse,
after_agent,
after_model,
before_agent,
@@ -24,25 +42,33 @@ from .types import (
dynamic_prompt,
hook_config,
wrap_model_call,
+ wrap_tool_call,
)
__all__ = [
"AgentMiddleware",
"AgentState",
- # should move to langchain-anthropic if we decide to keep it
- "AnthropicPromptCachingMiddleware",
"ClearToolUsesEdit",
+ "CodexSandboxExecutionPolicy",
"ContextEditingMiddleware",
+ "DockerExecutionPolicy",
+ "HostExecutionPolicy",
"HumanInTheLoopMiddleware",
+ "InterruptOnConfig",
+ "LLMToolEmulator",
"LLMToolSelectorMiddleware",
"ModelCallLimitMiddleware",
"ModelFallbackMiddleware",
"ModelRequest",
+ "ModelResponse",
"PIIDetectionError",
"PIIMiddleware",
- "PlanningMiddleware",
+ "RedactionRule",
+ "ShellToolMiddleware",
"SummarizationMiddleware",
+ "TodoListMiddleware",
"ToolCallLimitMiddleware",
+ "ToolRetryMiddleware",
"after_agent",
"after_model",
"before_agent",
@@ -50,4 +76,5 @@ __all__ = [
"dynamic_prompt",
"hook_config",
"wrap_model_call",
+ "wrap_tool_call",
]
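
Taken together, the expanded export surface lets an agent opt into several of these middleware at once. A rough usage sketch (the model identifier and the two tool objects are placeholders, not part of this PR):

```python
from langchain.agents import create_agent
from langchain.agents.middleware import (
    ContextEditingMiddleware,
    HumanInTheLoopMiddleware,
)

# `search_tool` and `write_file_tool` stand in for real tool objects.
agent = create_agent(
    model="openai:gpt-4o",  # placeholder; any chat model accepted by create_agent
    tools=[search_tool, write_file_tool],
    middleware=[
        # Prune old tool results once the context grows past the default threshold.
        ContextEditingMiddleware(),
        # Pause for human review before write_file runs; tools without an entry
        # are auto-approved.
        HumanInTheLoopMiddleware(interrupt_on={"write_file": True}),
    ],
)
```
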
diff --git a/libs/langchain_v1/langchain/agents/middleware/_execution.py b/libs/langchain_v1/langchain/agents/middleware/_execution.py
new file mode 100644
index 00000000000..f14235bf627
--- /dev/null
+++ b/libs/langchain_v1/langchain/agents/middleware/_execution.py
@@ -0,0 +1,388 @@
+"""Execution policies for the persistent shell middleware."""
+
+from __future__ import annotations
+
+import abc
+import json
+import os
+import shutil
+import subprocess
+import sys
+import typing
+from collections.abc import Mapping, Sequence
+from dataclasses import dataclass, field
+from pathlib import Path
+
+try: # pragma: no cover - optional dependency on POSIX platforms
+ import resource
+except ImportError: # pragma: no cover - non-POSIX systems
+ resource = None # type: ignore[assignment]
+
+
+SHELL_TEMP_PREFIX = "langchain-shell-"
+
+
+def _launch_subprocess(
+ command: Sequence[str],
+ *,
+ env: Mapping[str, str],
+ cwd: Path,
+ preexec_fn: typing.Callable[[], None] | None,
+ start_new_session: bool,
+) -> subprocess.Popen[str]:
+ return subprocess.Popen( # noqa: S603
+ list(command),
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ cwd=cwd,
+ text=True,
+ encoding="utf-8",
+ errors="replace",
+ bufsize=1,
+ env=env,
+ preexec_fn=preexec_fn, # noqa: PLW1509
+ start_new_session=start_new_session,
+ )
+
+
+if typing.TYPE_CHECKING:
+ from collections.abc import Mapping, Sequence
+ from pathlib import Path
+
+
+@dataclass
+class BaseExecutionPolicy(abc.ABC):
+ """Configuration contract for persistent shell sessions.
+
+ Concrete subclasses encapsulate how a shell process is launched and constrained.
+ Each policy documents its security guarantees and the operating environments in
+ which it is appropriate. Use :class:`HostExecutionPolicy` for trusted, same-host
+ execution; :class:`CodexSandboxExecutionPolicy` when the Codex CLI sandbox is
+ available and you want additional syscall restrictions; and
+ :class:`DockerExecutionPolicy` for container-level isolation using Docker.
+ """
+
+ command_timeout: float = 30.0
+ startup_timeout: float = 30.0
+ termination_timeout: float = 10.0
+ max_output_lines: int = 100
+ max_output_bytes: int | None = None
+
+ def __post_init__(self) -> None:
+ if self.max_output_lines <= 0:
+ msg = "max_output_lines must be positive."
+ raise ValueError(msg)
+
+ @abc.abstractmethod
+ def spawn(
+ self,
+ *,
+ workspace: Path,
+ env: Mapping[str, str],
+ command: Sequence[str],
+ ) -> subprocess.Popen[str]:
+ """Launch the persistent shell process."""
+
+
+@dataclass
+class HostExecutionPolicy(BaseExecutionPolicy):
+ """Run the shell directly on the host process.
+
+ This policy is best suited for trusted or single-tenant environments (CI jobs,
+ developer workstations, pre-sandboxed containers) where the agent must access the
+ host filesystem and tooling without additional isolation. It enforces optional CPU
+ and memory limits to prevent runaway commands but offers **no** filesystem or network
+ sandboxing; commands can modify anything the process user can reach.
+
+ On Linux platforms resource limits are applied with ``resource.prlimit`` after the
+ shell starts. On macOS, where ``prlimit`` is unavailable, limits are set in a
+ ``preexec_fn`` before ``exec``. In both cases the shell runs in its own process group
+ so timeouts can terminate the full subtree.
+ """
+
+ cpu_time_seconds: int | None = None
+ memory_bytes: int | None = None
+ create_process_group: bool = True
+
+ _limits_requested: bool = field(init=False, repr=False, default=False)
+
+ def __post_init__(self) -> None:
+ super().__post_init__()
+ if self.cpu_time_seconds is not None and self.cpu_time_seconds <= 0:
+ msg = "cpu_time_seconds must be positive if provided."
+ raise ValueError(msg)
+ if self.memory_bytes is not None and self.memory_bytes <= 0:
+ msg = "memory_bytes must be positive if provided."
+ raise ValueError(msg)
+ self._limits_requested = any(
+ value is not None for value in (self.cpu_time_seconds, self.memory_bytes)
+ )
+ if self._limits_requested and resource is None:
+ msg = (
+ "HostExecutionPolicy cpu/memory limits require the Python 'resource' module. "
+ "Either remove the limits or run on a POSIX platform."
+ )
+ raise RuntimeError(msg)
+
+ def spawn(
+ self,
+ *,
+ workspace: Path,
+ env: Mapping[str, str],
+ command: Sequence[str],
+ ) -> subprocess.Popen[str]:
+ process = _launch_subprocess(
+ list(command),
+ env=env,
+ cwd=workspace,
+ preexec_fn=self._create_preexec_fn(),
+ start_new_session=self.create_process_group,
+ )
+ self._apply_post_spawn_limits(process)
+ return process
+
+ def _create_preexec_fn(self) -> typing.Callable[[], None] | None:
+ if not self._limits_requested or self._can_use_prlimit():
+ return None
+
+ def _configure() -> None: # pragma: no cover - depends on OS
+ if self.cpu_time_seconds is not None:
+ limit = (self.cpu_time_seconds, self.cpu_time_seconds)
+ resource.setrlimit(resource.RLIMIT_CPU, limit)
+ if self.memory_bytes is not None:
+ limit = (self.memory_bytes, self.memory_bytes)
+ if hasattr(resource, "RLIMIT_AS"):
+ resource.setrlimit(resource.RLIMIT_AS, limit)
+ elif hasattr(resource, "RLIMIT_DATA"):
+ resource.setrlimit(resource.RLIMIT_DATA, limit)
+
+ return _configure
+
+ def _apply_post_spawn_limits(self, process: subprocess.Popen[str]) -> None:
+ if not self._limits_requested or not self._can_use_prlimit():
+ return
+ if resource is None: # pragma: no cover - defensive
+ return
+ pid = process.pid
+ if pid is None:
+ return
+ try:
+ prlimit = typing.cast("typing.Any", resource).prlimit
+ if self.cpu_time_seconds is not None:
+ prlimit(pid, resource.RLIMIT_CPU, (self.cpu_time_seconds, self.cpu_time_seconds))
+ if self.memory_bytes is not None:
+ limit = (self.memory_bytes, self.memory_bytes)
+ if hasattr(resource, "RLIMIT_AS"):
+ prlimit(pid, resource.RLIMIT_AS, limit)
+ elif hasattr(resource, "RLIMIT_DATA"):
+ prlimit(pid, resource.RLIMIT_DATA, limit)
+ except OSError as exc: # pragma: no cover - depends on platform support
+ msg = "Failed to apply resource limits via prlimit."
+ raise RuntimeError(msg) from exc
+
+ @staticmethod
+ def _can_use_prlimit() -> bool:
+ return (
+ resource is not None
+ and hasattr(resource, "prlimit")
+ and sys.platform.startswith("linux")
+ )
+
+
+@dataclass
+class CodexSandboxExecutionPolicy(BaseExecutionPolicy):
+ """Launch the shell through the Codex CLI sandbox.
+
+ Ideal when you have the Codex CLI installed and want the additional syscall and
+ filesystem restrictions provided by the macOS Seatbelt or Linux Landlock/seccomp
+ profiles. Commands still run on the host, but within the sandbox requested by
+ the CLI. If the Codex binary is unavailable or the runtime lacks the required
+ kernel features (e.g., Landlock inside some containers), process startup fails with a
+ :class:`RuntimeError`.
+
+ Configure sandbox behaviour via ``config_overrides`` to align with your Codex CLI
+ profile. This policy does not add its own resource limits; combine it with
+ host-level guards (cgroups, container resource limits) as needed.
+ """
+
+ binary: str = "codex"
+ platform: typing.Literal["auto", "macos", "linux"] = "auto"
+ config_overrides: Mapping[str, typing.Any] = field(default_factory=dict)
+
+ def spawn(
+ self,
+ *,
+ workspace: Path,
+ env: Mapping[str, str],
+ command: Sequence[str],
+ ) -> subprocess.Popen[str]:
+ full_command = self._build_command(command)
+ return _launch_subprocess(
+ full_command,
+ env=env,
+ cwd=workspace,
+ preexec_fn=None,
+ start_new_session=False,
+ )
+
+ def _build_command(self, command: Sequence[str]) -> list[str]:
+ binary = self._resolve_binary()
+ platform_arg = self._determine_platform()
+ full_command: list[str] = [binary, "sandbox", platform_arg]
+ for key, value in sorted(dict(self.config_overrides).items()):
+ full_command.extend(["-c", f"{key}={self._format_override(value)}"])
+ full_command.append("--")
+ full_command.extend(command)
+ return full_command
+
+ def _resolve_binary(self) -> str:
+ path = shutil.which(self.binary)
+ if path is None:
+ msg = (
+ "Codex sandbox policy requires the '%s' CLI to be installed and available on PATH."
+ )
+ raise RuntimeError(msg % self.binary)
+ return path
+
+ def _determine_platform(self) -> str:
+ if self.platform != "auto":
+ return self.platform
+ if sys.platform.startswith("linux"):
+ return "linux"
+ if sys.platform == "darwin":
+ return "macos"
+ msg = (
+ "Codex sandbox policy could not determine a supported platform; "
+ "set 'platform' explicitly."
+ )
+ raise RuntimeError(msg)
+
+ @staticmethod
+ def _format_override(value: typing.Any) -> str:
+ try:
+ return json.dumps(value)
+ except TypeError:
+ return str(value)
+
+
+@dataclass
+class DockerExecutionPolicy(BaseExecutionPolicy):
+ """Run the shell inside a dedicated Docker container.
+
+ Choose this policy when commands originate from untrusted users or you require
+ strong isolation between sessions. By default the workspace is bind-mounted only when
+ it refers to an existing non-temporary directory; ephemeral sessions run without a
+ mount to minimise host exposure. The container's network namespace is disabled by
+ default (``--network none``) and you can enable further hardening via
+ ``read_only_rootfs`` and ``user``.
+
+ The security guarantees depend on your Docker daemon configuration. Run the agent on
+ a host where Docker is locked down (rootless mode, AppArmor/SELinux, etc.) and review
+ any additional volumes or capabilities passed through ``extra_run_args``. The default
+ image is ``python:3.12-alpine3.19``; supply a custom image if you need preinstalled
+ tooling.
+ """
+
+ binary: str = "docker"
+ image: str = "python:3.12-alpine3.19"
+ remove_container_on_exit: bool = True
+ network_enabled: bool = False
+ extra_run_args: Sequence[str] | None = None
+ memory_bytes: int | None = None
+ cpu_time_seconds: typing.Any | None = None
+ cpus: str | None = None
+ read_only_rootfs: bool = False
+ user: str | None = None
+
+ def __post_init__(self) -> None:
+ super().__post_init__()
+ if self.memory_bytes is not None and self.memory_bytes <= 0:
+ msg = "memory_bytes must be positive if provided."
+ raise ValueError(msg)
+ if self.cpu_time_seconds is not None:
+ msg = (
+ "DockerExecutionPolicy does not support cpu_time_seconds; configure CPU limits "
+ "using Docker run options such as '--cpus'."
+ )
+ raise RuntimeError(msg)
+ if self.cpus is not None and not self.cpus.strip():
+ msg = "cpus must be a non-empty string when provided."
+ raise ValueError(msg)
+ if self.user is not None and not self.user.strip():
+ msg = "user must be a non-empty string when provided."
+ raise ValueError(msg)
+ self.extra_run_args = tuple(self.extra_run_args or ())
+
+ def spawn(
+ self,
+ *,
+ workspace: Path,
+ env: Mapping[str, str],
+ command: Sequence[str],
+ ) -> subprocess.Popen[str]:
+ full_command = self._build_command(workspace, env, command)
+ host_env = os.environ.copy()
+ return _launch_subprocess(
+ full_command,
+ env=host_env,
+ cwd=workspace,
+ preexec_fn=None,
+ start_new_session=False,
+ )
+
+ def _build_command(
+ self,
+ workspace: Path,
+ env: Mapping[str, str],
+ command: Sequence[str],
+ ) -> list[str]:
+ binary = self._resolve_binary()
+ full_command: list[str] = [binary, "run", "-i"]
+ if self.remove_container_on_exit:
+ full_command.append("--rm")
+ if not self.network_enabled:
+ full_command.extend(["--network", "none"])
+ if self.memory_bytes is not None:
+ full_command.extend(["--memory", str(self.memory_bytes)])
+ if self._should_mount_workspace(workspace):
+ host_path = str(workspace)
+ full_command.extend(["-v", f"{host_path}:{host_path}"])
+ full_command.extend(["-w", host_path])
+ else:
+ full_command.extend(["-w", "/"])
+ if self.read_only_rootfs:
+ full_command.append("--read-only")
+ for key, value in env.items():
+ full_command.extend(["-e", f"{key}={value}"])
+ if self.cpus is not None:
+ full_command.extend(["--cpus", self.cpus])
+ if self.user is not None:
+ full_command.extend(["--user", self.user])
+ if self.extra_run_args:
+ full_command.extend(self.extra_run_args)
+ full_command.append(self.image)
+ full_command.extend(command)
+ return full_command
+
+ @staticmethod
+ def _should_mount_workspace(workspace: Path) -> bool:
+ return not workspace.name.startswith(SHELL_TEMP_PREFIX)
+
+ def _resolve_binary(self) -> str:
+ path = shutil.which(self.binary)
+ if path is None:
+ msg = (
+ "Docker execution policy requires the '%s' CLI to be installed"
+ " and available on PATH."
+ )
+ raise RuntimeError(msg % self.binary)
+ return path
+
+
+__all__ = [
+ "BaseExecutionPolicy",
+ "CodexSandboxExecutionPolicy",
+ "DockerExecutionPolicy",
+ "HostExecutionPolicy",
+]
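
As a rough illustration of how these policies are configured and handed to the shell middleware (the `ShellToolMiddleware` parameter name below is an assumption; its constructor lives in `shell_tool.py`, outside this excerpt):

```python
from langchain.agents.middleware import (
    DockerExecutionPolicy,
    HostExecutionPolicy,
    ShellToolMiddleware,
)

# Container isolation: no network, capped memory, read-only root filesystem.
docker_policy = DockerExecutionPolicy(
    image="python:3.12-alpine3.19",
    network_enabled=False,
    memory_bytes=512 * 1024 * 1024,
    cpus="1.0",
    read_only_rootfs=True,
    command_timeout=60.0,
)

# Trusted same-host execution with POSIX rlimits on CPU time and memory.
host_policy = HostExecutionPolicy(
    cpu_time_seconds=30,
    memory_bytes=1024 * 1024 * 1024,
)

# Assumed keyword; see ShellToolMiddleware for the actual signature.
shell = ShellToolMiddleware(execution_policy=docker_policy)
```
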
diff --git a/libs/langchain_v1/langchain/agents/middleware/_redaction.py b/libs/langchain_v1/langchain/agents/middleware/_redaction.py
new file mode 100644
index 00000000000..ba4755b8ce8
--- /dev/null
+++ b/libs/langchain_v1/langchain/agents/middleware/_redaction.py
@@ -0,0 +1,350 @@
+"""Shared redaction utilities for middleware components."""
+
+from __future__ import annotations
+
+import hashlib
+import ipaddress
+import re
+from collections.abc import Callable, Sequence
+from dataclasses import dataclass
+from typing import Literal
+from urllib.parse import urlparse
+
+from typing_extensions import TypedDict
+
+RedactionStrategy = Literal["block", "redact", "mask", "hash"]
+"""Supported strategies for handling detected sensitive values."""
+
+
+class PIIMatch(TypedDict):
+ """Represents an individual match of sensitive data."""
+
+ type: str
+ value: str
+ start: int
+ end: int
+
+
+class PIIDetectionError(Exception):
+ """Raised when configured to block on detected sensitive values."""
+
+ def __init__(self, pii_type: str, matches: Sequence[PIIMatch]) -> None:
+ """Initialize the exception with match context.
+
+ Args:
+ pii_type: Name of the detected sensitive type.
+ matches: All matches that were detected for that type.
+ """
+ self.pii_type = pii_type
+ self.matches = list(matches)
+ count = len(matches)
+ msg = f"Detected {count} instance(s) of {pii_type} in text content"
+ super().__init__(msg)
+
+
+Detector = Callable[[str], list[PIIMatch]]
+"""Callable signature for detectors that locate sensitive values."""
+
+
+def detect_email(content: str) -> list[PIIMatch]:
+ """Detect email addresses in content."""
+ pattern = r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b"
+ return [
+ PIIMatch(
+ type="email",
+ value=match.group(),
+ start=match.start(),
+ end=match.end(),
+ )
+ for match in re.finditer(pattern, content)
+ ]
+
+
+def detect_credit_card(content: str) -> list[PIIMatch]:
+ """Detect credit card numbers in content using Luhn validation."""
+ pattern = r"\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b"
+ matches = []
+
+ for match in re.finditer(pattern, content):
+ card_number = match.group()
+ if _passes_luhn(card_number):
+ matches.append(
+ PIIMatch(
+ type="credit_card",
+ value=card_number,
+ start=match.start(),
+ end=match.end(),
+ )
+ )
+
+ return matches
+
+
+def detect_ip(content: str) -> list[PIIMatch]:
+ """Detect IPv4 or IPv6 addresses in content."""
+ matches: list[PIIMatch] = []
+ ipv4_pattern = r"\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b"
+
+ for match in re.finditer(ipv4_pattern, content):
+ ip_candidate = match.group()
+ try:
+ ipaddress.ip_address(ip_candidate)
+ except ValueError:
+ continue
+ matches.append(
+ PIIMatch(
+ type="ip",
+ value=ip_candidate,
+ start=match.start(),
+ end=match.end(),
+ )
+ )
+
+ return matches
+
+
+def detect_mac_address(content: str) -> list[PIIMatch]:
+ """Detect MAC addresses in content."""
+ pattern = r"\b([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}\b"
+ return [
+ PIIMatch(
+ type="mac_address",
+ value=match.group(),
+ start=match.start(),
+ end=match.end(),
+ )
+ for match in re.finditer(pattern, content)
+ ]
+
+
+def detect_url(content: str) -> list[PIIMatch]:
+ """Detect URLs in content using regex and stdlib validation."""
+ matches: list[PIIMatch] = []
+
+ # Pattern 1: URLs with scheme (http:// or https://)
+ scheme_pattern = r"https?://[^\s<>\"{}|\\^`\[\]]+"
+
+ for match in re.finditer(scheme_pattern, content):
+ url = match.group()
+ result = urlparse(url)
+ if result.scheme in ("http", "https") and result.netloc:
+ matches.append(
+ PIIMatch(
+ type="url",
+ value=url,
+ start=match.start(),
+ end=match.end(),
+ )
+ )
+
+ # Pattern 2: URLs without scheme (www.example.com or example.com/path)
+ # More conservative to avoid false positives
+ bare_pattern = (
+ r"\b(?:www\.)?[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?"
+ r"(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)+(?:/[^\s]*)?"
+ )
+
+ for match in re.finditer(bare_pattern, content):
+ start, end = match.start(), match.end()
+ # Skip if already matched with scheme
+ if any(m["start"] <= start < m["end"] or m["start"] < end <= m["end"] for m in matches):
+ continue
+
+ url = match.group()
+ # Only accept if it has a path or starts with www
+ # This reduces false positives like "example.com" in prose
+ if "/" in url or url.startswith("www."):
+ # Add scheme for validation (required for urlparse to work correctly)
+ test_url = f"http://{url}"
+ result = urlparse(test_url)
+ if result.netloc and "." in result.netloc:
+ matches.append(
+ PIIMatch(
+ type="url",
+ value=url,
+ start=start,
+ end=end,
+ )
+ )
+
+ return matches
+
+
+BUILTIN_DETECTORS: dict[str, Detector] = {
+ "email": detect_email,
+ "credit_card": detect_credit_card,
+ "ip": detect_ip,
+ "mac_address": detect_mac_address,
+ "url": detect_url,
+}
+"""Registry of built-in detectors keyed by type name."""
+
+
+def _passes_luhn(card_number: str) -> bool:
+ """Validate credit card number using the Luhn checksum."""
+ digits = [int(d) for d in card_number if d.isdigit()]
+ if not 13 <= len(digits) <= 19:
+ return False
+
+ checksum = 0
+ for index, digit in enumerate(reversed(digits)):
+ value = digit
+ if index % 2 == 1:
+ value *= 2
+ if value > 9:
+ value -= 9
+ checksum += value
+ return checksum % 10 == 0
+
+
+def _apply_redact_strategy(content: str, matches: list[PIIMatch]) -> str:
+ result = content
+ for match in sorted(matches, key=lambda item: item["start"], reverse=True):
+ replacement = f"[REDACTED_{match['type'].upper()}]"
+ result = result[: match["start"]] + replacement + result[match["end"] :]
+ return result
+
+
+def _apply_mask_strategy(content: str, matches: list[PIIMatch]) -> str:
+ result = content
+ for match in sorted(matches, key=lambda item: item["start"], reverse=True):
+ value = match["value"]
+ pii_type = match["type"]
+ if pii_type == "email":
+ parts = value.split("@")
+ if len(parts) == 2:
+ domain_parts = parts[1].split(".")
+ masked = (
+ f"{parts[0]}@****.{domain_parts[-1]}"
+ if len(domain_parts) >= 2
+ else f"{parts[0]}@****"
+ )
+ else:
+ masked = "****"
+ elif pii_type == "credit_card":
+ digits_only = "".join(c for c in value if c.isdigit())
+ separator = "-" if "-" in value else " " if " " in value else ""
+ if separator:
+ masked = f"****{separator}****{separator}****{separator}{digits_only[-4:]}"
+ else:
+ masked = f"************{digits_only[-4:]}"
+ elif pii_type == "ip":
+ octets = value.split(".")
+ masked = f"*.*.*.{octets[-1]}" if len(octets) == 4 else "****"
+ elif pii_type == "mac_address":
+ separator = ":" if ":" in value else "-"
+ masked = (
+ f"**{separator}**{separator}**{separator}**{separator}**{separator}{value[-2:]}"
+ )
+ elif pii_type == "url":
+ masked = "[MASKED_URL]"
+ else:
+ masked = f"****{value[-4:]}" if len(value) > 4 else "****"
+ result = result[: match["start"]] + masked + result[match["end"] :]
+ return result
+
+
+def _apply_hash_strategy(content: str, matches: list[PIIMatch]) -> str:
+ result = content
+ for match in sorted(matches, key=lambda item: item["start"], reverse=True):
+ digest = hashlib.sha256(match["value"].encode()).hexdigest()[:8]
+ replacement = f"<{match['type']}_hash:{digest}>"
+ result = result[: match["start"]] + replacement + result[match["end"] :]
+ return result
+
+
+def apply_strategy(
+ content: str,
+ matches: list[PIIMatch],
+ strategy: RedactionStrategy,
+) -> str:
+ """Apply the configured strategy to matches within content."""
+ if not matches:
+ return content
+ if strategy == "redact":
+ return _apply_redact_strategy(content, matches)
+ if strategy == "mask":
+ return _apply_mask_strategy(content, matches)
+ if strategy == "hash":
+ return _apply_hash_strategy(content, matches)
+ if strategy == "block":
+ raise PIIDetectionError(matches[0]["type"], matches)
+ msg = f"Unknown redaction strategy: {strategy}"
+ raise ValueError(msg)
+
+
+def resolve_detector(pii_type: str, detector: Detector | str | None) -> Detector:
+ """Return a callable detector for the given configuration."""
+ if detector is None:
+ if pii_type not in BUILTIN_DETECTORS:
+ msg = (
+ f"Unknown PII type: {pii_type}. "
+ f"Must be one of {list(BUILTIN_DETECTORS.keys())} or provide a custom detector."
+ )
+ raise ValueError(msg)
+ return BUILTIN_DETECTORS[pii_type]
+ if isinstance(detector, str):
+ pattern = re.compile(detector)
+
+ def regex_detector(content: str) -> list[PIIMatch]:
+ return [
+ PIIMatch(
+ type=pii_type,
+ value=match.group(),
+ start=match.start(),
+ end=match.end(),
+ )
+ for match in pattern.finditer(content)
+ ]
+
+ return regex_detector
+ return detector
+
+
+@dataclass(frozen=True)
+class RedactionRule:
+ """Configuration for handling a single PII type."""
+
+ pii_type: str
+ strategy: RedactionStrategy = "redact"
+ detector: Detector | str | None = None
+
+ def resolve(self) -> ResolvedRedactionRule:
+ """Resolve runtime detector and return an immutable rule."""
+ resolved_detector = resolve_detector(self.pii_type, self.detector)
+ return ResolvedRedactionRule(
+ pii_type=self.pii_type,
+ strategy=self.strategy,
+ detector=resolved_detector,
+ )
+
+
+@dataclass(frozen=True)
+class ResolvedRedactionRule:
+ """Resolved redaction rule ready for execution."""
+
+ pii_type: str
+ strategy: RedactionStrategy
+ detector: Detector
+
+ def apply(self, content: str) -> tuple[str, list[PIIMatch]]:
+ """Apply this rule to content, returning new content and matches."""
+ matches = self.detector(content)
+ if not matches:
+ return content, []
+ updated = apply_strategy(content, matches, self.strategy)
+ return updated, matches
+
+
+__all__ = [
+ "PIIDetectionError",
+ "PIIMatch",
+ "RedactionRule",
+ "ResolvedRedactionRule",
+ "apply_strategy",
+ "detect_credit_card",
+ "detect_email",
+ "detect_ip",
+ "detect_mac_address",
+ "detect_url",
+]
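
A short sketch of resolving and applying these rules directly, outside the PII middleware, using only the helpers defined above:

```python
from langchain.agents.middleware import RedactionRule

text = "Reach me at alice@example.com from 10.0.0.1"

# Built-in email detector resolved by name; masking keeps only coarse structure.
email_rule = RedactionRule(pii_type="email", strategy="mask").resolve()
masked, matches = email_rule.apply(text)
# masked == "Reach me at alice@****.com from 10.0.0.1"; matches holds one PIIMatch

# A custom detector can be supplied as a regex string; here values are hashed
# to a stable fingerprint instead of being removed.
token_rule = RedactionRule(
    pii_type="api_token",
    strategy="hash",
    detector=r"\bsk-[A-Za-z0-9]{8,}\b",
).resolve()
redacted, _ = token_rule.apply("key: sk-abcdef123456")
# redacted looks like "key: <api_token_hash:xxxxxxxx>"
```
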
diff --git a/libs/langchain_v1/langchain/agents/middleware/context_editing.py b/libs/langchain_v1/langchain/agents/middleware/context_editing.py
index c26e28dc330..3de8a380f7e 100644
--- a/libs/langchain_v1/langchain/agents/middleware/context_editing.py
+++ b/libs/langchain_v1/langchain/agents/middleware/context_editing.py
@@ -8,7 +8,7 @@ with any LangChain chat model.
from __future__ import annotations
-from collections.abc import Callable, Iterable, Sequence
+from collections.abc import Awaitable, Callable, Iterable, Sequence
from dataclasses import dataclass
from typing import Literal
@@ -22,7 +22,12 @@ from langchain_core.messages import (
from langchain_core.messages.utils import count_tokens_approximately
from typing_extensions import Protocol
-from langchain.agents.middleware.types import AgentMiddleware, ModelRequest
+from langchain.agents.middleware.types import (
+ AgentMiddleware,
+ ModelCallResult,
+ ModelRequest,
+ ModelResponse,
+)
DEFAULT_TOOL_PLACEHOLDER = "[cleared]"
@@ -177,7 +182,7 @@ class ClearToolUsesEdit(ContextEdit):
class ContextEditingMiddleware(AgentMiddleware):
- """Middleware that automatically prunes tool results to manage context size.
+ """Automatically prunes tool results to manage context size.
The middleware applies a sequence of edits when the total input token count
exceeds configured thresholds. Currently the `ClearToolUsesEdit` strategy is
@@ -193,7 +198,7 @@ class ContextEditingMiddleware(AgentMiddleware):
edits: Iterable[ContextEdit] | None = None,
token_count_method: Literal["approximate", "model"] = "approximate", # noqa: S107
) -> None:
- """Initialise a context editing middleware instance.
+ """Initializes a context editing middleware instance.
Args:
edits: Sequence of edit strategies to apply. Defaults to a single
@@ -209,8 +214,8 @@ class ContextEditingMiddleware(AgentMiddleware):
def wrap_model_call(
self,
request: ModelRequest,
- handler: Callable[[ModelRequest], AIMessage],
- ) -> AIMessage:
+ handler: Callable[[ModelRequest], ModelResponse],
+ ) -> ModelCallResult:
"""Apply context edits before invoking the model via handler."""
if not request.messages:
return handler(request)
@@ -220,9 +225,11 @@ class ContextEditingMiddleware(AgentMiddleware):
def count_tokens(messages: Sequence[BaseMessage]) -> int:
return count_tokens_approximately(messages)
else:
- system_msg = (
- [SystemMessage(content=request.system_prompt)] if request.system_prompt else []
- )
+ system_msg = []
+ if request.system_prompt and not isinstance(request.system_prompt, SystemMessage):
+ system_msg = [SystemMessage(content=request.system_prompt)]
+ elif request.system_prompt and isinstance(request.system_prompt, SystemMessage):
+ system_msg = [request.system_prompt]
def count_tokens(messages: Sequence[BaseMessage]) -> int:
return request.model.get_num_tokens_from_messages(
@@ -234,6 +241,37 @@ class ContextEditingMiddleware(AgentMiddleware):
return handler(request)
+ async def awrap_model_call(
+ self,
+ request: ModelRequest,
+ handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
+ ) -> ModelCallResult:
+ """Apply context edits before invoking the model via handler (async version)."""
+ if not request.messages:
+ return await handler(request)
+
+ if self.token_count_method == "approximate": # noqa: S105
+
+ def count_tokens(messages: Sequence[BaseMessage]) -> int:
+ return count_tokens_approximately(messages)
+ else:
+ system_msg = []
+
+ if request.system_prompt and not isinstance(request.system_prompt, SystemMessage):
+ system_msg = [SystemMessage(content=request.system_prompt)]
+ elif request.system_prompt and isinstance(request.system_prompt, SystemMessage):
+ system_msg = [request.system_prompt]
+
+ def count_tokens(messages: Sequence[BaseMessage]) -> int:
+ return request.model.get_num_tokens_from_messages(
+ system_msg + list(messages), request.tools
+ )
+
+ for edit in self.edits:
+ edit.apply(request.messages, count_tokens=count_tokens)
+
+ return await handler(request)
+
__all__ = [
"ClearToolUsesEdit",
diff --git a/libs/langchain_v1/langchain/agents/middleware/file_search.py b/libs/langchain_v1/langchain/agents/middleware/file_search.py
new file mode 100644
index 00000000000..fe9efc60b02
--- /dev/null
+++ b/libs/langchain_v1/langchain/agents/middleware/file_search.py
@@ -0,0 +1,382 @@
+"""File search middleware for Anthropic text editor and memory tools.
+
+This module provides Glob and Grep search tools that operate on files stored
+in state or filesystem.
+"""
+
+from __future__ import annotations
+
+import fnmatch
+import json
+import re
+import subprocess
+from contextlib import suppress
+from datetime import datetime, timezone
+from pathlib import Path
+from typing import Literal
+
+from langchain_core.tools import tool
+
+from langchain.agents.middleware.types import AgentMiddleware
+
+
+def _expand_include_patterns(pattern: str) -> list[str] | None:
+ """Expand brace patterns like ``*.{py,pyi}`` into a list of globs."""
+ if "}" in pattern and "{" not in pattern:
+ return None
+
+ expanded: list[str] = []
+
+ def _expand(current: str) -> None:
+ start = current.find("{")
+ if start == -1:
+ expanded.append(current)
+ return
+
+ end = current.find("}", start)
+ if end == -1:
+ raise ValueError
+
+ prefix = current[:start]
+ suffix = current[end + 1 :]
+ inner = current[start + 1 : end]
+ if not inner:
+ raise ValueError
+
+ for option in inner.split(","):
+ _expand(prefix + option + suffix)
+
+ try:
+ _expand(pattern)
+ except ValueError:
+ return None
+
+ return expanded
+
+
+def _is_valid_include_pattern(pattern: str) -> bool:
+ """Validate glob pattern used for include filters."""
+ if not pattern:
+ return False
+
+ if any(char in pattern for char in ("\x00", "\n", "\r")):
+ return False
+
+ expanded = _expand_include_patterns(pattern)
+ if expanded is None:
+ return False
+
+ try:
+ for candidate in expanded:
+ re.compile(fnmatch.translate(candidate))
+ except re.error:
+ return False
+
+ return True
+
+
+def _match_include_pattern(basename: str, pattern: str) -> bool:
+ """Return True if the basename matches the include pattern."""
+ expanded = _expand_include_patterns(pattern)
+ if not expanded:
+ return False
+
+ return any(fnmatch.fnmatch(basename, candidate) for candidate in expanded)
+
+
+class FilesystemFileSearchMiddleware(AgentMiddleware):
+ """Provides Glob and Grep search over filesystem files.
+
+ This middleware adds two tools that search through local filesystem:
+ - Glob: Fast file pattern matching by file path
+ - Grep: Fast content search using ripgrep or Python fallback
+
+ Example:
+ ```python
+ from langchain.agents import create_agent
+ from langchain.agents.middleware import (
+ FilesystemFileSearchMiddleware,
+ )
+
+ agent = create_agent(
+ model=model,
+ tools=[],
+ middleware=[
+ FilesystemFileSearchMiddleware(root_path="/workspace"),
+ ],
+ )
+ ```
+ """
+
+ def __init__(
+ self,
+ *,
+ root_path: str,
+ use_ripgrep: bool = True,
+ max_file_size_mb: int = 10,
+ ) -> None:
+ """Initialize the search middleware.
+
+ Args:
+ root_path: Root directory to search.
+ use_ripgrep: Whether to use ripgrep for search (default: True).
+ Falls back to Python if ripgrep unavailable.
+ max_file_size_mb: Maximum file size to search in MB (default: 10).
+ """
+ self.root_path = Path(root_path).resolve()
+ self.use_ripgrep = use_ripgrep
+ self.max_file_size_bytes = max_file_size_mb * 1024 * 1024
+
+ # Create tool instances as closures that capture self
+ @tool
+ def glob_search(pattern: str, path: str = "/") -> str:
+ """Fast file pattern matching tool that works with any codebase size.
+
+ Supports glob patterns like **/*.js or src/**/*.ts.
+ Returns matching file paths sorted by modification time.
+ Use this tool when you need to find files by name patterns.
+
+ Args:
+ pattern: The glob pattern to match files against.
+ path: The directory to search in. If not specified, searches from root.
+
+ Returns:
+ Newline-separated list of matching file paths, sorted by modification
+ time (most recently modified first). Returns "No files found" if no
+ matches.
+ """
+ try:
+ base_full = self._validate_and_resolve_path(path)
+ except ValueError:
+ return "No files found"
+
+ if not base_full.exists() or not base_full.is_dir():
+ return "No files found"
+
+ # Use pathlib glob
+ matching: list[tuple[str, str]] = []
+ for match in base_full.glob(pattern):
+ if match.is_file():
+ # Convert to virtual path
+ virtual_path = "/" + str(match.relative_to(self.root_path))
+ stat = match.stat()
+ modified_at = datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat()
+ matching.append((virtual_path, modified_at))
+
+ if not matching:
+ return "No files found"
+
+ # Sort by modification time (ISO-8601 UTC strings), most recently modified first
+ matching.sort(key=lambda item: item[1], reverse=True)
+ file_paths = [p for p, _ in matching]
+ return "\n".join(file_paths)
+
+ @tool
+ def grep_search(
+ pattern: str,
+ path: str = "/",
+ include: str | None = None,
+ output_mode: Literal["files_with_matches", "content", "count"] = "files_with_matches",
+ ) -> str:
+ """Fast content search tool that works with any codebase size.
+
+ Searches file contents using regular expressions. Supports full regex
+ syntax and filters files by pattern with the include parameter.
+
+ Args:
+ pattern: The regular expression pattern to search for in file contents.
+ path: The directory to search in. If not specified, searches from root.
+ include: File pattern to filter (e.g., "*.js", "*.{ts,tsx}").
+ output_mode: Output format:
+ - "files_with_matches": Only file paths containing matches (default)
+ - "content": Matching lines with file:line:content format
+ - "count": Count of matches per file
+
+ Returns:
+ Search results formatted according to output_mode. Returns "No matches
+ found" if no results.
+ """
+ # Compile regex pattern (for validation)
+ try:
+ re.compile(pattern)
+ except re.error as e:
+ return f"Invalid regex pattern: {e}"
+
+ if include and not _is_valid_include_pattern(include):
+ return "Invalid include pattern"
+
+ # Try ripgrep first if enabled
+ results = None
+ if self.use_ripgrep:
+ with suppress(
+ FileNotFoundError,
+ subprocess.CalledProcessError,
+ subprocess.TimeoutExpired,
+ ):
+ results = self._ripgrep_search(pattern, path, include)
+
+ # Python fallback if ripgrep failed or is disabled
+ if results is None:
+ results = self._python_search(pattern, path, include)
+
+ if not results:
+ return "No matches found"
+
+ # Format output based on mode
+ return self._format_grep_results(results, output_mode)
+
+ self.glob_search = glob_search
+ self.grep_search = grep_search
+ self.tools = [glob_search, grep_search]
+
+ def _validate_and_resolve_path(self, path: str) -> Path:
+ """Validate and resolve a virtual path to filesystem path."""
+ # Normalize path
+ if not path.startswith("/"):
+ path = "/" + path
+
+ # Check for path traversal
+ if ".." in path or "~" in path:
+ msg = "Path traversal not allowed"
+ raise ValueError(msg)
+
+ # Convert virtual path to filesystem path
+ relative = path.lstrip("/")
+ full_path = (self.root_path / relative).resolve()
+
+ # Ensure path is within root
+ try:
+ full_path.relative_to(self.root_path)
+ except ValueError:
+ msg = f"Path outside root directory: {path}"
+ raise ValueError(msg) from None
+
+ return full_path
+
+ def _ripgrep_search(
+ self, pattern: str, base_path: str, include: str | None
+ ) -> dict[str, list[tuple[int, str]]]:
+ """Search using ripgrep subprocess."""
+ try:
+ base_full = self._validate_and_resolve_path(base_path)
+ except ValueError:
+ return {}
+
+ if not base_full.exists():
+ return {}
+
+ # Build ripgrep command
+ cmd = ["rg", "--json"]
+
+ if include:
+ # Convert glob pattern to ripgrep glob
+ cmd.extend(["--glob", include])
+
+ cmd.extend(["--", pattern, str(base_full)])
+
+ try:
+ result = subprocess.run( # noqa: S603
+ cmd,
+ capture_output=True,
+ text=True,
+ timeout=30,
+ check=False,
+ )
+ except (subprocess.TimeoutExpired, FileNotFoundError):
+ # Fallback to Python search if ripgrep unavailable or times out
+ return self._python_search(pattern, base_path, include)
+
+ # Parse ripgrep JSON output
+ results: dict[str, list[tuple[int, str]]] = {}
+ for line in result.stdout.splitlines():
+ try:
+ data = json.loads(line)
+ if data["type"] == "match":
+ path = data["data"]["path"]["text"]
+ # Convert to virtual path
+ virtual_path = "/" + str(Path(path).relative_to(self.root_path))
+ line_num = data["data"]["line_number"]
+ line_text = data["data"]["lines"]["text"].rstrip("\n")
+
+ if virtual_path not in results:
+ results[virtual_path] = []
+ results[virtual_path].append((line_num, line_text))
+ except (json.JSONDecodeError, KeyError):
+ continue
+
+ return results
+
+ def _python_search(
+ self, pattern: str, base_path: str, include: str | None
+ ) -> dict[str, list[tuple[int, str]]]:
+ """Search using Python regex (fallback)."""
+ try:
+ base_full = self._validate_and_resolve_path(base_path)
+ except ValueError:
+ return {}
+
+ if not base_full.exists():
+ return {}
+
+ regex = re.compile(pattern)
+ results: dict[str, list[tuple[int, str]]] = {}
+
+ # Walk directory tree
+ for file_path in base_full.rglob("*"):
+ if not file_path.is_file():
+ continue
+
+ # Check include filter
+ if include and not _match_include_pattern(file_path.name, include):
+ continue
+
+ # Skip files that are too large
+ if file_path.stat().st_size > self.max_file_size_bytes:
+ continue
+
+ try:
+ content = file_path.read_text()
+ except (UnicodeDecodeError, PermissionError):
+ continue
+
+ # Search content
+ for line_num, line in enumerate(content.splitlines(), 1):
+ if regex.search(line):
+ virtual_path = "/" + str(file_path.relative_to(self.root_path))
+ if virtual_path not in results:
+ results[virtual_path] = []
+ results[virtual_path].append((line_num, line))
+
+ return results
+
+ def _format_grep_results(
+ self,
+ results: dict[str, list[tuple[int, str]]],
+ output_mode: str,
+ ) -> str:
+ """Format grep results based on output mode."""
+ if output_mode == "files_with_matches":
+ # Just return file paths
+ return "\n".join(sorted(results.keys()))
+
+ if output_mode == "content":
+ # Return file:line:content format
+ lines = []
+ for file_path in sorted(results.keys()):
+ for line_num, line in results[file_path]:
+ lines.append(f"{file_path}:{line_num}:{line}")
+ return "\n".join(lines)
+
+ if output_mode == "count":
+ # Return file:count format
+ lines = []
+ for file_path in sorted(results.keys()):
+ count = len(results[file_path])
+ lines.append(f"{file_path}:{count}")
+ return "\n".join(lines)
+
+ # Default to files_with_matches
+ return "\n".join(sorted(results.keys()))
+
+
+__all__ = [
+ "FilesystemFileSearchMiddleware",
+]
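
A quick sketch of exercising the middleware's tools directly (the root path and search patterns are illustrative):

```python
from langchain.agents.middleware.file_search import FilesystemFileSearchMiddleware

search = FilesystemFileSearchMiddleware(root_path="/workspace", max_file_size_mb=5)

# Find Python files under /workspace/src by glob pattern.
print(search.glob_search.invoke({"pattern": "**/*.py", "path": "/src"}))

# Count function definitions per file, restricted to *.py files.
print(
    search.grep_search.invoke(
        {
            "pattern": r"def \w+\(",
            "path": "/",
            "include": "*.py",
            "output_mode": "count",
        }
    )
)
```
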
diff --git a/libs/langchain_v1/langchain/agents/middleware/human_in_the_loop.py b/libs/langchain_v1/langchain/agents/middleware/human_in_the_loop.py
index 071a8d9cf4a..cc1a4f2df3f 100644
--- a/libs/langchain_v1/langchain/agents/middleware/human_in_the_loop.py
+++ b/libs/langchain_v1/langchain/agents/middleware/human_in_the_loop.py
@@ -10,89 +10,93 @@ from typing_extensions import NotRequired, TypedDict
from langchain.agents.middleware.types import AgentMiddleware, AgentState
-class HumanInTheLoopConfig(TypedDict):
- """Configuration that defines what actions are allowed for a human interrupt.
+class Action(TypedDict):
+ """Represents an action with a name and args."""
- This controls the available interaction options when the graph is paused for human input.
- """
+ name: str
+ """The type or name of action being requested (e.g., "add_numbers")."""
- allow_accept: NotRequired[bool]
- """Whether the human can approve the current action without changes."""
- allow_edit: NotRequired[bool]
- """Whether the human can approve the current action with edited content."""
- allow_respond: NotRequired[bool]
- """Whether the human can reject the current action with feedback."""
+ args: dict[str, Any]
+ """Key-value pairs of args needed for the action (e.g., {"a": 1, "b": 2})."""
class ActionRequest(TypedDict):
- """Represents a request with a name and arguments."""
+ """Represents an action request with a name, args, and description."""
- action: str
- """The type or name of action being requested (e.g., "add_numbers")."""
- args: dict
- """Key-value pairs of arguments needed for the action (e.g., {"a": 1, "b": 2})."""
+ name: str
+ """The name of the action being requested."""
+
+ args: dict[str, Any]
+ """Key-value pairs of args needed for the action (e.g., {"a": 1, "b": 2})."""
+
+ description: NotRequired[str]
+ """The description of the action to be reviewed."""
-class HumanInTheLoopRequest(TypedDict):
- """Represents an interrupt triggered by the graph that requires human intervention.
-
- Example:
- ```python
- # Extract a tool call from the state and create an interrupt request
- request = HumanInterrupt(
- action_request=ActionRequest(
- action="run_command", # The action being requested
- args={"command": "ls", "args": ["-l"]}, # Arguments for the action
- ),
- config=HumanInTheLoopConfig(
- allow_accept=True, # Allow approval
- allow_respond=True, # Allow rejection with feedback
- allow_edit=False, # Don't allow approval with edits
- ),
- description="Please review the command before execution",
- )
- # Send the interrupt request and get the response
- response = interrupt([request])[0]
- ```
- """
-
- action_request: ActionRequest
- """The specific action being requested from the human."""
- config: HumanInTheLoopConfig
- """Configuration defining what response types are allowed."""
- description: str | None
- """Optional detailed description of what input is needed."""
+DecisionType = Literal["approve", "edit", "reject"]
-class AcceptPayload(TypedDict):
+class ReviewConfig(TypedDict):
+ """Policy for reviewing a HITL request."""
+
+ action_name: str
+ """Name of the action associated with this review configuration."""
+
+ allowed_decisions: list[DecisionType]
+ """The decisions that are allowed for this request."""
+
+ args_schema: NotRequired[dict[str, Any]]
+ """JSON schema for the args associated with the action, if edits are allowed."""
+
+
+class HITLRequest(TypedDict):
+ """Request for human feedback on a sequence of actions requested by a model."""
+
+ action_requests: list[ActionRequest]
+ """A list of agent actions for human review."""
+
+ review_configs: list[ReviewConfig]
+ """Review configuration for all possible actions."""
+
+
+class ApproveDecision(TypedDict):
"""Response when a human approves the action."""
- type: Literal["accept"]
+ type: Literal["approve"]
"""The type of response when a human approves the action."""
-class ResponsePayload(TypedDict):
- """Response when a human rejects the action."""
-
- type: Literal["response"]
- """The type of response when a human rejects the action."""
-
- args: NotRequired[str]
- """The message to be sent to the model explaining why the action was rejected."""
-
-
-class EditPayload(TypedDict):
+class EditDecision(TypedDict):
"""Response when a human edits the action."""
type: Literal["edit"]
"""The type of response when a human edits the action."""
- args: ActionRequest
- """The action request with the edited content."""
+ edited_action: Action
+ """Edited action for the agent to perform.
+
+ Ex: for a tool call, a human reviewer can edit the tool name and args.
+ """
-HumanInTheLoopResponse = AcceptPayload | ResponsePayload | EditPayload
-"""Aggregated response type for all possible human in the loop responses."""
+class RejectDecision(TypedDict):
+ """Response when a human rejects the action."""
+
+ type: Literal["reject"]
+ """The type of response when a human rejects the action."""
+
+ message: NotRequired[str]
+ """The message sent to the model explaining why the action was rejected."""
+
+
+Decision = ApproveDecision | EditDecision | RejectDecision
+
+
+class HITLResponse(TypedDict):
+ """Response payload for a HITLRequest."""
+
+ decisions: list[Decision]
+ """The decisions made by the human."""
class _DescriptionFactory(Protocol):
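
Concretely, the new request/response payloads compose as in the sketch below, built only from the TypedDicts defined above:

```python
from langchain.agents.middleware.human_in_the_loop import (
    ActionRequest,
    ApproveDecision,
    HITLRequest,
    HITLResponse,
    ReviewConfig,
)

# Request sent to the human reviewer: one action, reviewable with approve/edit/reject.
request = HITLRequest(
    action_requests=[
        ActionRequest(
            name="run_command",
            args={"command": "ls", "args": ["-l"]},
            description="Please review the command before execution",
        )
    ],
    review_configs=[
        ReviewConfig(
            action_name="run_command",
            allowed_decisions=["approve", "edit", "reject"],
        )
    ],
)

# Response returned by the reviewer: one decision per requested action.
response = HITLResponse(decisions=[ApproveDecision(type="approve")])
```
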
@@ -103,19 +107,21 @@ class _DescriptionFactory(Protocol):
...
-class ToolConfig(TypedDict):
- """Configuration for a tool requiring human in the loop."""
+class InterruptOnConfig(TypedDict):
+ """Configuration for an action requiring human in the loop.
+
+ This is the configuration format used in the `HumanInTheLoopMiddleware.__init__`
+ method.
+ """
+
+ allowed_decisions: list[DecisionType]
+ """The decisions that are allowed for this action."""
- allow_accept: NotRequired[bool]
- """Whether the human can approve the current action without changes."""
- allow_edit: NotRequired[bool]
- """Whether the human can approve the current action with edited content."""
- allow_respond: NotRequired[bool]
- """Whether the human can reject the current action with feedback."""
description: NotRequired[str | _DescriptionFactory]
"""The description attached to the request for human input.
Can be either:
+
- A static string describing the approval request
- A callable that dynamically generates the description based on agent state,
runtime, and tool call information
@@ -124,7 +130,7 @@ class ToolConfig(TypedDict):
```python
# Static string description
-    config = ToolConfig(
-        allow_accept=True,
+    config = InterruptOnConfig(
+        allowed_decisions=["approve", "reject"],
description="Please review this tool execution"
)
@@ -140,12 +146,14 @@ class ToolConfig(TypedDict):
f"Arguments:\\n{json.dumps(tool_call['args'], indent=2)}"
)
- config = ToolConfig(
- allow_accept=True,
+ config = InterruptOnConfig(
+ allowed_decisions=["approve", "edit", "reject"],
description=format_tool_description
)
```
"""
+ args_schema: NotRequired[dict[str, Any]]
+ """JSON schema for the args associated with the action, if edits are allowed."""
class HumanInTheLoopMiddleware(AgentMiddleware):
@@ -153,7 +161,7 @@ class HumanInTheLoopMiddleware(AgentMiddleware):
def __init__(
self,
- interrupt_on: dict[str, bool | ToolConfig],
+ interrupt_on: dict[str, bool | InterruptOnConfig],
*,
description_prefix: str = "Tool execution requires approval",
) -> None:
@@ -163,34 +171,110 @@ class HumanInTheLoopMiddleware(AgentMiddleware):
interrupt_on: Mapping of tool name to allowed actions.
If a tool doesn't have an entry, it's auto-approved by default.
- * `True` indicates all actions are allowed: accept, edit, and respond.
+ * `True` indicates all decisions are allowed: approve, edit, and reject.
* `False` indicates that the tool is auto-approved.
- * `ToolConfig` indicates the specific actions allowed for this tool.
- The ToolConfig can include a `description` field (str or callable) for
- custom formatting of the interrupt description.
+ * `InterruptOnConfig` indicates the specific decisions allowed for this
+ tool.
+ The InterruptOnConfig can include a `description` field (`str` or
+ `Callable`) for custom formatting of the interrupt description.
description_prefix: The prefix to use when constructing action requests.
- This is used to provide context about the tool call and the action being requested.
- Not used if a tool has a `description` in its ToolConfig.
+ This is used to provide context about the tool call and the action being
+ requested. Not used if a tool has a `description` in its
+ `InterruptOnConfig`.
"""
super().__init__()
- resolved_tool_configs: dict[str, ToolConfig] = {}
+ resolved_configs: dict[str, InterruptOnConfig] = {}
for tool_name, tool_config in interrupt_on.items():
if isinstance(tool_config, bool):
if tool_config is True:
- resolved_tool_configs[tool_name] = ToolConfig(
- allow_accept=True,
- allow_edit=True,
- allow_respond=True,
+ resolved_configs[tool_name] = InterruptOnConfig(
+ allowed_decisions=["approve", "edit", "reject"]
)
- elif any(
- tool_config.get(x, False) for x in ["allow_accept", "allow_edit", "allow_respond"]
- ):
- resolved_tool_configs[tool_name] = tool_config
- self.interrupt_on = resolved_tool_configs
+ elif tool_config.get("allowed_decisions"):
+ resolved_configs[tool_name] = tool_config
+ self.interrupt_on = resolved_configs
self.description_prefix = description_prefix
+ def _create_action_and_config(
+ self,
+ tool_call: ToolCall,
+ config: InterruptOnConfig,
+ state: AgentState,
+ runtime: Runtime,
+ ) -> tuple[ActionRequest, ReviewConfig]:
+ """Create an ActionRequest and ReviewConfig for a tool call."""
+ tool_name = tool_call["name"]
+ tool_args = tool_call["args"]
+
+ # Generate description using the description field (str or callable)
+ description_value = config.get("description")
+ if callable(description_value):
+ description = description_value(tool_call, state, runtime)
+ elif description_value is not None:
+ description = description_value
+ else:
+ description = f"{self.description_prefix}\n\nTool: {tool_name}\nArgs: {tool_args}"
+
+ # Create ActionRequest with description
+ action_request = ActionRequest(
+ name=tool_name,
+ args=tool_args,
+ description=description,
+ )
+
+ # Create ReviewConfig
+ # eventually can get tool information and populate args_schema from there
+ review_config = ReviewConfig(
+ action_name=tool_name,
+ allowed_decisions=config["allowed_decisions"],
+ )
+
+ return action_request, review_config
+
+ def _process_decision(
+ self,
+ decision: Decision,
+ tool_call: ToolCall,
+ config: InterruptOnConfig,
+ ) -> tuple[ToolCall | None, ToolMessage | None]:
+ """Process a single decision and return the revised tool call and optional tool message."""
+ allowed_decisions = config["allowed_decisions"]
+
+ if decision["type"] == "approve" and "approve" in allowed_decisions:
+ return tool_call, None
+ if decision["type"] == "edit" and "edit" in allowed_decisions:
+ edited_action = decision["edited_action"]
+ return (
+ ToolCall(
+ type="tool_call",
+ name=edited_action["name"],
+ args=edited_action["args"],
+ id=tool_call["id"],
+ ),
+ None,
+ )
+ if decision["type"] == "reject" and "reject" in allowed_decisions:
+ # Create a tool message with the human's text response
+ content = decision.get("message") or (
+ f"User rejected the tool call for `{tool_call['name']}` with id {tool_call['id']}"
+ )
+ tool_message = ToolMessage(
+ content=content,
+ name=tool_call["name"],
+ tool_call_id=tool_call["id"],
+ status="error",
+ )
+ return tool_call, tool_message
+ msg = (
+ f"Unexpected human decision: {decision}. "
+ f"Decision type '{decision.get('type')}' "
+ f"is not allowed for tool '{tool_call['name']}'. "
+ f"Expected one of {allowed_decisions} based on the tool's configuration."
+ )
+ raise ValueError(msg)
+
def after_model(self, state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
- """Trigger interrupt flows for relevant tool calls after an AIMessage."""
+ """Trigger interrupt flows for relevant tool calls after an `AIMessage`."""
messages = state["messages"]
if not messages:
return None
@@ -216,87 +300,50 @@ class HumanInTheLoopMiddleware(AgentMiddleware):
revised_tool_calls: list[ToolCall] = auto_approved_tool_calls.copy()
artificial_tool_messages: list[ToolMessage] = []
- # Create interrupt requests for all tools that need approval
- interrupt_requests: list[HumanInTheLoopRequest] = []
+ # Create action requests and review configs for all tools that need approval
+ action_requests: list[ActionRequest] = []
+ review_configs: list[ReviewConfig] = []
+
for tool_call in interrupt_tool_calls:
- tool_name = tool_call["name"]
- tool_args = tool_call["args"]
- config = self.interrupt_on[tool_name]
+ config = self.interrupt_on[tool_call["name"]]
- # Generate description using the description field (str or callable)
- description_value = config.get("description")
- if callable(description_value):
- description = description_value(tool_call, state, runtime)
- elif description_value is not None:
- description = description_value
- else:
- description = f"{self.description_prefix}\n\nTool: {tool_name}\nArgs: {tool_args}"
+ # Create ActionRequest and ReviewConfig using helper method
+ action_request, review_config = self._create_action_and_config(
+ tool_call, config, state, runtime
+ )
+ action_requests.append(action_request)
+ review_configs.append(review_config)
- request: HumanInTheLoopRequest = {
- "action_request": ActionRequest(
- action=tool_name,
- args=tool_args,
- ),
- "config": config,
- "description": description,
- }
- interrupt_requests.append(request)
+ # Create single HITLRequest with all actions and configs
+ hitl_request = HITLRequest(
+ action_requests=action_requests,
+ review_configs=review_configs,
+ )
- responses: list[HumanInTheLoopResponse] = interrupt(interrupt_requests)
+ # Send interrupt and get response
+ hitl_response: HITLResponse = interrupt(hitl_request)
+ decisions = hitl_response["decisions"]
- # Validate that the number of responses matches the number of interrupt tool calls
- if (responses_len := len(responses)) != (
+ # Validate that the number of decisions matches the number of interrupt tool calls
+ if (decisions_len := len(decisions)) != (
interrupt_tool_calls_len := len(interrupt_tool_calls)
):
msg = (
- f"Number of human responses ({responses_len}) does not match "
+ f"Number of human decisions ({decisions_len}) does not match "
f"number of hanging tool calls ({interrupt_tool_calls_len})."
)
raise ValueError(msg)
- for i, response in enumerate(responses):
+ # Process each decision using helper method
+ for i, decision in enumerate(decisions):
tool_call = interrupt_tool_calls[i]
config = self.interrupt_on[tool_call["name"]]
- if response["type"] == "accept" and config.get("allow_accept"):
- revised_tool_calls.append(tool_call)
- elif response["type"] == "edit" and config.get("allow_edit"):
- edited_action = response["args"]
- revised_tool_calls.append(
- ToolCall(
- type="tool_call",
- name=edited_action["action"],
- args=edited_action["args"],
- id=tool_call["id"],
- )
- )
- elif response["type"] == "response" and config.get("allow_respond"):
- # Create a tool message with the human's text response
- content = response.get("args") or (
- f"User rejected the tool call for `{tool_call['name']}` "
- f"with id {tool_call['id']}"
- )
- tool_message = ToolMessage(
- content=content,
- name=tool_call["name"],
- tool_call_id=tool_call["id"],
- status="error",
- )
- revised_tool_calls.append(tool_call)
+ revised_tool_call, tool_message = self._process_decision(decision, tool_call, config)
+ if revised_tool_call:
+ revised_tool_calls.append(revised_tool_call)
+ if tool_message:
artificial_tool_messages.append(tool_message)
- else:
- allowed_actions = [
- action
- for action in ["accept", "edit", "response"]
- if config.get(f"allow_{'respond' if action == 'response' else action}")
- ]
- msg = (
- f"Unexpected human response: {response}. "
- f"Response action '{response.get('type')}' "
- f"is not allowed for tool '{tool_call['name']}'. "
- f"Expected one of {allowed_actions} based on the tool's configuration."
- )
- raise ValueError(msg)
# Update the AI message to only include approved tool calls
last_ai_msg.tool_calls = revised_tool_calls
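Taken together, these changes batch every tool call that needs review into one `HITLRequest` and expect one decision back per action, in order. A minimal sketch of the two payloads as plain dicts mirroring the TypedDicts above; the `run_command` tool and its arguments are purely illustrative:

```python
# Batched interrupt payload built by after_model (one entry per pending tool call).
hitl_request = {
    "action_requests": [
        {
            "name": "run_command",  # hypothetical tool
            "args": {"command": "ls -l"},
            "description": "Tool execution requires approval\n\nTool: run_command\nArgs: {'command': 'ls -l'}",
        }
    ],
    "review_configs": [
        {"action_name": "run_command", "allowed_decisions": ["approve", "edit", "reject"]}
    ],
}

# The human reviewer replies with one decision per action request, in the same order.
hitl_response = {
    "decisions": [
        {"type": "edit", "edited_action": {"name": "run_command", "args": {"command": "ls"}}}
    ]
}
```

When the run resumes, this response is what the `interrupt(hitl_request)` call inside `after_model` returns; each decision is then applied to its tool call by `_process_decision`.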
diff --git a/libs/langchain_v1/langchain/agents/middleware/model_call_limit.py b/libs/langchain_v1/langchain/agents/middleware/model_call_limit.py
index 63f0be50fda..90dba99dfa3 100644
--- a/libs/langchain_v1/langchain/agents/middleware/model_call_limit.py
+++ b/libs/langchain_v1/langchain/agents/middleware/model_call_limit.py
@@ -2,16 +2,33 @@
from __future__ import annotations
-from typing import TYPE_CHECKING, Any, Literal
+from typing import TYPE_CHECKING, Annotated, Any, Literal
from langchain_core.messages import AIMessage
+from langgraph.channels.untracked_value import UntrackedValue
+from typing_extensions import NotRequired
-from langchain.agents.middleware.types import AgentMiddleware, AgentState, hook_config
+from langchain.agents.middleware.types import (
+ AgentMiddleware,
+ AgentState,
+ PrivateStateAttr,
+ hook_config,
+)
if TYPE_CHECKING:
from langgraph.runtime import Runtime
+class ModelCallLimitState(AgentState):
+ """State schema for ModelCallLimitMiddleware.
+
+ Extends AgentState with model call tracking fields.
+ """
+
+ thread_model_call_count: NotRequired[Annotated[int, PrivateStateAttr]]
+ run_model_call_count: NotRequired[Annotated[int, UntrackedValue, PrivateStateAttr]]
+
+
def _build_limit_exceeded_message(
thread_count: int,
run_count: int,
@@ -69,8 +86,8 @@ class ModelCallLimitExceededError(Exception):
super().__init__(msg)
-class ModelCallLimitMiddleware(AgentMiddleware):
- """Middleware that tracks model call counts and enforces limits.
+class ModelCallLimitMiddleware(AgentMiddleware[ModelCallLimitState, Any]):
+ """Tracks model call counts and enforces limits.
This middleware monitors the number of model calls made during agent execution
and can terminate the agent when specified limits are reached. It supports
@@ -97,6 +114,8 @@ class ModelCallLimitMiddleware(AgentMiddleware):
```
"""
+ state_schema = ModelCallLimitState
+
def __init__(
self,
*,
@@ -108,17 +127,16 @@ class ModelCallLimitMiddleware(AgentMiddleware):
Args:
thread_limit: Maximum number of model calls allowed per thread.
- None means no limit. Defaults to `None`.
+ None means no limit.
run_limit: Maximum number of model calls allowed per run.
- None means no limit. Defaults to `None`.
+ None means no limit.
exit_behavior: What to do when limits are exceeded.
- "end": Jump to the end of the agent execution and
inject an artificial AI message indicating that the limit was exceeded.
- - "error": Raise a ModelCallLimitExceededError
- Defaults to "end".
+ - "error": Raise a `ModelCallLimitExceededError`
Raises:
- ValueError: If both limits are None or if exit_behavior is invalid.
+ ValueError: If both limits are `None` or if `exit_behavior` is invalid.
"""
super().__init__()
@@ -135,7 +153,7 @@ class ModelCallLimitMiddleware(AgentMiddleware):
self.exit_behavior = exit_behavior
@hook_config(can_jump_to=["end"])
- def before_model(self, state: AgentState, runtime: Runtime) -> dict[str, Any] | None: # noqa: ARG002
+ def before_model(self, state: ModelCallLimitState, runtime: Runtime) -> dict[str, Any] | None: # noqa: ARG002
"""Check model call limits before making a model call.
Args:
@@ -175,3 +193,18 @@ class ModelCallLimitMiddleware(AgentMiddleware):
return {"jump_to": "end", "messages": [limit_ai_message]}
return None
+
+ def after_model(self, state: ModelCallLimitState, runtime: Runtime) -> dict[str, Any] | None: # noqa: ARG002
+ """Increment model call counts after a model call.
+
+ Args:
+ state: The current agent state.
+ runtime: The langgraph runtime.
+
+ Returns:
+ State updates with incremented call counts.
+ """
+ return {
+ "thread_model_call_count": state.get("thread_model_call_count", 0) + 1,
+ "run_model_call_count": state.get("run_model_call_count", 0) + 1,
+ }
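The counters added to `ModelCallLimitState` stay private to the middleware, and the new `after_model` hook increments them once per model call. A usage sketch, assuming the middleware is exported from `langchain.agents.middleware` and that `create_agent` is available from `langchain.agents`:

```python
# Usage sketch (import paths and model identifier are assumptions, not from the diff).
from langchain.agents import create_agent
from langchain.agents.middleware import ModelCallLimitMiddleware

limiter = ModelCallLimitMiddleware(
    thread_limit=10,      # at most 10 model calls across the whole thread
    run_limit=3,          # at most 3 model calls within a single run
    exit_behavior="end",  # inject an artificial AI message instead of raising
)

agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[],
    middleware=[limiter],
)
```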
diff --git a/libs/langchain_v1/langchain/agents/middleware/model_fallback.py b/libs/langchain_v1/langchain/agents/middleware/model_fallback.py
index 048eb8edb88..6ac85e57751 100644
--- a/libs/langchain_v1/langchain/agents/middleware/model_fallback.py
+++ b/libs/langchain_v1/langchain/agents/middleware/model_fallback.py
@@ -6,15 +6,16 @@ from typing import TYPE_CHECKING
from langchain.agents.middleware.types import (
AgentMiddleware,
+ ModelCallResult,
ModelRequest,
+ ModelResponse,
)
from langchain.chat_models import init_chat_model
if TYPE_CHECKING:
- from collections.abc import Callable
+ from collections.abc import Awaitable, Callable
from langchain_core.language_models.chat_models import BaseChatModel
- from langchain_core.messages import AIMessage
class ModelFallbackMiddleware(AgentMiddleware):
@@ -30,7 +31,7 @@ class ModelFallbackMiddleware(AgentMiddleware):
fallback = ModelFallbackMiddleware(
"openai:gpt-4o-mini", # Try first on error
- "anthropic:claude-3-5-sonnet-20241022", # Then this
+ "anthropic:claude-sonnet-4-5-20250929", # Then this
)
agent = create_agent(
@@ -38,7 +39,7 @@ class ModelFallbackMiddleware(AgentMiddleware):
middleware=[fallback],
)
- # If primary fails: tries gpt-4o-mini, then claude-3-5-sonnet
+ # If primary fails: tries gpt-4o-mini, then claude-sonnet-4-5-20250929
result = await agent.invoke({"messages": [HumanMessage("Hello")]})
```
"""
@@ -68,14 +69,12 @@ class ModelFallbackMiddleware(AgentMiddleware):
def wrap_model_call(
self,
request: ModelRequest,
- handler: Callable[[ModelRequest], AIMessage],
- ) -> AIMessage:
+ handler: Callable[[ModelRequest], ModelResponse],
+ ) -> ModelCallResult:
"""Try fallback models in sequence on errors.
Args:
request: Initial model request.
- state: Current agent state.
- runtime: LangGraph runtime.
handler: Callback to execute the model.
Returns:
@@ -101,3 +100,38 @@ class ModelFallbackMiddleware(AgentMiddleware):
continue
raise last_exception
+
+ async def awrap_model_call(
+ self,
+ request: ModelRequest,
+ handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
+ ) -> ModelCallResult:
+ """Try fallback models in sequence on errors (async version).
+
+ Args:
+ request: Initial model request.
+ handler: Async callback to execute the model.
+
+ Returns:
+ AIMessage from successful model call.
+
+ Raises:
+ Exception: If all models fail, re-raises last exception.
+ """
+ # Try primary model first
+ last_exception: Exception
+ try:
+ return await handler(request)
+ except Exception as e: # noqa: BLE001
+ last_exception = e
+
+ # Try fallback models
+ for fallback_model in self.models:
+ request.model = fallback_model
+ try:
+ return await handler(request)
+ except Exception as e: # noqa: BLE001
+ last_exception = e
+ continue
+
+ raise last_exception
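With `awrap_model_call` added, the same fallback chain now applies on the async path as well. A hedged async usage sketch; the model identifiers are placeholders and the import paths are assumed:

```python
# Async usage sketch; ainvoke routes through awrap_model_call, which retries each
# fallback model in order when the previous one raises.
import asyncio

from langchain.agents import create_agent
from langchain.agents.middleware import ModelFallbackMiddleware

fallback = ModelFallbackMiddleware(
    "openai:gpt-4o-mini",                    # tried first when the primary model errors
    "anthropic:claude-sonnet-4-5-20250929",  # tried next
)

agent = create_agent(model="openai:gpt-4o", tools=[], middleware=[fallback])


async def main() -> None:
    result = await agent.ainvoke({"messages": [("user", "Hello")]})
    print(result["messages"][-1].content)


asyncio.run(main())
```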
diff --git a/libs/langchain_v1/langchain/agents/middleware/pii.py b/libs/langchain_v1/langchain/agents/middleware/pii.py
index 00e28bfe26b..4ca139174a3 100644
--- a/libs/langchain_v1/langchain/agents/middleware/pii.py
+++ b/libs/langchain_v1/langchain/agents/middleware/pii.py
@@ -2,15 +2,22 @@
from __future__ import annotations
-import hashlib
-import ipaddress
-import re
from typing import TYPE_CHECKING, Any, Literal
-from urllib.parse import urlparse
from langchain_core.messages import AIMessage, AnyMessage, HumanMessage, ToolMessage
-from typing_extensions import TypedDict
+from langchain.agents.middleware._redaction import (
+ PIIDetectionError,
+ PIIMatch,
+ RedactionRule,
+ ResolvedRedactionRule,
+ apply_strategy,
+ detect_credit_card,
+ detect_email,
+ detect_ip,
+ detect_mac_address,
+ detect_url,
+)
from langchain.agents.middleware.types import AgentMiddleware, AgentState, hook_config
if TYPE_CHECKING:
@@ -19,396 +26,6 @@ if TYPE_CHECKING:
from langgraph.runtime import Runtime
-class PIIMatch(TypedDict):
- """Represents a detected PII match in text."""
-
- type: str
- """The type of PII detected (e.g., 'email', 'ssn', 'credit_card')."""
- value: str
- """The actual matched text."""
- start: int
- """Starting position of the match in the text."""
- end: int
- """Ending position of the match in the text."""
-
-
-class PIIDetectionError(Exception):
- """Exception raised when PII is detected and strategy is 'block'."""
-
- def __init__(self, pii_type: str, matches: list[PIIMatch]) -> None:
- """Initialize the exception with PII detection information.
-
- Args:
- pii_type: The type of PII that was detected.
- matches: List of PII matches found.
- """
- self.pii_type = pii_type
- self.matches = matches
- count = len(matches)
- msg = f"Detected {count} instance(s) of {pii_type} in message content"
- super().__init__(msg)
-
-
-# ============================================================================
-# PII Detection Functions
-# ============================================================================
-
-
-def _luhn_checksum(card_number: str) -> bool:
- """Validate credit card number using Luhn algorithm.
-
- Args:
- card_number: Credit card number string (digits only).
-
- Returns:
- True if the number passes Luhn validation, False otherwise.
- """
- digits = [int(d) for d in card_number if d.isdigit()]
-
- if len(digits) < 13 or len(digits) > 19:
- return False
-
- checksum = 0
- for i, digit in enumerate(reversed(digits)):
- d = digit
- if i % 2 == 1:
- d *= 2
- if d > 9:
- d -= 9
- checksum += d
-
- return checksum % 10 == 0
-
-
-def detect_email(content: str) -> list[PIIMatch]:
- """Detect email addresses in content.
-
- Args:
- content: Text content to scan.
-
- Returns:
- List of detected email matches.
- """
- pattern = r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b"
- return [
- PIIMatch(
- type="email",
- value=match.group(),
- start=match.start(),
- end=match.end(),
- )
- for match in re.finditer(pattern, content)
- ]
-
-
-def detect_credit_card(content: str) -> list[PIIMatch]:
- """Detect credit card numbers in content using Luhn validation.
-
- Detects cards in formats like:
- - 1234567890123456
- - 1234 5678 9012 3456
- - 1234-5678-9012-3456
-
- Args:
- content: Text content to scan.
-
- Returns:
- List of detected credit card matches.
- """
- # Match various credit card formats
- pattern = r"\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b"
- matches = []
-
- for match in re.finditer(pattern, content):
- card_number = match.group()
- # Validate with Luhn algorithm
- if _luhn_checksum(card_number):
- matches.append(
- PIIMatch(
- type="credit_card",
- value=card_number,
- start=match.start(),
- end=match.end(),
- )
- )
-
- return matches
-
-
-def detect_ip(content: str) -> list[PIIMatch]:
- """Detect IP addresses in content using stdlib validation.
-
- Validates both IPv4 and IPv6 addresses.
-
- Args:
- content: Text content to scan.
-
- Returns:
- List of detected IP address matches.
- """
- matches = []
-
- # IPv4 pattern
- ipv4_pattern = r"\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b"
-
- for match in re.finditer(ipv4_pattern, content):
- ip_str = match.group()
- try:
- # Validate with stdlib
- ipaddress.ip_address(ip_str)
- matches.append(
- PIIMatch(
- type="ip",
- value=ip_str,
- start=match.start(),
- end=match.end(),
- )
- )
- except ValueError:
- # Not a valid IP address
- pass
-
- return matches
-
-
-def detect_mac_address(content: str) -> list[PIIMatch]:
- """Detect MAC addresses in content.
-
- Detects formats like:
- - 00:1A:2B:3C:4D:5E
- - 00-1A-2B-3C-4D-5E
-
- Args:
- content: Text content to scan.
-
- Returns:
- List of detected MAC address matches.
- """
- pattern = r"\b([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}\b"
- return [
- PIIMatch(
- type="mac_address",
- value=match.group(),
- start=match.start(),
- end=match.end(),
- )
- for match in re.finditer(pattern, content)
- ]
-
-
-def detect_url(content: str) -> list[PIIMatch]:
- """Detect URLs in content using regex and stdlib validation.
-
- Detects:
- - http://example.com
- - https://example.com/path
- - www.example.com
- - example.com/path
-
- Args:
- content: Text content to scan.
-
- Returns:
- List of detected URL matches.
- """
- matches = []
-
- # Pattern 1: URLs with scheme (http:// or https://)
- scheme_pattern = r"https?://[^\s<>\"{}|\\^`\[\]]+"
-
- for match in re.finditer(scheme_pattern, content):
- url = match.group()
- try:
- result = urlparse(url)
- if result.scheme in ("http", "https") and result.netloc:
- matches.append(
- PIIMatch(
- type="url",
- value=url,
- start=match.start(),
- end=match.end(),
- )
- )
- except Exception: # noqa: S110, BLE001
- # Invalid URL, skip
- pass
-
- # Pattern 2: URLs without scheme (www.example.com or example.com/path)
- # More conservative to avoid false positives
- bare_pattern = r"\b(?:www\.)?[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)+(?:/[^\s]*)?" # noqa: E501
-
- for match in re.finditer(bare_pattern, content):
- # Skip if already matched with scheme
- if any(
- m["start"] <= match.start() < m["end"] or m["start"] < match.end() <= m["end"]
- for m in matches
- ):
- continue
-
- url = match.group()
- # Only accept if it has a path or starts with www
- # This reduces false positives like "example.com" in prose
- if "/" in url or url.startswith("www."):
- try:
- # Add scheme for validation (required for urlparse to work correctly)
- test_url = f"http://{url}"
- result = urlparse(test_url)
- if result.netloc and "." in result.netloc:
- matches.append(
- PIIMatch(
- type="url",
- value=url,
- start=match.start(),
- end=match.end(),
- )
- )
- except Exception: # noqa: S110, BLE001
- # Invalid URL, skip
- pass
-
- return matches
-
-
-# Built-in detector registry
-_BUILTIN_DETECTORS: dict[str, Callable[[str], list[PIIMatch]]] = {
- "email": detect_email,
- "credit_card": detect_credit_card,
- "ip": detect_ip,
- "mac_address": detect_mac_address,
- "url": detect_url,
-}
-
-
-# ============================================================================
-# Strategy Implementations
-# ============================================================================
-
-
-def _apply_redact_strategy(content: str, matches: list[PIIMatch]) -> str:
- """Replace PII with [REDACTED_TYPE] placeholders.
-
- Args:
- content: Original content.
- matches: List of PII matches to redact.
-
- Returns:
- Content with PII redacted.
- """
- if not matches:
- return content
-
- # Sort matches by start position in reverse to avoid offset issues
- sorted_matches = sorted(matches, key=lambda m: m["start"], reverse=True)
-
- result = content
- for match in sorted_matches:
- replacement = f"[REDACTED_{match['type'].upper()}]"
- result = result[: match["start"]] + replacement + result[match["end"] :]
-
- return result
-
-
-def _apply_mask_strategy(content: str, matches: list[PIIMatch]) -> str:
- """Partially mask PII, showing only last few characters.
-
- Args:
- content: Original content.
- matches: List of PII matches to mask.
-
- Returns:
- Content with PII masked.
- """
- if not matches:
- return content
-
- # Sort matches by start position in reverse
- sorted_matches = sorted(matches, key=lambda m: m["start"], reverse=True)
-
- result = content
- for match in sorted_matches:
- value = match["value"]
- pii_type = match["type"]
-
- # Different masking strategies by type
- if pii_type == "email":
- # Show only domain: user@****.com
- parts = value.split("@")
- if len(parts) == 2:
- domain_parts = parts[1].split(".")
- if len(domain_parts) >= 2:
- masked = f"{parts[0]}@****.{domain_parts[-1]}"
- else:
- masked = f"{parts[0]}@****"
- else:
- masked = "****"
-
- elif pii_type == "credit_card":
- # Show last 4: ****-****-****-1234
- digits_only = "".join(c for c in value if c.isdigit())
- separator = "-" if "-" in value else " " if " " in value else ""
- if separator:
- masked = f"****{separator}****{separator}****{separator}{digits_only[-4:]}"
- else:
- masked = f"************{digits_only[-4:]}"
-
- elif pii_type == "ip":
- # Show last octet: *.*.*. 123
- parts = value.split(".")
- masked = f"*.*.*.{parts[-1]}" if len(parts) == 4 else "****"
-
- elif pii_type == "mac_address":
- # Show last byte: **:**:**:**:**:5E
- separator = ":" if ":" in value else "-"
- masked = (
- f"**{separator}**{separator}**{separator}**{separator}**{separator}{value[-2:]}"
- )
-
- elif pii_type == "url":
- # Mask everything: [MASKED_URL]
- masked = "[MASKED_URL]"
-
- else:
- # Default: show last 4 chars
- masked = f"****{value[-4:]}" if len(value) > 4 else "****"
-
- result = result[: match["start"]] + masked + result[match["end"] :]
-
- return result
-
-
-def _apply_hash_strategy(content: str, matches: list[PIIMatch]) -> str:
- """Replace PII with deterministic hash including type information.
-
- Args:
- content: Original content.
- matches: List of PII matches to hash.
-
- Returns:
-        Content with PII replaced by hashes in the format `<pii_type_hash:digest>`.
- """
- if not matches:
- return content
-
- # Sort matches by start position in reverse
- sorted_matches = sorted(matches, key=lambda m: m["start"], reverse=True)
-
- result = content
- for match in sorted_matches:
- value = match["value"]
- pii_type = match["type"]
- # Create deterministic hash
- hash_digest = hashlib.sha256(value.encode()).hexdigest()[:8]
- replacement = f"<{pii_type}_hash:{hash_digest}>"
- result = result[: match["start"]] + replacement + result[match["end"] :]
-
- return result
-
-
-# ============================================================================
-# PIIMiddleware
-# ============================================================================
-
-
class PIIMiddleware(AgentMiddleware):
"""Detect and handle Personally Identifiable Information (PII) in agent conversations.
@@ -421,7 +38,7 @@ class PIIMiddleware(AgentMiddleware):
- `credit_card`: Credit card numbers (validated with Luhn algorithm)
- `ip`: IP addresses (validated with stdlib)
- `mac_address`: MAC addresses
- - `url`: URLs (both http/https and bare URLs)
+ - `url`: URLs (both `http`/`https` and bare URLs)
Strategies:
- `block`: Raise an exception when PII is detected
@@ -431,14 +48,12 @@ class PIIMiddleware(AgentMiddleware):
Strategy Selection Guide:
- ======== =================== =======================================
- Strategy Preserves Identity? Best For
- ======== =================== =======================================
- `block` N/A Avoid PII completely
- `redact` No General compliance, log sanitization
- `mask` No Human readability, customer service UIs
- `hash` Yes (pseudonymous) Analytics, debugging
- ======== =================== =======================================
+ | Strategy | Preserves Identity? | Best For |
+ | -------- | ------------------- | --------------------------------------- |
+ | `block` | N/A | Avoid PII completely |
+ | `redact` | No | General compliance, log sanitization |
+ | `mask` | No | Human readability, customer service UIs |
+ | `hash` | Yes (pseudonymous) | Analytics, debugging |
Example:
```python
@@ -512,50 +127,34 @@ class PIIMiddleware(AgentMiddleware):
"""
super().__init__()
- self.pii_type = pii_type
- self.strategy = strategy
self.apply_to_input = apply_to_input
self.apply_to_output = apply_to_output
self.apply_to_tool_results = apply_to_tool_results
- # Resolve detector
- if detector is None:
- # Use built-in detector
- if pii_type not in _BUILTIN_DETECTORS:
- msg = (
- f"Unknown PII type: {pii_type}. "
- f"Must be one of {list(_BUILTIN_DETECTORS.keys())} "
- "or provide a custom detector."
- )
- raise ValueError(msg)
- self.detector = _BUILTIN_DETECTORS[pii_type]
- elif isinstance(detector, str):
- # Custom regex pattern
- pattern = detector
-
- def regex_detector(content: str) -> list[PIIMatch]:
- return [
- PIIMatch(
- type=pii_type,
- value=match.group(),
- start=match.start(),
- end=match.end(),
- )
- for match in re.finditer(pattern, content)
- ]
-
- self.detector = regex_detector
- else:
- # Custom callable detector
- self.detector = detector
+ self._resolved_rule: ResolvedRedactionRule = RedactionRule(
+ pii_type=pii_type,
+ strategy=strategy,
+ detector=detector,
+ ).resolve()
+ self.pii_type = self._resolved_rule.pii_type
+ self.strategy = self._resolved_rule.strategy
+ self.detector = self._resolved_rule.detector
@property
def name(self) -> str:
"""Name of the middleware."""
return f"{self.__class__.__name__}[{self.pii_type}]"
+ def _process_content(self, content: str) -> tuple[str, list[PIIMatch]]:
+ """Apply the configured redaction rule to the provided content."""
+ matches = self.detector(content)
+ if not matches:
+ return content, []
+ sanitized = apply_strategy(content, matches, self.strategy)
+ return sanitized, matches
+
@hook_config(can_jump_to=["end"])
- def before_model( # noqa: PLR0915
+ def before_model(
self,
state: AgentState,
runtime: Runtime, # noqa: ARG002
@@ -596,25 +195,9 @@ class PIIMiddleware(AgentMiddleware):
if last_user_idx is not None and last_user_msg and last_user_msg.content:
# Detect PII in message content
content = str(last_user_msg.content)
- matches = self.detector(content)
+ new_content, matches = self._process_content(content)
if matches:
- # Apply strategy
- if self.strategy == "block":
- raise PIIDetectionError(self.pii_type, matches)
-
- if self.strategy == "redact":
- new_content = _apply_redact_strategy(content, matches)
- elif self.strategy == "mask":
- new_content = _apply_mask_strategy(content, matches)
- elif self.strategy == "hash":
- new_content = _apply_hash_strategy(content, matches)
- else:
- # Should not reach here due to type hints
- msg = f"Unknown strategy: {self.strategy}"
- raise ValueError(msg)
-
- # Create updated message
updated_message: AnyMessage = HumanMessage(
content=new_content,
id=last_user_msg.id,
@@ -643,26 +226,11 @@ class PIIMiddleware(AgentMiddleware):
continue
content = str(tool_msg.content)
- matches = self.detector(content)
+ new_content, matches = self._process_content(content)
if not matches:
continue
- # Apply strategy
- if self.strategy == "block":
- raise PIIDetectionError(self.pii_type, matches)
-
- if self.strategy == "redact":
- new_content = _apply_redact_strategy(content, matches)
- elif self.strategy == "mask":
- new_content = _apply_mask_strategy(content, matches)
- elif self.strategy == "hash":
- new_content = _apply_hash_strategy(content, matches)
- else:
- # Should not reach here due to type hints
- msg = f"Unknown strategy: {self.strategy}"
- raise ValueError(msg)
-
# Create updated tool message
updated_message = ToolMessage(
content=new_content,
@@ -718,26 +286,11 @@ class PIIMiddleware(AgentMiddleware):
# Detect PII in message content
content = str(last_ai_msg.content)
- matches = self.detector(content)
+ new_content, matches = self._process_content(content)
if not matches:
return None
- # Apply strategy
- if self.strategy == "block":
- raise PIIDetectionError(self.pii_type, matches)
-
- if self.strategy == "redact":
- new_content = _apply_redact_strategy(content, matches)
- elif self.strategy == "mask":
- new_content = _apply_mask_strategy(content, matches)
- elif self.strategy == "hash":
- new_content = _apply_hash_strategy(content, matches)
- else:
- # Should not reach here due to type hints
- msg = f"Unknown strategy: {self.strategy}"
- raise ValueError(msg)
-
# Create updated message
updated_message = AIMessage(
content=new_content,
@@ -751,3 +304,14 @@ class PIIMiddleware(AgentMiddleware):
new_messages[last_ai_idx] = updated_message
return {"messages": new_messages}
+
+
+__all__ = [
+ "PIIDetectionError",
+ "PIIMiddleware",
+ "detect_credit_card",
+ "detect_email",
+ "detect_ip",
+ "detect_mac_address",
+ "detect_url",
+]
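The detectors and strategy helpers now live in `langchain.agents.middleware._redaction`, but the middleware's constructor arguments (`pii_type`, `strategy`, `detector`, and the `apply_to_*` flags) are unchanged. A sketch of stacking several instances; the API-key regex and model identifier are illustrative, and the import paths are assumed:

```python
# Usage sketch: one PIIMiddleware instance per PII type, each with its own strategy.
from langchain.agents import create_agent
from langchain.agents.middleware import PIIMiddleware

agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[],
    middleware=[
        PIIMiddleware(pii_type="email", strategy="redact"),      # -> [REDACTED_EMAIL]
        PIIMiddleware(pii_type="credit_card", strategy="mask"),  # -> ****-****-****-1234
        PIIMiddleware(                                            # custom regex detector
            pii_type="api_key",
            detector=r"sk-[A-Za-z0-9]{20,}",
            strategy="block",  # raises PIIDetectionError on a match
        ),
    ],
)
```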
diff --git a/libs/langchain_v1/langchain/agents/middleware/prompt_caching.py b/libs/langchain_v1/langchain/agents/middleware/prompt_caching.py
deleted file mode 100644
index ae640a7459d..00000000000
--- a/libs/langchain_v1/langchain/agents/middleware/prompt_caching.py
+++ /dev/null
@@ -1,86 +0,0 @@
-"""Anthropic prompt caching middleware."""
-
-from collections.abc import Callable
-from typing import Literal
-from warnings import warn
-
-from langchain_core.messages import AIMessage
-
-from langchain.agents.middleware.types import AgentMiddleware, ModelRequest
-
-
-class AnthropicPromptCachingMiddleware(AgentMiddleware):
- """Prompt Caching Middleware.
-
- Optimizes API usage by caching conversation prefixes for Anthropic models.
-
- Learn more about Anthropic prompt caching
- [here](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching).
- """
-
- def __init__(
- self,
- type: Literal["ephemeral"] = "ephemeral",
- ttl: Literal["5m", "1h"] = "5m",
- min_messages_to_cache: int = 0,
- unsupported_model_behavior: Literal["ignore", "warn", "raise"] = "warn",
- ) -> None:
- """Initialize the middleware with cache control settings.
-
- Args:
- type: The type of cache to use, only "ephemeral" is supported.
- ttl: The time to live for the cache, only "5m" and "1h" are supported.
- min_messages_to_cache: The minimum number of messages until the cache is used,
- default is 0.
- unsupported_model_behavior: The behavior to take when an unsupported model is used.
- "ignore" will ignore the unsupported model and continue without caching.
- "warn" will warn the user and continue without caching.
- "raise" will raise an error and stop the agent.
- """
- self.type = type
- self.ttl = ttl
- self.min_messages_to_cache = min_messages_to_cache
- self.unsupported_model_behavior = unsupported_model_behavior
-
- def wrap_model_call(
- self,
- request: ModelRequest,
- handler: Callable[[ModelRequest], AIMessage],
- ) -> AIMessage:
- """Modify the model request to add cache control blocks."""
- try:
- from langchain_anthropic import ChatAnthropic
- except ImportError:
- ChatAnthropic = None # noqa: N806
-
- msg: str | None = None
-
- if ChatAnthropic is None:
- msg = (
- "AnthropicPromptCachingMiddleware caching middleware only supports "
- "Anthropic models. "
- "Please install langchain-anthropic."
- )
- elif not isinstance(request.model, ChatAnthropic):
- msg = (
- "AnthropicPromptCachingMiddleware caching middleware only supports "
- f"Anthropic models, not instances of {type(request.model)}"
- )
-
- if msg is not None:
- if self.unsupported_model_behavior == "raise":
- raise ValueError(msg)
- if self.unsupported_model_behavior == "warn":
- warn(msg, stacklevel=3)
- else:
- return handler(request)
-
- messages_count = (
- len(request.messages) + 1 if request.system_prompt else len(request.messages)
- )
- if messages_count < self.min_messages_to_cache:
- return handler(request)
-
- request.model_settings["cache_control"] = {"type": self.type, "ttl": self.ttl}
-
- return handler(request)
diff --git a/libs/langchain_v1/langchain/agents/middleware/shell_tool.py b/libs/langchain_v1/langchain/agents/middleware/shell_tool.py
new file mode 100644
index 00000000000..563ef2a2c39
--- /dev/null
+++ b/libs/langchain_v1/langchain/agents/middleware/shell_tool.py
@@ -0,0 +1,718 @@
+"""Middleware that exposes a persistent shell tool to agents."""
+
+from __future__ import annotations
+
+import contextlib
+import logging
+import os
+import queue
+import signal
+import subprocess
+import tempfile
+import threading
+import time
+import typing
+import uuid
+import weakref
+from dataclasses import dataclass, field
+from pathlib import Path
+from typing import TYPE_CHECKING, Annotated, Any, Literal
+
+from langchain_core.messages import ToolMessage
+from langchain_core.tools.base import BaseTool, ToolException
+from langgraph.channels.untracked_value import UntrackedValue
+from pydantic import BaseModel, model_validator
+from typing_extensions import NotRequired
+
+from langchain.agents.middleware._execution import (
+ SHELL_TEMP_PREFIX,
+ BaseExecutionPolicy,
+ CodexSandboxExecutionPolicy,
+ DockerExecutionPolicy,
+ HostExecutionPolicy,
+)
+from langchain.agents.middleware._redaction import (
+ PIIDetectionError,
+ PIIMatch,
+ RedactionRule,
+ ResolvedRedactionRule,
+)
+from langchain.agents.middleware.types import AgentMiddleware, AgentState, PrivateStateAttr
+
+if TYPE_CHECKING:
+ from collections.abc import Mapping, Sequence
+
+ from langgraph.runtime import Runtime
+ from langgraph.types import Command
+
+ from langchain.agents.middleware.types import ToolCallRequest
+
+LOGGER = logging.getLogger(__name__)
+_DONE_MARKER_PREFIX = "__LC_SHELL_DONE__"
+
+DEFAULT_TOOL_DESCRIPTION = (
+ "Execute a shell command inside a persistent session. Before running a command, "
+ "confirm the working directory is correct (e.g., inspect with `ls` or `pwd`) and ensure "
+ "any parent directories exist. Prefer absolute paths and quote paths containing spaces, "
+ 'such as `cd "/path/with spaces"`. Chain multiple commands with `&&` or `;` instead of '
+ "embedding newlines. Avoid unnecessary `cd` usage unless explicitly required so the "
+ "session remains stable. Outputs may be truncated when they become very large, and long "
+ "running commands will be terminated once their configured timeout elapses."
+)
+
+
+def _cleanup_resources(
+ session: ShellSession, tempdir: tempfile.TemporaryDirectory[str] | None, timeout: float
+) -> None:
+ with contextlib.suppress(Exception):
+ session.stop(timeout)
+ if tempdir is not None:
+ with contextlib.suppress(Exception):
+ tempdir.cleanup()
+
+
+@dataclass
+class _SessionResources:
+ """Container for per-run shell resources."""
+
+ session: ShellSession
+ tempdir: tempfile.TemporaryDirectory[str] | None
+ policy: BaseExecutionPolicy
+ _finalizer: weakref.finalize = field(init=False, repr=False)
+
+ def __post_init__(self) -> None:
+ self._finalizer = weakref.finalize(
+ self,
+ _cleanup_resources,
+ self.session,
+ self.tempdir,
+ self.policy.termination_timeout,
+ )
+
+
+class ShellToolState(AgentState):
+ """Agent state extension for tracking shell session resources."""
+
+ shell_session_resources: NotRequired[
+ Annotated[_SessionResources | None, UntrackedValue, PrivateStateAttr]
+ ]
+
+
+@dataclass(frozen=True)
+class CommandExecutionResult:
+ """Structured result from command execution."""
+
+ output: str
+ exit_code: int | None
+ timed_out: bool
+ truncated_by_lines: bool
+ truncated_by_bytes: bool
+ total_lines: int
+ total_bytes: int
+
+
+class ShellSession:
+ """Persistent shell session that supports sequential command execution."""
+
+ def __init__(
+ self,
+ workspace: Path,
+ policy: BaseExecutionPolicy,
+ command: tuple[str, ...],
+ environment: Mapping[str, str],
+ ) -> None:
+ self._workspace = workspace
+ self._policy = policy
+ self._command = command
+ self._environment = dict(environment)
+ self._process: subprocess.Popen[str] | None = None
+ self._stdin: Any = None
+ self._queue: queue.Queue[tuple[str, str | None]] = queue.Queue()
+ self._lock = threading.Lock()
+ self._stdout_thread: threading.Thread | None = None
+ self._stderr_thread: threading.Thread | None = None
+ self._terminated = False
+
+ def start(self) -> None:
+ """Start the shell subprocess and reader threads."""
+ if self._process and self._process.poll() is None:
+ return
+
+ self._process = self._policy.spawn(
+ workspace=self._workspace,
+ env=self._environment,
+ command=self._command,
+ )
+ if (
+ self._process.stdin is None
+ or self._process.stdout is None
+ or self._process.stderr is None
+ ):
+ msg = "Failed to initialize shell session pipes."
+ raise RuntimeError(msg)
+
+ self._stdin = self._process.stdin
+ self._terminated = False
+ self._queue = queue.Queue()
+
+ self._stdout_thread = threading.Thread(
+ target=self._enqueue_stream,
+ args=(self._process.stdout, "stdout"),
+ daemon=True,
+ )
+ self._stderr_thread = threading.Thread(
+ target=self._enqueue_stream,
+ args=(self._process.stderr, "stderr"),
+ daemon=True,
+ )
+ self._stdout_thread.start()
+ self._stderr_thread.start()
+
+ def restart(self) -> None:
+ """Restart the shell process."""
+ self.stop(self._policy.termination_timeout)
+ self.start()
+
+ def stop(self, timeout: float) -> None:
+ """Stop the shell subprocess."""
+ if not self._process:
+ return
+
+ if self._process.poll() is None and not self._terminated:
+ try:
+ self._stdin.write("exit\n")
+ self._stdin.flush()
+ except (BrokenPipeError, OSError):
+ LOGGER.debug(
+ "Failed to write exit command; terminating shell session.",
+ exc_info=True,
+ )
+
+ try:
+ if self._process.wait(timeout=timeout) is None:
+ self._kill_process()
+ except subprocess.TimeoutExpired:
+ self._kill_process()
+ finally:
+ self._terminated = True
+ with contextlib.suppress(Exception):
+ self._stdin.close()
+ self._process = None
+
+ def execute(self, command: str, *, timeout: float) -> CommandExecutionResult:
+ """Execute a command in the persistent shell."""
+ if not self._process or self._process.poll() is not None:
+ msg = "Shell session is not running."
+ raise RuntimeError(msg)
+
+ marker = f"{_DONE_MARKER_PREFIX}{uuid.uuid4().hex}"
+ deadline = time.monotonic() + timeout
+
+ with self._lock:
+ self._drain_queue()
+ payload = command if command.endswith("\n") else f"{command}\n"
+ self._stdin.write(payload)
+ self._stdin.write(f"printf '{marker} %s\\n' $?\n")
+ self._stdin.flush()
+
+ return self._collect_output(marker, deadline, timeout)
+
+ def _collect_output(
+ self,
+ marker: str,
+ deadline: float,
+ timeout: float,
+ ) -> CommandExecutionResult:
+ collected: list[str] = []
+ total_lines = 0
+ total_bytes = 0
+ truncated_by_lines = False
+ truncated_by_bytes = False
+ exit_code: int | None = None
+ timed_out = False
+
+ while True:
+ remaining = deadline - time.monotonic()
+ if remaining <= 0:
+ timed_out = True
+ break
+ try:
+ source, data = self._queue.get(timeout=remaining)
+ except queue.Empty:
+ timed_out = True
+ break
+
+ if data is None:
+ continue
+
+ if source == "stdout" and data.startswith(marker):
+ _, _, status = data.partition(" ")
+ exit_code = self._safe_int(status.strip())
+ break
+
+ total_lines += 1
+ encoded = data.encode("utf-8", "replace")
+ total_bytes += len(encoded)
+
+ if total_lines > self._policy.max_output_lines:
+ truncated_by_lines = True
+ continue
+
+ if (
+ self._policy.max_output_bytes is not None
+ and total_bytes > self._policy.max_output_bytes
+ ):
+ truncated_by_bytes = True
+ continue
+
+ if source == "stderr":
+ stripped = data.rstrip("\n")
+ collected.append(f"[stderr] {stripped}")
+ if data.endswith("\n"):
+ collected.append("\n")
+ else:
+ collected.append(data)
+
+ if timed_out:
+ LOGGER.warning(
+ "Command timed out after %.2f seconds; restarting shell session.",
+ timeout,
+ )
+ self.restart()
+ return CommandExecutionResult(
+ output="",
+ exit_code=None,
+ timed_out=True,
+ truncated_by_lines=truncated_by_lines,
+ truncated_by_bytes=truncated_by_bytes,
+ total_lines=total_lines,
+ total_bytes=total_bytes,
+ )
+
+ output = "".join(collected)
+ return CommandExecutionResult(
+ output=output,
+ exit_code=exit_code,
+ timed_out=False,
+ truncated_by_lines=truncated_by_lines,
+ truncated_by_bytes=truncated_by_bytes,
+ total_lines=total_lines,
+ total_bytes=total_bytes,
+ )
+
+ def _kill_process(self) -> None:
+ if not self._process:
+ return
+
+ if hasattr(os, "killpg"):
+ with contextlib.suppress(ProcessLookupError):
+ os.killpg(os.getpgid(self._process.pid), signal.SIGKILL)
+ else: # pragma: no cover
+ with contextlib.suppress(ProcessLookupError):
+ self._process.kill()
+
+ def _enqueue_stream(self, stream: Any, label: str) -> None:
+ for line in iter(stream.readline, ""):
+ self._queue.put((label, line))
+ self._queue.put((label, None))
+
+ def _drain_queue(self) -> None:
+ while True:
+ try:
+ self._queue.get_nowait()
+ except queue.Empty:
+ break
+
+ @staticmethod
+ def _safe_int(value: str) -> int | None:
+ with contextlib.suppress(ValueError):
+ return int(value)
+ return None
+
+
+class _ShellToolInput(BaseModel):
+ """Input schema for the persistent shell tool."""
+
+ command: str | None = None
+ restart: bool | None = None
+
+ @model_validator(mode="after")
+ def validate_payload(self) -> _ShellToolInput:
+ if self.command is None and not self.restart:
+ msg = "Shell tool requires either 'command' or 'restart'."
+ raise ValueError(msg)
+ if self.command is not None and self.restart:
+ msg = "Specify only one of 'command' or 'restart'."
+ raise ValueError(msg)
+ return self
+
+
+class _PersistentShellTool(BaseTool):
+ """Tool wrapper that relies on middleware interception for execution."""
+
+ name: str = "shell"
+ description: str = DEFAULT_TOOL_DESCRIPTION
+ args_schema: type[BaseModel] = _ShellToolInput
+
+ def __init__(self, middleware: ShellToolMiddleware, description: str | None = None) -> None:
+ super().__init__()
+ self._middleware = middleware
+ if description is not None:
+ self.description = description
+
+ def _run(self, **_: Any) -> Any: # pragma: no cover - executed via middleware wrapper
+ msg = "Persistent shell tool execution should be intercepted via middleware wrappers."
+ raise RuntimeError(msg)
+
+
+class ShellToolMiddleware(AgentMiddleware[ShellToolState, Any]):
+ """Middleware that registers a persistent shell tool for agents.
+
+ The middleware exposes a single long-lived shell session. Use the execution policy to
+ match your deployment's security posture:
+
+ * ``HostExecutionPolicy`` - full host access; best for trusted environments where the
+ agent already runs inside a container or VM that provides isolation.
+ * ``CodexSandboxExecutionPolicy`` - reuses the Codex CLI sandbox for additional
+ syscall/filesystem restrictions when the CLI is available.
+ * ``DockerExecutionPolicy`` - launches a separate Docker container for each agent run,
+ providing harder isolation, optional read-only root filesystems, and user remapping.
+
+ When no policy is provided the middleware defaults to ``HostExecutionPolicy``.
+ """
+
+ state_schema = ShellToolState
+
+ def __init__(
+ self,
+ workspace_root: str | Path | None = None,
+ *,
+ startup_commands: tuple[str, ...] | list[str] | str | None = None,
+ shutdown_commands: tuple[str, ...] | list[str] | str | None = None,
+ execution_policy: BaseExecutionPolicy | None = None,
+ redaction_rules: tuple[RedactionRule, ...] | list[RedactionRule] | None = None,
+ tool_description: str | None = None,
+ shell_command: Sequence[str] | str | None = None,
+ env: Mapping[str, Any] | None = None,
+ ) -> None:
+ """Initialize the middleware.
+
+ Args:
+ workspace_root: Base directory for the shell session. If omitted, a temporary
+ directory is created when the agent starts and removed when it ends.
+ startup_commands: Optional commands executed sequentially after the session starts.
+ shutdown_commands: Optional commands executed before the session shuts down.
+ execution_policy: Execution policy controlling timeouts, output limits, and resource
+ configuration. Defaults to :class:`HostExecutionPolicy` for native execution.
+ redaction_rules: Optional redaction rules to sanitize command output before
+ returning it to the model.
+ tool_description: Optional override for the registered shell tool description.
+ shell_command: Optional shell executable (string) or argument sequence used to
+ launch the persistent session. Defaults to an implementation-defined bash command.
+ env: Optional environment variables to supply to the shell session. Values are
+ coerced to strings before command execution. If omitted, the session inherits the
+ parent process environment.
+ """
+ super().__init__()
+ self._workspace_root = Path(workspace_root) if workspace_root else None
+ self._shell_command = self._normalize_shell_command(shell_command)
+ self._environment = self._normalize_env(env)
+ if execution_policy is not None:
+ self._execution_policy = execution_policy
+ else:
+ self._execution_policy = HostExecutionPolicy()
+ rules = redaction_rules or ()
+ self._redaction_rules: tuple[ResolvedRedactionRule, ...] = tuple(
+ rule.resolve() for rule in rules
+ )
+ self._startup_commands = self._normalize_commands(startup_commands)
+ self._shutdown_commands = self._normalize_commands(shutdown_commands)
+
+ description = tool_description or DEFAULT_TOOL_DESCRIPTION
+ self._tool = _PersistentShellTool(self, description=description)
+ self.tools = [self._tool]
+
+ @staticmethod
+ def _normalize_commands(
+ commands: tuple[str, ...] | list[str] | str | None,
+ ) -> tuple[str, ...]:
+ if commands is None:
+ return ()
+ if isinstance(commands, str):
+ return (commands,)
+ return tuple(commands)
+
+ @staticmethod
+ def _normalize_shell_command(
+ shell_command: Sequence[str] | str | None,
+ ) -> tuple[str, ...]:
+ if shell_command is None:
+ return ("/bin/bash",)
+ normalized = (shell_command,) if isinstance(shell_command, str) else tuple(shell_command)
+ if not normalized:
+ msg = "Shell command must contain at least one argument."
+ raise ValueError(msg)
+ return normalized
+
+ @staticmethod
+ def _normalize_env(env: Mapping[str, Any] | None) -> dict[str, str] | None:
+ if env is None:
+ return None
+ normalized: dict[str, str] = {}
+ for key, value in env.items():
+ if not isinstance(key, str):
+ msg = "Environment variable names must be strings."
+ raise TypeError(msg)
+ normalized[key] = str(value)
+ return normalized
+
+ def before_agent(self, state: ShellToolState, runtime: Runtime) -> dict[str, Any] | None: # noqa: ARG002
+ """Start the shell session and run startup commands."""
+ resources = self._create_resources()
+ return {"shell_session_resources": resources}
+
+ async def abefore_agent(self, state: ShellToolState, runtime: Runtime) -> dict[str, Any] | None:
+ """Async counterpart to `before_agent`."""
+ return self.before_agent(state, runtime)
+
+ def after_agent(self, state: ShellToolState, runtime: Runtime) -> None: # noqa: ARG002
+ """Run shutdown commands and release resources when an agent completes."""
+ resources = self._ensure_resources(state)
+ try:
+ self._run_shutdown_commands(resources.session)
+ finally:
+ resources._finalizer()
+
+ async def aafter_agent(self, state: ShellToolState, runtime: Runtime) -> None:
+ """Async counterpart to `after_agent`."""
+ return self.after_agent(state, runtime)
+
+ def _ensure_resources(self, state: ShellToolState) -> _SessionResources:
+ resources = state.get("shell_session_resources")
+ if resources is not None and not isinstance(resources, _SessionResources):
+ resources = None
+ if resources is None:
+ msg = (
+ "Shell session resources are unavailable. Ensure `before_agent` ran successfully "
+ "before invoking the shell tool."
+ )
+ raise ToolException(msg)
+ return resources
+
+ def _create_resources(self) -> _SessionResources:
+ workspace = self._workspace_root
+ tempdir: tempfile.TemporaryDirectory[str] | None = None
+ if workspace is None:
+ tempdir = tempfile.TemporaryDirectory(prefix=SHELL_TEMP_PREFIX)
+ workspace_path = Path(tempdir.name)
+ else:
+ workspace_path = workspace
+ workspace_path.mkdir(parents=True, exist_ok=True)
+
+ session = ShellSession(
+ workspace_path,
+ self._execution_policy,
+ self._shell_command,
+ self._environment or {},
+ )
+ try:
+ session.start()
+ LOGGER.info("Started shell session in %s", workspace_path)
+ self._run_startup_commands(session)
+ except BaseException:
+ LOGGER.exception("Starting shell session failed; cleaning up resources.")
+ session.stop(self._execution_policy.termination_timeout)
+ if tempdir is not None:
+ tempdir.cleanup()
+ raise
+
+ return _SessionResources(session=session, tempdir=tempdir, policy=self._execution_policy)
+
+ def _run_startup_commands(self, session: ShellSession) -> None:
+ if not self._startup_commands:
+ return
+ for command in self._startup_commands:
+ result = session.execute(command, timeout=self._execution_policy.startup_timeout)
+ if result.timed_out or (result.exit_code not in (0, None)):
+ msg = f"Startup command '{command}' failed with exit code {result.exit_code}"
+ raise RuntimeError(msg)
+
+ def _run_shutdown_commands(self, session: ShellSession) -> None:
+ if not self._shutdown_commands:
+ return
+ for command in self._shutdown_commands:
+ try:
+ result = session.execute(command, timeout=self._execution_policy.command_timeout)
+ if result.timed_out:
+ LOGGER.warning("Shutdown command '%s' timed out.", command)
+ elif result.exit_code not in (0, None):
+ LOGGER.warning(
+ "Shutdown command '%s' exited with %s.", command, result.exit_code
+ )
+ except (RuntimeError, ToolException, OSError) as exc:
+ LOGGER.warning(
+ "Failed to run shutdown command '%s': %s", command, exc, exc_info=True
+ )
+
+ def _apply_redactions(self, content: str) -> tuple[str, dict[str, list[PIIMatch]]]:
+ """Apply configured redaction rules to command output."""
+ matches_by_type: dict[str, list[PIIMatch]] = {}
+ updated = content
+ for rule in self._redaction_rules:
+ updated, matches = rule.apply(updated)
+ if matches:
+ matches_by_type.setdefault(rule.pii_type, []).extend(matches)
+ return updated, matches_by_type
+
+ def _run_shell_tool(
+ self,
+ resources: _SessionResources,
+ payload: dict[str, Any],
+ *,
+ tool_call_id: str | None,
+ ) -> Any:
+ session = resources.session
+
+ if payload.get("restart"):
+ LOGGER.info("Restarting shell session on request.")
+ try:
+ session.restart()
+ self._run_startup_commands(session)
+ except BaseException as err:
+ LOGGER.exception("Restarting shell session failed; session remains unavailable.")
+ msg = "Failed to restart shell session."
+ raise ToolException(msg) from err
+ message = "Shell session restarted."
+ return self._format_tool_message(message, tool_call_id, status="success")
+
+ command = payload.get("command")
+ if not command or not isinstance(command, str):
+ msg = "Shell tool expects a 'command' string when restart is not requested."
+ raise ToolException(msg)
+
+ LOGGER.info("Executing shell command: %s", command)
+ result = session.execute(command, timeout=self._execution_policy.command_timeout)
+
+ if result.timed_out:
+ timeout_seconds = self._execution_policy.command_timeout
+ message = f"Error: Command timed out after {timeout_seconds:.1f} seconds."
+ return self._format_tool_message(
+ message,
+ tool_call_id,
+ status="error",
+ artifact={
+ "timed_out": True,
+ "exit_code": None,
+ },
+ )
+
+ try:
+ sanitized_output, matches = self._apply_redactions(result.output)
+ except PIIDetectionError as error:
+ LOGGER.warning("Blocking command output due to detected %s.", error.pii_type)
+ message = f"Output blocked: detected {error.pii_type}."
+ return self._format_tool_message(
+ message,
+ tool_call_id,
+ status="error",
+ artifact={
+ "timed_out": False,
+ "exit_code": result.exit_code,
+ "matches": {error.pii_type: error.matches},
+ },
+ )
+
+ sanitized_output = sanitized_output or ""
+ if result.truncated_by_lines:
+ sanitized_output = (
+ f"{sanitized_output.rstrip()}\n\n"
+ f"... Output truncated at {self._execution_policy.max_output_lines} lines "
+ f"(observed {result.total_lines})."
+ )
+ if result.truncated_by_bytes and self._execution_policy.max_output_bytes is not None:
+ sanitized_output = (
+ f"{sanitized_output.rstrip()}\n\n"
+ f"... Output truncated at {self._execution_policy.max_output_bytes} bytes "
+ f"(observed {result.total_bytes})."
+ )
+
+ if result.exit_code not in (0, None):
+ sanitized_output = f"{sanitized_output.rstrip()}\n\nExit code: {result.exit_code}"
+ final_status: Literal["success", "error"] = "error"
+ else:
+ final_status = "success"
+
+ artifact = {
+ "timed_out": False,
+ "exit_code": result.exit_code,
+ "truncated_by_lines": result.truncated_by_lines,
+ "truncated_by_bytes": result.truncated_by_bytes,
+ "total_lines": result.total_lines,
+ "total_bytes": result.total_bytes,
+ "redaction_matches": matches,
+ }
+
+ return self._format_tool_message(
+ sanitized_output,
+ tool_call_id,
+ status=final_status,
+ artifact=artifact,
+ )
+
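
To make the truncation behavior above concrete: with, say, `max_output_lines=100` and a command that produced 250 lines, the content returned to the model would end with a marker like the following (values are hypothetical):

```python
content = (
    "<first 100 lines of output>\n\n"
    "... Output truncated at 100 lines (observed 250)."
)
```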
+ def wrap_tool_call(
+ self,
+ request: ToolCallRequest,
+ handler: typing.Callable[[ToolCallRequest], ToolMessage | Command],
+ ) -> ToolMessage | Command:
+ """Intercept local shell tool calls and execute them via the managed session."""
+ if isinstance(request.tool, _PersistentShellTool):
+ resources = self._ensure_resources(request.state)
+ return self._run_shell_tool(
+ resources,
+ request.tool_call["args"],
+ tool_call_id=request.tool_call.get("id"),
+ )
+ return handler(request)
+
+ async def awrap_tool_call(
+ self,
+ request: ToolCallRequest,
+ handler: typing.Callable[[ToolCallRequest], typing.Awaitable[ToolMessage | Command]],
+ ) -> ToolMessage | Command:
+ """Async interception mirroring the synchronous tool handler."""
+ if isinstance(request.tool, _PersistentShellTool):
+ resources = self._ensure_resources(request.state)
+ return self._run_shell_tool(
+ resources,
+ request.tool_call["args"],
+ tool_call_id=request.tool_call.get("id"),
+ )
+ return await handler(request)
+
+ def _format_tool_message(
+ self,
+ content: str,
+ tool_call_id: str | None,
+ *,
+ status: Literal["success", "error"],
+ artifact: dict[str, Any] | None = None,
+ ) -> ToolMessage | str:
+ artifact = artifact or {}
+ if tool_call_id is None:
+ return content
+ return ToolMessage(
+ content=content,
+ tool_call_id=tool_call_id,
+ name=self._tool.name,
+ status=status,
+ artifact=artifact,
+ )
+
+
+__all__ = [
+ "CodexSandboxExecutionPolicy",
+ "DockerExecutionPolicy",
+ "HostExecutionPolicy",
+ "RedactionRule",
+ "ShellToolMiddleware",
+]
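
A rough wiring sketch for this middleware follows. The constructor arguments and import path are assumptions (they are not visible in this hunk); treat it as a shape, not a reference.

```python
# Hypothetical usage; exact import path and constructor kwargs may differ.
from langchain.agents import create_agent
from langchain.agents.middleware.shell_tool import ShellToolMiddleware  # assumed path

agent = create_agent(
    "openai:gpt-4o",
    middleware=[ShellToolMiddleware()],  # an execution policy may be required in practice
)
result = agent.invoke({"messages": [{"role": "user", "content": "List the files in the repo"}]})
```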
diff --git a/libs/langchain_v1/langchain/agents/middleware/summarization.py b/libs/langchain_v1/langchain/agents/middleware/summarization.py
index de59095be1c..6ba6221206c 100644
--- a/libs/langchain_v1/langchain/agents/middleware/summarization.py
+++ b/libs/langchain_v1/langchain/agents/middleware/summarization.py
@@ -60,7 +60,7 @@ _SEARCH_RANGE_FOR_TOOL_PAIRS = 5
class SummarizationMiddleware(AgentMiddleware):
- """Middleware that summarizes conversation history when token limits are approached.
+ """Summarizes conversation history when token limits are approached.
This middleware monitors message token counts and automatically summarizes older
messages when a threshold is reached, preserving recent messages and maintaining
diff --git a/libs/langchain_v1/langchain/agents/middleware/planning.py b/libs/langchain_v1/langchain/agents/middleware/todo.py
similarity index 76%
rename from libs/langchain_v1/langchain/agents/middleware/planning.py
rename to libs/langchain_v1/langchain/agents/middleware/todo.py
index 3278ed8f125..c2b1b75d05c 100644
--- a/libs/langchain_v1/langchain/agents/middleware/planning.py
+++ b/libs/langchain_v1/langchain/agents/middleware/todo.py
@@ -6,14 +6,21 @@ from __future__ import annotations
from typing import TYPE_CHECKING, Annotated, Literal
if TYPE_CHECKING:
- from collections.abc import Callable
+ from collections.abc import Awaitable, Callable
-from langchain_core.messages import AIMessage, ToolMessage
+from langchain_core.messages import SystemMessage, ToolMessage
from langchain_core.tools import tool
from langgraph.types import Command
from typing_extensions import NotRequired, TypedDict
-from langchain.agents.middleware.types import AgentMiddleware, AgentState, ModelRequest
+from langchain.agents.middleware.types import (
+ AgentMiddleware,
+ AgentState,
+ ModelCallResult,
+ ModelRequest,
+ ModelResponse,
+ OmitFromInput,
+)
from langchain.tools import InjectedToolCallId
@@ -30,7 +37,7 @@ class Todo(TypedDict):
class PlanningState(AgentState):
"""State schema for the todo middleware."""
- todos: NotRequired[list[Todo]]
+ todos: Annotated[NotRequired[list[Todo]], OmitFromInput]
"""List of todo items for tracking task progress."""
@@ -120,7 +127,7 @@ def write_todos(todos: list[Todo], tool_call_id: Annotated[str, InjectedToolCall
)
-class PlanningMiddleware(AgentMiddleware):
+class TodoListMiddleware(AgentMiddleware):
"""Middleware that provides todo list management capabilities to agents.
This middleware adds a `write_todos` tool that allows agents to create and manage
@@ -133,10 +140,10 @@ class PlanningMiddleware(AgentMiddleware):
Example:
```python
- from langchain.agents.middleware.planning import PlanningMiddleware
+ from langchain.agents.middleware.todo import TodoListMiddleware
from langchain.agents import create_agent
- agent = create_agent("openai:gpt-4o", middleware=[PlanningMiddleware()])
+ agent = create_agent("openai:gpt-4o", middleware=[TodoListMiddleware()])
# Agent now has access to write_todos tool and todo state tracking
result = await agent.invoke({"messages": [HumanMessage("Help me refactor my codebase")]})
@@ -159,7 +166,7 @@ class PlanningMiddleware(AgentMiddleware):
system_prompt: str = WRITE_TODOS_SYSTEM_PROMPT,
tool_description: str = WRITE_TODOS_TOOL_DESCRIPTION,
) -> None:
- """Initialize the PlanningMiddleware with optional custom prompts.
+ """Initialize the TodoListMiddleware with optional custom prompts.
Args:
system_prompt: Custom system prompt to guide the agent on using the todo tool.
@@ -189,12 +196,47 @@ class PlanningMiddleware(AgentMiddleware):
def wrap_model_call(
self,
request: ModelRequest,
- handler: Callable[[ModelRequest], AIMessage],
- ) -> AIMessage:
+ handler: Callable[[ModelRequest], ModelResponse],
+ ) -> ModelCallResult:
"""Update the system prompt to include the todo system prompt."""
- request.system_prompt = (
- request.system_prompt + "\n\n" + self.system_prompt
- if request.system_prompt
- else self.system_prompt
- )
+ if request.system_prompt is None:
+ request.system_prompt = self.system_prompt
+ elif isinstance(request.system_prompt, str):
+ request.system_prompt = request.system_prompt + "\n\n" + self.system_prompt
+ elif isinstance(request.system_prompt, SystemMessage) and isinstance(
+ request.system_prompt.content, str
+ ):
+ request.system_prompt = SystemMessage(
+ content=request.system_prompt.content + self.system_prompt
+ )
+ elif isinstance(request.system_prompt, SystemMessage) and isinstance(
+ request.system_prompt.content, list
+ ):
+ request.system_prompt = SystemMessage(
+ content=[*request.system_prompt.content, self.system_prompt]
+ )
return handler(request)
+
+ async def awrap_model_call(
+ self,
+ request: ModelRequest,
+ handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
+ ) -> ModelCallResult:
+ """Update the system prompt to include the todo system prompt (async version)."""
+ if request.system_prompt is None:
+ request.system_prompt = self.system_prompt
+ elif isinstance(request.system_prompt, str):
+ request.system_prompt = request.system_prompt + "\n\n" + self.system_prompt
+ elif isinstance(request.system_prompt, SystemMessage) and isinstance(
+ request.system_prompt.content, str
+ ):
+ request.system_prompt = SystemMessage(
+ content=request.system_prompt.content + self.system_prompt
+ )
+ elif isinstance(request.system_prompt, SystemMessage) and isinstance(
+ request.system_prompt.content, list
+ ):
+ request.system_prompt = SystemMessage(
+ content=[*request.system_prompt.content, self.system_prompt]
+ )
+ return await handler(request)
diff --git a/libs/langchain_v1/langchain/agents/middleware/tool_call_limit.py b/libs/langchain_v1/langchain/agents/middleware/tool_call_limit.py
index 52c5d488bee..686ca06ab81 100644
--- a/libs/langchain_v1/langchain/agents/middleware/tool_call_limit.py
+++ b/libs/langchain_v1/langchain/agents/middleware/tool_call_limit.py
@@ -2,71 +2,78 @@
from __future__ import annotations
-from typing import TYPE_CHECKING, Any, Literal
+from typing import TYPE_CHECKING, Annotated, Any, Generic, Literal
-from langchain_core.messages import AIMessage, AnyMessage, HumanMessage
+from langchain_core.messages import AIMessage, ToolCall, ToolMessage
+from langgraph.channels.untracked_value import UntrackedValue
+from langgraph.typing import ContextT
+from typing_extensions import NotRequired
-from langchain.agents.middleware.types import AgentMiddleware, AgentState, hook_config
+from langchain.agents.middleware.types import (
+ AgentMiddleware,
+ AgentState,
+ PrivateStateAttr,
+ ResponseT,
+ hook_config,
+)
if TYPE_CHECKING:
from langgraph.runtime import Runtime
+ExitBehavior = Literal["continue", "error", "end"]
+"""How to handle execution when tool call limits are exceeded.
-def _count_tool_calls_in_messages(messages: list[AnyMessage], tool_name: str | None = None) -> int:
- """Count tool calls in a list of messages.
+- `"continue"`: Block exceeded tools with error messages, let other tools continue (default)
+- `"error"`: Raise a `ToolCallLimitExceededError` exception
+- `"end"`: Stop execution immediately, injecting a ToolMessage and an AI message
+ for the single tool call that exceeded the limit. Raises `NotImplementedError`
+ if there are other pending tool calls (due to parallel tool calling).
+"""
+
+
+class ToolCallLimitState(AgentState[ResponseT], Generic[ResponseT]):
+ """State schema for ToolCallLimitMiddleware.
+
+ Extends AgentState with tool call tracking fields.
+
+ The count fields are dictionaries mapping tool names to execution counts.
+ This allows multiple middleware instances to track different tools independently.
+ The special key "__all__" is used for tracking all tool calls globally.
+ """
+
+ thread_tool_call_count: NotRequired[Annotated[dict[str, int], PrivateStateAttr]]
+ run_tool_call_count: NotRequired[Annotated[dict[str, int], UntrackedValue, PrivateStateAttr]]
+
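
To make the shape of these fields concrete, a snapshot might look like the following when one global limiter and one `"search"`-scoped limiter run in the same agent (values are illustrative):

```python
# Counts keyed by tool name; "__all__" tracks every tool call globally.
state_snapshot = {
    "thread_tool_call_count": {"__all__": 7, "search": 3},
    "run_tool_call_count": {"__all__": 2, "search": 1},
}
```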
+
+def _build_tool_message_content(tool_name: str | None) -> str:
+ """Build the error message content for ToolMessage when limit is exceeded.
+
+ This message is sent to the model, so it should not reference thread/run concepts
+ that the model has no notion of.
Args:
- messages: List of messages to count tool calls in.
- tool_name: If specified, only count calls to this specific tool.
- If `None`, count all tool calls.
+ tool_name: Tool name being limited (if specific tool), or None for all tools.
Returns:
- The total number of tool calls (optionally filtered by tool_name).
+ A concise message instructing the model not to call the tool again.
"""
- count = 0
- for message in messages:
- if isinstance(message, AIMessage) and message.tool_calls:
- if tool_name is None:
- # Count all tool calls
- count += len(message.tool_calls)
- else:
- # Count only calls to the specified tool
- count += sum(1 for tc in message.tool_calls if tc["name"] == tool_name)
- return count
+ # Always instruct the model not to call again, regardless of which limit was hit
+ if tool_name:
+ return f"Tool call limit exceeded. Do not call '{tool_name}' again."
+ return "Tool call limit exceeded. Do not make additional tool calls."
-def _get_run_messages(messages: list[AnyMessage]) -> list[AnyMessage]:
- """Get messages from the current run (after the last HumanMessage).
-
- Args:
- messages: Full list of messages.
-
- Returns:
- Messages from the current run (after last HumanMessage).
- """
- # Find the last HumanMessage
- last_human_index = -1
- for i in range(len(messages) - 1, -1, -1):
- if isinstance(messages[i], HumanMessage):
- last_human_index = i
- break
-
- # If no HumanMessage found, return all messages
- if last_human_index == -1:
- return messages
-
- # Return messages after the last HumanMessage
- return messages[last_human_index + 1 :]
-
-
-def _build_tool_limit_exceeded_message(
+def _build_final_ai_message_content(
thread_count: int,
run_count: int,
thread_limit: int | None,
run_limit: int | None,
tool_name: str | None,
) -> str:
- """Build a message indicating which tool call limits were exceeded.
+ """Build the final AI message content for 'end' behavior.
+
+ This message is displayed to the user, so it should include detailed information
+ about which limits were exceeded.
Args:
thread_count: Current thread tool call count.
@@ -78,14 +85,16 @@ def _build_tool_limit_exceeded_message(
Returns:
A formatted message describing which limits were exceeded.
"""
- tool_desc = f"'{tool_name}' tool call" if tool_name else "Tool call"
+ tool_desc = f"'{tool_name}' tool" if tool_name else "Tool"
exceeded_limits = []
- if thread_limit is not None and thread_count >= thread_limit:
- exceeded_limits.append(f"thread limit ({thread_count}/{thread_limit})")
- if run_limit is not None and run_count >= run_limit:
- exceeded_limits.append(f"run limit ({run_count}/{run_limit})")
- return f"{tool_desc} limits exceeded: {', '.join(exceeded_limits)}"
+ if thread_limit is not None and thread_count > thread_limit:
+ exceeded_limits.append(f"thread limit exceeded ({thread_count}/{thread_limit} calls)")
+ if run_limit is not None and run_count > run_limit:
+ exceeded_limits.append(f"run limit exceeded ({run_count}/{run_limit} calls)")
+
+ limits_text = " and ".join(exceeded_limits)
+ return f"{tool_desc} call limit reached: {limits_text}."
class ToolCallLimitExceededError(Exception):
@@ -118,52 +127,78 @@ class ToolCallLimitExceededError(Exception):
self.run_limit = run_limit
self.tool_name = tool_name
- msg = _build_tool_limit_exceeded_message(
+ msg = _build_final_ai_message_content(
thread_count, run_count, thread_limit, run_limit, tool_name
)
super().__init__(msg)
-class ToolCallLimitMiddleware(AgentMiddleware):
- """Middleware that tracks tool call counts and enforces limits.
+class ToolCallLimitMiddleware(
+ AgentMiddleware[ToolCallLimitState[ResponseT], ContextT],
+ Generic[ResponseT, ContextT],
+):
+ """Track tool call counts and enforces limits during agent execution.
- This middleware monitors the number of tool calls made during agent execution
- and can terminate the agent when specified limits are reached. It supports
- both thread-level and run-level call counting with configurable exit behaviors.
+ This middleware monitors the number of tool calls made and can terminate or
+ restrict execution when limits are exceeded. It supports both thread-level
+ (persistent across runs) and run-level (per invocation) call counting.
- Thread-level: The middleware counts all tool calls in the entire message history
- and persists this count across multiple runs (invocations) of the agent.
+ Configuration:
+ - `exit_behavior`: How to handle when limits are exceeded
+ - `"continue"`: Block exceeded tools, let execution continue (default)
+ - `"error"`: Raise an exception
+ - `"end"`: Stop immediately with a ToolMessage + AI message for the single
+          tool call that exceeded the limit (raises `NotImplementedError` if there
+          are other pending tool calls due to parallel tool calling).
- Run-level: The middleware counts tool calls made after the last HumanMessage,
- representing the current run (invocation) of the agent.
-
- Example:
+ Examples:
+ Continue execution with blocked tools (default):
```python
from langchain.agents.middleware.tool_call_limit import ToolCallLimitMiddleware
from langchain.agents import create_agent
- # Limit all tool calls globally
- global_limiter = ToolCallLimitMiddleware(thread_limit=20, run_limit=10, exit_behavior="end")
-
- # Limit a specific tool
- search_limiter = ToolCallLimitMiddleware(
- tool_name="search", thread_limit=5, run_limit=3, exit_behavior="end"
+ # Block exceeded tools but let other tools and model continue
+ limiter = ToolCallLimitMiddleware(
+ thread_limit=20,
+ run_limit=10,
+ exit_behavior="continue", # default
)
- # Use both in the same agent
- agent = create_agent("openai:gpt-4o", middleware=[global_limiter, search_limiter])
-
- result = await agent.invoke({"messages": [HumanMessage("Help me with a task")]})
+ agent = create_agent("openai:gpt-4o", middleware=[limiter])
```
+
+ Stop immediately when limit exceeded:
+ ```python
+ # End execution immediately with an AI message
+ limiter = ToolCallLimitMiddleware(run_limit=5, exit_behavior="end")
+
+ agent = create_agent("openai:gpt-4o", middleware=[limiter])
+ ```
+
+ Raise exception on limit:
+ ```python
+ # Strict limit with exception handling
+ limiter = ToolCallLimitMiddleware(tool_name="search", thread_limit=5, exit_behavior="error")
+
+ agent = create_agent("openai:gpt-4o", middleware=[limiter])
+
+ try:
+        result = agent.invoke({"messages": [HumanMessage("Task")]})
+ except ToolCallLimitExceededError as e:
+ print(f"Search limit exceeded: {e}")
+ ```
+
"""
+ state_schema = ToolCallLimitState # type: ignore[assignment]
+
def __init__(
self,
*,
tool_name: str | None = None,
thread_limit: int | None = None,
run_limit: int | None = None,
- exit_behavior: Literal["end", "error"] = "end",
+ exit_behavior: ExitBehavior = "continue",
) -> None:
"""Initialize the tool call limit middleware.
@@ -171,17 +206,21 @@ class ToolCallLimitMiddleware(AgentMiddleware):
tool_name: Name of the specific tool to limit. If `None`, limits apply
to all tools. Defaults to `None`.
thread_limit: Maximum number of tool calls allowed per thread.
- None means no limit. Defaults to `None`.
+ `None` means no limit. Defaults to `None`.
run_limit: Maximum number of tool calls allowed per run.
- None means no limit. Defaults to `None`.
- exit_behavior: What to do when limits are exceeded.
- - "end": Jump to the end of the agent execution and
- inject an artificial AI message indicating that the limit was exceeded.
- - "error": Raise a ToolCallLimitExceededError
- Defaults to "end".
+ `None` means no limit. Defaults to `None`.
+ exit_behavior: How to handle when limits are exceeded.
+ - `"continue"`: Block exceeded tools with error messages, let other
+ tools continue. Model decides when to end. (default)
+ - `"error"`: Raise a `ToolCallLimitExceededError` exception
+ - `"end"`: Stop execution immediately with a ToolMessage + AI message
+ for the single tool call that exceeded the limit. Raises
+ `NotImplementedError` if there are multiple parallel tool
+ calls to other tools or multiple pending tool calls.
Raises:
- ValueError: If both limits are None or if exit_behavior is invalid.
+ ValueError: If both limits are `None`, if exit_behavior is invalid,
+ or if run_limit exceeds thread_limit.
"""
super().__init__()
@@ -189,8 +228,16 @@ class ToolCallLimitMiddleware(AgentMiddleware):
msg = "At least one limit must be specified (thread_limit or run_limit)"
raise ValueError(msg)
- if exit_behavior not in ("end", "error"):
- msg = f"Invalid exit_behavior: {exit_behavior}. Must be 'end' or 'error'"
+ valid_behaviors = ("continue", "error", "end")
+ if exit_behavior not in valid_behaviors:
+ msg = f"Invalid exit_behavior: {exit_behavior!r}. Must be one of {valid_behaviors}"
+ raise ValueError(msg)
+
+ if thread_limit is not None and run_limit is not None and run_limit > thread_limit:
+ msg = (
+ f"run_limit ({run_limit}) cannot exceed thread_limit ({thread_limit}). "
+ "The run limit should be less than or equal to the thread limit."
+ )
raise ValueError(msg)
self.tool_name = tool_name
@@ -210,51 +257,198 @@ class ToolCallLimitMiddleware(AgentMiddleware):
return f"{base_name}[{self.tool_name}]"
return base_name
- @hook_config(can_jump_to=["end"])
- def before_model(self, state: AgentState, runtime: Runtime) -> dict[str, Any] | None: # noqa: ARG002
- """Check tool call limits before making a model call.
+ def _would_exceed_limit(self, thread_count: int, run_count: int) -> bool:
+ """Check if incrementing the counts would exceed any configured limit.
Args:
- state: The current agent state containing messages.
+ thread_count: Current thread call count.
+ run_count: Current run call count.
+
+ Returns:
+ True if either limit would be exceeded by one more call.
+ """
+ return (self.thread_limit is not None and thread_count + 1 > self.thread_limit) or (
+ self.run_limit is not None and run_count + 1 > self.run_limit
+ )
+
+ def _matches_tool_filter(self, tool_call: ToolCall) -> bool:
+ """Check if a tool call matches this middleware's tool filter.
+
+ Args:
+ tool_call: The tool call to check.
+
+ Returns:
+ True if this middleware should track this tool call.
+ """
+ return self.tool_name is None or tool_call["name"] == self.tool_name
+
+ def _separate_tool_calls(
+ self, tool_calls: list[ToolCall], thread_count: int, run_count: int
+ ) -> tuple[list[ToolCall], list[ToolCall], int, int]:
+ """Separate tool calls into allowed and blocked based on limits.
+
+ Args:
+ tool_calls: List of tool calls to evaluate.
+ thread_count: Current thread call count.
+ run_count: Current run call count.
+
+ Returns:
+ Tuple of (allowed_calls, blocked_calls, final_thread_count, final_run_count).
+ """
+ allowed_calls: list[ToolCall] = []
+ blocked_calls: list[ToolCall] = []
+ temp_thread_count = thread_count
+ temp_run_count = run_count
+
+ for tool_call in tool_calls:
+ if not self._matches_tool_filter(tool_call):
+ continue
+
+ if self._would_exceed_limit(temp_thread_count, temp_run_count):
+ blocked_calls.append(tool_call)
+ else:
+ allowed_calls.append(tool_call)
+ temp_thread_count += 1
+ temp_run_count += 1
+
+ return allowed_calls, blocked_calls, temp_thread_count, temp_run_count
+
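
For intuition, a small sketch of how the split behaves with `run_limit=2` and three matching calls (this exercises the private helper directly, purely for illustration):

```python
calls = [
    {"name": "search", "args": {"q": "a"}, "id": "1"},
    {"name": "search", "args": {"q": "b"}, "id": "2"},
    {"name": "search", "args": {"q": "c"}, "id": "3"},
]
limiter = ToolCallLimitMiddleware(run_limit=2)
allowed, blocked, thread_count, run_count = limiter._separate_tool_calls(calls, 0, 0)
# The first two calls are allowed, the third is blocked, and both counters end at 2
# (blocked calls are not counted here; after_model adds them to the run count).
assert len(allowed) == 2 and len(blocked) == 1 and (thread_count, run_count) == (2, 2)
```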
+ @hook_config(can_jump_to=["end"])
+ def after_model(
+ self,
+ state: ToolCallLimitState[ResponseT],
+ runtime: Runtime[ContextT], # noqa: ARG002
+ ) -> dict[str, Any] | None:
+ """Increment tool call counts after a model call and check limits.
+
+ Args:
+ state: The current agent state.
runtime: The langgraph runtime.
Returns:
- If limits are exceeded and exit_behavior is "end", returns
- a Command to jump to the end with a limit exceeded message. Otherwise returns None.
+ State updates with incremented tool call counts. If limits are exceeded
+ and exit_behavior is "end", also includes a jump to end with a ToolMessage
+ and AI message for the single exceeded tool call.
Raises:
ToolCallLimitExceededError: If limits are exceeded and exit_behavior
is "error".
+            NotImplementedError: If limits are exceeded, exit_behavior is "end",
+                and there are pending calls to other tools.
"""
+ # Get the last AIMessage to check for tool calls
messages = state.get("messages", [])
+ if not messages:
+ return None
- # Count tool calls in entire thread
- thread_count = _count_tool_calls_in_messages(messages, self.tool_name)
+ # Find the last AIMessage
+ last_ai_message = None
+ for message in reversed(messages):
+ if isinstance(message, AIMessage):
+ last_ai_message = message
+ break
- # Count tool calls in current run (after last HumanMessage)
- run_messages = _get_run_messages(messages)
- run_count = _count_tool_calls_in_messages(run_messages, self.tool_name)
+ if not last_ai_message or not last_ai_message.tool_calls:
+ return None
- # Check if any limits are exceeded
- thread_limit_exceeded = self.thread_limit is not None and thread_count >= self.thread_limit
- run_limit_exceeded = self.run_limit is not None and run_count >= self.run_limit
+ # Get the count key for this middleware instance
+ count_key = self.tool_name if self.tool_name else "__all__"
- if thread_limit_exceeded or run_limit_exceeded:
- if self.exit_behavior == "error":
- raise ToolCallLimitExceededError(
- thread_count=thread_count,
- run_count=run_count,
- thread_limit=self.thread_limit,
- run_limit=self.run_limit,
- tool_name=self.tool_name,
+ # Get current counts
+ thread_counts = state.get("thread_tool_call_count", {}).copy()
+ run_counts = state.get("run_tool_call_count", {}).copy()
+ current_thread_count = thread_counts.get(count_key, 0)
+ current_run_count = run_counts.get(count_key, 0)
+
+ # Separate tool calls into allowed and blocked
+ allowed_calls, blocked_calls, new_thread_count, new_run_count = self._separate_tool_calls(
+ last_ai_message.tool_calls, current_thread_count, current_run_count
+ )
+
+ # Update counts to include only allowed calls for thread count
+ # (blocked calls don't count towards thread-level tracking)
+ # But run count includes blocked calls since they were attempted in this run
+ thread_counts[count_key] = new_thread_count
+ run_counts[count_key] = new_run_count + len(blocked_calls)
+
+ # If no tool calls are blocked, just update counts
+ if not blocked_calls:
+ if allowed_calls:
+ return {
+ "thread_tool_call_count": thread_counts,
+ "run_tool_call_count": run_counts,
+ }
+ return None
+
+ # Get final counts for building messages
+ final_thread_count = thread_counts[count_key]
+ final_run_count = run_counts[count_key]
+
+ # Handle different exit behaviors
+ if self.exit_behavior == "error":
+ # Use hypothetical thread count to show which limit was exceeded
+ hypothetical_thread_count = final_thread_count + len(blocked_calls)
+ raise ToolCallLimitExceededError(
+ thread_count=hypothetical_thread_count,
+ run_count=final_run_count,
+ thread_limit=self.thread_limit,
+ run_limit=self.run_limit,
+ tool_name=self.tool_name,
+ )
+
+ # Build tool message content (sent to model - no thread/run details)
+ tool_msg_content = _build_tool_message_content(self.tool_name)
+
+ # Inject artificial error ToolMessages for blocked tool calls
+ artificial_messages: list[ToolMessage | AIMessage] = [
+ ToolMessage(
+ content=tool_msg_content,
+ tool_call_id=tool_call["id"],
+ name=tool_call.get("name"),
+ status="error",
+ )
+ for tool_call in blocked_calls
+ ]
+
+ if self.exit_behavior == "end":
+ # Check if there are tool calls to other tools that would continue executing
+ other_tools = [
+ tc
+ for tc in last_ai_message.tool_calls
+ if self.tool_name is not None and tc["name"] != self.tool_name
+ ]
+
+ if other_tools:
+ tool_names = ", ".join({tc["name"] for tc in other_tools})
+ msg = (
+ f"Cannot end execution with other tool calls pending. "
+ f"Found calls to: {tool_names}. Use 'continue' or 'error' behavior instead."
)
- if self.exit_behavior == "end":
- # Create a message indicating the limit was exceeded
- limit_message = _build_tool_limit_exceeded_message(
- thread_count, run_count, self.thread_limit, self.run_limit, self.tool_name
- )
- limit_ai_message = AIMessage(content=limit_message)
+ raise NotImplementedError(msg)
- return {"jump_to": "end", "messages": [limit_ai_message]}
+ # Build final AI message content (displayed to user - includes thread/run details)
+ # Use hypothetical thread count (what it would have been if call wasn't blocked)
+ # to show which limit was actually exceeded
+ hypothetical_thread_count = final_thread_count + len(blocked_calls)
+ final_msg_content = _build_final_ai_message_content(
+ hypothetical_thread_count,
+ final_run_count,
+ self.thread_limit,
+ self.run_limit,
+ self.tool_name,
+ )
+ artificial_messages.append(AIMessage(content=final_msg_content))
- return None
+ return {
+ "thread_tool_call_count": thread_counts,
+ "run_tool_call_count": run_counts,
+ "jump_to": "end",
+ "messages": artificial_messages,
+ }
+
+ # For exit_behavior="continue", return error messages to block exceeded tools
+ return {
+ "thread_tool_call_count": thread_counts,
+ "run_tool_call_count": run_counts,
+ "messages": artificial_messages,
+ }
diff --git a/libs/langchain_v1/langchain/agents/middleware/tool_emulator.py b/libs/langchain_v1/langchain/agents/middleware/tool_emulator.py
new file mode 100644
index 00000000000..90018150e44
--- /dev/null
+++ b/libs/langchain_v1/langchain/agents/middleware/tool_emulator.py
@@ -0,0 +1,200 @@
+"""Tool emulator middleware for testing."""
+
+from __future__ import annotations
+
+from typing import TYPE_CHECKING
+
+from langchain_core.language_models.chat_models import BaseChatModel
+from langchain_core.messages import HumanMessage, ToolMessage
+
+from langchain.agents.middleware.types import AgentMiddleware
+from langchain.chat_models.base import init_chat_model
+
+if TYPE_CHECKING:
+ from collections.abc import Awaitable, Callable
+
+ from langgraph.types import Command
+
+ from langchain.agents.middleware.types import ToolCallRequest
+ from langchain.tools import BaseTool
+
+
+class LLMToolEmulator(AgentMiddleware):
+ """Emulates specified tools using an LLM instead of executing them.
+
+ This middleware allows selective emulation of tools for testing purposes.
+ By default (when tools=None), all tools are emulated. You can specify which
+ tools to emulate by passing a list of tool names or BaseTool instances.
+
+ Examples:
+ Emulate all tools (default behavior):
+ ```python
+ from langchain.agents.middleware import LLMToolEmulator
+
+ middleware = LLMToolEmulator()
+
+ agent = create_agent(
+ model="openai:gpt-4o",
+ tools=[get_weather, get_user_location, calculator],
+ middleware=[middleware],
+ )
+ ```
+
+ Emulate specific tools by name:
+ ```python
+ middleware = LLMToolEmulator(tools=["get_weather", "get_user_location"])
+ ```
+
+ Use a custom model for emulation:
+ ```python
+ middleware = LLMToolEmulator(
+ tools=["get_weather"], model="anthropic:claude-sonnet-4-5-20250929"
+ )
+ ```
+
+ Emulate specific tools by passing tool instances:
+ ```python
+ middleware = LLMToolEmulator(tools=[get_weather, get_user_location])
+ ```
+ """
+
+ def __init__(
+ self,
+ *,
+ tools: list[str | BaseTool] | None = None,
+ model: str | BaseChatModel | None = None,
+ ) -> None:
+ """Initialize the tool emulator.
+
+ Args:
+ tools: List of tool names (str) or BaseTool instances to emulate.
+ If None (default), ALL tools will be emulated.
+ If empty list, no tools will be emulated.
+ model: Model to use for emulation.
+ Defaults to "anthropic:claude-sonnet-4-5-20250929".
+ Can be a model identifier string or BaseChatModel instance.
+ """
+ super().__init__()
+
+ # Extract tool names from tools
+ # None means emulate all tools
+ self.emulate_all = tools is None
+ self.tools_to_emulate: set[str] = set()
+
+ if not self.emulate_all and tools is not None:
+ for tool in tools:
+ if isinstance(tool, str):
+ self.tools_to_emulate.add(tool)
+ else:
+ # Assume BaseTool with .name attribute
+ self.tools_to_emulate.add(tool.name)
+
+ # Initialize emulator model
+ if model is None:
+ self.model = init_chat_model("anthropic:claude-sonnet-4-5-20250929", temperature=1)
+ elif isinstance(model, BaseChatModel):
+ self.model = model
+ else:
+ self.model = init_chat_model(model, temperature=1)
+
+ def wrap_tool_call(
+ self,
+ request: ToolCallRequest,
+ handler: Callable[[ToolCallRequest], ToolMessage | Command],
+ ) -> ToolMessage | Command:
+ """Emulate tool execution using LLM if tool should be emulated.
+
+ Args:
+ request: Tool call request to potentially emulate.
+ handler: Callback to execute the tool (can be called multiple times).
+
+ Returns:
+ ToolMessage with emulated response if tool should be emulated,
+ otherwise calls handler for normal execution.
+ """
+ tool_name = request.tool_call["name"]
+
+ # Check if this tool should be emulated
+ should_emulate = self.emulate_all or tool_name in self.tools_to_emulate
+
+ if not should_emulate:
+ # Let it execute normally by calling the handler
+ return handler(request)
+
+ # Extract tool information for emulation
+ tool_args = request.tool_call["args"]
+ tool_description = request.tool.description if request.tool else "No description available"
+
+ # Build prompt for emulator LLM
+ prompt = (
+ f"You are emulating a tool call for testing purposes.\n\n"
+ f"Tool: {tool_name}\n"
+ f"Description: {tool_description}\n"
+ f"Arguments: {tool_args}\n\n"
+ f"Generate a realistic response that this tool would return "
+ f"given these arguments.\n"
+ f"Return ONLY the tool's output, no explanation or preamble. "
+ f"Introduce variation into your responses."
+ )
+
+ # Get emulated response from LLM
+ response = self.model.invoke([HumanMessage(prompt)])
+
+ # Short-circuit: return emulated result without executing real tool
+ return ToolMessage(
+ content=response.content,
+ tool_call_id=request.tool_call["id"],
+ name=tool_name,
+ )
+
+ async def awrap_tool_call(
+ self,
+ request: ToolCallRequest,
+ handler: Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]],
+ ) -> ToolMessage | Command:
+ """Async version of wrap_tool_call.
+
+ Emulate tool execution using LLM if tool should be emulated.
+
+ Args:
+ request: Tool call request to potentially emulate.
+ handler: Async callback to execute the tool (can be called multiple times).
+
+ Returns:
+ ToolMessage with emulated response if tool should be emulated,
+ otherwise calls handler for normal execution.
+ """
+ tool_name = request.tool_call["name"]
+
+ # Check if this tool should be emulated
+ should_emulate = self.emulate_all or tool_name in self.tools_to_emulate
+
+ if not should_emulate:
+ # Let it execute normally by calling the handler
+ return await handler(request)
+
+ # Extract tool information for emulation
+ tool_args = request.tool_call["args"]
+ tool_description = request.tool.description if request.tool else "No description available"
+
+ # Build prompt for emulator LLM
+ prompt = (
+ f"You are emulating a tool call for testing purposes.\n\n"
+ f"Tool: {tool_name}\n"
+ f"Description: {tool_description}\n"
+ f"Arguments: {tool_args}\n\n"
+ f"Generate a realistic response that this tool would return "
+ f"given these arguments.\n"
+ f"Return ONLY the tool's output, no explanation or preamble. "
+ f"Introduce variation into your responses."
+ )
+
+ # Get emulated response from LLM (using async invoke)
+ response = await self.model.ainvoke([HumanMessage(prompt)])
+
+ # Short-circuit: return emulated result without executing real tool
+ return ToolMessage(
+ content=response.content,
+ tool_call_id=request.tool_call["id"],
+ name=tool_name,
+ )
diff --git a/libs/langchain_v1/langchain/agents/middleware/tool_retry.py b/libs/langchain_v1/langchain/agents/middleware/tool_retry.py
new file mode 100644
index 00000000000..361158b0c61
--- /dev/null
+++ b/libs/langchain_v1/langchain/agents/middleware/tool_retry.py
@@ -0,0 +1,384 @@
+"""Tool retry middleware for agents."""
+
+from __future__ import annotations
+
+import asyncio
+import random
+import time
+from typing import TYPE_CHECKING, Literal
+
+from langchain_core.messages import ToolMessage
+
+from langchain.agents.middleware.types import AgentMiddleware
+
+if TYPE_CHECKING:
+ from collections.abc import Awaitable, Callable
+
+ from langgraph.types import Command
+
+ from langchain.agents.middleware.types import ToolCallRequest
+ from langchain.tools import BaseTool
+
+
+class ToolRetryMiddleware(AgentMiddleware):
+ """Middleware that automatically retries failed tool calls with configurable backoff.
+
+ Supports retrying on specific exceptions and exponential backoff.
+
+ Examples:
+ Basic usage with default settings (2 retries, exponential backoff):
+ ```python
+ from langchain.agents import create_agent
+ from langchain.agents.middleware import ToolRetryMiddleware
+
+ agent = create_agent(model, tools=[search_tool], middleware=[ToolRetryMiddleware()])
+ ```
+
+ Retry specific exceptions only:
+ ```python
+ from requests.exceptions import RequestException, Timeout
+
+ retry = ToolRetryMiddleware(
+ max_retries=4,
+ retry_on=(RequestException, Timeout),
+ backoff_factor=1.5,
+ )
+ ```
+
+ Custom exception filtering:
+ ```python
+ from requests.exceptions import HTTPError
+
+
+ def should_retry(exc: Exception) -> bool:
+ # Only retry on 5xx errors
+ if isinstance(exc, HTTPError):
+ return 500 <= exc.status_code < 600
+ return False
+
+
+ retry = ToolRetryMiddleware(
+ max_retries=3,
+ retry_on=should_retry,
+ )
+ ```
+
+ Apply to specific tools with custom error handling:
+ ```python
+ def format_error(exc: Exception) -> str:
+ return "Database temporarily unavailable. Please try again later."
+
+
+ retry = ToolRetryMiddleware(
+ max_retries=4,
+ tools=["search_database"],
+ on_failure=format_error,
+ )
+ ```
+
+ Apply to specific tools using BaseTool instances:
+ ```python
+ from langchain_core.tools import tool
+
+
+ @tool
+ def search_database(query: str) -> str:
+ '''Search the database.'''
+ return results
+
+
+ retry = ToolRetryMiddleware(
+ max_retries=4,
+ tools=[search_database], # Pass BaseTool instance
+ )
+ ```
+
+ Constant backoff (no exponential growth):
+ ```python
+ retry = ToolRetryMiddleware(
+ max_retries=5,
+ backoff_factor=0.0, # No exponential growth
+ initial_delay=2.0, # Always wait 2 seconds
+ )
+ ```
+
+ Raise exception on failure:
+ ```python
+ retry = ToolRetryMiddleware(
+ max_retries=2,
+ on_failure="raise", # Re-raise exception instead of returning message
+ )
+ ```
+ """
+
+ def __init__(
+ self,
+ *,
+ max_retries: int = 2,
+ tools: list[BaseTool | str] | None = None,
+ retry_on: tuple[type[Exception], ...] | Callable[[Exception], bool] = (Exception,),
+ on_failure: (
+ Literal["raise", "return_message"] | Callable[[Exception], str]
+ ) = "return_message",
+ backoff_factor: float = 2.0,
+ initial_delay: float = 1.0,
+ max_delay: float = 60.0,
+ jitter: bool = True,
+ ) -> None:
+ """Initialize ToolRetryMiddleware.
+
+ Args:
+ max_retries: Maximum number of retry attempts after the initial call.
+ Default is 2 retries (3 total attempts). Must be >= 0.
+ tools: Optional list of tools or tool names to apply retry logic to.
+ Can be a list of `BaseTool` instances or tool name strings.
+ If `None`, applies to all tools. Default is `None`.
+ retry_on: Either a tuple of exception types to retry on, or a callable
+ that takes an exception and returns `True` if it should be retried.
+ Default is to retry on all exceptions.
+ on_failure: Behavior when all retries are exhausted. Options:
+ - `"return_message"` (default): Return a ToolMessage with error details,
+ allowing the LLM to handle the failure and potentially recover.
+ - `"raise"`: Re-raise the exception, stopping agent execution.
+ - Custom callable: Function that takes the exception and returns a string
+ for the ToolMessage content, allowing custom error formatting.
+ backoff_factor: Multiplier for exponential backoff. Each retry waits
+ `initial_delay * (backoff_factor ** retry_number)` seconds.
+ Set to 0.0 for constant delay. Default is 2.0.
+ initial_delay: Initial delay in seconds before first retry. Default is 1.0.
+ max_delay: Maximum delay in seconds between retries. Caps exponential
+ backoff growth. Default is 60.0.
+            jitter: Whether to add random jitter (±25%) to the delay to avoid a thundering herd.
+ Default is `True`.
+
+ Raises:
+ ValueError: If max_retries < 0 or delays are negative.
+ """
+ super().__init__()
+
+ # Validate parameters
+ if max_retries < 0:
+ msg = "max_retries must be >= 0"
+ raise ValueError(msg)
+ if initial_delay < 0:
+ msg = "initial_delay must be >= 0"
+ raise ValueError(msg)
+ if max_delay < 0:
+ msg = "max_delay must be >= 0"
+ raise ValueError(msg)
+ if backoff_factor < 0:
+ msg = "backoff_factor must be >= 0"
+ raise ValueError(msg)
+
+ self.max_retries = max_retries
+
+ # Extract tool names from BaseTool instances or strings
+ self._tool_filter: list[str] | None
+ if tools is not None:
+ self._tool_filter = [tool.name if not isinstance(tool, str) else tool for tool in tools]
+ else:
+ self._tool_filter = None
+
+ self.tools = [] # No additional tools registered by this middleware
+ self.retry_on = retry_on
+ self.on_failure = on_failure
+ self.backoff_factor = backoff_factor
+ self.initial_delay = initial_delay
+ self.max_delay = max_delay
+ self.jitter = jitter
+
+ def _should_retry_tool(self, tool_name: str) -> bool:
+ """Check if retry logic should apply to this tool.
+
+ Args:
+ tool_name: Name of the tool being called.
+
+ Returns:
+ `True` if retry logic should apply, `False` otherwise.
+ """
+ if self._tool_filter is None:
+ return True
+ return tool_name in self._tool_filter
+
+ def _should_retry_exception(self, exc: Exception) -> bool:
+ """Check if the exception should trigger a retry.
+
+ Args:
+ exc: The exception that occurred.
+
+ Returns:
+ `True` if the exception should be retried, `False` otherwise.
+ """
+ if callable(self.retry_on):
+ return self.retry_on(exc)
+ return isinstance(exc, self.retry_on)
+
+ def _calculate_delay(self, retry_number: int) -> float:
+ """Calculate delay for the given retry attempt.
+
+ Args:
+ retry_number: The retry attempt number (0-indexed).
+
+ Returns:
+ Delay in seconds before next retry.
+ """
+ if self.backoff_factor == 0.0:
+ delay = self.initial_delay
+ else:
+ delay = self.initial_delay * (self.backoff_factor**retry_number)
+
+ # Cap at max_delay
+ delay = min(delay, self.max_delay)
+
+ if self.jitter and delay > 0:
+ jitter_amount = delay * 0.25
+ delay = delay + random.uniform(-jitter_amount, jitter_amount) # noqa: S311
+ # Ensure delay is not negative after jitter
+ delay = max(0, delay)
+
+ return delay
+
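
A worked example of the schedule this yields with the default `initial_delay=1.0`, `backoff_factor=2.0`, and `max_delay=60.0`, with jitter disabled so the numbers are deterministic:

```python
retry = ToolRetryMiddleware(max_retries=5, jitter=False)
delays = [retry._calculate_delay(n) for n in range(5)]
# initial_delay * backoff_factor**n, capped at max_delay
assert delays == [1.0, 2.0, 4.0, 8.0, 16.0]
```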
+ def _format_failure_message(self, tool_name: str, exc: Exception, attempts_made: int) -> str:
+ """Format the failure message when retries are exhausted.
+
+ Args:
+ tool_name: Name of the tool that failed.
+ exc: The exception that caused the failure.
+ attempts_made: Number of attempts actually made.
+
+ Returns:
+ Formatted error message string.
+ """
+ exc_type = type(exc).__name__
+ attempt_word = "attempt" if attempts_made == 1 else "attempts"
+ return f"Tool '{tool_name}' failed after {attempts_made} {attempt_word} with {exc_type}"
+
+ def _handle_failure(
+ self, tool_name: str, tool_call_id: str | None, exc: Exception, attempts_made: int
+ ) -> ToolMessage:
+ """Handle failure when all retries are exhausted.
+
+ Args:
+ tool_name: Name of the tool that failed.
+ tool_call_id: ID of the tool call (may be None).
+ exc: The exception that caused the failure.
+ attempts_made: Number of attempts actually made.
+
+ Returns:
+ ToolMessage with error details.
+
+ Raises:
+ Exception: If on_failure is "raise", re-raises the exception.
+ """
+ if self.on_failure == "raise":
+ raise exc
+
+ if callable(self.on_failure):
+ content = self.on_failure(exc)
+ else:
+ content = self._format_failure_message(tool_name, exc, attempts_made)
+
+ return ToolMessage(
+ content=content,
+ tool_call_id=tool_call_id,
+ name=tool_name,
+ status="error",
+ )
+
+ def wrap_tool_call(
+ self,
+ request: ToolCallRequest,
+ handler: Callable[[ToolCallRequest], ToolMessage | Command],
+ ) -> ToolMessage | Command:
+ """Intercept tool execution and retry on failure.
+
+ Args:
+ request: Tool call request with call dict, BaseTool, state, and runtime.
+ handler: Callable to execute the tool (can be called multiple times).
+
+ Returns:
+ ToolMessage or Command (the final result).
+ """
+ tool_name = request.tool.name if request.tool else request.tool_call["name"]
+
+ # Check if retry should apply to this tool
+ if not self._should_retry_tool(tool_name):
+ return handler(request)
+
+ tool_call_id = request.tool_call["id"]
+
+ # Initial attempt + retries
+ for attempt in range(self.max_retries + 1):
+ try:
+ return handler(request)
+ except Exception as exc: # noqa: BLE001
+ attempts_made = attempt + 1 # attempt is 0-indexed
+
+ # Check if we should retry this exception
+ if not self._should_retry_exception(exc):
+ # Exception is not retryable, handle failure immediately
+ return self._handle_failure(tool_name, tool_call_id, exc, attempts_made)
+
+ # Check if we have more retries left
+ if attempt < self.max_retries:
+ # Calculate and apply backoff delay
+ delay = self._calculate_delay(attempt)
+ if delay > 0:
+ time.sleep(delay)
+ # Continue to next retry
+ else:
+ # No more retries, handle failure
+ return self._handle_failure(tool_name, tool_call_id, exc, attempts_made)
+
+ # Unreachable: loop always returns via handler success or _handle_failure
+ msg = "Unexpected: retry loop completed without returning"
+ raise RuntimeError(msg)
+
+ async def awrap_tool_call(
+ self,
+ request: ToolCallRequest,
+ handler: Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]],
+ ) -> ToolMessage | Command:
+ """Intercept and control async tool execution with retry logic.
+
+ Args:
+ request: Tool call request with call dict, BaseTool, state, and runtime.
+            handler: Async callable to execute the tool (can be called multiple times).
+
+ Returns:
+ ToolMessage or Command (the final result).
+ """
+ tool_name = request.tool.name if request.tool else request.tool_call["name"]
+
+ # Check if retry should apply to this tool
+ if not self._should_retry_tool(tool_name):
+ return await handler(request)
+
+ tool_call_id = request.tool_call["id"]
+
+ # Initial attempt + retries
+ for attempt in range(self.max_retries + 1):
+ try:
+ return await handler(request)
+ except Exception as exc: # noqa: BLE001
+ attempts_made = attempt + 1 # attempt is 0-indexed
+
+ # Check if we should retry this exception
+ if not self._should_retry_exception(exc):
+ # Exception is not retryable, handle failure immediately
+ return self._handle_failure(tool_name, tool_call_id, exc, attempts_made)
+
+ # Check if we have more retries left
+ if attempt < self.max_retries:
+ # Calculate and apply backoff delay
+ delay = self._calculate_delay(attempt)
+ if delay > 0:
+ await asyncio.sleep(delay)
+ # Continue to next retry
+ else:
+ # No more retries, handle failure
+ return self._handle_failure(tool_name, tool_call_id, exc, attempts_made)
+
+ # Unreachable: loop always returns via handler success or _handle_failure
+ msg = "Unexpected: retry loop completed without returning"
+ raise RuntimeError(msg)
diff --git a/libs/langchain_v1/langchain/agents/middleware/tool_selection.py b/libs/langchain_v1/langchain/agents/middleware/tool_selection.py
index f63fb68be71..b6746738156 100644
--- a/libs/langchain_v1/langchain/agents/middleware/tool_selection.py
+++ b/libs/langchain_v1/langchain/agents/middleware/tool_selection.py
@@ -12,11 +12,16 @@ if TYPE_CHECKING:
from langchain.tools import BaseTool
from langchain_core.language_models.chat_models import BaseChatModel
-from langchain_core.messages import AIMessage, HumanMessage
+from langchain_core.messages import HumanMessage
from pydantic import Field, TypeAdapter
from typing_extensions import TypedDict
-from langchain.agents.middleware.types import AgentMiddleware, ModelRequest
+from langchain.agents.middleware.types import (
+ AgentMiddleware,
+ ModelCallResult,
+ ModelRequest,
+ ModelResponse,
+)
from langchain.chat_models.base import init_chat_model
logger = logging.getLogger(__name__)
@@ -245,8 +250,8 @@ class LLMToolSelectorMiddleware(AgentMiddleware):
def wrap_model_call(
self,
request: ModelRequest,
- handler: Callable[[ModelRequest], AIMessage],
- ) -> AIMessage:
+ handler: Callable[[ModelRequest], ModelResponse],
+ ) -> ModelCallResult:
"""Filter tools based on LLM selection before invoking the model via handler."""
selection_request = self._prepare_selection_request(request)
if selection_request is None:
@@ -276,8 +281,8 @@ class LLMToolSelectorMiddleware(AgentMiddleware):
async def awrap_model_call(
self,
request: ModelRequest,
- handler: Callable[[ModelRequest], Awaitable[AIMessage]],
- ) -> AIMessage:
+ handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
+ ) -> ModelCallResult:
"""Filter tools based on LLM selection before invoking the model via handler."""
selection_request = self._prepare_selection_request(request)
if selection_request is None:
diff --git a/libs/langchain_v1/langchain/agents/middleware/types.py b/libs/langchain_v1/langchain/agents/middleware/types.py
index d0cc40b3774..41f16e8f974 100644
--- a/libs/langchain_v1/langchain/agents/middleware/types.py
+++ b/libs/langchain_v1/langchain/agents/middleware/types.py
@@ -3,7 +3,7 @@
from __future__ import annotations
from collections.abc import Awaitable, Callable
-from dataclasses import dataclass, field
+from dataclasses import dataclass, field, replace
from inspect import iscoroutinefunction
from typing import (
TYPE_CHECKING,
@@ -19,16 +19,22 @@ from typing import (
if TYPE_CHECKING:
from collections.abc import Awaitable
- from langchain.tools.tool_node import ToolCallRequest
+# Needed as top level import for Pydantic schema generation on AgentState
+from typing import TypeAlias
-# needed as top level import for pydantic schema generation on AgentState
-from langchain_core.messages import AIMessage, AnyMessage, ToolMessage # noqa: TC002
+from langchain_core.messages import ( # noqa: TC002
+ AIMessage,
+ AnyMessage,
+ BaseMessage,
+ SystemMessage,
+ ToolMessage,
+)
from langgraph.channels.ephemeral_value import EphemeralValue
-from langgraph.channels.untracked_value import UntrackedValue
from langgraph.graph.message import add_messages
+from langgraph.prebuilt.tool_node import ToolCallRequest, ToolCallWrapper
from langgraph.types import Command # noqa: TC002
from langgraph.typing import ContextT
-from typing_extensions import NotRequired, Required, TypedDict, TypeVar
+from typing_extensions import NotRequired, Required, TypedDict, TypeVar, Unpack
if TYPE_CHECKING:
from langchain_core.language_models.chat_models import BaseChatModel
@@ -42,8 +48,12 @@ __all__ = [
"AgentState",
"ContextT",
"ModelRequest",
+ "ModelResponse",
"OmitFromSchema",
- "PublicAgentState",
+ "ResponseT",
+ "StateT_co",
+ "ToolCallRequest",
+ "ToolCallWrapper",
"after_agent",
"after_model",
"before_agent",
@@ -59,12 +69,24 @@ JumpTo = Literal["tools", "model", "end"]
ResponseT = TypeVar("ResponseT")
+class _ModelRequestOverrides(TypedDict, total=False):
+ """Possible overrides for ModelRequest.override() method."""
+
+ model: BaseChatModel
+ system_prompt: str | None
+ messages: list[AnyMessage]
+ tool_choice: Any | None
+ tools: list[BaseTool | dict]
+ response_format: ResponseFormat | None
+ model_settings: dict[str, Any]
+
+
@dataclass
class ModelRequest:
"""Model request information for the agent."""
model: BaseChatModel
- system_prompt: str | None
+ system_prompt: str | SystemMessage | None
messages: list[AnyMessage] # excluding system prompt
tool_choice: Any | None
tools: list[BaseTool | dict]
@@ -73,6 +95,61 @@ class ModelRequest:
runtime: Runtime[ContextT] # type: ignore[valid-type]
model_settings: dict[str, Any] = field(default_factory=dict)
+ def override(self, **overrides: Unpack[_ModelRequestOverrides]) -> ModelRequest:
+ """Replace the request with a new request with the given overrides.
+
+ Returns a new `ModelRequest` instance with the specified attributes replaced.
+ This follows an immutable pattern, leaving the original request unchanged.
+
+ Args:
+ **overrides: Keyword arguments for attributes to override. Supported keys:
+ - model: BaseChatModel instance
+ - system_prompt: Optional system prompt string or SystemMessage object
+ - messages: List of messages
+ - tool_choice: Tool choice configuration
+ - tools: List of available tools
+ - response_format: Response format specification
+ - model_settings: Additional model settings
+
+ Returns:
+ New ModelRequest instance with specified overrides applied.
+
+ Examples:
+ ```python
+ # Create a new request with different model
+ new_request = request.override(model=different_model)
+
+ # Override multiple attributes
+ new_request = request.override(system_prompt="New instructions", tool_choice="auto")
+ ```
+ """
+ return replace(self, **overrides)
+
+
+@dataclass
+class ModelResponse:
+ """Response from model execution including messages and optional structured output.
+
+ The result will usually contain a single AIMessage, but may include
+ an additional ToolMessage if the model used a tool for structured output.
+ """
+
+ result: list[BaseMessage]
+ """List of messages from model execution."""
+
+ structured_response: Any = None
+ """Parsed structured output if response_format was specified, None otherwise."""
+
+
+# Type alias for middleware return type - allows returning either full response or just AIMessage
+ModelCallResult: TypeAlias = "ModelResponse | AIMessage"
+"""Type alias for model call handler return value.
+
+Middleware can return either:
+- ModelResponse: Full response with messages and optional structured output
+- AIMessage: Simplified return for simple use cases
+"""
+
@dataclass
class OmitFromSchema:
@@ -101,21 +178,23 @@ class AgentState(TypedDict, Generic[ResponseT]):
messages: Required[Annotated[list[AnyMessage], add_messages]]
jump_to: NotRequired[Annotated[JumpTo | None, EphemeralValue, PrivateStateAttr]]
structured_response: NotRequired[Annotated[ResponseT, OmitFromInput]]
- thread_model_call_count: NotRequired[Annotated[int, PrivateStateAttr]]
- run_model_call_count: NotRequired[Annotated[int, UntrackedValue, PrivateStateAttr]]
-class PublicAgentState(TypedDict, Generic[ResponseT]):
- """Public state schema for the agent.
+class _InputAgentState(TypedDict): # noqa: PYI049
+ """Input state schema for the agent."""
- Just used for typing purposes.
- """
+ messages: Required[Annotated[list[AnyMessage | dict], add_messages]]
+
+
+class _OutputAgentState(TypedDict, Generic[ResponseT]): # noqa: PYI049
+ """Output state schema for the agent."""
messages: Required[Annotated[list[AnyMessage], add_messages]]
structured_response: NotRequired[ResponseT]
StateT = TypeVar("StateT", bound=AgentState, default=AgentState)
+StateT_co = TypeVar("StateT_co", bound=AgentState, default=AgentState, covariant=True)
StateT_contra = TypeVar("StateT_contra", bound=AgentState, contravariant=True)
@@ -167,23 +246,23 @@ class AgentMiddleware(Generic[StateT, ContextT]):
def wrap_model_call(
self,
request: ModelRequest,
- handler: Callable[[ModelRequest], AIMessage],
- ) -> AIMessage:
+ handler: Callable[[ModelRequest], ModelResponse],
+ ) -> ModelCallResult:
"""Intercept and control model execution via handler callback.
- The handler callback executes the model request and returns an AIMessage.
+ The handler callback executes the model request and returns a `ModelResponse`.
Middleware can call the handler multiple times for retry logic, skip calling
it to short-circuit, or modify the request/response. Multiple middleware
compose with first in list as outermost layer.
Args:
request: Model request to execute (includes state and runtime).
- handler: Callback that executes the model request and returns AIMessage.
- Call this to execute the model. Can be called multiple times
- for retry logic. Can skip calling it to short-circuit.
+ handler: Callback that executes the model request and returns
+ `ModelResponse`. Call this to execute the model. Can be called multiple
+ times for retry logic. Can skip calling it to short-circuit.
Returns:
- Final AIMessage to use (from handler or custom).
+ `ModelCallResult`
Examples:
Retry on error:
@@ -200,8 +279,12 @@ class AgentMiddleware(Generic[StateT, ContextT]):
Rewrite response:
```python
def wrap_model_call(self, request, handler):
- result = handler(request)
- return AIMessage(content=f"[{result.content}]")
+ response = handler(request)
+ ai_msg = response.result[0]
+ return ModelResponse(
+ result=[AIMessage(content=f"[{ai_msg.content}]")],
+ structured_response=response.structured_response,
+ )
```
Error to fallback:
@@ -210,7 +293,7 @@ class AgentMiddleware(Generic[StateT, ContextT]):
try:
return handler(request)
except Exception:
- return AIMessage(content="Service unavailable")
+ return ModelResponse(result=[AIMessage(content="Service unavailable")])
```
Cache/short-circuit:
@@ -218,26 +301,51 @@ class AgentMiddleware(Generic[StateT, ContextT]):
def wrap_model_call(self, request, handler):
if cached := get_cache(request):
return cached # Short-circuit with cached result
- result = handler(request)
- save_cache(request, result)
- return result
+ response = handler(request)
+ save_cache(request, response)
+ return response
+ ```
+
+ Simple AIMessage return (converted automatically):
+ ```python
+ def wrap_model_call(self, request, handler):
+ response = handler(request)
+ # Can return AIMessage directly for simple cases
+ return AIMessage(content="Simplified response")
```
"""
- raise NotImplementedError
+ msg = (
+ "Synchronous implementation of wrap_model_call is not available. "
+ "You are likely encountering this error because you defined only the async version "
+ "(awrap_model_call) and invoked your agent in a synchronous context "
+ "(e.g., using `stream()` or `invoke()`). "
+ "To resolve this, either: "
+ "(1) subclass AgentMiddleware and implement the synchronous wrap_model_call method, "
+ "(2) use the @wrap_model_call decorator on a standalone sync function, or "
+ "(3) invoke your agent asynchronously using `astream()` or `ainvoke()`."
+ )
+ raise NotImplementedError(msg)
async def awrap_model_call(
self,
request: ModelRequest,
- handler: Callable[[ModelRequest], Awaitable[AIMessage]],
- ) -> AIMessage:
- """Async version of wrap_model_call.
+ handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
+ ) -> ModelCallResult:
+ """Intercept and control async model execution via handler callback.
+
+ The handler callback executes the model request and returns a `ModelResponse`.
+ Middleware can call the handler multiple times for retry logic, skip calling
+ it to short-circuit, or modify the request/response. Multiple middleware
+ compose with first in list as outermost layer.
Args:
request: Model request to execute (includes state and runtime).
- handler: Async callback that executes the model request.
+ handler: Async callback that executes the model request and returns
+ `ModelResponse`. Call this to execute the model. Can be called multiple
+ times for retry logic. Can skip calling it to short-circuit.
Returns:
- Final AIMessage to use (from handler or custom).
+            `ModelCallResult` to use as the final model output (from handler or custom).
Examples:
Retry on error:
@@ -251,7 +359,17 @@ class AgentMiddleware(Generic[StateT, ContextT]):
raise
```
"""
- raise NotImplementedError
+ msg = (
+ "Asynchronous implementation of awrap_model_call is not available. "
+ "You are likely encountering this error because you defined only the sync version "
+ "(wrap_model_call) and invoked your agent in an asynchronous context "
+ "(e.g., using `astream()` or `ainvoke()`). "
+ "To resolve this, either: "
+ "(1) subclass AgentMiddleware and implement the asynchronous awrap_model_call method, "
+ "(2) use the @wrap_model_call decorator on a standalone async function, or "
+ "(3) invoke your agent synchronously using `stream()` or `invoke()`."
+ )
+ raise NotImplementedError(msg)
def after_agent(self, state: StateT, runtime: Runtime[ContextT]) -> dict[str, Any] | None:
"""Logic to run after the agent execution completes."""
@@ -269,15 +387,15 @@ class AgentMiddleware(Generic[StateT, ContextT]):
"""Intercept tool execution for retries, monitoring, or modification.
Multiple middleware compose automatically (first defined = outermost).
- Exceptions propagate unless handle_tool_errors is configured on ToolNode.
+ Exceptions propagate unless `handle_tool_errors` is configured on `ToolNode`.
Args:
- request: Tool call request with call dict, BaseTool, state, and runtime.
- Access state via request.state and runtime via request.runtime.
+ request: Tool call request with call `dict`, `BaseTool`, state, and runtime.
+ Access state via `request.state` and runtime via `request.runtime`.
handler: Callable to execute the tool (can be called multiple times).
Returns:
- ToolMessage or Command (the final result).
+ `ToolMessage` or `Command` (the final result).
The handler callable can be invoked multiple times for retry logic.
Each call to handler is independent and stateless.
@@ -285,12 +403,15 @@ class AgentMiddleware(Generic[StateT, ContextT]):
Examples:
Modify request before execution:
+ ```python
def wrap_tool_call(self, request, handler):
request.tool_call["args"]["value"] *= 2
return handler(request)
+ ```
Retry on error (call handler multiple times):
+ ```python
def wrap_tool_call(self, request, handler):
for attempt in range(3):
try:
@@ -301,9 +422,11 @@ class AgentMiddleware(Generic[StateT, ContextT]):
if attempt == 2:
raise
return result
+ ```
Conditional retry based on response:
+ ```python
def wrap_tool_call(self, request, handler):
for attempt in range(3):
result = handler(request)
@@ -312,12 +435,84 @@ class AgentMiddleware(Generic[StateT, ContextT]):
if attempt < 2:
continue
return result
+ ```
"""
- raise NotImplementedError
+ msg = (
+ "Synchronous implementation of wrap_tool_call is not available. "
+ "You are likely encountering this error because you defined only the async version "
+ "(awrap_tool_call) and invoked your agent in a synchronous context "
+ "(e.g., using `stream()` or `invoke()`). "
+ "To resolve this, either: "
+ "(1) subclass AgentMiddleware and implement the synchronous wrap_tool_call method, "
+ "(2) use the @wrap_tool_call decorator on a standalone sync function, or "
+ "(3) invoke your agent asynchronously using `astream()` or `ainvoke()`."
+ )
+ raise NotImplementedError(msg)
+
+ async def awrap_tool_call(
+ self,
+ request: ToolCallRequest,
+ handler: Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]],
+ ) -> ToolMessage | Command:
+ """Intercept and control async tool execution via handler callback.
+
+ The handler callback executes the tool call and returns a `ToolMessage` or
+ `Command`. Middleware can call the handler multiple times for retry logic, skip
+ calling it to short-circuit, or modify the request/response. Multiple middleware
+ compose with first in list as outermost layer.
+
+ Args:
+ request: Tool call request with call `dict`, `BaseTool`, state, and runtime.
+ Access state via `request.state` and runtime via `request.runtime`.
+            handler: Async callable that executes the tool and returns `ToolMessage` or
+ `Command`. Call this to execute the tool. Can be called multiple times
+ for retry logic. Can skip calling it to short-circuit.
+
+ Returns:
+ `ToolMessage` or `Command` (the final result).
+
+ The handler callable can be invoked multiple times for retry logic.
+ Each call to handler is independent and stateless.
+
+ Examples:
+ Async retry on error:
+ ```python
+ async def awrap_tool_call(self, request, handler):
+ for attempt in range(3):
+ try:
+ result = await handler(request)
+ if is_valid(result):
+ return result
+ except Exception:
+ if attempt == 2:
+ raise
+ return result
+ ```
+
+            Async cache/short-circuit:
+            ```python
+ async def awrap_tool_call(self, request, handler):
+ if cached := await get_cache_async(request):
+ return ToolMessage(content=cached, tool_call_id=request.tool_call["id"])
+ result = await handler(request)
+ await save_cache_async(request, result)
+ return result
+ ```
+ """
+ msg = (
+ "Asynchronous implementation of awrap_tool_call is not available. "
+ "You are likely encountering this error because you defined only the sync version "
+ "(wrap_tool_call) and invoked your agent in an asynchronous context "
+ "(e.g., using `astream()` or `ainvoke()`). "
+ "To resolve this, either: "
+ "(1) subclass AgentMiddleware and implement the asynchronous awrap_tool_call method, "
+ "(2) use the @wrap_tool_call decorator on a standalone async function, or "
+ "(3) invoke your agent synchronously using `stream()` or `invoke()`."
+ )
+ raise NotImplementedError(msg)
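A matching sketch for the tool-call hooks, showing one middleware class covering both the sync and async execution paths. The `value` argument name and the import path are assumptions for illustration.

```python
# Sketch only: clamp a numeric tool argument in both sync and async paths.
# The 'value' argument name is hypothetical; adapt it to your tool's schema.
from langchain.agents.middleware import AgentMiddleware, ToolCallRequest


class ClampArgsMiddleware(AgentMiddleware):
    """Cap a tool argument before the tool runs."""

    def wrap_tool_call(self, request: ToolCallRequest, handler):
        args = request.tool_call["args"]
        if "value" in args:
            args["value"] = min(args["value"], 100)
        return handler(request)

    async def awrap_tool_call(self, request: ToolCallRequest, handler):
        args = request.tool_call["args"]
        if "value" in args:
            args["value"] = min(args["value"], 100)
        return await handler(request)
```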
class _CallableWithStateAndRuntime(Protocol[StateT_contra, ContextT]):
- """Callable with AgentState and Runtime as arguments."""
+ """Callable with `AgentState` and `Runtime` as arguments."""
def __call__(
self, state: StateT_contra, runtime: Runtime[ContextT]
@@ -327,7 +522,7 @@ class _CallableWithStateAndRuntime(Protocol[StateT_contra, ContextT]):
class _CallableReturningPromptString(Protocol[StateT_contra, ContextT]): # type: ignore[misc]
- """Callable that returns a prompt string given ModelRequest (contains state and runtime)."""
+ """Callable that returns a prompt string given `ModelRequest` (contains state and runtime)."""
def __call__(self, request: ModelRequest) -> str | Awaitable[str]:
"""Generate a system prompt string based on the request."""
@@ -337,14 +532,15 @@ class _CallableReturningPromptString(Protocol[StateT_contra, ContextT]): # type
class _CallableReturningModelResponse(Protocol[StateT_contra, ContextT]): # type: ignore[misc]
"""Callable for model call interception with handler callback.
- Receives handler callback to execute model and returns final AIMessage.
+    Receives a handler callback to execute the model, and returns a final
+    `ModelResponse` or `AIMessage`.
"""
def __call__(
self,
request: ModelRequest,
- handler: Callable[[ModelRequest], AIMessage],
- ) -> AIMessage:
+ handler: Callable[[ModelRequest], ModelResponse],
+ ) -> ModelCallResult:
"""Intercept model execution via handler callback."""
...
@@ -352,7 +548,8 @@ class _CallableReturningModelResponse(Protocol[StateT_contra, ContextT]): # typ
class _CallableReturningToolResponse(Protocol):
"""Callable for tool call interception with handler callback.
- Receives handler callback to execute tool and returns final ToolMessage or Command.
+    Receives a handler callback to execute the tool, and returns a final `ToolMessage`
+    or `Command`.
"""
def __call__(
@@ -445,22 +642,22 @@ def before_model(
Callable[[_CallableWithStateAndRuntime[StateT, ContextT]], AgentMiddleware[StateT, ContextT]]
| AgentMiddleware[StateT, ContextT]
):
- """Decorator used to dynamically create a middleware with the before_model hook.
+ """Decorator used to dynamically create a middleware with the `before_model` hook.
Args:
func: The function to be decorated. Must accept:
`state: StateT, runtime: Runtime[ContextT]` - State and runtime context
state_schema: Optional custom state schema type. If not provided, uses the default
- AgentState schema.
+ `AgentState` schema.
tools: Optional list of additional tools to register with this middleware.
can_jump_to: Optional list of valid jump destinations for conditional edges.
- Valid values are: "tools", "model", "end"
+ Valid values are: `"tools"`, `"model"`, `"end"`
name: Optional name for the generated middleware class. If not provided,
uses the decorated function's name.
Returns:
- Either an AgentMiddleware instance (if func is provided directly) or a decorator function
- that can be applied to a function it is wrapping.
+ Either an `AgentMiddleware` instance (if func is provided directly) or a
+ decorator function that can be applied to a function it is wrapping.
The decorated function should return:
- `dict[str, Any]` - State updates to merge into the agent state
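As a small illustration of the decorator form described in this docstring, the sketch below registers a purely observational `before_model` hook; the import path is assumed from this diff's module, and the hook returns `None` (no state updates).

```python
# Minimal sketch: log each model call without modifying state.
from langchain.agents.middleware import before_model


@before_model
def log_model_calls(state, runtime):
    # Observational only, so no state updates are returned
    print(f"Calling model with {len(state['messages'])} messages")
    return None
```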
@@ -587,22 +784,22 @@ def after_model(
Callable[[_CallableWithStateAndRuntime[StateT, ContextT]], AgentMiddleware[StateT, ContextT]]
| AgentMiddleware[StateT, ContextT]
):
- """Decorator used to dynamically create a middleware with the after_model hook.
+ """Decorator used to dynamically create a middleware with the `after_model` hook.
Args:
func: The function to be decorated. Must accept:
`state: StateT, runtime: Runtime[ContextT]` - State and runtime context
- state_schema: Optional custom state schema type. If not provided, uses the default
- AgentState schema.
+ state_schema: Optional custom state schema type. If not provided, uses the
+ default `AgentState` schema.
tools: Optional list of additional tools to register with this middleware.
can_jump_to: Optional list of valid jump destinations for conditional edges.
- Valid values are: "tools", "model", "end"
+ Valid values are: `"tools"`, `"model"`, `"end"`
name: Optional name for the generated middleware class. If not provided,
uses the decorated function's name.
Returns:
- Either an AgentMiddleware instance (if func is provided) or a decorator function
- that can be applied to a function.
+ Either an `AgentMiddleware` instance (if func is provided) or a decorator
+ function that can be applied to a function.
The decorated function should return:
- `dict[str, Any]` - State updates to merge into the agent state
@@ -718,22 +915,22 @@ def before_agent(
Callable[[_CallableWithStateAndRuntime[StateT, ContextT]], AgentMiddleware[StateT, ContextT]]
| AgentMiddleware[StateT, ContextT]
):
- """Decorator used to dynamically create a middleware with the before_agent hook.
+ """Decorator used to dynamically create a middleware with the `before_agent` hook.
Args:
func: The function to be decorated. Must accept:
`state: StateT, runtime: Runtime[ContextT]` - State and runtime context
- state_schema: Optional custom state schema type. If not provided, uses the default
- AgentState schema.
+ state_schema: Optional custom state schema type. If not provided, uses the
+ default `AgentState` schema.
tools: Optional list of additional tools to register with this middleware.
can_jump_to: Optional list of valid jump destinations for conditional edges.
- Valid values are: "tools", "model", "end"
+ Valid values are: `"tools"`, `"model"`, `"end"`
name: Optional name for the generated middleware class. If not provided,
uses the decorated function's name.
Returns:
- Either an AgentMiddleware instance (if func is provided directly) or a decorator function
- that can be applied to a function it is wrapping.
+ Either an `AgentMiddleware` instance (if func is provided directly) or a
+ decorator function that can be applied to a function it is wrapping.
The decorated function should return:
- `dict[str, Any]` - State updates to merge into the agent state
@@ -860,22 +1057,22 @@ def after_agent(
Callable[[_CallableWithStateAndRuntime[StateT, ContextT]], AgentMiddleware[StateT, ContextT]]
| AgentMiddleware[StateT, ContextT]
):
- """Decorator used to dynamically create a middleware with the after_agent hook.
+ """Decorator used to dynamically create a middleware with the `after_agent` hook.
Args:
func: The function to be decorated. Must accept:
`state: StateT, runtime: Runtime[ContextT]` - State and runtime context
- state_schema: Optional custom state schema type. If not provided, uses the default
- AgentState schema.
+ state_schema: Optional custom state schema type. If not provided, uses the
+ default `AgentState` schema.
tools: Optional list of additional tools to register with this middleware.
can_jump_to: Optional list of valid jump destinations for conditional edges.
- Valid values are: "tools", "model", "end"
+ Valid values are: `"tools"`, `"model"`, `"end"`
name: Optional name for the generated middleware class. If not provided,
uses the decorated function's name.
Returns:
- Either an AgentMiddleware instance (if func is provided) or a decorator function
- that can be applied to a function.
+ Either an `AgentMiddleware` instance (if func is provided) or a decorator
+ function that can be applied to a function.
The decorated function should return:
- `dict[str, Any]` - State updates to merge into the agent state
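A corresponding sketch for `after_agent`, again purely observational; the function name and import path are illustrative.

```python
# Minimal sketch: inspect the final message once the agent run completes.
from langchain.agents.middleware import after_agent


@after_agent
def log_final_message(state, runtime):
    last = state["messages"][-1]
    print(f"Agent finished; final message type: {type(last).__name__}")
    return None
```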
@@ -1037,8 +1234,8 @@ def dynamic_prompt(
async def async_wrapped(
self: AgentMiddleware[StateT, ContextT], # noqa: ARG001
request: ModelRequest,
- handler: Callable[[ModelRequest], Awaitable[AIMessage]],
- ) -> AIMessage:
+ handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
+ ) -> ModelCallResult:
prompt = await func(request) # type: ignore[misc]
request.system_prompt = prompt
return await handler(request)
@@ -1058,12 +1255,22 @@ def dynamic_prompt(
def wrapped(
self: AgentMiddleware[StateT, ContextT], # noqa: ARG001
request: ModelRequest,
- handler: Callable[[ModelRequest], AIMessage],
- ) -> AIMessage:
- prompt = cast("str", func(request))
+ handler: Callable[[ModelRequest], ModelResponse],
+ ) -> ModelCallResult:
+ prompt = cast("str | SystemMessage", func(request))
request.system_prompt = prompt
return handler(request)
+ async def async_wrapped_from_sync(
+ self: AgentMiddleware[StateT, ContextT], # noqa: ARG001
+ request: ModelRequest,
+ handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
+ ) -> ModelCallResult:
+ # Delegate to sync function
+ prompt = cast("str | SystemMessage", func(request))
+ request.system_prompt = prompt
+ return await handler(request)
+
middleware_name = cast("str", getattr(func, "__name__", "DynamicPromptMiddleware"))
return type(
@@ -1073,6 +1280,7 @@ def dynamic_prompt(
"state_schema": AgentState,
"tools": [],
"wrap_model_call": wrapped,
+ "awrap_model_call": async_wrapped_from_sync,
},
)()
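For reference, a possible use of the `dynamic_prompt` decorator whose sync-to-async delegation is added above; `request.state["messages"]` is the default `AgentState` field, and the rest is a sketch.

```python
# Sketch: choose the system prompt based on conversation length.
from langchain.agents.middleware import dynamic_prompt


@dynamic_prompt
def adaptive_prompt(request):
    # The request carries the agent state and runtime
    if len(request.state["messages"]) > 10:
        return "Be concise."
    return "Answer in full detail."
```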
@@ -1113,20 +1321,21 @@ def wrap_model_call(
]
| AgentMiddleware[StateT, ContextT]
):
- """Create middleware with wrap_model_call hook from a function.
+ """Create middleware with `wrap_model_call` hook from a function.
Converts a function with handler callback into middleware that can intercept
model calls, implement retry logic, handle errors, and rewrite responses.
Args:
func: Function accepting (request, handler) that calls handler(request)
- to execute the model and returns final AIMessage. Request contains state and runtime.
- state_schema: Custom state schema. Defaults to AgentState.
+ to execute the model and returns `ModelResponse` or `AIMessage`.
+ Request contains state and runtime.
+ state_schema: Custom state schema. Defaults to `AgentState`.
tools: Additional tools to register with this middleware.
name: Middleware class name. Defaults to function name.
Returns:
- AgentMiddleware instance if func provided, otherwise a decorator.
+ `AgentMiddleware` instance if func provided, otherwise a decorator.
Examples:
Basic retry logic:
@@ -1157,12 +1366,24 @@ def wrap_model_call(
return handler(request)
```
- Rewrite response content:
+ Rewrite response content (full ModelResponse):
```python
@wrap_model_call
def uppercase_responses(request, handler):
- result = handler(request)
- return AIMessage(content=result.content.upper())
+ response = handler(request)
+ ai_msg = response.result[0]
+ return ModelResponse(
+ result=[AIMessage(content=ai_msg.content.upper())],
+ structured_response=response.structured_response,
+ )
+ ```
+
+ Simple AIMessage return (converted automatically):
+ ```python
+ @wrap_model_call
+ def simple_response(request, handler):
+ # AIMessage is automatically converted to ModelResponse
+ return AIMessage(content="Simple response")
```
"""
@@ -1176,8 +1397,8 @@ def wrap_model_call(
async def async_wrapped(
self: AgentMiddleware[StateT, ContextT], # noqa: ARG001
request: ModelRequest,
- handler: Callable[[ModelRequest], Awaitable[AIMessage]],
- ) -> AIMessage:
+ handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
+ ) -> ModelCallResult:
return await func(request, handler) # type: ignore[misc, arg-type]
middleware_name = name or cast(
@@ -1197,8 +1418,8 @@ def wrap_model_call(
def wrapped(
self: AgentMiddleware[StateT, ContextT], # noqa: ARG001
request: ModelRequest,
- handler: Callable[[ModelRequest], AIMessage],
- ) -> AIMessage:
+ handler: Callable[[ModelRequest], ModelResponse],
+ ) -> ModelCallResult:
return func(request, handler)
middleware_name = name or cast("str", getattr(func, "__name__", "WrapModelCallMiddleware"))
@@ -1248,28 +1469,22 @@ def wrap_tool_call(
]
| AgentMiddleware
):
- """Create middleware with wrap_tool_call hook from a function.
+ """Create middleware with `wrap_tool_call` hook from a function.
Converts a function with handler callback into middleware that can intercept
tool calls, implement retry logic, monitor execution, and modify responses.
Args:
func: Function accepting (request, handler) that calls
- handler(request) to execute the tool and returns final ToolMessage or Command.
+ handler(request) to execute the tool and returns final `ToolMessage` or
+ `Command`. Can be sync or async.
tools: Additional tools to register with this middleware.
name: Middleware class name. Defaults to function name.
Returns:
- AgentMiddleware instance if func provided, otherwise a decorator.
+ `AgentMiddleware` instance if func provided, otherwise a decorator.
Examples:
- Basic passthrough:
- ```python
- @wrap_tool_call
- def passthrough(request, handler):
- return handler(request)
- ```
-
Retry logic:
```python
@wrap_tool_call
@@ -1283,6 +1498,18 @@ def wrap_tool_call(
raise
```
+ Async retry logic:
+ ```python
+ @wrap_tool_call
+ async def async_retry(request, handler):
+ for attempt in range(3):
+ try:
+ return await handler(request)
+ except Exception:
+ if attempt == 2:
+ raise
+ ```
+
Modify request:
```python
@wrap_tool_call
@@ -1306,6 +1533,31 @@ def wrap_tool_call(
def decorator(
func: _CallableReturningToolResponse,
) -> AgentMiddleware:
+ is_async = iscoroutinefunction(func)
+
+ if is_async:
+
+ async def async_wrapped(
+ self: AgentMiddleware, # noqa: ARG001
+ request: ToolCallRequest,
+ handler: Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]],
+ ) -> ToolMessage | Command:
+ return await func(request, handler) # type: ignore[arg-type,misc]
+
+ middleware_name = name or cast(
+ "str", getattr(func, "__name__", "WrapToolCallMiddleware")
+ )
+
+ return type(
+ middleware_name,
+ (AgentMiddleware,),
+ {
+ "state_schema": AgentState,
+ "tools": tools or [],
+ "awrap_tool_call": async_wrapped,
+ },
+ )()
+
def wrapped(
self: AgentMiddleware, # noqa: ARG001
request: ToolCallRequest,
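Putting the decorator- and class-based middleware together might look like the sketch below; `create_agent` and its `middleware` parameter are assumed from the 1.0 agents API and are not shown in this diff.

```python
# Sketch: wire the middleware from the earlier examples into an agent.
from langchain.agents import create_agent

agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[],
    # log_model_calls, adaptive_prompt and FallbackMiddleware come from the sketches above
    middleware=[log_model_calls, adaptive_prompt, FallbackMiddleware()],
)
agent.invoke({"messages": [{"role": "user", "content": "hello"}]})
```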
diff --git a/libs/langchain_v1/langchain/agents/structured_output.py b/libs/langchain_v1/langchain/agents/structured_output.py
index d7824ecaf2a..75038675807 100644
--- a/libs/langchain_v1/langchain/agents/structured_output.py
+++ b/libs/langchain_v1/langchain/agents/structured_output.py
@@ -34,17 +34,21 @@ SchemaKind = Literal["pydantic", "dataclass", "typeddict", "json_schema"]
class StructuredOutputError(Exception):
"""Base class for structured output errors."""
+ ai_message: AIMessage
+
class MultipleStructuredOutputsError(StructuredOutputError):
"""Raised when model returns multiple structured output tool calls when only one is expected."""
- def __init__(self, tool_names: list[str]) -> None:
- """Initialize MultipleStructuredOutputsError.
+ def __init__(self, tool_names: list[str], ai_message: AIMessage) -> None:
+ """Initialize `MultipleStructuredOutputsError`.
Args:
tool_names: The names of the tools called for structured output.
+            ai_message: The AI message that contained the multiple structured output
+                tool calls.
"""
self.tool_names = tool_names
+ self.ai_message = ai_message
super().__init__(
"Model incorrectly returned multiple structured responses "
@@ -55,15 +59,17 @@ class MultipleStructuredOutputsError(StructuredOutputError):
class StructuredOutputValidationError(StructuredOutputError):
"""Raised when structured output tool call arguments fail to parse according to the schema."""
- def __init__(self, tool_name: str, source: Exception) -> None:
- """Initialize StructuredOutputValidationError.
+ def __init__(self, tool_name: str, source: Exception, ai_message: AIMessage) -> None:
+ """Initialize `StructuredOutputValidationError`.
Args:
tool_name: The name of the tool that failed.
source: The exception that occurred.
+ ai_message: The AI message that contained the invalid structured output.
"""
self.tool_name = tool_name
self.source = source
+ self.ai_message = ai_message
super().__init__(f"Failed to parse structured output for tool '{tool_name}': {source}.")
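Since both error classes now carry the offending `AIMessage`, callers can surface it when handling failures; the agent invocation below is illustrative only.

```python
# Sketch: inspect the raw model output attached to a structured-output failure.
from langchain.agents.structured_output import StructuredOutputValidationError

try:
    agent.invoke({"messages": [{"role": "user", "content": "What's the weather?"}]})
except StructuredOutputValidationError as exc:
    print(f"Tool {exc.tool_name} failed to parse")
    print(exc.ai_message.content)  # the AIMessage that produced the bad output
```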
@@ -73,8 +79,9 @@ def _parse_with_schema(
"""Parse data using for any supported schema type.
Args:
- schema: The schema type (Pydantic model, dataclass, or TypedDict)
- schema_kind: One of "pydantic", "dataclass", "typeddict", or "json_schema"
+ schema: The schema type (Pydantic model, `dataclass`, or `TypedDict`)
+ schema_kind: One of `"pydantic"`, `"dataclass"`, `"typeddict"`, or
+ `"json_schema"`
data: The data to parse
Returns:
@@ -99,13 +106,14 @@ class _SchemaSpec(Generic[SchemaT]):
"""Describes a structured output schema."""
schema: type[SchemaT]
- """The schema for the response, can be a Pydantic model, dataclass, TypedDict,
+ """The schema for the response, can be a Pydantic model, `dataclass`, `TypedDict`,
or JSON schema dict."""
name: str
"""Name of the schema, used for tool calling.
- If not provided, the name will be the model name or "response_format" if it's a JSON schema.
+ If not provided, the name will be the model name or `"response_format"` if it's a
+ JSON schema.
"""
description: str
@@ -186,14 +194,15 @@ class ToolStrategy(Generic[SchemaT]):
handle_errors: (
bool | str | type[Exception] | tuple[type[Exception], ...] | Callable[[Exception], str]
)
- """Error handling strategy for structured output via ToolStrategy. Default is True.
+ """Error handling strategy for structured output via `ToolStrategy`.
- - True: Catch all errors with default error template
- - str: Catch all errors with this custom message
- - type[Exception]: Only catch this exception type with default message
- - tuple[type[Exception], ...]: Only catch these exception types with default message
- - Callable[[Exception], str]: Custom function that returns error message
- - False: No retry, let exceptions propagate
+ - `True`: Catch all errors with default error template
+ - `str`: Catch all errors with this custom message
+ - `type[Exception]`: Only catch this exception type with default message
+ - `tuple[type[Exception], ...]`: Only catch these exception types with default
+ message
+ - `Callable[[Exception], str]`: Custom function that returns error message
+ - `False`: No retry, let exceptions propagate
"""
def __init__(
@@ -207,9 +216,10 @@ class ToolStrategy(Generic[SchemaT]):
| tuple[type[Exception], ...]
| Callable[[Exception], str] = True,
) -> None:
- """Initialize ToolStrategy.
+ """Initialize `ToolStrategy`.
- Initialize ToolStrategy with schemas, tool message content, and error handling strategy.
+ Initialize `ToolStrategy` with schemas, tool message content, and error handling
+ strategy.
"""
self.schema = schema
self.tool_message_content = tool_message_content
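A brief sketch of the `handle_errors` options listed above, using a custom callable; the `Weather` schema is hypothetical.

```python
# Sketch: custom error message when structured output parsing fails.
from pydantic import BaseModel

from langchain.agents.structured_output import ToolStrategy


class Weather(BaseModel):
    temperature_c: float
    condition: str


strategy = ToolStrategy(
    schema=Weather,
    handle_errors=lambda exc: f"Structured output failed: {exc}. Please try again.",
)
```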
@@ -285,13 +295,13 @@ class OutputToolBinding(Generic[SchemaT]):
@classmethod
def from_schema_spec(cls, schema_spec: _SchemaSpec[SchemaT]) -> Self:
- """Create an OutputToolBinding instance from a SchemaSpec.
+ """Create an `OutputToolBinding` instance from a `SchemaSpec`.
Args:
- schema_spec: The SchemaSpec to convert
+ schema_spec: The `SchemaSpec` to convert
Returns:
- An OutputToolBinding instance with the appropriate tool created
+ An `OutputToolBinding` instance with the appropriate tool created
"""
return cls(
schema=schema_spec.schema,
@@ -329,20 +339,20 @@ class ProviderStrategyBinding(Generic[SchemaT]):
schema: type[SchemaT]
"""The original schema provided for structured output
- (Pydantic model, dataclass, TypedDict, or JSON schema dict)."""
+ (Pydantic model, `dataclass`, `TypedDict`, or JSON schema dict)."""
schema_kind: SchemaKind
"""Classification of the schema type for proper response construction."""
@classmethod
def from_schema_spec(cls, schema_spec: _SchemaSpec[SchemaT]) -> Self:
- """Create a ProviderStrategyBinding instance from a SchemaSpec.
+ """Create a `ProviderStrategyBinding` instance from a `SchemaSpec`.
Args:
- schema_spec: The SchemaSpec to convert
+ schema_spec: The `SchemaSpec` to convert
Returns:
- A ProviderStrategyBinding instance for parsing native structured output
+ A `ProviderStrategyBinding` instance for parsing native structured output
"""
return cls(
schema=schema_spec.schema,
@@ -350,10 +360,10 @@ class ProviderStrategyBinding(Generic[SchemaT]):
)
def parse(self, response: AIMessage) -> SchemaT:
- """Parse AIMessage content according to the schema.
+ """Parse `AIMessage` content according to the schema.
Args:
- response: The AI message containing the structured output
+ response: The `AIMessage` containing the structured output
Returns:
The parsed response according to the schema
diff --git a/libs/langchain_v1/langchain/chat_models/__init__.py b/libs/langchain_v1/langchain/chat_models/__init__.py
index 324c590865b..f52b04c4e06 100644
--- a/libs/langchain_v1/langchain/chat_models/__init__.py
+++ b/libs/langchain_v1/langchain/chat_models/__init__.py
@@ -1,4 +1,10 @@
-"""Chat models."""
+"""Entrypoint to using [chat models](https://docs.langchain.com/oss/python/langchain/models) in LangChain.
+
+!!! warning "Reference docs"
+ This page contains **reference documentation** for chat models. See
+ [the docs](https://docs.langchain.com/oss/python/langchain/models) for conceptual
+ guides, tutorials, and examples on using chat models.
+""" # noqa: E501
from langchain_core.language_models import BaseChatModel
diff --git a/libs/langchain_v1/langchain/chat_models/base.py b/libs/langchain_v1/langchain/chat_models/base.py
index 28e92b0458f..d481eda62a2 100644
--- a/libs/langchain_v1/langchain/chat_models/base.py
+++ b/libs/langchain_v1/langchain/chat_models/base.py
@@ -4,14 +4,7 @@ from __future__ import annotations
import warnings
from importlib import util
-from typing import (
- TYPE_CHECKING,
- Any,
- Literal,
- TypeAlias,
- cast,
- overload,
-)
+from typing import TYPE_CHECKING, Any, Literal, TypeAlias, cast, overload
from langchain_core.language_models import BaseChatModel, LanguageModelInput
from langchain_core.messages import AIMessage, AnyMessage
@@ -71,167 +64,199 @@ def init_chat_model(
config_prefix: str | None = None,
**kwargs: Any,
) -> BaseChatModel | _ConfigurableModel:
- """Initialize a ChatModel from the model name and provider.
+ """Initialize a chat model from any supported provider using a unified interface.
+
+ **Two main use cases:**
+
+    1. **Fixed model**: specify the model upfront and get a ready-to-use chat model.
+    2. **Configurable model**: specify parameters (including the model name) at
+       runtime via `config`, making it easy to switch between models/providers without
+       changing your code.
!!! note
- Must have the integration package corresponding to the model provider
- installed.
+ Requires the integration package for the chosen model provider to be installed.
+
+ See the `model_provider` parameter below for specific package names
+ (e.g., `pip install langchain-openai`).
+
+ Refer to the [provider integration's API reference](https://docs.langchain.com/oss/python/integrations/providers)
+ for supported model parameters to use as `**kwargs`.
Args:
- model: The name of the model, e.g. "o3-mini", "claude-3-5-sonnet-latest". You can
- also specify model and model provider in a single argument using
- '{model_provider}:{model}' format, e.g. "openai:o1".
- model_provider: The model provider if not specified as part of model arg (see
- above). Supported model_provider values and the corresponding integration
- package are:
+ model: The name or ID of the model, e.g. `'o3-mini'`, `'claude-sonnet-4-5-20250929'`.
- - 'openai' -> langchain-openai
- - 'anthropic' -> langchain-anthropic
- - 'azure_openai' -> langchain-openai
- - 'azure_ai' -> langchain-azure-ai
- - 'google_vertexai' -> langchain-google-vertexai
- - 'google_genai' -> langchain-google-genai
- - 'bedrock' -> langchain-aws
- - 'bedrock_converse' -> langchain-aws
- - 'cohere' -> langchain-cohere
- - 'fireworks' -> langchain-fireworks
- - 'together' -> langchain-together
- - 'mistralai' -> langchain-mistralai
- - 'huggingface' -> langchain-huggingface
- - 'groq' -> langchain-groq
- - 'ollama' -> langchain-ollama
- - 'google_anthropic_vertex' -> langchain-google-vertexai
- - 'deepseek' -> langchain-deepseek
- - 'ibm' -> langchain-ibm
- - 'nvidia' -> langchain-nvidia-ai-endpoints
- - 'xai' -> langchain-xai
- - 'perplexity' -> langchain-perplexity
+ You can also specify model and model provider in a single argument using
+ `'{model_provider}:{model}'` format, e.g. `'openai:o1'`.
+ model_provider: The model provider if not specified as part of the model arg
+ (see above).
- Will attempt to infer model_provider from model if not specified. The
+ Supported `model_provider` values and the corresponding integration package
+ are:
+
+ - `openai` -> [`langchain-openai`](https://docs.langchain.com/oss/python/integrations/providers/openai)
+ - `anthropic` -> [`langchain-anthropic`](https://docs.langchain.com/oss/python/integrations/providers/anthropic)
+ - `azure_openai` -> [`langchain-openai`](https://docs.langchain.com/oss/python/integrations/providers/openai)
+ - `azure_ai` -> [`langchain-azure-ai`](https://docs.langchain.com/oss/python/integrations/providers/microsoft)
+ - `google_vertexai` -> [`langchain-google-vertexai`](https://docs.langchain.com/oss/python/integrations/providers/google)
+ - `google_genai` -> [`langchain-google-genai`](https://docs.langchain.com/oss/python/integrations/providers/google)
+ - `bedrock` -> [`langchain-aws`](https://docs.langchain.com/oss/python/integrations/providers/aws)
+ - `bedrock_converse` -> [`langchain-aws`](https://docs.langchain.com/oss/python/integrations/providers/aws)
+ - `cohere` -> [`langchain-cohere`](https://docs.langchain.com/oss/python/integrations/providers/cohere)
+ - `fireworks` -> [`langchain-fireworks`](https://docs.langchain.com/oss/python/integrations/providers/fireworks)
+ - `together` -> [`langchain-together`](https://docs.langchain.com/oss/python/integrations/providers/together)
+ - `mistralai` -> [`langchain-mistralai`](https://docs.langchain.com/oss/python/integrations/providers/mistralai)
+ - `huggingface` -> [`langchain-huggingface`](https://docs.langchain.com/oss/python/integrations/providers/huggingface)
+ - `groq` -> [`langchain-groq`](https://docs.langchain.com/oss/python/integrations/providers/groq)
+ - `ollama` -> [`langchain-ollama`](https://docs.langchain.com/oss/python/integrations/providers/ollama)
+ - `google_anthropic_vertex` -> [`langchain-google-vertexai`](https://docs.langchain.com/oss/python/integrations/providers/google)
+ - `deepseek` -> [`langchain-deepseek`](https://docs.langchain.com/oss/python/integrations/providers/deepseek)
+          - `ibm` -> [`langchain-ibm`](https://docs.langchain.com/oss/python/integrations/providers/ibm)
+ - `nvidia` -> [`langchain-nvidia-ai-endpoints`](https://docs.langchain.com/oss/python/integrations/providers/nvidia)
+ - `xai` -> [`langchain-xai`](https://docs.langchain.com/oss/python/integrations/providers/xai)
+ - `perplexity` -> [`langchain-perplexity`](https://docs.langchain.com/oss/python/integrations/providers/perplexity)
+
+ Will attempt to infer `model_provider` from model if not specified. The
following providers will be inferred based on these model prefixes:
- - 'gpt-...' | 'o1...' | 'o3...' -> 'openai'
- - 'claude...' -> 'anthropic'
- - 'amazon....' -> 'bedrock'
- - 'gemini...' -> 'google_vertexai'
- - 'command...' -> 'cohere'
- - 'accounts/fireworks...' -> 'fireworks'
- - 'mistral...' -> 'mistralai'
- - 'deepseek...' -> 'deepseek'
- - 'grok...' -> 'xai'
- - 'sonar...' -> 'perplexity'
- configurable_fields: Which model parameters are
- configurable:
+ - `gpt-...` | `o1...` | `o3...` -> `openai`
+ - `claude...` -> `anthropic`
+ - `amazon...` -> `bedrock`
+ - `gemini...` -> `google_vertexai`
+ - `command...` -> `cohere`
+ - `accounts/fireworks...` -> `fireworks`
+ - `mistral...` -> `mistralai`
+ - `deepseek...` -> `deepseek`
+ - `grok...` -> `xai`
+ - `sonar...` -> `perplexity`
+ configurable_fields: Which model parameters are configurable at runtime:
- - None: No configurable fields.
- - "any": All fields are configurable. *See Security Note below.*
- - Union[List[str], Tuple[str, ...]]: Specified fields are configurable.
+ - `None`: No configurable fields (i.e., a fixed model).
+ - `'any'`: All fields are configurable. **See security note below.**
+          - `list[str] | tuple[str, ...]`: Specified fields are configurable.
- Fields are assumed to have config_prefix stripped if there is a
- config_prefix. If model is specified, then defaults to None. If model is
- not specified, then defaults to `("model", "model_provider")`.
+ Fields are assumed to have `config_prefix` stripped if a `config_prefix` is
+ specified.
- **Security Note**: Setting `configurable_fields="any"` means fields like
- api_key, base_url, etc. can be altered at runtime, potentially redirecting
- model requests to a different service/user. Make sure that if you're
- accepting untrusted configurations that you enumerate the
- `configurable_fields=(...)` explicitly.
+ If `model` is specified, then defaults to `None`.
- config_prefix: If config_prefix is a non-empty string then model will be
- configurable at runtime via the
- `config["configurable"]["{config_prefix}_{param}"]` keys. If
- config_prefix is an empty string then model will be configurable via
+ If `model` is not specified, then defaults to `("model", "model_provider")`.
+
+ !!! warning "Security note"
+ Setting `configurable_fields="any"` means fields like `api_key`,
+ `base_url`, etc., can be altered at runtime, potentially redirecting
+ model requests to a different service/user.
+
+ Make sure that if you're accepting untrusted configurations that you
+ enumerate the `configurable_fields=(...)` explicitly.
+
+ config_prefix: Optional prefix for configuration keys.
+
+ Useful when you have multiple configurable models in the same application.
+
+            If `config_prefix` is a non-empty string then `model` will be configurable
+ at runtime via the `config["configurable"]["{config_prefix}_{param}"]` keys.
+ See examples below.
+
+            If `config_prefix` is an empty string then `model` will be configurable via
`config["configurable"]["{param}"]`.
- kwargs: Additional model-specific keyword args to pass to
- `<>.__init__(model=model_name, **kwargs)`. Examples
- include:
- * temperature: Model temperature.
- * max_tokens: Max output tokens.
- * timeout: The maximum time (in seconds) to wait for a response from the model
- before canceling the request.
- * max_retries: The maximum number of attempts the system will make to resend a
- request if it fails due to issues like network timeouts or rate limits.
- * base_url: The URL of the API endpoint where requests are sent.
- * rate_limiter: A `BaseRateLimiter` to space out requests to avoid exceeding
- rate limits.
+ **kwargs: Additional model-specific keyword args to pass to the underlying
+ chat model's `__init__` method. Common parameters include:
+
+ - `temperature`: Model temperature for controlling randomness.
+ - `max_tokens`: Maximum number of output tokens.
+ - `timeout`: Maximum time (in seconds) to wait for a response.
+ - `max_retries`: Maximum number of retry attempts for failed requests.
+ - `base_url`: Custom API endpoint URL.
+ - `rate_limiter`: A
+ [`BaseRateLimiter`][langchain_core.rate_limiters.BaseRateLimiter]
+ instance to control request rate.
+
+ Refer to the specific model provider's
+ [integration reference](https://reference.langchain.com/python/integrations/)
+ for all available parameters.
Returns:
- A BaseChatModel corresponding to the model_name and model_provider specified if
- configurability is inferred to be False. If configurable, a chat model emulator
- that initializes the underlying model at runtime once a config is passed in.
+ A `BaseChatModel` corresponding to the `model_name` and `model_provider`
+ specified if configurability is inferred to be `False`. If configurable, a
+ chat model emulator that initializes the underlying model at runtime once a
+ config is passed in.
Raises:
- ValueError: If model_provider cannot be inferred or isn't supported.
+ ValueError: If `model_provider` cannot be inferred or isn't supported.
ImportError: If the model provider integration package is not installed.
- ???+ note "Init non-configurable model"
+ ???+ example "Initialize a non-configurable model"
```python
# pip install langchain langchain-openai langchain-anthropic langchain-google-vertexai
+
from langchain.chat_models import init_chat_model
o3_mini = init_chat_model("openai:o3-mini", temperature=0)
- claude_sonnet = init_chat_model("anthropic:claude-3-5-sonnet-latest", temperature=0)
- gemini_2_flash = init_chat_model("google_vertexai:gemini-2.5-flash", temperature=0)
+ claude_sonnet = init_chat_model("anthropic:claude-sonnet-4-5-20250929", temperature=0)
+        gemini_2_5_flash = init_chat_model("google_vertexai:gemini-2.5-flash", temperature=0)
o3_mini.invoke("what's your name")
claude_sonnet.invoke("what's your name")
- gemini_2_flash.invoke("what's your name")
+        gemini_2_5_flash.invoke("what's your name")
```
- ??? note "Partially configurable model with no default"
+ ??? example "Partially configurable model with no default"
```python
# pip install langchain langchain-openai langchain-anthropic
+
from langchain.chat_models import init_chat_model
- # We don't need to specify configurable=True if a model isn't specified.
+ # (We don't need to specify configurable=True if a model isn't specified.)
configurable_model = init_chat_model(temperature=0)
configurable_model.invoke("what's your name", config={"configurable": {"model": "gpt-4o"}})
- # GPT-4o response
+ # Use GPT-4o to generate the response
configurable_model.invoke(
- "what's your name", config={"configurable": {"model": "claude-3-5-sonnet-latest"}}
+ "what's your name",
+ config={"configurable": {"model": "claude-sonnet-4-5-20250929"}},
)
- # claude-3.5 sonnet response
```
- ??? note "Fully configurable model with a default"
+ ??? example "Fully configurable model with a default"
```python
# pip install langchain langchain-openai langchain-anthropic
+
from langchain.chat_models import init_chat_model
configurable_model_with_default = init_chat_model(
"openai:gpt-4o",
- configurable_fields="any", # this allows us to configure other params like temperature, max_tokens, etc at runtime.
+            configurable_fields="any",  # This allows us to configure other params like temperature, max_tokens, etc. at runtime.
config_prefix="foo",
temperature=0,
)
configurable_model_with_default.invoke("what's your name")
- # GPT-4o response with temperature 0
+ # GPT-4o response with temperature 0 (as set in default)
configurable_model_with_default.invoke(
"what's your name",
config={
"configurable": {
- "foo_model": "anthropic:claude-3-5-sonnet-latest",
+ "foo_model": "anthropic:claude-sonnet-4-5-20250929",
"foo_temperature": 0.6,
}
},
)
- # Claude-3.5 sonnet response with temperature 0.6
+ # Override default to use Sonnet 4.5 with temperature 0.6 to generate response
```
- ??? note "Bind tools to a configurable model"
+ ??? example "Bind tools to a configurable model"
- You can call any ChatModel declarative methods on a configurable model in the
- same way that you would with a normal model.
+ You can call any chat model declarative methods on a configurable model in the
+ same way that you would with a normal model:
```python
# pip install langchain langchain-openai langchain-anthropic
+
from langchain.chat_models import init_chat_model
from pydantic import BaseModel, Field
@@ -252,39 +277,24 @@ def init_chat_model(
"gpt-4o", configurable_fields=("model", "model_provider"), temperature=0
)
- configurable_model_with_tools = configurable_model.bind_tools([GetWeather, GetPopulation])
+ configurable_model_with_tools = configurable_model.bind_tools(
+ [
+ GetWeather,
+ GetPopulation,
+ ]
+ )
configurable_model_with_tools.invoke(
"Which city is hotter today and which is bigger: LA or NY?"
)
- # GPT-4o response with tool calls
+ # Use GPT-4o
configurable_model_with_tools.invoke(
"Which city is hotter today and which is bigger: LA or NY?",
- config={"configurable": {"model": "claude-3-5-sonnet-latest"}},
+ config={"configurable": {"model": "claude-sonnet-4-5-20250929"}},
)
- # Claude-3.5 sonnet response with tools
+ # Use Sonnet 4.5
```
- !!! version-added "Added in version 0.2.7"
-
- !!! warning "Behavior changed in 0.2.8"
- Support for `configurable_fields` and `config_prefix` added.
-
- !!! warning "Behavior changed in 0.2.12"
- Support for Ollama via langchain-ollama package added
- (langchain_ollama.ChatOllama). Previously,
- the now-deprecated langchain-community version of Ollama was imported
- (langchain_community.chat_models.ChatOllama).
-
- Support for AWS Bedrock models via the Converse API added
- (model_provider="bedrock_converse").
-
- !!! warning "Behavior changed in 0.3.5"
- Out of beta.
-
- !!! warning "Behavior changed in 0.3.19"
- Support for Deepseek, IBM, Nvidia, and xAI models added.
-
""" # noqa: E501
if not model and not configurable_fields:
configurable_fields = ("model", "model_provider")
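As a quick illustration of the prefix-based provider inference documented above (assuming `langchain-anthropic` is installed):

```python
from langchain.chat_models import init_chat_model

# The 'claude...' prefix infers the anthropic provider, so model_provider can be omitted
model = init_chat_model("claude-sonnet-4-5-20250929", temperature=0)
model.invoke("what's your name")
```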
@@ -592,7 +602,7 @@ class _ConfigurableModel(Runnable[LanguageModelInput, Any]):
config: RunnableConfig | None = None,
**kwargs: Any,
) -> _ConfigurableModel:
- """Bind config to a Runnable, returning a new Runnable."""
+ """Bind config to a `Runnable`, returning a new `Runnable`."""
config = RunnableConfig(**(config or {}), **cast("RunnableConfig", kwargs))
model_params = self._model_params(config)
remaining_config = {k: v for k, v in config.items() if k != "configurable"}
@@ -622,10 +632,7 @@ class _ConfigurableModel(Runnable[LanguageModelInput, Any]):
@property
def InputType(self) -> TypeAlias:
"""Get the input type for this `Runnable`."""
- from langchain_core.prompt_values import (
- ChatPromptValueConcrete,
- StringPromptValue,
- )
+ from langchain_core.prompt_values import ChatPromptValueConcrete, StringPromptValue
# This is a version of LanguageModelInput which replaces the abstract
# base class BaseMessage with a union of its subclasses, which makes
diff --git a/libs/langchain_v1/langchain/documents/__init__.py b/libs/langchain_v1/langchain/documents/__init__.py
deleted file mode 100644
index dcc14ce68b5..00000000000
--- a/libs/langchain_v1/langchain/documents/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-"""Document."""
-
-from langchain_core.documents import Document
-
-__all__ = [
- "Document",
-]
diff --git a/libs/langchain_v1/langchain/embeddings/__init__.py b/libs/langchain_v1/langchain/embeddings/__init__.py
index 84c453a9f60..6943b1cd7cc 100644
--- a/libs/langchain_v1/langchain/embeddings/__init__.py
+++ b/libs/langchain_v1/langchain/embeddings/__init__.py
@@ -1,12 +1,22 @@
-"""Embeddings."""
+"""Embeddings models.
+
+!!! warning "Reference docs"
+ This page contains **reference documentation** for Embeddings. See
+ [the docs](https://docs.langchain.com/oss/python/langchain/retrieval#embedding-models)
+ for conceptual guides, tutorials, and examples on using Embeddings.
+
+!!! warning "Modules moved"
+ With the release of `langchain 1.0.0`, several embeddings modules were moved to
+ `langchain-classic`, such as `CacheBackedEmbeddings` and all community
+ embeddings. See [list](https://github.com/langchain-ai/langchain/blob/bdf1cd383ce36dc18381a3bf3fb0a579337a32b5/libs/langchain/langchain/embeddings/__init__.py)
+ of moved modules to inform your migration.
+"""
from langchain_core.embeddings import Embeddings
from langchain.embeddings.base import init_embeddings
-from langchain.embeddings.cache import CacheBackedEmbeddings
__all__ = [
- "CacheBackedEmbeddings",
"Embeddings",
"init_embeddings",
]
diff --git a/libs/langchain_v1/langchain/embeddings/base.py b/libs/langchain_v1/langchain/embeddings/base.py
index 5a5ed45712a..97b5c62a2ef 100644
--- a/libs/langchain_v1/langchain/embeddings/base.py
+++ b/libs/langchain_v1/langchain/embeddings/base.py
@@ -126,35 +126,55 @@ def init_embeddings(
provider: str | None = None,
**kwargs: Any,
) -> Embeddings:
- """Initialize an embeddings model from a model name and optional provider.
+ """Initialize an embedding model from a model name and optional provider.
!!! note
- Must have the integration package corresponding to the model provider
- installed.
+ Requires the integration package for the chosen model provider to be installed.
+
+ See the `model_provider` parameter below for specific package names
+ (e.g., `pip install langchain-openai`).
+
+ Refer to the [provider integration's API reference](https://docs.langchain.com/oss/python/integrations/providers)
+ for supported model parameters to use as `**kwargs`.
Args:
- model: Name of the model to use. Can be either:
- - A model string like "openai:text-embedding-3-small"
- - Just the model name if provider is specified
- provider: Optional explicit provider name. If not specified,
- will attempt to parse from the model string. Supported providers
- and their required packages:
+ model: The name of the model, e.g. `'openai:text-embedding-3-small'`.
- {_get_provider_list()}
+ You can also specify model and model provider in a single argument using
+ `'{model_provider}:{model}'` format, e.g. `'openai:text-embedding-3-small'`.
+ provider: The model provider if not specified as part of the model arg
+ (see above).
+
+ Supported `provider` values and the corresponding integration package
+ are:
+
+ - `openai` -> [`langchain-openai`](https://docs.langchain.com/oss/python/integrations/providers/openai)
+ - `azure_openai` -> [`langchain-openai`](https://docs.langchain.com/oss/python/integrations/providers/openai)
+ - `bedrock` -> [`langchain-aws`](https://docs.langchain.com/oss/python/integrations/providers/aws)
+ - `cohere` -> [`langchain-cohere`](https://docs.langchain.com/oss/python/integrations/providers/cohere)
+ - `google_vertexai` -> [`langchain-google-vertexai`](https://docs.langchain.com/oss/python/integrations/providers/google)
+ - `huggingface` -> [`langchain-huggingface`](https://docs.langchain.com/oss/python/integrations/providers/huggingface)
+        - `mistralai` -> [`langchain-mistralai`](https://docs.langchain.com/oss/python/integrations/providers/mistralai)
+ - `ollama` -> [`langchain-ollama`](https://docs.langchain.com/oss/python/integrations/providers/ollama)
**kwargs: Additional model-specific parameters passed to the embedding model.
- These vary by provider, see the provider-specific documentation for details.
+
+ These vary by provider. Refer to the specific model provider's
+ [integration reference](https://reference.langchain.com/python/integrations/)
+ for all available parameters.
Returns:
- An Embeddings instance that can generate embeddings for text.
+ An `Embeddings` instance that can generate embeddings for text.
Raises:
ValueError: If the model provider is not supported or cannot be determined
ImportError: If the required provider package is not installed
- ???+ note "Example Usage"
+ ???+ example
```python
+ # pip install langchain langchain-openai
+
# Using a model string
model = init_embeddings("openai:text-embedding-3-small")
model.embed_query("Hello, world!")
@@ -167,7 +187,7 @@ def init_embeddings(
model = init_embeddings("openai:text-embedding-3-small", api_key="sk-...")
```
- !!! version-added "Added in version 0.3.9"
+ !!! version-added "Added in `langchain` 0.3.9"
"""
if not model:
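A small follow-on to the docstring examples, showing batch embedding with the standard `embed_documents` method (assuming `langchain-openai` is installed):

```python
from langchain.embeddings import init_embeddings

embeddings = init_embeddings("openai:text-embedding-3-small")
vectors = embeddings.embed_documents(["hello, world", "goodbye, world"])
print(len(vectors), len(vectors[0]))  # number of documents, embedding dimension
```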
diff --git a/libs/langchain_v1/langchain/embeddings/cache.py b/libs/langchain_v1/langchain/embeddings/cache.py
deleted file mode 100644
index 3096d340cf2..00000000000
--- a/libs/langchain_v1/langchain/embeddings/cache.py
+++ /dev/null
@@ -1,361 +0,0 @@
-"""Module contains code for a cache backed embedder.
-
-The cache backed embedder is a wrapper around an embedder that caches
-embeddings in a key-value store. The cache is used to avoid recomputing
-embeddings for the same text.
-
-The text is hashed and the hash is used as the key in the cache.
-"""
-
-from __future__ import annotations
-
-import hashlib
-import json
-import uuid
-import warnings
-from typing import TYPE_CHECKING, Literal, cast
-
-from langchain_core.embeddings import Embeddings
-from langchain_core.utils.iter import batch_iterate
-
-from langchain.storage.encoder_backed import EncoderBackedStore
-
-if TYPE_CHECKING:
- from collections.abc import Callable, Sequence
-
- from langchain_core.stores import BaseStore, ByteStore
-
-NAMESPACE_UUID = uuid.UUID(int=1985)
-
-
-def _sha1_hash_to_uuid(text: str) -> uuid.UUID:
- """Return a UUID derived from *text* using SHA-1 (deterministic).
-
- Deterministic and fast, **but not collision-resistant**.
-
- A malicious attacker could try to create two different texts that hash to the same
- UUID. This may not necessarily be an issue in the context of caching embeddings,
- but new applications should swap this out for a stronger hash function like
- xxHash, BLAKE2 or SHA-256, which are collision-resistant.
- """
- sha1_hex = hashlib.sha1(text.encode("utf-8"), usedforsecurity=False).hexdigest()
- # Embed the hex string in `uuid5` to obtain a valid UUID.
- return uuid.uuid5(NAMESPACE_UUID, sha1_hex)
-
-
-def _make_default_key_encoder(namespace: str, algorithm: str) -> Callable[[str], str]:
- """Create a default key encoder function.
-
- Args:
- namespace: Prefix that segregates keys from different embedding models.
- algorithm:
- * `'sha1'` - fast but not collision-resistant
- * `'blake2b'` - cryptographically strong, faster than SHA-1
- * `'sha256'` - cryptographically strong, slower than SHA-1
- * `'sha512'` - cryptographically strong, slower than SHA-1
-
- Returns:
- A function that encodes a key using the specified algorithm.
- """
- if algorithm == "sha1":
- _warn_about_sha1_encoder()
-
- def _key_encoder(key: str) -> str:
- """Encode a key using the specified algorithm."""
- if algorithm == "sha1":
- return f"{namespace}{_sha1_hash_to_uuid(key)}"
- if algorithm == "blake2b":
- return f"{namespace}{hashlib.blake2b(key.encode('utf-8')).hexdigest()}"
- if algorithm == "sha256":
- return f"{namespace}{hashlib.sha256(key.encode('utf-8')).hexdigest()}"
- if algorithm == "sha512":
- return f"{namespace}{hashlib.sha512(key.encode('utf-8')).hexdigest()}"
- msg = f"Unsupported algorithm: {algorithm}"
- raise ValueError(msg)
-
- return _key_encoder
-
-
-def _value_serializer(value: Sequence[float]) -> bytes:
- """Serialize a value."""
- return json.dumps(value).encode()
-
-
-def _value_deserializer(serialized_value: bytes) -> list[float]:
- """Deserialize a value."""
- return cast("list[float]", json.loads(serialized_value.decode()))
-
-
-# The warning is global; track emission, so it appears only once.
-_warned_about_sha1: bool = False
-
-
-def _warn_about_sha1_encoder() -> None:
- """Emit a one-time warning about SHA-1 collision weaknesses."""
- global _warned_about_sha1 # noqa: PLW0603
- if not _warned_about_sha1:
- warnings.warn(
- "Using default key encoder: SHA-1 is *not* collision-resistant. "
- "While acceptable for most cache scenarios, a motivated attacker "
- "can craft two different payloads that map to the same cache key. "
- "If that risk matters in your environment, supply a stronger "
- "encoder (e.g. SHA-256 or BLAKE2) via the `key_encoder` argument. "
- "If you change the key encoder, consider also creating a new cache, "
- "to avoid (the potential for) collisions with existing keys.",
- category=UserWarning,
- stacklevel=2,
- )
- _warned_about_sha1 = True
-
-
-class CacheBackedEmbeddings(Embeddings):
- """Interface for caching results from embedding models.
-
- The interface allows works with any store that implements
- the abstract store interface accepting keys of type str and values of list of
- floats.
-
- If need be, the interface can be extended to accept other implementations
- of the value serializer and deserializer, as well as the key encoder.
-
- Note that by default only document embeddings are cached. To cache query
- embeddings too, pass in a query_embedding_store to constructor.
-
- Examples:
- ```python
- from langchain.embeddings import CacheBackedEmbeddings
- from langchain.storage import LocalFileStore
- from langchain_community.embeddings import OpenAIEmbeddings
-
- store = LocalFileStore("./my_cache")
-
- underlying_embedder = OpenAIEmbeddings()
- embedder = CacheBackedEmbeddings.from_bytes_store(
- underlying_embedder, store, namespace=underlying_embedder.model
- )
-
- # Embedding is computed and cached
- embeddings = embedder.embed_documents(["hello", "goodbye"])
-
- # Embeddings are retrieved from the cache, no computation is done
- embeddings = embedder.embed_documents(["hello", "goodbye"])
- ```
- """
-
- def __init__(
- self,
- underlying_embeddings: Embeddings,
- document_embedding_store: BaseStore[str, list[float]],
- *,
- batch_size: int | None = None,
- query_embedding_store: BaseStore[str, list[float]] | None = None,
- ) -> None:
- """Initialize the embedder.
-
- Args:
- underlying_embeddings: the embedder to use for computing embeddings.
- document_embedding_store: The store to use for caching document embeddings.
- batch_size: The number of documents to embed between store updates.
- query_embedding_store: The store to use for caching query embeddings.
- If `None`, query embeddings are not cached.
- """
- super().__init__()
- self.document_embedding_store = document_embedding_store
- self.query_embedding_store = query_embedding_store
- self.underlying_embeddings = underlying_embeddings
- self.batch_size = batch_size
-
- def embed_documents(self, texts: list[str]) -> list[list[float]]:
- """Embed a list of texts.
-
- The method first checks the cache for the embeddings.
- If the embeddings are not found, the method uses the underlying embedder
- to embed the documents and stores the results in the cache.
-
- Args:
- texts: A list of texts to embed.
-
- Returns:
- A list of embeddings for the given texts.
- """
- vectors: list[list[float] | None] = self.document_embedding_store.mget(
- texts,
- )
- all_missing_indices: list[int] = [i for i, vector in enumerate(vectors) if vector is None]
-
- for missing_indices in batch_iterate(self.batch_size, all_missing_indices):
- missing_texts = [texts[i] for i in missing_indices]
- missing_vectors = self.underlying_embeddings.embed_documents(missing_texts)
- self.document_embedding_store.mset(
- list(zip(missing_texts, missing_vectors, strict=False)),
- )
- for index, updated_vector in zip(missing_indices, missing_vectors, strict=False):
- vectors[index] = updated_vector
-
- return cast(
- "list[list[float]]",
- vectors,
- ) # Nones should have been resolved by now
-
- async def aembed_documents(self, texts: list[str]) -> list[list[float]]:
- """Embed a list of texts.
-
- The method first checks the cache for the embeddings.
- If the embeddings are not found, the method uses the underlying embedder
- to embed the documents and stores the results in the cache.
-
- Args:
- texts: A list of texts to embed.
-
- Returns:
- A list of embeddings for the given texts.
- """
- vectors: list[list[float] | None] = await self.document_embedding_store.amget(texts)
- all_missing_indices: list[int] = [i for i, vector in enumerate(vectors) if vector is None]
-
- # batch_iterate supports None batch_size which returns all elements at once
- # as a single batch.
- for missing_indices in batch_iterate(self.batch_size, all_missing_indices):
- missing_texts = [texts[i] for i in missing_indices]
- missing_vectors = await self.underlying_embeddings.aembed_documents(
- missing_texts,
- )
- await self.document_embedding_store.amset(
- list(zip(missing_texts, missing_vectors, strict=False)),
- )
- for index, updated_vector in zip(missing_indices, missing_vectors, strict=False):
- vectors[index] = updated_vector
-
- return cast(
- "list[list[float]]",
- vectors,
- ) # Nones should have been resolved by now
-
- def embed_query(self, text: str) -> list[float]:
- """Embed query text.
-
- By default, this method does not cache queries. To enable caching, pass a
- `query_embedding_store` to the constructor (or set `query_embedding_cache`
- when using `from_bytes_store`).
-
- Args:
- text: The text to embed.
-
- Returns:
- The embedding for the given text.
- """
- if not self.query_embedding_store:
- return self.underlying_embeddings.embed_query(text)
-
- (cached,) = self.query_embedding_store.mget([text])
- if cached is not None:
- return cached
-
- vector = self.underlying_embeddings.embed_query(text)
- self.query_embedding_store.mset([(text, vector)])
- return vector
-
- async def aembed_query(self, text: str) -> list[float]:
- """Embed query text.
-
- By default, this method does not cache queries. To enable caching, pass a
- `query_embedding_store` to the constructor (or set `query_embedding_cache`
- when using `from_bytes_store`).
-
- Args:
- text: The text to embed.
-
- Returns:
- The embedding for the given text.
- """
- if not self.query_embedding_store:
- return await self.underlying_embeddings.aembed_query(text)
-
- (cached,) = await self.query_embedding_store.amget([text])
- if cached is not None:
- return cached
-
- vector = await self.underlying_embeddings.aembed_query(text)
- await self.query_embedding_store.amset([(text, vector)])
- return vector
-
- @classmethod
- def from_bytes_store(
- cls,
- underlying_embeddings: Embeddings,
- document_embedding_cache: ByteStore,
- *,
- namespace: str = "",
- batch_size: int | None = None,
- query_embedding_cache: bool | ByteStore = False,
- key_encoder: Callable[[str], str] | Literal["sha1", "blake2b", "sha256", "sha512"] = "sha1",
- ) -> CacheBackedEmbeddings:
- """On-ramp that adds the necessary serialization and encoding to the store.
-
- Args:
- underlying_embeddings: The embedder to use for embedding.
- document_embedding_cache: The cache to use for storing document embeddings.
- namespace: The namespace to use for document cache.
- This namespace is used to avoid collisions with other caches.
- For example, set it to the name of the embedding model used.
- batch_size: The number of documents to embed between store updates.
- query_embedding_cache: The cache to use for storing query embeddings.
- True to use the same cache as document embeddings.
- False to not cache query embeddings.
- key_encoder: Optional callable to encode keys. If not provided,
- a default encoder using SHA-1 will be used. SHA-1 is not
- collision-resistant, and a motivated attacker could craft two
- different texts that hash to the same cache key.
-
- New applications should use one of the alternative encoders
- or provide a strong custom key encoder function to avoid this risk.
-
- If you change the key encoder for an existing cache, consider creating
- a new cache instead, to avoid potential collisions with existing keys
- or duplicate entries for the same text. See the example below.
-
- Returns:
- An instance of CacheBackedEmbeddings that uses the provided cache.
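-
- Example of a custom key encoder (an illustrative sketch; any deterministic
- str -> str function works, and `namespace` must then be omitted; assumes the
- `underlying_embedder` and `store` from the class-level example):
-
- ```python
- import hashlib
-
- def sha256_key(text: str) -> str:
-     # Hypothetical encoder: the prefix acts as a manual namespace
-     return "my-model:" + hashlib.sha256(text.encode()).hexdigest()
-
- embedder = CacheBackedEmbeddings.from_bytes_store(
-     underlying_embedder, store, key_encoder=sha256_key
- )
- ```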
- """
- if isinstance(key_encoder, str):
- key_encoder = _make_default_key_encoder(namespace, key_encoder)
- elif callable(key_encoder):
- # If a custom key encoder is provided, it should not be used with a
- # namespace.
- # A user can handle namespacing directly in their custom key encoder.
- if namespace:
- msg = (
- "Do not supply `namespace` when using a custom key_encoder; "
- "add any prefixing inside the encoder itself."
- )
- raise ValueError(msg)
- else:
- msg = (
- "key_encoder must be either 'blake2b', 'sha1', 'sha256', 'sha512' "
- "or a callable that encodes keys."
- )
- raise ValueError(msg) # noqa: TRY004
-
- document_embedding_store = EncoderBackedStore[str, list[float]](
- document_embedding_cache,
- key_encoder,
- _value_serializer,
- _value_deserializer,
- )
- if query_embedding_cache is True:
- query_embedding_store = document_embedding_store
- elif query_embedding_cache is False:
- query_embedding_store = None
- else:
- query_embedding_store = EncoderBackedStore[str, list[float]](
- query_embedding_cache,
- key_encoder,
- _value_serializer,
- _value_deserializer,
- )
-
- return cls(
- underlying_embeddings,
- document_embedding_store,
- batch_size=batch_size,
- query_embedding_store=query_embedding_store,
- )
diff --git a/libs/langchain_v1/langchain/messages/__init__.py b/libs/langchain_v1/langchain/messages/__init__.py
index 1aecc4a3307..b757afd1f47 100644
--- a/libs/langchain_v1/langchain/messages/__init__.py
+++ b/libs/langchain_v1/langchain/messages/__init__.py
@@ -1,29 +1,78 @@
-"""Message types."""
+"""Message and message content types.
+
+Includes message types for different roles (e.g., human, AI, system), as well as types
+for message content blocks (e.g., text, image, audio) and tool calls.
+
+!!! warning "Reference docs"
+ This page contains **reference documentation** for Messages. See
+ [the docs](https://docs.langchain.com/oss/python/langchain/messages) for conceptual
+ guides, tutorials, and examples on using Messages.
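+
+For example (a minimal sketch), constructing a short conversation from these types:
+
+```python
+from langchain.messages import AIMessage, HumanMessage, SystemMessage
+
+messages = [
+    SystemMessage("You are a helpful assistant."),
+    HumanMessage("What is the capital of France?"),
+    AIMessage("The capital of France is Paris."),
+]
+```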
+"""
from langchain_core.messages import (
AIMessage,
AIMessageChunk,
+ Annotation,
AnyMessage,
+ AudioContentBlock,
+ Citation,
+ ContentBlock,
+ DataContentBlock,
+ FileContentBlock,
HumanMessage,
+ ImageContentBlock,
+ InputTokenDetails,
InvalidToolCall,
MessageLikeRepresentation,
+ NonStandardAnnotation,
+ NonStandardContentBlock,
+ OutputTokenDetails,
+ PlainTextContentBlock,
+ ReasoningContentBlock,
+ RemoveMessage,
+ ServerToolCall,
+ ServerToolCallChunk,
+ ServerToolResult,
SystemMessage,
+ TextContentBlock,
ToolCall,
ToolCallChunk,
ToolMessage,
+ UsageMetadata,
+ VideoContentBlock,
trim_messages,
)
__all__ = [
"AIMessage",
"AIMessageChunk",
+ "Annotation",
"AnyMessage",
+ "AudioContentBlock",
+ "Citation",
+ "ContentBlock",
+ "DataContentBlock",
+ "FileContentBlock",
"HumanMessage",
+ "ImageContentBlock",
+ "InputTokenDetails",
"InvalidToolCall",
"MessageLikeRepresentation",
+ "NonStandardAnnotation",
+ "NonStandardContentBlock",
+ "OutputTokenDetails",
+ "PlainTextContentBlock",
+ "ReasoningContentBlock",
+ "RemoveMessage",
+ "ServerToolCall",
+ "ServerToolCallChunk",
+ "ServerToolResult",
"SystemMessage",
+ "TextContentBlock",
"ToolCall",
"ToolCallChunk",
"ToolMessage",
+ "UsageMetadata",
+ "VideoContentBlock",
"trim_messages",
]
diff --git a/libs/langchain_v1/langchain/storage/__init__.py b/libs/langchain_v1/langchain/storage/__init__.py
deleted file mode 100644
index 74680e1db9b..00000000000
--- a/libs/langchain_v1/langchain/storage/__init__.py
+++ /dev/null
@@ -1,22 +0,0 @@
-"""Implementations of key-value stores and storage helpers.
-
- This module provides implementations of various key-value stores that conform
- to a simple key-value interface.
-
- The primary goal of these stores is to support the implementation of caching.
-"""
-
-from langchain_core.stores import (
- InMemoryByteStore,
- InMemoryStore,
- InvalidKeyException,
-)
-
-from langchain.storage.encoder_backed import EncoderBackedStore
-
-__all__ = [
- "EncoderBackedStore",
- "InMemoryByteStore",
- "InMemoryStore",
- "InvalidKeyException",
-]
diff --git a/libs/langchain_v1/langchain/storage/encoder_backed.py b/libs/langchain_v1/langchain/storage/encoder_backed.py
deleted file mode 100644
index 80e60c32a53..00000000000
--- a/libs/langchain_v1/langchain/storage/encoder_backed.py
+++ /dev/null
@@ -1,122 +0,0 @@
-"""Encoder-backed store implementation."""
-
-from collections.abc import AsyncIterator, Callable, Iterator, Sequence
-from typing import (
- Any,
- TypeVar,
-)
-
-from langchain_core.stores import BaseStore
-
-K = TypeVar("K")
-V = TypeVar("V")
-
-
-class EncoderBackedStore(BaseStore[K, V]):
- """Wraps a store with key and value encoders/decoders.
-
- Example that uses JSON for encoding/decoding:
-
- ```python
- import json
-
-
- def key_encoder(key: int) -> str:
- return json.dumps(key)
-
-
- def value_serializer(value: float) -> str:
- return json.dumps(value)
-
-
- def value_deserializer(serialized_value: str) -> float:
- return json.loads(serialized_value)
-
-
- # Create an instance of the abstract store
- abstract_store = MyCustomStore()
-
- # Create an instance of the encoder-backed store
- store = EncoderBackedStore(
- store=abstract_store,
- key_encoder=key_encoder,
- value_serializer=value_serializer,
- value_deserializer=value_deserializer,
- )
-
- # Use the encoder-backed store methods
- store.mset([(1, 3.14), (2, 2.718)])
- values = store.mget([1, 2]) # Retrieves [3.14, 2.718]
- store.mdelete([1, 2]) # Deletes the keys 1 and 2
- ```
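-
- The async variants follow the same pattern (an illustrative sketch, run inside
- an async function, reusing the store above):
-
- ```python
- await store.amset([(3, 1.618)])
- values = await store.amget([3])  # Retrieves [1.618]
- await store.amdelete([3])
- ```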
- """
-
- def __init__(
- self,
- store: BaseStore[str, Any],
- key_encoder: Callable[[K], str],
- value_serializer: Callable[[V], bytes],
- value_deserializer: Callable[[Any], V],
- ) -> None:
- """Initialize an EncodedStore."""
- self.store = store
- self.key_encoder = key_encoder
- self.value_serializer = value_serializer
- self.value_deserializer = value_deserializer
-
- def mget(self, keys: Sequence[K]) -> list[V | None]:
- """Get the values associated with the given keys."""
- encoded_keys: list[str] = [self.key_encoder(key) for key in keys]
- values = self.store.mget(encoded_keys)
- return [self.value_deserializer(value) if value is not None else value for value in values]
-
- async def amget(self, keys: Sequence[K]) -> list[V | None]:
- """Get the values associated with the given keys."""
- encoded_keys: list[str] = [self.key_encoder(key) for key in keys]
- values = await self.store.amget(encoded_keys)
- return [self.value_deserializer(value) if value is not None else value for value in values]
-
- def mset(self, key_value_pairs: Sequence[tuple[K, V]]) -> None:
- """Set the values for the given keys."""
- encoded_pairs = [
- (self.key_encoder(key), self.value_serializer(value)) for key, value in key_value_pairs
- ]
- self.store.mset(encoded_pairs)
-
- async def amset(self, key_value_pairs: Sequence[tuple[K, V]]) -> None:
- """Set the values for the given keys."""
- encoded_pairs = [
- (self.key_encoder(key), self.value_serializer(value)) for key, value in key_value_pairs
- ]
- await self.store.amset(encoded_pairs)
-
- def mdelete(self, keys: Sequence[K]) -> None:
- """Delete the given keys and their associated values."""
- encoded_keys = [self.key_encoder(key) for key in keys]
- self.store.mdelete(encoded_keys)
-
- async def amdelete(self, keys: Sequence[K]) -> None:
- """Delete the given keys and their associated values."""
- encoded_keys = [self.key_encoder(key) for key in keys]
- await self.store.amdelete(encoded_keys)
-
- def yield_keys(
- self,
- *,
- prefix: str | None = None,
- ) -> Iterator[K] | Iterator[str]:
- """Get an iterator over keys that match the given prefix."""
- # For the time being this does not yield K, but str;
- # it's kept this way for debugging purposes. Should fix this.
- yield from self.store.yield_keys(prefix=prefix)
-
- async def ayield_keys(
- self,
- *,
- prefix: str | None = None,
- ) -> AsyncIterator[K] | AsyncIterator[str]:
- """Get an iterator over keys that match the given prefix."""
- # For the time being this does not yield K, but str;
- # it's kept this way for debugging purposes. Should fix this.
- async for key in self.store.ayield_keys(prefix=prefix):
- yield key
diff --git a/libs/langchain_v1/langchain/storage/exceptions.py b/libs/langchain_v1/langchain/storage/exceptions.py
deleted file mode 100644
index 74d2a43c531..00000000000
--- a/libs/langchain_v1/langchain/storage/exceptions.py
+++ /dev/null
@@ -1,5 +0,0 @@
-"""Store exceptions."""
-
-from langchain_core.stores import InvalidKeyException
-
-__all__ = ["InvalidKeyException"]
diff --git a/libs/langchain_v1/langchain/storage/in_memory.py b/libs/langchain_v1/langchain/storage/in_memory.py
deleted file mode 100644
index 296bc19a02d..00000000000
--- a/libs/langchain_v1/langchain/storage/in_memory.py
+++ /dev/null
@@ -1,13 +0,0 @@
-"""In memory store that is not thread safe and has no eviction policy.
-
-This is a simple implementation of the BaseStore using a dictionary that is useful
-primarily for unit testing purposes.
-"""
-
-from langchain_core.stores import InMemoryBaseStore, InMemoryByteStore, InMemoryStore
-
-__all__ = [
- "InMemoryBaseStore",
- "InMemoryByteStore",
- "InMemoryStore",
-]
diff --git a/libs/langchain_v1/langchain/tools/__init__.py b/libs/langchain_v1/langchain/tools/__init__.py
index 8f1fa554f53..92aca31aec1 100644
--- a/libs/langchain_v1/langchain/tools/__init__.py
+++ b/libs/langchain_v1/langchain/tools/__init__.py
@@ -1,4 +1,10 @@
-"""Tools."""
+"""Tools.
+
+!!! warning "Reference docs"
+ This page contains **reference documentation** for Tools. See
+ [the docs](https://docs.langchain.com/oss/python/langchain/tools) for conceptual
+ guides, tutorials, and examples on using Tools.
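+
+For example (a minimal sketch), defining a simple tool with the `tool` decorator:
+
+```python
+from langchain.tools import tool
+
+@tool
+def add(a: int, b: int) -> int:
+    \"\"\"Add two integers.\"\"\"
+    return a + b
+```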
+"""
from langchain_core.tools import (
BaseTool,
@@ -8,11 +14,7 @@ from langchain_core.tools import (
tool,
)
-from langchain.tools.tool_node import (
- InjectedState,
- InjectedStore,
- ToolNode,
-)
+from langchain.tools.tool_node import InjectedState, InjectedStore, ToolRuntime
__all__ = [
"BaseTool",
@@ -21,6 +23,6 @@ __all__ = [
"InjectedToolArg",
"InjectedToolCallId",
"ToolException",
- "ToolNode",
+ "ToolRuntime",
"tool",
]
diff --git a/libs/langchain_v1/langchain/tools/tool_node.py b/libs/langchain_v1/langchain/tools/tool_node.py
index d856aef94a1..4474c8ba935 100644
--- a/libs/langchain_v1/langchain/tools/tool_node.py
+++ b/libs/langchain_v1/langchain/tools/tool_node.py
@@ -1,1487 +1,20 @@
-"""Tool execution node for LangGraph workflows.
+"""Utils file included for backwards compat imports."""
-This module provides prebuilt functionality for executing tools in LangGraph.
-
-Tools are functions that models can call to interact with external systems,
-APIs, databases, or perform computations.
-
-The module implements design patterns for:
-- Parallel execution of multiple tool calls for efficiency
-- Robust error handling with customizable error messages
-- State injection for tools that need access to graph state
-- Store injection for tools that need persistent storage
-- Command-based state updates for advanced control flow
-
-Key Components:
- ToolNode: Main class for executing tools in LangGraph workflows
- InjectedState: Annotation for injecting graph state into tools
- InjectedStore: Annotation for injecting persistent store into tools
- tools_condition: Utility function for conditional routing based on tool calls
-
-Typical Usage:
- ```python
- from langchain_core.tools import tool
- from langchain.tools import ToolNode
-
-
- @tool
- def my_tool(x: int) -> str:
- return f"Result: {x}"
-
-
- tool_node = ToolNode([my_tool])
- ```
-"""
-
-from __future__ import annotations
-
-import asyncio
-import inspect
-import json
-from collections.abc import Callable
-from copy import copy, deepcopy
-from dataclasses import dataclass, replace
-from types import UnionType
-from typing import (
- TYPE_CHECKING,
- Annotated,
- Any,
- Literal,
- Optional,
- TypedDict,
- Union,
- cast,
- get_args,
- get_origin,
- get_type_hints,
+from langgraph.prebuilt import InjectedState, InjectedStore, ToolRuntime
+from langgraph.prebuilt.tool_node import (
+ ToolCallRequest,
+ ToolCallWithContext,
+ ToolCallWrapper,
+)
+from langgraph.prebuilt.tool_node import (
+ ToolNode as _ToolNode, # noqa: F401
)
-from langchain_core.messages import (
- AIMessage,
- AnyMessage,
- RemoveMessage,
- ToolCall,
- ToolMessage,
- convert_to_messages,
-)
-from langchain_core.runnables.config import (
- get_config_list,
- get_executor_for_config,
-)
-from langchain_core.tools import BaseTool, InjectedToolArg
-from langchain_core.tools import tool as create_tool
-from langchain_core.tools.base import (
- TOOL_MESSAGE_BLOCK_TYPES,
- get_all_basemodel_annotations,
-)
-from langgraph._internal._runnable import RunnableCallable
-from langgraph.errors import GraphBubbleUp
-from langgraph.graph.message import REMOVE_ALL_MESSAGES
-from langgraph.runtime import get_runtime
-from langgraph.types import Command, Send
-from pydantic import BaseModel, ValidationError
-
-if TYPE_CHECKING:
- from collections.abc import Sequence
-
- from langchain_core.runnables import RunnableConfig
- from langgraph.store.base import BaseStore
-
-INVALID_TOOL_NAME_ERROR_TEMPLATE = (
- "Error: {requested_tool} is not a valid tool, try one of [{available_tools}]."
-)
-TOOL_CALL_ERROR_TEMPLATE = "Error: {error}\n Please fix your mistakes."
-TOOL_EXECUTION_ERROR_TEMPLATE = (
- "Error executing tool '{tool_name}' with kwargs {tool_kwargs} with error:\n"
- " {error}\n"
- " Please fix the error and try again."
-)
-TOOL_INVOCATION_ERROR_TEMPLATE = (
- "Error invoking tool '{tool_name}' with kwargs {tool_kwargs} with error:\n"
- " {error}\n"
- " Please fix the error and try again."
-)
-
-
-@dataclass()
-class ToolCallRequest:
- """Tool execution request passed to tool call interceptors.
-
- Attributes:
- tool_call: Tool call dict with name, args, and id from model output.
- tool: BaseTool instance to be invoked.
- state: Agent state (dict, list, or BaseModel).
- runtime: LangGraph runtime context (optional, None if outside graph).
- """
-
- tool_call: ToolCall
- tool: BaseTool
- state: Any
- runtime: Any
-
-
-ToolCallHandler = Callable[
- [ToolCallRequest, Callable[[ToolCallRequest], ToolMessage | Command]],
- ToolMessage | Command,
+__all__ = [
+ "InjectedState",
+ "InjectedStore",
+ "ToolCallRequest",
+ "ToolCallWithContext",
+ "ToolCallWrapper",
+ "ToolRuntime",
]
-"""Handler-based tool call interceptor with multi-call support.
-
-Handler receives:
- request: ToolCallRequest with tool_call, tool, state, and runtime.
- execute: Callable to execute the tool (CAN BE CALLED MULTIPLE TIMES).
-
-Returns:
- ToolMessage or Command (the final result).
-
-The execute callable can be invoked multiple times for retry logic,
-with potentially modified requests each time. Each call to execute
-is independent and stateless.
-
-Note:
- When implementing middleware for `create_agent`, use
- `AgentMiddleware.wrap_tool_call` which provides properly typed
- state parameter for better type safety.
-
-Examples:
- Passthrough (execute once):
-
- def handler(request, execute):
- return execute(request)
-
- Modify request before execution:
-
- def handler(request, execute):
- request.tool_call["args"]["value"] *= 2
- return execute(request)
-
- Retry on error (execute multiple times):
-
- def handler(request, execute):
- for attempt in range(3):
- try:
- result = execute(request)
- if is_valid(result):
- return result
- except Exception:
- if attempt == 2:
- raise
- return result
-
- Conditional retry based on response:
-
- def handler(request, execute):
- for attempt in range(3):
- result = execute(request)
- if isinstance(result, ToolMessage) and result.status != "error":
- return result
- if attempt < 2:
- continue
- return result
-
- Cache/short-circuit without calling execute:
-
- def handler(request, execute):
- if cached := get_cache(request):
- return ToolMessage(content=cached, tool_call_id=request.tool_call["id"])
- result = execute(request)
- save_cache(request, result)
- return result
-"""
-
-
-class ToolCallWithContext(TypedDict):
- """ToolCall with additional context for graph state.
-
- This is an internal data structure meant to help the ToolNode accept
- tool calls with additional context (e.g. state) when dispatched using the
- Send API.
-
- The Send API is used in create_agent to distribute tool calls in parallel
- and support human-in-the-loop workflows where graph execution may be paused
- for an indefinite time.
- """
-
- tool_call: ToolCall
- __type: Literal["tool_call_with_context"]
- """Type to parameterize the payload.
-
- Using "__" as a prefix to be defensive against potential name collisions with
- regular user state.
- """
- state: Any
- """The state is provided as additional context."""
-
-
-def msg_content_output(output: Any) -> str | list[dict]:
- """Convert tool output to ToolMessage content format.
-
- Handles str, list[dict] (content blocks), and arbitrary objects by attempting
- JSON serialization with fallback to str().
-
- Args:
- output: Tool execution output of any type.
-
- Returns:
- String or list of content blocks suitable for ToolMessage.content.
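-
-    Example (illustrative of the fallback behavior):
-
-    ```python
-    msg_content_output("already a string")  # -> "already a string"
-    msg_content_output({"answer": 42})  # JSON-serializable -> '{"answer": 42}'
-    msg_content_output(object())  # not serializable -> "<object object at 0x...>"
-    ```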
- """
- if isinstance(output, str) or (
- isinstance(output, list)
- and all(isinstance(x, dict) and x.get("type") in TOOL_MESSAGE_BLOCK_TYPES for x in output)
- ):
- return output
- # Technically a list of strings is also valid message content, but it's
- # not currently well tested that all chat models support this.
- # And for backwards compatibility we want to make sure we don't break
- # any existing ToolNode usage.
- try:
- return json.dumps(output, ensure_ascii=False)
- except Exception: # noqa: BLE001
- return str(output)
-
-
-class ToolInvocationError(Exception):
- """Exception raised when a tool invocation fails due to invalid arguments."""
-
- def __init__(
- self, tool_name: str, source: ValidationError, tool_kwargs: dict[str, Any]
- ) -> None:
- """Initialize the ToolInvocationError.
-
- Args:
- tool_name: The name of the tool that failed.
- source: The exception that occurred.
- tool_kwargs: The keyword arguments that were passed to the tool.
- """
- self.message = TOOL_INVOCATION_ERROR_TEMPLATE.format(
- tool_name=tool_name, tool_kwargs=tool_kwargs, error=source
- )
- self.tool_name = tool_name
- self.tool_kwargs = tool_kwargs
- self.source = source
- super().__init__(self.message)
-
-
-def _default_handle_tool_errors(e: Exception) -> str:
- """Default error handler for tool errors.
-
- If the tool is a tool invocation error, return its message.
- Otherwise, raise the error.
- """
- if isinstance(e, ToolInvocationError):
- return e.message
- raise e
-
-
-def _handle_tool_error(
- e: Exception,
- *,
- flag: bool | str | Callable[..., str] | type[Exception] | tuple[type[Exception], ...],
-) -> str:
- """Generate error message content based on exception handling configuration.
-
- This function centralizes error message generation logic, supporting different
- error handling strategies configured via the ToolNode's handle_tool_errors
- parameter.
-
- Args:
- e: The exception that occurred during tool execution.
- flag: Configuration for how to handle the error. Can be:
- - bool: If `True`, use default error template
- - str: Use this string as the error message
- - Callable: Call this function with the exception to get error message
- - tuple: Not used in this context (handled by caller)
-
- Returns:
- A string containing the error message to include in the ToolMessage.
-
- Raises:
- ValueError: If flag is not one of the supported types.
-
- Note:
- The tuple case is handled by the caller through exception type checking,
- not by this function directly.
- """
- if isinstance(flag, (bool, tuple)) or (isinstance(flag, type) and issubclass(flag, Exception)):
- content = TOOL_CALL_ERROR_TEMPLATE.format(error=repr(e))
- elif isinstance(flag, str):
- content = flag
- elif callable(flag):
- content = flag(e) # type: ignore [assignment, call-arg]
- else:
- msg = (
- f"Got unexpected type of `handle_tool_error`. Expected bool, str "
- f"or callable. Received: {flag}"
- )
- raise ValueError(msg)
- return content
-
-
-def _infer_handled_types(handler: Callable[..., str]) -> tuple[type[Exception], ...]:
- """Infer exception types handled by a custom error handler function.
-
- This function analyzes the type annotations of a custom error handler to determine
- which exception types it's designed to handle. This enables type-safe error handling
- where only specific exceptions are caught and processed by the handler.
-
- Args:
- handler: A callable that takes an exception and returns an error message string.
- The first parameter (after self/cls if present) should be type-annotated
- with the exception type(s) to handle.
-
- Returns:
- A tuple of exception types that the handler can process. Returns (Exception,)
- if no specific type information is available for backward compatibility.
-
- Raises:
- ValueError: If the handler's annotation contains non-Exception types or
- if Union types contain non-Exception types.
-
- Note:
- This function supports both single exception types and Union types for
- handlers that need to handle multiple exception types differently.
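-
-        For example, a handler annotated as `def handler(e: ValueError) -> str`
-        is inferred to handle `(ValueError,)`, while
-        `def handler(e: ValueError | TypeError) -> str` is inferred to handle
-        `(ValueError, TypeError)`.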
- """
- sig = inspect.signature(handler)
- params = list(sig.parameters.values())
- if params:
- # If it's a method, the first argument is typically 'self' or 'cls'
- if params[0].name in ["self", "cls"] and len(params) == 2:
- first_param = params[1]
- else:
- first_param = params[0]
-
- type_hints = get_type_hints(handler)
- if first_param.name in type_hints:
- origin = get_origin(first_param.annotation)
- if origin in [Union, UnionType]:
- args = get_args(first_param.annotation)
- if all(issubclass(arg, Exception) for arg in args):
- return tuple(args)
- msg = (
-                    "All types in the error handler's annotation must be "
- "Exception types. For example, "
- "`def custom_handler(e: Union[ValueError, TypeError])`. "
- f"Got '{first_param.annotation}' instead."
- )
- raise ValueError(msg)
-
- exception_type = type_hints[first_param.name]
- if Exception in exception_type.__mro__:
- return (exception_type,)
- msg = (
- f"Arbitrary types are not supported in the error handler "
- f"signature. Please annotate the error with either a "
- f"specific Exception type or a union of Exception types. "
- "For example, `def custom_handler(e: ValueError)` or "
- "`def custom_handler(e: Union[ValueError, TypeError])`. "
- f"Got '{exception_type}' instead."
- )
- raise ValueError(msg)
-
- # If no type information is available, return (Exception,)
- # for backwards compatibility.
- return (Exception,)
-
-
-class ToolNode(RunnableCallable):
- """A node for executing tools in LangGraph workflows.
-
- Handles tool execution patterns including function calls, state injection,
-    persistent storage, and control flow. Manages parallel execution and
-    error handling.
-
- Input Formats:
-        1. **Graph State**: dict with a `messages` key containing a list of messages
- - Common representation for agentic workflows
- - Supports custom messages key via `messages_key` parameter
-
- 2. **Message List**: `[AIMessage(..., tool_calls=[...])]`
- - List of messages with tool calls in the last AIMessage
-
- 3. **Direct Tool Calls**: `[{"name": "tool", "args": {...}, "id": "1", "type": "tool_call"}]`
- - Bypasses message parsing for direct tool execution
- - For programmatic tool invocation and testing
-
- Output Formats:
- Output format depends on input type and tool behavior:
-
- **For Regular tools**:
- - Dict input β `{"messages": [ToolMessage(...)]}`
- - List input β `[ToolMessage(...)]`
-
- **For Command tools**:
- - Returns `[Command(...)]` or mixed list with regular tool outputs
- - Commands can update state, trigger navigation, or send messages
-
- Args:
- tools: A sequence of tools that can be invoked by this node. Supports:
- - **BaseTool instances**: Tools with schemas and metadata
- - **Plain functions**: Automatically converted to tools with inferred schemas
- name: The name identifier for this node in the graph. Used for debugging
- and visualization. Defaults to "tools".
- tags: Optional metadata tags to associate with the node for filtering
- and organization. Defaults to `None`.
- handle_tool_errors: Configuration for error handling during tool execution.
- Supports multiple strategies:
-
- - **True**: Catch all errors and return a ToolMessage with the default
- error template containing the exception details.
- - **str**: Catch all errors and return a ToolMessage with this custom
- error message string.
- - **type[Exception]**: Only catch exceptions with the specified type and
- return the default error message for it.
- - **tuple[type[Exception], ...]**: Only catch exceptions with the specified
- types and return default error messages for them.
- - **Callable[..., str]**: Catch exceptions matching the callable's signature
- and return the string result of calling it with the exception.
- - **False**: Disable error handling entirely, allowing exceptions to
- propagate.
-
- Defaults to a callable that:
- - catches tool invocation errors (due to invalid arguments provided by the model) and returns a descriptive error message
-            - does not handle tool execution errors (these are re-raised)
-
- messages_key: The key in the state dictionary that contains the message list.
- This same key will be used for the output `ToolMessage` objects.
- Defaults to "messages".
- Allows custom state schemas with different message field names.
-
- Examples:
- Basic usage:
-
- ```python
- from langchain.tools import ToolNode
- from langchain_core.tools import tool
-
- @tool
- def calculator(a: int, b: int) -> int:
- \"\"\"Add two numbers.\"\"\"
- return a + b
-
- tool_node = ToolNode([calculator])
- ```
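-
-    Invoking the node with a messages dict (illustrative; the exact ToolMessage
-    fields may differ):
-
-    ```python
-    from langchain_core.messages import AIMessage
-
-    call = {"name": "calculator", "args": {"a": 1, "b": 2}, "id": "1", "type": "tool_call"}
-    result = tool_node.invoke({"messages": [AIMessage("", tool_calls=[call])]})
-    # result == {"messages": [ToolMessage(content="3", name="calculator", tool_call_id="1")]}
-    ```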
-
- State injection:
-
- ```python
- from typing_extensions import Annotated
- from langchain.tools import InjectedState
-
- @tool
- def context_tool(query: str, state: Annotated[dict, InjectedState]) -> str:
- \"\"\"Some tool that uses state.\"\"\"
- return f"Query: {query}, Messages: {len(state['messages'])}"
-
- tool_node = ToolNode([context_tool])
- ```
-
- Error handling:
-
- ```python
- def handle_errors(e: ValueError) -> str:
- return "Invalid input provided"
-
-
- tool_node = ToolNode([my_tool], handle_tool_errors=handle_errors)
- ```
- """ # noqa: E501
-
- name: str = "tools"
-
- def __init__(
- self,
- tools: Sequence[BaseTool | Callable],
- *,
- name: str = "tools",
- tags: list[str] | None = None,
- handle_tool_errors: bool
- | str
- | Callable[..., str]
- | type[Exception]
- | tuple[type[Exception], ...] = _default_handle_tool_errors,
- messages_key: str = "messages",
- on_tool_call: ToolCallHandler | None = None,
- ) -> None:
- """Initialize ToolNode with tools and configuration.
-
- Args:
- tools: Sequence of tools to make available for execution.
- name: Node name for graph identification.
- tags: Optional metadata tags.
- handle_tool_errors: Error handling configuration.
- messages_key: State key containing messages.
-            on_tool_call: Optional handler to intercept tool execution. Receives a
-                ToolCallRequest and an execute callable; it may call execute zero or
-                more times (e.g. for retries or caching) and returns the final
-                ToolMessage or Command. Enables request modification and control flow.
- """
- super().__init__(self._func, self._afunc, name=name, tags=tags, trace=False)
- self._tools_by_name: dict[str, BaseTool] = {}
- self._tool_to_state_args: dict[str, dict[str, str | None]] = {}
- self._tool_to_store_arg: dict[str, str | None] = {}
- self._handle_tool_errors = handle_tool_errors
- self._messages_key = messages_key
- self._on_tool_call = on_tool_call
- for tool in tools:
- if not isinstance(tool, BaseTool):
- tool_ = create_tool(cast("type[BaseTool]", tool))
- else:
- tool_ = tool
- self._tools_by_name[tool_.name] = tool_
- self._tool_to_state_args[tool_.name] = _get_state_args(tool_)
- self._tool_to_store_arg[tool_.name] = _get_store_arg(tool_)
-
- @property
- def tools_by_name(self) -> dict[str, BaseTool]:
- """Mapping from tool name to BaseTool instance."""
- return self._tools_by_name
-
- def _func(
- self,
- input: list[AnyMessage] | dict[str, Any] | BaseModel,
- config: RunnableConfig,
- *,
- store: Optional[BaseStore], # noqa: UP045
- ) -> Any:
- try:
- runtime = get_runtime()
- except RuntimeError:
- # Running outside of LangGraph runtime context (e.g., unit tests)
- runtime = None
-
- tool_calls, input_type = self._parse_input(input)
- tool_calls = [self._inject_tool_args(call, input, store) for call in tool_calls]
-
- config_list = get_config_list(config, len(tool_calls))
- input_types = [input_type] * len(tool_calls)
- inputs = [input] * len(tool_calls)
- runtimes = [runtime] * len(tool_calls)
- with get_executor_for_config(config) as executor:
- outputs = [
- *executor.map(self._run_one, tool_calls, input_types, config_list, inputs, runtimes)
- ]
-
- return self._combine_tool_outputs(outputs, input_type)
-
- async def _afunc(
- self,
- input: list[AnyMessage] | dict[str, Any] | BaseModel,
- config: RunnableConfig,
- *,
- store: Optional[BaseStore], # noqa: UP045
- ) -> Any:
- try:
- runtime = get_runtime()
- except RuntimeError:
- # Running outside of LangGraph runtime context (e.g., unit tests)
- runtime = None
-
- tool_calls, input_type = self._parse_input(input)
- tool_calls = [self._inject_tool_args(call, input, store) for call in tool_calls]
- outputs = await asyncio.gather(
- *(self._arun_one(call, input_type, config, input, runtime) for call in tool_calls)
- )
-
- return self._combine_tool_outputs(outputs, input_type)
-
- def _combine_tool_outputs(
- self,
- outputs: list[ToolMessage | Command],
- input_type: Literal["list", "dict", "tool_calls"],
- ) -> list[Command | list[ToolMessage] | dict[str, list[ToolMessage]]]:
- # preserve existing behavior for non-command tool outputs for backwards
- # compatibility
- if not any(isinstance(output, Command) for output in outputs):
- # TypedDict, pydantic, dataclass, etc. should all be able to load from dict
- return outputs if input_type == "list" else {self._messages_key: outputs} # type: ignore[return-value, return-value]
-
- # LangGraph will automatically handle list of Command and non-command node
- # updates
- combined_outputs: list[Command | list[ToolMessage] | dict[str, list[ToolMessage]]] = []
-
- # combine all parent commands with goto into a single parent command
- parent_command: Command | None = None
- for output in outputs:
- if isinstance(output, Command):
- if (
- output.graph is Command.PARENT
- and isinstance(output.goto, list)
- and all(isinstance(send, Send) for send in output.goto)
- ):
- if parent_command:
- parent_command = replace(
- parent_command,
- goto=cast("list[Send]", parent_command.goto) + output.goto,
- )
- else:
- parent_command = Command(graph=Command.PARENT, goto=output.goto)
- else:
- combined_outputs.append(output)
- else:
- combined_outputs.append(
- [output] if input_type == "list" else {self._messages_key: [output]}
- )
-
- if parent_command:
- combined_outputs.append(parent_command)
- return combined_outputs
-
- def _execute_tool_sync(
- self,
- request: ToolCallRequest,
- input_type: Literal["list", "dict", "tool_calls"],
- config: RunnableConfig,
- ) -> ToolMessage | Command:
- """Execute tool call with configured error handling.
-
- Args:
- request: Tool execution request.
- input_type: Input format.
- config: Runnable configuration.
-
- Returns:
- ToolMessage or Command.
-
- Raises:
- Exception: If tool fails and handle_tool_errors is False.
- """
- call = request.tool_call
- tool = request.tool
- call_args = {**call, "type": "tool_call"}
-
- try:
- try:
- response = tool.invoke(call_args, config)
- except ValidationError as exc:
- raise ToolInvocationError(call["name"], exc, call["args"]) from exc
-
- # GraphInterrupt is a special exception that will always be raised.
-            # It is most commonly triggered when a GraphInterrupt (a GraphBubbleUp
-            # subclass) is raised from an `interrupt` invocation, in the following
-            # scenarios:
- # (1) a GraphInterrupt is raised inside a tool
- # (2) a GraphInterrupt is raised inside a graph node for a graph called as a tool
- # (3) a GraphInterrupt is raised when a subgraph is interrupted inside a graph
- # called as a tool
- # (2 and 3 can happen in a "supervisor w/ tools" multi-agent architecture)
- except GraphBubbleUp:
- raise
- except Exception as e:
- # Determine which exception types are handled
- handled_types: tuple[type[Exception], ...]
- if isinstance(self._handle_tool_errors, type) and issubclass(
- self._handle_tool_errors, Exception
- ):
- handled_types = (self._handle_tool_errors,)
- elif isinstance(self._handle_tool_errors, tuple):
- handled_types = self._handle_tool_errors
- elif callable(self._handle_tool_errors) and not isinstance(
- self._handle_tool_errors, type
- ):
- handled_types = _infer_handled_types(self._handle_tool_errors)
- else:
- # default behavior is catching all exceptions
- handled_types = (Exception,)
-
- # Check if this error should be handled
- if not self._handle_tool_errors or not isinstance(e, handled_types):
- raise
-
- # Error is handled - create error ToolMessage
- content = _handle_tool_error(e, flag=self._handle_tool_errors)
- return ToolMessage(
- content=content,
- name=call["name"],
- tool_call_id=call["id"],
- status="error",
- )
-
- # Process successful response
- if isinstance(response, Command):
- # Validate Command before returning to handler
- return self._validate_tool_command(response, request.tool_call, input_type)
- if isinstance(response, ToolMessage):
- response.content = cast("str | list", msg_content_output(response.content))
- return response
-
- msg = f"Tool {call['name']} returned unexpected type: {type(response)}"
- raise TypeError(msg)
-
- def _run_one(
- self,
- call: ToolCall,
- input_type: Literal["list", "dict", "tool_calls"],
- config: RunnableConfig,
- input: list[AnyMessage] | dict[str, Any] | BaseModel,
- runtime: Any,
- ) -> ToolMessage | Command:
- """Execute single tool call with on_tool_call handler if configured.
-
- Args:
- call: Tool call dict.
- input_type: Input format.
- config: Runnable configuration.
- input: Agent state.
- runtime: LangGraph runtime or None.
-
- Returns:
- ToolMessage or Command.
- """
- if invalid_tool_message := self._validate_tool_call(call):
- return invalid_tool_message
-
- tool = self.tools_by_name[call["name"]]
-
- # Extract state from ToolCallWithContext if present
- state = self._extract_state(input)
-
- # Create the tool request with state and runtime
- tool_request = ToolCallRequest(
- tool_call=call,
- tool=tool,
- state=state,
- runtime=runtime,
- )
-
- if self._on_tool_call is None:
- # No handler - execute directly
- return self._execute_tool_sync(tool_request, input_type, config)
-
- # Define execute callable that can be called multiple times
- def execute(req: ToolCallRequest) -> ToolMessage | Command:
- """Execute tool with given request. Can be called multiple times."""
- return self._execute_tool_sync(req, input_type, config)
-
- # Call handler with request and execute callable
- try:
- return self._on_tool_call(tool_request, execute)
- except Exception as e:
- # Handler threw an exception
- if not self._handle_tool_errors:
- raise
- # Convert to error message
- content = _handle_tool_error(e, flag=self._handle_tool_errors)
- return ToolMessage(
- content=content,
- name=tool_request.tool_call["name"],
- tool_call_id=tool_request.tool_call["id"],
- status="error",
- )
-
- async def _execute_tool_async(
- self,
- request: ToolCallRequest,
- input_type: Literal["list", "dict", "tool_calls"],
- config: RunnableConfig,
- ) -> ToolMessage | Command:
- """Execute tool call asynchronously with configured error handling.
-
- Args:
- request: Tool execution request.
- input_type: Input format.
- config: Runnable configuration.
-
- Returns:
- ToolMessage or Command.
-
- Raises:
- Exception: If tool fails and handle_tool_errors is False.
- """
- call = request.tool_call
- tool = request.tool
- call_args = {**call, "type": "tool_call"}
-
- try:
- try:
- response = await tool.ainvoke(call_args, config)
- except ValidationError as exc:
- raise ToolInvocationError(call["name"], exc, call["args"]) from exc
-
- # GraphInterrupt is a special exception that will always be raised.
-            # It is most commonly triggered when a GraphInterrupt (a GraphBubbleUp
-            # subclass) is raised from an `interrupt` invocation, in the following
-            # scenarios:
- # (1) a GraphInterrupt is raised inside a tool
- # (2) a GraphInterrupt is raised inside a graph node for a graph called as a tool
- # (3) a GraphInterrupt is raised when a subgraph is interrupted inside a graph
- # called as a tool
- # (2 and 3 can happen in a "supervisor w/ tools" multi-agent architecture)
- except GraphBubbleUp:
- raise
- except Exception as e:
- # Determine which exception types are handled
- handled_types: tuple[type[Exception], ...]
- if isinstance(self._handle_tool_errors, type) and issubclass(
- self._handle_tool_errors, Exception
- ):
- handled_types = (self._handle_tool_errors,)
- elif isinstance(self._handle_tool_errors, tuple):
- handled_types = self._handle_tool_errors
- elif callable(self._handle_tool_errors) and not isinstance(
- self._handle_tool_errors, type
- ):
- handled_types = _infer_handled_types(self._handle_tool_errors)
- else:
- # default behavior is catching all exceptions
- handled_types = (Exception,)
-
- # Check if this error should be handled
- if not self._handle_tool_errors or not isinstance(e, handled_types):
- raise
-
- # Error is handled - create error ToolMessage
- content = _handle_tool_error(e, flag=self._handle_tool_errors)
- return ToolMessage(
- content=content,
- name=call["name"],
- tool_call_id=call["id"],
- status="error",
- )
-
- # Process successful response
- if isinstance(response, Command):
- # Validate Command before returning to handler
- return self._validate_tool_command(response, request.tool_call, input_type)
- if isinstance(response, ToolMessage):
- response.content = cast("str | list", msg_content_output(response.content))
- return response
-
- msg = f"Tool {call['name']} returned unexpected type: {type(response)}"
- raise TypeError(msg)
-
- async def _arun_one(
- self,
- call: ToolCall,
- input_type: Literal["list", "dict", "tool_calls"],
- config: RunnableConfig,
- input: list[AnyMessage] | dict[str, Any] | BaseModel,
- runtime: Any,
- ) -> ToolMessage | Command:
- """Execute single tool call asynchronously with on_tool_call handler if configured.
-
- Args:
- call: Tool call dict.
- input_type: Input format.
- config: Runnable configuration.
- input: Agent state.
- runtime: LangGraph runtime or None.
-
- Returns:
- ToolMessage or Command.
- """
- if invalid_tool_message := self._validate_tool_call(call):
- return invalid_tool_message
-
- tool = self.tools_by_name[call["name"]]
-
- # Extract state from ToolCallWithContext if present
- state = self._extract_state(input)
-
- # Create the tool request with state and runtime
- tool_request = ToolCallRequest(
- tool_call=call,
- tool=tool,
- state=state,
- runtime=runtime,
- )
-
- if self._on_tool_call is None:
- # No handler - execute directly
- return await self._execute_tool_async(tool_request, input_type, config)
-
- # Define async execute callable that can be called multiple times
- async def execute(req: ToolCallRequest) -> ToolMessage | Command:
- """Execute tool with given request. Can be called multiple times."""
- return await self._execute_tool_async(req, input_type, config)
-
- # Call handler with request and execute callable
- # Note: handler is sync, but execute callable is async
- try:
- result = self._on_tool_call(tool_request, execute) # type: ignore[arg-type]
- # If result is a coroutine, await it (though handler should be sync)
- return await result if hasattr(result, "__await__") else result
- except Exception as e:
- # Handler threw an exception
- if not self._handle_tool_errors:
- raise
- # Convert to error message
- content = _handle_tool_error(e, flag=self._handle_tool_errors)
- return ToolMessage(
- content=content,
- name=tool_request.tool_call["name"],
- tool_call_id=tool_request.tool_call["id"],
- status="error",
- )
-
- def _parse_input(
- self,
- input: list[AnyMessage] | dict[str, Any] | BaseModel,
- ) -> tuple[list[ToolCall], Literal["list", "dict", "tool_calls"]]:
- input_type: Literal["list", "dict", "tool_calls"]
- if isinstance(input, list):
- if isinstance(input[-1], dict) and input[-1].get("type") == "tool_call":
- input_type = "tool_calls"
- tool_calls = cast("list[ToolCall]", input)
- return tool_calls, input_type
- input_type = "list"
- messages = input
- elif isinstance(input, dict) and input.get("__type") == "tool_call_with_context":
- # Handle ToolCallWithContext from Send API
- # mypy will not be able to type narrow correctly since the signature
- # for input contains dict[str, Any]. We'd need to narrow dict[str, Any]
- # before we can apply correct typing.
- input_with_ctx = cast("ToolCallWithContext", input)
- input_type = "tool_calls"
- return [input_with_ctx["tool_call"]], input_type
- elif isinstance(input, dict) and (messages := input.get(self._messages_key, [])):
- input_type = "dict"
- elif messages := getattr(input, self._messages_key, []):
- # Assume dataclass-like state that can coerce from dict
- input_type = "dict"
- else:
- msg = "No message found in input"
- raise ValueError(msg)
-
- try:
- latest_ai_message = next(m for m in reversed(messages) if isinstance(m, AIMessage))
- except StopIteration:
- msg = "No AIMessage found in input"
- raise ValueError(msg)
-
- tool_calls = list(latest_ai_message.tool_calls)
- return tool_calls, input_type
-
- def _validate_tool_call(self, call: ToolCall) -> ToolMessage | None:
- requested_tool = call["name"]
- if requested_tool not in self.tools_by_name:
- all_tool_names = list(self.tools_by_name.keys())
- content = INVALID_TOOL_NAME_ERROR_TEMPLATE.format(
- requested_tool=requested_tool,
- available_tools=", ".join(all_tool_names),
- )
- return ToolMessage(
- content, name=requested_tool, tool_call_id=call["id"], status="error"
- )
- return None
-
- def _extract_state(
- self, input: list[AnyMessage] | dict[str, Any] | BaseModel
- ) -> list[AnyMessage] | dict[str, Any] | BaseModel:
- """Extract state from input, handling ToolCallWithContext if present.
-
- Args:
- input: The input which may be raw state or ToolCallWithContext.
-
- Returns:
- The actual state to pass to on_tool_call handlers.
- """
- if isinstance(input, dict) and input.get("__type") == "tool_call_with_context":
- return input["state"]
- return input
-
- def _inject_state(
- self,
- tool_call: ToolCall,
- input: list[AnyMessage] | dict[str, Any] | BaseModel,
- ) -> ToolCall:
- state_args = self._tool_to_state_args[tool_call["name"]]
- if state_args and isinstance(input, list):
- required_fields = list(state_args.values())
- if (
- len(required_fields) == 1 and required_fields[0] == self._messages_key
- ) or required_fields[0] is None:
- input = {self._messages_key: input}
- else:
- err_msg = (
- f"Invalid input to ToolNode. Tool {tool_call['name']} requires "
- f"graph state dict as input."
- )
- if any(state_field for state_field in state_args.values()):
- required_fields_str = ", ".join(f for f in required_fields if f)
- err_msg += f" State should contain fields {required_fields_str}."
- raise ValueError(err_msg)
-
- # Extract state from ToolCallWithContext if present
- if isinstance(input, dict) and input.get("__type") == "tool_call_with_context":
- state = input["state"]
- else:
- state = input
-
- if isinstance(state, dict):
- tool_state_args = {
- tool_arg: state[state_field] if state_field else state
- for tool_arg, state_field in state_args.items()
- }
- else:
- tool_state_args = {
- tool_arg: getattr(state, state_field) if state_field else state
- for tool_arg, state_field in state_args.items()
- }
-
- tool_call["args"] = {
- **tool_call["args"],
- **tool_state_args,
- }
- return tool_call
-
- def _inject_store(self, tool_call: ToolCall, store: BaseStore | None) -> ToolCall:
- store_arg = self._tool_to_store_arg[tool_call["name"]]
- if not store_arg:
- return tool_call
-
- if store is None:
- msg = (
- "Cannot inject store into tools with InjectedStore annotations - "
- "please compile your graph with a store."
- )
- raise ValueError(msg)
-
- tool_call["args"] = {
- **tool_call["args"],
- store_arg: store,
- }
- return tool_call
-
- def _inject_tool_args(
- self,
- tool_call: ToolCall,
- input: list[AnyMessage] | dict[str, Any] | BaseModel,
- store: BaseStore | None,
- ) -> ToolCall:
- """Inject graph state and store into tool call arguments.
-
- This is an internal method that enables tools to access graph context that
- should not be controlled by the model. Tools can declare dependencies on graph
- state or persistent storage using InjectedState and InjectedStore annotations.
- This method automatically identifies these dependencies and injects the
- appropriate values.
-
- The injection process preserves the original tool call structure while adding
- the necessary context arguments. This allows tools to be both model-callable
- and context-aware without exposing internal state management to the model.
-
- Args:
- tool_call: The tool call dictionary to augment with injected arguments.
- Must contain 'name', 'args', 'id', and 'type' fields.
- input: The current graph state to inject into tools requiring state access.
- Can be a message list, state dictionary, or BaseModel instance.
- store: The persistent store instance to inject into tools requiring storage.
- Will be None if no store is configured for the graph.
-
- Returns:
- A new ToolCall dictionary with the same structure as the input but with
- additional arguments injected based on the tool's annotation requirements.
-
- Raises:
- ValueError: If a tool requires store injection but no store is provided,
- or if state injection requirements cannot be satisfied.
-
- Note:
- This method is called automatically during tool execution. It should not
- be called from outside the ToolNode.
- """
- if tool_call["name"] not in self.tools_by_name:
- return tool_call
-
- tool_call_copy: ToolCall = copy(tool_call)
- tool_call_with_state = self._inject_state(tool_call_copy, input)
- return self._inject_store(tool_call_with_state, store)
-
- def _validate_tool_command(
- self,
- command: Command,
- call: ToolCall,
- input_type: Literal["list", "dict", "tool_calls"],
- ) -> Command:
- if isinstance(command.update, dict):
- # input type is dict when ToolNode is invoked with a dict input
- # (e.g. {"messages": [AIMessage(..., tool_calls=[...])]})
- if input_type not in ("dict", "tool_calls"):
- msg = (
- "Tools can provide a dict in Command.update only when using dict "
- f"with '{self._messages_key}' key as ToolNode input, "
- f"got: {command.update} for tool '{call['name']}'"
- )
- raise ValueError(msg)
-
- updated_command = deepcopy(command)
- state_update = cast("dict[str, Any]", updated_command.update) or {}
- messages_update = state_update.get(self._messages_key, [])
- elif isinstance(command.update, list):
- # Input type is list when ToolNode is invoked with a list input
- # (e.g. [AIMessage(..., tool_calls=[...])])
- if input_type != "list":
- msg = (
- "Tools can provide a list of messages in Command.update "
- "only when using list of messages as ToolNode input, "
- f"got: {command.update} for tool '{call['name']}'"
- )
- raise ValueError(msg)
-
- updated_command = deepcopy(command)
- messages_update = updated_command.update
- else:
- return command
-
- # convert to message objects if updates are in a dict format
- messages_update = convert_to_messages(messages_update)
-
- # no validation needed if all messages are being removed
- if messages_update == [RemoveMessage(id=REMOVE_ALL_MESSAGES)]:
- return updated_command
-
- has_matching_tool_message = False
- for message in messages_update:
- if not isinstance(message, ToolMessage):
- continue
-
- if message.tool_call_id == call["id"]:
- message.name = call["name"]
- has_matching_tool_message = True
-
- # validate that we always have a ToolMessage matching the tool call in
- # Command.update if command is sent to the CURRENT graph
- if updated_command.graph is None and not has_matching_tool_message:
- example_update = (
- '`Command(update={"messages": '
- '[ToolMessage("Success", tool_call_id=tool_call_id), ...]}, ...)`'
- if input_type == "dict"
- else "`Command(update="
- '[ToolMessage("Success", tool_call_id=tool_call_id), ...], ...)`'
- )
- msg = (
- "Expected to have a matching ToolMessage in Command.update "
- f"for tool '{call['name']}', got: {messages_update}. "
- "Every tool call (LLM requesting to call a tool) "
- "in the message history MUST have a corresponding ToolMessage. "
- f"You can fix it by modifying the tool to return {example_update}."
- )
- raise ValueError(msg)
- return updated_command
-
-
-def tools_condition(
- state: list[AnyMessage] | dict[str, Any] | BaseModel,
- messages_key: str = "messages",
-) -> Literal["tools", "__end__"]:
- """Conditional routing function for tool-calling workflows.
-
- This utility function implements the standard conditional logic for ReAct-style
- agents: if the last AI message contains tool calls, route to the tool execution
- node; otherwise, end the workflow. This pattern is fundamental to most tool-calling
- agent architectures.
-
- The function handles multiple state formats commonly used in LangGraph applications,
- making it flexible for different graph designs while maintaining consistent behavior.
-
- Args:
- state: The current graph state to examine for tool calls. Supported formats:
-            - List of messages (the last message is checked for tool calls)
-            - Dictionary containing a messages key (for StateGraph)
-            - BaseModel instance with a messages attribute
- messages_key: The key or attribute name containing the message list in the state.
- This allows customization for graphs using different state schemas.
- Defaults to "messages".
-
- Returns:
- Either "tools" if tool calls are present in the last AI message, or "__end__"
- to terminate the workflow. These are the standard routing destinations for
- tool-calling conditional edges.
-
- Raises:
- ValueError: If no messages can be found in the provided state format.
-
- Example:
- Basic usage in a ReAct agent:
-
- ```python
- from langgraph.graph import StateGraph
- from langchain.tools import ToolNode
- from langchain.tools.tool_node import tools_condition
- from typing_extensions import TypedDict
-
-
- class State(TypedDict):
- messages: list
-
-
- graph = StateGraph(State)
- graph.add_node("llm", call_model)
- graph.add_node("tools", ToolNode([my_tool]))
- graph.add_conditional_edges(
- "llm",
- tools_condition, # Routes to "tools" or "__end__"
- {"tools": "tools", "__end__": "__end__"},
- )
- ```
-
- Custom messages key:
-
- ```python
- def custom_condition(state):
- return tools_condition(state, messages_key="chat_history")
- ```
-
- Note:
- This function is designed to work seamlessly with ToolNode and standard
- LangGraph patterns. It expects the last message to be an AIMessage when
- tool calls are present, which is the standard output format for tool-calling
- language models.
- """
- if isinstance(state, list):
- ai_message = state[-1]
- elif (isinstance(state, dict) and (messages := state.get(messages_key, []))) or (
- messages := getattr(state, messages_key, [])
- ):
- ai_message = messages[-1]
- else:
- msg = f"No messages found in input state to tool_edge: {state}"
- raise ValueError(msg)
- if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
- return "tools"
- return "__end__"
-
-
-class InjectedState(InjectedToolArg):
- """Annotation for injecting graph state into tool arguments.
-
- This annotation enables tools to access graph state without exposing state
- management details to the language model. Tools annotated with InjectedState
- receive state data automatically during execution while remaining invisible
- to the model's tool-calling interface.
-
- Args:
- field: Optional key to extract from the state dictionary. If `None`, the entire
- state is injected. If specified, only that field's value is injected.
- This allows tools to request specific state components rather than
- processing the full state structure.
-
- Example:
- ```python
- from typing import List
- from typing_extensions import Annotated, TypedDict
-
- from langchain_core.messages import BaseMessage, AIMessage
- from langchain.tools import InjectedState, ToolNode, tool
-
-
- class AgentState(TypedDict):
- messages: List[BaseMessage]
- foo: str
-
-
- @tool
- def state_tool(x: int, state: Annotated[dict, InjectedState]) -> str:
- '''Do something with state.'''
- if len(state["messages"]) > 2:
- return state["foo"] + str(x)
- else:
- return "not enough messages"
-
-
- @tool
- def foo_tool(x: int, foo: Annotated[str, InjectedState("foo")]) -> str:
- '''Do something else with state.'''
- return foo + str(x + 1)
-
-
- node = ToolNode([state_tool, foo_tool])
-
- tool_call1 = {"name": "state_tool", "args": {"x": 1}, "id": "1", "type": "tool_call"}
- tool_call2 = {"name": "foo_tool", "args": {"x": 1}, "id": "2", "type": "tool_call"}
- state = {
- "messages": [AIMessage("", tool_calls=[tool_call1, tool_call2])],
- "foo": "bar",
- }
- node.invoke(state)
- ```
-
- ```python
- [
- ToolMessage(content="not enough messages", name="state_tool", tool_call_id="1"),
- ToolMessage(content="bar2", name="foo_tool", tool_call_id="2"),
- ]
- ```
-
- Note:
- - InjectedState arguments are automatically excluded from tool schemas
- presented to language models
- - ToolNode handles the injection process during execution
- - Tools can mix regular arguments (controlled by the model) with injected
- arguments (controlled by the system)
- - State injection occurs after the model generates tool calls but before
- tool execution
- """
-
- def __init__(self, field: str | None = None) -> None:
- """Initialize the InjectedState annotation."""
- self.field = field
-
-
-class InjectedStore(InjectedToolArg):
- """Annotation for injecting persistent store into tool arguments.
-
- This annotation enables tools to access LangGraph's persistent storage system
- without exposing storage details to the language model. Tools annotated with
- InjectedStore receive the store instance automatically during execution while
- remaining invisible to the model's tool-calling interface.
-
- The store provides persistent, cross-session data storage that tools can use
- for maintaining context, user preferences, or any other data that needs to
- persist beyond individual workflow executions.
-
- !!! warning
- `InjectedStore` annotation requires `langchain-core >= 0.3.8`
-
- Example:
- ```python
- from typing_extensions import Annotated
- from langgraph.store.memory import InMemoryStore
- from langchain.tools import InjectedStore, ToolNode, tool
-
- @tool
- def save_preference(
- key: str,
- value: str,
- store: Annotated[Any, InjectedStore()]
- ) -> str:
- \"\"\"Save user preference to persistent storage.\"\"\"
- store.put(("preferences",), key, value)
- return f"Saved {key} = {value}"
-
- @tool
- def get_preference(
- key: str,
- store: Annotated[Any, InjectedStore()]
- ) -> str:
- \"\"\"Retrieve user preference from persistent storage.\"\"\"
- result = store.get(("preferences",), key)
- return result.value if result else "Not found"
- ```
-
- Usage with ToolNode and graph compilation:
-
- ```python
- from langgraph.graph import StateGraph
- from langgraph.store.memory import InMemoryStore
-
- store = InMemoryStore()
- tool_node = ToolNode([save_preference, get_preference])
-
- graph = StateGraph(State)
- graph.add_node("tools", tool_node)
- compiled_graph = graph.compile(store=store) # Store is injected automatically
- ```
-
- Cross-session persistence:
-
- ```python
- # First session
- result1 = graph.invoke({"messages": [HumanMessage("Save my favorite color as blue")]})
-
- # Later session - data persists
- result2 = graph.invoke({"messages": [HumanMessage("What's my favorite color?")]})
- ```
-
- Note:
- - InjectedStore arguments are automatically excluded from tool schemas
- presented to language models
- - The store instance is automatically injected by ToolNode during execution
- - Tools can access namespaced storage using the store's get/put methods
- - Store injection requires the graph to be compiled with a store instance
- - Multiple tools can share the same store instance for data consistency
- """
-
-
-def _is_injection(type_arg: Any, injection_type: type[InjectedState | InjectedStore]) -> bool:
- """Check if a type argument represents an injection annotation.
-
- This utility function determines whether a type annotation indicates that
- an argument should be injected with state or store data. It handles both
- direct annotations and nested annotations within Union or Annotated types.
-
- Args:
- type_arg: The type argument to check for injection annotations.
- injection_type: The injection type to look for (InjectedState or InjectedStore).
-
- Returns:
- True if the type argument contains the specified injection annotation.
- """
- if isinstance(type_arg, injection_type) or (
- isinstance(type_arg, type) and issubclass(type_arg, injection_type)
- ):
- return True
- origin_ = get_origin(type_arg)
- if origin_ is Union or origin_ is Annotated:
- return any(_is_injection(ta, injection_type) for ta in get_args(type_arg))
- return False
-
-
-def _get_state_args(tool: BaseTool) -> dict[str, str | None]:
- """Extract state injection mappings from tool annotations.
-
- This function analyzes a tool's input schema to identify arguments that should
- be injected with graph state. It processes InjectedState annotations to build
- a mapping of tool argument names to state field names.
-
- Args:
- tool: The tool to analyze for state injection requirements.
-
- Returns:
- A dictionary mapping tool argument names to state field names. If a field
- name is None, the entire state should be injected for that argument.
- """
- full_schema = tool.get_input_schema()
- tool_args_to_state_fields: dict = {}
-
- for name, type_ in get_all_basemodel_annotations(full_schema).items():
- injections = [
- type_arg for type_arg in get_args(type_) if _is_injection(type_arg, InjectedState)
- ]
- if len(injections) > 1:
- msg = (
- "A tool argument should not be annotated with InjectedState more than "
- f"once. Received arg {name} with annotations {injections}."
- )
- raise ValueError(msg)
- if len(injections) == 1:
- injection = injections[0]
- if isinstance(injection, InjectedState) and injection.field:
- tool_args_to_state_fields[name] = injection.field
- else:
- tool_args_to_state_fields[name] = None
- else:
- pass
- return tool_args_to_state_fields
-
-
-def _get_store_arg(tool: BaseTool) -> str | None:
- """Extract store injection argument from tool annotations.
-
- This function analyzes a tool's input schema to identify the argument that
- should be injected with the graph store. Only one store argument is supported
- per tool.
-
- Args:
- tool: The tool to analyze for store injection requirements.
-
- Returns:
- The name of the argument that should receive the store injection, or None
- if no store injection is required.
-
- Raises:
- ValueError: If a tool argument has multiple InjectedStore annotations.
- """
- full_schema = tool.get_input_schema()
- for name, type_ in get_all_basemodel_annotations(full_schema).items():
- injections = [
- type_arg for type_arg in get_args(type_) if _is_injection(type_arg, InjectedStore)
- ]
- if len(injections) > 1:
- msg = (
- "A tool argument should not be annotated with InjectedStore more than "
- f"once. Received arg {name} with annotations {injections}."
- )
- raise ValueError(msg)
- if len(injections) == 1:
- return name
-
- return None
diff --git a/libs/langchain_v1/pyproject.toml b/libs/langchain_v1/pyproject.toml
index a4a2ee01a91..3366a428f90 100644
--- a/libs/langchain_v1/pyproject.toml
+++ b/libs/langchain_v1/pyproject.toml
@@ -3,25 +3,26 @@ requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
-authors = []
+name = "langchain"
+description = "Building applications with LLMs through composability"
license = { text = "MIT" }
+readme = "README.md"
+authors = []
+
+version = "1.0.4"
requires-python = ">=3.10.0,<4.0.0"
dependencies = [
- "langchain-core>=1.0.0a7,<2.0.0",
- "langgraph>=1.0.0a4,<2.0.0",
+ "langchain-core>=1.0.2,<2.0.0",
+ "langgraph>=1.0.2,<1.1.0",
"pydantic>=2.7.4,<3.0.0",
]
-name = "langchain"
-version = "1.0.0a12"
-description = "Building applications with LLMs through composability"
-readme = "README.md"
-
[project.optional-dependencies]
+model-profiles = ["langchain-model-profiles"]
community = ["langchain-community"]
anthropic = ["langchain-anthropic"]
openai = ["langchain-openai"]
-#azure-ai = ["langchain-azure-ai"]
+azure-ai = ["langchain-azure-ai"]
#cohere = ["langchain-cohere"]
google-vertexai = ["langchain-google-vertexai"]
google-genai = ["langchain-google-genai"]
@@ -29,7 +30,7 @@ fireworks = ["langchain-fireworks"]
ollama = ["langchain-ollama"]
together = ["langchain-together"]
mistralai = ["langchain-mistralai"]
-#huggingface = ["langchain-huggingface"]
+huggingface = ["langchain-huggingface"]
groq = ["langchain-groq"]
aws = ["langchain-aws"]
deepseek = ["langchain-deepseek"]
@@ -37,12 +38,13 @@ xai = ["langchain-xai"]
perplexity = ["langchain-perplexity"]
[project.urls]
-homepage = "https://docs.langchain.com/"
-repository = "https://github.com/langchain-ai/langchain/tree/master/libs/langchain"
-changelog = "https://github.com/langchain-ai/langchain/releases?q=tag%3A%22langchain%3D%3D1%22"
-twitter = "https://x.com/LangChainAI"
-slack = "https://www.langchain.com/join-community"
-reddit = "https://www.reddit.com/r/LangChain/"
+Homepage = "https://docs.langchain.com/"
+Documentation = "https://reference.langchain.com/python/langchain/langchain/"
+Source = "https://github.com/langchain-ai/langchain/tree/master/libs/langchain"
+Changelog = "https://github.com/langchain-ai/langchain/releases?q=tag%3A%22langchain%3D%3D1%22"
+Twitter = "https://x.com/LangChainAI"
+Slack = "https://www.langchain.com/join-community"
+Reddit = "https://www.reddit.com/r/LangChain/"
[dependency-groups]
test = [
@@ -56,8 +58,7 @@ test = [
"syrupy>=4.0.2,<5.0.0",
"toml>=0.10.2,<1.0.0",
"langchain-tests",
- "langchain-text-splitters",
- "langchain-openai"
+ "langchain-openai",
]
lint = [
"ruff>=0.12.2,<0.13.0",
@@ -115,6 +116,7 @@ ignore = [
"PLC0415", # Imports should be at the top. Not always desirable
"PLR0913", # Too many arguments in function definition
"PLC0414", # Inconsistent with how type checkers expect to be notified of intentional re-exports
+ "RUF002", # Em-dash in docstring
]
unfixable = ["B028"] # People should intentionally tune the stacklevel
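The `[project.optional-dependencies]` table above maps provider extras onto their integration packages, so the base `langchain` distribution stays lean and a provider package is installed only when its extra is requested. A minimal sketch of how one of these extras is typically consumed at runtime; the model name, the choice of the `openai` extra, and the presence of an API key are illustrative assumptions, not taken from this diff:

```python
# Assumes `pip install "langchain[openai]"` has pulled in langchain-openai
# and that OPENAI_API_KEY is set in the environment (illustrative setup).
from langchain.chat_models import init_chat_model

# init_chat_model resolves whichever provider package the extra installed.
model = init_chat_model("gpt-4o-mini", model_provider="openai")
print(model.invoke("What do optional dependency extras buy a project?").content)
```

`init_chat_model` is the same entry point exercised later in this diff by `tests/integration_tests/chat_models/test_base.py`.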
diff --git a/libs/langchain_v1/tests/integration_tests/agents/middleware/__init__.py b/libs/langchain_v1/tests/integration_tests/agents/middleware/__init__.py
new file mode 100644
index 00000000000..df702936a4a
--- /dev/null
+++ b/libs/langchain_v1/tests/integration_tests/agents/middleware/__init__.py
@@ -0,0 +1 @@
+"""Integration tests for agent middleware."""
diff --git a/libs/langchain_v1/tests/integration_tests/agents/middleware/test_shell_tool_integration.py b/libs/langchain_v1/tests/integration_tests/agents/middleware/test_shell_tool_integration.py
new file mode 100644
index 00000000000..82b3e1b2588
--- /dev/null
+++ b/libs/langchain_v1/tests/integration_tests/agents/middleware/test_shell_tool_integration.py
@@ -0,0 +1,146 @@
+"""Integration tests for ShellToolMiddleware with create_agent."""
+
+from __future__ import annotations
+
+from pathlib import Path
+from typing import Any
+
+import pytest
+from langchain_core.messages import HumanMessage
+
+from langchain.agents import create_agent
+from langchain.agents.middleware.shell_tool import ShellToolMiddleware
+
+
+def _get_model(provider: str) -> Any:
+ """Get chat model for the specified provider."""
+ if provider == "anthropic":
+ from langchain_anthropic import ChatAnthropic
+
+ return ChatAnthropic(model="claude-sonnet-4-5-20250929")
+ elif provider == "openai":
+ from langchain_openai import ChatOpenAI
+
+ return ChatOpenAI(model="gpt-4o-mini")
+ else:
+ msg = f"Unknown provider: {provider}"
+ raise ValueError(msg)
+
+
+@pytest.mark.parametrize("provider", ["anthropic", "openai"])
+def test_shell_tool_basic_execution(tmp_path: Path, provider: str) -> None:
+ """Test basic shell command execution across different models."""
+ pytest.importorskip(f"langchain_{provider}")
+
+ workspace = tmp_path / "workspace"
+ agent = create_agent(
+ model=_get_model(provider),
+ middleware=[ShellToolMiddleware(workspace_root=workspace)],
+ )
+
+ result = agent.invoke(
+ {"messages": [HumanMessage("Run the command 'echo hello' and tell me what it outputs")]}
+ )
+
+ tool_messages = [msg for msg in result["messages"] if msg.type == "tool"]
+ assert len(tool_messages) > 0, "Shell tool should have been called"
+
+ tool_outputs = [msg.content for msg in tool_messages]
+ assert any("hello" in output.lower() for output in tool_outputs), (
+ "Shell output should contain 'hello'"
+ )
+
+
+@pytest.mark.requires("langchain_anthropic")
+def test_shell_session_persistence(tmp_path: Path) -> None:
+ """Test shell session state persists across multiple tool calls."""
+ workspace = tmp_path / "workspace"
+ agent = create_agent(
+ model=_get_model("anthropic"),
+ middleware=[ShellToolMiddleware(workspace_root=workspace)],
+ )
+
+ result = agent.invoke(
+ {
+ "messages": [
+ HumanMessage(
+ "First run 'export TEST_VAR=hello'. "
+ "Then run 'echo $TEST_VAR' to verify it persists."
+ )
+ ]
+ }
+ )
+
+ tool_messages = [msg for msg in result["messages"] if msg.type == "tool"]
+ assert len(tool_messages) >= 2, "Shell tool should be called multiple times"
+
+ tool_outputs = [msg.content for msg in tool_messages]
+ assert any("hello" in output for output in tool_outputs), "Environment variable should persist"
+
+
+@pytest.mark.requires("langchain_anthropic")
+def test_shell_tool_error_handling(tmp_path: Path) -> None:
+ """Test shell tool captures command errors."""
+ workspace = tmp_path / "workspace"
+ agent = create_agent(
+ model=_get_model("anthropic"),
+ middleware=[ShellToolMiddleware(workspace_root=workspace)],
+ )
+
+ result = agent.invoke(
+ {
+ "messages": [
+ HumanMessage(
+ "Run the command 'ls /nonexistent_directory_12345' and show me the result"
+ )
+ ]
+ }
+ )
+
+ tool_messages = [msg for msg in result["messages"] if msg.type == "tool"]
+ assert len(tool_messages) > 0, "Shell tool should have been called"
+
+ tool_outputs = " ".join(msg.content for msg in tool_messages)
+ assert (
+ "no such file" in tool_outputs.lower()
+ or "cannot access" in tool_outputs.lower()
+ or "not found" in tool_outputs.lower()
+ or "exit code" in tool_outputs.lower()
+ ), "Error should be captured in tool output"
+
+
+@pytest.mark.requires("langchain_anthropic")
+def test_shell_tool_with_custom_tools(tmp_path: Path) -> None:
+ """Test shell tool works alongside custom tools."""
+ from langchain_core.tools import tool
+
+ workspace = tmp_path / "workspace"
+
+ @tool
+ def custom_greeting(name: str) -> str:
+ """Greet someone by name."""
+ return f"Hello, {name}!"
+
+ agent = create_agent(
+ model=_get_model("anthropic"),
+ tools=[custom_greeting],
+ middleware=[ShellToolMiddleware(workspace_root=workspace)],
+ )
+
+ result = agent.invoke(
+ {
+ "messages": [
+ HumanMessage(
+ "First, use the custom_greeting tool to greet 'Alice'. "
+ "Then run the shell command 'echo world'."
+ )
+ ]
+ }
+ )
+
+ tool_messages = [msg for msg in result["messages"] if msg.type == "tool"]
+ assert len(tool_messages) >= 2, "Both tools should have been called"
+
+ tool_outputs = " ".join(msg.content for msg in tool_messages)
+ assert "Alice" in tool_outputs, "Custom tool should be used"
+ assert "world" in tool_outputs, "Shell tool should be used"
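The new integration tests above exercise `ShellToolMiddleware` end to end through `create_agent` under pytest. The same wiring stripped of the test harness looks roughly like the following sketch; it reuses only names that appear in the tests, and it assumes `langchain-anthropic` is installed with `ANTHROPIC_API_KEY` set:

```python
# Stand-alone sketch of the pattern used in the integration tests above
# (illustrative; not part of this diff).
from pathlib import Path

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

from langchain.agents import create_agent
from langchain.agents.middleware.shell_tool import ShellToolMiddleware

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    # workspace_root scopes shell commands to a dedicated directory.
    middleware=[ShellToolMiddleware(workspace_root=Path("./workspace"))],
)

result = agent.invoke({"messages": [HumanMessage("Run 'echo hello' and report the output")]})
print(result["messages"][-1].content)
```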
diff --git a/libs/langchain_v1/tests/integration_tests/agents/test_response_format.py b/libs/langchain_v1/tests/integration_tests/agents/test_response_format.py
index 2c8a5a401c4..db3edf6dcf6 100644
--- a/libs/langchain_v1/tests/integration_tests/agents/test_response_format.py
+++ b/libs/langchain_v1/tests/integration_tests/agents/test_response_format.py
@@ -26,7 +26,7 @@ def test_inference_to_native_output() -> None:
model = ChatOpenAI(model="gpt-5")
agent = create_agent(
model,
- prompt=(
+ system_prompt=(
"You are a helpful weather assistant. Please call the get_weather tool, "
"then use the WeatherReport tool to generate the final response."
),
@@ -56,7 +56,7 @@ def test_inference_to_tool_output() -> None:
model = ChatOpenAI(model="gpt-4")
agent = create_agent(
model,
- prompt=(
+ system_prompt=(
"You are a helpful weather assistant. Please call the get_weather tool, "
"then use the WeatherReport tool to generate the final response."
),
diff --git a/libs/langchain_v1/tests/integration_tests/chat_models/test_base.py b/libs/langchain_v1/tests/integration_tests/chat_models/test_base.py
index c2ebc5638ea..ad0a90fa1da 100644
--- a/libs/langchain_v1/tests/integration_tests/chat_models/test_base.py
+++ b/libs/langchain_v1/tests/integration_tests/chat_models/test_base.py
@@ -25,7 +25,7 @@ async def test_init_chat_model_chain() -> None:
model_with_config = model_with_tools.with_config(
RunnableConfig(tags=["foo"]),
- configurable={"bar_model": "claude-3-7-sonnet-20250219"},
+ configurable={"bar_model": "claude-sonnet-4-5-20250929"},
)
prompt = ChatPromptTemplate.from_messages([("system", "foo"), ("human", "{input}")])
chain = prompt | model_with_config
diff --git a/libs/langchain_v1/tests/unit_tests/agents/__snapshots__/test_middleware_agent.ambr b/libs/langchain_v1/tests/unit_tests/agents/__snapshots__/test_middleware_agent.ambr
index 6f1e17badcc..75be381108b 100644
--- a/libs/langchain_v1/tests/unit_tests/agents/__snapshots__/test_middleware_agent.ambr
+++ b/libs/langchain_v1/tests/unit_tests/agents/__snapshots__/test_middleware_agent.ambr
@@ -20,7 +20,6 @@
__start__ --> NoopZero\2ebefore_agent;
model -.-> NoopTwo\2eafter_agent;
model -.-> tools;
- tools -.-> NoopTwo\2eafter_agent;
tools -.-> model;
NoopOne\2eafter_agent --> __end__;
classDef default fill:#f2f0ff,line-height:1.2
@@ -343,7 +342,6 @@
__start__ --> NoopSeven\2ebefore_model;
model --> NoopEight\2eafter_model;
tools -.-> NoopSeven\2ebefore_model;
- tools -.-> __end__;
classDef default fill:#f2f0ff,line-height:1.2
classDef first fill-opacity:0
classDef last fill:#bfb6fc
@@ -376,7 +374,6 @@
__start__ --> NoopSeven\2ebefore_model;
model --> NoopEight\2eafter_model;
tools -.-> NoopSeven\2ebefore_model;
- tools -.-> __end__;
classDef default fill:#f2f0ff,line-height:1.2
classDef first fill-opacity:0
classDef last fill:#bfb6fc
@@ -409,7 +406,6 @@
__start__ --> NoopSeven\2ebefore_model;
model --> NoopEight\2eafter_model;
tools -.-> NoopSeven\2ebefore_model;
- tools -.-> __end__;
classDef default fill:#f2f0ff,line-height:1.2
classDef first fill-opacity:0
classDef last fill:#bfb6fc
@@ -442,7 +438,6 @@
__start__ --> NoopSeven\2ebefore_model;
model --> NoopEight\2eafter_model;
tools -.-> NoopSeven\2ebefore_model;
- tools -.-> __end__;
classDef default fill:#f2f0ff,line-height:1.2
classDef first fill-opacity:0
classDef last fill:#bfb6fc
@@ -475,7 +470,6 @@
__start__ --> NoopSeven\2ebefore_model;
model --> NoopEight\2eafter_model;
tools -.-> NoopSeven\2ebefore_model;
- tools -.-> __end__;
classDef default fill:#f2f0ff,line-height:1.2
classDef first fill-opacity:0
classDef last fill:#bfb6fc
@@ -497,7 +491,6 @@
__start__ --> model;
model -.-> __end__;
model -.-> tools;
- tools -.-> __end__;
tools -.-> model;
classDef default fill:#f2f0ff,line-height:1.2
classDef first fill-opacity:0
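The snapshot updates above remove the conditional `tools -.-> __end__` (and `tools -.-> ...after_agent`) edges, so in these rendered graphs the tools node now routes back toward the model path rather than straight to the end. A hedged sketch of how such a Mermaid snapshot is typically regenerated from a compiled agent; the model and tool here are assumptions for illustration:

```python
# Illustrative only: render the agent graph as Mermaid, as the .ambr snapshots do.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

from langchain.agents import create_agent


@tool
def get_weather(city: str) -> str:
    """Return a canned forecast for a city."""
    return f"It is sunny in {city}."


agent = create_agent(ChatOpenAI(model="gpt-4o-mini"), tools=[get_weather])
print(agent.get_graph().draw_mermaid())
```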
diff --git a/libs/langchain_v1/tests/unit_tests/agents/__snapshots__/test_return_direct_graph.ambr b/libs/langchain_v1/tests/unit_tests/agents/__snapshots__/test_return_direct_graph.ambr
new file mode 100644
index 00000000000..f3e223f8df6
--- /dev/null
+++ b/libs/langchain_v1/tests/unit_tests/agents/__snapshots__/test_return_direct_graph.ambr
@@ -0,0 +1,69 @@
+# serializer version: 1
+# name: test_agent_graph_with_mixed_tools
+ '''
+ ---
+ config:
+ flowchart:
+ curve: linear
+ ---
+ graph TD;
+ __start__([