Fixes #34282

**Before:** When using agents with tools (file reading, web search, etc.), the conversation looks like this:

```
[User] "Read these 10 files and summarize them"
[AI] "I'll read all 10 files" + [tool_call: read_file x 10]
[Tool] "Contents of file1.txt..."
[Tool] "Contents of file2.txt..."
[Tool] "Contents of file3.txt..."
... (7 more tool responses)
```

When the conversation gets too long, `SummarizationMiddleware` kicks in to compress older messages. The problem: if you asked to keep the last 6 messages, you'd get:

```
[Summary] "Here's what happened before..."
[Tool] "Contents of file5.txt..."
[Tool] "Contents of file6.txt..."
[Tool] "Contents of file7.txt..."
[Tool] "Contents of file8.txt..."
[Tool] "Contents of file9.txt..."
[Tool] "Contents of file10.txt..."
```

The AI's original request to read the files (the `[AI]` message with `tool_calls`) was summarized away, but the tool responses remained. This caused the error:

```
Error code: 400 - "No tool call found for function call output with call_id..."
```

Many provider APIs require that every tool response have a matching tool request. Without the AI message, the tool responses are "orphaned."

## The fix

Now when the cutoff lands on tool messages, we **move backward** to include the AI message that requested those tools.

Same scenario, keeping the last 6 messages:

```
[Summary] "Here's what happened before..."
[AI] "I'll read all 10 files" + [tool_call: read_file x 10]
[Tool] "Contents of file1.txt..."
[Tool] "Contents of file2.txt..."
... (all 10 tool responses)
```

The AI message is preserved along with its tool responses, keeping them paired together.

## Practical examples

### Example 1: Parallel tool calls

**Scenario:** Agent reads 10 files in parallel and summarization triggers (see above).

### Example 2: Mixed conversation

**Scenario:** User asks a question, the AI uses a tool, then the user says thanks.

```
[User] "What's the weather?"
[AI] "Let me check" + [tool_call: get_weather]
[Tool] "72F and sunny"
[AI] "It's 72F and sunny!"
[User] "Thanks!"
```

Keeping the last 2 messages:

| Before (bug) | After (fix) |
|--------------|-------------|
| Only `[User] "Thanks!"` kept | `[AI] + [Tool] + [AI] + [User]` all kept |
| Weather info lost | Tool call preserved with its response |

### Example 3: Multiple tool sequences

```
[User] "Search for X"
[AI] [tool_call: search]
[Tool] "Results for X"
[User] "Now search for Y"
[AI] [tool_call: search]
[Tool] "Results for Y"
[User] "Great!"
```

**Keeping the last 3 messages:** if the cutoff lands on `[Tool] "Results for Y"`, we now also include `[AI] [tool_call: search]` so the pair stays together.
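The backward-walk described above can be sketched in a few lines. This is an illustrative sketch, **not** the actual `SummarizationMiddleware` implementation: messages are modeled as plain dicts with a hypothetical `"role"` key (`"user"`, `"ai"`, `"tool"`), and the function name `adjust_cutoff` is invented for the example.

```python
def adjust_cutoff(messages, keep_last):
    """Return the index where the kept suffix starts, moved backward so a
    tool response is never separated from the AI message that requested it.

    Sketch only -- message shape and names are assumptions, not the
    real SummarizationMiddleware API.
    """
    cutoff = max(len(messages) - keep_last, 0)
    # If the naive cutoff lands on tool responses, walk backward past them
    # until we reach the AI message whose tool_calls produced them.
    while cutoff > 0 and messages[cutoff]["role"] == "tool":
        cutoff -= 1
    return cutoff


# Example 3 from above, with a naive cutoff landing on [Tool] "Results for Y":
conversation = [
    {"role": "user", "content": "Search for X"},
    {"role": "ai", "content": "", "tool_calls": ["search"]},
    {"role": "tool", "content": "Results for X"},
    {"role": "user", "content": "Now search for Y"},
    {"role": "ai", "content": "", "tool_calls": ["search"]},
    {"role": "tool", "content": "Results for Y"},
    {"role": "user", "content": "Great!"},
]

# Naively keeping the last 2 messages would start at index 5, the orphaned
# tool response; the adjusted cutoff backs up to index 4, its AI message.
print(adjust_cutoff(conversation, keep_last=2))  # prints 4
```

The key property is that the cutoff can only move backward, so we never keep *fewer* messages than requested, and a kept tool response always arrives with its matching `tool_calls` message.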
## Packages
> [!IMPORTANT]
> This repository is structured as a monorepo, with individual packages located in the `libs/` directory.

Packages to note in this directory include:
```
core/            # Core primitives and abstractions for langchain
langchain/       # langchain-classic
langchain_v1/    # langchain
partners/        # Certain third-party provider integrations (see below)
standard-tests/  # Standardized tests for integrations
text-splitters/  # Text splitter utilities
```
(Each package contains its own README.md file with specific details about that package.)
## Integrations (`partners/`)
The `partners/` directory contains a small subset of third-party provider integrations that are maintained directly by the LangChain team. These include, but are not limited to, the packages listed in `libs/partners/`.
Most integrations have been moved to their own repositories for improved versioning, dependency management, collaboration, and testing, including packages for popular providers such as Google and AWS. Many other third-party providers also maintain their own LangChain integration packages.
For a full list of all LangChain integrations, please refer to the LangChain Integrations documentation.