Alternative to https://github.com/langchain-ai/langchain/pull/35024. Paving the way for summarization in `wrap_model_call` (which requires state updates).

---

Adds an `ExtendedModelResponse` dataclass that allows `wrap_model_call` middleware to return a `Command` alongside the model response for additional state updates.

```py
@dataclass
class ExtendedModelResponse(Generic[ResponseT]):
    model_response: ModelResponse[ResponseT]
    command: Command
```

## Motivation

Previously, `wrap_model_call` middleware could only return a `ModelResponse` or `AIMessage` — there was no way to inject additional state updates (e.g. custom state fields) from the model-call middleware layer. `ExtendedModelResponse` fills this gap by accepting an optional `Command`. This is needed by the summarization middleware, which must track summarization trigger points calculated during `wrap_model_call`.

## Why `Command` instead of a plain `state_update` dict?

We chose `Command` rather than the raw `state_update: dict` approach from the earlier iteration because `Command` is the established LangGraph primitive for state updates from nodes. Using `Command` means:

- **State updates flow through the graph's reducers** (e.g. `add_messages`) rather than being merged as raw dicts. This makes message updates additive alongside the model response instead of replacing it.
- **Consistency with `wrap_tool_call`**, which already returns `Command`.
- **Future-proof:** as `Command` gains new capabilities (e.g. `goto`, `send`), middleware can leverage them without API changes.

## Why keep `model_response` separate instead of using `Command` directly?

The model node needs to distinguish the model's actual response (messages + structured output) from supplementary middleware state updates. If middleware returned only a `Command`, there would be no clean way to extract the `ModelResponse` for structured-output handling, response validation, and the core model-to-tools routing logic.
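To make the reducer point above concrete, here is a minimal self-contained sketch. The `add_messages` function below is a toy stand-in for LangGraph's reducer of the same name, not the real implementation; it only illustrates why reducer-based merging keeps message updates additive while a raw dict merge replaces them.

```python
def add_messages(existing: list, update: list) -> list:
    """Toy stand-in for LangGraph's `add_messages` reducer: additive merge."""
    return existing + update


state = {"messages": ["human: hi"]}

# Raw dict merge: the update replaces the whole messages list.
raw_merged = {**state, "messages": ["ai: hello"]}
assert raw_merged["messages"] == ["ai: hello"]

# Reducer merge: the update is appended alongside the existing messages.
reduced = {**state, "messages": add_messages(state["messages"], ["ai: hello"])}
assert reduced["messages"] == ["human: hi", "ai: hello"]
```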
Keeping `model_response` explicit preserves a clear boundary between "what the model said" and "what middleware wants to update." Also, to avoid breaking changes, the `handler` passed to `wrap_model_call` needs to always return a `ModelResponse`; there's no easy way to preserve this if we fold it into a `Command`. A nice side effect of the `ExtendedModelResponse` structure is that it's extensible if we want to add more metadata in the future.

## Composition

When multiple middleware layers return `ExtendedModelResponse`, their commands compose naturally:

- **Inner commands propagate outward:** at composition boundaries, `ExtendedModelResponse` is unwrapped to its underlying `ModelResponse`, so outer middleware always sees a plain `ModelResponse` from `handler()`. The inner command is captured and accumulated.
- **Commands are applied through reducers:** each `Command` becomes a separate state update applied through the graph's reducers. For messages, this means updates are additive (via `add_messages`), not replacing.
- **Outer wins on conflicts:** for non-reducer state fields, commands are applied inner-first then outer, so the outermost middleware's value takes precedence on conflicting keys.
- **Retry-safe:** when outer middleware retries by calling `handler()` again, accumulated inner commands are cleared and re-collected from the fresh call.

```python
class Outer(AgentMiddleware):
    def wrap_model_call(self, request, handler):
        response = handler(request)  # sees ModelResponse, not ExtendedModelResponse
        return ExtendedModelResponse(
            model_response=response,
            command=Command(update={"outer_key": "val"}),
        )


class Inner(AgentMiddleware):
    def wrap_model_call(self, request, handler):
        response = handler(request)
        return ExtendedModelResponse(
            model_response=response,
            command=Command(update={"inner_key": "val"}),
        )


# Final state merges both commands: {"inner_key": "val", "outer_key": "val"}
```

## Backwards compatibility

Fully backwards compatible.
The `ModelCallResult` type alias is widened from `ModelResponse | AIMessage` to `ModelResponse | AIMessage | ExtendedModelResponse`, but existing middleware returning `ModelResponse` or `AIMessage` continues to work identically.

## Internals

- `model_node` / `amodel_node` now return `list[Command]` instead of `dict[str, Any]`.
- `_build_commands` converts the model response plus accumulated middleware commands into a list of `Command` objects for LangGraph.
- `_ComposedExtendedModelResponse` is the internal type that accumulates commands across layers during composition.
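As a rough illustration of the inner-first, outer-last ordering described in the Composition section: the `Command` class and `apply_commands` helper below are simplified stand-ins for illustration only, not the actual `_build_commands` internals.

```python
from dataclasses import dataclass, field


@dataclass
class Command:
    """Simplified stand-in for LangGraph's Command: just a state update dict."""
    update: dict = field(default_factory=dict)


def apply_commands(state: dict, commands: list[Command]) -> dict:
    # Commands are applied in order: inner middleware first, outer last,
    # so for plain (non-reducer) keys the outermost middleware wins on conflict.
    for cmd in commands:
        state = {**state, **cmd.update}
    return state


inner = Command(update={"shared": "inner", "inner_key": "val"})
outer = Command(update={"shared": "outer", "outer_key": "val"})

final = apply_commands({}, [inner, outer])
assert final == {"shared": "outer", "inner_key": "val", "outer_key": "val"}
```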
## Packages
> [!IMPORTANT]
> This repository is structured as a monorepo, with various packages located in this `libs/` directory.

Packages to note in this directory include:
```
core/            # Core primitives and abstractions for langchain
langchain/       # langchain-classic
langchain_v1/    # langchain
partners/        # Certain third-party provider integrations (see below)
standard-tests/  # Standardized tests for integrations
text-splitters/  # Text splitter utilities
```
(Each package contains its own `README.md` file with specific details about that package.)
## Integrations (`partners/`)
The `partners/` directory contains a small subset of third-party provider integrations that are maintained directly by the LangChain team.
Most integrations have been moved to their own repositories for improved versioning, dependency management, collaboration, and testing. This includes packages from popular providers such as Google and AWS. Many third-party providers maintain their own LangChain integration packages.
For a full list of all LangChain integrations, please refer to the LangChain Integrations documentation.