Sydney Runkle 8767a462ca feat: support state updates from wrap_model_call with command(s) (#35033)
Alternative to https://github.com/langchain-ai/langchain/pull/35024.
Paving the way for summarization in `wrap_model_call` (which requires
state updates).

---

Add `ExtendedModelResponse` dataclass that allows `wrap_model_call`
middleware to return a `Command` alongside the model response for
additional state updates.

```py
@dataclass
class ExtendedModelResponse(Generic[ResponseT]):
    model_response: ModelResponse[ResponseT]
    command: Command
```

## Motivation

Previously, `wrap_model_call` middleware could only return a
`ModelResponse` or `AIMessage` — there was no way to inject additional
state updates (e.g. custom state fields) from the model call middleware
layer. `ExtendedModelResponse` fills this gap by accepting an optional
`Command`.

This feature is needed by the summarization middleware, which needs to
track summarization trigger points calculated during `wrap_model_call`.
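
As a rough illustration of that pattern, here is a hypothetical middleware sketch (the import paths, the `request.messages` access, and the `summarization_trigger_index` state key are assumptions for illustration, not the actual summarization middleware):

```python
from langgraph.types import Command

# Import path for the middleware types is assumed for this sketch.
from langchain.agents.middleware import AgentMiddleware, ExtendedModelResponse


class SummarizationTracker(AgentMiddleware):
    def wrap_model_call(self, request, handler):
        response = handler(request)
        # Record where a later summarization pass could start;
        # "summarization_trigger_index" is a made-up custom state key.
        return ExtendedModelResponse(
            model_response=response,
            command=Command(update={"summarization_trigger_index": len(request.messages)}),
        )
```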

## Why `Command` instead of a plain `state_update` dict?

We chose `Command` rather than the raw `state_update: dict` approach
from the earlier iteration because `Command` is the established
LangGraph primitive for state updates from nodes. Using `Command` means:

- State updates flow through the graph's reducers (e.g. `add_messages`) rather than being merged as raw dicts. This makes message updates additive alongside the model response instead of replacing it (see the reducer sketch after this list).
- Consistency with `wrap_tool_call`, which already returns `Command`.
- Future-proof: as `Command` gains new capabilities (e.g. `goto`,
`send`), middleware can leverage them without API changes.
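
A minimal sketch of the first point: `add_messages` merges a middleware command's messages with the existing history rather than overwriting it.

```python
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages

existing = [HumanMessage("hi"), AIMessage("hello")]
# A middleware command such as Command(update={"messages": [...]}) contributes
# an update like this, which the reducer appends instead of replacing.
update = [AIMessage("note from middleware")]

merged = add_messages(existing, update)
print(len(merged))  # 3: the middleware message is added, nothing is dropped
```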

## Why keep `model_response` separate instead of using `Command` directly?

The model node needs to distinguish the model's actual response
(messages + structured output) from supplementary middleware state
updates. If middleware returned only a `Command`, there would be no
clean way to extract the `ModelResponse` for structured output handling,
response validation, and the core model-to-tools routing logic. Keeping
`model_response` explicit preserves a clear boundary between "what the
model said" and "what middleware wants to update."

Also, to avoid breaking changes, the `handler` passed to `wrap_model_call`
must always return a `ModelResponse`. There's no clean way to preserve that
guarantee if everything were folded into a `Command`.

A nice side benefit of the `ExtendedModelResponse` structure is that it's
extensible if we want to add more metadata in the future.
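
A rough sketch of the split the model node has to make (illustrative only, reusing the `ExtendedModelResponse` dataclass above; the real internals differ):

```python
def split_result(result):
    """Illustrative only: separate the model's response from extra state updates."""
    if isinstance(result, ExtendedModelResponse):
        # The response drives structured output handling and model-to-tools
        # routing; the command is merged into state through the graph's reducers.
        return result.model_response, [result.command]
    # A plain ModelResponse (or AIMessage) carries no extra state updates.
    return result, []
```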

## Composition

When multiple middleware layers return `ExtendedModelResponse`, their
commands compose naturally:

- **Inner commands propagate outward:** At composition boundaries,
`ExtendedModelResponse` is unwrapped to its underlying `ModelResponse`
so outer middleware always sees a plain `ModelResponse` from
`handler()`. The inner command is captured and accumulated.
- **Commands are applied through reducers:** Each `Command` becomes a
separate state update applied through the graph's reducers. For messages,
this means updates are additive (via `add_messages`) rather than replacing
existing messages.
- **Outer wins on conflicts:** For non-reducer state fields, commands
are applied inner-first then outer, so the outermost middleware's value
takes precedence on conflicting keys.
- **Retry-safe:** When outer middleware retries by calling `handler()`
again, accumulated inner commands are cleared and re-collected from the
fresh call.

```python
class Outer(AgentMiddleware):
    def wrap_model_call(self, request, handler):
        response = handler(request)  # sees ModelResponse, not ExtendedModelResponse
        return ExtendedModelResponse(
            model_response=response,
            command=Command(update={"outer_key": "val"}),
        )

class Inner(AgentMiddleware):
    def wrap_model_call(self, request, handler):
        response = handler(request)
        return ExtendedModelResponse(
            model_response=response,
            command=Command(update={"inner_key": "val"}),
        )

# Final state merges both commands: {"inner_key": "val", "outer_key": "val"}
```
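
For context, wiring both middleware into an agent might look roughly like this (the model string is a placeholder, and the custom keys only surface in the result if the middleware declare matching state fields):

```python
from langchain.agents import create_agent

agent = create_agent(
    model="openai:gpt-4o-mini",      # placeholder model identifier
    tools=[],
    middleware=[Outer(), Inner()],   # Outer wraps Inner, which wraps the model call
)
result = agent.invoke({"messages": [{"role": "user", "content": "hi"}]})
# Assuming the middleware declare these custom state keys, `result` now
# carries both "outer_key" and "inner_key" alongside the messages.
```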

## Backwards compatibility

Fully backwards compatible. The `ModelCallResult` type alias is widened
from `ModelResponse | AIMessage` to `ModelResponse | AIMessage |
ExtendedModelResponse`, but existing middleware returning
`ModelResponse` or `AIMessage` continues to work identically.
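
For instance, a pre-existing middleware along these lines (sketched for illustration) needs no changes:

```python
class LegacyMiddleware(AgentMiddleware):
    def wrap_model_call(self, request, handler):
        # Returning the plain ModelResponse (or an AIMessage) keeps working
        # exactly as before; ExtendedModelResponse is strictly opt-in.
        return handler(request)
```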

## Internals

- `model_node` / `amodel_node` now return `list[Command]` instead of
`dict[str, Any]`
- `_build_commands` converts the model response + accumulated middleware
commands into a list of `Command` objects for LangGraph (sketched loosely below)
- `_ComposedExtendedModelResponse` is the internal type that accumulates
commands across layers during composition
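
A loose sketch of what `_build_commands` does conceptually (the `result` field name is an assumption, and the real implementation differs):

```python
def build_commands_sketch(model_response, middleware_commands):
    # Conceptual only: the model's own output becomes one Command...
    commands = [Command(update={"messages": model_response.result})]
    # ...and each accumulated middleware Command stays separate so every
    # update flows through the graph's reducers independently.
    return commands + list(middleware_commands)
```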

The platform for reliable agents.


LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development, all while future-proofing decisions as the underlying technology evolves.

pip install langchain

If you're looking for more advanced customization or agent orchestration, check out LangGraph, our framework for building controllable agent workflows.


Documentation:

Discussions: Visit the LangChain Forum to connect with the community and share all of your technical questions, ideas, and feedback.

Note

Looking for the JS/TS library? Check out LangChain.js.

Why use LangChain?

LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.
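
For example, the standard interface lets you spin up a chat model from a provider-prefixed string (the model name below is just an example):

```python
from langchain.chat_models import init_chat_model

model = init_chat_model("openai:gpt-4o-mini")  # any supported "provider:model" string
print(model.invoke("Say hello in five words or fewer").content)
```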

Use LangChain for:

  • Real-time data augmentation. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more.
  • Model interoperability. Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly: LangChain's abstractions keep you moving without losing momentum.
  • Rapid prototyping. Quickly build and iterate on LLM applications with LangChain's modular, component-based architecture. Test different approaches and workflows without rebuilding from scratch, accelerating your development cycle.
  • Production-ready features. Deploy reliable applications with built-in support for monitoring, evaluation, and debugging through integrations like LangSmith. Scale with confidence using battle-tested patterns and best practices.
  • Vibrant community and ecosystem. Leverage a rich ecosystem of integrations, templates, and community-contributed components. Benefit from continuous improvements and stay up-to-date with the latest AI developments through an active open-source community.
  • Flexible abstraction layers. Work at the level of abstraction that suits your needs - from high-level chains for quick starts to low-level components for fine-grained control. LangChain grows with your application's complexity.

LangChain ecosystem

While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.

To improve your LLM application development, pair LangChain with:

  • Deep Agents (new!) Build agents that can plan, use subagents, and leverage file systems for complex tasks
  • LangGraph Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
  • Integrations List of LangChain integrations, including chat & embedding models, tools & toolkits, and more
  • LangSmith Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
  • LangSmith Deployment Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams and iterate quickly with visual prototyping in LangSmith Studio.

Additional resources

  • API Reference Detailed reference on navigating base packages and integrations for LangChain.
  • Contributing Guide Learn how to contribute to LangChain projects and find good first issues.
  • Code of Conduct Our community guidelines and standards for participation.
  • LangChain Academy Comprehensive, free courses on LangChain libraries and products, made by the LangChain team.