**Description:** Right now, we interrupt even if the provided `ToolConfig`
has all false values. We should ignore `ToolConfig`s that do not have at
least one value set to true (just as we would if `tool_name: False` were
passed into the dict).
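For illustration, a config like the following (the `allow_*` keys here are hypothetical stand-ins for whatever action flags `ToolConfig` actually exposes) should no longer raise an interrupt:
```py
# Hypothetical keys: with every action disabled, this should now behave like
# `{"expensive_tool": False}` and raise no interrupt.
HumanInTheLoopMiddleware(
    tool_configs={
        "expensive_tool": {
            "allow_accept": False,
            "allow_edit": False,
            "allow_respond": False,
        }
    }
)
```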
# Main Changes
1. Adding decorator utilities for dynamically defining middleware with
single hook functions (see the dynamic system prompt example below)
2. Adding better conditional edge drawing, with jump configuration
attached to middleware. Can be registered w/ the new decorator!
## Decorator Utilities
```py
from langchain.agents.middleware_agent import create_agent, AgentState, ModelRequest
from langchain.agents.middleware.types import modify_model_request
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import InMemorySaver


@modify_model_request
def modify_system_prompt(request: ModelRequest, state: AgentState) -> ModelRequest:
    request.system_prompt = (
        "You are a helpful assistant. "
        f"Please record the number of previous messages in your response: {len(state['messages'])}"
    )
    return request


agent = create_agent(
    model="openai:gpt-4o-mini",
    middleware=[modify_system_prompt],
).compile(checkpointer=InMemorySaver())
```
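Continuing the snippet above, invocations just need a thread id since the agent was compiled with a checkpointer:
```py
# The system prompt now reports the number of prior messages on each turn.
result = agent.invoke(
    {"messages": [HumanMessage("Hi!")]},
    {"configurable": {"thread_id": "1"}},
)
```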
## Visualization and Routing improvements
We now require that middlewares define the valid jumps for each hook.
If using the new decorator syntax, this can be done with:
```py
@before_model(jump_to=["__end__"])
@after_model(jump_to=["tools", "__end__"])
```
If using the subclassing syntax, you can use these two class vars:
```py
class MyMiddleware(AgentMiddleware):
    before_model_jump_to = ["__end__"]
    after_model_jump_to = ["tools", "__end__"]
```
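For a sense of how a hook pairs with its registered jumps, here's a minimal sketch (the length guard is an invented example):
```py
from typing import Any

from langchain.agents.middleware.types import AgentState, before_model


@before_model(jump_to=["__end__"])
def stop_when_long(state: AgentState) -> dict[str, Any] | None:
    # Hypothetical guard: end the run once the conversation gets long.
    if len(state["messages"]) > 20:
        return {"jump_to": "__end__"}
    return None
```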
It's open for debate whether we want to bundle these into a single jump
map / config per middleware. It's easy to migrate later if we decide to
add more hooks. We will need to **really clearly document** that these
must be explicitly set in order to enable conditional edges.
Notice that in the case below, only `Middleware2` actually enables jumps.
<table>
<thead>
<tr>
<th>Before (broken), adding conditional edges unconditionally</th>
<th>After (fixed), adding conditional edges sparingly</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<img width="619" height="508" alt="Screenshot 2025-09-23 at 10 23 23 AM"
src="https://github.com/user-attachments/assets/bba2d098-a839-4335-8e8c-b50dd8090959"
/>
</td>
<td>
<img width="469" height="490" alt="Screenshot 2025-09-23 at 10 23 13 AM"
src="https://github.com/user-attachments/assets/717abf0b-fc73-4d5f-9313-b81247d8fe26"
/>
</td>
</tr>
</tbody>
</table>
<details>
<summary>Snippet for the above</summary>
```py
from langgraph.runtime import Runtime
from langchain.agents.middleware.types import AgentMiddleware, AgentState
from langchain.agents.middleware_agent import create_agent
from langchain_core.tools import tool


@tool
def simple_tool(input: str) -> str:
    """A simple tool."""
    return "successful tool call"


class Middleware1(AgentMiddleware):
    """Custom middleware that adds a simple tool."""

    tools = [simple_tool]

    def before_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

    def after_model(self, state: AgentState, runtime: Runtime) -> None:
        return None


class Middleware2(AgentMiddleware):
    """Only this middleware declares jumps, so only its node gets conditional edges."""

    before_model_jump_to = ["tools", "__end__"]

    def before_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

    def after_model(self, state: AgentState, runtime: Runtime) -> None:
        return None


class Middleware3(AgentMiddleware):
    def before_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

    def after_model(self, state: AgentState, runtime: Runtime) -> None:
        return None


builder = create_agent(
    model="openai:gpt-4o-mini",
    middleware=[Middleware1(), Middleware2(), Middleware3()],
    system_prompt="You are a helpful assistant.",
)
agent = builder.compile()
```
</details>
## More Examples
### Guardrails `after_model`
<img width="379" height="335" alt="Screenshot 2025-09-23 at 10 40 09 AM"
src="https://github.com/user-attachments/assets/45bac7dd-398e-45d1-ae58-6ecfa27dfc87"
/>
<details>
<summary>Code</summary>
```py
from typing import cast, Any

from langchain.agents.middleware_agent import create_agent, AgentState
from langchain.agents.middleware.types import after_model
from langchain_core.messages import HumanMessage, AIMessage


@after_model(jump_to=["model", "__end__"])
def after_model_hook(state: AgentState) -> dict[str, Any]:
    """Check the last AI message for safety violations."""
    last_message_content = cast(AIMessage, state["messages"][-1]).content.lower()
    unsafe_keywords = ["pineapple"]
    if any(keyword in last_message_content for keyword in unsafe_keywords):
        # Jump back to the model to regenerate the response
        return {
            "jump_to": "model",
            "messages": [
                HumanMessage(
                    "Please regenerate your response, and don't talk about pineapples. "
                    "You can talk about apples instead."
                )
            ],
        }
    return {"jump_to": "__end__"}


# Create agent with guardrails middleware
agent = create_agent(
    model="openai:gpt-4o-mini",
    middleware=[after_model_hook],
    system_prompt="Keep your responses to one sentence please!",
).compile()

# Test with potentially unsafe input
result = agent.invoke(
    {"messages": [HumanMessage("Tell me something about pineapples")]},
)

for msg in result["messages"]:
    msg.pretty_print()

"""
================================ Human Message =================================
Tell me something about pineapples
================================== Ai Message ==================================
Pineapples are tropical fruits known for their sweet, tangy flavor and distinctive spiky exterior.
================================ Human Message =================================
Please regenerate your response, and don't talk about pineapples. You can talk about apples instead.
================================== Ai Message ==================================
Apples are popular fruits that come in various varieties, known for their crisp texture and sweetness, and are often used in cooking and baking.
"""
```
</details>
Mostly adding descriptive frontmatter to workflow files. Also addresses
some formatting issues and outdated artifacts.
No functional changes outside of
[d5457c3](d5457c39ee),
[90708a0](90708a0d99),
and
[338c82d](338c82d21e)
The file-based and title-based labeler workflows were conflicting,
causing the bot to add and remove identical labels in the same
operation. Hopefully this fixes that.
- Removes Codespell from deps, docs, and `Makefile`s
- Python version requirements in all `pyproject.toml` files now use the
`~=` (compatible release) specifier (see the sketch below)
- All dependency groups and main dependencies now use explicit lower and
upper bounds, reducing the potential for breaking changes
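For anyone unfamiliar with `~=`, a quick illustration of its semantics via the `packaging` library (the version numbers are arbitrary):
```py
from packaging.specifiers import SpecifierSet

# `~=0.3.4` is equivalent to `>=0.3.4, ==0.3.*`: patch bumps allowed,
# minor and major bumps excluded.
spec = SpecifierSet("~=0.3.4")
print("0.3.9" in spec)  # True
print("0.4.0" in spec)  # False
```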
We want the state schema as the input schema to middleware nodes because
the conditional edges after these nodes need access to the full state.
Also, we generally want all state passed to middleware nodes, so we
should specify this explicitly. If we don't, the state annotations users
write in their node signatures are used instead (so fields might be
missing).
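Roughly speaking, this amounts to registering middleware nodes with an explicit input schema rather than letting langgraph infer one from the hook's annotations. A one-line sketch (hypothetical node and hook names; the keyword is `input` or `input_schema` depending on the langgraph version):
```py
# Pass the full agent state schema explicitly so the conditional edges after
# this node can read every field.
builder.add_node("before_model_0", before_model_hook, input=AgentState)
```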
# Changes
## Adds support for `DynamicSystemPromptMiddleware`
```py
from langchain.agents.middleware import DynamicSystemPromptMiddleware
from langchain.agents.middleware.types import AgentState
from langgraph.runtime import Runtime
from typing_extensions import TypedDict


class Context(TypedDict):
    user_name: str


def system_prompt(state: AgentState, runtime: Runtime[Context]) -> str:
    user_name = runtime.context.get("user_name", "n/a")
    return f"You are a helpful assistant. Always address the user by their name: {user_name}"


middleware = DynamicSystemPromptMiddleware(system_prompt)
```
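Presumably wired up like so (a sketch; the `context` kwarg follows the runtime-context pattern in recent langgraph releases):
```py
from langchain.agents.middleware_agent import create_agent
from langchain_core.messages import HumanMessage

agent = create_agent(
    model="openai:gpt-4o-mini",
    middleware=[middleware],
).compile()

result = agent.invoke(
    {"messages": [HumanMessage("Hi!")]},
    context=Context(user_name="Sydney"),
)
```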
## Adds support for `runtime` in middleware hooks
```py
class AgentMiddleware(Generic[StateT, ContextT]):
    def modify_model_request(
        self,
        request: ModelRequest,
        state: StateT,
        runtime: Runtime[ContextT],  # Optional runtime parameter
    ) -> ModelRequest:
        # upgrade model if runtime.context.subscription is `top-tier` or whatever
        ...
```
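For example, a hypothetical middleware that upgrades the model for premium users (the `subscription` key and the string model spec are invented for illustration):
```py
from typing_extensions import TypedDict

from langchain.agents.middleware.types import AgentMiddleware, AgentState
from langchain.agents.middleware_agent import ModelRequest
from langgraph.runtime import Runtime


class SubscriptionContext(TypedDict):
    subscription: str


class SubscriptionModelMiddleware(AgentMiddleware[AgentState, SubscriptionContext]):
    def modify_model_request(
        self,
        request: ModelRequest,
        state: AgentState,
        runtime: Runtime[SubscriptionContext],
    ) -> ModelRequest:
        # Hypothetical context key; assumes a string model spec is accepted here.
        if runtime.context.get("subscription") == "top-tier":
            request.model = "openai:gpt-4o"
        return request
```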
## Adds support for omitting state attributes from input / output schemas
```py
from typing import Annotated
from typing_extensions import NotRequired

from langchain.agents.middleware.types import (
    AgentState,
    OmitFromInput,
    OmitFromOutput,
    PrivateStateAttr,
)


class CustomState(AgentState):
    # Private field - not in input or output schemas
    internal_counter: NotRequired[Annotated[int, PrivateStateAttr]]

    # Input-only field - not in output schema
    user_input: NotRequired[Annotated[str, OmitFromOutput]]

    # Output-only field - not in input schema
    computed_result: NotRequired[Annotated[str, OmitFromInput]]
```
## Additionally
* Removes filtering of state before passing into middleware hooks

Typing is not foolproof here; we still need to figure out some of the
generics stuff w/ state and context schema extensions for middleware.

TODO:
* More docs for middleware; should hold off on this until other
priorities like MCP and deepagents are met
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
## Summary
This PR fixes several bugs that would have prevented the example code in
the `BaseChatMessageHistory` docstring from working correctly, and
improves it.
### Bugs Fixed
- **Critical bug**: Fixed `json.dump(messages, f)` →
`json.dump(serialized, f)`; the wrong variable was being serialized
- **NameError**: Fixed bare variable references to use
`self.storage_path` and `self.session_id`
- **Missing imports**: Added required imports (`json`, `os`, message
converters) to make the example runnable
### Improvements
- Added missing type hints following project standards (`messages() ->
list[BaseMessage]`, `clear() -> None`)
- Added robust error handling with `FileNotFoundError` exception
handling
- Added directory creation with `os.makedirs(exist_ok=True)` to prevent
path errors
- Improved performance: `json.load(f)` instead of `json.loads(f.read())`
- Added explicit UTF-8 encoding to all file operations
- Updated `stores.py` to use modern union syntax (`int | None` instead of
`Optional[int]`)
### Test Plan
- [x] Code passes linting (`ruff check`)
- [x] Example code now has all required imports and proper syntax
- [x] Fixed variable references prevent runtime errors
- [x] Follows project's type annotation standards
The example code in the docstring is now fully functional and follows
LangChain's coding standards.
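Put together, the corrected docstring example looks roughly like this (a sketch reconstructed from the fixes above, not the verbatim docstring; the `_file_path` helper is shorthand):
```py
import json
import os

from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import (
    BaseMessage,
    messages_from_dict,
    messages_to_dict,
)


class FileChatMessageHistory(BaseChatMessageHistory):
    def __init__(self, storage_path: str, session_id: str) -> None:
        self.storage_path = storage_path
        self.session_id = session_id
        # Directory creation prevents path errors on first write
        os.makedirs(storage_path, exist_ok=True)

    @property
    def messages(self) -> list[BaseMessage]:
        try:
            with open(self._file_path, encoding="utf-8") as f:
                return messages_from_dict(json.load(f))
        except FileNotFoundError:
            # Robust handling for a session with no history yet
            return []

    def add_messages(self, messages: list[BaseMessage]) -> None:
        serialized = messages_to_dict(list(self.messages) + list(messages))
        with open(self._file_path, "w", encoding="utf-8") as f:
            # The critical fix: dump `serialized`, not `messages`
            json.dump(serialized, f)

    def clear(self) -> None:
        with open(self._file_path, "w", encoding="utf-8") as f:
            json.dump([], f)

    @property
    def _file_path(self) -> str:
        return os.path.join(self.storage_path, f"{self.session_id}.json")
```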
---------
Co-authored-by: sadiqkhzn <sadiqkhzn@users.noreply.github.com>
- **Description:** Updated the dead/unreachable links to Docling in
the additional resources section of the langchain-docling docs
- **Issue:** Fixes langchain-ai/docs/issues/574
- **Dependencies:** None
# Main changes / new features
## Better support for parallel tool calls
1. Support for multiple tool calls requiring human input
2. Support for a combination of tool calls requiring human input and
those that are auto-approved
3. Support for structured output w/ tool calls requiring human input
4. Support for structured output w/ standard tool calls
## Shortcut for allowed actions
Adds a shortcut where tool config can be specified as a `bool`, meaning
"all actions allowed"
```py
HumanInTheLoopMiddleware(tool_configs={"expensive_tool": True})
```
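Conversely, `False` disables interrupts for a tool entirely, so mixed configs look something like:
```py
# `expensive_tool` requires human approval for all actions;
# `cheap_tool` executes without any interrupt.
HumanInTheLoopMiddleware(
    tool_configs={
        "expensive_tool": True,
        "cheap_tool": False,
    }
)
```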
## A few design decisions here
* We only raise one interrupt w/ all `HumanInterrupt`s; currently, we
won't be able to execute any tools until all of these are resolved. This
isn't super blocking because we can't re-invoke the model until all
tools have finished execution anyway. That said, if you have a
long-running auto-approved tool, this could slow things down.
## TODOs
* Ideally, we would rename `accept` -> `approve`
* Ideally, we would rename `respond` -> `reject`
* Docs update (@sydney-runkle to own)
* In another PR I'd like to refactor testing to have one file for each
prebuilt middleware :)
Fast follow to https://github.com/langchain-ai/langchain/pull/32962,
which was deemed too breaking.
Adds documentation for the langchain-scraperapi integration, which
contains 3 tools using the ScraperAPI service.
The tools give AI agents the ability to:
- Scrape the web and return HTML/text/markdown
- Perform a Google search and return JSON output
- Perform an Amazon search and return JSON output

For reference, here is the official repo for langchain_scraperapi:
https://github.com/scraperapi/langchain-scraperapi
Replaced the `input_message` variable with an inline tuple, e.g.
`{"messages": [("user", "What is my name?")]}`.
Before, memory wasn't working with the agent when the input was passed
via the `input_message` variable. Specifically, on the page [Build an
Agent#adding-in-memory](https://python.langchain.com/docs/tutorials/agents/#adding-in-memory),
the query "What's my name?" wasn't working, as the agent could not
recall memory correctly.
<img width="860" height="679" alt="image"
src="https://github.com/user-attachments/assets/dfbca21e-ffe9-4645-a810-3be7a46d81d5"
/>
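Roughly, the updated invocation looks like this (`agent_executor` and the thread id follow the tutorial's setup):
```py
config = {"configurable": {"thread_id": "abc123"}}

# Pass the message as an inline tuple rather than a shared `input_message` variable
for step in agent_executor.stream(
    {"messages": [("user", "What is my name?")]},
    config,
    stream_mode="values",
):
    step["messages"][-1].pretty_print()
```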
This PR improves navigation in the summarization how-to section by
adding cross-links from the single-call guide to the related map-reduce
and refine guides. This mirrors the docs style guide's emphasis on clear
cross-references and should help readers discover the appropriate
pattern for longer texts.
- Source edited: docs/docs/how_to/summarize_stuff.ipynb
- Links added:
- /docs/how_to/summarize_map_reduce/
- /docs/how_to/summarize_refine/
Type: docs-only (no code changes)
**Description:**
Add a docstring to `_load_map_reduce_chain` in `chains/summarize/` to
explain the purpose of the `prompt` argument and document function
parameters. This addresses an existing TODO in the codebase.
**Issue:**
N/A (documentation improvement only)
**Dependencies:**
None
**Description:**
Add a docstring to `_load_stuff_chain` in `chains/summarize/` to explain
the purpose of the `prompt` argument and document function parameters.
This addresses an existing TODO in the codebase.
**Issue:**
N/A (documentation improvement only)
**Dependencies:**
None