## Description
Fixes #33453
`ModelResponse` was defined in `types.py` and included in its `__all__`
list, but was not exported from the middleware package's `__init__.py`.
This caused `ImportError` when attempting to import it directly
from `langchain.agents.middleware`, despite being documented as a public
export.
## Changes
- Added `ModelResponse` to the import statement in
`langchain/agents/middleware/__init__.py`
- Added `ModelResponse` to the `__all__` list in
  `langchain/agents/middleware/__init__.py` (both changes are sketched below)
- Added comprehensive unit tests in `test_imports.py` to verify the
import works correctly
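For reference, a minimal sketch of what the export change looks like, assuming the package re-exports directly from `types.py` (the surrounding imports and the rest of `__all__` are elided):

```python
# langchain/agents/middleware/__init__.py (sketch; existing exports elided)
from langchain.agents.middleware.types import ModelResponse  # added

__all__ = [
    # ... existing public names ...
    "ModelResponse",  # added
]
```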
## Issue
The original issue reported that the following import failed:
```python
from langchain.agents.middleware import ModelResponse
# ImportError: cannot import name 'ModelResponse' from 'langchain.agents.middleware'
```
The workaround was to import from the submodule:
```python
from langchain.agents.middleware.types import ModelResponse  # Workaround
```
## Solution
After this fix, `ModelResponse` can be imported directly as documented:
```python
from langchain.agents.middleware import ModelResponse  # Now works!
```
## Testing
- ✅ Added 3 unit tests in
  `tests/unit_tests/agents/middleware/test_imports.py` (a representative test is sketched below)
- ✅ All tests pass locally: `make format`, `make lint`, `make test`
- ✅ Verified `ModelResponse` is properly exported and importable
- ✅ Verified `ModelResponse` appears in the `__all__` list
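For illustration, a sketch of what such a test can look like (the test names here are assumptions, not the actual ones in the file):

```python
# Sketch only; the real test_imports.py may organize these checks differently.
from langchain.agents import middleware
from langchain.agents.middleware import ModelResponse


def test_model_response_importable() -> None:
    # The direct package-level import should now succeed.
    assert ModelResponse is not None


def test_model_response_in_all() -> None:
    # ModelResponse should be listed as a public export.
    assert "ModelResponse" in middleware.__all__
```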
## Dependencies
None. This is a simple export fix with no new dependencies.
---------
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
This is tool emulation middleware. The idea is to help test an agent whose
tools either take a long time to run or are expensive to set up: the
middleware lets you simulate those tools' behavior instead of actually
executing them.
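As a purely hypothetical usage sketch (the class name `ToolEmulationMiddleware` and its `tools` argument are assumptions for illustration, not the actual API added by this PR):

```python
from langchain.agents import create_agent
from langchain_core.tools import tool

# Hypothetical name; the middleware from this PR may be exposed differently.
from langchain.agents.middleware import ToolEmulationMiddleware


@tool
def provision_cluster(size: int) -> str:
    """Provision a compute cluster (slow and expensive to run for real)."""
    return f"cluster-{size} ready"


# Emulate the expensive tool instead of executing it while exercising the agent.
agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[provision_cluster],
    middleware=[ToolEmulationMiddleware(tools=["provision_cluster"])],
)
```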
**Description:** currently `mustache_schema("{{x.y}} {{x}}")` raises an error;
this PR fixes that (see the sketch below).
**Issue:** N/A
**Dependencies:** N/A
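A minimal sketch of the previously failing call (the import path is assumed; `mustache_schema` infers a prompt's input schema from a mustache template):

```python
# Import path assumed; adjust if mustache_schema lives elsewhere.
from langchain_core.prompts.string import mustache_schema

# Before this fix, mixing a nested variable ({{x.y}}) with the same top-level
# variable ({{x}}) raised an error; after it, the inferred schema is returned.
schema = mustache_schema("{{x.y}} {{x}}")
print(schema)
```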
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Easiest to review side by side (not inline)
* Adding `dict` type requests + responses so that we can ship config w/
interrupts. Also more extensible.
* Keeping things generic in terms of `interrupt_on` rather than
`tool_config`
* Renaming the allowed decisions: approve, edit, reject
* Draws a distinction between actions (requested and performed by the agent;
  here tool calls, though we generalize beyond that) and decisions (human
  feedback on those actions)
New request structure
```py
from typing import Any, Literal

from typing_extensions import NotRequired, TypedDict


class Action(TypedDict):
    """Represents an action with a name and arguments."""

    name: str
    """The type or name of action being requested (e.g., "add_numbers")."""
    arguments: dict[str, Any]
    """Key-value pairs of arguments needed for the action (e.g., {"a": 1, "b": 2})."""


DecisionType = Literal["approve", "edit", "reject"]


class ReviewConfig(TypedDict):
    """Policy for reviewing a HITL request."""

    action_name: str
    """Name of the action associated with this review configuration."""
    allowed_decisions: list[DecisionType]
    """The decisions that are allowed for this request."""
    description: NotRequired[str]
    """The description of the action to be reviewed."""
    arguments_schema: NotRequired[dict[str, Any]]
    """JSON schema for the arguments associated with the action, if edits are allowed."""


class HITLRequest(TypedDict):
    """Request for human feedback on a sequence of actions requested by a model."""

    action_requests: list[Action]
    """A list of agent actions for human review."""
    review_configs: list[ReviewConfig]
    """Review configuration for all possible actions."""
```
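For illustration, a request built from these types (the tool name and arguments below are made up):

```py
example_request: HITLRequest = {
    "action_requests": [
        {"name": "execute_sql", "arguments": {"query": "DELETE FROM logs"}},
    ],
    "review_configs": [
        {
            "action_name": "execute_sql",
            "allowed_decisions": ["approve", "edit", "reject"],
            "description": "please review sensitive tool execution",
        },
    ],
}
```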
New response structure
```py
class ApproveDecision(TypedDict):
    """Response when a human approves the action."""

    type: Literal["approve"]
    """The type of response when a human approves the action."""


class EditDecision(TypedDict):
    """Response when a human edits the action."""

    type: Literal["edit"]
    """The type of response when a human edits the action."""
    edited_action: Action
    """Edited action for the agent to perform.

    Ex: for a tool call, a human reviewer can edit the tool name and args.
    """


class RejectDecision(TypedDict):
    """Response when a human rejects the action."""

    type: Literal["reject"]
    """The type of response when a human rejects the action."""
    message: NotRequired[str]
    """The message sent to the model explaining why the action was rejected."""


Decision = ApproveDecision | EditDecision | RejectDecision


class HITLResponse(TypedDict):
    """Response payload for a HITLRequest."""

    decisions: list[Decision]
    """The decisions made by the human."""
```
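And an illustrative matching response, with one decision per requested action:

```py
example_response: HITLResponse = {
    "decisions": [
        {"type": "reject", "message": "db down"},
    ],
}
```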
User-facing API:
NEW
```py
HumanInTheLoopMiddleware(
    interrupt_on={
        'send_email': True,
        # can also use a callable for description that takes tool call, state, and runtime
        'execute_sql': {
            'allowed_decisions': ['approve', 'edit', 'reject'],
            'description': 'please review sensitive tool execution',
        },
    }
)

Command(resume={"decisions": [{"type": "approve"}, {"type": "reject", "message": "db down"}]})
```
OLD
```py
HumanInTheLoopMiddleware(
    interrupt_on={
        'send_email': True,
        'execute_sql': {
            'allow_accept': True,
            'allow_edit': True,
            'allow_respond': True,
            'description': 'please review sensitive tool execution',
        },
    }
)

Command(resume=[{"type": "approve"}, {"type": "reject", "message": "db down"}])
```