The Ollama chat model adapter does not support all of the possible
message content formats, which causes the adapter to crash on some
messages produced by other models (e.g. Gemini 2.5 Flash).
These changes should fix one known scenario: when `content` is a list
containing a string.
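Below is a minimal sketch of the message shape in question and the kind of normalization involved; the helper name is hypothetical and this is not the actual adapter code.
```python
from langchain_core.messages import AIMessage

# Some models (e.g. Gemini 2.5 Flash) produce content as a list that mixes
# plain strings with content blocks:
msg = AIMessage(content=["Partial thought...", {"type": "text", "text": "Final answer"}])

def _content_to_text(content: str | list) -> str:
    """Hypothetical helper: flatten list content into a single string."""
    if isinstance(content, str):
        return content
    parts = []
    for block in content:
        if isinstance(block, str):
            parts.append(block)
        elif isinstance(block, dict) and block.get("type") == "text":
            parts.append(block.get("text", ""))
    return "".join(parts)

print(_content_to_text(msg.content))  # "Partial thought...Final answer"
```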
This allows using PEP 604 union syntax for `ToolNode` error handlers:
```python
from langchain_core.tools import ToolException
from langgraph.prebuilt import ToolNode

def error_handler(e: ValueError | ToolException) -> str:
    return "error"

ToolNode(my_tool, handle_tool_errors=error_handler).invoke(...)
```
Without this change, this fails with `AttributeError: 'types.UnionType'
object has no attribute '__mro__'`.
This is better than using a subclass, since returning a `property` keeps
`ClassWithBetaMethods.beta_property.__doc__` working.
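A minimal sketch of the idea, assuming a simplified `beta` decorator (the real implementation differs):
```python
def beta(prop: property) -> property:
    """Simplified sketch: mark a property as beta while keeping its docstring."""
    # Returning a plain `property` (rather than a property subclass) means
    # accessing the attribute on the class yields a normal property object,
    # so its `__doc__` resolves as expected.
    return property(prop.fget, prop.fset, prop.fdel, doc=f"[Beta] {prop.__doc__}")

class ClassWithBetaMethods:
    @beta
    @property
    def beta_property(self) -> str:
        """A beta property."""
        return "value"

print(ClassWithBetaMethods.beta_property.__doc__)  # "[Beta] A beta property."
```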
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Added the `id` field to the `Document` passed to `filter` in
`InMemoryVectorStore` similarity search. This allows filtering by Document
id and brings the filter's input in line with the results returned by the
vector similarity search.
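A minimal sketch of the new capability, using a deterministic fake embedding for illustration:
```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=16))
store.add_documents([
    Document(id="doc-1", page_content="alpha"),
    Document(id="doc-2", page_content="beta"),
])

# The Document handed to `filter` now carries its `id`, so you can filter on it,
# matching the ids returned in the search results.
hits = store.similarity_search("alpha", k=2, filter=lambda doc: doc.id == "doc-1")
print([doc.id for doc in hits])  # ["doc-1"]
```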
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
- **Description:** The vectorstore standard tests mistakenly assume that
the store's `get_by_ids` respects the order of the provided `ids`. This
is not the case (as the base class docstring states). This PR fixes the
tests that would otherwise fail (see issue #32820 for details and a
repro); an order-agnostic comparison is sketched below. Fixes #32820
- **Issue:** Fixes #32820
- **Dependencies:** none
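A minimal sketch of an order-agnostic check (illustrative, not the exact test code):
```python
# Given an existing `vectorstore` and a list of `documents`:
ids = vectorstore.add_documents(documents)
retrieved = vectorstore.get_by_ids(ids)

# `get_by_ids` does not guarantee the order of the returned documents,
# so compare by id rather than by position.
assert sorted(doc.id for doc in retrieved) == sorted(ids)
```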
Co-authored-by: Stefano Lottini <stefano.lottini@ibm.com>
## Overview
Adds a new `AgentMiddleware` primitive that supports `before_model`,
`after_model`, and `prepare_model_request` hooks (a usage sketch follows
the list below).
This is very exciting! It makes our `create_agent` prebuilt much more
extensible + capable. Still in alpha and subject to change.
This differs from the initial
[implementation](https://github.com/langchain-ai/langgraph/tree/nc/25aug/agent)
in that it:
* Fills in missing features, e.g. new structured output, optional tools +
system prompt, sync and async model requests, provider builtin tools
* Exposes private state extensions for middleware, enabling things like
model call tracking
* Lets middleware register tools
* Uses a `TypedDict` for `AgentState` -- dataclass subclassing is tricky
w/ required values + required decorators
* Adds `model_settings` to `ModelRequest` so that things can be passed
through to `bind` (like cache kwargs for Anthropic middleware)
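A minimal sketch of a middleware subclass using the hooks above. The import path, hook signatures, and `create_agent` wiring shown here are assumptions; the API is in alpha and subject to change.
```python
from langchain.agents import create_agent
from langchain.agents.middleware import AgentMiddleware, AgentState, ModelRequest

class ModelCallTrackingMiddleware(AgentMiddleware):
    """Tracks model calls via a private state extension (illustrative)."""

    def before_model(self, state: AgentState) -> dict | None:
        # Returning a dict merges an update into the agent state.
        return {"model_call_count": state.get("model_call_count", 0) + 1}

    def prepare_model_request(self, request: ModelRequest, state: AgentState) -> ModelRequest:
        # `model_settings` is passed through to `bind` (e.g. cache kwargs).
        request.model_settings = {**request.model_settings, "temperature": 0}
        return request

    def after_model(self, state: AgentState) -> dict | None:
        # Inspect the latest AIMessage, enforce limits, etc.
        return None

agent = create_agent("openai:gpt-4o-mini", middleware=[ModelCallTrackingMiddleware()])
```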
## TODOs
### top prio
- [x] add middleware support to existing agent
- [x] top prio middlewares
  - [x] summarization node
  - [x] HITL
  - [x] prompt caching
- other middlewares
  - [x] model call limits
  - [x] tool calling limits
  - [ ] usage (requires output state)
### secondary prio
- [x] improve typing for state updates from middleware (not working
right now w/ simple `AgentUpdate` and `AgentJump`, at least in Python)
- [ ] add support for public state (input / output modifications via
pregel channel mods) -- to be tackled in another PR
- [x] testing!
### docs
See https://github.com/langchain-ai/docs/pull/390
- [x] high level docs about middleware
- [x] summarization node
- [x] HITL
- [x] prompt caching
## open questions
Lots of open questions right now; many of them are inlined as comments
for the short term, and the more significant ones will be cataloged here.
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>