Mason Daugherty
ffb1a08871
style(infra): use modern Optional typing in script ( #33361 )
2025-10-08 21:09:43 -04:00
Mason Daugherty
d13823043d
style: monorepo pass for refs ( #33359 )
...
* Delete some double backticks previously used by Sphinx (not done
everywhere yet)
* Fix some code blocks / dropdowns
Ignoring CLI CI for now
2025-10-08 18:41:39 -04:00
Eugene Yurtsev
b665b81a0e
chore(langchain_v1): simplify on model call logic ( #33358 )
...
Moving from the generator pattern to the slightly less verbose (but explicit) handler pattern. This will be more familiar to users.
**Before (Generator Pattern):**
```python
def on_model_call(self, request, state, runtime):
try:
result = yield request
except Exception:
result = yield request # Retry
```
**After (Handler Pattern):**
```python
def on_model_call(self, request, state, runtime, handler):
try:
return handler(request)
except Exception:
return handler(request) # Retry
```
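The handler pattern also composes naturally, since each middleware simply receives the next layer as `handler`. A minimal sketch under assumed names (`compose`, `retry_once`, and `flaky_model` are illustrative, not the actual langchain API):

```python
def retry_once(request, handler):
    """Middleware as a plain function: delegate, retry once on failure."""
    try:
        return handler(request)
    except Exception:
        return handler(request)  # Retry

def compose(middleware_fns, terminal):
    """Fold right so the first middleware in the list is outermost."""
    handler = terminal
    for mw in reversed(middleware_fns):
        handler = (lambda mw, nxt: (lambda req: mw(req, nxt)))(mw, handler)
    return handler

calls = []
def flaky_model(request):
    """Fails on the first call, succeeds on the second."""
    calls.append(request)
    if len(calls) == 1:
        raise RuntimeError("transient")
    return "ok"

result = compose([retry_once], flaky_model)("req")  # result == "ok"
```

The first delegation raises, the retry succeeds, and no generator priming or `.send()` bookkeeping is needed.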
2025-10-08 17:23:11 -04:00
Mason Daugherty
6b9b177b89
chore(openai): base.py ref pass ( #33355 )
2025-10-08 16:08:52 -04:00
Mason Daugherty
b1acf8d931
chore: fix dropdown default open admonition in refs ( #33354 )
2025-10-08 18:50:44 +00:00
Eugene Yurtsev
97f731da7e
chore(langchain_v1): remove unused internal namespace ( #33352 )
...
Remove unused internal namespace. We'll likely restore a part of it for
lazy loading optimizations later.
2025-10-08 14:08:07 -04:00
Eugene Yurtsev
1bf29da0d6
feat(langchain_v1): add on_tool_call middleware hook ( #33329 )
...
Adds generator-based middleware for intercepting tool execution in
agents. Middleware can retry on errors, cache results, modify requests,
or short-circuit execution.
### Implementation
**Middleware Protocol**
```python
class AgentMiddleware:
def on_tool_call(
self,
request: ToolCallRequest,
state: StateT,
runtime: Runtime[ContextT],
) -> Generator[ToolCallRequest | ToolMessage | Command, ToolMessage | Command, None]:
"""
Yields: ToolCallRequest (execute), ToolMessage (cached result), or Command (control flow)
Receives: ToolMessage or Command via .send()
Returns: None (final result is last value sent to handler)
"""
yield request # passthrough
```
**Composition**
Multiple middleware compose automatically (first = outermost), with
`_chain_tool_call_handlers()` stacking them like nested function calls.
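A simplified sketch of how such chaining might work. The class definitions are stand-ins, and the real `_chain_tool_call_handlers()` also threads `state`/`runtime` and handles `Command`, all omitted here: each generator is adapted into a plain handler, then handlers are folded so the first middleware is outermost.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCallRequest:  # stand-in for the real ToolCallRequest
    name: str
    args: dict = field(default_factory=dict)

@dataclass
class ToolMessage:  # stand-in for the real ToolMessage
    content: str
    status: str = "success"

def adapt(middleware, inner):
    """Turn one generator middleware into a plain handler over `inner`."""
    def handler(request):
        gen = middleware(request)
        result = None
        try:
            item = next(gen)
            while True:
                if isinstance(item, ToolCallRequest):
                    try:
                        result = inner(item)   # delegate to the next layer
                    except Exception as exc:
                        item = gen.throw(exc)  # let the middleware react
                        continue
                else:                          # ToolMessage: short-circuit
                    result = item
                item = gen.send(result)
        except StopIteration:
            return result                      # last value sent wins
    return handler

def chain_tool_call_handlers(middlewares, execute_tool):
    handler = execute_tool
    for mw in reversed(middlewares):           # first = outermost
        handler = adapt(mw, handler)
    return handler

def passthrough(request):
    yield request

def uppercase_result(request):
    response = yield request
    yield ToolMessage(content=response.content.upper())

chained = chain_tool_call_handlers(
    [passthrough, uppercase_result],
    lambda req: ToolMessage(content="ok"),
)
message = chained(ToolCallRequest(name="echo"))  # message.content == "OK"
```

`passthrough` wraps `uppercase_result`, which wraps the terminal executor, so each request flows inward and each `ToolMessage` flows back out through the same layers.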
### Examples
**Retry on error:**
```python
class RetryMiddleware(AgentMiddleware):
def on_tool_call(self, request, state, runtime):
for attempt in range(3):
response = yield request
if not isinstance(response, ToolMessage) or response.status != "error":
return
if attempt == 2:
return # Give up
```
**Cache results:**
```python
class CacheMiddleware(AgentMiddleware):
def on_tool_call(self, request, state, runtime):
cache_key = (request.tool_call["name"], tuple(request.tool_call["args"].items()))
if cached := self.cache.get(cache_key):
yield ToolMessage(content=cached, tool_call_id=request.tool_call["id"])
else:
response = yield request
self.cache[cache_key] = response.content
```
**Emulate tools with LLM**
```python
class ToolEmulator(AgentMiddleware):
def on_tool_call(self, request, state, runtime):
prompt = f"""Emulate: {request.tool_call["name"]}
Description: {request.tool.description}
Args: {request.tool_call["args"]}
Return ONLY the tool's output."""
response = emulator_model.invoke([HumanMessage(prompt)])
yield ToolMessage(
content=response.content,
tool_call_id=request.tool_call["id"],
name=request.tool_call["name"],
)
```
**Modify requests:**
```python
class ScalingMiddleware(AgentMiddleware):
def on_tool_call(self, request, state, runtime):
if "value" in request.tool_call["args"]:
request.tool_call["args"]["value"] *= 2
yield request
```
2025-10-08 16:43:32 +00:00
Eugene Yurtsev
2c3fec014f
feat(langchain_v1): on_model_call middleware ( #33328 )
...
Introduces a generator-based `on_model_call` hook that allows middleware
to intercept model calls with support for retry logic, error handling,
response transformation, and request modification.
## Overview
Middleware can now implement `on_model_call()` using a generator
protocol that:
- **Yields** `ModelRequest` to execute the model
- **Receives** `AIMessage` via `.send()` on success, or exception via
`.throw()` on error
- **Yields again** to retry or transform responses
- Uses **implicit last-yield semantics** (no return values from
generators)
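The `.send()`/`.throw()` mechanics can be illustrated by driving a middleware generator by hand (with a minimal `AIMessage` stand-in; the real framework also supplies `state` and `runtime`, omitted here):

```python
class AIMessage:  # minimal stand-in for langchain_core's AIMessage
    def __init__(self, content):
        self.content = content

def uppercase(request):
    result = yield request                     # yield: execute the model
    yield AIMessage(result.content.upper())    # last yield is the result

gen = uppercase("fake-request")
sent = next(gen)                               # middleware yields the request
final = gen.send(AIMessage("hello"))           # framework sends model reply

def fallback(request):
    try:
        yield request
    except Exception:
        yield AIMessage("Service unavailable")  # convert error to response

gen2 = fallback("fake-request")
next(gen2)
recovered = gen2.throw(RuntimeError("model down"))  # framework reports error
```

`sent` is the original request, `final.content` is `"HELLO"`, and `recovered.content` is `"Service unavailable"`; a further `send()` would raise `StopIteration`, telling the framework to take the last yielded value as the result.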
## Usage Examples
### Basic Retry on Error
```python
from langchain.agents.middleware.types import AgentMiddleware
class RetryMiddleware(AgentMiddleware):
def on_model_call(self, request, state, runtime):
for attempt in range(3):
try:
yield request # Execute model
break # Success
except Exception:
if attempt == 2:
raise # Max retries exceeded
```
### Response Transformation
```python
class UppercaseMiddleware(AgentMiddleware):
def on_model_call(self, request, state, runtime):
result = yield request
modified = AIMessage(content=result.content.upper())
yield modified # Return transformed response
```
### Error Recovery
```python
class FallbackMiddleware(AgentMiddleware):
def on_model_call(self, request, state, runtime):
try:
yield request
except Exception:
fallback = AIMessage(content="Service unavailable")
yield fallback # Convert error to fallback response
```
### Caching / Short-Circuit
```python
class CacheMiddleware(AgentMiddleware):
def on_model_call(self, request, state, runtime):
if cached := get_cache(request):
yield cached # Skip model execution
else:
result = yield request
save_cache(request, result)
```
### Request Modification
```python
class SystemPromptMiddleware(AgentMiddleware):
def on_model_call(self, request, state, runtime):
modified_request = ModelRequest(
model=request.model,
system_prompt="You are a helpful assistant.",
messages=request.messages,
tools=request.tools,
)
yield modified_request
```
### Function Decorator
```python
from langchain.agents.middleware.types import on_model_call
@on_model_call
def retry_three_times(request, state, runtime):
for attempt in range(3):
try:
yield request
break
except Exception:
if attempt == 2:
raise
agent = create_agent(model="openai:gpt-4o", middleware=[retry_three_times])
```
## Middleware Composition
Middleware compose with first in list as outermost layer:
```python
agent = create_agent(
model="openai:gpt-4o",
middleware=[
RetryMiddleware(), # Outer - wraps others
LoggingMiddleware(), # Middle
UppercaseMiddleware(), # Inner - closest to model
]
)
```
2025-10-08 12:34:04 -04:00
Mason Daugherty
4c38157ee0
fix(core): don't print package if no version found ( #33347 )
...
This is polluting issues, making it hard to find issues that apply to a query.
2025-10-07 23:14:17 -04:00
Sydney Runkle
b5f8e87e2f
remove runtime where not needed
2025-10-07 21:33:52 -04:00
Eugene Yurtsev
6a2efd060e
fix(langchain_v1): injection logic in tool node ( #33344 )
...
Fix injection logic in tool node
2025-10-07 21:31:10 -04:00
Mason Daugherty
cda336295f
chore: enrich pyproject.toml files with links to new references, others ( #33343 )
2025-10-07 16:17:14 -04:00
Mason Daugherty
02f4256cb6
chore: remove CLI note in migrations ( #33342 )
...
Unsure of functionality; we don't plan to spend time on it at the moment.
2025-10-07 19:18:33 +00:00
ccurme
492ba3d127
release(core): 1.0.0a8 ( #33341 )
langchain-core==1.0.0a8
2025-10-07 14:18:44 -04:00
ccurme
cbf8d46d3e
fix(core): add back add_user_message and add_ai_message ( #33340 )
2025-10-07 13:56:34 -04:00
Mason Daugherty
58598f01b0
chore: add more informative README for libs/ ( #33339 )
2025-10-07 17:13:45 +00:00
ccurme
89fe7e1ac1
release(langchain): 1.0.0a1 ( #33337 )
2025-10-07 12:52:32 -04:00
ccurme
a24712f7f7
revert: chore(infra): temporarily skip tests of previous alpha versions on core release ( #33333 )
...
Reverts langchain-ai/langchain#33312
2025-10-07 10:51:17 -04:00
Mason Daugherty
8446fef00d
fix(infra): v0.3 ref dep ( #33336 )
2025-10-07 10:49:20 -04:00
Mason Daugherty
8bcdfbb24e
chore: clean up pyproject.toml files, use core a7 ( #33334 )
2025-10-07 10:49:04 -04:00
Mason Daugherty
b8ebc14a23
chore(langchain): clean Makefile ( #33335 )
2025-10-07 10:48:47 -04:00
ccurme
aa442bc52f
release(openai): 1.0.0a4 ( #33316 )
langchain-anthropic==1.0.0a3
langchain-openai==1.0.0a4
2025-10-07 09:25:05 -04:00
ccurme
2e024b7ede
release(anthropic): 1.0.0a3 ( #33317 )
2025-10-07 09:24:54 -04:00
Sydney Runkle
c8205ff511
fix(langchain_v1): fix edges when there's no middleware ( #33321 )
...
1. Main fix: when we don't have a response format or middleware, don't
draw a conditional edge back to the loop entrypoint (self-loop on model)
2. Supplementary fix: when we jump to `end` and there is an
`after_agent` hook, jump there instead of `__end__`
Other improvements (I can remove these if they're more harmful than
helpful):
1. Use keyword-only arguments for edge generator functions for clarity
2. Rename args to `model_destination` and `end_destination` for clarity
2025-10-06 18:08:08 -04:00
Mason Daugherty
ea0a25d7fe
fix(infra): v0.3 ref build; allow prerelease installations for partner packages ( #33326 )
2025-10-06 18:06:40 -04:00
Mason Daugherty
29b5df3881
fix(infra): handle special case for langchain-tavily repository checkout during ref build ( #33324 )
2025-10-06 18:00:24 -04:00
Mason Daugherty
690b620b7f
docs(infra): add note about check_diff.py running on seemingly unrelated PRs ( #33323 )
2025-10-06 17:56:57 -04:00
Mason Daugherty
c55c9785be
chore(infra): only build 0.3 ref docs from v0.3 branches ( #33322 )
...
Using the `api_doc_build.yml` workflow will now only pull from the
`v0.3` branch for each `langchain-ai` repo used during the build
process. This ensures that upcoming updates to the `master`/`main`
branch for each repo won't affect the v0.3 reference docs if/when they
are re-built or updated.
2025-10-06 21:45:49 +00:00
Christophe Bornet
20e04fc3dd
chore(text-splitters): cleanup ruff config ( #33247 )
...
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-06 17:02:31 -04:00
Mason Daugherty
078137f0ba
chore(infra): use different pr title labeler ( #33318 )
...
The previous one (from Grafana) is archived and doesn't work for community
PRs.
2025-10-06 16:58:52 -04:00
ccurme
d0f5a1cc96
fix(standard-tests,openai): minor fix for Responses API tests ( #33315 )
...
Following https://github.com/langchain-ai/langchain/pull/33301
2025-10-06 16:46:41 -04:00
ccurme
e8e41bd7a6
chore(infra): temporarily skip tests of previous alpha versions on core release ( #33312 )
...
To accommodate breaking changes (e.g., removal of deprecated params like
`callback_manager`).
Will revert once we have updated releases of anthropic and openai.
langchain-core==1.0.0a7
2025-10-06 16:31:36 -04:00
Sydney Runkle
7326966566
release(langchain_v1): 1.0.0a12 ( #33314 )
langchain==1.0.0a12
2025-10-06 16:24:30 -04:00
Mason Daugherty
6eb1c34ba1
fix(infra): pr-title-labeler ( #33313 )
...
Wasn't working on `pull_request_target`
2025-10-06 16:20:15 -04:00
Mason Daugherty
d390d2f28f
chore: add .claude to .gitignore ( #33311 )
2025-10-06 16:20:02 -04:00
Sydney Runkle
2fa9741f99
chore(langchain_v1): rename model_request node -> model ( #33310 )
2025-10-06 16:18:18 -04:00
ccurme
ba35387c9e
release(core): 1.0.0a7 ( #33309 )
2025-10-06 16:03:34 -04:00
ccurme
de48e102c4
fix(core,openai,anthropic): delegate to core implementation on invoke when streaming=True ( #33308 )
2025-10-06 15:54:55 -04:00
Sydney Runkle
08bf8f3dc9
release(langchain_v1): 1.0.0a11 ( #33307 )
...
* Consolidating agents
* Removing remainder of globals
* Removing `ToolNode`
langchain==1.0.0a11
2025-10-06 15:13:26 -04:00
Sydney Runkle
00f4db54c4
chore(langchain_v1): remove support for ToolNode in create_agent ( #33306 )
...
Let's add a note to help w/ migration once we add the tool call retry
middleware.
2025-10-06 15:06:20 -04:00
Sydney Runkle
62ccf7e8a4
feat(langchain_v1): simplify to use ONE agent ( #33302 )
...
This reduces confusion w/ types like `AgentState`, different arg names,
etc.
Second attempt, following
https://github.com/langchain-ai/langchain/pull/33249
* Ability to pass through `cache` and `name` in `create_agent` as
compilation args for the agent
* Right now, removing `test_react_agent.py` but we should add these
tests back as implemented w/ the new agent
* Add conditional edge when structured output tools are present to allow
for retries
* Rename `tracking` to `model_call_limit` to be consistent w/ tool call
limits
We need in the future (I'm happy to own):
* Significant test refactor
* Significant test overhaul where we emphasize and enforce coverage
2025-10-06 14:46:29 -04:00
Eugene Yurtsev
0ff2bc890b
chore(langchain_v1): remove text splitters from langchain v1 namespace ( #33297 )
...
Removing text splitters for now for a lighter dependency. We may re-introduce them later.
2025-10-06 14:42:23 -04:00
ccurme
426b8e2e6a
feat(standard-tests): enable parametrization of output_version ( #33301 )
2025-10-06 14:37:33 -04:00
Eugene Yurtsev
bfed5f67a8
chore(langchain_v1): expose rate_limiters from langchain_core ( #33305 )
...
expose rate limiters from langchain core
2025-10-06 14:25:56 -04:00
Mason Daugherty
a4c8baebc5
chore: delete cookbook/ ( #33303 )
...
It will continue to be available in the `v0.3` branch
2025-10-06 14:21:53 -04:00
Sydney Runkle
a869f84c62
fix(langchain_v1): tool selector should use last human message ( #33294 )
2025-10-06 11:32:16 -04:00
Sydney Runkle
0ccc0cbdae
feat(langchain_v1): before_agent and after_agent hooks ( #33279 )
...
We're adding enough new nodes that I think a refactor in terms of graph
building is warranted here, but not necessarily required for merging.
2025-10-06 11:31:52 -04:00
ccurme
7404338786
fix(core): fix string content when streaming output_version="v1" ( #33261 )
...
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-06 11:03:03 -04:00
Nuno Campos
f308139283
feat(langchain_v1): Implement Context Editing Middleware ( #33267 )
...
Brings functionality similar to Anthropic's context editing to all chat
models
https://docs.claude.com/en/docs/build-with-claude/context-editing
---------
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-10-06 10:34:04 -04:00
ccurme
95a451ef2c
fix(openai): disable stream_usage in chat completions if OPENAI_BASE_URL is set ( #33298 )
...
This env var is used internally by the OpenAI client.
2025-10-06 10:14:43 -04:00