Mason Daugherty
7e20678321
ci(infra): add workflow to check for RST syntax
2025-10-09 17:15:39 -04:00
Mason Daugherty
6fc21afbc9
style: .. code-block:: admonition translations ( #33400 )
...
Big pass.
2025-10-09 16:52:58 -04:00
ccurme
50445d4a27
fix(standard-tests): update Anthropic inputs test ( #33391 )
...
As of 10/7, Anthropic raises a BadRequestError if given an invalid
thinking signature.
2025-10-09 14:13:26 -04:00
ccurme
11a2efe49b
fix(anthropic): handle empty AIMessage ( #33390 )
2025-10-09 13:57:42 -04:00
Mason Daugherty
d8a680ee57
style: address Sphinx double-backtick snippet syntax ( #33389 )
2025-10-09 13:35:51 -04:00
Christophe Bornet
f405a2c57d
chore(core): remove arg types from docstrings ( #33388 )
...
* Remove arg types
* Remove types from Returns
* Remove types from Yields
* Replace `kwargs` with `**kwargs` where needed (illustrated below)
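A hypothetical before/after example (not a function from the codebase) of the resulting docstring style:
```python
# Hypothetical illustration of the docstring change: types stay in the
# signature only, and `**kwargs` keeps its stars.
from typing import Any


def search_before(query: str, **kwargs: Any) -> list[str]:
    """Search for items.

    Args:
        query (str): The search query.
        kwargs (Any): Additional keyword arguments.

    Returns:
        list[str]: The matching items.
    """
    return []


def search_after(query: str, **kwargs: Any) -> list[str]:
    """Search for items.

    Args:
        query: The search query.
        **kwargs: Additional keyword arguments.

    Returns:
        The matching items.
    """
    return []
```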
2025-10-09 13:13:23 -04:00
Mason Daugherty
3576e690fa
chore: update Sphinx links to markdown ( #33386 )
2025-10-09 11:54:14 -04:00
Mason Daugherty
057ac361ef
chore: delete .claude/settings.local.json ( #33387 )
2025-10-09 11:44:57 -04:00
Christophe Bornet
d9675a4a20
fix(langchain): improve and fix typing ( #32383 )
2025-10-09 10:55:31 -04:00
ccurme
c27271f3ae
fix(openai): update file index key name ( #33350 )
2025-10-09 13:15:27 +00:00
ccurme
a3e4f4c2e3
fix(core): override streaming callback if streaming attribute is set ( #33351 )
2025-10-09 09:04:27 -04:00
Mason Daugherty
b5030badbe
refactor(core): clean up sys_info.py ( #33372 )
2025-10-09 03:31:26 +00:00
Mason Daugherty
b6132fc23e
style: remove more Optional syntax ( #33371 )
2025-10-08 23:28:43 -04:00
Eugene Yurtsev
f33b1b3d77
chore(langchain_v1): rename on_model_call to wrap_model_call ( #33370 )
...
rename on_model_call to wrap_model_call
2025-10-08 23:28:14 -04:00
Eugene Yurtsev
c382788342
chore(langchain_v1): update the uv lock file ( #33369 )
...
Update the uv lock file.
2025-10-08 23:03:25 -04:00
Eugene Yurtsev
e193a1f273
chore(langchain_v1): replace modify model request with on model call ( #33368 )
...
* Replace modify model request with on model call
* Remove modify model request
2025-10-09 02:46:48 +00:00
Eugene Yurtsev
eb70672f4a
chore(langchain): add unit tests for wrap_tool_call decorator ( #33367 )
...
Add unit tests for wrap_tool_call decorator
2025-10-09 02:30:07 +00:00
Eugene Yurtsev
87df179ca9
chore(langchain_v1): rename on_tool_call to wrap_tool_call ( #33366 )
...
Replace on tool call with wrap tool call
2025-10-08 22:10:36 -04:00
Eugene Yurtsev
982a950ccf
chore(langchain_v1): add runtime and context to model request ( #33365 )
...
Add runtime and context to ModelRequest to make the API more convenient
2025-10-08 21:59:56 -04:00
Eugene Yurtsev
c2435eeca5
chore(langchain_v1): update on_tool_call to regular callbacks ( #33364 )
...
Refactor tool call middleware from a generator-based to a handler-based pattern.
This simplifies on_tool_call middleware by replacing the complex generator protocol with a straightforward handler pattern. Instead of yielding requests and receiving results via .send(), handlers now receive an execute callable that can be invoked multiple times for retry logic.
Before vs. After
Before (Generator):
```python
class RetryMiddleware(AgentMiddleware):
    def on_tool_call(self, request, state, runtime):
        for attempt in range(3):
            response = yield request  # Yield request, receive result via .send()
            if is_valid(response) or attempt == 2:
                return  # Final result is last value sent to generator
```
After (Handler):
```python
class RetryMiddleware(AgentMiddleware):
    def on_tool_call(self, request, handler):
        for attempt in range(3):
            result = handler(request)  # Direct function call
            if is_valid(result):
                return result
        return result
```
Follow up after this PR:
* Rename the interceptor to wrap_tool_call
* Fix the async path for the ToolNode
2025-10-08 21:46:03 -04:00
Mason Daugherty
68c56440cf
fix(groq): handle content correctly ( #33363 )
...
(look at most recent commit; ignore prior)
2025-10-08 21:23:30 -04:00
Mason Daugherty
31eeb50ce0
chore: drop UP045 ( #33362 )
...
Python 3.9 EOL
2025-10-08 21:17:53 -04:00
Mason Daugherty
0039b3b046
refactor(core): remove keep-runtime-typing from pyproject.toml following dropping 3.9 ( #33360 )
...
https://docs.astral.sh/ruff/rules/non-pep604-annotation-optional/#why-is-this-bad
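For context, a minimal sketch (hypothetical code, not from the repo) of the `Optional[X]` → `X | None` rewrite the linked rule describes:
```python
# Hypothetical example of the PEP 604 style that becomes the default once
# Python 3.9 support is dropped.
from typing import Optional


def parse_before(value: Optional[str] = None) -> Optional[int]:  # pre-PEP 604 style
    return int(value) if value is not None else None


def parse_after(value: str | None = None) -> int | None:  # preferred on Python 3.10+
    return int(value) if value is not None else None
```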
2025-10-08 21:09:53 -04:00
Mason Daugherty
ffb1a08871
style(infra): use modern Optional typing in script ( #33361 )
2025-10-08 21:09:43 -04:00
Mason Daugherty
d13823043d
style: monorepo pass for refs ( #33359 )
...
* Delete some double backticks previously used by Sphinx (not done
everywhere yet)
* Fix some code blocks / dropdowns
Ignoring CLI CI for now
2025-10-08 18:41:39 -04:00
Eugene Yurtsev
b665b81a0e
chore(langchain_v1): simplify on model call logic ( #33358 )
...
Moving from the generator pattern to the slightly less verbose (but explicit) handler pattern.
This will be more familiar to users.
**Before (Generator Pattern):**
```python
def on_model_call(self, request, state, runtime):
    try:
        result = yield request
    except Exception:
        result = yield request  # Retry
```
**After (Handler Pattern):**
```python
def on_model_call(self, request, state, runtime, handler):
    try:
        return handler(request)
    except Exception:
        return handler(request)  # Retry
```
2025-10-08 17:23:11 -04:00
Mason Daugherty
6b9b177b89
chore(openai): base.py ref pass ( #33355 )
2025-10-08 16:08:52 -04:00
Mason Daugherty
b1acf8d931
chore: fix dropdown default open admonition in refs ( #33354 )
2025-10-08 18:50:44 +00:00
Eugene Yurtsev
97f731da7e
chore(langchain_v1): remove unused internal namespace ( #33352 )
...
Remove unused internal namespace. We'll likely restore a part of it for
lazy loading optimizations later.
2025-10-08 14:08:07 -04:00
Eugene Yurtsev
1bf29da0d6
feat(langchain_v1): add on_tool_call middleware hook ( #33329 )
...
Adds generator-based middleware for intercepting tool execution in
agents. Middleware can retry on errors, cache results, modify requests,
or short-circuit execution.
### Implementation
**Middleware Protocol**
```python
class AgentMiddleware:
    def on_tool_call(
        self,
        request: ToolCallRequest,
        state: StateT,
        runtime: Runtime[ContextT],
    ) -> Generator[ToolCallRequest | ToolMessage | Command, ToolMessage | Command, None]:
        """
        Yields: ToolCallRequest (execute), ToolMessage (cached result), or Command (control flow)
        Receives: ToolMessage or Command via .send()
        Returns: None (final result is last value sent to handler)
        """
        yield request  # passthrough
```
**Composition**
Multiple middleware compose automatically (first = outermost), with
`_chain_tool_call_handlers()` stacking them like nested function calls.
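For intuition, a rough sketch of how one generator layer could be driven and how layers could be nested, with the first middleware in the list as the outermost layer (hypothetical helper names and an assumed `ToolCallRequest` import path; not the actual `_chain_tool_call_handlers()` implementation):
```python
# Hypothetical sketch of driving and chaining generator-based tool middleware.
# `execute_tool`, the helper names, and the ToolCallRequest import path are
# assumptions based on this PR's examples, not the real internals.
from langchain.agents.middleware.types import ToolCallRequest  # path assumed


def _wrap(mw, inner, state, runtime):
    """Turn one middleware plus an inner handler into a new handler(request)."""

    def handler(request):
        gen = mw.on_tool_call(request, state, runtime)
        result = None
        try:
            yielded = next(gen)  # start the generator
            while True:
                if isinstance(yielded, ToolCallRequest):
                    result = inner(yielded)  # execute via the next layer / the tool
                else:
                    result = yielded  # ToolMessage/Command short-circuits execution
                yielded = gen.send(result)  # hand the result back to the middleware
        except StopIteration:
            return result  # final result is the last value sent into the generator

    return handler


def chain_tool_middleware(middlewares, execute_tool, state, runtime):
    """Compose middleware so middlewares[0] is the outermost layer."""
    handler = execute_tool  # innermost layer: actually run the tool
    for mw in reversed(middlewares):
        handler = _wrap(mw, handler, state, runtime)
    return handler
```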
### Examples
**Retry on error:**
```python
class RetryMiddleware(AgentMiddleware):
    def on_tool_call(self, request, state, runtime):
        for attempt in range(3):
            response = yield request
            if not isinstance(response, ToolMessage) or response.status != "error":
                return
            if attempt == 2:
                return  # Give up
```
**Cache results:**
```python
class CacheMiddleware(AgentMiddleware):
    def on_tool_call(self, request, state, runtime):
        cache_key = (request.tool_call["name"], tuple(request.tool_call["args"].items()))
        if cached := self.cache.get(cache_key):
            yield ToolMessage(content=cached, tool_call_id=request.tool_call["id"])
        else:
            response = yield request
            self.cache[cache_key] = response.content
```
**Emulate tools with LLM**
```python
class ToolEmulator(AgentMiddleware):
    def on_tool_call(self, request, state, runtime):
        prompt = f"""Emulate: {request.tool_call["name"]}
Description: {request.tool.description}
Args: {request.tool_call["args"]}
Return ONLY the tool's output."""
        response = emulator_model.invoke([HumanMessage(prompt)])
        yield ToolMessage(
            content=response.content,
            tool_call_id=request.tool_call["id"],
            name=request.tool_call["name"],
        )
```
**Modify requests:**
```python
class ScalingMiddleware(AgentMiddleware):
    def on_tool_call(self, request, state, runtime):
        if "value" in request.tool_call["args"]:
            request.tool_call["args"]["value"] *= 2
        yield request
```
2025-10-08 16:43:32 +00:00
Eugene Yurtsev
2c3fec014f
feat(langchain_v1): on_model_call middleware ( #33328 )
...
Introduces a generator-based `on_model_call` hook that allows middleware
to intercept model calls with support for retry logic, error handling,
response transformation, and request modification.
## Overview
Middleware can now implement `on_model_call()` using a generator
protocol that:
- **Yields** `ModelRequest` to execute the model
- **Receives** `AIMessage` via `.send()` on success, or exception via
`.throw()` on error
- **Yields again** to retry or transform responses
- Uses **implicit last-yield semantics** (no return values from generators); a rough driver sketch follows below
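As a rough illustration of the protocol, here is how one `on_model_call` layer might be driven, including the `.throw()` path for errors (the `call_model` callable and the `ModelRequest` import path are assumptions, not the actual agent internals):
```python
# Rough sketch of a driver for one on_model_call middleware layer.
# `call_model` and the ModelRequest import path are assumptions.
from langchain.agents.middleware.types import ModelRequest  # path assumed


def drive_model_middleware(middleware, request, state, runtime, call_model):
    gen = middleware.on_model_call(request, state, runtime)
    result = None
    try:
        yielded = next(gen)  # start the generator
        while True:
            if isinstance(yielded, ModelRequest):
                try:
                    result = call_model(yielded)  # execute the (possibly modified) request
                except Exception as exc:
                    yielded = gen.throw(exc)  # let the middleware retry or substitute
                    continue
                yielded = gen.send(result)  # hand the AIMessage back on success
            else:
                result = yielded  # a yielded AIMessage is the (transformed) result
                yielded = gen.send(result)
    except StopIteration:
        return result  # implicit last-yield semantics: the last yield wins
```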
## Usage Examples
### Basic Retry on Error
```python
from langchain.agents.middleware.types import AgentMiddleware


class RetryMiddleware(AgentMiddleware):
    def on_model_call(self, request, state, runtime):
        for attempt in range(3):
            try:
                yield request  # Execute model
                break  # Success
            except Exception:
                if attempt == 2:
                    raise  # Max retries exceeded
```
### Response Transformation
```python
class UppercaseMiddleware(AgentMiddleware):
    def on_model_call(self, request, state, runtime):
        result = yield request
        modified = AIMessage(content=result.content.upper())
        yield modified  # Return transformed response
```
### Error Recovery
```python
class FallbackMiddleware(AgentMiddleware):
    def on_model_call(self, request, state, runtime):
        try:
            yield request
        except Exception:
            fallback = AIMessage(content="Service unavailable")
            yield fallback  # Convert error to fallback response
```
### Caching / Short-Circuit
```python
class CacheMiddleware(AgentMiddleware):
    def on_model_call(self, request, state, runtime):
        if cached := get_cache(request):
            yield cached  # Skip model execution
        else:
            result = yield request
            save_cache(request, result)
```
### Request Modification
```python
class SystemPromptMiddleware(AgentMiddleware):
    def on_model_call(self, request, state, runtime):
        modified_request = ModelRequest(
            model=request.model,
            system_prompt="You are a helpful assistant.",
            messages=request.messages,
            tools=request.tools,
        )
        yield modified_request
```
### Function Decorator
```python
from langchain.agents.middleware.types import on_model_call


@on_model_call
def retry_three_times(request, state, runtime):
    for attempt in range(3):
        try:
            yield request
            break
        except Exception:
            if attempt == 2:
                raise


agent = create_agent(model="openai:gpt-4o", middleware=[retry_three_times])
```
## Middleware Composition
Middleware compose with the first in the list as the outermost layer:
```python
agent = create_agent(
    model="openai:gpt-4o",
    middleware=[
        RetryMiddleware(),       # Outer - wraps others
        LoggingMiddleware(),     # Middle
        UppercaseMiddleware(),   # Inner - closest to model
    ]
)
```
2025-10-08 12:34:04 -04:00
Mason Daugherty
4c38157ee0
fix(core): don't print package if no version found ( #33347 )
...
This was polluting issues, making it hard to find issues that apply to a query.
2025-10-07 23:14:17 -04:00
Sydney Runkle
b5f8e87e2f
remove runtime where not needed
2025-10-07 21:33:52 -04:00
Eugene Yurtsev
6a2efd060e
fix(langchain_v1): injection logic in tool node ( #33344 )
...
Fix injection logic in tool node
2025-10-07 21:31:10 -04:00
Mason Daugherty
cda336295f
chore: enrich pyproject.toml files with links to new references, others ( #33343 )
2025-10-07 16:17:14 -04:00
Mason Daugherty
02f4256cb6
chore: remove CLI note in migrations ( #33342 )
...
Unsure of functionality; we don't plan to spend time on it at the moment.
2025-10-07 19:18:33 +00:00
ccurme
492ba3d127
release(core): 1.0.0a8 ( #33341 )
langchain-core==1.0.0a8
2025-10-07 14:18:44 -04:00
ccurme
cbf8d46d3e
fix(core): add back add_user_message and add_ai_message ( #33340 )
2025-10-07 13:56:34 -04:00
Mason Daugherty
58598f01b0
chore: add more informative README for libs/ ( #33339 )
2025-10-07 17:13:45 +00:00
ccurme
89fe7e1ac1
release(langchain): 1.0.0a1 ( #33337 )
2025-10-07 12:52:32 -04:00
ccurme
a24712f7f7
revert: chore(infra): temporarily skip tests of previous alpha versions on core release ( #33333 )
...
Reverts langchain-ai/langchain#33312
2025-10-07 10:51:17 -04:00
Mason Daugherty
8446fef00d
fix(infra): v0.3 ref dep ( #33336 )
2025-10-07 10:49:20 -04:00
Mason Daugherty
8bcdfbb24e
chore: clean up pyproject.toml files, use core a7 ( #33334 )
2025-10-07 10:49:04 -04:00
Mason Daugherty
b8ebc14a23
chore(langchain): clean Makefile ( #33335 )
2025-10-07 10:48:47 -04:00
ccurme
aa442bc52f
release(openai): 1.0.0a4 ( #33316 )
langchain-anthropic==1.0.0a3
langchain-openai==1.0.0a4
2025-10-07 09:25:05 -04:00
ccurme
2e024b7ede
release(anthropic): 1.0.0a3 ( #33317 )
2025-10-07 09:24:54 -04:00
Sydney Runkle
c8205ff511
fix(langchain_v1): fix edges when there's no middleware ( #33321 )
...
1. Main fix: when we don't have a response format or middleware, don't draw a conditional edge back to the loop entrypoint (self-loop on the model).
2. Supplementary fix: when we jump to `end` and there is an `after_agent` hook, jump there instead of `__end__`.
Other improvements (I can remove these if they're more harmful than helpful):
1. Use keyword-only arguments for edge generator functions for clarity.
2. Rename args to `model_destination` and `end_destination` for clarity.
2025-10-06 18:08:08 -04:00
Mason Daugherty
ea0a25d7fe
fix(infra): v0.3 ref build; allow prerelease installations for partner packages ( #33326 )
2025-10-06 18:06:40 -04:00
Mason Daugherty
29b5df3881
fix(infra): handle special case for langchain-tavily repository checkout during ref build ( #33324 )
2025-10-06 18:00:24 -04:00
Mason Daugherty
690b620b7f
docs(infra): add note about check_diff.py running on seemingly unrelated PRs ( #33323 )
2025-10-06 17:56:57 -04:00