* Making `FakeToolCallingModel` generic on its `structured_response`
doesn't improve typing anywhere.
* There are more than 120 references to `FakeToolCallingModel` in the
code where you get `error: Need type annotation for "model"
[var-annotated]` because mypy can't resolve the generic type (we don't
see them at the moment because they are in files temporarily excluded
from mypy checking). We would need to explicitly annotate them as
`FakeToolCallingModel[Any]`.
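For illustration, a sketch with a stubbed class showing the annotation each of those call sites would need (the class body here is a stand-in, not the real test helper):

```python
from typing import Any, Generic, TypeVar

StructuredResponseT = TypeVar("StructuredResponseT")

class FakeToolCallingModel(Generic[StructuredResponseT]):  # stub for illustration
    ...

# Without the annotation, mypy reports at every call site:
#   error: Need type annotation for "model"  [var-annotated]
model: FakeToolCallingModel[Any] = FakeToolCallingModel()
```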
Co-authored-by: Mason Daugherty <mason@langchain.dev>
It appears `override()`'s docstring in `langgraph` already shows
`state=new_state` as a valid usage pattern.
This works because `dataclasses.replace()` accepts any field, but the
`TypedDict`s weren't updated to match, which caused mypy to flag
legitimate usage as an error.
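A minimal sketch of the mismatch, with hypothetical names standing in for the real langgraph types (requires Python 3.11+ for `Unpack`):

```python
from dataclasses import dataclass, replace
from typing import TypedDict, Unpack

@dataclass
class State:
    messages: list | None = None
    state: dict | None = None

class OverrideKwargs(TypedDict, total=False):
    messages: list | None
    state: dict | None  # previously missing: mypy flagged `state=new_state`

def override(current: State, **kwargs: Unpack[OverrideKwargs]) -> State:
    # dataclasses.replace() accepts any declared field at runtime, so the
    # TypedDict is the only place the `state` key could be rejected.
    return replace(current, **kwargs)

override(State(), state={"messages": []})  # valid once the TypedDict matches
```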
description by @mdrxy
- Enable `test_responses_spec.py` integration tests that were previously
skipped at module level
- Widen `ToolStrategy.schema` type annotation from `type[SchemaT]` to
`type[SchemaT] | dict[str, Any]` to match actual supported usage (JSON
schema dicts were already handled at runtime; see the sketch after this
list)
- Fix type annotations and linting issues in test file (modernize to
`dict`/`list`, add return types, prefix unused `_request` param)
- Improve generic typing in `load_spec` utility with bounded `TypeVar`
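A sketch of the widened annotation (class body abbreviated; only the field's shape is the point here):

```python
from typing import Any, Generic, TypeVar

SchemaT = TypeVar("SchemaT")

class ToolStrategy(Generic[SchemaT]):  # abbreviated stand-in for the real class
    # before: schema: type[SchemaT]
    schema: type[SchemaT] | dict[str, Any]  # JSON schema dicts already worked at runtime
```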
Co-authored-by: Mason Daugherty <mason@langchain.dev>
# feat(core): add more file extensions to ignore in HTML link extraction
## Description
This PR enhances the HTML link extraction utility in
`libs/core/langchain_core/utils/html.py` by expanding the
`SUFFIXES_TO_IGNORE` list to include additional common binary file
extensions:
- `.webp`
- `.pdf`
- `.docx`
- `.xlsx`
- `.pptx`
- `.pptm`
These file types are non-HTML, non-crawlable resources. Ignoring them
prevents `find_all_links` and `extract_sub_links` from mistakenly
treating such binary assets as navigable links. This improves link
filtering, reduces unnecessary crawling, and aligns behavior with
typical web scraping expectations.
## Summary of Changes
- **Updated** `libs/core/langchain_core/utils/html.py`: Added `.webp`,
`.pdf`, `.docx`, `.xlsx`, `.pptx`, `.pptm` to `SUFFIXES_TO_IGNORE`.
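For context, a simplified sketch of how the suffix filter behaves; the real `SUFFIXES_TO_IGNORE` tuple contains more pre-existing entries than shown here:

```python
SUFFIXES_TO_IGNORE = (
    ".css", ".js", ".png",  # pre-existing entries (illustrative subset)
    ".webp", ".pdf", ".docx", ".xlsx", ".pptx", ".pptm",  # added by this PR
)

def is_crawlable(link: str) -> bool:
    """Return False for links that point at ignored binary assets."""
    return not link.lower().endswith(SUFFIXES_TO_IGNORE)

assert not is_crawlable("https://example.com/report.pdf")
assert is_crawlable("https://example.com/docs/index.html")
```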
## Related Issues
N/A
## Verification
- `ruff check libs/core/langchain_core/utils/html.py`: **Passed**
- `mypy libs/core/langchain_core/utils/html.py`: **Passed**
- `pytest libs/core/tests/unit_tests/utils/test_html.py`: **Passed** (11
tests)
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
# refactor(core): improve docstrings for HTML link extraction utilities
## Description
This PR updates and clarifies the docstrings for `find_all_links` and
`extract_sub_links` in
`libs/core/langchain_core/utils/html.py`.
The previous return-value descriptions were vague (e.g., "all links",
"sub links"). They have now been revised to clearly describe the
behavior and output of each function:
- **find_all_links** → “A list of all links found in the HTML.”
- **extract_sub_links** → “A list of absolute paths to sub links.”
These improvements make the utilities more understandable and
developer-friendly without altering functionality.
## Verification
- `ruff check libs/core/langchain_core/utils/html.py`: **Passed**
- `pytest libs/core/tests/unit_tests/utils/test_html.py`: **Passed**
## Checklists
- PR title follows the required format: `TYPE(SCOPE): DESCRIPTION`
- Changes are limited to the `langchain-core` package
- `make format`, `make lint`, and `make test` pass
Fixes #34282
**Before:** When using agents with tools (like file reading, web search,
etc.), the conversation looks like this:
```
[User] "Read these 10 files and summarize them"
[AI] "I'll read all 10 files" + [tool_call: read_file x 10]
[Tool] "Contents of file1.txt..."
[Tool] "Contents of file2.txt..."
[Tool] "Contents of file3.txt..."
... (7 more tool responses)
```
When the conversation gets too long, `SummarizationMiddleware` kicks in
to compress older messages. The problem: if you asked to keep the last
6 messages, you'd get:
```
[Summary] "Here's what happened before..."
[Tool] "Contents of file5.txt..."
[Tool] "Contents of file6.txt..."
[Tool] "Contents of file7.txt..."
[Tool] "Contents of file8.txt..."
[Tool] "Contents of file9.txt..."
[Tool] "Contents of file10.txt..."
```
The AI's original request to read the files (`[AI]` message with
`tool_calls`) was summarized away, but the tool responses remained. This
caused the error:
```
Error code: 400 - "No tool call found for function call output with call_id..."
```
Many APIs require that every tool response has a matching tool request.
Without the AI message, the tool responses are "orphaned."
## The fix
Now when the cutoff lands on tool messages, we **move backward** to
include the AI message that requested those tools:
Same scenario, keeping last 6 messages:
```
[Summary] "Here's what happened before..."
[AI] "I'll read all 10 files" + [tool_call: read_file x 10]
[Tool] "Contents of file1.txt..."
[Tool] "Contents of file2.txt..."
... (all 10 tool responses)
```
The AI message is preserved along with its tool responses, keeping them
paired together.
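A minimal sketch of the cutoff adjustment, assuming the cutoff is an index into the message list (the actual middleware handles more edge cases):

```python
from langchain_core.messages import AnyMessage, ToolMessage

def adjust_cutoff(messages: list[AnyMessage], cutoff: int) -> int:
    """Move the cutoff backward so tool responses keep their AI request.

    If the message at `cutoff` is a ToolMessage, walk back until we reach
    the AIMessage whose `tool_calls` produced it, so the pair survives
    summarization together.
    """
    while cutoff > 0 and isinstance(messages[cutoff], ToolMessage):
        cutoff -= 1
    return cutoff
```

Everything before the adjusted cutoff is summarized; everything from the preserved AI message onward is kept verbatim.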
## Practical examples
### Example 1: Parallel tool calls
**Scenario:** Agent reads 10 files in parallel, summarization triggers
(see above)
### Example 2: Mixed conversation
**Scenario:** User asks question, AI uses tools, user says thanks
```
[User] "What's the weather?"
[AI] "Let me check" + [tool_call: get_weather]
[Tool] "72F and sunny"
[AI] "It's 72F and sunny!"
[User] "Thanks!"
```
Keeping last 2 messages:
| Before (Bug) | After (Fix) |
|--------------|-------------|
| Only `[User] "Thanks!"` kept | `[AI] + [Tool] + [AI] + [User]` all kept |
| Lost the weather info | Tool pair preserved with response |
### Example 3: Multiple tool sequences
```
[User] "Search for X"
[AI] [tool_call: search]
[Tool] "Results for X"
[User] "Now search for Y"
[AI] [tool_call: search]
[Tool] "Results for Y"
[User] "Great!"
```
**Keeping last 3 messages:** If cutoff lands on `[Tool] "Results for
Y"`, we now include `[AI] [tool_call: search]` to keep the pair
together.
Add unit coverage for chat model provider inference across common model
name prefixes. This improves regression protection without touching
runtime code.
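Roughly the shape of the added coverage (a sketch: `infer_provider` is a self-contained stand-in for the internal inference helper, and the prefix-to-provider pairs are illustrative):

```python
import pytest

def infer_provider(model_name: str) -> str:
    """Stand-in for the internal helper that maps name prefixes to providers."""
    prefixes = {"gpt-": "openai", "claude": "anthropic", "gemini": "google_genai"}
    for prefix, provider in prefixes.items():
        if model_name.startswith(prefix):
            return provider
    raise ValueError(f"Unable to infer provider for {model_name!r}")

@pytest.mark.parametrize(
    ("model_name", "expected_provider"),
    [
        ("gpt-4o", "openai"),
        ("claude-sonnet-4-5", "anthropic"),
        ("gemini-2.0-flash", "google_genai"),
    ],
)
def test_provider_inferred_from_prefix(model_name: str, expected_provider: str) -> None:
    assert infer_provider(model_name) == expected_provider
```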
---------
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Fixes a bug introduced with commit 85f1ba2 (released in `langchain ==
1.2.1`).
Whenever the index embedding of the langgraph-server is configured with
the `azure_openai` provider, the wrong class is initialized (and
initialization fails if the now-unexpected `OPENAI_API_KEY` environment
variable is not provided).
Example configuration file `langgraph.json` that will reproduce the
issue:
(see
https://docs.langchain.com/langsmith/cli#adding-semantic-search-to-the-store)
```json
{
  "dependencies": ["."],
  "graphs": {
    "chat": "src/agents/chat/graph.py:graph"
  },
  "store": {
    "index": {
      "embed": "azure_openai:text-embedding-3-small",
      "dims": 1536
    }
  },
  "python_version": "3.13",
  "image_distro": "wolfi"
}
```
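A hedged sketch of the intended routing for the `provider:model` embed string; the real dispatch lives in the langgraph server, and this mirrors only the shape of the fix:

```python
from langchain_core.embeddings import Embeddings

def resolve_embeddings(embed: str) -> Embeddings:
    provider, _, model = embed.partition(":")
    if provider == "azure_openai":
        # Must not fall through to OpenAIEmbeddings, which would demand
        # OPENAI_API_KEY instead of the Azure credentials.
        from langchain_openai import AzureOpenAIEmbeddings
        return AzureOpenAIEmbeddings(model=model)
    if provider == "openai":
        from langchain_openai import OpenAIEmbeddings
        return OpenAIEmbeddings(model=model)
    raise ValueError(f"Unsupported embedding provider: {provider!r}")
```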
The agent should only make a single call to update the todo list at a
time. A parallel call doesn't make sense, and it also cannot work, as
there's no obvious reducer to use.
On parallel calls of the todo tool, we return `ToolMessage`s whose
content guides the LLM not to call the tool in parallel.
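A minimal sketch of the guard, with illustrative names (`write_todos` stands in for the actual tool name):

```python
from langchain_core.messages import AIMessage, ToolMessage

def reject_parallel_todo_calls(ai_message: AIMessage) -> list[ToolMessage] | None:
    """Return guidance messages if the model issued parallel todo calls."""
    todo_calls = [tc for tc in ai_message.tool_calls if tc["name"] == "write_todos"]
    if len(todo_calls) <= 1:
        return None  # a single call is handled normally
    # Answer every tool_call_id so no tool call is left orphaned.
    return [
        ToolMessage(
            content=(
                "Error: the todo tool must not be called in parallel. "
                "Issue one call containing the full updated todo list."
            ),
            tool_call_id=tc["id"],
        )
        for tc in todo_calls
    ]
```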
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>