Compare commits


2 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Mason Daugherty | c673b9d5c4 | Merge branch 'master' into mdrxy/openai-strict-bind_tools | 2025-11-27 19:22:03 -05:00 |
| Mason Daugherty | 3d0a38cc92 | fix(openai): pass strict for response_format in bind_tools() | 2025-11-27 19:00:37 -05:00 |
27 changed files with 806 additions and 1307 deletions

AGENTS.md

@@ -1,58 +1,255 @@
# Global development guidelines for the LangChain monorepo
# Global Development Guidelines for LangChain Projects
This document provides context to understand the LangChain Python project and assist with development.
## Core Development Principles
## Project architecture and context
### 1. Maintain Stable Public Interfaces ⚠️ CRITICAL
### Monorepo structure
**Always attempt to preserve function signatures, argument positions, and names for exported/public methods.**
This is a Python monorepo with multiple independently versioned packages that use `uv`.
```txt
langchain/
├── libs/
│   ├── core/            # `langchain-core` primitives and base abstractions
│   ├── langchain/       # `langchain-classic` (legacy, no new features)
│   ├── langchain_v1/    # Actively maintained `langchain` package
│   ├── partners/        # Third-party integrations
│   │   ├── openai/      # OpenAI models and embeddings
│   │   ├── anthropic/   # Anthropic (Claude) integration
│   │   ├── ollama/      # Local model support
│   │   └── ...          (other integrations maintained by the LangChain team)
│   ├── text-splitters/  # Document chunking utilities
│   ├── standard-tests/  # Shared test suite for integrations
│   ├── model-profiles/  # Model configuration profiles
│   └── cli/             # Command-line interface tools
├── .github/             # CI/CD workflows and templates
├── .vscode/             # VSCode IDE standard settings and recommended extensions
└── README.md            # Information about LangChain
```

**Bad - Breaking Change:**

```python
def get_user(id, verbose=False):  # Changed from `user_id`
    pass
```
- **Core layer** (`langchain-core`): Base abstractions, interfaces, and protocols. Users should not need to know about this layer directly.
- **Implementation layer** (`langchain`): Concrete implementations and high-level public utilities
- **Integration layer** (`partners/`): Third-party service integrations. Note that this monorepo is not exhaustive of all LangChain integrations; some are maintained in separate repos, such as `langchain-ai/langchain-google` and `langchain-ai/langchain-aws`. Usually these repos are cloned at the same level as this monorepo, so if needed, you can refer to their code directly by navigating to `../langchain-google/` from this monorepo.
- **Testing layer** (`standard-tests/`): Standardized integration tests for partner integrations
**Good - Stable Interface:**

```python
def get_user(user_id: str, verbose: bool = False) -> User:
    """Retrieve user by ID with optional verbose output."""
    pass
```

### Development tools & commands
- `uv` Fast Python package installer and resolver (replaces pip/poetry)
- `make` Task runner for common development commands. Feel free to look at the `Makefile` for available commands and usage patterns.
- `ruff` Fast Python linter and formatter
- `mypy` Static type checking
- `pytest` Testing framework
This monorepo uses `uv` for dependency management. Local development uses editable installs: `[tool.uv.sources]`.
Each package in `libs/` has its own `pyproject.toml` and `uv.lock`.

**Before making ANY changes to public APIs:**

- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"` (see the sketch below)
- Mark experimental features clearly with docstring warnings (using MkDocs Material admonitions, like `!!! warning`)

🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
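A minimal sketch of the keyword-only rule, reusing the hypothetical `get_user` from the examples above (`include_email` is an invented parameter for illustration):

```python
def get_user(user_id: str, verbose: bool = False, *, include_email: bool = False) -> "User":
    """Retrieve a user by ID.

    Existing calls like `get_user("u1")` or `get_user("u1", True)` keep working;
    the new option can only be passed by keyword, so the positions of the
    original arguments never shift.
    """
    ...
```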
### 2. Code Quality Standards
**All Python code MUST include type hints and return types.**
**Bad:**
```python
def p(u, d):
    return [x for x in u if x not in d]
```
**Good:**
```python
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
    """Filter out users that are not in the known users set.

    Args:
        users: List of user identifiers to filter.
        known_users: Set of known/valid user identifiers.

    Returns:
        List of users that are not in the known_users set.
    """
    return [user for user in users if user not in known_users]
```
**Style Requirements:**
- Use descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
- Avoid unnecessary abstraction or premature optimization
- Follow existing patterns in the codebase you're modifying
### 3. Testing Requirements
**Every new feature or bugfix MUST be covered by unit tests.**
**Test Organization:**
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- Use `pytest` as the testing framework
**Test Quality Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)
Checklist questions:
- [ ] Does the test suite fail if your new logic is broken?
- [ ] Are all expected behaviors exercised (happy path, invalid input, etc)?
- [ ] Do tests use fixtures or mocks where needed?
```python
def test_filter_unknown_users():
    """Test filtering unknown users from a list."""
    users = ["alice", "bob", "charlie"]
    known_users = {"alice", "bob"}
    result = filter_unknown_users(users, known_users)
    assert result == ["charlie"]
    assert len(result) == 1
```
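To tie the fixtures/mocks item in the checklist to code, here is a hedged sketch building on `filter_unknown_users` above; `FakeUserDirectory` is a hypothetical stand-in for an external user service, which keeps the test offline and deterministic:

```python
import pytest


class FakeUserDirectory:
    """Hypothetical in-memory stand-in for an external user service."""

    def known_users(self) -> set[str]:
        return {"alice", "bob"}


@pytest.fixture
def directory() -> FakeUserDirectory:
    return FakeUserDirectory()


def test_filter_unknown_users_with_fixture(directory: FakeUserDirectory) -> None:
    """Deterministic: the fake always returns the same users, no network."""
    result = filter_unknown_users(["alice", "eve"], directory.known_users())
    assert result == ["eve"]
```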
### 4. Security and Risk Assessment
**Security Checklist:**
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Proper exception handling (no bare `except:`) and use a `msg` variable for error messages
- Remove unreachable/commented code before committing
- Avoid race conditions and resource leaks (file handles, sockets, threads)
- Ensure proper resource cleanup (file handles, connections)
**Bad:**
```python
def load_config(path):
    with open(path) as f:
        return eval(f.read())  # ⚠️ Never eval config
```
**Good:**
```python
import json


def load_config(path: str) -> dict:
    with open(path) as f:
        return json.load(f)
```
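The checklist also asks for a `msg` variable when raising; applied to the same loader, a sketch might look like this (the `ValueError` wrapping is illustrative):

```python
import json


def load_config(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except json.JSONDecodeError as e:
        msg = f"Config file {path!r} is not valid JSON: {e}"
        raise ValueError(msg) from e
```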
### 5. Documentation Standards
**Use Google-style docstrings with Args section for all public functions.**
**Insufficient Documentation:**
```python
def send_email(to, msg):
    """Send an email to a recipient."""
```
**Complete Documentation:**
```python
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
    """Send an email to a recipient with specified priority.

    Args:
        to: The email address of the recipient.
        msg: The message body to send.
        priority: Email priority level (`'low'`, `'normal'`, `'high'`).

    Returns:
        `True` if email was sent successfully, `False` otherwise.

    Raises:
        `InvalidEmailError`: If the email address format is invalid.
        `SMTPConnectionError`: If unable to connect to email server.
    """
```
**Documentation Guidelines:**
- Types go in function signatures, NOT in docstrings
- If a default is present, DO NOT repeat it in the docstring unless there is post-processing or it is set conditionally.
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Ensure American English spelling (e.g., "behavior", not "behaviour")
📌 *Tip:* Keep descriptions concise but clear. Only document return values if non-obvious.
### 6. Architectural Improvements
**When you encounter code that could be improved, suggest better designs:**
**Poor Design:**
```python
def process_data(data, db_conn, email_client, logger):
    # Function doing too many things
    validated = validate_data(data)
    result = db_conn.save(validated)
    email_client.send_notification(result)
    logger.log(f"Processed {len(data)} items")
    return result
```
**Better Design:**
```python
@dataclass
class ProcessingResult:
    """Result of data processing operation."""

    items_processed: int
    success: bool
    errors: List[str] = field(default_factory=list)


class DataProcessor:
    """Handles data validation, storage, and notification."""

    def __init__(self, db_conn: Database, email_client: EmailClient):
        self.db = db_conn
        self.email = email_client

    def process(self, data: List[dict]) -> ProcessingResult:
        """Process and store data with notifications."""
        validated = self._validate_data(data)
        result = self.db.save(validated)
        self._notify_completion(result)
        return result
**Design Improvement Areas:**
If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it and suggest improvements that would:
- Reduce code duplication through shared utilities
- Make unit testing easier
- Improve separation of concerns (single responsibility)
- Make unit testing easier through dependency injection (see the sketch after this list)
- Add clarity without adding complexity
- Prefer dataclasses for structured data
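To make the dependency-injection point concrete, here is a hedged sketch of how the illustrative `DataProcessor` above could be unit-tested with fakes instead of real infrastructure (`FakeDatabase` and `FakeEmailClient` are invented stand-ins; `DataProcessor` and `ProcessingResult` are the example classes from the "Better Design" block):

```python
class FakeDatabase:
    """Records saves in memory instead of hitting a real database."""

    def save(self, items: list[dict]) -> ProcessingResult:
        return ProcessingResult(items_processed=len(items), success=True)


class FakeEmailClient:
    """Collects notifications instead of sending email."""

    def __init__(self) -> None:
        self.sent: list[ProcessingResult] = []

    def send_notification(self, result: ProcessingResult) -> None:
        self.sent.append(result)


def test_process_notifies_once() -> None:
    email = FakeEmailClient()
    processor = DataProcessor(FakeDatabase(), email)
    result = processor.process([{"id": 1}, {"id": 2}])
    assert result.success
    assert len(email.sent) == 1
```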
## Development Tools & Commands
### Package Management
```bash
# Add package
uv add package-name
# Sync project dependencies
uv sync
uv lock
```
### Testing
```bash
# Run unit tests (no network)
make test
# Skip integration tests locally; they require API keys to be set
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
### Code Quality
```bash
# Lint code
make lint
@@ -64,118 +261,72 @@ make format
uv run --group lint mypy .
```
#### Key config files
### Dependency Management Patterns
- `pyproject.toml`: Main workspace configuration with dependency groups
- `uv.lock`: Locked dependencies for reproducible builds
- `Makefile`: Development tasks
**Local Development Dependencies:**
#### Commit standards
```toml
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain-tests = { path = "../standard-tests", editable = true }
```
Suggest PR titles that follow Conventional Commits format. Refer to `.github/workflows/pr_lint` for allowed types and scopes.
#### Pull request guidelines

- Always add a disclaimer to the PR description mentioning how AI agents are involved with the contribution.
- Describe the "why" of the changes, why the proposed solution is the right one. Limit prose.

**For tools, use the `@tool` decorator from `langchain_core.tools`:**

```python
from langchain_core.tools import tool


@tool
def search_database(query: str) -> str:
    """Search the database for relevant information.

    Args:
        query: The search query string.
    """
    results = ...  # Implementation here
    return results
```
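The decorator turns the function into a `BaseTool`, so it can be invoked with a dict of arguments; a quick usage sketch (the query string is arbitrary):

```python
result = search_database.invoke({"query": "vector stores"})
print(search_database.name)         # "search_database"
print(search_database.description)  # taken from the docstring
```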
## Commit Standards
**Use Conventional Commits format for PR titles:**
- `feat(core): add multi-tenant support`
- `fix(cli): resolve flag parsing error`
- `docs: update API usage examples`
- `docs(openai): update API usage examples`
## Framework-Specific Guidelines
- Follow the existing patterns in `langchain-core` for base abstractions
- Use `langchain_core.callbacks` for execution tracking
- Implement proper streaming support where applicable
- Avoid deprecated components like legacy `LLMChain`
### Partner Integrations
- Follow the established patterns in existing partner libraries
- Implement standard interfaces (`BaseChatModel`, `BaseEmbeddings`, etc.)
- Include comprehensive integration tests
- Document API key requirements and authentication
---
## Quick Reference Checklist
Before submitting code changes:
- [ ] **Breaking Changes**: Verified no public API changes
- [ ] **Type Hints**: All functions have complete type annotations
- [ ] **Tests**: New functionality is fully tested
- [ ] **Security**: No dangerous patterns (eval, silent failures, etc.)
- [ ] **Documentation**: Google-style docstrings for public functions
- [ ] **Code Quality**: `make lint` and `make format` pass
- [ ] **Architecture**: Suggested improvements where applicable
- [ ] **Commit Message**: Follows Conventional Commits format
## Pull request guidelines
- Describe the "why" of the changes, why the proposed solution is the right one.
- Highlight areas of the proposed changes that require careful review.
## Core development principles
### Maintain stable public interfaces
CRITICAL: Always attempt to preserve function signatures, argument positions, and names for exported/public methods. Do not make breaking changes.
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using MkDocs Material admonitions, like `!!! warning`)
Ask: "Would this change break someone's code if they used it last week?"
### Code quality standards
All Python code MUST include type hints and return types.
```python title="Example"
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
    """Single line description of the function.

    Any additional context about the function can go here.

    Args:
        users: List of user identifiers to filter.
        known_users: Set of known/valid user identifiers.

    Returns:
        List of users that are not in the known_users set.
    """
```
- Use descriptive, self-explanatory variable names.
- Follow existing patterns in the codebase you're modifying
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
### Testing requirements
Every new feature or bugfix MUST be covered by unit tests.
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- We use `pytest` as the testing framework; if in doubt, check other existing tests for examples.
- The testing file structure should mirror the source code structure.
**Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)
- [ ] Does the test suite fail if your new logic is broken?
### Security and risk assessment
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Proper exception handling (no bare `except:`) and use a `msg` variable for error messages
- Remove unreachable/commented code before committing
- Avoid race conditions and resource leaks (file handles, sockets, threads)
- Ensure proper resource cleanup (file handles, connections)
### Documentation standards
Use Google-style docstrings with Args section for all public functions.
```python title="Example"
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
    """Send an email to a recipient with specified priority.

    Any additional context about the function can go here.

    Args:
        to: The email address of the recipient.
        msg: The message body to send.
        priority: Email priority level.

    Returns:
        `True` if email was sent successfully, `False` otherwise.

    Raises:
        InvalidEmailError: If the email address format is invalid.
        SMTPConnectionError: If unable to connect to email server.
    """
```
- Types go in function signatures, NOT in docstrings
- If a default is present, DO NOT repeat it in the docstring unless there is post-processing or it is set conditionally.
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Ensure American English spelling (e.g., "behavior", not "behaviour")
## Additional resources
- **Documentation:** https://docs.langchain.com/oss/python/langchain/overview and source at https://github.com/langchain-ai/docs or `../docs/`. Prefer the local install and use file search tools for best results. If needed, use the docs MCP server as defined in `.mcp.json` for programmatic access.
- **Contributing Guide:** [`.github/CONTRIBUTING.md`](https://docs.langchain.com/oss/python/contributing/overview)
- Always add a disclaimer to the PR description mentioning how AI agents are involved with the contribution.

CLAUDE.md

@@ -1,181 +1 @@
# Global development guidelines for the LangChain monorepo
This document provides context to understand the LangChain Python project and assist with development.
## Project architecture and context
### Monorepo structure
This is a Python monorepo with multiple independently versioned packages that use `uv`.
```txt
langchain/
├── libs/
│   ├── core/            # `langchain-core` primitives and base abstractions
│   ├── langchain/       # `langchain-classic` (legacy, no new features)
│   ├── langchain_v1/    # Actively maintained `langchain` package
│   ├── partners/        # Third-party integrations
│   │   ├── openai/      # OpenAI models and embeddings
│   │   ├── anthropic/   # Anthropic (Claude) integration
│   │   ├── ollama/      # Local model support
│   │   └── ...          (other integrations maintained by the LangChain team)
│   ├── text-splitters/  # Document chunking utilities
│   ├── standard-tests/  # Shared test suite for integrations
│   ├── model-profiles/  # Model configuration profiles
│   └── cli/             # Command-line interface tools
├── .github/             # CI/CD workflows and templates
├── .vscode/             # VSCode IDE standard settings and recommended extensions
└── README.md            # Information about LangChain
```
- **Core layer** (`langchain-core`): Base abstractions, interfaces, and protocols. Users should not need to know about this layer directly.
- **Implementation layer** (`langchain`): Concrete implementations and high-level public utilities
- **Integration layer** (`partners/`): Third-party service integrations. Note that this monorepo is not exhaustive of all LangChain integrations; some are maintained in separate repos, such as `langchain-ai/langchain-google` and `langchain-ai/langchain-aws`. Usually these repos are cloned at the same level as this monorepo, so if needed, you can refer to their code directly by navigating to `../langchain-google/` from this monorepo.
- **Testing layer** (`standard-tests/`): Standardized integration tests for partner integrations
### Development tools & commands
- `uv` Fast Python package installer and resolver (replaces pip/poetry)
- `make` Task runner for common development commands. Feel free to look at the `Makefile` for available commands and usage patterns.
- `ruff` Fast Python linter and formatter
- `mypy` Static type checking
- `pytest` Testing framework
This monorepo uses `uv` for dependency management. Local development uses editable installs: `[tool.uv.sources]`
Each package in `libs/` has its own `pyproject.toml` and `uv.lock`.
```bash
# Run unit tests (no network)
make test
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
#### Key config files
- `pyproject.toml`: Main workspace configuration with dependency groups
- `uv.lock`: Locked dependencies for reproducible builds
- `Makefile`: Development tasks
#### Commit standards
Suggest PR titles that follow Conventional Commits format. Refer to `.github/workflows/pr_lint` for allowed types and scopes.
#### Pull request guidelines
- Always add a disclaimer to the PR description mentioning how AI agents are involved with the contribution.
- Describe the "why" of the changes, why the proposed solution is the right one. Limit prose.
- Highlight areas of the proposed changes that require careful review.
## Core development principles
### Maintain stable public interfaces
CRITICAL: Always attempt to preserve function signatures, argument positions, and names for exported/public methods. Do not make breaking changes.
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using MkDocs Material admonitions, like `!!! warning`)
Ask: "Would this change break someone's code if they used it last week?"
### Code quality standards
All Python code MUST include type hints and return types.
```python title="Example"
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
    """Single line description of the function.

    Any additional context about the function can go here.

    Args:
        users: List of user identifiers to filter.
        known_users: Set of known/valid user identifiers.

    Returns:
        List of users that are not in the known_users set.
    """
```
- Use descriptive, self-explanatory variable names.
- Follow existing patterns in the codebase you're modifying
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
### Testing requirements
Every new feature or bugfix MUST be covered by unit tests.
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- We use `pytest` as the testing framework; if in doubt, check other existing tests for examples.
- The testing file structure should mirror the source code structure.
**Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)
- [ ] Does the test suite fail if your new logic is broken?
### Security and risk assessment
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Proper exception handling (no bare `except:`) and use a `msg` variable for error messages
- Remove unreachable/commented code before committing
- Avoid race conditions and resource leaks (file handles, sockets, threads)
- Ensure proper resource cleanup (file handles, connections)
### Documentation standards
Use Google-style docstrings with Args section for all public functions.
```python title="Example"
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
    """Send an email to a recipient with specified priority.

    Any additional context about the function can go here.

    Args:
        to: The email address of the recipient.
        msg: The message body to send.
        priority: Email priority level.

    Returns:
        `True` if email was sent successfully, `False` otherwise.

    Raises:
        InvalidEmailError: If the email address format is invalid.
        SMTPConnectionError: If unable to connect to email server.
    """
```
- Types go in function signatures, NOT in docstrings
- If a default is present, DO NOT repeat it in the docstring unless there is post-processing or it is set conditionally.
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Ensure American English spelling (e.g., "behavior", not "behaviour")
## Additional resources
- **Documentation:** https://docs.langchain.com/oss/python/langchain/overview and source at https://github.com/langchain-ai/docs or `../docs/`. Prefer the local install and use file search tools for best results. If needed, use the docs MCP server as defined in `.mcp.json` for programmatic access.
- **Contributing Guide:** [`.github/CONTRIBUTING.md`](https://docs.langchain.com/oss/python/contributing/overview)
AGENTS.md

MIGRATE.md (new file)

@@ -0,0 +1,9 @@
# Migrating
Please see the following guides for migrating LangChain code:
* Migrate to [LangChain v1.0](https://docs.langchain.com/oss/python/migrate/langchain-v1)
* Migrate to [LangChain v0.3](https://python.langchain.com/docs/versions/v0_3/)
* Migrate to [LangChain v0.2](https://python.langchain.com/docs/versions/v0_2/)
* Migrating from [LangChain 0.0.x Chains](https://python.langchain.com/docs/versions/migrating_chains/)
* Upgrade to [LangGraph Memory](https://python.langchain.com/docs/versions/migrating_memory/)


@@ -47,59 +47,54 @@ class EmptyDict(TypedDict, total=False):
class RunnableConfig(TypedDict, total=False):
"""Configuration for a `Runnable`.
See the [reference docs](https://reference.langchain.com/python/langchain_core/runnables/#langchain_core.runnables.RunnableConfig)
for more details.
"""
"""Configuration for a Runnable."""
tags: list[str]
"""Tags for this call and any sub-calls (e.g. a Chain calling an LLM).
"""
Tags for this call and any sub-calls (e.g. a Chain calling an LLM).
You can use these to filter calls.
"""
metadata: dict[str, Any]
"""Metadata for this call and any sub-calls (e.g. a Chain calling an LLM).
"""
Metadata for this call and any sub-calls (e.g. a Chain calling an LLM).
Keys should be strings, values should be JSON-serializable.
"""
callbacks: Callbacks
"""Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM).
"""
Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM).
Tags are passed to all callbacks, metadata is passed to handle*Start callbacks.
"""
run_name: str
"""Name for the tracer run for this call.
Defaults to the name of the class."""
"""
Name for the tracer run for this call. Defaults to the name of the class.
"""
max_concurrency: int | None
"""Maximum number of parallel calls to make.
If not provided, defaults to `ThreadPoolExecutor`'s default.
"""
Maximum number of parallel calls to make. If not provided, defaults to
`ThreadPoolExecutor`'s default.
"""
recursion_limit: int
"""Maximum number of times a call can recurse.
If not provided, defaults to `25`.
"""
Maximum number of times a call can recurse. If not provided, defaults to `25`.
"""
configurable: dict[str, Any]
"""Runtime values for attributes previously made configurable on this `Runnable`,
"""
Runtime values for attributes previously made configurable on this `Runnable`,
or sub-Runnables, through `configurable_fields` or `configurable_alternatives`.
Check `output_schema` for a description of the attributes that have been made
configurable.
"""
run_id: uuid.UUID | None
"""Unique identifier for the tracer run for this call.
If not provided, a new UUID will be generated.
"""
Unique identifier for the tracer run for this call. If not provided, a new UUID
will be generated.
"""


@@ -170,33 +170,28 @@ def dereference_refs(
full_schema: dict | None = None,
skip_keys: Sequence[str] | None = None,
) -> dict:
"""Resolve and inline JSON Schema `$ref` references in a schema object.
"""Resolve and inline JSON Schema $ref references in a schema object.
This function processes a JSON Schema and resolves all `$ref` references by
replacing them with the actual referenced content.
Handles both simple references and complex cases like circular references and mixed
`$ref` objects that contain additional properties alongside the `$ref`.
This function processes a JSON Schema and resolves all $ref references by replacing
them with the actual referenced content. It handles both simple references and
complex cases like circular references and mixed $ref objects that contain
additional properties alongside the $ref.
Args:
schema_obj: The JSON Schema object or fragment to process.
This can be a complete schema or just a portion of one.
full_schema: The complete schema containing all definitions that `$refs` might
point to.
If not provided, defaults to `schema_obj` (useful when the schema is
self-contained).
skip_keys: Controls recursion behavior and reference resolution depth.
- If `None` (Default): Only recurse under `'$defs'` and use shallow
reference resolution (break cycles but don't deep-inline nested refs)
- If provided (even as `[]`): Recurse under all keys and use deep reference
resolution (fully inline all nested references)
schema_obj: The JSON Schema object or fragment to process. This can be a
complete schema or just a portion of one.
full_schema: The complete schema containing all definitions that $refs might
point to. If not provided, defaults to schema_obj (useful when the
schema is self-contained).
skip_keys: Controls recursion behavior and reference resolution depth:
- If `None` (Default): Only recurse under '$defs' and use shallow reference
resolution (break cycles but don't deep-inline nested refs)
- If provided (even as []): Recurse under all keys and use deep reference
resolution (fully inline all nested references)
Returns:
A new dictionary with all $ref references resolved and inlined.
The original `schema_obj` is not modified.
A new dictionary with all $ref references resolved and inlined. The original
schema_obj is not modified.
Examples:
Basic reference resolution:
@@ -208,8 +203,7 @@ def dereference_refs(
>>> result = dereference_refs(schema)
>>> result["properties"]["name"] # {"type": "string"}
Mixed `$ref` with additional properties:
Mixed $ref with additional properties:
>>> schema = {
... "properties": {
... "name": {"$ref": "#/$defs/base", "description": "User name"}
@@ -221,7 +215,6 @@ def dereference_refs(
# {"type": "string", "minLength": 1, "description": "User name"}
Handling circular references:
>>> schema = {
... "properties": {"user": {"$ref": "#/$defs/User"}},
... "$defs": {
@@ -234,11 +227,10 @@ def dereference_refs(
>>> result = dereference_refs(schema) # Won't cause infinite recursion
!!! note
- Circular references are handled gracefully by breaking cycles
- Mixed `$ref` objects (with both `$ref` and other properties) are supported
- Additional properties in mixed `$refs` override resolved properties
- The `$defs` section is preserved in the output by default
- Mixed $ref objects (with both $ref and other properties) are supported
- Additional properties in mixed $refs override resolved properties
- The $defs section is preserved in the output by default
"""
full = full_schema or schema_obj
keys_to_skip = list(skip_keys) if skip_keys is not None else ["$defs"]
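A runnable sketch of the basic behavior described in the docstring above, mirroring its first doctest (the import path is `langchain_core.utils.json_schema`):

```python
from langchain_core.utils.json_schema import dereference_refs

schema = {
    "type": "object",
    "properties": {"name": {"$ref": "#/$defs/name"}},
    "$defs": {"name": {"type": "string"}},
}
result = dereference_refs(schema)
assert result["properties"]["name"] == {"type": "string"}
assert "$defs" in result  # preserved in the output by default
```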


@@ -70,6 +70,7 @@ test = [
"pandas>=2.0.0,<3.0.0",
"syrupy>=4.0.2,<5.0.0",
"requests-mock>=1.11.0,<2.0.0",
"blockbuster>=1.5.18,<1.6.0",
"toml>=0.10.2,<1.0.0",
"packaging>=24.2.0,<26.0.0",
"langchain-tests",


@@ -1,9 +1,42 @@
"""Configuration for unit tests."""
from collections.abc import Sequence
from collections.abc import Iterator, Sequence
from importlib import util
import pytest
from blockbuster import blockbuster_ctx
@pytest.fixture(autouse=True)
def blockbuster() -> Iterator[None]:
    with blockbuster_ctx("langchain_classic") as bb:
        bb.functions["io.TextIOWrapper.read"].can_block_in(
            "langchain_classic/__init__.py",
            "<module>",
        )
        for func in ["os.stat", "os.path.abspath"]:
            (
                bb.functions[func]
                .can_block_in("langchain_core/runnables/base.py", "__repr__")
                .can_block_in(
                    "langchain_core/beta/runnables/context.py",
                    "aconfig_with_context",
                )
            )
        for func in ["os.stat", "io.TextIOWrapper.read"]:
            bb.functions[func].can_block_in(
                "langsmith/client.py",
                "_default_retry_config",
            )
        for bb_function in bb.functions.values():
            bb_function.can_block_in(
                "freezegun/api.py",
                "_get_cached_module_attributes",
            )
        yield
def pytest_addoption(parser: pytest.Parser) -> None:
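For context, `blockbuster` raises a `BlockingError` when a wrapped blocking call runs inside an event loop; the fixture above is carving allowlisted exceptions out of that rule. A minimal sketch of the mechanism (not part of this diff):

```python
import asyncio
import time

from blockbuster import BlockingError, blockbuster_ctx


async def main() -> None:
    with blockbuster_ctx():
        try:
            time.sleep(0.1)  # blocking call inside the running event loop
        except BlockingError:
            print("blockbuster caught a blocking call")


asyncio.run(main())
```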


@@ -73,6 +73,7 @@ def test_test_group_dependencies(uv_conf: Mapping[str, Any]) -> None:
"pytest-socket",
"pytest-watcher",
"pytest-xdist",
"blockbuster",
"responses",
"syrupy",
"toml",

libs/langchain/uv.lock (generated)

@@ -1,5 +1,5 @@
version = 1
revision = 2
revision = 3
requires-python = ">=3.10.0, <4.0.0"
resolution-markers = [
"python_full_version >= '3.14' and platform_python_implementation == 'PyPy'",
@@ -377,6 +377,18 @@ css = [
{ name = "tinycss2" },
]
[[package]]
name = "blockbuster"
version = "1.5.25"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "forbiddenfruit", marker = "implementation_name == 'cpython'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/7f/bc/57c49465decaeeedd58ce2d970b4cdfd93a74ba9993abff2dc498a31c283/blockbuster-1.5.25.tar.gz", hash = "sha256:b72f1d2aefdeecd2a820ddf1e1c8593bf00b96e9fdc4cd2199ebafd06f7cb8f0", size = 36058, upload-time = "2025-07-14T16:00:20.766Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/0b/01/dccc277c014f171f61a6047bb22c684e16c7f2db6bb5c8cce1feaf41ec55/blockbuster-1.5.25-py3-none-any.whl", hash = "sha256:cb06229762273e0f5f3accdaed3d2c5a3b61b055e38843de202311ede21bb0f5", size = 13196, upload-time = "2025-07-14T16:00:19.396Z" },
]
[[package]]
name = "boto3"
version = "1.40.44"
@@ -1030,6 +1042,12 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/39/de/576c2dab45914e08c1fc4adfa4a334da037b8b4ad4df1fdab4b56904bd07/fireworks_ai-0.16.4-py3-none-any.whl", hash = "sha256:e7592fdec64aa35f0068b8fa8277e2440ef6f0d6355e818b7220e098f7ea0ee9", size = 193771, upload-time = "2025-05-18T07:16:20.611Z" },
]
[[package]]
name = "forbiddenfruit"
version = "0.1.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e6/79/d4f20e91327c98096d605646bdc6a5ffedae820f38d378d3515c42ec5e60/forbiddenfruit-0.1.4.tar.gz", hash = "sha256:e3f7e66561a29ae129aac139a85d610dbf3dd896128187ed5454b6421f624253", size = 43756, upload-time = "2021-01-16T21:03:35.401Z" }
[[package]]
name = "fqdn"
version = "1.5.1"
@@ -2315,6 +2333,7 @@ lint = [
{ name = "ruff" },
]
test = [
{ name = "blockbuster" },
{ name = "cffi" },
{ name = "freezegun" },
{ name = "langchain-core" },
@@ -2405,6 +2424,7 @@ lint = [
{ name = "ruff", specifier = ">=0.13.1,<0.14.0" },
]
test = [
{ name = "blockbuster", specifier = ">=1.5.18,<1.6.0" },
{ name = "cffi", marker = "python_full_version < '3.10'", specifier = "<1.17.1" },
{ name = "cffi", marker = "python_full_version >= '3.10'" },
{ name = "freezegun", specifier = ">=1.2.2,<2.0.0" },
@@ -2458,7 +2478,7 @@ typing = [
[[package]]
name = "langchain-core"
version = "1.1.0"
version = "1.0.3"
source = { editable = "../core" }
dependencies = [
{ name = "jsonpatch" },
@@ -2492,6 +2512,7 @@ test = [
{ name = "blockbuster", specifier = ">=1.5.18,<1.6.0" },
{ name = "freezegun", specifier = ">=1.2.2,<2.0.0" },
{ name = "grandalf", specifier = ">=0.8.0,<1.0.0" },
{ name = "langchain-model-profiles", directory = "../model-profiles" },
{ name = "langchain-tests", directory = "../standard-tests" },
{ name = "numpy", marker = "python_full_version < '3.13'", specifier = ">=1.26.4" },
{ name = "numpy", marker = "python_full_version >= '3.13'", specifier = ">=2.1.0" },
@@ -2508,6 +2529,7 @@ test = [
]
test-integration = []
typing = [
{ name = "langchain-model-profiles", directory = "../model-profiles" },
{ name = "langchain-text-splitters", directory = "../text-splitters" },
{ name = "mypy", specifier = ">=1.18.1,<1.19.0" },
{ name = "types-pyyaml", specifier = ">=6.0.12.2,<7.0.0.0" },
@@ -2637,7 +2659,7 @@ wheels = [
[[package]]
name = "langchain-openai"
version = "1.1.0"
version = "1.0.2"
source = { editable = "../partners/openai" }
dependencies = [
{ name = "langchain-core" },
@@ -2700,7 +2722,7 @@ wheels = [
[[package]]
name = "langchain-tests"
version = "1.0.2"
version = "1.0.1"
source = { editable = "../standard-tests" }
dependencies = [
{ name = "httpx" },


@@ -7,7 +7,7 @@ from langgraph.runtime import Runtime
from langgraph.types import interrupt
from typing_extensions import NotRequired, TypedDict
from langchain.agents.middleware.types import AgentMiddleware, AgentState, ContextT, StateT
from langchain.agents.middleware.types import AgentMiddleware, AgentState
class Action(TypedDict):
@@ -102,7 +102,7 @@ class HITLResponse(TypedDict):
class _DescriptionFactory(Protocol):
    """Callable that generates a description for a tool call."""

    def __call__(self, tool_call: ToolCall, state: AgentState, runtime: Runtime[ContextT]) -> str:
    def __call__(self, tool_call: ToolCall, state: AgentState, runtime: Runtime) -> str:
        """Generate a description for a tool call."""
        ...
@@ -138,7 +138,7 @@ class InterruptOnConfig(TypedDict):
    def format_tool_description(
        tool_call: ToolCall,
        state: AgentState,
        runtime: Runtime[ContextT]
        runtime: Runtime
    ) -> str:
        import json

        return (
@@ -156,7 +156,7 @@ class InterruptOnConfig(TypedDict):
    """JSON schema for the args associated with the action, if edits are allowed."""

class HumanInTheLoopMiddleware(AgentMiddleware[StateT, ContextT]):
class HumanInTheLoopMiddleware(AgentMiddleware):
    """Human in the loop middleware."""

    def __init__(
@@ -204,7 +204,7 @@ class HumanInTheLoopMiddleware(AgentMiddleware[StateT, ContextT]):
        tool_call: ToolCall,
        config: InterruptOnConfig,
        state: AgentState,
        runtime: Runtime[ContextT],
        runtime: Runtime,
    ) -> tuple[ActionRequest, ReviewConfig]:
        """Create an ActionRequest and ReviewConfig for a tool call."""
        tool_name = tool_call["name"]
@@ -277,7 +277,7 @@ class HumanInTheLoopMiddleware(AgentMiddleware[StateT, ContextT]):
        )
        raise ValueError(msg)

    def after_model(self, state: AgentState, runtime: Runtime[ContextT]) -> dict[str, Any] | None:
    def after_model(self, state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
        """Trigger interrupt flows for relevant tool calls after an `AIMessage`."""
        messages = state["messages"]
        if not messages:
@@ -350,8 +350,6 @@ class HumanInTheLoopMiddleware(AgentMiddleware[StateT, ContextT]):
        return {"messages": [last_ai_msg, *artificial_tool_messages]}

    async def aafter_model(
        self, state: AgentState, runtime: Runtime[ContextT]
    ) -> dict[str, Any] | None:
    async def aafter_model(self, state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
        """Async trigger interrupt flows for relevant tool calls after an `AIMessage`."""
        return self.after_model(state, runtime)


@@ -3,7 +3,6 @@
import uuid
import warnings
from collections.abc import Callable, Iterable, Mapping
from functools import partial
from typing import Any, Literal, cast
from langchain_core.messages import (
@@ -56,76 +55,13 @@ Messages to summarize:
_DEFAULT_MESSAGES_TO_KEEP = 20
_DEFAULT_TRIM_TOKEN_LIMIT = 4000
_DEFAULT_FALLBACK_MESSAGE_COUNT = 15
_SEARCH_RANGE_FOR_TOOL_PAIRS = 5
ContextFraction = tuple[Literal["fraction"], float]
"""Fraction of model's maximum input tokens.
Example:
To specify 50% of the model's max input tokens:
```python
("fraction", 0.5)
```
"""
ContextTokens = tuple[Literal["tokens"], int]
"""Absolute number of tokens.
Example:
To specify 3000 tokens:
```python
("tokens", 3000)
```
"""
ContextMessages = tuple[Literal["messages"], int]
"""Absolute number of messages.
Example:
To specify 50 messages:
```python
("messages", 50)
```
"""
ContextSize = ContextFraction | ContextTokens | ContextMessages
"""Union type for context size specifications.
Can be either:
- [`ContextFraction`][langchain.agents.middleware.summarization.ContextFraction]: A
fraction of the model's maximum input tokens.
- [`ContextTokens`][langchain.agents.middleware.summarization.ContextTokens]: An absolute
number of tokens.
- [`ContextMessages`][langchain.agents.middleware.summarization.ContextMessages]: An
absolute number of messages.
Depending on use with `trigger` or `keep` parameters, this type indicates either
when to trigger summarization or how much context to retain.
Example:
```python
# ContextFraction
context_size: ContextSize = ("fraction", 0.5)
# ContextTokens
context_size: ContextSize = ("tokens", 3000)
# ContextMessages
context_size: ContextSize = ("messages", 50)
```
"""
def _get_approximate_token_counter(model: BaseChatModel) -> TokenCounter:
    """Tune parameters of approximate token counter based on model type."""
    if model._llm_type == "anthropic-chat":
        # 3.3 was estimated in an offline experiment, comparing with Claude's token-counting
        # API: https://platform.claude.com/docs/en/build-with-claude/token-counting
        return partial(count_tokens_approximately, chars_per_token=3.3)
    return count_tokens_approximately
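To see what the `chars_per_token` tuning above changes, a small sketch using `count_tokens_approximately` (the message content is arbitrary):

```python
from langchain_core.messages import HumanMessage
from langchain_core.messages.utils import count_tokens_approximately

messages = [HumanMessage(content="a" * 330)]
default_estimate = count_tokens_approximately(messages)  # assumes ~4 chars per token
claude_estimate = count_tokens_approximately(messages, chars_per_token=3.3)
assert claude_estimate > default_estimate  # fewer chars per token -> higher count
```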
class SummarizationMiddleware(AgentMiddleware):
@@ -153,48 +89,19 @@ class SummarizationMiddleware(AgentMiddleware):
model: The language model to use for generating summaries.
trigger: One or more thresholds that trigger summarization.
Provide a single
[`ContextSize`][langchain.agents.middleware.summarization.ContextSize]
tuple or a list of tuples, in which case summarization runs when any
threshold is met.
Provide a single `ContextSize` tuple or a list of tuples, in which case
summarization runs when any threshold is breached.
!!! example
```python
# Trigger summarization when 50 messages is reached
("messages", 50)
# Trigger summarization when 3000 tokens is reached
("tokens", 3000)
# Trigger summarization either when 80% of model's max input tokens
# is reached or when 100 messages is reached (whichever comes first)
[("fraction", 0.8), ("messages", 100)]
```
See [`ContextSize`][langchain.agents.middleware.summarization.ContextSize]
for more details.
Examples: `("messages", 50)`, `("tokens", 3000)`, `[("fraction", 0.8),
("messages", 100)]`.
keep: Context retention policy applied after summarization.
Provide a [`ContextSize`][langchain.agents.middleware.summarization.ContextSize]
tuple to specify how much history to preserve.
Provide a `ContextSize` tuple to specify how much history to preserve.
Defaults to keeping the most recent `20` messages.
Defaults to keeping the most recent 20 messages.
Does not support multiple values like `trigger`.
!!! example
```python
# Keep the most recent 20 messages
("messages", 20)
# Keep the most recent 3000 tokens
("tokens", 3000)
# Keep the most recent 30% of the model's max input tokens
("fraction", 0.3)
```
Examples: `("messages", 20)`, `("tokens", 3000)`, or
`("fraction", 0.3)`.
token_counter: Function to count tokens in messages.
summary_prompt: Prompt template for generating summaries.
trim_tokens_to_summarize: Maximum tokens to keep when preparing messages for
@@ -243,10 +150,7 @@ class SummarizationMiddleware(AgentMiddleware):
        self._trigger_conditions = trigger_conditions
        self.keep = self._validate_context_size(keep, "keep")

        if token_counter is count_tokens_approximately:
            self.token_counter = _get_approximate_token_counter(self.model)
        else:
            self.token_counter = token_counter
        self.token_counter = token_counter
        self.summary_prompt = summary_prompt
        self.trim_tokens_to_summarize = trim_tokens_to_summarize
@@ -479,25 +383,16 @@ class SummarizationMiddleware(AgentMiddleware):
        if cutoff_index >= len(messages):
            return True

        # Check tool messages at or after cutoff and find their source AI message
        for i in range(cutoff_index, len(messages)):
            msg = messages[i]
            if not isinstance(msg, ToolMessage):
        search_start = max(0, cutoff_index - _SEARCH_RANGE_FOR_TOOL_PAIRS)
        search_end = min(len(messages), cutoff_index + _SEARCH_RANGE_FOR_TOOL_PAIRS)
        for i in range(search_start, search_end):
            if not self._has_tool_calls(messages[i]):
                continue
            # Search backwards to find the AI message that generated this tool call
            tool_call_id = msg.tool_call_id
            for j in range(i - 1, -1, -1):
                ai_msg = messages[j]
                if not self._has_tool_calls(ai_msg):
                    continue
                ai_tool_ids = self._extract_tool_call_ids(cast("AIMessage", ai_msg))
                if tool_call_id in ai_tool_ids:
                    # Found the AI message - check if cutoff separates them
                    if j < cutoff_index:
                        # AI message would be summarized, tool message would be kept
                        return False
                    break
            tool_call_ids = self._extract_tool_call_ids(cast("AIMessage", messages[i]))
            if self._cutoff_separates_tool_pair(messages, i, cutoff_index, tool_call_ids):
                return False

        return True


@@ -892,38 +892,3 @@ def test_summarization_middleware_is_safe_cutoff_at_end() -> None:
    # Cutoff past the length should also be safe
    assert middleware._is_safe_cutoff_point(messages, len(messages) + 5)


def test_summarization_adjust_token_counts() -> None:
    test_message = HumanMessage(content="a" * 12)
    middleware = SummarizationMiddleware(model=MockChatModel(), trigger=("messages", 5))
    count_1 = middleware.token_counter([test_message])

    class MockAnthropicModel(MockChatModel):
        @property
        def _llm_type(self) -> str:
            return "anthropic-chat"

    middleware = SummarizationMiddleware(model=MockAnthropicModel(), trigger=("messages", 5))
    count_2 = middleware.token_counter([test_message])
    assert count_1 != count_2


def test_summarization_middleware_many_parallel_tool_calls_safety_gap() -> None:
    """Test cutoff safety with many parallel tool calls extending beyond old search range."""
    middleware = SummarizationMiddleware(
        model=MockChatModel(), trigger=("messages", 15), keep=("messages", 5)
    )
    tool_calls = [{"name": f"tool_{i}", "args": {}, "id": f"call_{i}"} for i in range(10)]
    human_message = HumanMessage(content="calling 10 tools")
    ai_message = AIMessage(content="calling 10 tools", tool_calls=tool_calls)
    tool_messages = [
        ToolMessage(content=f"result_{i}", tool_call_id=f"call_{i}") for i in range(10)
    ]
    messages: list[AnyMessage] = [human_message, ai_message, *tool_messages]

    # Cutoff at index 7 would separate the AI message (index 1) from tool messages 7-11
    is_safe = middleware._is_safe_cutoff_point(messages, 7)
    assert is_safe is False


@@ -205,7 +205,7 @@ License: MIT License
To update these data, refer to the instructions here:
https://docs.langchain.com/oss/python/langchain/models#updating-or-overwriting-profile-data
https://docs.langchain.com/oss/python/langchain/models#modify-profile-data
"""


@@ -121,16 +121,6 @@ class AnthropicTool(TypedDict):
cache_control: NotRequired[dict[str, str]]
# Some tool types require specific beta headers to be enabled
# Mapping of tool type patterns to required beta headers
_TOOL_TYPE_TO_BETA: dict[str, str] = {
    "web_fetch_20250910": "web-fetch-2025-09-10",
    "code_execution_20250522": "code-execution-2025-05-22",
    "code_execution_20250825": "code-execution-2025-08-25",
    "memory_20250818": "context-management-2025-06-27",
}
def _is_builtin_tool(tool: Any) -> bool:
"""Check if a tool is a built-in Anthropic tool.
@@ -1403,11 +1393,12 @@ class ChatAnthropic(BaseChatModel):
??? example "Web fetch (beta)"
```python hl_lines="7-11"
```python hl_lines="5 8-12"
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-3-5-haiku-20241022",
betas=["web-fetch-2025-09-10"], # Enable web fetch beta
)
tool = {
@@ -1420,19 +1411,16 @@ class ChatAnthropic(BaseChatModel):
response = model_with_tools.invoke("Please analyze the content at https://example.com/article")
```
!!! note "Automatic beta header"
The required `web-fetch-2025-09-10` beta header is automatically
appended to the request when using the `web_fetch_20250910` tool type.
You don't need to manually specify it in the `betas` parameter.
See the [Claude docs](https://platform.claude.com/docs/en/agents-and-tools/tool-use/web-fetch-tool)
for more info.
??? example "Code execution"
```python hl_lines="3-6"
model = ChatAnthropic(model="claude-sonnet-4-5-20250929")
```python hl_lines="3 6-9"
model = ChatAnthropic(
model="claude-sonnet-4-5-20250929",
betas=["code-execution-2025-05-22"], # Enable code execution beta
)
tool = {
"type": "code_execution_20250522",
@@ -1445,21 +1433,18 @@ class ChatAnthropic(BaseChatModel):
)
```
!!! note "Automatic beta header"
The required `code-execution-2025-05-22` beta header is automatically
appended to the request when using the `code_execution_20250522` tool
type. You don't need to manually specify it in the `betas` parameter.
See the [Claude docs](https://platform.claude.com/docs/en/agents-and-tools/tool-use/code-execution-tool)
for more info.
??? example "Memory tool"
```python hl_lines="5-8"
```python hl_lines="5 8-11"
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-sonnet-4-5-20250929")
model = ChatAnthropic(
model="claude-sonnet-4-5-20250929",
betas=["context-management-2025-06-27"], # Enable context management beta
)
tool = {
"type": "memory_20250818",
@@ -1470,12 +1455,6 @@ class ChatAnthropic(BaseChatModel):
response = model_with_tools.invoke("What are my interests?")
```
!!! note "Automatic beta header"
The required `context-management-2025-06-27` beta header is automatically
appended to the request when using the `memory_20250818` tool type.
You don't need to manually specify it in the `betas` parameter.
See the [Claude docs](https://platform.claude.com/docs/en/agents-and-tools/tool-use/memory-tool)
for more info.
@@ -1613,12 +1592,6 @@ class ChatAnthropic(BaseChatModel):
Example: `#!python betas=["mcp-client-2025-04-04"]`
"""
# Can also be passed in w/ model_kwargs, but having it as a param makes better devx
#
# Precedence order:
# 1. Call-time kwargs (e.g., llm.invoke(..., betas=[...]))
# 2. model_kwargs (e.g., ChatAnthropic(model_kwargs={"betas": [...]}))
# 3. Direct parameter (e.g., ChatAnthropic(betas=[...]))
model_kwargs: dict[str, Any] = Field(default_factory=dict)
@@ -1869,74 +1842,21 @@ class ChatAnthropic(BaseChatModel):
payload["thinking"] = self.thinking
if "response_format" in payload:
# response_format present when using agents.create_agent's ProviderStrategy
# ---
# ProviderStrategy converts to OpenAI-style format, which passes kwargs to
# ChatAnthropic, ending up in our payload
response_format = payload.pop("response_format")
if (
isinstance(response_format, dict)
and response_format.get("type") == "json_schema"
and "schema" in response_format.get("json_schema", {})
):
# compat with langchain.agents.create_agent response_format, which is
# an approximation of OpenAI format
response_format = cast(dict, response_format["json_schema"]["schema"])
# Convert OpenAI-style response_format to Anthropic's output_format
payload["output_format"] = _convert_to_anthropic_output_format(
response_format
)
if "output_format" in payload:
# Native structured output requires the structured outputs beta
if payload["betas"]:
if "structured-outputs-2025-11-13" not in payload["betas"]:
# Merge with existing betas
payload["betas"] = [
*payload["betas"],
"structured-outputs-2025-11-13",
]
else:
payload["betas"] = ["structured-outputs-2025-11-13"]
# Check if any tools have strict mode enabled
if "tools" in payload and isinstance(payload["tools"], list):
has_strict_tool = any(
isinstance(tool, dict) and tool.get("strict") is True
for tool in payload["tools"]
)
if has_strict_tool:
# Strict tool use requires the structured outputs beta
if payload["betas"]:
if "structured-outputs-2025-11-13" not in payload["betas"]:
# Merge with existing betas
payload["betas"] = [
*payload["betas"],
"structured-outputs-2025-11-13",
]
else:
payload["betas"] = ["structured-outputs-2025-11-13"]
# Auto-append required betas for specific tool types
for tool in payload["tools"]:
if isinstance(tool, dict) and "type" in tool:
tool_type = tool["type"]
if tool_type in _TOOL_TYPE_TO_BETA:
required_beta = _TOOL_TYPE_TO_BETA[tool_type]
if payload["betas"]:
# Append to existing betas if not already present
if required_beta not in payload["betas"]:
payload["betas"] = [*payload["betas"], required_beta]
else:
payload["betas"] = [required_beta]
# Auto-append required beta for mcp_servers
if payload.get("mcp_servers"):
required_beta = "mcp-client-2025-11-20"
if payload["betas"]:
# Append to existing betas if not already present
if required_beta not in payload["betas"]:
payload["betas"] = [*payload["betas"], required_beta]
else:
payload["betas"] = [required_beta]
if "output_format" in payload and not payload["betas"]:
payload["betas"] = ["structured-outputs-2025-11-13"]
return {k: v for k, v in payload.items() if v is not None}
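Reading the old and new branches above together: the removed code merged required betas into any user-supplied list, while the surviving one-liner only fills in `structured-outputs-2025-11-13` when no betas were set at all. A hedged sketch of the resulting caller-side contract:

```python
from langchain_anthropic import ChatAnthropic

# No betas supplied: structured output can still inject its own beta.
model = ChatAnthropic(model="claude-sonnet-4-5")

# Betas supplied: list everything explicitly; nothing is merged in anymore.
model = ChatAnthropic(
    model="claude-sonnet-4-5",
    betas=["mcp-client-2025-04-04", "structured-outputs-2025-11-13"],
)
```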
@@ -2380,13 +2300,17 @@ class ChatAnthropic(BaseChatModel):
- Claude Sonnet 4.5 or Opus 4.1
- `langchain-anthropic>=1.1.0`
To enable strict tool use, specify `strict=True` when calling `bind_tools`.
To enable strict tool use:
```python hl_lines="11"
1. Specify the `structured-outputs-2025-11-13` beta header
2. Specify `strict=True` when calling `bind_tools`
```python hl_lines="5 12"
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-5",
betas=["structured-outputs-2025-11-13"],
)
def get_weather(location: str) -> str:
@@ -2396,12 +2320,6 @@ class ChatAnthropic(BaseChatModel):
model_with_tools = model.bind_tools([get_weather], strict=True)
```
!!! note "Automatic beta header"
The required `structured-outputs-2025-11-13` beta header is
automatically appended to the request when using `strict=True`, so you
don't need to manually specify it in the `betas` parameter.
See LangChain [docs](https://docs.langchain.com/oss/python/integrations/chat/anthropic#strict-tool-use)
for more detail.
""" # noqa: E501
@@ -2595,15 +2513,19 @@ class ChatAnthropic(BaseChatModel):
- Claude Sonnet 4.5 or Opus 4.1
- `langchain-anthropic>=1.1.0`
To enable native structured output, specify `method="json_schema"` when
calling `with_structured_output`. (Under the hood, LangChain will
append the required `structured-outputs-2025-11-13` beta header)
To enable native structured output:
```python hl_lines="13"
1. Specify the `structured-outputs-2025-11-13` beta header
2. Specify `method="json_schema"` when calling `with_structured_output`
```python hl_lines="6 16"
from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel, Field
model = ChatAnthropic(model="claude-sonnet-4-5")
model = ChatAnthropic(
model="claude-sonnet-4-5",
betas=["structured-outputs-2025-11-13"],
)
class Movie(BaseModel):
\"\"\"A movie with details.\"\"\"
@@ -2791,7 +2713,8 @@ def convert_to_anthropic_tool(
!!! note
Requires Claude Sonnet 4.5 or Opus 4.1.
Requires Claude Sonnet 4.5 or Opus 4.1 and the
`structured-outputs-2025-11-13` beta header.
Returns:
An Anthropic tool definition dict.


@@ -700,7 +700,6 @@ def test_response_format(schema: dict | type) -> None:
    assert parsed["age"]

@pytest.mark.vcr
def test_response_format_in_agent() -> None:
    class Weather(BaseModel):
        temperature: float


@@ -1398,7 +1398,7 @@ def test_mcp_tracing() -> None:
    ]
    llm = ChatAnthropic(
        model=MODEL_NAME,
        model="claude-sonnet-4-5-20250929",
        betas=["mcp-client-2025-04-04"],
        mcp_servers=mcp_servers,
    )
@@ -1586,7 +1586,7 @@ def test_streaming_cache_token_reporting() -> None:
def test_strict_tool_use() -> None:
    model = ChatAnthropic(
        model=MODEL_NAME,  # type: ignore[call-arg]
        model="claude-sonnet-4-5",  # type: ignore[call-arg]
        betas=["structured-outputs-2025-11-13"],
    )
@@ -1600,284 +1600,6 @@ def test_strict_tool_use() -> None:
    assert tool_definition["strict"] is True
def test_beta_merging_with_response_format() -> None:
    """Test that structured-outputs beta is merged with existing betas."""

    class Person(BaseModel):
        """Person data."""

        name: str
        age: int

    # Auto-inject structured-outputs beta with no others specified
    model = ChatAnthropic(model=MODEL_NAME)
    payload = model._get_request_payload(
        "Test query",
        response_format=Person.model_json_schema(),
    )
    assert payload["betas"] == ["structured-outputs-2025-11-13"]

    # Merge structured-outputs beta if other betas are present
    model = ChatAnthropic(
        model=MODEL_NAME,
        betas=["mcp-client-2025-04-04"],
    )
    payload = model._get_request_payload(
        "Test query",
        response_format=Person.model_json_schema(),
    )
    assert payload["betas"] == [
        "mcp-client-2025-04-04",
        "structured-outputs-2025-11-13",
    ]

    # Structured-outputs beta already present - don't duplicate
    model = ChatAnthropic(
        model=MODEL_NAME,
        betas=[
            "mcp-client-2025-04-04",
            "structured-outputs-2025-11-13",
        ],
    )
    payload = model._get_request_payload(
        "Test query",
        response_format=Person.model_json_schema(),
    )
    assert payload["betas"] == [
        "mcp-client-2025-04-04",
        "structured-outputs-2025-11-13",
    ]

    # No response_format - betas should not be modified
    model = ChatAnthropic(
        model=MODEL_NAME,
        betas=["mcp-client-2025-04-04"],
    )
    payload = model._get_request_payload("Test query")
    assert payload["betas"] == ["mcp-client-2025-04-04"]
def test_beta_merging_with_strict_tool_use() -> None:
    """Test beta merging for strict tools."""

    def get_weather(location: str) -> str:
        """Get the weather at a location."""
        return "Sunny"

    # Auto-inject structured-outputs beta with no others specified
    model = ChatAnthropic(model=MODEL_NAME)  # type: ignore[call-arg]
    model_with_tools = model.bind_tools([get_weather], strict=True)
    payload = model_with_tools._get_request_payload(  # type: ignore[attr-defined]
        "What's the weather?",
        **model_with_tools.kwargs,  # type: ignore[attr-defined]
    )
    assert payload["betas"] == ["structured-outputs-2025-11-13"]

    # Merge structured-outputs beta if other betas are present
    model = ChatAnthropic(
        model=MODEL_NAME,  # type: ignore[call-arg]
        betas=["mcp-client-2025-04-04"],
    )
    model_with_tools = model.bind_tools([get_weather], strict=True)
    payload = model_with_tools._get_request_payload(  # type: ignore[attr-defined]
        "What's the weather?",
        **model_with_tools.kwargs,  # type: ignore[attr-defined]
    )
    assert payload["betas"] == [
        "mcp-client-2025-04-04",
        "structured-outputs-2025-11-13",
    ]

    # Structured-outputs beta already present - don't duplicate
    model = ChatAnthropic(
        model=MODEL_NAME,  # type: ignore[call-arg]
        betas=[
            "mcp-client-2025-04-04",
            "structured-outputs-2025-11-13",
        ],
    )
    model_with_tools = model.bind_tools([get_weather], strict=True)
    payload = model_with_tools._get_request_payload(  # type: ignore[attr-defined]
        "What's the weather?",
        **model_with_tools.kwargs,  # type: ignore[attr-defined]
    )
    assert payload["betas"] == [
        "mcp-client-2025-04-04",
        "structured-outputs-2025-11-13",
    ]

    # No strict tools - betas should not be modified
    model = ChatAnthropic(
        model=MODEL_NAME,  # type: ignore[call-arg]
        betas=["mcp-client-2025-04-04"],
    )
    model_with_tools = model.bind_tools([get_weather], strict=False)
    payload = model_with_tools._get_request_payload(  # type: ignore[attr-defined]
        "What's the weather?",
        **model_with_tools.kwargs,  # type: ignore[attr-defined]
    )
    assert payload["betas"] == ["mcp-client-2025-04-04"]
def test_auto_append_betas_for_tool_types() -> None:
"""Test that betas are automatically appended based on tool types."""
# Test web_fetch_20250910 auto-appends web-fetch-2025-09-10
model = ChatAnthropic(model=MODEL_NAME) # type: ignore[call-arg]
tool = {"type": "web_fetch_20250910", "name": "web_fetch", "max_uses": 3}
model_with_tools = model.bind_tools([tool])
payload = model_with_tools._get_request_payload( # type: ignore[attr-defined]
"test",
**model_with_tools.kwargs, # type: ignore[attr-defined]
)
assert payload["betas"] == ["web-fetch-2025-09-10"]
# Test code_execution_20250522 auto-appends code-execution-2025-05-22
model = ChatAnthropic(model=MODEL_NAME) # type: ignore[call-arg]
tool = {"type": "code_execution_20250522", "name": "code_execution"}
model_with_tools = model.bind_tools([tool])
payload = model_with_tools._get_request_payload( # type: ignore[attr-defined]
"test",
**model_with_tools.kwargs, # type: ignore[attr-defined]
)
assert payload["betas"] == ["code-execution-2025-05-22"]
# Test memory_20250818 auto-appends context-management-2025-06-27
model = ChatAnthropic(model=MODEL_NAME) # type: ignore[call-arg]
tool = {"type": "memory_20250818", "name": "memory"}
model_with_tools = model.bind_tools([tool])
payload = model_with_tools._get_request_payload( # type: ignore[attr-defined]
"test",
**model_with_tools.kwargs, # type: ignore[attr-defined]
)
assert payload["betas"] == ["context-management-2025-06-27"]
# Test merging with existing betas
model = ChatAnthropic(
model=MODEL_NAME,
betas=["mcp-client-2025-04-04"], # type: ignore[call-arg]
)
tool = {"type": "web_fetch_20250910", "name": "web_fetch"}
model_with_tools = model.bind_tools([tool])
payload = model_with_tools._get_request_payload( # type: ignore[attr-defined]
"test",
**model_with_tools.kwargs, # type: ignore[attr-defined]
)
assert payload["betas"] == ["mcp-client-2025-04-04", "web-fetch-2025-09-10"]
# Test that it doesn't duplicate existing betas
model = ChatAnthropic(
model=MODEL_NAME,
betas=["web-fetch-2025-09-10"], # type: ignore[call-arg]
)
tool = {"type": "web_fetch_20250910", "name": "web_fetch"}
model_with_tools = model.bind_tools([tool])
payload = model_with_tools._get_request_payload( # type: ignore[attr-defined]
"test",
**model_with_tools.kwargs, # type: ignore[attr-defined]
)
assert payload["betas"] == ["web-fetch-2025-09-10"]
# Test multiple tools with different beta requirements
model = ChatAnthropic(model=MODEL_NAME) # type: ignore[call-arg]
tools = [
{"type": "web_fetch_20250910", "name": "web_fetch"},
{"type": "code_execution_20250522", "name": "code_execution"},
]
model_with_tools = model.bind_tools(tools)
payload = model_with_tools._get_request_payload( # type: ignore[attr-defined]
"test",
**model_with_tools.kwargs, # type: ignore[attr-defined]
)
assert set(payload["betas"]) == {
"web-fetch-2025-09-10",
"code-execution-2025-05-22",
}
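Taken together, the assertions above pin down a tool-type-to-beta mapping along these lines (a hedged reconstruction from the test expectations, not the actual `langchain_anthropic` source; the next test pins the analogous `mcp-client-2025-11-20` beta for `mcp_servers`):
```python
# Hedged reconstruction from the assertions above; the real lookup
# lives inside langchain_anthropic and may be structured differently.
TOOL_TYPE_TO_BETA = {
    "web_fetch_20250910": "web-fetch-2025-09-10",
    "code_execution_20250522": "code-execution-2025-05-22",
    "memory_20250818": "context-management-2025-06-27",
}
```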
def test_auto_append_betas_for_mcp_servers() -> None:
"""Test that `mcp-client-2025-11-20` beta is automatically appended
for `mcp_servers`."""
model = ChatAnthropic(model=MODEL_NAME) # type: ignore[call-arg]
mcp_servers = [
{
"type": "url",
"url": "https://mcp.example.com/mcp",
"name": "example",
}
]
payload = model._get_request_payload(
"Test query",
mcp_servers=mcp_servers, # type: ignore[arg-type]
)
assert payload["betas"] == ["mcp-client-2025-11-20"]
assert payload["mcp_servers"] == mcp_servers
# Test merging with existing betas
model = ChatAnthropic(
model=MODEL_NAME,
betas=["context-management-2025-06-27"],
)
payload = model._get_request_payload(
"Test query",
mcp_servers=mcp_servers, # type: ignore[arg-type]
)
assert payload["betas"] == [
"context-management-2025-06-27",
"mcp-client-2025-11-20",
]
# Test that it doesn't duplicate if beta already present
model = ChatAnthropic(
model=MODEL_NAME,
betas=["mcp-client-2025-11-20"],
)
payload = model._get_request_payload(
"Test query",
mcp_servers=mcp_servers, # type: ignore[arg-type]
)
assert payload["betas"] == ["mcp-client-2025-11-20"]
# Test with mcp_servers set on model initialization
model = ChatAnthropic(
model=MODEL_NAME,
mcp_servers=mcp_servers, # type: ignore[arg-type]
)
payload = model._get_request_payload("Test query")
assert payload["betas"] == ["mcp-client-2025-11-20"]
assert payload["mcp_servers"] == mcp_servers
# Test with existing betas and mcp_servers on model initialization
model = ChatAnthropic(
model=MODEL_NAME,
betas=["context-management-2025-06-27"],
mcp_servers=mcp_servers, # type: ignore[arg-type]
)
payload = model._get_request_payload("Test query")
assert payload["betas"] == [
"context-management-2025-06-27",
"mcp-client-2025-11-20",
]
# Test that beta is not appended when mcp_servers is None
model = ChatAnthropic(model=MODEL_NAME)
payload = model._get_request_payload("Test query")
assert "betas" not in payload or payload["betas"] is None
# Test combining mcp_servers with tool types that require betas
model = ChatAnthropic(model=MODEL_NAME)
tool = {"type": "web_fetch_20250910", "name": "web_fetch"}
model_with_tools = model.bind_tools([tool])
payload = model_with_tools._get_request_payload( # type: ignore[attr-defined]
"Test query",
mcp_servers=mcp_servers,
**model_with_tools.kwargs, # type: ignore[attr-defined]
)
assert set(payload["betas"]) == {
"web-fetch-2025-09-10",
"mcp-client-2025-11-20",
}
def test_profile() -> None:
model = ChatAnthropic(model="claude-sonnet-4-20250514")
assert model.profile

View File

@@ -1,5 +1,5 @@
version = 1
revision = 3
revision = 2
requires-python = ">=3.10.0, <4.0.0"
resolution-markers = [
"python_full_version >= '3.13' and platform_python_implementation == 'PyPy'",
@@ -495,7 +495,7 @@ wheels = [
[[package]]
name = "langchain"
version = "1.1.0"
version = "1.0.5"
source = { editable = "../../langchain_v1" }
dependencies = [
{ name = "langchain-core" },

View File

@@ -1801,6 +1801,7 @@ class BaseChatOpenAI(BaseChatModel):
Args:
tools: A list of tool definitions to bind to this chat model.
Supports any tool definition handled by
`langchain_core.utils.function_calling.convert_to_openai_tool`.
tool_choice: Which tool to require the model to call. Options are:
@@ -1812,22 +1813,31 @@ class BaseChatOpenAI(BaseChatModel):
- `dict` of the form `{"type": "function", "function": {"name": <<tool_name>>}}`: calls `<<tool_name>>` tool.
- `False` or `None`: no effect, default OpenAI behavior.
strict: If `True`, model output is guaranteed to exactly match the JSON Schema
provided in the tool definition. The input schema will also be validated according to the
provided in the tool definition.
The input schema will also be validated according to the
[supported schemas](https://platform.openai.com/docs/guides/structured-outputs/supported-schemas?api-mode=responses#supported-schemas).
If `False`, input schema will not be validated and model output will not
be validated. If `None`, `strict` argument will not be passed to the model.
be validated.
If `None`, `strict` argument will not be passed to the model.
parallel_tool_calls: Set to `False` to disable parallel tool use.
Defaults to `None` (no specification, which allows parallel tool use).
response_format: Optional schema to format model response. If provided
and the model does not call a tool, the model will generate a
[structured response](https://platform.openai.com/docs/guides/structured-outputs).
response_format: Optional schema to format model response.
If provided and the model **does not** call a tool, the model will
generate a [structured response](https://platform.openai.com/docs/guides/structured-outputs).
kwargs: Any additional parameters are passed directly to `bind`.
""" # noqa: E501
if parallel_tool_calls is not None:
kwargs["parallel_tool_calls"] = parallel_tool_calls
formatted_tools = [
convert_to_openai_tool(tool, strict=strict) for tool in tools
]
tool_names = []
for tool in formatted_tools:
if "function" in tool:
@@ -1836,6 +1846,7 @@ class BaseChatOpenAI(BaseChatModel):
tool_names.append(tool["name"])
else:
pass
if tool_choice:
if isinstance(tool_choice, str):
# tool_choice is a tool/function name
@@ -1865,17 +1876,20 @@ class BaseChatOpenAI(BaseChatModel):
kwargs["tool_choice"] = tool_choice
if response_format:
# response_format present when using agents.create_agent's ProviderStrategy
# ---
# ProviderStrategy converts to OpenAI-style format, uses
# response_format
if (
isinstance(response_format, dict)
and response_format.get("type") == "json_schema"
and "schema" in response_format.get("json_schema", {})
):
# compat with langchain.agents.create_agent response_format, which is
# an approximation of OpenAI format
response_format = cast(dict, response_format["json_schema"]["schema"])
kwargs["response_format"] = _convert_to_openai_response_format(
response_format
response_format, strict=strict
)
return super().bind(tools=formatted_tools, **kwargs)
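For orientation, a minimal usage sketch of the fixed path, assuming `langchain-openai` with this change applied (the tool and schema are hypothetical; per the diff above, `strict` now flows through to `response_format` in addition to the bound tools):
```python
from pydantic import BaseModel

from langchain_openai import ChatOpenAI


class Answer(BaseModel):
    response: str
    explanation: str


def get_weather(location: str) -> str:
    """Get the weather at a location."""  # hypothetical example tool
    return "Sunny"


llm = ChatOpenAI(model="gpt-5-nano").bind_tools(
    [get_weather],
    strict=True,  # previously applied only to tools; now also to response_format
    response_format=Answer,
)
# If the model does not call the tool, its output conforms to `Answer`.
```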
def with_structured_output(

View File

@@ -1147,28 +1147,33 @@ def test_multi_party_conversation() -> None:
assert "Bob" in response.content
class ResponseFormat(BaseModel):
class ResponseFormatPydanticBaseModel(BaseModel):
response: str
explanation: str
class ResponseFormatDict(TypedDict):
class ResponseFormatTypedDict(TypedDict):
response: str
explanation: str
@pytest.mark.parametrize(
"schema", [ResponseFormat, ResponseFormat.model_json_schema(), ResponseFormatDict]
"schema",
[
ResponseFormatPydanticBaseModel,
ResponseFormatPydanticBaseModel.model_json_schema(),
ResponseFormatTypedDict,
],
)
def test_structured_output_and_tools(schema: Any) -> None:
llm = ChatOpenAI(model="gpt-5-nano", verbosity="low").bind_tools(
[GenerateUsername], strict=True, response_format=schema
)
response = llm.invoke("What weighs more, a pound of feathers or a pound of gold?")
if schema == ResponseFormat:
if schema == ResponseFormatPydanticBaseModel:
parsed = response.additional_kwargs["parsed"]
assert isinstance(parsed, ResponseFormat)
assert isinstance(parsed, ResponseFormatPydanticBaseModel)
else:
parsed = json.loads(response.text)
assert isinstance(parsed, dict)
@@ -1190,7 +1195,10 @@ def test_structured_output_and_tools(schema: Any) -> None:
def test_tools_and_structured_output() -> None:
llm = ChatOpenAI(model="gpt-5-nano").with_structured_output(
ResponseFormat, strict=True, include_raw=True, tools=[GenerateUsername]
ResponseFormatPydanticBaseModel,
strict=True,
include_raw=True,
tools=[GenerateUsername],
)
expected_keys = {"raw", "parsing_error", "parsed"}
@@ -1199,7 +1207,7 @@ def test_tools_and_structured_output() -> None:
# Test invoke
## Engage structured output
response = llm.invoke(query)
assert isinstance(response["parsed"], ResponseFormat)
assert isinstance(response["parsed"], ResponseFormatPydanticBaseModel)
## Engage tool calling
response_tools = llm.invoke(tool_query)
ai_msg = response_tools["raw"]

View File

@@ -1,5 +1,5 @@
version = 1
revision = 2
revision = 3
requires-python = ">=3.10.0, <4.0.0"
resolution-markers = [
"python_full_version >= '3.13' and platform_python_implementation == 'PyPy'",
@@ -544,7 +544,7 @@ wheels = [
[[package]]
name = "langchain"
version = "1.0.5"
version = "1.1.0"
source = { editable = "../../langchain_v1" }
dependencies = [
{ name = "langchain-core" },

View File

@@ -230,7 +230,7 @@ class ChatModelIntegrationTests(ChatModelTests):
By default, this is determined by whether the chat model's `bind_tools` method
is overridden. It typically does not need to be overridden on the test class.
```python
```python "Example override"
@property
def has_tool_calling(self) -> bool:
return True
@@ -266,7 +266,7 @@ class ChatModelIntegrationTests(ChatModelTests):
`tool_choice="any"` will force a tool call, and `tool_choice=<tool name>`
will force a call to a specific tool.
```python
```python "Example override"
@property
def has_tool_choice(self) -> bool:
return False
@@ -281,7 +281,7 @@ class ChatModelIntegrationTests(ChatModelTests):
`with_structured_output` method is overridden. If the base implementation is
intended to be used, this method should be overridden.
See docs for [Structured output](https://docs.langchain.com/oss/python/langchain/structured-output).
See: https://docs.langchain.com/oss/python/langchain/structured-output
```python
@property
@@ -291,45 +291,23 @@ class ChatModelIntegrationTests(ChatModelTests):
??? info "`structured_output_kwargs`"
Dict property specifying additional kwargs to pass to
`with_structured_output()` when running structured output tests.
Dict property that can be used to specify additional kwargs for
`with_structured_output`.
Override this to customize how your model generates structured output.
The most common use case is specifying the `method` parameter:
- `'function_calling'`: Uses tool/function calling to enforce the schema.
- `'json_mode'`: Uses the model's JSON mode.
- `'json_schema'`: Uses native JSON schema support (e.g., OpenAI's structured
outputs).
Useful for testing different models.
```python
@property
def structured_output_kwargs(self) -> dict:
return {"method": "json_schema"}
return {"method": "function_calling"}
```
??? info "`supports_json_mode`"
Boolean property indicating whether the chat model supports
`method='json_mode'` in `with_structured_output`.
Boolean property indicating whether the chat model supports JSON mode in
`with_structured_output`.
Defaults to `False`.
JSON mode constrains the model to output valid JSON without enforcing
a specific schema (unlike `'function_calling'` or `'json_schema'` methods).
When using JSON mode, you must prompt the model to output JSON in your
message.
!!! example
```python
structured_llm = llm.with_structured_output(MySchema, method="json_mode")
structured_llm.invoke("... Return the result as JSON.")
```
See docs for [Structured output](https://docs.langchain.com/oss/python/langchain/structured-output).
See: https://docs.langchain.com/oss/python/langchain/structured-output
```python
@property
@@ -363,7 +341,7 @@ class ChatModelIntegrationTests(ChatModelTests):
}
```
See docs for [Multimodality](https://docs.langchain.com/oss/python/langchain/models#multimodal).
See https://docs.langchain.com/oss/python/langchain/models#multimodal
```python
@property
@@ -388,7 +366,7 @@ class ChatModelIntegrationTests(ChatModelTests):
}
```
See docs for [Multimodality](https://docs.langchain.com/oss/python/langchain/models#multimodal).
See https://docs.langchain.com/oss/python/langchain/models#multimodal
```python
@property
@@ -396,12 +374,127 @@ class ChatModelIntegrationTests(ChatModelTests):
return True
```
??? info "`supports_pdf_inputs`"
Boolean property indicating whether the chat model supports PDF inputs.
Defaults to `False`.
If set to `True`, the chat model will be tested by inputting a
`FileContentBlock` with the shape:
```python
{
"type": "file",
"base64": "<base64 file data>",
"mime_type": "application/pdf",
}
```
See https://docs.langchain.com/oss/python/langchain/models#multimodal
```python
@property
def supports_pdf_inputs(self) -> bool:
return True
```
??? info "`supports_audio_inputs`"
Boolean property indicating whether the chat model supports audio inputs.
Defaults to `False`.
If set to `True`, the chat model will be tested by inputting an
`AudioContentBlock` with the shape:
```python
{
"type": "audio",
"base64": "<base64 audio data>",
"mime_type": "audio/wav", # or appropriate MIME type
}
```
See https://docs.langchain.com/oss/python/langchain/models#multimodal
```python
@property
def supports_audio_inputs(self) -> bool:
return True
```
!!! warning
This test downloads audio data from wikimedia.org. You may need to set the
`LANGCHAIN_TESTS_USER_AGENT` environment variable to identify these tests,
e.g.,
```bash
export LANGCHAIN_TESTS_USER_AGENT="CoolBot/0.0 (https://example.org/coolbot/; coolbot@example.org) generic-library/0.0"
```
Refer to the [Wikimedia Foundation User-Agent Policy](https://foundation.wikimedia.org/wiki/Policy:Wikimedia_Foundation_User-Agent_Policy).
??? info "`supports_video_inputs`"
Boolean property indicating whether the chat model supports video inputs.
Defaults to `False`.
No current tests are written for this feature.
??? info "`returns_usage_metadata`"
Boolean property indicating whether the chat model returns usage metadata
on invoke and streaming responses.
Defaults to `True`.
`usage_metadata` is an optional dict attribute on `AIMessage` objects that tracks
input and output tokens.
[See more](https://reference.langchain.com/python/langchain_core/language_models/#langchain_core.messages.ai.UsageMetadata).
```python
@property
def returns_usage_metadata(self) -> bool:
return False
```
Models supporting `usage_metadata` should also return the name of the underlying
model in the `response_metadata` of the `AIMessage`.
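As a hedged illustration of the expected shape (token counts and model name invented):
```python
from langchain_core.messages import AIMessage

msg = AIMessage(
    content="Hello!",
    usage_metadata={"input_tokens": 11, "output_tokens": 5, "total_tokens": 16},
    response_metadata={"model_name": "my-model-001"},  # hypothetical name
)
```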
??? info "`supports_anthropic_inputs`"
Boolean property indicating whether the chat model supports Anthropic-style
inputs.
These inputs might feature "tool use" and "tool result" content blocks, e.g.,
```python
[
{"type": "text", "text": "Hmm let me think about that"},
{
"type": "tool_use",
"input": {"fav_color": "green"},
"id": "foo",
"name": "color_picker",
},
]
```
If set to `True`, the chat model will be tested using content blocks of this
form.
```python
@property
def supports_anthropic_inputs(self) -> bool:
return False
```
??? info "`supports_image_tool_message`"
Boolean property indicating whether the chat model supports a `ToolMessage`
that includes image content, e.g. in the OpenAI Chat Completions format.
Defaults to `False`.
that includes image content, e.g. in the OpenAI Chat Completions format:
```python
ToolMessage(
@@ -438,40 +531,13 @@ class ChatModelIntegrationTests(ChatModelTests):
```python
@property
def supports_image_tool_message(self) -> bool:
return True
```
??? info "`supports_pdf_inputs`"
Boolean property indicating whether the chat model supports PDF inputs.
Defaults to `False`.
If set to `True`, the chat model will be tested by inputting a
`FileContentBlock` with the shape:
```python
{
"type": "file",
"base64": "<base64 file data>",
"mime_type": "application/pdf",
}
```
See docs for [Multimodality](https://docs.langchain.com/oss/python/langchain/models#multimodal).
```python
@property
def supports_pdf_inputs(self) -> bool:
return True
return False
```
??? info "`supports_pdf_tool_message`"
Boolean property indicating whether the chat model supports a `ToolMessage`
that includes PDF content using the LangChain `FileContentBlock` format.
Defaults to `False`.
Boolean property indicating whether the chat model supports a `ToolMessage`
that includes PDF content using the LangChain `FileContentBlock` format:
```python
ToolMessage(
@@ -493,114 +559,16 @@ class ChatModelIntegrationTests(ChatModelTests):
```python
@property
def supports_pdf_tool_message(self) -> bool:
return True
```
??? info "`supports_audio_inputs`"
Boolean property indicating whether the chat model supports audio inputs.
Defaults to `False`.
If set to `True`, the chat model will be tested by inputting an
`AudioContentBlock` with the shape:
```python
{
"type": "audio",
"base64": "<base64 audio data>",
"mime_type": "audio/wav", # or appropriate MIME type
}
```
See docs for [Multimodality](https://docs.langchain.com/oss/python/langchain/models#multimodal).
```python
@property
def supports_audio_inputs(self) -> bool:
return True
```
!!! warning
This test downloads audio data from wikimedia.org. You may need to set the
`LANGCHAIN_TESTS_USER_AGENT` environment variable to identify these tests,
e.g.,
```bash
export LANGCHAIN_TESTS_USER_AGENT="CoolBot/0.0 (https://example.org/coolbot/; coolbot@example.org) generic-library/0.0"
```
Refer to the [Wikimedia Foundation User-Agent Policy](https://foundation.wikimedia.org/wiki/Policy:Wikimedia_Foundation_User-Agent_Policy).
??? info "`supports_video_inputs`"
Boolean property indicating whether the chat model supports video inputs.
Defaults to `False`.
No current tests are written for this feature.
??? info "`returns_usage_metadata`"
Boolean property indicating whether the chat model returns usage metadata
on invoke and streaming responses.
Defaults to `True`.
`usage_metadata` is an optional dict attribute on `AIMessage` objects that tracks
input and output tokens.
[See more](https://reference.langchain.com/python/langchain_core/language_models/#langchain_core.messages.ai.UsageMetadata).
```python
@property
def returns_usage_metadata(self) -> bool:
return False
```
Models supporting `usage_metadata` should also return the name of the underlying
model in the `response_metadata` of the `AIMessage`.
??? info "`supports_anthropic_inputs`"
Boolean property indicating whether the chat model supports Anthropic-style
inputs.
Defaults to `False`.
These inputs might feature "tool use" and "tool result" content blocks, e.g.,
```python
[
{"type": "text", "text": "Hmm let me think about that"},
{
"type": "tool_use",
"input": {"fav_color": "green"},
"id": "foo",
"name": "color_picker",
},
]
```
If set to `True`, the chat model will be tested using content blocks of this
form.
```python
@property
def supports_anthropic_inputs(self) -> bool:
return True
```
??? info "`supported_usage_metadata_details`"
Property controlling what usage metadata details are emitted in both invoke
and stream.
Defaults to `{"invoke": [], "stream": []}`.
`usage_metadata` is an optional dict attribute on `AIMessage` objects that tracks
input and output tokens.
[See more](https://reference.langchain.com/python/langchain_core/language_models/#langchain_core.messages.ai.UsageMetadata).
It includes optional keys `input_token_details` and `output_token_details`
@@ -615,8 +583,6 @@ class ChatModelIntegrationTests(ChatModelTests):
[VCR](https://vcrpy.readthedocs.io/en/latest/) caching of HTTP calls, such
as benchmarking tests.
Defaults to `False`.
To enable these tests, follow these steps:
1. Override the `enable_vcr_tests` property to return `True`:
@@ -743,7 +709,8 @@ class ChatModelIntegrationTests(ChatModelTests):
3. Run tests to generate VCR cassettes.
```bash title="Example"
Example:
```bash
uv run python -m pytest tests/integration_tests/test_chat_models.py::TestMyModel::test_stream_time
```
@@ -758,7 +725,7 @@ class ChatModelIntegrationTests(ChatModelTests):
You can then commit the cassette to your repository. Subsequent test runs
will use the cassette instead of making HTTP calls.
''' # noqa: E501
''' # noqa: E501,D214
@property
def standard_chat_model_params(self) -> dict:
@@ -1979,6 +1946,10 @@ class ChatModelIntegrationTests(ChatModelTests):
??? question "Troubleshooting"
This test uses [a utility function](https://python.langchain.com/api_reference/core/utils/langchain_core.utils.function_calling.tool_example_to_messages.html)
in `langchain_core` to generate a sequence of messages representing
"few-shot" examples.
If this test fails, check that the model can correctly handle this
sequence of messages.
@@ -2024,7 +1995,7 @@ class ChatModelIntegrationTests(ChatModelTests):
??? note "Configuration"
To disable structured output tests, set `has_structured_output` to `False`
To disable structured output tests, set `has_structured_output` to False
in your test class:
```python
@@ -2034,7 +2005,7 @@ class ChatModelIntegrationTests(ChatModelTests):
return False
```
By default, `has_structured_output` is `True` if a model overrides the
By default, `has_structured_output` is True if a model overrides the
`with_structured_output` or `bind_tools` methods.
??? question "Troubleshooting"
@@ -2042,10 +2013,10 @@ class ChatModelIntegrationTests(ChatModelTests):
If this test fails, ensure that the model's `bind_tools` method
properly handles both JSON Schema and Pydantic V2 models.
`langchain_core` implements a [utility function](https://reference.langchain.com/python/langchain_core/utils/?h=convert_to_op#langchain_core.utils.function_calling.convert_to_openai_tool)
`langchain_core` implements a [utility function](https://python.langchain.com/api_reference/core/utils/langchain_core.utils.function_calling.convert_to_openai_tool.html)
that will accommodate most formats.
See [example implementation](https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/base.py)
See [example implementation](https://python.langchain.com/api_reference/_modules/langchain_openai/chat_models/base.html#BaseChatOpenAI.with_structured_output)
of `with_structured_output`.
"""
@@ -2107,7 +2078,7 @@ class ChatModelIntegrationTests(ChatModelTests):
??? note "Configuration"
To disable structured output tests, set `has_structured_output` to `False`
To disable structured output tests, set `has_structured_output` to False
in your test class:
```python
@@ -2117,7 +2088,7 @@ class ChatModelIntegrationTests(ChatModelTests):
return False
```
By default, `has_structured_output` is `True` if a model overrides the
By default, `has_structured_output` is True if a model overrides the
`with_structured_output` or `bind_tools` methods.
??? question "Troubleshooting"
@@ -2125,10 +2096,10 @@ class ChatModelIntegrationTests(ChatModelTests):
If this test fails, ensure that the model's `bind_tools` method
properly handles both JSON Schema and Pydantic V2 models.
`langchain_core` implements a [utility function](https://reference.langchain.com/python/langchain_core/utils/?h=convert_to_op#langchain_core.utils.function_calling.convert_to_openai_tool)
`langchain_core` implements a [utility function](https://python.langchain.com/api_reference/core/utils/langchain_core.utils.function_calling.convert_to_openai_tool.html)
that will accommodate most formats.
See [example implementation](https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/base.py)
See [example implementation](https://python.langchain.com/api_reference/_modules/langchain_openai/chat_models/base.html#BaseChatOpenAI.with_structured_output)
of `with_structured_output`.
"""
@@ -2189,7 +2160,7 @@ class ChatModelIntegrationTests(ChatModelTests):
??? note "Configuration"
To disable structured output tests, set `has_structured_output` to `False`
To disable structured output tests, set `has_structured_output` to False
in your test class:
```python
@@ -2199,7 +2170,7 @@ class ChatModelIntegrationTests(ChatModelTests):
return False
```
By default, `has_structured_output` is `True` if a model overrides the
By default, `has_structured_output` is True if a model overrides the
`with_structured_output` or `bind_tools` methods.
??? question "Troubleshooting"
@@ -2207,10 +2178,10 @@ class ChatModelIntegrationTests(ChatModelTests):
If this test fails, ensure that the model's `bind_tools` method
properly handles both JSON Schema and Pydantic V1 models.
`langchain_core` implements a [utility function](https://reference.langchain.com/python/langchain_core/utils/?h=convert_to_op#langchain_core.utils.function_calling.convert_to_openai_tool)
`langchain_core` implements [a utility function](https://python.langchain.com/api_reference/core/utils/langchain_core.utils.function_calling.convert_to_openai_tool.html)
that will accommodate most formats.
See [example implementation](https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/base.py)
See [example implementation](https://python.langchain.com/api_reference/_modules/langchain_openai/chat_models/base.html#BaseChatOpenAI.with_structured_output)
of `with_structured_output`.
"""
@@ -2255,7 +2226,7 @@ class ChatModelIntegrationTests(ChatModelTests):
??? note "Configuration"
To disable structured output tests, set `has_structured_output` to `False`
To disable structured output tests, set `has_structured_output` to False
in your test class:
```python
@@ -2273,10 +2244,10 @@ class ChatModelIntegrationTests(ChatModelTests):
If this test fails, ensure that the model's `bind_tools` method
properly handles Pydantic V2 models with optional parameters.
`langchain_core` implements a [utility function](https://reference.langchain.com/python/langchain_core/utils/?h=convert_to_op#langchain_core.utils.function_calling.convert_to_openai_tool)
`langchain_core` implements [a utility function](https://python.langchain.com/api_reference/core/utils/langchain_core.utils.function_calling.convert_to_openai_tool.html)
that will accommodate most formats.
See [example implementation](https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/base.py)
See [example implementation](https://python.langchain.com/api_reference/_modules/langchain_openai/chat_models/base.html#BaseChatOpenAI.with_structured_output)
of `with_structured_output`.
"""
@@ -2320,7 +2291,7 @@ class ChatModelIntegrationTests(ChatModelTests):
assert isinstance(result, dict)
def test_json_mode(self, model: BaseChatModel) -> None:
"""Test [structured output]((https://docs.langchain.com/oss/python/langchain/structured-output)) via JSON mode.
"""Test structured output via [JSON mode.](https://python.langchain.com/docs/concepts/structured_outputs/#json-mode).
This test is optional and should be skipped if the model does not support
the JSON mode feature (see configuration below).
@@ -2341,7 +2312,7 @@ class ChatModelIntegrationTests(ChatModelTests):
See example implementation of `with_structured_output` here: https://python.langchain.com/api_reference/_modules/langchain_openai/chat_models/base.html#BaseChatOpenAI.with_structured_output
""" # noqa: E501
"""
if not self.supports_json_mode:
pytest.skip("Test requires json mode support.")
@@ -2590,7 +2561,7 @@ class ChatModelIntegrationTests(ChatModelTests):
]
```
See docs for [Multimodality](https://docs.langchain.com/oss/python/langchain/models#multimodal).
See https://python.langchain.com/docs/concepts/multimodality/
If the property `supports_image_urls` is set to `True`, the test will also
check that we can process content blocks of the form:
@@ -2710,7 +2681,7 @@ class ChatModelIntegrationTests(ChatModelTests):
```
This test can be skipped by setting the `supports_image_tool_message` property
to `False` (see configuration below).
to False (see configuration below).
??? note "Configuration"
@@ -2729,7 +2700,7 @@ class ChatModelIntegrationTests(ChatModelTests):
If this test fails, check that the model can correctly handle messages
with image content blocks in `ToolMessage` objects, including base64-encoded
images. Otherwise, set the `supports_image_tool_message` property to
`False`.
False.
"""
if not self.supports_image_tool_message:
@@ -2810,7 +2781,7 @@ class ChatModelIntegrationTests(ChatModelTests):
```
This test can be skipped by setting the `supports_pdf_tool_message` property
to `False` (see configuration below).
to False (see configuration below).
??? note "Configuration"
@@ -2829,7 +2800,7 @@ class ChatModelIntegrationTests(ChatModelTests):
If this test fails, check that the model can correctly handle messages
with PDF content blocks in `ToolMessage` objects, specifically
base64-encoded PDFs. Otherwise, set the `supports_pdf_tool_message` property
to `False`.
to False.
"""
if not self.supports_pdf_tool_message:
pytest.skip("Model does not support PDF tool message.")

View File

@@ -15,7 +15,7 @@ class ToolsIntegrationTests(ToolsTests):
If invoked with a `ToolCall`, the tool should return a valid `ToolMessage`
content.
If you have followed the [custom tool guide](https://docs.langchain.com/oss/python/contributing/implement-langchain#tools),
If you have followed the [custom tool guide](https://python.langchain.com/docs/how_to/custom_tools/),
this test should always pass because `ToolCall` inputs are handled by the
`langchain_core.tools.BaseTool` class.
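A hedged sketch of that contract (the tool and call id are made up; `tool_call`-shaped inputs are routed by `BaseTool` and answered with a `ToolMessage`):
```python
from langchain_core.messages import ToolMessage
from langchain_core.tools import tool


@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


# Invoking with a ToolCall-shaped dict yields a ToolMessage, not a bare int.
result = add.invoke(
    {"type": "tool_call", "name": "add", "args": {"a": 1, "b": 2}, "id": "call_1"}
)
assert isinstance(result, ToolMessage)  # content == "3"
```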

View File

@@ -118,27 +118,7 @@ class ChatModelTests(BaseStandardTests):
@property
def structured_output_kwargs(self) -> dict:
"""Additional kwargs to pass to `with_structured_output()` in tests.
Override this property to customize how structured output is generated
for your model. The most common use case is specifying the `method`
parameter, which controls the mechanism used to enforce structured output:
- `'function_calling'`: Uses tool/function calling to enforce the schema.
- `'json_mode'`: Uses the model's JSON mode.
- `'json_schema'`: Uses native JSON schema support (e.g., OpenAI's
structured outputs).
Returns:
A dict of kwargs passed to `with_structured_output()`.
Example:
```python
@property
def structured_output_kwargs(self) -> dict:
return {"method": "json_schema"}
```
"""
"""If specified, additional kwargs for `with_structured_output`."""
return {}
@property
@@ -316,7 +296,7 @@ class ChatModelUnitTests(ChatModelTests):
By default, this is determined by whether the chat model's `bind_tools` method
is overridden. It typically does not need to be overridden on the test class.
```python
```python "Example override"
@property
def has_tool_calling(self) -> bool:
return True
@@ -351,7 +331,7 @@ class ChatModelUnitTests(ChatModelTests):
`tool_choice="any"` will force a tool call, and `tool_choice=<tool name>`
will force a call to a specific tool.
```python
```python "Example override"
@property
def has_tool_choice(self) -> bool:
return False
@@ -366,7 +346,7 @@ class ChatModelUnitTests(ChatModelTests):
`with_structured_output` or `bind_tools` methods. If the base
implementations are intended to be used, this method should be overridden.
See docs for [Structured output](https://docs.langchain.com/oss/python/langchain/structured-output).
See: https://docs.langchain.com/oss/python/langchain/structured-output
```python
@property
@@ -376,44 +356,23 @@ class ChatModelUnitTests(ChatModelTests):
??? info "`structured_output_kwargs`"
Dict property specifying additional kwargs to pass to
`with_structured_output()` when running structured output tests.
Dict property that can be used to specify additional kwargs for
`with_structured_output`.
Override this to customize how your model generates structured output.
The most common use case is specifying the `method` parameter:
- `'function_calling'`: Uses tool/function calling to enforce the schema.
- `'json_mode'`: Uses the model's JSON mode.
- `'json_schema'`: Uses native JSON schema support (e.g., OpenAI's structured
outputs).
Useful for testing different models.
```python
@property
def structured_output_kwargs(self) -> dict:
return {"method": "json_schema"}
return {"method": "function_calling"}
```
??? info "`supports_json_mode`"
Boolean property indicating whether the chat model supports
`method='json_mode'` in `with_structured_output`.
Boolean property indicating whether the chat model supports JSON mode in
`with_structured_output`.
JSON mode constrains the model to output valid JSON without enforcing
a specific schema (unlike `'function_calling'` or `'json_schema'` methods).
When using JSON mode, you must prompt the model to output JSON in your
message.
Example:
```python
structured_llm = llm.with_structured_output(MySchema, method="json_mode")
structured_llm.invoke("... Return the result as JSON.")
```
See docs for [Structured output](https://docs.langchain.com/oss/python/langchain/structured-output).
Defaults to `False`.
See: https://docs.langchain.com/oss/python/langchain/structured-output
```python
@property
@@ -447,7 +406,7 @@ class ChatModelUnitTests(ChatModelTests):
}
```
See docs for [Multimodality](https://docs.langchain.com/oss/python/langchain/models#multimodal).
See https://docs.langchain.com/oss/python/langchain/models#multimodal
```python
@property
@@ -472,7 +431,7 @@ class ChatModelUnitTests(ChatModelTests):
}
```
See docs for [Multimodality](https://docs.langchain.com/oss/python/langchain/models#multimodal).
See https://docs.langchain.com/oss/python/langchain/models#multimodal
```python
@property
@@ -480,54 +439,6 @@ class ChatModelUnitTests(ChatModelTests):
return True
```
??? info "`supports_image_tool_message`"
Boolean property indicating whether the chat model supports a `ToolMessage`
that includes image content, e.g. in the OpenAI Chat Completions format.
Defaults to `False`.
```python
ToolMessage(
content=[
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
},
],
tool_call_id="1",
name="random_image",
)
```
(OpenAI Chat Completions format), as well as LangChain's `ImageContentBlock`
format:
```python
ToolMessage(
content=[
{
"type": "image",
"base64": image_data,
"mime_type": "image/jpeg",
},
],
tool_call_id="1",
name="random_image",
)
```
(standard format).
If set to `True`, the chat model will be tested with message sequences that
include `ToolMessage` objects of this form.
```python
@property
def supports_image_tool_message(self) -> bool:
return True
```
??? info "`supports_pdf_inputs`"
Boolean property indicating whether the chat model supports PDF inputs.
@@ -545,7 +456,7 @@ class ChatModelUnitTests(ChatModelTests):
}
```
See docs for [Multimodality](https://docs.langchain.com/oss/python/langchain/models#multimodal).
See https://docs.langchain.com/oss/python/langchain/models#multimodal
```python
@property
@@ -553,38 +464,6 @@ class ChatModelUnitTests(ChatModelTests):
return True
```
??? info "`supports_pdf_tool_message`"
Boolean property indicating whether the chat model supports a `ToolMessage`
that includes PDF content using the LangChain `FileContentBlock` format.
Defaults to `False`.
```python
ToolMessage(
content=[
{
"type": "file",
"base64": pdf_data,
"mime_type": "application/pdf",
},
],
tool_call_id="1",
name="random_pdf",
)
```
using LangChain's `FileContentBlock` format.
If set to `True`, the chat model will be tested with message sequences that
include `ToolMessage` objects of this form.
```python
@property
def supports_pdf_tool_message(self) -> bool:
return True
```
??? info "`supports_audio_inputs`"
Boolean property indicating whether the chat model supports audio inputs.
@@ -602,7 +481,7 @@ class ChatModelUnitTests(ChatModelTests):
}
```
See docs for [Multimodality](https://docs.langchain.com/oss/python/langchain/models#multimodal).
See https://docs.langchain.com/oss/python/langchain/models#multimodal
```python
@property
@@ -678,6 +557,82 @@ class ChatModelUnitTests(ChatModelTests):
return False
```
??? info "`supports_image_tool_message`"
Boolean property indicating whether the chat model supports `ToolMessage`
objects that include image content, e.g.,
```python
ToolMessage(
content=[
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
},
],
tool_call_id="1",
name="random_image",
)
```
(OpenAI Chat Completions format), as well as LangChain's `ImageContentBlock`
format:
```python
ToolMessage(
content=[
{
"type": "image",
"base64": image_data,
"mime_type": "image/jpeg",
},
],
tool_call_id="1",
name="random_image",
)
```
(standard format).
If set to `True`, the chat model will be tested with message sequences that
include `ToolMessage` objects of this form.
```python
@property
def supports_image_tool_message(self) -> bool:
return False
```
??? info "`supports_pdf_tool_message`"
Boolean property indicating whether the chat model supports `ToolMessage`
objects that include PDF content, i.e.,
```python
ToolMessage(
content=[
{
"type": "file",
"base64": pdf_data,
"mime_type": "application/pdf",
},
],
tool_call_id="1",
name="random_pdf",
)
```
using LangChain's `FileContentBlock` format.
If set to `True`, the chat model will be tested with message sequences that
include `ToolMessage` objects of this form.
```python
@property
def supports_pdf_tool_message(self) -> bool:
return False
```
??? info "`supported_usage_metadata_details`"
Property controlling what usage metadata details are emitted in both `invoke`
@@ -685,7 +640,6 @@ class ChatModelUnitTests(ChatModelTests):
`usage_metadata` is an optional dict attribute on `AIMessage` objects that tracks
input and output tokens.
[See more](https://reference.langchain.com/python/langchain_core/language_models/#langchain_core.messages.ai.UsageMetadata).
It includes optional keys `input_token_details` and `output_token_details`
@@ -826,7 +780,8 @@ class ChatModelUnitTests(ChatModelTests):
3. Run tests to generate VCR cassettes.
```bash title="Example"
Example:
```bash
uv run python -m pytest tests/integration_tests/test_chat_models.py::TestMyModel::test_stream_time
```
@@ -901,7 +856,7 @@ class ChatModelUnitTests(ChatModelTests):
1. `chat_model_params` is specified and the model can be initialized
from those params;
2. The model accommodates
[standard parameters](https://docs.langchain.com/oss/python/langchain/models#parameters).
[standard parameters](https://python.langchain.com/docs/concepts/chat_models/#standard-parameters).
"""
model = self.chat_model_class(
@@ -974,13 +929,10 @@ class ChatModelUnitTests(ChatModelTests):
??? question "Troubleshooting"
If this test fails, ensure that the model's `bind_tools` method
properly handles Pydantic V2 models.
properly handles Pydantic V2 models. `langchain_core` implements
a utility function that will accommodate most formats: https://python.langchain.com/api_reference/core/utils/langchain_core.utils.function_calling.convert_to_openai_tool.html
`langchain_core` implements a [utility function](https://reference.langchain.com/python/langchain_core/utils/?h=convert_to_op#langchain_core.utils.function_calling.convert_to_openai_tool)
that will accommodate most formats.
See [example implementation](https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/base.py)
of `bind_tools`.
See example implementation of `bind_tools` here: https://python.langchain.com/api_reference/_modules/langchain_openai/chat_models/base.html#BaseChatOpenAI.bind_tools
"""
if not self.has_tool_calling:
return
@@ -1019,13 +971,11 @@ class ChatModelUnitTests(ChatModelTests):
??? question "Troubleshooting"
If this test fails, ensure that the model's `bind_tools` method
properly handles Pydantic V2 models.
properly handles Pydantic V2 models. `langchain_core` implements
a utility function that will accommodate most formats: https://python.langchain.com/api_reference/core/utils/langchain_core.utils.function_calling.convert_to_openai_tool.html
`langchain_core` implements a [utility function](https://reference.langchain.com/python/langchain_core/utils/?h=convert_to_op#langchain_core.utils.function_calling.convert_to_openai_tool)
that will accommodate most formats.
See example implementation of `with_structured_output` here: https://python.langchain.com/api_reference/_modules/langchain_openai/chat_models/base.html#BaseChatOpenAI.with_structured_output
See [example implementation](https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/base.py)
of `with_structured_output`.
"""
if not self.has_structured_output:
return
@@ -1045,7 +995,7 @@ class ChatModelUnitTests(ChatModelTests):
??? question "Troubleshooting"
If this test fails, check that the model accommodates [standard parameters](https://docs.langchain.com/oss/python/langchain/models#parameters).
If this test fails, check that the model accommodates [standard parameters](https://python.langchain.com/docs/concepts/chat_models/#standard-parameters).
Check also that the model class is named according to convention
(e.g., `ChatProviderName`).

View File

@@ -104,7 +104,8 @@ class ToolsUnitTests(ToolsTests):
If this fails, add an `args_schema` to your tool.
See [this guide](https://docs.langchain.com/oss/python/contributing/implement-langchain#tools)
See
[this guide](https://python.langchain.com/docs/how_to/custom_tools/#subclass-basetool)
and see how `CalculatorInput` is configured in the
`CustomCalculatorTool.args_schema` attribute
"""

16
pyproject.toml Normal file
View File

@@ -0,0 +1,16 @@
[project]
authors = []
license = { text = "MIT" }
requires-python = ">=3.10.0,<4.0.0"
dependencies = []
name = "langchain-monorepo"
version = "0.0.1"
description = "LangChain monorepo"
readme = "README.md"
[project.urls]
repository = "https://www.github.com/langchain-ai/langchain"
[dependency-groups]
dev = []

13
uv.lock generated Normal file
View File

@@ -0,0 +1,13 @@
version = 1
revision = 3
requires-python = ">=3.10.0, <4.0.0"
[[package]]
name = "langchain-monorepo"
version = "0.0.1"
source = { virtual = "." }
[package.metadata]
[package.metadata.requires-dev]
dev = []