ToolSchema as root schema cache; replace TypedDict conversion with TypeAdapter (#37103)
Builds on #37101.

---

Two changes in one commit, both motivated by the same principle: a single, clean owner for everything schema-related on a tool.

## `ToolSchema`: the root cache

Previously `BaseTool` had three independent `cached_property` slots (`tool_call_schema`, `args`, `_approximate_schema_chars`) that all computed overlapping data and each needed individual invalidation. This PR replaces them with a single `ToolSchema` dataclass and one `tool_schema` cached property that is the sole root:

```python
@dataclass
class ToolSchema:
    name: str
    description: str
    validator: TypeAdapter  # validates tool call inputs
    json_schema: dict       # sent to LLMs
    pydantic_schema: Any    # model class or dict (backward compat)
    args: dict              # properties from json_schema
    approximate_chars: int  # precomputed for token estimation
```

`BaseTool.tool_call_schema`, `BaseTool.args`, and `BaseTool._approximate_schema_chars` are now plain `@property` delegates to `tool_schema`. `__setattr__` only needs to pop one key on mutation instead of four. The `is`-identity caching tests still pass because all delegates read from the same cached `ToolSchema` object.

`ToolSchema` is exported from `langchain_core.tools` and can be used directly by integrations that want to consume both the validator and the schema without going through `BaseTool`.

## `TypeAdapter`-based TypedDict conversion

`_convert_any_typed_dicts_to_pydantic` was a ~70-line recursive function that converted TypedDicts to throwaway pydantic v1 model classes just to call `.schema()`. Replaced with:

```python
adapter = TypeAdapter(typed_dict)
schema = adapter.json_schema()
```

Pydantic v2's `TypeAdapter` handles everything the old code did (nested TypedDicts, generic containers, `Annotated` metadata) and also correctly handles `NotRequired` and `Required` annotations, which the v1 path did not.
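The `NotRequired` behavior can be checked directly against pydantic, outside the library. A standalone sketch (`SearchArgs` is an illustrative name, not part of the PR):

```python
from typing_extensions import NotRequired, TypedDict

from pydantic import TypeAdapter


class SearchArgs(TypedDict):
    """Arguments for a hypothetical search tool."""

    query: str
    limit: NotRequired[int]


adapter = TypeAdapter(SearchArgs)
schema = adapter.json_schema()

# NotRequired keys are excluded from the "required" list in the JSON schema.
assert schema["required"] == ["query"]
assert set(schema["properties"]) == {"query", "limit"}

# The same adapter also validates tool-call inputs.
assert adapter.validate_python({"query": "cats"}) == {"query": "cats"}
```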
A new test `test__convert_typed_dict_not_required` verifies this:

```python
class Tool(TypedDict):
    required_field: str
    optional_field: NotRequired[int]


result = _convert_typed_dict_to_openai_function(Tool)
assert "required_field" in result["parameters"]["required"]
assert "optional_field" not in result["parameters"]["required"]
```

Field descriptions from Google-style docstrings and `Annotated[T, ..., "description"]` metadata are preserved by post-processing the schema after generation.

The old `test__convert_typed_dict_to_openai_function_fail` test expected a `TypeError` for `MutableSet` because pydantic v1 didn't support it. pydantic v2 does; the test is updated to verify successful conversion instead.

## What stays unchanged

- All public `BaseTool` API signatures: `tool_call_schema`, `args`, and `get_input_schema()` all have the same signatures and return types as before.
- `pydantic.v1` acceptance for `args_schema`: tools with v1 model schemas continue to work.

> AI-agent assisted contribution.

---------

Co-authored-by: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
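For reference, the single-root caching pattern described above can be sketched in isolation. This is a minimal, hypothetical `MiniTool`/`Schema` pair illustrating the shape of the change, not the real `BaseTool`/`ToolSchema` implementation:

```python
from dataclasses import dataclass
from functools import cached_property


@dataclass(frozen=True)
class Schema:
    name: str
    args: dict


class MiniTool:
    def __init__(self, name: str) -> None:
        self.name = name

    @cached_property
    def schema(self) -> Schema:
        # The expensive work happens once, at the single root cache.
        return Schema(name=self.name, args={"query": {"type": "string"}})

    @property
    def args(self) -> dict:
        # Plain delegate: always reads from the one cached Schema object.
        return self.schema.args

    def __setattr__(self, key: str, value: object) -> None:
        # Mutation invalidates exactly one cache key instead of several.
        self.__dict__.pop("schema", None)
        super().__setattr__(key, value)


tool = MiniTool("search")
assert tool.args is tool.schema.args  # delegates share one cached object
tool.name = "lookup"                  # mutation pops the root cache
assert tool.schema.name == "lookup"   # next access recomputes fresh state
```

Because `cached_property` writes straight into the instance `__dict__`, the custom `__setattr__` fires only on user mutation, and all delegates keep satisfying `is`-identity checks against the one cached object.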
The agent engineering platform.
LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.
Note
Looking for the JS/TS library? Check out LangChain.js.
Quickstart
```bash
pip install langchain
# or
uv add langchain
```
```python
from langchain.chat_models import init_chat_model

model = init_chat_model("openai:gpt-5.4")
result = model.invoke("Hello, world!")
```
If you're looking for more advanced customization or agent orchestration, check out LangGraph, our framework for building controllable agent workflows.
Tip
For developing, debugging, and deploying AI agents and LLM applications, see LangSmith.
LangChain ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
- Deep Agents — Build agents that can plan, use subagents, and leverage file systems for complex tasks
- LangGraph — Build agents that can reliably handle complex tasks with our low-level agent orchestration framework
- Integrations — Chat & embedding models, tools & toolkits, and more
- LangSmith — Agent evals, observability, and debugging for LLM apps
- LangSmith Deployment — Deploy and scale agents with a purpose-built platform for long-running, stateful workflows
Why use LangChain?
LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.
- Real-time data augmentation — Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more
- Model interoperability — Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly — LangChain's abstractions keep you moving without losing momentum
- Rapid prototyping — Quickly build and iterate on LLM applications with LangChain's modular, component-based architecture. Test different approaches and workflows without rebuilding from scratch, accelerating your development cycle
- Production-ready features — Deploy reliable applications with built-in support for monitoring, evaluation, and debugging through integrations like LangSmith. Scale with confidence using battle-tested patterns and best practices
- Vibrant community and ecosystem — Leverage a rich ecosystem of integrations, templates, and community-contributed components. Benefit from continuous improvements and stay up-to-date with the latest AI developments through an active open-source community
- Flexible abstraction layers — Work at the level of abstraction that suits your needs — from high-level chains for quick starts to low-level components for fine-grained control. LangChain grows with your application's complexity
Documentation
- docs.langchain.com – Comprehensive documentation, including conceptual overviews and guides
- reference.langchain.com/python – API reference docs for LangChain packages
- Chat LangChain – Chat with the LangChain documentation and get answers to your questions
Discussions: Visit the LangChain Forum to connect with the community and share all of your technical questions, ideas, and feedback.
Additional resources
- Contributing Guide – Learn how to contribute to LangChain projects and find good first issues.
- Code of Conduct – Our community guidelines and standards for participation.
- LangChain Academy – Comprehensive, free courses on LangChain libraries and products, made by the LangChain team.