# Description

This PR fixes a bug in `_recursive_set_additional_properties_false`, used by `function_calling.convert_to_openai_function`. Previously, schemas that already had `additionalProperties: True` were not correctly overridden when strict validation was expected, which could produce invalid OpenAI function schemas.

The updated implementation ensures that:

- Any schema with `additionalProperties` already set is now forced to `False` under strict mode.
- Recursive traversal of `properties`, `items`, and `anyOf` is preserved.
- The function signature remains unchanged for backward compatibility.

# Issue

When using tool calling with OpenAI structured output in strict mode (`strict=True`), a `400: "Invalid schema for response_format XXXXX 'additionalProperties' is required to be supplied and to be false"` error is raised for any parameter that contains a dict type. OpenAI requires `additionalProperties` to be set to `False`.

Several PRs have attempted to resolve this issue:

- PR #25169 introduced `_recursive_set_additional_properties_false` to recursively set `additionalProperties: False`.
- PR #26287 fixed handling of tools with empty parameters for OpenAI function generation.
- PR #30971 added support for Union type arguments in strict mode of OpenAI function calling / structured output.

Despite these improvements, since Pydantic 2.11 the generated JSON schema always includes `additionalProperties: True` for arbitrary dictionary types such as `dict` or `Any` (https://pydantic.dev/articles/pydantic-v2-11-release#changes). Schemas that already had `additionalProperties: True` in such cases were not being overridden, which this PR addresses so that strict mode behaves correctly in all cases.

# Dependencies

No changes.

---------

Co-authored-by: Zhong, Yu <yzhong@freewheel.com>
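For context, below is a minimal sketch of the recursive override described above. The helper name `set_additional_properties_false` and the `ToolArgs` model are illustrative only; the actual helper merged here is `_recursive_set_additional_properties_false`, and its exact traversal logic may differ.

```python
from typing import Any

from pydantic import BaseModel


class ToolArgs(BaseModel):
    """Hypothetical tool arguments with an arbitrary dict field."""

    name: str
    metadata: dict[str, Any]


def set_additional_properties_false(schema: Any) -> Any:
    """Recursively force ``additionalProperties: False`` on object schemas (sketch)."""
    if isinstance(schema, dict):
        # Override any existing value, including ``True`` emitted by
        # Pydantic 2.11+ for arbitrary ``dict`` / ``Any`` fields.
        if schema.get("type") == "object" or "additionalProperties" in schema:
            schema["additionalProperties"] = False
        # Recurse into nested schemas: properties, items, anyOf.
        for sub in schema.get("properties", {}).values():
            set_additional_properties_false(sub)
        set_additional_properties_false(schema.get("items"))
        for sub in schema.get("anyOf", []):
            set_additional_properties_false(sub)
    return schema


schema = ToolArgs.model_json_schema()
# With Pydantic >= 2.11, the ``metadata`` sub-schema carries
# ``additionalProperties: True``; OpenAI strict mode requires ``False`` everywhere.
strict_schema = set_additional_properties_false(schema)
```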
# 🦜🍎️ LangChain Core

## Quick Install

```bash
pip install langchain-core
```
## What is it?

LangChain Core contains the base abstractions that power the LangChain ecosystem.

These abstractions are designed to be as modular and simple as possible. The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.

For full documentation see the API reference.
## ⛰️ Why build on top of LangChain Core?

The LangChain ecosystem is built on top of `langchain-core`. Some of the benefits:

- Modularity: We've designed Core around abstractions that are independent of each other, and not tied to any specific model provider.
- Stability: We are committed to a stable versioning scheme, and will communicate any breaking changes with advance notice and version bumps.
- Battle-tested: Core components have the largest install base in the LLM ecosystem, and are used in production by many companies.
## 1️⃣ Core Interface: Runnables

The concept of a `Runnable` is central to LangChain Core – it is the interface that most LangChain Core components implement, giving them:

- A common invocation interface (`invoke()`, `batch()`, `stream()`, etc.)
- Built-in utilities for retries, fallbacks, schemas and runtime configurability
- Easy deployment with LangGraph

For more, check out the `Runnable` docs. Examples of components that implement the interface include: Chat Models, Tools, Retrievers, and Output Parsers. A minimal example of the shared interface is shown below.
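The sketch below uses `RunnableLambda`, which wraps a plain Python function as a `Runnable`, to illustrate the common invocation interface; the functions and values are arbitrary examples.

```python
from langchain_core.runnables import RunnableLambda

# Wrap plain Python functions as Runnables and compose them with `|`.
add_one = RunnableLambda(lambda x: x + 1)
double = RunnableLambda(lambda x: x * 2)
chain = add_one | double

# The same invocation interface works across all Runnables.
print(chain.invoke(3))         # 8
print(chain.batch([1, 2, 3]))  # [4, 6, 8]
for chunk in chain.stream(3):
    print(chunk)               # 8 (a single chunk for this simple chain)
```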
## 📕 Releases & Versioning

As `langchain-core` contains the base abstractions and runtime for the whole LangChain ecosystem, we will communicate any breaking changes with advance notice and version bumps. The exception to this is anything in `langchain_core.beta`. The reason for `langchain_core.beta` is that, given the rate of change of the field, being able to move quickly is still a priority, and this module is our attempt to do so.

Minor version increases will occur for:

- Breaking changes for any public interfaces NOT in `langchain_core.beta`

Patch version increases will occur for:

- Bug fixes
- New features
- Any changes to private interfaces
- Any changes to `langchain_core.beta`
## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see the Contributing Guide.