openai[minor]: release 0.3 (#29100)
## Goal

Solve the following problems with `langchain-openai`:

- Structured output with `o1` [breaks out of the box](https://langchain.slack.com/archives/C050X0VTN56/p1735232400232099).
- `with_structured_output` by default does not use OpenAI's [structured output feature](https://platform.openai.com/docs/guides/structured-outputs).
- We override API defaults for temperature and other parameters.

## Breaking changes:

- Default method for structured output is changing to OpenAI's dedicated [structured output feature](https://platform.openai.com/docs/guides/structured-outputs). For schemas specified via TypedDict or JSON schema, strict schema validation is disabled by default but can be enabled by specifying `strict=True`.
  - To recover previous default, pass `method="function_calling"` into `with_structured_output`.
- Models that don't support `method="json_schema"` (e.g., `gpt-4` and `gpt-3.5-turbo`, currently the default model for ChatOpenAI) will raise an error unless `method` is explicitly specified.
  - To recover previous default, pass `method="function_calling"` into `with_structured_output`.
- Schemas specified via Pydantic `BaseModel` that have fields with non-null defaults or metadata (like min/max constraints) will raise an error.
  - To recover previous default, pass `method="function_calling"` into `with_structured_output`.
- `strict` now defaults to False for `method="json_schema"` when schemas are specified via TypedDict or JSON schema.
  - To recover previous behavior, use `with_structured_output(schema, strict=True)`.
- Schemas specified via Pydantic V1 will raise a warning (and use `method="function_calling"`) unless `method` is explicitly specified.
  - To remove the warning, pass `method="function_calling"` into `with_structured_output`.
- Streaming with the default structured output method / Pydantic schema no longer generates intermediate streamed chunks.
  - To recover previous behavior, pass `method="function_calling"` into `with_structured_output`.
- We no longer override the default temperature (was 0.7 in LangChain; now follows OpenAI's default, currently 1.0).
  - To recover previous behavior, initialize `ChatOpenAI` or `AzureChatOpenAI` with `temperature=0.7`.
- Note: conceptually there is a difference between forcing a tool call and forcing a response format. Tool calls may have more concise arguments vs. generating content adhering to a schema. Prompts may need to be adjusted to recover desired behavior. See the sketch below for how callers can pin the previous defaults.

---------

Co-authored-by: Jacob Lee <jacoblee93@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
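For illustration, a minimal sketch (not part of the PR) of how downstream code might pin the pre-0.3 defaults described above. The `Joke` schema and `gpt-4o-mini` model name are placeholders:

```python
# Sketch: recovering pre-0.3 defaults after upgrading langchain-openai.
# The Joke schema and model name are illustrative placeholders.
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI


class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")


# LangChain no longer overrides temperature; pass 0.7 explicitly to match
# the previous LangChain default.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

# Recover the previous structured-output behavior by forcing tool calling
# instead of the new default (OpenAI's native structured output feature).
structured_llm = llm.with_structured_output(Joke, method="function_calling")

result = structured_llm.invoke("Tell me a joke about cats")
```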
Excerpt of the diff to the standard chat-model tests (`ChatModelTests` / `ChatModelUnitTests`):
@@ -132,6 +132,11 @@ class ChatModelTests(BaseStandardTests):
            is not BaseChatModel.with_structured_output
        )

    @property
    def structured_output_kwargs(self) -> dict:
        """If specified, additional kwargs for with_structured_output."""
        return {}

    @property
    def supports_json_mode(self) -> bool:
        """(bool) whether the chat model supports JSON mode."""
@@ -299,6 +304,19 @@ class ChatModelUnitTests(ChatModelTests):
            def has_structured_output(self) -> bool:
                return True

    .. dropdown:: structured_output_kwargs

        Dict property that can be used to specify additional kwargs for
        ``with_structured_output``. Useful for testing different models.

        Example:

        .. code-block:: python

            @property
            def structured_output_kwargs(self) -> dict:
                return {"method": "function_calling"}

    .. dropdown:: supports_json_mode

        Boolean property indicating whether the chat model supports JSON mode in
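The new `structured_output_kwargs` property lets an integration's standard-test subclass control how structured output is exercised. A minimal sketch under assumptions: `MyChatModel` and the `my_pkg` import are hypothetical, while `ChatModelUnitTests` and the `chat_model_class` property come from `langchain_tests`:

```python
# Sketch: an integration's standard unit tests overriding the new
# structured_output_kwargs property. MyChatModel / my_pkg are hypothetical.
from langchain_tests.unit_tests import ChatModelUnitTests

from my_pkg import MyChatModel  # hypothetical integration package


class TestMyChatModelUnit(ChatModelUnitTests):
    @property
    def chat_model_class(self) -> type[MyChatModel]:
        return MyChatModel

    @property
    def structured_output_kwargs(self) -> dict:
        # Force the legacy tool-calling path in structured output tests.
        return {"method": "function_calling"}
```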