Does not yet support partial images during generation. Before adding that, I'd like to figure out how to specify the aggregation logic without requiring changes in core.
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Scheduled testing started failing today because the Responses API
stopped raising `BadRequestError` for a schema that was previously
invalid when `strict=True`.
Although docs still say that [some type-specific keywords are not yet
supported](https://platform.openai.com/docs/guides/structured-outputs#some-type-specific-keywords-are-not-yet-supported)
(including `minimum` and `maximum` for numbers), the code below appears to
run and correctly respect the constraints:
```python
import json

import openai

client = openai.OpenAI()

maximums = list(range(1, 11))
arg_values = []
for maximum in maximums:
    tool = {
        "type": "function",
        "name": "magic_function",
        "description": "Applies a magic function to an input.",
        "parameters": {
            "properties": {
                "input": {"maximum": maximum, "minimum": 0, "type": "integer"}
            },
            "required": ["input"],
            "type": "object",
            "additionalProperties": False,
        },
        "strict": True,
    }
    response = client.responses.create(
        model="gpt-4.1",
        input=[
            {
                "role": "user",
                "content": "What is the value of magic_function(3)? Use the tool.",
            }
        ],
        tools=[tool],
    )
    function_call = next(
        item for item in response.output if item.type == "function_call"
    )
    args = json.loads(function_call.arguments)
    arg_values.append(args["input"])

print(maximums)
print(arg_values)
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# [1, 2, 3, 3, 3, 3, 3, 3, 3, 3]
```
Until yesterday this raised `BadRequestError`. The same is not true of
Chat Completions, which appears to still raise `BadRequestError`:
```python
tool = {
    "type": "function",
    "function": {
        "name": "magic_function",
        "description": "Applies a magic function to an input.",
        "parameters": {
            "properties": {
                "input": {"maximum": 5, "minimum": 0, "type": "integer"}
            },
            "required": ["input"],
            "type": "object",
            "additionalProperties": False,
        },
        "strict": True,
    },
}

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {
            "role": "user",
            "content": "What is the value of magic_function(3)? Use the tool.",
        }
    ],
    tools=[tool],
)  # raises BadRequestError
```
Here we update tests accordingly.
When aggregating AIMessageChunks in a stream, core prefers the leftmost
non-null ID. This is problematic because:
- When an ID is null, core assigns it to `f"run-{run_manager.run_id}"`
- The desired meaningful ID might not be available until midway through
the stream, as is the case for the OpenAI Responses API.
For the OpenAI Responses API, we assign message IDs to the top-level
`AIMessage.id`. This works in `.(a)invoke`, but during `.(a)stream` the
IDs get overwritten by the defaults assigned in langchain-core. These
IDs
[must](https://community.openai.com/t/how-to-solve-badrequesterror-400-item-rs-of-type-reasoning-was-provided-without-its-required-following-item-error-in-responses-api/1151686/9)
be available on the AIMessage object to support passing reasoning items
back to the API (e.g., if not using OpenAI's `previous_response_id`
feature). We could add them elsewhere, but seeing as we've already made
the decision to store them in `.id` during `.(a)invoke`, addressing the
issue in core lets us fix the problem with no interface changes.
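For illustration, a minimal sketch of the pre-fix failure mode (the IDs are illustrative):

```python
from langchain_core.messages import AIMessageChunk

# Default-style ID assigned by core at the start of the stream
first = AIMessageChunk(content="Hello", id="run-abc123")
# Meaningful Responses API message ID arriving midway through the stream
second = AIMessageChunk(content=" world", id="msg_67890")

# Aggregation keeps the leftmost non-null ID, so the meaningful ID is lost
merged = first + second
print(merged.id)  # "run-abc123"
```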
Chat models currently implement support for:
- images in OpenAI Chat Completions format
- other multimodal types (e.g., PDF and audio) in a cross-provider
[standard
format](https://python.langchain.com/docs/how_to/multimodal_inputs/)
Here we update core to extend support to PDF and audio input in Chat
Completions format. **If an OAI-format PDF or audio content block is
passed into any chat model, it will be transformed to the LangChain
standard format**. We assume that any chat model supporting OAI-format
PDF or audio has implemented support for the standard format.
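For illustration, a hedged sketch of passing an OAI-format PDF block to a non-OpenAI chat model (the model, filename, and data are illustrative; we assume the target model supports the standard format):

```python
import base64

from langchain_anthropic import ChatAnthropic

pdf_data = base64.b64encode(open("report.pdf", "rb").read()).decode()

message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Summarize this document."},
        # OpenAI Chat Completions-format PDF block; core now converts this
        # to the LangChain standard format before the integration sees it
        {
            "type": "file",
            "file": {
                "filename": "report.pdf",
                "file_data": f"data:application/pdf;base64,{pdf_data}",
            },
        },
    ],
}
llm = ChatAnthropic(model="claude-3-5-sonnet-latest")
response = llm.invoke([message])
```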
**langchain_openai: Support for reasoning summary streaming**

**Description:**
The OpenAI API now supports streaming reasoning summaries for reasoning
models (o1, o3, o3-mini, o4-mini). More info:
https://platform.openai.com/docs/guides/reasoning#reasoning-summaries
It is supported only in the Responses API (not the Chat Completions
API), so you need to create the LangChain OpenAI model as follows to
stream reasoning summaries:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="o4-mini",  # o1, o3, and o3-mini also support reasoning streaming
    use_responses_api=True,  # reasoning streaming works only with the Responses API
    model_kwargs={
        "reasoning": {
            "effort": "high",  # "low" and "medium" are also supported
            "summary": "auto",  # some models support "concise", some "detailed"; "auto" always works
        }
    },
)
```
Now, if you stream events from the llm:
```python
async for event in llm.astream_events(prompt, version="v2"):
    print(event)
```
or
```python
for chunk in llm.stream(prompt):
    print(chunk)
```
the OpenAI API will send you new types of events:
- `response.reasoning_summary_text.added`
- `response.reasoning_summary_text.delta`
- `response.reasoning_summary_text.done`

These events were previously ignored. I have added support for them in
`_convert_responses_chunk_to_generation_chunk`, so reasoning summary
chunks and the full reasoning summary are added to the chunk's
`additional_kwargs`.
Example of how this reasoning summary may be printed:
```python
async for event in llm.astream_events(prompt, version="v2"):
    if event["event"] == "on_chat_model_stream":
        chunk: AIMessageChunk = event["data"]["chunk"]
        if "reasoning_summary_chunk" in chunk.additional_kwargs:
            print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
        elif "reasoning_summary" in chunk.additional_kwargs:
            print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
        elif chunk.content and chunk.content[0]["type"] == "text":
            print(chunk.content[0]["text"], end="")
```
or
```python
for chunk in llm.stream(prompt):
    if "reasoning_summary_chunk" in chunk.additional_kwargs:
        print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
    elif "reasoning_summary" in chunk.additional_kwargs:
        print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
    elif chunk.content and chunk.content[0]["type"] == "text":
        print(chunk.content[0]["text"], end="")
```
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
When OpenAI originally released `stream_options` to enable token usage
during streaming, it was not supported in AzureOpenAI. It is now
supported.
Like the [OpenAI
SDK](f66d2e6fdc/src/openai/resources/completions.py (L68)),
ChatOpenAI does not return usage metadata during streaming by default,
since doing so adds an extra chunk to the stream. The OpenAI SDK
requires users to pass `stream_options={"include_usage": True}` to opt
in. ChatOpenAI implements a convenience kwarg `stream_usage:
Optional[bool]` and an attribute `stream_usage: bool = False`.
Here we extend this to AzureChatOpenAI by moving the `stream_usage`
attribute and `stream_usage` kwarg (on `_(a)stream`) from ChatOpenAI to
BaseChatOpenAI.
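A usage sketch (the deployment name and API version below are illustrative):

```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="gpt-4o",  # illustrative deployment name
    api_version="2024-06-01",  # illustrative API version
    stream_usage=True,  # emit a final chunk containing usage metadata
)

# Aggregate the stream; the final chunk carries usage_metadata
aggregate = None
for chunk in llm.stream("Hello!"):
    aggregate = chunk if aggregate is None else aggregate + chunk

print(aggregate.usage_metadata)
```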
---
Additional consideration: we must be mindful of users who use
BaseChatOpenAI to interact with other APIs that do not support the
`stream_options` parameter.
Suppose OpenAI in the future updates the default behavior to stream
token usage. Currently, BaseChatOpenAI only passes `stream_options` if
`stream_usage` is True, so there would be no way to disable this new
default behavior.
To address this, we could update the `stream_usage` attribute to
`Optional[bool] = None`, but this is technically a breaking change
(currently, values of False are not passed to the client). In my
opinion, if/when OpenAI makes that change, we could ship this update
alongside it in a minor version bump.
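To make the tri-state concrete, a hypothetical sketch of such a mapping (not the current implementation):

```python
from typing import Optional


def _stream_options(stream_usage: Optional[bool]) -> dict:
    # Hypothetical mapping: None omits stream_options entirely, deferring
    # to the API default; True/False set include_usage explicitly.
    if stream_usage is None:
        return {}
    return {"stream_options": {"include_usage": stream_usage}}
```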
---
Related previous PRs:
- https://github.com/langchain-ai/langchain/pull/22628
- https://github.com/langchain-ai/langchain/pull/22854
- https://github.com/langchain-ai/langchain/pull/23552
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
- Support thinking blocks in core's `convert_to_openai_messages` (pass
through instead of raising an error; see the sketch below)
- Ignore thinking blocks in ChatOpenAI (instead of raising an error)
- Support Anthropic-style image blocks in ChatOpenAI
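A minimal sketch of the `convert_to_openai_messages` behavior (block values are illustrative):

```python
from langchain_core.messages import AIMessage, convert_to_openai_messages

# Anthropic-style AI message containing a thinking block
ai_message = AIMessage(
    content=[
        {"type": "thinking", "thinking": "Let me work through this...", "signature": "sig"},
        {"type": "text", "text": "The answer is 4."},
    ]
)

# Previously raised an error; thinking blocks are now passed through
print(convert_to_openai_messages([ai_message]))
```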
---
Standard integration tests include a `supports_anthropic_inputs`
property which is currently enabled only for tests on `ChatAnthropic`.
This test enforces compatibility with message histories of the form:
```
- system message
- human message
- AI message with tool calls specified only through `tool_use` content blocks
- human message containing `tool_result` and an additional `text` block
```
It additionally checks support for Anthropic-style image inputs if
`supports_image_inputs` is enabled.
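For reference, an Anthropic-style base64 image block has this shape (the data is illustrative):

```python
{
    "type": "image",
    "source": {
        "type": "base64",
        "media_type": "image/jpeg",
        "data": "<base64-encoded image>",
    },
}
```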
Here we change this test, such that if you enable
`supports_anthropic_inputs`:
- You support AI messages with text and `tool_use` content blocks
- You support Anthropic-style image inputs (if `supports_image_inputs`
is enabled)
- You support thinking content blocks
That is, we add a test case for thinking content blocks, but we also
remove the requirement of handling tool results within HumanMessages
(motivated by existing agent abstractions, which should all return
ToolMessage). We move that requirement to a ChatAnthropic-specific test.
- **Description:** Before sending the final completion chunk at the end
of an OpenAI stream, remove the `tool_calls`, as those have already been
sent as chunks.
@ccurme as mentioned in another PR
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>