OpenAI changed their API to require the `partial_images` parameter when
using image generation + streaming.
As described in https://github.com/langchain-ai/langchain/pull/31424, we
are ignoring partial images. Here, we accept the `partial_images`
parameter (as required by OpenAI), but emit a warning and continue to
ignore partial images.
This still does not surface partial images during generation. Before
doing that, I'd like to figure out how to specify the aggregation logic
without requiring changes in core.
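A minimal sketch of the intended usage (the tool spec follows OpenAI's
Responses API docs for the built-in image generation tool; the exact warning
text is illustrative):
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4.1-mini", use_responses_api=True)

# OpenAI now requires partial_images when streaming with the image generation tool.
llm_with_tools = llm.bind_tools([{"type": "image_generation", "partial_images": 1}])

# Streaming works as before; a warning notes that partial image frames are
# accepted but still ignored, and only the final image is surfaced.
for chunk in llm_with_tools.stream("Draw a small red square."):
    pass
```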
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Some providers include (legacy) function calls in `additional_kwargs` in
addition to tool calls. We currently unpack both function calls and tool
calls if present, but OpenAI will raise 400 in this case.
This can come up if providers are mixed in a tool-calling loop. Example:
```python
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool


@tool
def get_weather(location: str) -> str:
    """Get weather at a location."""
    return "It's sunny."


gemini = init_chat_model("google_genai:gemini-2.0-flash-001").bind_tools([get_weather])
openai = init_chat_model("openai:gpt-4.1-mini").bind_tools([get_weather])

input_message = HumanMessage("What's the weather in Boston?")
tool_call_message = gemini.invoke([input_message])
assert len(tool_call_message.tool_calls) == 1

tool_call = tool_call_message.tool_calls[0]
tool_message = get_weather.invoke(tool_call)

response = openai.invoke(  # currently raises 400 / BadRequestError
    [input_message, tool_call_message, tool_message]
)
```
Here we ignore function calls if tool calls are present.
partners: (langchain-openai) total_tokens should not add 'Nonetype' t…
# PR Description
## Description
Fixed an issue in `langchain-openai` where `total_tokens` was
incorrectly adding `None` to an integer, causing a TypeError. The fix
ensures proper type checking before adding token counts.
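A minimal sketch of the guard (the function name is illustrative, not the
exact code in `langchain-openai`):
```python
from typing import Optional


def _sum_token_counts(*counts: Optional[int]) -> int:
    # Skip missing values instead of adding None to an int.
    return sum(c for c in counts if c is not None)


assert _sum_token_counts(10, None, 5) == 15
```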
## Issue
Fixes the `TypeError` raised when `'NoneType'` is added to an integer
(see the traceback in the linked issue).
## Dependencies
None
## Twitter handle
None

Co-authored-by: qiulijie <qiulijie@yuaiweiwu.com>
When aggregating AIMessageChunks in a stream, core prefers the leftmost
non-null ID. This is problematic because:
- When an ID is null, core assigns it to `f"run-{run_manager.run_id}"`
- The desired meaningful ID might not be available until midway through
the stream, as is the case for the OpenAI Responses API.
For the OpenAI Responses API, we assign message IDs to the top-level
`AIMessage.id`. This works in `.(a)invoke`, but during `.(a)stream` the
IDs get overwritten by the defaults assigned in langchain-core. These
IDs
[must](https://community.openai.com/t/how-to-solve-badrequesterror-400-item-rs-of-type-reasoning-was-provided-without-its-required-following-item-error-in-responses-api/1151686/9)
be available on the AIMessage object to support passing reasoning items
back to the API (e.g., if not using OpenAI's `previous_response_id`
feature). We could add them elsewhere, but seeing as we've already made
the decision to store them in `.id` during `.(a)invoke`, addressing the
issue in core lets us fix the problem with no interface changes.
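Illustratively (a simplified sketch, not the actual core code), the
aggregation would prefer a provider-assigned ID over the `run-` default
instead of always keeping the leftmost non-null value:
```python
from typing import Optional


def _merge_ids(left: Optional[str], right: Optional[str]) -> Optional[str]:
    # Prefer a "real" provider ID (e.g. a message ID from the Responses API)
    # over the "run-..." default that core assigns to null IDs.
    for candidate in (left, right):
        if candidate and not candidate.startswith("run-"):
            return candidate
    return left or right
```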
Following https://github.com/langchain-ai/langchain/pull/30909: need to
retain "empty" reasoning output when streaming, e.g.,
```python
{'id': 'rs_...', 'summary': [], 'type': 'reasoning'}
```
Covered by existing integration tests, which currently fail without this change.
**langchain_openai: Support of reasoning summary streaming**
**Description:**
The OpenAI API now supports streaming reasoning summaries for reasoning
models (o1, o3, o3-mini, o4-mini). More info:
https://platform.openai.com/docs/guides/reasoning#reasoning-summaries
This is supported only in the Responses API (not the Completions API), so
the LangChain ChatOpenAI model must be created as follows to stream
reasoning summaries:
```python
llm = ChatOpenAI(
    model="o4-mini",  # o1, o3, and o3-mini also support reasoning streaming
    use_responses_api=True,  # reasoning streaming works only with the Responses API, not the Completions API
    model_kwargs={
        "reasoning": {
            "effort": "high",  # "low" and "medium" are also supported
            "summary": "auto",  # some models support "concise", some "detailed"; "auto" always works
        }
    },
)
```
Now, if you stream events from llm:
```python
async for event in llm.astream_events(prompt, version="v2"):
    print(event)
```
or
```python
for chunk in llm.stream(prompt):
    print(chunk)
```
the OpenAI API will send new types of events:
- `response.reasoning_summary_text.added`
- `response.reasoning_summary_text.delta`
- `response.reasoning_summary_text.done`

These event types are new and were previously ignored. This PR adds support
for them in `_convert_responses_chunk_to_generation_chunk`, so that reasoning
summary chunks and the full reasoning summary are added to the chunk's
`additional_kwargs`.
Example of how this reasoning summary may be printed:
```python
async for event in llm.astream_events(prompt, version="v2"):
    if event["event"] == "on_chat_model_stream":
        chunk: AIMessageChunk = event["data"]["chunk"]
        if "reasoning_summary_chunk" in chunk.additional_kwargs:
            print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
        elif "reasoning_summary" in chunk.additional_kwargs:
            print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
        elif chunk.content and chunk.content[0]["type"] == "text":
            print(chunk.content[0]["text"], end="")
```
or
```python
for chunk in llm.stream(prompt):
    if "reasoning_summary_chunk" in chunk.additional_kwargs:
        print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
    elif "reasoning_summary" in chunk.additional_kwargs:
        print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
    elif chunk.content and chunk.content[0]["type"] == "text":
        print(chunk.content[0]["text"], end="")
```
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
When OpenAI originally released `stream_options` to enable token usage
during streaming, it was not supported in AzureOpenAI. It is now
supported.
Like the [OpenAI
SDK](f66d2e6fdc/src/openai/resources/completions.py (L68)),
ChatOpenAI does not return usage metadata during streaming by default
(returning it adds an extra chunk to the stream). The OpenAI SDK requires users
to pass `stream_options={"include_usage": True}`. ChatOpenAI implements
a convenience argument `stream_usage: Optional[bool]`, and an attribute
`stream_usage: bool = False`.
Here we extend this to AzureChatOpenAI by moving the `stream_usage`
attribute and `stream_usage` kwarg (on `_(a)stream`) from ChatOpenAI to
BaseChatOpenAI.
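Illustrative usage after this change (the deployment name is a placeholder):
```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(azure_deployment="my-gpt-4o-deployment", stream_usage=True)

for chunk in llm.stream("Hello"):
    if chunk.usage_metadata:
        # With stream_usage=True, a final chunk carries token counts.
        print(chunk.usage_metadata)
```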
---
Additional consideration: we must be mindful of the many users who use
BaseChatOpenAI to interact with other APIs that do not support the
`stream_options` parameter.
Suppose OpenAI in the future updates the default behavior to stream
token usage. Currently, BaseChatOpenAI only passes `stream_options` if
`stream_usage` is True, so there would be no way to disable this new
default behavior.
To address this, we could update the `stream_usage` attribute to
`Optional[bool] = None`, but this is technically a breaking change (as
currently values of False are not passed to the client). IMO: if / when
this change happens, we could accompany it with this update in a minor
bump.
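For context, a simplified sketch of the current forwarding behavior (not the
exact implementation): `stream_options` is only sent when `stream_usage` is
truthy, so an explicit `False` never reaches the client.
```python
from typing import Any


def _build_request_payload(stream_usage: bool, **kwargs: Any) -> dict:
    payload = dict(kwargs)
    if stream_usage:
        # False (or the default) is simply never forwarded, so today there is
        # no way to send an explicit opt-out to the API.
        payload["stream_options"] = {"include_usage": True}
    return payload
```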
---
Related previous PRs:
- https://github.com/langchain-ai/langchain/pull/22628
- https://github.com/langchain-ai/langchain/pull/22854
- https://github.com/langchain-ai/langchain/pull/23552
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
**Description:**
This PR fixes a minor typo in the comments within
`libs/partners/openai/langchain_openai/chat_models/base.py`. The word
"ben" has been corrected to "be" for clarity and professionalism.
**Issue:**
N/A
**Dependencies:**
None