core, openai[patch]: prefer provider-assigned IDs when aggregating message chunks (#31080)

When aggregating AIMessageChunks in a stream, core prefers the leftmost
non-null ID. This is problematic because:
- When an ID is null, core assigns a default of `f"run-{run_manager.run_id}"`
- The desired meaningful ID might not be available until midway through
the stream, as is the case for the OpenAI Responses API.
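
To illustrate the preference rule: a minimal, self-contained sketch (the helper `merge_chunk_ids` and the `_LC_ID_PREFIX` constant here are illustrative stand-ins, not the exact langchain-core internals):

```python
from typing import Optional, Sequence

# Default IDs assigned by core look like "run-<run_id>".
_LC_ID_PREFIX = "run-"


def merge_chunk_ids(ids: Sequence[Optional[str]]) -> Optional[str]:
    """Pick an ID for the aggregated message chunk."""
    # Prefer the first provider-assigned ID (anything not auto-generated by core).
    for id_ in ids:
        if id_ and not id_.startswith(_LC_ID_PREFIX):
            return id_
    # Otherwise fall back to the first non-null ID (the old behavior).
    for id_ in ids:
        if id_:
            return id_
    return None


assert merge_chunk_ids(["run-123", "msg_abc", None]) == "msg_abc"
assert merge_chunk_ids(["run-123", None]) == "run-123"
```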

For the OpenAI Responses API, we assign message IDs to the top-level
`AIMessage.id`. This works in `.(a)invoke`, but during `.(a)stream` the
IDs get overwritten by the defaults assigned in langchain-core. These
IDs
[must](https://community.openai.com/t/how-to-solve-badrequesterror-400-item-rs-of-type-reasoning-was-provided-without-its-required-following-item-error-in-responses-api/1151686/9)
be available on the AIMessage object to support passing reasoning items
back to the API (e.g., if not using OpenAI's `previous_response_id`
feature). We could add them elsewhere, but seeing as we've already made
the decision to store them in `.id` during `.(a)invoke`, addressing the
issue in core lets us fix the problem with no interface changes.
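
A usage sketch of the effect (model name and prompt are arbitrary; `use_responses_api=True` routes ChatOpenAI through the Responses API):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", use_responses_api=True)

# Aggregate the stream the same way LangChain does internally.
full = None
for chunk in llm.stream("Hello!"):
    full = chunk if full is None else full + chunk

# Before this change the aggregated ID could be the "run-<run_id>" default;
# with it, the provider-assigned "msg_..." ID survives streaming, matching
# what .invoke() returns and allowing the message to be sent back as input.
print(full.id)
```
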
Author: ccurme
Date: 2025-05-02 11:18:18 -04:00
Committed by: GitHub
Parent: 72f905a436
Commit: 26ad239669
5 changed files with 42 additions and 14 deletions

@@ -3127,6 +3127,7 @@ def _construct_responses_api_input(messages: Sequence[BaseMessage]) -> list:
             reasoning_items = []
             if reasoning := lc_msg.additional_kwargs.get("reasoning"):
                 reasoning_items.append(_pop_summary_index_from_reasoning(reasoning))
+            input_.extend(reasoning_items)
             # Function calls
             function_calls = []
             if tool_calls := msg.pop("tool_calls", None):
@@ -3185,13 +3186,11 @@ def _construct_responses_api_input(messages: Sequence[BaseMessage]) -> list:
                         pass
                 msg["content"] = new_blocks
             if msg["content"]:
+                if lc_msg.id and lc_msg.id.startswith("msg_"):
+                    msg["id"] = lc_msg.id
                 input_.append(msg)
             input_.extend(function_calls)
-            if computer_calls:
-                # Hack: we only add reasoning items if computer calls are present. See:
-                # https://community.openai.com/t/how-to-solve-badrequesterror-400-item-rs-of-type-reasoning-was-provided-without-its-required-following-item-error-in-responses-api/1151686/5
-                input_.extend(reasoning_items)
-                input_.extend(computer_calls)
+            input_.extend(computer_calls)
         elif msg["role"] == "user":
             if isinstance(msg["content"], list):
                 new_blocks = []
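
The payload this constructs looks roughly like the following (illustrative only; placeholder IDs and an approximate item shape). A reasoning item must be followed by the assistant message item carrying its original provider-assigned ID, which is why the `msg_...` ID has to survive streaming:

```python
# Sketch of a Responses API `input` list for a prior assistant turn.
input_ = [
    # Reasoning item recovered from AIMessage.additional_kwargs["reasoning"].
    {"type": "reasoning", "id": "rs_abc123", "summary": []},
    # The assistant message that "follows" the reasoning item; its id is the
    # preserved AIMessage.id (it starts with "msg_").
    {
        "role": "assistant",
        "id": "msg_def456",
        "content": [{"type": "output_text", "text": "...", "annotations": []}],
    },
]
```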