Mirror of https://github.com/hwchase17/langchain.git (synced 2025-08-11 13:55:03 +00:00)
When aggregating AIMessageChunks in a stream, core prefers the leftmost non-null ID. This is problematic because:

- Core assigns IDs to `f"run-{run_manager.run_id}"` when they are null.
- The desired meaningful ID might not be available until midway through the stream, as is the case for the OpenAI Responses API.

For the OpenAI Responses API, we assign message IDs to the top-level `AIMessage.id`. This works in `.(a)invoke`, but during `.(a)stream` the IDs get overwritten by the defaults assigned in langchain-core. These IDs [must](https://community.openai.com/t/how-to-solve-badrequesterror-400-item-rs-of-type-reasoning-was-provided-without-its-required-following-item-error-in-responses-api/1151686/9) be available on the AIMessage object to support passing reasoning items back to the API (e.g., if not using OpenAI's `previous_response_id` feature). We could add them elsewhere, but since we've already made the decision to store them in `.id` during `.(a)invoke`, addressing the issue in core lets us fix the problem with no interface changes.
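A minimal sketch of the aggregation behavior described above, using plain stand-in classes rather than the real `AIMessageChunk` (the class, merge function, and `"run-"` prefix check here are illustrative assumptions, not langchain-core's actual implementation):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Chunk:
    # Hypothetical stand-in for AIMessageChunk, for illustration only.
    id: Optional[str]
    content: str


def merge_id_leftmost(left: Optional[str], right: Optional[str]) -> Optional[str]:
    # Current behavior: prefer the leftmost non-null ID.
    return left if left is not None else right


def merge_id_prefer_meaningful(left: Optional[str], right: Optional[str]) -> Optional[str]:
    # Sketched fix: a provider-assigned ID arriving mid-stream should win
    # over the default f"run-{run_id}" placeholder assigned by core.
    for candidate in (left, right):
        if candidate is not None and not candidate.startswith("run-"):
            return candidate
    return left if left is not None else right


chunks = [
    Chunk(id="run-abc123", content="Hel"),  # default ID assigned by core
    Chunk(id="msg_67890", content="lo"),    # meaningful ID arrives mid-stream
]

old_id = chunks[0].id
new_id = chunks[0].id
for c in chunks[1:]:
    old_id = merge_id_leftmost(old_id, c.id)
    new_id = merge_id_prefer_meaningful(new_id, c.id)

print(old_id)  # run-abc123 (meaningful ID lost)
print(new_id)  # msg_67890 (meaningful ID preserved)
```

With leftmost-wins merging, the placeholder from the first chunk shadows the provider's message ID for the rest of the stream; preferring a non-placeholder ID when one appears preserves it without changing the `.id` interface.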