Nick Hollon f42d80ca1c fix(core): preserve chunk additional_kwargs across v3 stream assembly (#37435)
The v3 streaming path drops `additional_kwargs` from per-chunk
`AIMessageChunk`s during assembly: `chunks_to_events` emits no event
field for them, and `ChatModelStream._assemble_message` constructs the
final `AIMessage` without an `additional_kwargs` argument. Non-streaming
`ainvoke` returns the provider message unchanged, so streaming and
non-streaming diverge for any provider that uses `additional_kwargs` to
carry data outside the typed protocol blocks.
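
As a minimal sketch of the divergence (illustrative, not the library's v3 code): summing `AIMessageChunk`s already merges `additional_kwargs`, so the data survives chunk aggregation and is only lost when the final message is rebuilt from selected fields:

```python
# Minimal sketch, not the v3 implementation: chunk addition merges
# additional_kwargs, but rebuilding the final AIMessage from selected
# fields alone drops the merged dict.
from langchain_core.messages import AIMessage, AIMessageChunk

chunks = [
    AIMessageChunk(content="Hel", additional_kwargs={"sig": {"a": "1"}}),
    AIMessageChunk(content="lo", additional_kwargs={"sig": {"b": "2"}}),
]

merged = chunks[0]
for chunk in chunks[1:]:
    merged = merged + chunk  # chunk addition merges content and additional_kwargs

assert merged.additional_kwargs == {"sig": {"a": "1", "b": "2"}}

# Analogous to the buggy assembly path: additional_kwargs is not threaded through.
final = AIMessage(content=merged.content)
assert final.additional_kwargs == {}
```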

## How this surfaces

The concrete failure mode is Gemini's
`__gemini_function_call_thought_signatures__` — a per-tool-call
signature blob the Google GenAI integration places in
`additional_kwargs`, keyed by `tool_call_id`. Gemini requires that
signature on follow-up turns to replay the prior thought trace; without
it, multi-turn streaming flows lose thought continuity: the model may
regenerate its thinking (incurring additional reasoning-token charges)
or, in some cases, refuse the request. Other providers that use
`additional_kwargs` (e.g. older
`function_call` accumulators, custom routing metadata) hit the same gap;
the fix is intentionally provider-agnostic.
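
For concreteness, the payload riding on `additional_kwargs` has roughly this shape; the key name comes from the integration, while the id and blob values below are hypothetical:

```python
# Illustrative only: the tool_call_id and signature values are made up.
additional_kwargs = {
    "__gemini_function_call_thought_signatures__": {
        "call_abc123": "<opaque signature blob>",
    }
}
```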

## Fix

Provider-agnostic, two seams:

- `_compat_bridge` accumulates `msg.additional_kwargs` across chunks
with `merge_dicts` (matching `AIMessageChunk`'s own merge semantics for
fields that accumulate, like `function_call`) and emits the merged dict
on the `message-finish` event as an off-spec extension. The bridge
already uses one such extension (`metadata` on `MessageFinishData`);
this PR follows the same pattern for `additional_kwargs`.
- `ChatModelStream._finish` reads the new field; `_assemble_message`
threads it onto the final `AIMessage` only when non-empty, preserving
today's behavior of leaving `additional_kwargs` empty when no provider
data needs to ride on it. A sketch of both seams follows this list.
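
A minimal sketch of both seams, assuming `merge_dicts` from `langchain_core.utils._merge` and using hypothetical function names in place of the actual bridge and assembly code:

```python
# Sketch under assumptions: function names are hypothetical stand-ins for
# the bridge accumulation and ChatModelStream assembly described above.
from langchain_core.messages import AIMessage, AIMessageChunk
from langchain_core.utils._merge import merge_dicts


def accumulate_additional_kwargs(chunks: list[AIMessageChunk]) -> dict:
    # Matches AIMessageChunk's own merge semantics for accumulating fields.
    merged: dict = {}
    for chunk in chunks:
        merged = merge_dicts(merged, chunk.additional_kwargs)
    return merged


def assemble_message(content: str, merged_kwargs: dict) -> AIMessage:
    # Thread additional_kwargs only when non-empty, preserving today's
    # behavior of leaving it empty when no provider data rides on it.
    if merged_kwargs:
        return AIMessage(content=content, additional_kwargs=merged_kwargs)
    return AIMessage(content=content)
```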

The agent engineering platform.


LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.

Tip

Just getting started? Check out Deep Agents — a higher-level package built on LangChain for agents that have built-in capabilities for common usage patterns such as planning, subagents, file system usage, and more.

Quickstart

pip install langchain
# or
uv add langchain

from langchain.chat_models import init_chat_model

model = init_chat_model("openai:gpt-5.4")
result = model.invoke("Hello, world!")

If you're looking for more advanced customization or agent orchestration, check out LangGraph, our framework for building controllable agent workflows.

For an equivalent JS/TS library, check out LangChain.js.

Tip

For developing, debugging, and deploying AI agents and LLM applications, see LangSmith.

LangChain ecosystem

While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.

  • Deep Agents — Build agents that can plan, use subagents, and leverage file systems for complex tasks
  • LangGraph — Build agents that can reliably handle complex tasks with our low-level agent orchestration framework
  • Integrations — Chat & embedding models, tools & toolkits, and more
  • LangSmith — Agent evals, observability, and debugging for LLM apps
  • LangSmith Deployment — Deploy and scale agents with a purpose-built platform for long-running, stateful workflows

Why use LangChain?

LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.

  • Real-time data augmentation — Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more
  • Model interoperability — Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly — LangChain's abstractions keep you moving without losing momentum
  • Rapid prototyping — Quickly build and iterate on LLM applications with LangChain's modular, component-based architecture. Test different approaches and workflows without rebuilding from scratch, accelerating your development cycle
  • Production-ready features — Deploy reliable applications with built-in support for monitoring, evaluation, and debugging through integrations like LangSmith. Scale with confidence using battle-tested patterns and best practices
  • Vibrant community and ecosystem — Leverage a rich ecosystem of integrations, templates, and community-contributed components. Benefit from continuous improvements and stay up-to-date with the latest AI developments through an active open-source community
  • Flexible abstraction layers — Work at the level of abstraction that suits your needs — from high-level chains for quick starts to low-level components for fine-grained control. LangChain grows with your application's complexity

Documentation

Discussions: Visit the LangChain Forum to connect with the community and share your technical questions, ideas, and feedback.

Additional resources

  • Contributing Guide: Learn how to contribute to LangChain projects and find good first issues.
  • Code of Conduct: Our community guidelines and standards for participation.
  • LangChain Academy: Comprehensive, free courses on LangChain libraries and products, made by the LangChain team.