Many provider integrations (notably Anthropic's `input_json_delta`
path) attach the tool-call `id` and `name` only to the first
`tool_use` chunk; subsequent per-chunk slices carry `id=None,
name=None` and just the fresh `args` segment. The compat bridge
forwarded those `None` values verbatim, producing wire payloads like
`{"type": "tool_call_chunk", "id": null, "name": null, "args": "..."}`.
Consumers that fold deltas via a naive `{...target, ...delta}` spread
(e.g. the langgraph-js SDK's `MessageAssembler.applyContentDelta`)
interpret those as "identifier reset to null" and lose the id/name
captured from `content-block-start`. Downstream extractors then drop
those chunks until the final `content-block-finish` arrives — visible
to end users as tool-call cards appearing all at once at the end of a
turn instead of streaming in incrementally (the Deep Agent example
renders its four subagents in a single flicker rather than one after
another).
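
A minimal sketch of the failure mode in Python (`{**target, **delta}` mirrors the JS `{...target, ...delta}` spread; the chunk shapes are illustrative, and the real fold lives in the SDK's `MessageAssembler.applyContentDelta`):

```python
# Illustrative chunks mirroring the wire payloads described above:
# only the first tool_use slice carries id/name; later slices carry
# the fresh args segment plus explicit nulls.
chunks = [
    {"type": "tool_call_chunk", "id": "toolu_01", "name": "search", "args": '{"qu'},
    {"type": "tool_call_chunk", "id": None, "name": None, "args": 'ery": "lang'},
    {"type": "tool_call_chunk", "id": None, "name": None, "args": 'chain"}'},
]

folded: dict = {}
for chunk in chunks:
    # Naive spread-style fold: the explicit None values in later chunks
    # overwrite the id/name captured from the first chunk.
    folded = {**folded, **chunk, "args": folded.get("args", "") + chunk["args"]}

print(folded["id"], folded["name"])  # None None: identifiers were "reset"
```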
Introduce `_to_protocol_delta_block` and route every
`content-block-delta` emission (sync / async chunk streams and the
`message_to_events` replay path) through it. For `tool_call_chunk`
and `server_tool_call_chunk` shapes, drop `id` / `name` keys when
they would serialize to `null`. This matches the wire shape produced
by langgraph-js's `toProtocolDeltaBlock`, where identifiers are only
surfaced when they carry a real value.
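
The function name comes from the change itself; the body below is a minimal sketch of the described key-dropping behavior, not the actual implementation:

```python
def _to_protocol_delta_block(block: dict) -> dict:
    """Illustrative: shape a content-block-delta payload for the wire.

    For tool-call chunk shapes, omit `id`/`name` keys that would
    serialize to null, so spread-style merges downstream cannot
    clobber identifiers captured from content-block-start.
    """
    if block.get("type") in ("tool_call_chunk", "server_tool_call_chunk"):
        return {
            key: value
            for key, value in block.items()
            if value is not None or key not in ("id", "name")
        }
    return block
```

Running the chunks from the earlier sketch through this filter strips the null `id`/`name` keys, so the same spread merge preserves the identifiers captured from the first chunk.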
# LangChain

The agent engineering platform.
LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.
> [!NOTE]
> Looking for the JS/TS library? Check out LangChain.js.
## Quickstart
```bash
pip install langchain
# or
uv add langchain
```
```python
from langchain.chat_models import init_chat_model

model = init_chat_model("openai:gpt-5-nano")
result = model.invoke("Hello, world!")
```
If you're looking for more advanced customization or agent orchestration, check out LangGraph, our framework for building controllable agent workflows.
> [!TIP]
> For developing, debugging, and deploying AI agents and LLM applications, see LangSmith.
## LangChain ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
- Deep Agents — Build agents that can plan, use subagents, and leverage file systems for complex tasks
- LangGraph — Build agents that can reliably handle complex tasks with our low-level agent orchestration framework
- Integrations — Chat & embedding models, tools & toolkits, and more
- LangSmith — Agent evals, observability, and debugging for LLM apps
- LangSmith Deployment — Deploy and scale agents with a purpose-built platform for long-running, stateful workflows
## Why use LangChain?
LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.
- Real-time data augmentation — Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more
- Model interoperability — Swap models in and out as your engineering team experiments to find the best choice for your application's needs (see the sketch after this list). As the industry frontier evolves, adapt quickly — LangChain's abstractions keep you moving without losing momentum
- Rapid prototyping — Quickly build and iterate on LLM applications with LangChain's modular, component-based architecture. Test different approaches and workflows without rebuilding from scratch, accelerating your development cycle
- Production-ready features — Deploy reliable applications with built-in support for monitoring, evaluation, and debugging through integrations like LangSmith. Scale with confidence using battle-tested patterns and best practices
- Vibrant community and ecosystem — Leverage a rich ecosystem of integrations, templates, and community-contributed components. Benefit from continuous improvements and stay up-to-date with the latest AI developments through an active open-source community
- Flexible abstraction layers — Work at the level of abstraction that suits your needs — from high-level chains for quick starts to low-level components for fine-grained control. LangChain grows with your application's complexity
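
For example, swapping providers with `init_chat_model` is a one-line change (model names here are illustrative; each provider needs its integration package installed and API keys set):

```python
from langchain.chat_models import init_chat_model

# Assumes OPENAI_API_KEY / ANTHROPIC_API_KEY are set and the
# langchain-openai / langchain-anthropic packages are installed.
openai_model = init_chat_model("openai:gpt-5-nano")
anthropic_model = init_chat_model("anthropic:claude-3-5-sonnet-latest")

# Same calling code either way: only the model string changed.
for model in (openai_model, anthropic_model):
    print(model.invoke("Say hi in five words.").content)
```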
## Documentation
- docs.langchain.com – Comprehensive documentation, including conceptual overviews and guides
- reference.langchain.com/python – API reference docs for LangChain packages
- Chat LangChain – Chat with the LangChain documentation and get answers to your questions
Discussions: Visit the LangChain Forum to connect with the community and share all of your technical questions, ideas, and feedback.
## Additional resources
- Contributing Guide – Learn how to contribute to LangChain projects and find good first issues.
- Code of Conduct – Our community guidelines and standards for participation.
- LangChain Academy – Comprehensive, free courses on LangChain libraries and products, made by the LangChain team.