Fixes a streaming bug where models like Qwen3 (using the OpenAI interface) send tool call chunks with inconsistent indices, resulting in duplicate or erroneous tool calls instead of a single merged tool call.

## Problem

When Qwen3 streams tool calls, it sends chunks with inconsistent `index` values:

- First chunk: `index=1` with the tool name and partial arguments
- Subsequent chunks: `index=0` with `name=None`, `id=None`, and the argument continuation

The existing `merge_lists` function only merges chunks whose `index` values match exactly, so these logically related chunks remain separate, producing multiple incomplete tool calls instead of one complete tool call.

```python
# Before fix: results in 1 valid + 1 invalid tool call
chunk1 = AIMessageChunk(tool_call_chunks=[
    {"name": "search", "args": '{"query":', "id": "call_123", "index": 1}
])
chunk2 = AIMessageChunk(tool_call_chunks=[
    {"name": None, "args": ' "test"}', "id": None, "index": 0}
])
merged = chunk1 + chunk2  # Creates 2 separate tool calls

# After fix: results in 1 complete tool call
merged = chunk1 + chunk2  # Creates 1 merged tool call: search({"query": "test"})
```

## Solution

Enhanced the `merge_lists` function in `langchain_core/utils/_merge.py` with smarter tool call chunk merging:

1. **Preserves existing behavior**: same-index chunks still merge as before
2. **Adds special handling**: tool call chunks with `name=None`/`id=None` that don't match any existing index are merged into the most recent complete tool call chunk
3. **Maintains backward compatibility**: all existing functionality works unchanged
4. **Targeted fix**: only affects tool call chunks; behavior for other list items is unchanged

The fix specifically handles the pattern where:

- a continuation chunk has `name=None` and `id=None` (indicating it is part of an ongoing tool call),
- no matching index is found among the existing chunks, and
- a recent tool call chunk with a valid name or ID exists to merge with.

A simplified sketch of this rule appears at the end of this description.

## Testing

Added test coverage for:

- ✅ Qwen3-style chunks with different indices now merge correctly
- ✅ Existing same-index behavior is preserved
- ✅ Multiple distinct tool calls remain separate
- ✅ Edge cases are handled (empty chunks, orphaned continuations)
- ✅ Backward compatibility is maintained

Fixes #31511.

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
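For illustration, here is a minimal, hypothetical sketch of the merging rule described in the Solution section. It is not the actual `merge_lists` implementation; the function name `merge_tool_call_chunks` and the plain-dict handling are simplifications for clarity.

```python
# Hypothetical, simplified sketch of the continuation-merging rule; the
# real implementation lives in langchain_core/utils/_merge.py and handles
# more cases. Chunks use the tool_call_chunk dict shape:
# {"name", "args", "id", "index"}.


def merge_tool_call_chunks(left: list[dict], right: list[dict]) -> list[dict]:
    """Merge streamed tool call chunks, tolerating inconsistent indices."""
    merged = [dict(chunk) for chunk in left]
    for new in right:
        # 1. Same-index chunks merge as before: concatenate their args.
        match = next((c for c in merged if c.get("index") == new.get("index")), None)
        if match is not None:
            match["args"] = (match.get("args") or "") + (new.get("args") or "")
            continue
        # 2. A continuation chunk (name=None and id=None) whose index matches
        #    nothing is folded into the most recent chunk with a name or id.
        if new.get("name") is None and new.get("id") is None:
            target = next(
                (c for c in reversed(merged) if c.get("name") or c.get("id")), None
            )
            if target is not None:
                target["args"] = (target.get("args") or "") + (new.get("args") or "")
                continue
        # 3. Otherwise treat it as a genuinely new tool call.
        merged.append(dict(new))
    return merged


# Usage: Qwen3-style chunks with mismatched indices merge into one call.
left = [{"name": "search", "args": '{"query":', "id": "call_123", "index": 1}]
right = [{"name": None, "args": ' "test"}', "id": None, "index": 0}]
assert merge_tool_call_chunks(left, right) == [
    {"name": "search", "args": '{"query": "test"}', "id": "call_123", "index": 1}
]
```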
> [!NOTE]
> Looking for the JS/TS library? Check out LangChain.js.
LangChain is a framework for building LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.
```bash
pip install -U langchain
```
To learn more about LangChain, check out the docs. If you’re looking for more advanced customization or agent orchestration, check out LangGraph, our framework for building controllable agent workflows.
## Why use LangChain?
LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.
Use LangChain for:
- **Real-time data augmentation.** Easily connect LLMs to diverse data sources and external / internal systems, drawing from LangChain’s vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- **Model interoperability.** Swap models in and out as your engineering team experiments to find the best choice for your application’s needs. As the industry frontier evolves, adapt quickly — LangChain’s abstractions keep you moving without losing momentum.
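As a quick illustration of this standard interface, here is a minimal sketch, assuming the `init_chat_model` helper and an installed provider package such as `langchain-openai`:

```python
# A minimal sketch of swapping models behind LangChain's standard chat
# model interface. Assumes `pip install langchain langchain-openai` and
# an OPENAI_API_KEY in the environment.
from langchain.chat_models import init_chat_model

model = init_chat_model("gpt-4o-mini", model_provider="openai")

# Swapping providers is a one-line change, e.g.:
# model = init_chat_model("claude-3-5-sonnet-latest", model_provider="anthropic")

response = model.invoke("In one sentence, what is retrieval-augmented generation?")
print(response.content)
```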
## LangChain’s ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
To improve your LLM application development, pair LangChain with:
- LangSmith - Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- LangGraph - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows — and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- LangGraph Platform - Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in LangGraph Studio.
## Additional resources
- Tutorials: Simple walkthroughs with guided examples on getting started with LangChain.
- How-to Guides: Quick, actionable code snippets for topics such as tool calling, RAG use cases, and more.
- Conceptual Guides: Explanations of key concepts behind the LangChain framework.
- LangChain Forum: Connect with the community and share all of your technical questions, ideas, and feedback.
- API Reference: Detailed reference on navigating base packages and integrations for LangChain.