Shreyansh Singh Gautam 2ef23882d2 fix(core): add tool_call_id to on_tool_error event data (#33731)
# Add `tool_call_id` to `on_tool_error` event data

## Summary

This PR addresses issue #33597 by adding `tool_call_id` to the
`on_tool_error` callback event data. This enables users to link tool
errors to specific tool calls in stateless agent implementations, which
is essential for building OpenAI-compatible APIs and tracking tool
execution flows.

## Problem

When streaming events using `astream_events` with `version="v2"`, the
`on_tool_error` event only included the error and input data, but lacked
the `tool_call_id`. This made it difficult to:

- Link errors to specific tool calls in stateless agent scenarios
- Implement OpenAI-compatible APIs that require tool call tracking
- Track tool execution flows when `run_id` alone is not sufficient
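
For illustration, an `on_tool_error` event previously looked roughly like this (all values here are placeholders, not actual output):

```python
# Rough shape of an on_tool_error event before this change (placeholder values):
event = {
    "event": "on_tool_error",
    "name": "my_tool",  # hypothetical tool name
    "run_id": "1a2b3c...",  # unique per run, but not known to the caller ahead of time
    "data": {
        "error": ValueError("boom"),
        "input": {"query": "hi"},
        # no "tool_call_id": nothing to join the error back to the
        # AIMessage tool call that requested this run
    },
}
```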

## Solution

The fix adds `tool_call_id` propagation through the callback chain:

1. **Pass `tool_call_id` to callbacks**: Updated `BaseTool.run()` and
`BaseTool.arun()` to pass `tool_call_id` to both `on_tool_start` and
`on_tool_error` callbacks
2. **Store in event stream handler**: Modified
`_AstreamEventsCallbackHandler` to store `tool_call_id` in run info
during `on_tool_start`
3. **Include in error events**: Updated `on_tool_error` handler to
extract and include `tool_call_id` in the event data
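
As a rough illustration of the bookkeeping in steps 2 and 3, here is a minimal custom handler doing the same thing (the class and attribute names are invented for this sketch; they are not the actual `_AstreamEventsCallbackHandler` internals):

```python
from typing import Any
from uuid import UUID

from langchain_core.callbacks import AsyncCallbackHandler


class ToolCallIdTracker(AsyncCallbackHandler):
    """Illustrative handler mirroring the event stream handler's bookkeeping."""

    def __init__(self) -> None:
        self.tool_call_ids: dict[UUID, str | None] = {}

    async def on_tool_start(
        self,
        serialized: dict[str, Any],
        input_str: str,
        *,
        run_id: UUID,
        **kwargs: Any,
    ) -> None:
        # After this PR, BaseTool.run()/arun() forward tool_call_id as a kwarg.
        self.tool_call_ids[run_id] = kwargs.get("tool_call_id")

    async def on_tool_error(
        self,
        error: BaseException,
        *,
        run_id: UUID,
        **kwargs: Any,
    ) -> None:
        # Prefer the kwarg; fall back to what was stored at on_tool_start.
        tool_call_id = kwargs.get("tool_call_id") or self.tool_call_ids.get(run_id)
        print(f"tool call {tool_call_id!r} failed: {error}")
```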

## Changes

- **`libs/core/langchain_core/tools/base.py`**:
  - Pass `tool_call_id` to `on_tool_start` in both sync and async methods
  - Pass `tool_call_id` to `on_tool_error` when errors occur

- **`libs/core/langchain_core/tracers/event_stream.py`**:
  - Store `tool_call_id` in run info during `on_tool_start`
  - Extract `tool_call_id` from kwargs or run info in `on_tool_error`
  - Include `tool_call_id` in the `on_tool_error` event data

## Testing

The fix was verified by:

1. Direct tool invocation: Confirmed `tool_call_id` appears in
`on_tool_error` event data when calling tools directly
2. Agent integration: Tested with `create_agent` to ensure
`tool_call_id` is present in error events during agent execution

```python
# Example verification: stream agent events and confirm tool errors
# now carry the id of the tool call that produced them
async for event in agent.astream_events(
    {"messages": [("user", "Please demonstrate a tool error")]},
    version="v2",
):
    if event["event"] == "on_tool_error":
        assert "tool_call_id" in event["data"]  # ✓ now passes
        print(event["data"]["tool_call_id"])
```
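
And a sketch of the first check, invoking a failing tool directly with a `ToolCall`-shaped input so the id has a source (the tool name and id below are made up for illustration):

```python
import asyncio

from langchain_core.tools import tool


@tool
def always_fails(query: str) -> str:
    """Raise so that on_tool_error fires."""
    raise ValueError("boom")


async def check() -> None:
    tool_call = {  # ToolCall-shaped input carries the id into the run
        "name": "always_fails",
        "args": {"query": "hi"},
        "id": "call_123",
        "type": "tool_call",
    }
    try:
        async for event in always_fails.astream_events(tool_call, version="v2"):
            if event["event"] == "on_tool_error":
                print(event["data"]["tool_call_id"])  # -> "call_123"
    except ValueError:
        pass  # the tool's exception still propagates after the event is emitted


asyncio.run(check())
```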

## Backward Compatibility

- Fully backward compatible: `tool_call_id` is optional (can be `None`)
- No breaking changes: all changes are additive
- Existing code continues to work without modification
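
Because the new value travels through `**kwargs`, handlers written before this change keep working unmodified; a sketch:

```python
from typing import Any
from uuid import UUID

from langchain_core.callbacks import BaseCallbackHandler


class LegacyErrorLogger(BaseCallbackHandler):
    # Written before this change: the new tool_call_id kwarg is simply
    # absorbed by **kwargs, so no signature update is needed.
    def on_tool_error(self, error: BaseException, *, run_id: UUID, **kwargs: Any) -> None:
        print(f"tool run {run_id} failed: {error}")
```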

## Related Issues

Fixes #33597

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>

The platform for reliable agents.


LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development, all while future-proofing decisions as the underlying technology evolves.

```bash
pip install langchain
```

If you're looking for more advanced customization or agent orchestration, check out LangGraph, our framework for building controllable agent workflows.


Documentation:

Discussions: Visit the LangChain Forum to connect with the community and share your technical questions, ideas, and feedback.

> [!NOTE]
> Looking for the JS/TS library? Check out LangChain.js.

## Why use LangChain?

LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.

Use LangChain for:

- **Real-time data augmentation.** Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- **Model interoperability.** Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly: LangChain's abstractions keep you moving without losing momentum.
- **Rapid prototyping.** Quickly build and iterate on LLM applications with LangChain's modular, component-based architecture. Test different approaches and workflows without rebuilding from scratch, accelerating your development cycle.
- **Production-ready features.** Deploy reliable applications with built-in support for monitoring, evaluation, and debugging through integrations like LangSmith. Scale with confidence using battle-tested patterns and best practices.
- **Vibrant community and ecosystem.** Leverage a rich ecosystem of integrations, templates, and community-contributed components. Benefit from continuous improvements and stay up to date with the latest AI developments through an active open-source community.
- **Flexible abstraction layers.** Work at the level of abstraction that suits your needs: from high-level chains for quick starts to low-level components for fine-grained control. LangChain grows with your application's complexity.

## LangChain ecosystem

While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.

To improve your LLM application development, pair LangChain with:

- **LangGraph**: Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows, and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- **Integrations**: A list of LangChain integrations, including chat & embedding models, tools & toolkits, and more.
- **LangSmith**: Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- **LangSmith Deployment**: Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams, and iterate quickly with visual prototyping in LangSmith Studio.
- **Deep Agents** (new!): Build agents that can plan, use subagents, and leverage file systems for complex tasks.

## Additional resources

- **API Reference**: Detailed reference on navigating base packages and integrations for LangChain.
- **Contributing Guide**: Learn how to contribute to LangChain projects and find good first issues.
- **Code of Conduct**: Our community guidelines and standards for participation.
- **LangChain Academy**: Comprehensive, free courses on LangChain libraries and products, made by the LangChain team.