`_TracerCore._create_tool_run` was discarding the structured `inputs`
dict that `BaseTool.run` passes to `on_tool_start`, replacing it with
`{"input": str(filtered_tool_input)}`. Consequently, every multi-arg
tool (e.g. the `deepagents` tools `execute`, `edit_file`, `write_file`,
`grep`, ...) appeared in LangSmith as a stringified, escaped dump of its
arguments: multi-line bash commands rendered with literal `\n` escapes
and were effectively unreadable. Chain runs already preserve dicts via
`_get_chain_inputs`; tool runs are now symmetric.
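To make the symptom concrete, here is a small illustration (the
`command`/`timeout` payload is invented for the example):

```python
# What BaseTool.run hands to on_tool_start for a multi-arg tool:
inputs = {"command": "set -e\nmake build\nmake test", "timeout": 120}

# Before: _create_tool_run kept only a stringified dump, burying the
# arguments inside one opaque string with escaped newlines.
before = {"input": str(inputs)}
print(before["input"])
# {'command': 'set -e\nmake build\nmake test', 'timeout': 120}

# After: the structured dict survives onto the Run, so each argument
# stays a separate field and multi-line strings keep real newlines.
after = inputs
print(after["command"])  # prints the command on three real lines
```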
## Changes
- Preserve `inputs` when it is already a `dict` in the `original` /
`original+chat` branch of `_TracerCore._create_tool_run`, falling back
to `{"input": input_str}` only when no structured payload was provided
(see the sketch after this list)
- Add regression tests in the sync and async base-tracer suites that
pass a structured `inputs` to `on_tool_start` and assert the dict
survives onto the resulting `Run`
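A minimal sketch of the fixed branch, pulled out into a standalone
helper for illustration (`_tool_run_inputs` is a hypothetical name; the
real logic lives inline in `_TracerCore._create_tool_run`):

```python
from typing import Any, Optional


def _tool_run_inputs(
    input_str: str, inputs: Optional[dict[str, Any]]
) -> dict[str, Any]:
    """Preserve a structured payload when on_tool_start provided one;
    fall back to the legacy stringified form otherwise."""
    if isinstance(inputs, dict):
        return inputs
    return {"input": input_str}


# Multi-arg tool: the dict survives onto the Run.
assert _tool_run_inputs("{'path': 'a.py'}", {"path": "a.py"}) == {"path": "a.py"}
# No structured payload: legacy behavior is unchanged.
assert _tool_run_inputs("raw text", None) == {"input": "raw text"}
```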
## Breaking change
Custom `BaseTracer` subclasses that parsed `Run.inputs["input"]` as a
stringified dict for tool runs must now read the structured fields
directly. The new shape matches what `on_tool_start(inputs=...)` has
always received (the parameter was introduced alongside `_schema_format`
in the `astream_events` work) and what consumers of the
`streaming_events` schema format already see.
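As a hedged migration sketch (`MyTracer` and the `command` field are
illustrative, not part of the library):

```python
from langchain_core.tracers.base import BaseTracer
from langchain_core.tracers.schemas import Run


class MyTracer(BaseTracer):
    def _persist_run(self, run: Run) -> None:
        if run.run_type != "tool":
            return
        # Before this change, tool inputs arrived pre-stringified, e.g.:
        #   args = ast.literal_eval(run.inputs["input"])
        # Now the structured fields are available directly:
        args = run.inputs
        print(args.get("command"))
```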
## 🦜🍎️ LangChain Core
Looking for the JS/TS version? Check out LangChain.js.
To help you ship LangChain apps to production faster, check out LangSmith. LangSmith is a unified developer platform for building, testing, and monitoring LLM applications.
### Quick Install

```bash
pip install langchain-core
```
### 🤔 What is this?
LangChain Core contains the base abstractions that power the LangChain ecosystem.
These abstractions are designed to be as modular and simple as possible.
The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.
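For instance, anything implementing the `Runnable` interface composes
with the rest of the ecosystem; a quick sketch:

```python
from langchain_core.runnables import RunnableLambda

# Any callable can be lifted into a Runnable and then composed, batched,
# streamed, and traced like a provider-backed component.
shout = RunnableLambda(lambda text: text.upper())
exclaim = RunnableLambda(lambda text: f"{text}!")
pipeline = shout | exclaim

print(pipeline.invoke("hello"))  # HELLO!
```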
### ⛰️ Why build on top of LangChain Core?
The LangChain ecosystem is built on top of langchain-core. Some of the benefits:
- Modularity: We've designed Core around abstractions that are independent of each other, and not tied to any specific model provider.
- Stability: We are committed to a stable versioning scheme, and will communicate any breaking changes with advance notice and version bumps.
- Battle-tested: Core components have the largest install base in the LLM ecosystem, and are used in production by many companies.
### 📖 Documentation
For full documentation, see the API reference. For conceptual guides, tutorials, and examples on using LangChain, see the LangChain Docs. You can also chat with the docs using Chat LangChain.
### 📕 Releases & Versioning
See our Releases and Versioning policies.
### 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see the Contributing Guide.