Sydney Runkle 3ab8cc7ada feat(langchain): support for parallel (or interrupted) tool calls and structured output (#32980)
This enables parallel tool calling w/ a combo of
1. Standard and structured response tool calls
2. Deferred (requiring human approval / edits) tool calls and structured
response tool calls

This is hard to unit test end to end w/ HITL right now, so here's a repro of
things working as an integration test:

```py
from pydantic import BaseModel, Field
from langchain_core.tools import tool
from langchain_core.messages import HumanMessage
from langgraph.types import Command
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents.middleware_agent import create_agent
from langchain.agents.middleware.human_in_the_loop import HumanInTheLoopMiddleware
from langchain_openai import ChatOpenAI


class WeatherBaseModel(BaseModel):
    temperature: float = Field(description="Temperature in fahrenheit")
    condition: str = Field(description="Weather condition")


@tool
def add_numbers(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
checkpointer = InMemorySaver()

agent = create_agent(
    model=model,
    tools=[add_numbers],
    response_format=WeatherBaseModel,
    middleware=[HumanInTheLoopMiddleware(tool_configs={"add_numbers": True})],
)
agent = agent.compile(checkpointer=checkpointer)

# First invocation should be interrupted due to human-in-the-loop middleware
response = agent.invoke(
    {
        "messages": [
            HumanMessage(
                "Add 1 and 2, then return the weather forecast with temperature 72 and condition sunny."
            )
        ]
    },
    config={"configurable": {"thread_id": "1"}},
)
interrupt_description = response["__interrupt__"][0].value[0]["description"]
print(interrupt_description)
"""
Tool execution requires approval

Tool: add_numbers
Args: {'a': 1, 'b': 2}
"""

# Resume the agent with approval
response = agent.invoke(
    Command(resume=[{"type": "approve"}]), config={"configurable": {"thread_id": "1"}}
)

for msg in response["messages"]:
    msg.pretty_print()

"""
================================ Human Message =================================

Add 1 and 2, then return the weather forecast with temperature 72 and condition sunny.
================================== Ai Message ==================================
Tool Calls:
  WeatherBaseModel (call_u6nXsEYRJbqNx4AEHgiQMpE2)
 Call ID: call_u6nXsEYRJbqNx4AEHgiQMpE2
  Args:
    temperature: 72
    condition: sunny
  add_numbers (call_nuQEZF7PwfYDlVpnSt8eaInI)
 Call ID: call_nuQEZF7PwfYDlVpnSt8eaInI
  Args:
    a: 1
    b: 2
================================= Tool Message =================================
Name: WeatherBaseModel

Returning structured response: temperature=72.0 condition='sunny'
================================= Tool Message =================================
Name: add_numbers

3
"""

print(repr(response["response"]))
"""
WeatherBaseModel(temperature=72.0, condition='sunny')
"""

```
2025-09-17 10:23:26 -04:00


Note

Looking for the JS/TS library? Check out LangChain.js.

LangChain is a framework for building LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.

```shell
pip install -U langchain
```

To learn more about LangChain, check out the docs. If you're looking for more advanced customization or agent orchestration, check out LangGraph, our framework for building controllable agent workflows.

Why use LangChain?

LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.

Use LangChain for:

  • Real-time data augmentation. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more.
  • Model interoperability. Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly — LangChain's abstractions keep you moving without losing momentum.

LangChain's ecosystem

While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.

To improve your LLM application development, pair LangChain with:

  • LangSmith - Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
  • LangGraph - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows — and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
  • LangGraph Platform - Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in LangGraph Studio.

Additional resources

  • Tutorials: Simple walkthroughs with guided examples on getting started with LangChain.
  • How-to Guides: Quick, actionable code snippets for topics such as tool calling, RAG use cases, and more.
  • Conceptual Guides: Explanations of key concepts behind the LangChain framework.
  • LangChain Forum: Connect with the community and share all of your technical questions, ideas, and feedback.
  • API Reference: Detailed reference on navigating base packages and integrations for LangChain.
  • Chat LangChain: Ask questions & chat with our documentation.