# Adding `create_react_agent` and introducing `langchain.agents`!

## Enhanced Structured Output

`create_react_agent` supports coercion of outputs to structured data types like `pydantic` models, dataclasses, typed dicts, or JSON schema specifications.

### Structural Changes

In langgraph < 1.0, `create_react_agent` implemented support for structured output via an additional LLM call to the model after the standard model / tool calling loop finished. This introduced extra expense and was unnecessary.

This new version implements structured output support in the main loop, allowing a model to choose between calling tools or generating structured output (or both). The same basic pattern for structured output generation works:

```py
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent("openai:gpt-4o-mini", tools=[weather_tool], response_format=Weather)
result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```

### Advanced Configuration

The new API exposes two ways to configure how structured output is generated. Under the hood, LangChain will attempt to pick the best approach if not explicitly specified: if provider-native support is available for a given model, it takes priority over artificial tool calling.

1. Artificial tool calling (the default for most models)

   LangChain generates a tool (or tools) under the hood that match the schema of your response format. When the model calls those tools, LangChain coerces the args to the desired format. Note that LangChain does not validate outputs against JSON schema specifications.

<details>
<summary>Extended example</summary>

```py
from langchain.agents import create_react_agent
from langchain.agents.structured_output import ToolStrategy
from langchain_core.messages import HumanMessage
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ToolStrategy(
        schema=Weather,
        tool_message_content="Final Weather result generated",
    ),
)
result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
for message in result["messages"]:
    message.pretty_print()
"""
================================ Human Message =================================

What's the weather in Tokyo?
================================== Ai Message ==================================
Tool Calls:
  weather_tool (call_Gg933BMHMwck50Q39dtBjXm7)
 Call ID: call_Gg933BMHMwck50Q39dtBjXm7
  Args:
    city: Tokyo
================================= Tool Message =================================
Name: weather_tool

it's sunny and 70 degrees in Tokyo
================================== Ai Message ==================================
Tool Calls:
  Weather (call_9xOkYUM7PuEXl9DQq9sWGv5l)
 Call ID: call_9xOkYUM7PuEXl9DQq9sWGv5l
  Args:
    temperature: 70
    condition: sunny
================================= Tool Message =================================
Name: Weather

Final Weather result generated
"""
print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```

Note that the final tool message carries the custom content provided by the dev via `tool_message_content`.

</details>
2. Provider implementations (limited to OpenAI, Groq)

   Some providers support generating structured output directly. For those cases, we offer the `ProviderStrategy` hint:

<details>
<summary>Extended example</summary>

```py
from langchain.agents import create_react_agent
from langchain.agents.structured_output import ProviderStrategy
from langchain_core.messages import HumanMessage
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ProviderStrategy(Weather),
)
result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
for message in result["messages"]:
    message.pretty_print()
"""
================================ Human Message =================================

What's the weather in Tokyo?
================================== Ai Message ==================================
Tool Calls:
  weather_tool (call_OFJq1FngIXS6cvjWv5nfSFZp)
 Call ID: call_OFJq1FngIXS6cvjWv5nfSFZp
  Args:
    city: Tokyo
================================= Tool Message =================================
Name: weather_tool

it's sunny and 70 degrees in Tokyo
================================== Ai Message ==================================

{"temperature":70,"condition":"sunny"}
"""
print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```

</details>

Prompted output, which was previously supported, is no longer available via the `response_format` argument to `create_react_agent`. If there's significant demand for this, we'd be happy to engineer a solution.

## Error Handling

`create_react_agent` now exposes an API for managing errors associated with structured output generation. There are two common problems with structured output generation (with artificial tool calling):

1. **Parsing error** -- the model generates data that doesn't match the desired structure for the output
2. **Multiple tool calls error** -- the model generates two or more tool calls associated with structured output schemas

A developer can control the desired behavior for this via the `handle_errors` arg to `ToolStrategy`.

<details>
<summary>Extended example</summary>

```py
from langchain_core.messages import HumanMessage
from pydantic import BaseModel

from langchain.agents import create_react_agent
from langchain.agents.structured_output import StructuredOutputValidationError, ToolStrategy


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"


def handle_validation_error(error: Exception) -> str:
    if isinstance(error, StructuredOutputValidationError):
        return (
            f"Please call the {error.tool_name} call again with the correct arguments. "
            f"Your mistake was: {error.source}"
        )
    raise error


agent = create_react_agent(
    "openai:gpt-5",
    tools=[weather_tool],
    response_format=ToolStrategy(
        schema=Weather,
        handle_errors=handle_validation_error,
    ),
)
```

</details>

## Error Handling for Tool Calling

Tools fail for two main reasons:

1. **Invocation failure** -- the args generated by the model for the tool are incorrect (missing, incompatible data types, etc.)
2. **Execution failure** -- the tool execution itself fails due to a developer error, a network error, or some other exception.

By default, when tool **invocation** fails, the react agent will return an artificial `ToolMessage` to the model asking it to correct its mistakes and retry. Now, when tool **execution** fails, the react agent raises the `ToolException` by default instead of asking the model to retry. This avoids retry loops on failures the model cannot correct. Developers can configure their desired behavior for retries / error handling via the `handle_tool_errors` arg to `ToolNode`, as sketched below.
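To make that configuration concrete, here is a minimal sketch. It assumes `handle_tool_errors` accepts a callable that receives the raised exception and returns the `ToolMessage` content (mirroring langgraph's prebuilt `ToolNode`), and that `create_react_agent` accepts a `ToolNode` for `tools`; treat both as assumptions rather than the final API:

```py
from langchain.agents import ToolNode, create_react_agent


def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    raise TimeoutError("weather service unreachable")


def handle_tool_error(error: Exception) -> str:
    # Coach the model to retry only on transient failures; re-raise anything else.
    if isinstance(error, TimeoutError):
        return f"Tool call failed ({error}). Please try again."
    raise error


agent = create_react_agent(
    "openai:gpt-4o-mini",
    # Wrapping tools in a ToolNode lets us override the default behavior of
    # raising on execution failure.
    tools=ToolNode([weather_tool], handle_tool_errors=handle_tool_error),
)
```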
## Pre-Bound Models

`create_react_agent` no longer supports inputs to `model` that have been pre-bound with tools or other configuration. To properly support structured output generation, the agent itself needs the power to bind tools + structured output kwargs. This also makes the devx cleaner: it's always expected that `model` is an instance of `BaseChatModel` (or a `str` that we coerce into a chat model instance).

Dynamic model functions can return a pre-bound model **IF** structured output is not also used. Dynamic model functions can then bind tools / structured output logic themselves, as in the sketch below.
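A minimal sketch of a dynamic model function that binds tools itself. The hook signature shown (a callable receiving the agent state) and the use of `init_chat_model` are illustrative assumptions, not the exact API from this PR:

```py
from langchain.agents import create_react_agent
from langchain.chat_models import init_chat_model


def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"


def select_model(state):
    # Hypothetical dynamic model function: pick a model (possibly based on
    # the conversation so far) and bind the tools ourselves. Returning a
    # pre-bound model like this is only valid when structured output is
    # not also requested.
    model = init_chat_model("openai:gpt-4o-mini")
    return model.bind_tools([weather_tool])


agent = create_react_agent(select_model, tools=[weather_tool])
```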
## Import Changes

Users should now import `create_react_agent` from `langchain.agents` instead of `langgraph.prebuilt` (see the sketch below). Other imports have a similar migration path, `ToolNode` and `AgentState` for example.

* `chat_agent_executor.py` -> `react_agent.py`
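A before/after sketch of the migration; the `langchain.agents` locations for `ToolNode` and `AgentState` are inferred from the description above:

```py
# Before (langgraph < 1.0)
# from langgraph.prebuilt import create_react_agent, ToolNode
# from langgraph.prebuilt.chat_agent_executor import AgentState

# After (langchain.agents)
from langchain.agents import AgentState, ToolNode, create_react_agent
```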
Some notes:

1. Disabled blockbuster + some linting in `langchain/agents` -- not ideal, but necessary to get this across the line for the alpha. We should re-enable them before the official release.
Makefile
```make
.PHONY: all clean docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck format lint test tests test_watch integration_tests help extended_tests start_services stop_services

# Default target executed when no arguments are given to make.
all: help

######################
# TESTING AND COVERAGE
######################

start_services:
	docker compose -f tests/unit_tests/agents/compose-postgres.yml -f tests/unit_tests/agents/compose-redis.yml up -V --force-recreate --wait --remove-orphans

stop_services:
	docker compose -f tests/unit_tests/agents/compose-postgres.yml -f tests/unit_tests/agents/compose-redis.yml down -v

# Define a variable for the test file path.
TEST_FILE ?= tests/unit_tests/

.EXPORT_ALL_VARIABLES:
UV_FROZEN = true

# Run unit tests and generate a coverage report.
coverage:
	uv run --group test pytest --cov \
		--cov-config=.coveragerc \
		--cov-report xml \
		--cov-report term-missing:skip-covered \
		$(TEST_FILE)

test:
	make start_services && LANGGRAPH_TEST_FAST=0 uv run --group test pytest -n auto --disable-socket --allow-unix-socket $(TEST_FILE) --cov-report term-missing:skip-covered; \
	EXIT_CODE=$$?; \
	make stop_services; \
	exit $$EXIT_CODE

test_fast:
	LANGGRAPH_TEST_FAST=1 uv run --group test pytest -n auto --disable-socket --allow-unix-socket $(TEST_FILE)

extended_tests:
	make start_services && LANGGRAPH_TEST_FAST=0 uv run --group test pytest --disable-socket --allow-unix-socket --only-extended tests/unit_tests; \
	EXIT_CODE=$$?; \
	make stop_services; \
	exit $$EXIT_CODE

test_watch:
	make start_services && LANGGRAPH_TEST_FAST=0 uv run --group test ptw --snapshot-update --now . -- -x --disable-socket --allow-unix-socket --disable-warnings tests/unit_tests; \
	EXIT_CODE=$$?; \
	make stop_services; \
	exit $$EXIT_CODE

test_watch_extended:
	make start_services && LANGGRAPH_TEST_FAST=0 uv run --group test ptw --snapshot-update --now . -- -x --disable-socket --allow-unix-socket --only-extended tests/unit_tests; \
	EXIT_CODE=$$?; \
	make stop_services; \
	exit $$EXIT_CODE

integration_tests:
	uv run --group test --group test_integration pytest tests/integration_tests

check_imports: $(shell find langchain -name '*.py')
	uv run python ./scripts/check_imports.py $^

######################
# LINTING AND FORMATTING
######################

# Define a variable for Python and notebook files.
PYTHON_FILES=.
MYPY_CACHE=.mypy_cache
lint format: PYTHON_FILES=.
lint_diff format_diff: PYTHON_FILES=$(shell git diff --relative=libs/langchain --name-only --diff-filter=d master | grep -E '\.py$$|\.ipynb$$')
lint_package: PYTHON_FILES=langchain
lint_tests: PYTHON_FILES=tests
lint_tests: MYPY_CACHE=.mypy_cache_test

lint lint_diff lint_package lint_tests:
	[ "$(PYTHON_FILES)" = "" ] || uv run --all-groups ruff check $(PYTHON_FILES)
	[ "$(PYTHON_FILES)" = "" ] || uv run --all-groups ruff format $(PYTHON_FILES) --diff
	[ "$(PYTHON_FILES)" = "" ] || mkdir -p $(MYPY_CACHE) && uv run --all-groups mypy $(PYTHON_FILES) --cache-dir $(MYPY_CACHE)

format format_diff:
	[ "$(PYTHON_FILES)" = "" ] || uv run --all-groups ruff format $(PYTHON_FILES)
	[ "$(PYTHON_FILES)" = "" ] || uv run --all-groups ruff check --fix $(PYTHON_FILES)

spell_check:
	uv run --all-groups codespell --toml pyproject.toml

spell_fix:
	uv run --all-groups codespell --toml pyproject.toml -w

######################
# HELP
######################

help:
	@echo '===================='
	@echo 'clean                        - run docs_clean and api_docs_clean'
	@echo 'docs_build                   - build the documentation'
	@echo 'docs_clean                   - clean the documentation build artifacts'
	@echo 'docs_linkcheck               - run linkchecker on the documentation'
	@echo 'api_docs_build               - build the API Reference documentation'
	@echo 'api_docs_clean               - clean the API Reference documentation build artifacts'
	@echo 'api_docs_linkcheck           - run linkchecker on the API Reference documentation'
	@echo '-- LINTING --'
	@echo 'format                       - run code formatters'
	@echo 'lint                         - run linters'
	@echo 'spell_check                  - run codespell on the project'
	@echo 'spell_fix                    - run codespell on the project and fix the errors'
	@echo '-- TESTS --'
	@echo 'coverage                     - run unit tests and generate coverage report'
	@echo 'test                         - run unit tests with all services'
	@echo 'test_fast                    - run unit tests with in-memory services only'
	@echo 'tests                        - run unit tests (alias for "make test")'
	@echo 'test TEST_FILE=<test_file>   - run all tests in file'
	@echo 'extended_tests               - run only extended unit tests'
	@echo 'test_watch                   - run unit tests in watch mode'
	@echo 'integration_tests            - run integration tests'
	@echo '-- DOCUMENTATION tasks are from the top-level Makefile --'
```