From d206df8d3d2e7809c9e97e6e7cf9964173437087 Mon Sep 17 00:00:00 2001
From: Vadym Barda
Date: Wed, 3 Jul 2024 15:25:11 -0400
Subject: [PATCH] docs: improve structure in the agent migration to langgraph guide (#23817)

---
 docs/docs/how_to/migrate_agent.ipynb | 52 +++++++++++++++++++++++-----
 1 file changed, 43 insertions(+), 9 deletions(-)

diff --git a/docs/docs/how_to/migrate_agent.ipynb b/docs/docs/how_to/migrate_agent.ipynb
index 05b79436b2b..f43a8936d0c 100644
--- a/docs/docs/how_to/migrate_agent.ipynb
+++ b/docs/docs/how_to/migrate_agent.ipynb
@@ -351,7 +351,15 @@
    "id": "68df3a09",
    "metadata": {},
    "source": [
-    "## Memory\n",
+    "## Memory"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "96e7ffc8",
+   "metadata": {},
+   "source": [
+    "### In LangChain\n",
     "\n",
     "With LangChain's [AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.iter), you could add chat [Memory](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.memory) so it can engage in a multi-turn conversation."
    ]
@@ -439,7 +447,7 @@
    "id": "c2a5a32f",
    "metadata": {},
    "source": [
-    "#### In LangGraph\n",
+    "### In LangGraph\n",
     "\n",
     "Memory is just [persistence](https://langchain-ai.github.io/langgraph/how-tos/persistence/), aka [checkpointing](https://langchain-ai.github.io/langgraph/reference/checkpoints/).\n",
     "\n",
@@ -510,6 +518,8 @@
    "source": [
     "## Iterating through steps\n",
     "\n",
+    "### In LangChain\n",
+    "\n",
     "With LangChain's [AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.iter), you could iterate over the steps using the [stream](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.stream) (or async `astream`) methods or the [iter](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.iter) method. LangGraph supports stepwise iteration using [stream](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.stream) "
    ]
   },
@@ -568,7 +578,7 @@
    "id": "46ccbcbf",
    "metadata": {},
    "source": [
-    "#### In LangGraph\n",
+    "### In LangGraph\n",
     "\n",
     "In LangGraph, things are handled natively using [stream](https://langchain-ai.github.io/langgraph/reference/graphs/#langgraph.graph.graph.CompiledGraph.stream) or the asynchronous `astream` method."
    ]
@@ -619,6 +629,8 @@
    "source": [
     "## `return_intermediate_steps`\n",
     "\n",
+    "### In LangChain\n",
+    "\n",
     "Setting this parameter on AgentExecutor allows users to access intermediate_steps, which pairs agent actions (e.g., tool invocations) with their outcomes.\n"
    ]
   },
@@ -647,6 +659,8 @@
    "id": "594f7567-302f-4fa8-85bb-025ac8322162",
    "metadata": {},
    "source": [
+    "### In LangGraph\n",
+    "\n",
     "By default the [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) in LangGraph appends all messages to the central state. Therefore, it is easy to see any intermediate steps by just looking at the full state."
    ]
   },
@@ -687,11 +701,9 @@
    "source": [
     "## `max_iterations`\n",
     "\n",
-    "`AgentExecutor` implements a `max_iterations` parameter, whereas this is controlled via `recursion_limit` in LangGraph.\n",
+    "### In LangChain\n",
     "\n",
-    "Note that in AgentExecutor, an \"iteration\" includes a full turn of tool invocation and execution. In LangGraph, each step contributes to the recursion limit, so we will need to multiply by two (and add one) to get equivalent results.\n",
-    "\n",
-    "If the recursion limit is reached, LangGraph raises a specific exception type, that we can catch and manage similarly to AgentExecutor."
+    "`AgentExecutor` implements a `max_iterations` parameter, allowing users to abort a run that exceeds a specified number of iterations."
    ]
   },
   {
@@ -769,6 +781,20 @@
     "agent_executor.invoke({\"input\": query})"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "dd3a933f",
+   "metadata": {},
+   "source": [
+    "### In LangGraph\n",
+    "\n",
+    "In LangGraph, this is controlled via the `recursion_limit` configuration parameter.\n",
+    "\n",
+    "Note that in `AgentExecutor`, an \"iteration\" includes a full turn of tool invocation and execution. In LangGraph, each step contributes to the recursion limit, so we will need to multiply by two (and add one) to get equivalent results.\n",
+    "\n",
+    "If the recursion limit is reached, LangGraph raises a specific exception type that we can catch and manage similarly to `AgentExecutor`."
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": 16,
@@ -814,6 +840,8 @@
    "source": [
     "## `max_execution_time`\n",
     "\n",
+    "### In LangChain\n",
+    "\n",
     "`AgentExecutor` implements a `max_execution_time` parameter, allowing users to abort a run that exceeds a total time limit."
    ]
   },
@@ -880,6 +908,8 @@
    "id": "d02eb025",
    "metadata": {},
    "source": [
+    "### In LangGraph\n",
+    "\n",
     "With LangGraph's react agent, you can control timeouts on two levels. \n",
     "\n",
     "You can set a `step_timeout` to bound each **step**:"
@@ -968,6 +998,8 @@
    "source": [
     "## `early_stopping_method`\n",
     "\n",
+    "### In LangChain\n",
+    "\n",
     "With LangChain's [AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.iter), you could configure an [early_stopping_method](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.early_stopping_method) to either return a string saying \"Agent stopped due to iteration limit or time limit.\" (`\"force\"`) or prompt the LLM a final time to respond (`\"generate\"`)."
    ]
   },
@@ -1028,7 +1060,7 @@
    "id": "706e05c4",
    "metadata": {},
    "source": [
-    "#### In LangGraph\n",
+    "### In LangGraph\n",
     "\n",
     "In LangGraph, you can explicitly handle the response behavior outside the agent, since the full state can be accessed."
    ]
@@ -1077,6 +1109,8 @@
    "source": [
     "## `trim_intermediate_steps`\n",
     "\n",
+    "### In LangChain\n",
+    "\n",
     "With LangChain's [AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor), you could trim the intermediate steps of long-running agents using [trim_intermediate_steps](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.trim_intermediate_steps), which is either an integer (indicating the agent should keep the last N steps) or a custom function.\n",
     "\n",
     "For instance, we could trim the value so the agent only sees the most recent intermediate step."
@@ -1180,7 +1214,7 @@
    "id": "3d450c5a",
    "metadata": {},
    "source": [
-    "#### In LangGraph\n",
+    "### In LangGraph\n",
     "\n",
     "We can use the [`messages_modifier`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) just as before when passing in [prompt templates](#prompt-templates)."
    ]