@@ -32,11 +32,11 @@
 "\n",
 "Here we focus on how to move from legacy LangChain agents to more flexible [LangGraph](https://langchain-ai.github.io/langgraph/) agents.\n",
 "LangChain agents (the [AgentExecutor](https://python.langchain.com/api_reference/langchain/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor) in particular) have multiple configuration parameters.\n",
-"In this notebook we will show how those parameters map to the LangGraph react agent executor using the [create_react_agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) prebuilt helper method.\n",
+"In this notebook we will show how those parameters map to the LangGraph react agent executor using the [create_react_agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent) prebuilt helper method.\n",
 "\n",
 "\n",
 ":::note\n",
-"In LangGraph, the graph replaces LangChain's agent executor. It manages the agent's cycles and tracks the scratchpad as messages within its state. The LangChain \"agent\" corresponds to the state_modifier and LLM you've provided.\n",
+"In LangGraph, the graph replaces LangChain's agent executor. It manages the agent's cycles and tracks the scratchpad as messages within its state. The LangChain \"agent\" corresponds to the prompt and LLM you've provided.\n",
 ":::\n",
 "\n",
 "\n",
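For orientation, the pattern this diff migrates the guide toward looks roughly like the sketch below. It is not part of the patch; the `magic_function` tool, model choice, and question are illustrative stand-ins for the notebook's earlier setup cells.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


model = ChatOpenAI(model="gpt-4o-mini")
tools = [magic_function]

# The prebuilt graph replaces AgentExecutor; the prompt + model pair plays the role
# of the legacy "agent".
langgraph_agent_executor = create_react_agent(model, tools)
result = langgraph_agent_executor.invoke(
    {"messages": [("user", "what is the value of magic_function(3)?")]}
)
```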
@@ -164,7 +164,7 @@
 "id": "94205f3b-fd2b-4fd7-af69-0a3fc313dc88",
 "metadata": {},
 "source": [
-"LangGraph's [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) manages a state that is defined by a list of messages. It will continue to process the list until there are no tool calls in the agent's output. To kick it off, we input a list of messages. The output will contain the entire state of the graph-- in this case, the conversation history.\n",
+"LangGraph's [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent) manages a state that is defined by a list of messages. It will continue to process the list until there are no tool calls in the agent's output. To kick it off, we input a list of messages. The output will contain the entire state of the graph-- in this case, the conversation history.\n",
 "\n"
 ]
 },
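The message-list state described in that cell behaves roughly as sketched below. This is annotation, not part of the patch, and assumes the `langgraph_agent_executor` and `query` defined elsewhere in the notebook.

```python
result = langgraph_agent_executor.invoke({"messages": [("user", query)]})

# The returned state carries the whole exchange:
# HumanMessage -> AIMessage with tool calls -> ToolMessage(s) -> final AIMessage
for message in result["messages"]:
    print(f"{type(message).__name__}: {message.content}")

final_answer = result["messages"][-1].content
```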
@@ -240,11 +240,12 @@
 "\n",
 "With legacy LangChain agents you have to pass in a prompt template. You can use this to control the agent.\n",
 "\n",
-"With LangGraph [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent), by default there is no prompt. You can achieve similar control over the agent in a few ways:\n",
+"With LangGraph [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent), by default there is no prompt. You can achieve similar control over the agent in a few ways:\n",
 "\n",
 "1. Pass in a system message as input\n",
 "2. Initialize the agent with a system message\n",
-"3. Initialize the agent with a function to transform messages before passing to the model.\n",
+"3. Initialize the agent with a function to transform messages in the graph state before passing to the model.\n",
+"4. Initialize the agent with a [Runnable](/docs/concepts/lcel) to transform messages in the graph state before passing to the model. This includes passing prompt templates as well.\n",
 "\n",
 "Let's take a look at all of these below. We will pass in custom instructions to get the agent to respond in Spanish.\n",
 "\n",
@@ -291,9 +292,9 @@
 "id": "bd5f5500-5ae4-4000-a9fd-8c5a2cc6404d",
 "metadata": {},
 "source": [
-"Now, let's pass a custom system message to [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent).\n",
+"Now, let's pass a custom system message to [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent).\n",
 "\n",
-"LangGraph's prebuilt `create_react_agent` does not take a prompt template directly as a parameter, but instead takes a [`state_modifier`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) parameter. This modifies the graph state before the llm is called, and can be one of four values:\n",
+"LangGraph's prebuilt `create_react_agent` does not take a prompt template directly as a parameter, but instead takes a [`prompt`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent) parameter. This modifies the graph state before the llm is called, and can be one of four values:\n",
 "\n",
 "- A `SystemMessage`, which is added to the beginning of the list of messages.\n",
 "- A `string`, which is converted to a `SystemMessage` and added to the beginning of the list of messages.\n",
@@ -317,9 +318,7 @@
 "# This could also be a SystemMessage object\n",
 "# system_message = SystemMessage(content=\"You are a helpful assistant. Respond only in Spanish.\")\n",
 "\n",
-"langgraph_agent_executor = create_react_agent(\n",
-"    model, tools, state_modifier=system_message\n",
-")\n",
+"langgraph_agent_executor = create_react_agent(model, tools, prompt=system_message)\n",
 "\n",
 "\n",
 "messages = langgraph_agent_executor.invoke({\"messages\": [(\"user\", query)]})"
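Beyond a bare string or `SystemMessage`, the `prompt` parameter also accepts a callable or a `Runnable` over the graph state, as the four-item list earlier in the diff notes. A compact sketch of the four forms (illustrative only, reusing the notebook's `model` and `tools`):

```python
from langchain_core.messages import SystemMessage
from langchain_core.prompts import ChatPromptTemplate
from langgraph.prebuilt import create_react_agent

# 1. A string, converted to a SystemMessage and prepended to the messages
agent = create_react_agent(model, tools, prompt="Respond only in Spanish.")

# 2. A SystemMessage, prepended to the messages
agent = create_react_agent(
    model, tools, prompt=SystemMessage(content="Respond only in Spanish.")
)


# 3. A callable: takes the graph state, returns the messages sent to the model
def spanish_prompt(state):
    return [SystemMessage(content="Respond only in Spanish.")] + state["messages"]


agent = create_react_agent(model, tools, prompt=spanish_prompt)

# 4. A Runnable over the graph state, e.g. a prompt template
template = ChatPromptTemplate.from_messages(
    [("system", "Respond only in Spanish."), ("placeholder", "{messages}")]
)
agent = create_react_agent(model, tools, prompt=template)
```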
@@ -330,8 +329,8 @@
 "id": "fc6059fd-0df7-4b6f-a84c-b5874e983638",
 "metadata": {},
 "source": [
-"We can also pass in an arbitrary function. This function should take in a list of messages and output a list of messages.\n",
-"We can do all types of arbitrary formatting of messages here. In this case, let's just add a SystemMessage to the start of the list of messages."
+"We can also pass in an arbitrary function or a runnable. This function/runnable should take in the graph state and output a list of messages.\n",
+"We can do all types of arbitrary formatting of messages here. In this case, let's add a SystemMessage to the start of the list of messages and append another user message at the end."
 ]
 },
 {
@@ -349,6 +348,7 @@
 }
 ],
 "source": [
+"from langchain_core.messages import HumanMessage, SystemMessage\n",
 "from langgraph.prebuilt import create_react_agent\n",
 "from langgraph.prebuilt.chat_agent_executor import AgentState\n",
 "\n",
@@ -356,19 +356,20 @@
 "    [\n",
 "        (\"system\", \"You are a helpful assistant. Respond only in Spanish.\"),\n",
 "        (\"placeholder\", \"{messages}\"),\n",
+"        (\"user\", \"Also say 'Pandamonium!' after the answer.\"),\n",
 "    ]\n",
 ")\n",
 "\n",
-"\n",
-"def _modify_state_messages(state: AgentState):\n",
-"    return prompt.invoke({\"messages\": state[\"messages\"]}).to_messages() + [\n",
-"        (\"user\", \"Also say 'Pandamonium!' after the answer.\")\n",
-"    ]\n",
+"# alternatively, this can be passed as a function, e.g.\n",
+"# def prompt(state: AgentState):\n",
+"#     return (\n",
+"#         [SystemMessage(content=\"You are a helpful assistant. Respond only in Spanish.\")] +\n",
+"#         state[\"messages\"] +\n",
+"#         [HumanMessage(content=\"Also say 'Pandamonium!' after the answer.\")]\n",
+"#     )\n",
 "\n",
 "\n",
-"langgraph_agent_executor = create_react_agent(\n",
-"    model, tools, state_modifier=_modify_state_messages\n",
-")\n",
+"langgraph_agent_executor = create_react_agent(model, tools, prompt=prompt)\n",
 "\n",
 "\n",
 "messages = langgraph_agent_executor.invoke({\"messages\": [(\"human\", query)]})\n",
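The commented-out alternative in that hunk is never shown being passed in. If uncommented, it would be wired up the same way; a sketch, assuming the notebook's `model`, `tools`, and `query`:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState


def prompt(state: AgentState):
    # Prepend a system instruction and append an extra user instruction
    # around whatever messages are currently in the graph state.
    return (
        [SystemMessage(content="You are a helpful assistant. Respond only in Spanish.")]
        + state["messages"]
        + [HumanMessage(content="Also say 'Pandamonium!' after the answer.")]
    )


langgraph_agent_executor = create_react_agent(model, tools, prompt=prompt)
messages = langgraph_agent_executor.invoke({"messages": [("human", query)]})
```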
@@ -516,7 +517,7 @@
 "\n",
 "memory = MemorySaver()\n",
 "langgraph_agent_executor = create_react_agent(\n",
-"    model, tools, state_modifier=system_message, checkpointer=memory\n",
+"    model, tools, prompt=system_message, checkpointer=memory\n",
 ")\n",
 "\n",
 "config = {\"configurable\": {\"thread_id\": \"test-thread\"}}\n",
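With the checkpointer attached, every call that reuses the same `thread_id` continues the stored conversation. A sketch of how that config is used (the questions are illustrative, not taken from this hunk):

```python
config = {"configurable": {"thread_id": "test-thread"}}

# First turn: the checkpointer records the message history under this thread.
first = langgraph_agent_executor.invoke(
    {"messages": [("user", "Hi, I'm Polly! What's the output of magic_function of 3?")]},
    config,
)
print(first["messages"][-1].content)

# A later turn on the same thread can rely on that stored history.
second = langgraph_agent_executor.invoke(
    {"messages": [("user", "Do you remember my name?")]}, config
)
print(second["messages"][-1].content)
```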
@@ -643,14 +644,7 @@
 "    ]\n",
 ")\n",
 "\n",
-"\n",
-"def _modify_state_messages(state: AgentState):\n",
-"    return prompt.invoke({\"messages\": state[\"messages\"]}).to_messages()\n",
-"\n",
-"\n",
-"langgraph_agent_executor = create_react_agent(\n",
-"    model, tools, state_modifier=_modify_state_messages\n",
-")\n",
+"langgraph_agent_executor = create_react_agent(model, tools, prompt=prompt)\n",
 "\n",
 "for step in langgraph_agent_executor.stream(\n",
 "    {\"messages\": [(\"human\", query)]}, stream_mode=\"updates\"\n",
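With `stream_mode="updates"`, each yielded step is a dict keyed by the node that just ran ("agent" or "tools" in the prebuilt graph), holding only that node's new messages. A sketch of consuming it, assuming the executor above:

```python
for step in langgraph_agent_executor.stream(
    {"messages": [("human", query)]}, stream_mode="updates"
):
    # Each step looks like {"agent": {"messages": [...]}} or {"tools": {"messages": [...]}}
    for node, update in step.items():
        print(node, "->", update["messages"][-1].content)
```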
@@ -697,7 +691,7 @@
 "source": [
 "### In LangGraph\n",
 "\n",
-"By default the [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) in LangGraph appends all messages to the central state. Therefore, it is easy to see any intermediate steps by just looking at the full state."
+"By default the [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent) in LangGraph appends all messages to the central state. Therefore, it is easy to see any intermediate steps by just looking at the full state."
 ]
 },
 {
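In practice, "looking at the full state" means scanning the returned message list for tool-call and tool-result messages, roughly like this (illustrative; assumes the executor and `query` from earlier cells):

```python
result = langgraph_agent_executor.invoke({"messages": [("human", query)]})

for message in result["messages"]:
    # AIMessage.tool_calls records which tool the model invoked and with what
    # arguments; the matching ToolMessage carries the tool's output.
    if getattr(message, "tool_calls", None):
        print("tool calls:", message.tool_calls)
    elif message.type == "tool":
        print("tool result:", message.content)
```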
@@ -1244,7 +1238,7 @@
 "source": [
 "### In LangGraph\n",
 "\n",
-"We can use the [`state_modifier`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) just as before when passing in [prompt templates](#prompt-templates)."
+"We can use the [`prompt`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent) just as before when passing in [prompt templates](#prompt-templates)."
 ]
 },
 {
@@ -1299,7 +1293,7 @@
 "\n",
 "\n",
 "langgraph_agent_executor = create_react_agent(\n",
-"    model, tools, state_modifier=_modify_state_messages\n",
+"    model, tools, prompt=_modify_state_messages\n",
 ")\n",
 "\n",
 "try:\n",
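The trailing `"try:\n"` context line belongs to a cell that guards the invocation. In LangGraph, capping how long an agent can loop is expressed as a recursion limit on the run config rather than an executor parameter; a hedged sketch of that pattern, assuming this is what the surrounding cell does (the limit value is illustrative):

```python
from langgraph.errors import GraphRecursionError

# Illustrative cap: roughly three tool-calling cycles (each cycle is an agent
# step plus a tools step), plus one final step.
RECURSION_LIMIT = 2 * 3 + 1

try:
    for chunk in langgraph_agent_executor.stream(
        {"messages": [("human", query)]},
        {"recursion_limit": RECURSION_LIMIT},
        stream_mode="values",
    ):
        print(chunk["messages"][-1])
except GraphRecursionError:
    print("Agent stopped due to hitting the recursion limit.")
```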