docs: replace 'state_modifier' with 'prompt' (#29415)
parent 2bb2c9bfe8
commit 7cbf885c18
Changed paths:
docs/docs/how_to
docs/docs/integrations/tools
docs/docs/tutorials
docs/docs/versions/migrating_memory
@@ -120,7 +120,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Great! Now let's assemble our agent using LangGraph's prebuilt [create_react_agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent), which allows you to create a [tool-calling agent](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/#tool-calling-agent):"
+"Great! Now let's assemble our agent using LangGraph's prebuilt [create_react_agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent), which allows you to create a [tool-calling agent](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/#tool-calling-agent):"
 ]
 },
 {
@@ -131,10 +131,10 @@
 "source": [
 "from langgraph.prebuilt import create_react_agent\n",
 "\n",
-"# state_modifier allows you to preprocess the inputs to the model inside ReAct agent\n",
+"# prompt allows you to preprocess the inputs to the model inside ReAct agent\n",
 "# in this case, since we're passing a prompt string, we'll just always add a SystemMessage\n",
 "# with this prompt string before any other messages sent to the model\n",
-"agent = create_react_agent(model, tools, state_modifier=prompt)"
+"agent = create_react_agent(model, tools, prompt=prompt)"
 ]
 },
 {
@@ -266,7 +266,7 @@
 "\n",
 "# highlight-start\n",
 "memory = MemorySaver()\n",
-"agent = create_react_agent(model, tools, state_modifier=prompt, checkpointer=memory)\n",
+"agent = create_react_agent(model, tools, prompt=prompt, checkpointer=memory)\n",
 "# highlight-end"
 ]
 },
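For orientation, the two updated cells above combine into roughly the following runnable sketch. The model name, the `get_weather` tool, and the thread id are illustrative assumptions, not taken from the diff; only the `prompt=` and `checkpointer=` usage mirrors the change.

```python
# Minimal end-to-end sketch of the renamed API (assumes an OpenAI key is set;
# the model name and the example tool are illustrative, not from the diff).
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent


@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It is always sunny in {city}."


model = init_chat_model("gpt-4o-mini", model_provider="openai")
tools = [get_weather]
prompt = "You are a helpful assistant."

# `prompt` (formerly `state_modifier`) preprocesses the input to the model:
# a plain string is prepended as a SystemMessage on every turn.
memory = MemorySaver()
agent = create_react_agent(model, tools, prompt=prompt, checkpointer=memory)

config = {"configurable": {"thread_id": "example-thread"}}
result = agent.invoke(
    {"messages": [("user", "What's the weather in Oslo?")]}, config
)
print(result["messages"][-1].content)
```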
@@ -32,11 +32,11 @@
 "\n",
 "Here we focus on how to move from legacy LangChain agents to more flexible [LangGraph](https://langchain-ai.github.io/langgraph/) agents.\n",
 "LangChain agents (the [AgentExecutor](https://python.langchain.com/api_reference/langchain/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor) in particular) have multiple configuration parameters.\n",
-"In this notebook we will show how those parameters map to the LangGraph react agent executor using the [create_react_agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) prebuilt helper method.\n",
+"In this notebook we will show how those parameters map to the LangGraph react agent executor using the [create_react_agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent) prebuilt helper method.\n",
 "\n",
 "\n",
 ":::note\n",
-"In LangGraph, the graph replaces LangChain's agent executor. It manages the agent's cycles and tracks the scratchpad as messages within its state. The LangChain \"agent\" corresponds to the state_modifier and LLM you've provided.\n",
+"In LangGraph, the graph replaces LangChain's agent executor. It manages the agent's cycles and tracks the scratchpad as messages within its state. The LangChain \"agent\" corresponds to the prompt and LLM you've provided.\n",
 ":::\n",
 "\n",
 "\n",
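As a hedged sketch of the mapping this guide describes, here is the same behaviour expressed once through the legacy `AgentExecutor` and once through `create_react_agent`. The model, the `magic_function` tool, and the prompt text are placeholders, not content of this commit.

```python
# Sketch: legacy AgentExecutor vs. the LangGraph prebuilt react agent.
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.chat_models import init_chat_model
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent


@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


model = init_chat_model("gpt-4o-mini", model_provider="openai")
tools = [magic_function]
query = "what is the value of magic_function(3)?"

# Legacy: the prompt template is part of the "agent" handed to AgentExecutor.
legacy_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)
agent = create_tool_calling_agent(model, tools, legacy_prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
print(agent_executor.invoke({"input": query}))

# LangGraph: the graph manages the loop; the prompt is passed via `prompt`.
langgraph_agent_executor = create_react_agent(
    model, tools, prompt="You are a helpful assistant"
)
print(langgraph_agent_executor.invoke({"messages": [("human", query)]}))
```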
@@ -164,7 +164,7 @@
 "id": "94205f3b-fd2b-4fd7-af69-0a3fc313dc88",
 "metadata": {},
 "source": [
-"LangGraph's [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) manages a state that is defined by a list of messages. It will continue to process the list until there are no tool calls in the agent's output. To kick it off, we input a list of messages. The output will contain the entire state of the graph-- in this case, the conversation history.\n",
+"LangGraph's [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent) manages a state that is defined by a list of messages. It will continue to process the list until there are no tool calls in the agent's output. To kick it off, we input a list of messages. The output will contain the entire state of the graph-- in this case, the conversation history.\n",
 "\n"
 ]
 },
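A small sketch of what "the output will contain the entire state of the graph" looks like in practice; the model and tool here are stand-ins, not part of the diff.

```python
# Sketch: the returned state is the full message list, including tool calls
# and tool results accumulated while the agent looped.
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent


@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


model = init_chat_model("gpt-4o-mini", model_provider="openai")
agent = create_react_agent(model, [magic_function])

state = agent.invoke({"messages": [("human", "what is magic_function(3)?")]})
for message in state["messages"]:
    # Typically: HumanMessage -> AIMessage (tool_calls) -> ToolMessage -> AIMessage
    print(type(message).__name__, getattr(message, "content", ""))
```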
@@ -240,11 +240,12 @@
 "\n",
 "With legacy LangChain agents you have to pass in a prompt template. You can use this to control the agent.\n",
 "\n",
-"With LangGraph [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent), by default there is no prompt. You can achieve similar control over the agent in a few ways:\n",
+"With LangGraph [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent), by default there is no prompt. You can achieve similar control over the agent in a few ways:\n",
 "\n",
 "1. Pass in a system message as input\n",
 "2. Initialize the agent with a system message\n",
-"3. Initialize the agent with a function to transform messages before passing to the model.\n",
+"3. Initialize the agent with a function to transform messages in the graph state before passing to the model.\n",
+"4. Initialize the agent with a [Runnable](/docs/concepts/lcel) to transform messages in the graph state before passing to the model. This includes passing prompt templates as well.\n",
 "\n",
 "Let's take a look at all of these below. We will pass in custom instructions to get the agent to respond in Spanish.\n",
 "\n",
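A minimal sketch of option 1 from the list in the hunk above (passing the system message as part of the input), assuming a placeholder model and tool:

```python
# Option 1: no prompt configured on the agent; the system message is simply
# the first entry of the input message list.
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent


@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


model = init_chat_model("gpt-4o-mini", model_provider="openai")
agent = create_react_agent(model, [magic_function])

messages = agent.invoke(
    {
        "messages": [
            ("system", "You are a helpful assistant. Respond only in Spanish."),
            ("human", "what is magic_function(3)?"),
        ]
    }
)
print(messages["messages"][-1].content)
```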
@@ -291,9 +292,9 @@
 "id": "bd5f5500-5ae4-4000-a9fd-8c5a2cc6404d",
 "metadata": {},
 "source": [
-"Now, let's pass a custom system message to [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent).\n",
+"Now, let's pass a custom system message to [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent).\n",
 "\n",
-"LangGraph's prebuilt `create_react_agent` does not take a prompt template directly as a parameter, but instead takes a [`state_modifier`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) parameter. This modifies the graph state before the llm is called, and can be one of four values:\n",
+"LangGraph's prebuilt `create_react_agent` does not take a prompt template directly as a parameter, but instead takes a [`prompt`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent) parameter. This modifies the graph state before the llm is called, and can be one of four values:\n",
 "\n",
 "- A `SystemMessage`, which is added to the beginning of the list of messages.\n",
 "- A `string`, which is converted to a `SystemMessage` and added to the beginning of the list of messages.\n",
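The paragraph above names four accepted shapes for `prompt`; here is one compact, hedged sketch per shape. The model, tool, and prompt texts are placeholders, and the callable/Runnable bodies illustrate the documented behaviour rather than reproducing the notebook.

```python
# The four shapes the `prompt` parameter can take (per the docs text above).
from langchain.chat_models import init_chat_model
from langchain_core.messages import SystemMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState


@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


model = init_chat_model("gpt-4o-mini", model_provider="openai")
tools = [magic_function]

# 1. A string: converted to a SystemMessage and prepended to the messages.
agent_from_string = create_react_agent(model, tools, prompt="Respond only in Spanish.")

# 2. A SystemMessage: prepended to the messages as-is.
agent_from_message = create_react_agent(
    model, tools, prompt=SystemMessage(content="Respond only in Spanish.")
)


# 3. A callable: receives the graph state, returns the messages for the model.
def spanish_prompt(state: AgentState):
    return [SystemMessage(content="Respond only in Spanish.")] + state["messages"]


agent_from_callable = create_react_agent(model, tools, prompt=spanish_prompt)

# 4. A Runnable (e.g. a prompt template) applied to the graph state.
template = ChatPromptTemplate.from_messages(
    [("system", "Respond only in Spanish."), ("placeholder", "{messages}")]
)
agent_from_runnable = create_react_agent(model, tools, prompt=template)
```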
@@ -317,9 +318,7 @@
 "# This could also be a SystemMessage object\n",
 "# system_message = SystemMessage(content=\"You are a helpful assistant. Respond only in Spanish.\")\n",
 "\n",
-"langgraph_agent_executor = create_react_agent(\n",
-"    model, tools, state_modifier=system_message\n",
-")\n",
+"langgraph_agent_executor = create_react_agent(model, tools, prompt=system_message)\n",
 "\n",
 "\n",
 "messages = langgraph_agent_executor.invoke({\"messages\": [(\"user\", query)]})"
@@ -330,8 +329,8 @@
 "id": "fc6059fd-0df7-4b6f-a84c-b5874e983638",
 "metadata": {},
 "source": [
-"We can also pass in an arbitrary function. This function should take in a list of messages and output a list of messages.\n",
-"We can do all types of arbitrary formatting of messages here. In this case, let's just add a SystemMessage to the start of the list of messages."
+"We can also pass in an arbitrary function or a runnable. This function/runnable should take in the graph state and output a list of messages.\n",
+"We can do all types of arbitrary formatting of messages here. In this case, let's add a SystemMessage to the start of the list of messages and append another user message at the end."
 ]
 },
 {
@@ -349,6 +348,7 @@
 }
 ],
 "source": [
+"from langchain_core.messages import HumanMessage, SystemMessage\n",
 "from langgraph.prebuilt import create_react_agent\n",
 "from langgraph.prebuilt.chat_agent_executor import AgentState\n",
 "\n",
@@ -356,19 +356,20 @@
 "    [\n",
 "        (\"system\", \"You are a helpful assistant. Respond only in Spanish.\"),\n",
 "        (\"placeholder\", \"{messages}\"),\n",
+"        (\"user\", \"Also say 'Pandamonium!' after the answer.\"),\n",
 "    ]\n",
 ")\n",
 "\n",
-"\n",
-"def _modify_state_messages(state: AgentState):\n",
-"    return prompt.invoke({\"messages\": state[\"messages\"]}).to_messages() + [\n",
-"        (\"user\", \"Also say 'Pandamonium!' after the answer.\")\n",
-"    ]\n",
+"# alternatively, this can be passed as a function, e.g.\n",
+"# def prompt(state: AgentState):\n",
+"#     return (\n",
+"#         [SystemMessage(content=\"You are a helpful assistant. Respond only in Spanish.\")] +\n",
+"#         state[\"messages\"] +\n",
+"#         [HumanMessage(content=\"Also say 'Pandamonium!' after the answer.\")]\n",
+"#     )\n",
 "\n",
 "\n",
-"langgraph_agent_executor = create_react_agent(\n",
-"    model, tools, state_modifier=_modify_state_messages\n",
-")\n",
+"langgraph_agent_executor = create_react_agent(model, tools, prompt=prompt)\n",
 "\n",
 "\n",
 "messages = langgraph_agent_executor.invoke({\"messages\": [(\"human\", query)]})\n",
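Unescaped from the notebook JSON, the updated cell amounts to the following script; the `model`, `tools`, and `query` definitions here are stand-ins for objects created earlier in that guide.

```python
# The updated cell from the hunk above, written out as plain Python.
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState


@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


model = init_chat_model("gpt-4o-mini", model_provider="openai")
tools = [magic_function]
query = "what is the value of magic_function(3)?"

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant. Respond only in Spanish."),
        ("placeholder", "{messages}"),
        ("user", "Also say 'Pandamonium!' after the answer."),
    ]
)

# alternatively, this can be passed as a function, e.g.
# def prompt(state: AgentState):
#     return (
#         [SystemMessage(content="You are a helpful assistant. Respond only in Spanish.")]
#         + state["messages"]
#         + [HumanMessage(content="Also say 'Pandamonium!' after the answer.")]
#     )

langgraph_agent_executor = create_react_agent(model, tools, prompt=prompt)

messages = langgraph_agent_executor.invoke({"messages": [("human", query)]})
print(messages["messages"][-1].content)
```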
@@ -516,7 +517,7 @@
 "\n",
 "memory = MemorySaver()\n",
 "langgraph_agent_executor = create_react_agent(\n",
-"    model, tools, state_modifier=system_message, checkpointer=memory\n",
+"    model, tools, prompt=system_message, checkpointer=memory\n",
 ")\n",
 "\n",
 "config = {\"configurable\": {\"thread_id\": \"test-thread\"}}\n",
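A hedged sketch of how the checkpointer plus a fixed `thread_id` gives the agent memory across invocations; the model, the tool, and the follow-up question are illustrative.

```python
# Sketch: two invocations on the same thread share checkpointed state.
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent


@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


model = init_chat_model("gpt-4o-mini", model_provider="openai")
system_message = "You are a helpful assistant."

memory = MemorySaver()
langgraph_agent_executor = create_react_agent(
    model, [magic_function], prompt=system_message, checkpointer=memory
)
config = {"configurable": {"thread_id": "test-thread"}}

# First turn: the state for "test-thread" is checkpointed after the run.
langgraph_agent_executor.invoke({"messages": [("human", "Hi, I'm polly!")]}, config)
# Second turn on the same thread: earlier messages are restored first,
# so the agent can answer from conversation history.
out = langgraph_agent_executor.invoke(
    {"messages": [("human", "What's my name?")]}, config
)
print(out["messages"][-1].content)
```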
@@ -643,14 +644,7 @@
 "    ]\n",
 ")\n",
 "\n",
-"\n",
-"def _modify_state_messages(state: AgentState):\n",
-"    return prompt.invoke({\"messages\": state[\"messages\"]}).to_messages()\n",
-"\n",
-"\n",
-"langgraph_agent_executor = create_react_agent(\n",
-"    model, tools, state_modifier=_modify_state_messages\n",
-")\n",
+"langgraph_agent_executor = create_react_agent(model, tools, prompt=prompt)\n",
 "\n",
 "for step in langgraph_agent_executor.stream(\n",
 "    {\"messages\": [(\"human\", query)]}, stream_mode=\"updates\"\n",
@@ -697,7 +691,7 @@
 "source": [
 "### In LangGraph\n",
 "\n",
-"By default the [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) in LangGraph appends all messages to the central state. Therefore, it is easy to see any intermediate steps by just looking at the full state."
+"By default the [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent) in LangGraph appends all messages to the central state. Therefore, it is easy to see any intermediate steps by just looking at the full state."
 ]
 },
 {
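A short sketch of inspecting those intermediate steps by streaming per-node updates (`stream_mode="updates"`, as used earlier in this notebook); the model and tool are placeholders.

```python
# Sketch: because all messages live in the central state, intermediate steps can
# be observed in the final state or while streaming node updates.
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent


@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


model = init_chat_model("gpt-4o-mini", model_provider="openai")
agent = create_react_agent(model, [magic_function])

# Each chunk contains the new messages produced by the "agent" (model) or
# "tools" node, i.e. the intermediate steps.
for step in agent.stream(
    {"messages": [("human", "what is magic_function(3)?")]}, stream_mode="updates"
):
    print(step)
```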
@@ -1244,7 +1238,7 @@
 "source": [
 "### In LangGraph\n",
 "\n",
-"We can use the [`state_modifier`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) just as before when passing in [prompt templates](#prompt-templates)."
+"We can use the [`prompt`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent) just as before when passing in [prompt templates](#prompt-templates)."
 ]
 },
 {
@@ -1299,7 +1293,7 @@
 "\n",
 "\n",
 "langgraph_agent_executor = create_react_agent(\n",
-"    model, tools, state_modifier=_modify_state_messages\n",
+"    model, tools, prompt=_modify_state_messages\n",
 ")\n",
 "\n",
 "try:\n",
@@ -149,7 +149,7 @@
 "agent = create_react_agent(\n",
 "    llm,\n",
 "    tools,\n",
-"    state_modifier=\"You are a helpful assistant. Make sure to use tool for information.\",\n",
+"    prompt=\"You are a helpful assistant. Make sure to use tool for information.\",\n",
 ")\n",
 "agent.invoke({\"messages\": [{\"role\": \"user\", \"content\": \"36939 * 8922.4\"}]})"
 ]
@@ -258,7 +258,7 @@
 "{api_spec}\n",
 "\"\"\".format(api_spec=api_spec)\n",
 "\n",
-"agent_executor = create_react_agent(llm, tools, state_modifier=system_message)"
+"agent_executor = create_react_agent(llm, tools, prompt=system_message)"
 ]
 },
 {
@@ -290,9 +290,7 @@
 "source": [
 "from langgraph.prebuilt import create_react_agent\n",
 "\n",
-"agent_executor = create_react_agent(\n",
-"    llm, toolkit.get_tools(), state_modifier=system_message\n",
-")"
+"agent_executor = create_react_agent(llm, toolkit.get_tools(), prompt=system_message)"
 ]
 },
 {
@@ -718,7 +718,7 @@
 "from langchain_core.messages import HumanMessage\n",
 "from langgraph.prebuilt import create_react_agent\n",
 "\n",
-"agent_executor = create_react_agent(llm, tools, state_modifier=system_message)"
+"agent_executor = create_react_agent(llm, tools, prompt=system_message)"
 ]
 },
 {
@@ -1119,7 +1119,7 @@
 "\n",
 "tools.append(retriever_tool)\n",
 "\n",
-"agent = create_react_agent(llm, tools, state_modifier=system)"
+"agent = create_react_agent(llm, tools, prompt=system)"
 ]
 },
 {
@@ -363,7 +363,7 @@
 "\n",
 "This example is shown here explicitly to make it easier for users to compare the legacy implementation vs. the corresponding langgraph implementation.\n",
 "\n",
-"This example shows how to add memory to the [pre-built react agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) in langgraph.\n",
+"This example shows how to add memory to the [pre-built react agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent) in langgraph.\n",
 "\n",
 "For more details, please see the [how to add memory to the prebuilt ReAct agent](https://langchain-ai.github.io/langgraph/how-tos/create-react-agent-memory/) guide in langgraph.\n",
 "\n",
@@ -500,7 +500,7 @@
 "\n",
 "\n",
 "# highlight-start\n",
-"def state_modifier(state) -> list[BaseMessage]:\n",
+"def prompt(state) -> list[BaseMessage]:\n",
 "    \"\"\"Given the agent state, return a list of messages for the chat model.\"\"\"\n",
 "    # We're using the message processor defined above.\n",
 "    return trim_messages(\n",
@@ -528,7 +528,7 @@
 "    tools=[get_user_age],\n",
 "    checkpointer=memory,\n",
 "    # highlight-next-line\n",
-"    state_modifier=state_modifier,\n",
+"    prompt=prompt,\n",
 ")\n",
 "\n",
 "# The thread id is a unique key that identifies\n",
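A hedged, self-contained sketch of the pattern in the two hunks above: a `prompt` callable that trims stored history before it reaches the model. The trimming parameters and the `get_user_age` stub are illustrative assumptions; only the overall shape (a callable `prompt` plus a `checkpointer`) comes from the diff.

```python
# Sketch: trim the checkpointed history inside the prompt callable.
from langchain.chat_models import init_chat_model
from langchain_core.messages import BaseMessage, trim_messages
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent


@tool
def get_user_age(name: str) -> str:
    """Return the (made-up) age for a user."""
    return "42" if name.lower() == "bob" else "unknown"


def prompt(state) -> list[BaseMessage]:
    """Given the agent state, return a list of messages for the chat model."""
    # Keep only the most recent messages so long threads stay within budget.
    return trim_messages(
        state["messages"],
        token_counter=len,  # count messages rather than tokens
        max_tokens=5,  # keep at most 5 messages
        strategy="last",
        start_on="human",
        include_system=True,
        allow_partial=False,
    )


model = init_chat_model("gpt-4o-mini", model_provider="openai")
memory = MemorySaver()
agent = create_react_agent(
    model,
    tools=[get_user_age],
    checkpointer=memory,
    prompt=prompt,
)

config = {"configurable": {"thread_id": "thread-1"}}
out = agent.invoke({"messages": [("human", "hi! I'm bob. How old am I?")]}, config)
print(out["messages"][-1].content)
```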
@@ -375,7 +375,7 @@
 "id": "4f1aa06c-69b0-4f86-94bc-6be588c9a778",
 "metadata": {},
 "source": [
-"Our agent graph is going to be very similar to simple [ReAct agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent). The only important modification is adding a node to load memories BEFORE calling the agent for the first time."
+"Our agent graph is going to be very similar to simple [ReAct agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent). The only important modification is adding a node to load memories BEFORE calling the agent for the first time."
 ]
 },
 {
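A very rough structural sketch of that point: a graph with an extra node that runs before the agent node. Everything here (the `recall_memories` field, the node bodies, the model) is a hypothetical placeholder, and the tool-calling loop of the real ReAct agent is omitted for brevity.

```python
# Sketch: like the prebuilt ReAct agent, but with a memory-loading node first.
from langchain.chat_models import init_chat_model
from langgraph.graph import END, START, MessagesState, StateGraph


class State(MessagesState):
    # Extra channel for whatever the memory-loading step recalls.
    recall_memories: list[str]


model = init_chat_model("gpt-4o-mini", model_provider="openai")


def load_memories(state: State) -> dict:
    # Placeholder: a real implementation would query a vector store here.
    return {"recall_memories": ["user likes terse answers"]}


def agent(state: State) -> dict:
    system = "Recalled memories: " + "; ".join(state["recall_memories"])
    reply = model.invoke([("system", system)] + state["messages"])
    return {"messages": [reply]}


builder = StateGraph(State)
builder.add_node("load_memories", load_memories)
builder.add_node("agent", agent)
builder.add_edge(START, "load_memories")  # load memories first...
builder.add_edge("load_memories", "agent")  # ...then call the agent
builder.add_edge("agent", END)
graph = builder.compile()

print(graph.invoke({"messages": [("human", "hi")]})["messages"][-1].content)
```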