{
 "cells": [
  {
   "cell_type": "raw",
   "id": "8165bd4c",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "---\n",
    "keywords: [memory]\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f47033eb",
   "metadata": {},
   "source": [
    "# How to add message history\n",
    "\n",
    ":::info Prerequisites\n",
    "\n",
    "This guide assumes familiarity with the following concepts:\n",
    "- [Chaining runnables](/docs/how_to/sequence/)\n",
    "- [Prompt templates](/docs/concepts/prompt_templates)\n",
    "- [Chat Messages](/docs/concepts/messages)\n",
    "- [LangGraph persistence](https://langchain-ai.github.io/langgraph/how-tos/persistence/)\n",
    "\n",
    ":::\n",
    "\n",
    ":::note\n",
    "\n",
    "This guide previously covered the [RunnableWithMessageHistory](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html) abstraction. You can access this version of the guide in the [v0.2 docs](https://python.langchain.com/v0.2/docs/how_to/message_history/).\n",
    "\n",
    "As of the v0.3 release of LangChain, we recommend that LangChain users take advantage of [LangGraph persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/) to incorporate `memory` into new LangChain applications.\n",
    "\n",
    "If your code is already relying on `RunnableWithMessageHistory` or `BaseChatMessageHistory`, you do **not** need to make any changes. We do not plan on deprecating this functionality in the near future, as it works well for simple chat applications, and any code that uses `RunnableWithMessageHistory` will continue to work as expected.\n",
    "\n",
    "Please see [How to migrate to LangGraph Memory](/docs/versions/migrating_memory/) for more details.\n",
    ":::\n",
    "\n",
    "Passing conversation state into and out of a chain is vital when building a chatbot. LangGraph implements a built-in persistence layer that allows chain states to be automatically persisted in memory or to external backends such as SQLite, Postgres, or Redis. Details can be found in the LangGraph [persistence documentation](https://langchain-ai.github.io/langgraph/how-tos/persistence/).\n",
    "\n",
    "In this guide we demonstrate how to add persistence to arbitrary LangChain runnables by wrapping them in a minimal LangGraph application. This lets us persist the message history and other elements of the chain's state, simplifying the development of multi-turn applications. It also supports multiple threads, enabling a single application to interact separately with multiple users.\n",
    "\n",
    "## Setup\n",
    "\n",
    "Let's initialize a chat model:\n",
    "\n",
    "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
    "\n",
    "<ChatModelTabs\n",
    "  customVarName=\"llm\"\n",
    "/>\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "ca50d084-ae4b-4aea-9eb7-2ebc699df9bc",
   "metadata": {},
   "outputs": [],
   "source": [
    "# | output: false\n",
    "# | echo: false\n",
    "\n",
    "# import os\n",
    "# from getpass import getpass\n",
    "\n",
    "# os.environ[\"ANTHROPIC_API_KEY\"] = getpass()\n",
    "from langchain_anthropic import ChatAnthropic\n",
    "\n",
    "llm = ChatAnthropic(model=\"claude-3-haiku-20240307\", temperature=0)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1f6121bc-2080-4ccc-acf0-f77de4bc951d",
   "metadata": {},
   "source": [
    "## Example: message inputs\n",
    "\n",
    "Adding memory to a [chat model](/docs/concepts/chat_models) provides a simple example. Chat models accept a list of messages as input and output a message. LangGraph includes a built-in `MessagesState` that we can use for this purpose.\n",
    "\n",
    "Below, we:\n",
    "1. Define the graph state to be a list of messages;\n",
    "2. Add a single node to the graph that calls a chat model;\n",
    "3. Compile the graph with an in-memory checkpointer to store messages between runs.\n",
    "\n",
    ":::info\n",
    "\n",
    "The output of a LangGraph application is its [state](https://langchain-ai.github.io/langgraph/concepts/low_level/). This can be any Python type, but in this context it will typically be a `TypedDict` that matches the schema of your runnable.\n",
    "\n",
    ":::"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "f691a73a-a866-4354-9fff-8315605e2b8f",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.messages import HumanMessage\n",
    "from langgraph.checkpoint.memory import MemorySaver\n",
    "from langgraph.graph import START, MessagesState, StateGraph\n",
    "\n",
    "# Define a new graph\n",
    "workflow = StateGraph(state_schema=MessagesState)\n",
    "\n",
    "\n",
    "# Define the function that calls the model\n",
    "def call_model(state: MessagesState):\n",
    "    response = llm.invoke(state[\"messages\"])\n",
    "    # Update message history with response:\n",
    "    return {\"messages\": response}\n",
    "\n",
    "\n",
    "# Define the (single) node in the graph\n",
    "workflow.add_edge(START, \"model\")\n",
    "workflow.add_node(\"model\", call_model)\n",
    "\n",
    "# Add memory\n",
    "memory = MemorySaver()\n",
    "app = workflow.compile(checkpointer=memory)"
   ]
  },
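  {
   "cell_type": "markdown",
   "id": "sqlite-checkpointer-note",
   "metadata": {},
   "source": [
    "`MemorySaver` keeps checkpoints in process memory, so conversation history is lost when the process exits. For the external backends mentioned above, a database-backed checkpointer can be swapped in. Below is a minimal sketch, assuming the separate `langgraph-checkpoint-sqlite` package is installed; the `checkpoints.db` path is illustrative."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "sqlite-checkpointer-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: swap the in-memory checkpointer for a SQLite-backed one.\n",
    "# Assumes: pip install langgraph-checkpoint-sqlite\n",
    "import sqlite3\n",
    "\n",
    "from langgraph.checkpoint.sqlite import SqliteSaver\n",
    "\n",
    "# \"checkpoints.db\" is an example path; state now survives restarts.\n",
    "conn = sqlite3.connect(\"checkpoints.db\", check_same_thread=False)\n",
    "app = workflow.compile(checkpointer=SqliteSaver(conn))"
   ]
  },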
  {
   "cell_type": "markdown",
   "id": "c0b396a8-f81e-4139-b4b2-75adf61d8179",
   "metadata": {},
   "source": [
    "When we run the application, we pass in a configuration `dict` that specifies a `thread_id`. This ID is used to distinguish conversational threads (e.g., between different users)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "e4309511-2140-4d91-8f5f-ea3661e6d179",
   "metadata": {},
   "outputs": [],
   "source": [
    "config = {\"configurable\": {\"thread_id\": \"abc123\"}}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "108c45a2-4971-4120-ba64-9a4305a414bb",
   "metadata": {},
   "source": [
    "We can then invoke the application:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "72a5ff6c-501f-4151-8dd9-f600f70554be",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "It's nice to meet you, Bob! I'm Claude, an AI assistant created by Anthropic. How can I help you today?\n"
     ]
    }
   ],
   "source": [
    "query = \"Hi! I'm Bob.\"\n",
    "\n",
    "input_messages = [HumanMessage(query)]\n",
    "output = app.invoke({\"messages\": input_messages}, config)\n",
    "output[\"messages\"][-1].pretty_print()  # output contains all messages in state"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "5931fb35-0fac-40e7-8ac6-b14cb4e926cd",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "Your name is Bob, as you introduced yourself at the beginning of our conversation.\n"
     ]
    }
   ],
   "source": [
    "query = \"What's my name?\"\n",
    "\n",
    "input_messages = [HumanMessage(query)]\n",
    "output = app.invoke({\"messages\": input_messages}, config)\n",
    "output[\"messages\"][-1].pretty_print()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "91de6d12-881d-4d23-a421-f2e3bf829b79",
   "metadata": {},
   "source": [
    "Note that states are separated for different threads. If we issue the same query to a thread with a new `thread_id`, the model indicates that it does not know the answer:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "6f12c26f-8913-4484-b2c5-b49eda2e6d7d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "I'm afraid I don't actually know your name. As an AI assistant, I don't have personal information about you unless you provide it to me directly.\n"
     ]
    }
   ],
   "source": [
    "query = \"What's my name?\"\n",
    "config = {\"configurable\": {\"thread_id\": \"abc234\"}}\n",
    "\n",
    "input_messages = [HumanMessage(query)]\n",
    "output = app.invoke({\"messages\": input_messages}, config)\n",
    "output[\"messages\"][-1].pretty_print()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6749ea95-3382-4843-bb96-cfececb9e4e5",
   "metadata": {},
   "source": [
    "## Example: dictionary inputs\n",
    "\n",
    "LangChain runnables often accept multiple inputs via separate keys in a single `dict` argument. A common example is a prompt template with multiple parameters.\n",
    "\n",
    "Whereas before our runnable was a chat model, here we chain together a prompt template and a chat model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "6e7a402a-0994-4fc5-a607-fb990a248aa4",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
    "\n",
    "prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"Answer in {language}.\"),\n",
    "        MessagesPlaceholder(variable_name=\"messages\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "runnable = prompt | llm"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f83107bd-ae61-45e1-a57e-94ab043aad4b",
   "metadata": {},
   "source": [
    "For this scenario, we define the graph state to include these parameters (in addition to the message history). We then define a single-node graph in the same way as before.\n",
    "\n",
    "Note that in the state below:\n",
    "- Updates to the `messages` list will append messages;\n",
    "- Updates to the `language` string will overwrite the string."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "267429ea-be0f-4f80-8daf-c63d881a1436",
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import Sequence\n",
    "\n",
    "from langchain_core.messages import BaseMessage\n",
    "from langgraph.graph.message import add_messages\n",
    "from typing_extensions import Annotated, TypedDict\n",
    "\n",
    "\n",
    "# highlight-next-line\n",
    "class State(TypedDict):\n",
    "    # highlight-next-line\n",
    "    messages: Annotated[Sequence[BaseMessage], add_messages]\n",
    "    # highlight-next-line\n",
    "    language: str\n",
    "\n",
    "\n",
    "workflow = StateGraph(state_schema=State)\n",
    "\n",
    "\n",
    "def call_model(state: State):\n",
    "    response = runnable.invoke(state)\n",
    "    # Update message history with response:\n",
    "    return {\"messages\": [response]}\n",
    "\n",
    "\n",
    "workflow.add_edge(START, \"model\")\n",
    "workflow.add_node(\"model\", call_model)\n",
    "\n",
    "memory = MemorySaver()\n",
    "app = workflow.compile(checkpointer=memory)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "f3844fb4-58d7-43c8-b427-6d9f64d7411b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "¡Hola, Bob! Es un placer conocerte.\n"
     ]
    }
   ],
   "source": [
    "config = {\"configurable\": {\"thread_id\": \"abc345\"}}\n",
    "\n",
    "input_dict = {\n",
    "    \"messages\": [HumanMessage(\"Hi, I'm Bob.\")],\n",
    "    \"language\": \"Spanish\",\n",
    "}\n",
    "output = app.invoke(input_dict, config)\n",
    "output[\"messages\"][-1].pretty_print()"
   ]
  },
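  {
   "cell_type": "markdown",
   "id": "omit-language-note",
   "metadata": {},
   "source": [
    "Note that the entire state is checkpointed between turns, so keys that have not changed can be omitted on subsequent invocations; `language` here carries over from the previous turn. A minimal follow-up sketch (the model should still respond in Spanish):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "omit-language-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# `language` was checkpointed on the previous turn, so we pass only new messages.\n",
    "output = app.invoke({\"messages\": [HumanMessage(\"What's my name?\")]}, config)\n",
    "output[\"messages\"][-1].pretty_print()"
   ]
  },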
  {
   "cell_type": "markdown",
   "id": "7df47824-ef18-4a6e-a416-345ec9203f88",
   "metadata": {},
   "source": [
    "## Managing message history\n",
    "\n",
    "The message history (and other elements of the application state) can be accessed via `.get_state`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "1cbd6d82-43c1-4d11-98af-5c3ad9cd9b3b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Language: Spanish\n",
      "================================\u001b[1m Human Message \u001b[0m=================================\n",
      "\n",
      "Hi, I'm Bob.\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "¡Hola, Bob! Es un placer conocerte.\n"
     ]
    }
   ],
   "source": [
    "state = app.get_state(config).values\n",
    "\n",
    "print(f'Language: {state[\"language\"]}')\n",
    "for message in state[\"messages\"]:\n",
    "    message.pretty_print()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "acfbccda-0bd6-4c4d-ae6e-8118520314e1",
   "metadata": {},
   "source": [
    "We can also update the state via `.update_state`. For example, we can manually append a new message:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "e98310d7-8ab1-461d-94a7-dd419494ab8d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.messages import HumanMessage\n",
    "\n",
    "_ = app.update_state(config, {\"messages\": [HumanMessage(\"Test\")]})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "74ab3691-6f3b-49c5-aad0-2a90fc2a1e6a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Language: Spanish\n",
      "================================\u001b[1m Human Message \u001b[0m=================================\n",
      "\n",
      "Hi, I'm Bob.\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "¡Hola, Bob! Es un placer conocerte.\n",
      "================================\u001b[1m Human Message \u001b[0m=================================\n",
      "\n",
      "Test\n"
     ]
    }
   ],
   "source": [
    "state = app.get_state(config).values\n",
    "\n",
    "print(f'Language: {state[\"language\"]}')\n",
    "for message in state[\"messages\"]:\n",
    "    message.pretty_print()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e4a1ea00-d7ff-4f18-b9ec-9aec5909d027",
   "metadata": {},
   "source": [
    "For details on managing state, including deleting messages, see the LangGraph documentation:\n",
    "- [How to delete messages](https://langchain-ai.github.io/langgraph/how-tos/memory/delete-messages/)\n",
    "- [How to view and update past graph state](https://langchain-ai.github.io/langgraph/how-tos/human_in_the_loop/time-travel/)"
   ]
  },
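  {
   "cell_type": "markdown",
   "id": "remove-message-note",
   "metadata": {},
   "source": [
    "As a preview of those guides, below is a minimal sketch of deleting a message from the persisted state. It assumes the `add_messages` reducer used above, which treats a `RemoveMessage` with a matching `id` as a deletion."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "remove-message-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.messages import RemoveMessage\n",
    "\n",
    "# Sketch: delete the \"Test\" message we appended above.\n",
    "# The add_messages reducer interprets RemoveMessage(id=...) as a deletion.\n",
    "messages = app.get_state(config).values[\"messages\"]\n",
    "_ = app.update_state(config, {\"messages\": [RemoveMessage(id=messages[-1].id)]})"
   ]
  }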
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}