{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Build a Chatbot"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview\n",
"\n",
"We'll go over an example of how to design and implement an LLM-powered chatbot. \n",
"This chatbot will be able to have a conversation and remember previous interactions.\n",
"\n",
"\n",
"Note that this chatbot that we build will only use the language model to have a conversation.\n",
|
|
"There are several other related concepts that you may be looking for:\n",
|
|
"\n",
|
|
"- [Conversational RAG](TODO): Enable a chatbot experience over an external source of data\n",
|
|
"- [Agents](/docs/tutorials/agents): Build a chatbot that can take actions\n",
|
|
"\n",
|
|
"This tutorial will cover the basics which will be helpful for those two more advanced topics, but feel free to skip directly to there should you choose.\n",
|
|
"\n",
|
|
"\n",
|
|
"## Concepts\n",
|
|
"\n",
|
|
"Here are a few of the high-level components we'll be working with:\n",
|
|
"\n",
|
|
"- [`Chat Models`](/docs/concepts/#chat-models). The chatbot interface is based around messages rather than raw text, and therefore is best suited to Chat Models rather than text LLMs.\n",
|
|
"- [`Prompt Templates`](/docs/concepts/#prompt-templates), which simplify the process of assembling prompts that combine default messages, user input, chat history, and (optionally) additional retrieved context.\n",
|
|
"- [`Chat History`](/docs/concepts/#chat-history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to followup questions. \n",
|
|
"\n",
|
|
"We'll cover how to fit the above components together to create a powerful conversational chatbot.\n",
|
|
"\n",
|
|
"## Setup\n",
|
|
"\n",
|
|
"### Jupyter Notebook\n",
|
|
"\n",
|
|
"This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc) and going through guides in an interactive environment is a great way to better understand them.\n",
|
|
"\n",
|
|
"You do not NEED to go through the guide in a Jupyter Notebook, but it is recommended. See [here](https://jupyter.org/install) for instructions on how to install.\n",
|
|
"\n",
|
|
"### Installation\n",
|
|
"\n",
|
|
"To install LangChain run:\n",
|
|
"\n",
|
|
"```{=mdx}\n",
|
|
"import Tabs from '@theme/Tabs';\n",
|
|
"import TabItem from '@theme/TabItem';\n",
|
|
"import CodeBlock from \"@theme/CodeBlock\";\n",
|
|
"\n",
|
|
"<Tabs>\n",
|
|
" <TabItem value=\"pip\" label=\"Pip\" default>\n",
|
|
" <CodeBlock language=\"bash\">pip install langchain</CodeBlock>\n",
|
|
" </TabItem>\n",
|
|
" <TabItem value=\"conda\" label=\"Conda\">\n",
|
|
" <CodeBlock language=\"bash\">conda install langchain -c conda-forge</CodeBlock>\n",
|
|
" </TabItem>\n",
|
|
"</Tabs>\n",
|
|
"\n",
|
|
"```\n",
|
|
"\n",
|
|
"\n",
|
|
"For more details, see our [Installation guide](/docs/get_started/installation).\n",
|
|
"\n",
|
|
"### LangSmith\n",
|
|
"\n",
|
|
"Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.\n",
|
|
"As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n",
|
|
"The best way to do this is with [LangSmith](https://smith.langchain.com).\n",
|
|
"\n",
|
|
"After you sign up at the link above, make sure to set your environment variables to start logging traces:\n",
|
|
"\n",
|
|
"```shell\n",
|
|
"export LANGCHAIN_TRACING_V2=\"true\"\n",
|
|
"export LANGCHAIN_API_KEY=\"...\"\n",
|
|
"```\n",
|
|
"\n",
|
|
"Or, if in a notebook, you can set them with:\n",
|
|
"\n",
|
|
"```python\n",
|
|
"import getpass\n",
|
|
"import os\n",
|
|
"\n",
|
|
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
|
|
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
|
|
"```\n",
|
|
"\n",
|
|
"## Quickstart\n",
|
|
"\n",
|
|
"First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangably - select the one you want to use below!\n",
|
|
"\n",
|
|
"```{=mdx}\n",
|
|
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
|
|
"\n",
|
|
"<ChatModelTabs openaiParams={`model=\"gpt-3.5-turbo\"`} />\n",
|
|
"```"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 1,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# | output: false\n",
|
|
"# | echo: false\n",
|
|
"\n",
|
|
"from langchain_openai import ChatOpenAI\n",
|
|
"\n",
|
|
"model = ChatOpenAI(model=\"gpt-3.5-turbo\")"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Let's first use the model directly. `ChatModel`s are instances of LangChain \"Runnables\", which means they expose a standard interface for interacting with them. To just simply call the model, we can pass in a list of messages to the `.invoke` method."
|
|
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 12, 'total_tokens': 22}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-be38de4a-ccef-4a48-bf82-4292510a8cbf-0')"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"model.invoke(\n",
"    [HumanMessage(content=\"Hi! I'm Bob\")]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model on its own does not have any concept of state. For example, if you ask a followup question:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm sorry, as an AI assistant, I do not have the capability to know your name unless you provide it to me.\", response_metadata={'token_usage': {'completion_tokens': 26, 'prompt_tokens': 12, 'total_tokens': 38}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_caf95bb1ae', 'finish_reason': 'stop', 'logprobs': None}, id='run-8d8a9d8b-dddb-48f1-b0ed-ce80ce5397d8-0')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.invoke([HumanMessage(content=\"What's my name?\")])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's take a look at the example LangSmith trace here: https://smith.langchain.com/public/5c21cb92-2814-4119-bae9-d02b8db577ac/r\n",
"\n",
"We can see that it doesn't take the previous conversation turn into context, and cannot answer the question.\n",
"This makes for a terrible chatbot experience!\n",
"\n",
"To get around this, we need to pass the entire conversation history into the model. Let's see what happens when we do that:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Your name is Bob.', response_metadata={'token_usage': {'completion_tokens': 5, 'prompt_tokens': 35, 'total_tokens': 40}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-5692718a-5d29-4f84-bad1-a9819a6118f1-0')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import AIMessage\n",
"\n",
"model.invoke(\n",
"    [\n",
"        HumanMessage(content=\"Hi! I'm Bob\"),\n",
"        AIMessage(content=\"Hello Bob! How can I assist you today?\"),\n",
"        HumanMessage(content=\"What's my name?\"),\n",
"    ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now we can see that we get a good response!\n",
"\n",
"This is the basic idea underpinning a chatbot's ability to interact conversationally.\n",
"So how do we best implement this?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Message History\n",
"\n",
"We can use a Message History class to wrap our model and make it stateful.\n",
"This will keep track of inputs and outputs of the model, and store them in some datastore.\n",
"Future interactions will then load those messages and pass them into the chain as part of the input.\n",
"Let's see how to use this!\n",
"\n",
"First, let's make sure to install `langchain-community`, as we will be using an integration in there to store message history."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# ! pip install langchain_community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After that, we can import the relevant classes and set up our chain which wraps the model and adds in this message history. A key part here is the function we pass into as the `get_session_history`. This function is expected to take in a `session_id` and return a Message History object. This `session_id` is used to distinguish between separate conversations, and should be passed in as part of the config when calling the new chain (we'll show how to do that."
|
|
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import ChatMessageHistory\n",
"from langchain_core.chat_history import BaseChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"store = {}\n",
"\n",
"\n",
"def get_session_history(session_id: str) -> BaseChatMessageHistory:\n",
"    if session_id not in store:\n",
"        store[session_id] = ChatMessageHistory()\n",
"    return store[session_id]\n",
"\n",
"\n",
"with_message_history = RunnableWithMessageHistory(\n",
"    model,\n",
"    get_session_history\n",
")"
]
},
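{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `store` here is a plain in-memory dictionary, so histories disappear when the Python process exits. For real persistence, `get_session_history` can return any other `BaseChatMessageHistory` implementation. Below is a minimal sketch (not part of the original flow) that assumes a locally running Redis server at a placeholder URL:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A sketch, assuming a Redis server at the placeholder URL below: swap the\n",
"# in-memory store for a persistent BaseChatMessageHistory implementation.\n",
"from langchain_community.chat_message_histories import RedisChatMessageHistory\n",
"\n",
"\n",
"def get_redis_session_history(session_id: str) -> RedisChatMessageHistory:\n",
"    return RedisChatMessageHistory(session_id, url=\"redis://localhost:6379/0\")\n",
"\n",
"\n",
"# with_message_history = RunnableWithMessageHistory(model, get_redis_session_history)"
]
},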
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now need to create a `config` that we pass into the runnable every time. This config contains information that is not part of the input directly, but is still useful. In this case, we want to include a `session_id`. This should look like:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"config = {\"configurable\": {\"session_id\": \"abc2\"}}"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello Bob! How can I assist you today?'"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response = with_message_history.invoke(\n",
"    [HumanMessage(content=\"Hi! I'm Bob\")],\n",
"    config=config,\n",
")\n",
"\n",
"response.content"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Your name is Bob.'"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response = with_message_history.invoke(\n",
"    [HumanMessage(content=\"What's my name?\")],\n",
"    config=config,\n",
")\n",
"\n",
"response.content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great! Our chatbot now remembers things about us. If we change the config to reference a different `session_id`, we can see that it starts the conversation fresh."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"I'm sorry, I do not have the ability to know your name unless you tell me.\""
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"config = {\"configurable\": {\"session_id\": \"abc3\"}}\n",
"\n",
"response = with_message_history.invoke(\n",
"    [HumanMessage(content=\"What's my name?\")],\n",
"    config=config,\n",
")\n",
"\n",
"response.content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, we can always go back to the original conversation (since we are persisting it in a database)"
|
|
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Your name is Bob.'"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"config = {\"configurable\": {\"session_id\": \"abc2\"}}\n",
"\n",
"response = with_message_history.invoke(\n",
"    [HumanMessage(content=\"What's my name?\")],\n",
"    config=config,\n",
")\n",
"\n",
"response.content"
]
},
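{
"cell_type": "markdown",
"metadata": {},
"source": [
"Under the hood, each exchange is appended to the `ChatMessageHistory` for its session. As a quick sanity check (not part of the original flow), we can inspect what has been stored for session `\"abc2\"` directly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Every human and AI message for session \"abc2\" has accumulated in the\n",
"# in-memory ChatMessageHistory we created above.\n",
"for message in get_session_history(\"abc2\").messages:\n",
"    print(type(message).__name__, \":\", message.content)"
]
},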
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is how we can support a chatbot having conversations with many users!\n",
"\n",
"Right now, all we've done is add a simple persistence layer around the model. We can start to make the more complicated and personalized by adding in a prompt template."
|
|
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prompt templates\n",
"\n",
"Prompt Templates help to turn raw user information into a format that the LLM can work with. In this case, the raw user input is just a message, which we are passing to the LLM. Let's now make that a bit more complicated. First, let's add in a system message with some custom instructions (but still taking messages as input). Next, we'll add in more input besides just the messages.\n",
"\n",
"First, let's add in a system message. To do this, we will create a ChatPromptTemplate. We will utilize `MessagesPlaceholder` to pass all the messages in."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\n",
"            \"system\",\n",
"            \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
"        ),\n",
"        MessagesPlaceholder(variable_name=\"messages\"),\n",
"    ]\n",
")\n",
"\n",
"chain = prompt | model"
]
},
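{
"cell_type": "markdown",
"metadata": {},
"source": [
"Prompt templates are Runnables too, so we can invoke the prompt by itself to see exactly what it produces (a quick illustration, not part of the original flow): the system message is prepended and our input fills the `MessagesPlaceholder` slot."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Invoking the prompt alone returns the formatted messages without calling\n",
"# the model.\n",
"prompt.invoke({\"messages\": [HumanMessage(content=\"hi!\")]}).to_messages()"
]
},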
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that this slightly changes the input type - rather than pass in a list of messages, we are now passing in a dictionary with a `messages` key where that contains a list of messages."
|
|
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello, Bob! How can I assist you today?'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response = chain.invoke({\"messages\": [HumanMessage(content=\"hi! I'm bob\")]})\n",
"\n",
"response.content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now wrap this in the same Messages History object as before"
|
|
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"with_message_history = RunnableWithMessageHistory(\n",
"    chain,\n",
"    get_session_history\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"config = {\"configurable\": {\"session_id\": \"abc5\"}}"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello, Jim! How can I assist you today?'"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response = with_message_history.invoke(\n",
"    [HumanMessage(content=\"Hi! I'm Jim\")],\n",
"    config=config,\n",
")\n",
"\n",
"response.content"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Your name is Jim. How can I assist you further, Jim?'"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response = with_message_history.invoke(\n",
"    [HumanMessage(content=\"What's my name?\")],\n",
"    config=config,\n",
")\n",
"\n",
"response.content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Awesome! Let's now make our prompt a little bit more complicated. Let's assume that the prompt template now looks something like this:"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\n",
"            \"system\",\n",
"            \"You are a helpful assistant. Answer all questions to the best of your ability in {language}.\",\n",
"        ),\n",
"        MessagesPlaceholder(variable_name=\"messages\"),\n",
"    ]\n",
")\n",
"\n",
"chain = prompt | model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we have added a new `language` input to the prompt. We can now invoke the chain and pass in a language of our choice."
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'¡Hola Bob! ¿En qué puedo ayudarte hoy?'"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response = chain.invoke({\n",
|
|
" \"messages\": [HumanMessage(content=\"hi! I'm bob\")],\n",
|
|
" \"language\": \"Spanish\"\n",
|
|
"})\n",
|
|
"\n",
|
|
"response.content"
|
|
]
|
|
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's now wrap this more complicated chain in a Message History class. This time, because there are multiple keys in the input, we need to specify the correct key to use to save the chat history."
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"with_message_history = RunnableWithMessageHistory(\n",
"    chain,\n",
"    get_session_history,\n",
"    input_messages_key=\"messages\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [],
"source": [
"config = {\"configurable\": {\"session_id\": \"abc11\"}}"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'¡Hola Todd! ¿En qué puedo ayudarte hoy?'"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response = with_message_history.invoke({\n",
|
|
" \"messages\": [HumanMessage(content=\"hi! I'm todd\")],\n",
|
|
" \"language\": \"Spanish\"\n",
|
|
" },\n",
|
|
" config=config,\n",
|
|
")\n",
|
|
"\n",
|
|
"response.content"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 35,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"'Tu nombre es Todd. ¿Hay algo más en lo que pueda ayudarte?'"
|
|
]
|
|
},
|
|
"execution_count": 35,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"response = with_message_history.invoke({\n",
|
|
" \"messages\": [HumanMessage(content=\"whats my name?\")],\n",
|
|
" \"language\": \"Spanish\"\n",
|
|
" },\n",
|
|
" config=config,\n",
|
|
")\n",
|
|
"\n",
|
|
"response.content"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"To help you understand what's happening internally, check out [this LangSmith trace](https://smith.langchain.com/public/f48fabb6-6502-43ec-8242-afc352b769ed/r)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Streaming\n",
|
|
"\n",
|
|
"Now we've got a function chatbot. However, one *really* important UX consideration for chatbot application is streaming. LLMs can sometimes take a while to respond, and so in order to improve the user experience one thing that most application do is stream back each token as it is generated. This allows the user to see progress.\n",
|
|
"\n",
|
|
"It's actually super easy to do this!\n",
|
|
"\n",
|
|
"All chains expose a `.stream` method, and ones that use message history are no different. We can simply use that method to get back a streaming response."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 40,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"|Sure|,| Todd|!| Here|'s| a| joke| for| you|:\n",
|
|
"\n",
|
|
"|Why| don|'t| scientists| trust| atoms|?\n",
|
|
"\n",
|
|
"|Because| they| make| up| everything|!||"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"config = {\"configurable\": {\"session_id\": \"abc15\"}}\n",
|
|
"for r in with_message_history.stream({\n",
|
|
" \"messages\": [HumanMessage(content=\"hi! I'm todd. tell me a joke\")],\n",
|
|
" \"language\": \"English\"\n",
|
|
" },\n",
|
|
" config=config,\n",
|
|
" ):\n",
|
|
" print(r.content, end=\"|\")"
|
|
]
},
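{
"cell_type": "markdown",
"metadata": {},
"source": [
"Runnables also expose async variants of these methods. As a sketch (assuming your LangChain version supports `.astream` on chains with message history, and relying on Jupyter's support for top-level `await`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Async variant of the same call; chunks arrive as they are generated.\n",
"config = {\"configurable\": {\"session_id\": \"abc16\"}}\n",
"\n",
"async for chunk in with_message_history.astream(\n",
"    {\n",
"        \"messages\": [HumanMessage(content=\"tell me another joke\")],\n",
"        \"language\": \"English\",\n",
"    },\n",
"    config=config,\n",
"):\n",
"    print(chunk.content, end=\"|\")"
]
},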
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next Steps\n",
"\n",
"Now that you understand the basics of how to create a chatbot in LangChain, some more advanced tutorials you may be interested in are:\n",
"\n",
"- [Conversational RAG](TODO): Enable a chatbot experience over an external source of data\n",
"- [Agents](/docs/tutorials/agents): Build a chatbot that can take actions\n",
"\n",
"If you want to dive deeper on specifics, some things worth checking out are:\n",
"\n",
"- [Streaming](/docs/how_to/streaming): streaming is *crucial* for chat applications\n",
"- [How to add message history](/docs/how_to/message_history): for a deeper dive into all things related to message history"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}