commit b35e68c41f (parent 8c2ed85a45)

docs: update use_cases/question_answering/chat_history (#19349)

Update following https://github.com/langchain-ai/langchain/issues/19344
@@ -19,7 +19,7 @@
  "\n",
  "In many Q&A applications we want to allow the user to have a back-and-forth conversation, meaning the application needs some sort of \"memory\" of past questions and answers, and some logic for incorporating those into its current thinking.\n",
  "\n",
- "In this guide we focus on **adding logic for incorporating historical messages, and NOT on chat history management.** Chat history management is [covered here](/docs/expression_language/how_to/message_history).\n",
+ "In this guide we focus on **adding logic for incorporating historical messages.** Further details on chat history management are [covered here](/docs/expression_language/how_to/message_history).\n",
  "\n",
  "We'll work off of the Q&A app we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [Quickstart](/docs/use_cases/question_answering/quickstart). We'll need to update two things about our existing app:\n",
  "\n",
@@ -90,7 +90,7 @@
  },
  {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 2,
  "id": "07411adb-3722-4f65-ab7f-8f6f57663d11",
  "metadata": {},
  "outputs": [],
@@ -111,7 +111,7 @@
  },
  {
  "cell_type": "code",
- "execution_count": 4,
+ "execution_count": 2,
  "id": "d8a913b1-0eea-442a-8a64-ec73333f104b",
  "metadata": {},
  "outputs": [],
@@ -128,7 +128,7 @@
  },
  {
  "cell_type": "code",
- "execution_count": 5,
+ "execution_count": 3,
  "id": "820244ae-74b4-4593-b392-822979dd91b8",
  "metadata": {},
  "outputs": [],
@@ -168,17 +168,17 @@
  },
  {
  "cell_type": "code",
- "execution_count": 6,
- "id": "0d3b0f36-7b56-49c0-8e40-a1aa9ebcbf24",
+ "execution_count": 4,
+ "id": "22206dfd-d673-4fa4-887f-349d273cb3f2",
  "metadata": {},
  "outputs": [
  {
  "data": {
  "text/plain": [
- "'Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. It can be done through prompting techniques like Chain of Thought or Tree of Thoughts, or by using task-specific instructions or human inputs. Task decomposition helps agents plan ahead and manage complicated tasks more effectively.'"
+ "'Task Decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps agents to plan and execute tasks more efficiently by dividing them into manageable subgoals. Task decomposition can be achieved through various methods, including using prompting techniques, task-specific instructions, or human inputs.'"
  ]
  },
- "execution_count": 6,
+ "execution_count": 4,
  "metadata": {},
  "output_type": "execute_result"
  }
@@ -196,16 +196,21 @@
  "\n",
  "First we'll need to define a sub-chain that takes historical messages and the latest user question, and reformulates the question if it makes reference to any information in the chat history.\n",
  "\n",
- "We'll use a prompt that includes a `MessagesPlaceholder` variable under the name \"chat_history\". This allows us to pass in a list of Messages to the prompt using the \"chat_history\" input key, and these messages will be inserted after the system message and before the human message containing the latest question."
+ "We'll use a prompt that includes a `MessagesPlaceholder` variable under the name \"chat_history\". This allows us to pass in a list of Messages to the prompt using the \"chat_history\" input key, and these messages will be inserted after the system message and before the human message containing the latest question.\n",
+ "\n",
+ "Note that we leverage a helper function [create_history_aware_retriever](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) for this step, which manages the case where `chat_history` is empty, and otherwise applies `prompt | llm | StrOutputParser() | retriever` in sequence.\n",
+ "\n",
+ "`create_history_aware_retriever` constructs a chain that accepts keys `input` and `chat_history` as input, and has the same output schema as a retriever."
  ]
  },
  {
  "cell_type": "code",
- "execution_count": 28,
+ "execution_count": 5,
  "id": "2b685428-8b82-4af1-be4f-7232c5d55b73",
  "metadata": {},
  "outputs": [],
  "source": [
+ "from langchain.chains import create_history_aware_retriever\n",
  "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
  "\n",
  "contextualize_q_system_prompt = \"\"\"Given a chat history and the latest user question \\\n",
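For readers unfamiliar with `MessagesPlaceholder`, a minimal standalone sketch of how it expands at formatting time (the shortened system prompt here is illustrative, not the notebook's):

from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# The list passed under "chat_history" is spliced in between the system
# message and the final human message.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Reformulate the latest question as a standalone question."),
        MessagesPlaceholder("chat_history"),
        ("human", "{input}"),
    ]
)
prompt_value = prompt.invoke(
    {
        "chat_history": [
            HumanMessage(content="What does LLM stand for?"),
            AIMessage(content="Large language model"),
        ],
        "input": "What is meant by large?",
    }
)
# -> [SystemMessage, HumanMessage, AIMessage, HumanMessage]
print(prompt_value.to_messages())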
@@ -215,11 +220,13 @@
  "contextualize_q_prompt = ChatPromptTemplate.from_messages(\n",
  "    [\n",
  "        (\"system\", contextualize_q_system_prompt),\n",
- "        MessagesPlaceholder(variable_name=\"chat_history\"),\n",
- "        (\"human\", \"{question}\"),\n",
+ "        MessagesPlaceholder(\"chat_history\"),\n",
+ "        (\"human\", \"{input}\"),\n",
  "    ]\n",
  ")\n",
- "contextualize_q_chain = contextualize_q_prompt | llm | StrOutputParser()"
+ "history_aware_retriever = create_history_aware_retriever(\n",
+ "    llm, retriever, contextualize_q_prompt\n",
+ ")"
  ]
  },
  {
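Under the hood, `create_history_aware_retriever` builds roughly the following branch. This is only a sketch, assuming the `retriever`, `llm`, and `contextualize_q_prompt` defined above; the API reference linked earlier documents the exact behavior:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableBranch

history_aware_retriever_sketch = RunnableBranch(
    (
        # With no chat history, send the raw input straight to the retriever.
        lambda x: not x.get("chat_history", False),
        (lambda x: x["input"]) | retriever,
    ),
    # Otherwise condense history + question into a standalone query first.
    contextualize_q_prompt | llm | StrOutputParser() | retriever,
)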
@@ -227,38 +234,7 @@
  "id": "23cbd8d7-7162-4fb0-9e69-67ea4d4603a5",
  "metadata": {},
  "source": [
- "Using this chain we can ask follow-up questions that reference past messages and have them reformulated into standalone questions:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 29,
- "id": "46ee9aa1-16f1-4509-8dae-f8c71f4ad47d",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "'What is the definition of \"large\" in the context of a language model?'"
- ]
- },
- "execution_count": 29,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "from langchain_core.messages import AIMessage, HumanMessage\n",
- "\n",
- "contextualize_q_chain.invoke(\n",
- "    {\n",
- "        \"chat_history\": [\n",
- "            HumanMessage(content=\"What does LLM stand for?\"),\n",
- "            AIMessage(content=\"Large language model\"),\n",
- "        ],\n",
- "        \"question\": \"What is meant by large\",\n",
- "    }\n",
- ")"
+ "This chain prepends a rephrasing of the input query to our retriever, so that the retrieval incorporates the context of the conversation."
  ]
  },
  {
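To make that concrete, an illustrative invocation of the new `history_aware_retriever` (a sketch; which documents come back depends on the vector store contents):

from langchain_core.messages import AIMessage, HumanMessage

# The follow-up "What are common ways of doing it?" is rephrased against the
# history, so retrieval targets task decomposition rather than the bare "it".
docs = history_aware_retriever.invoke(
    {
        "input": "What are common ways of doing it?",
        "chat_history": [
            HumanMessage(content="What is Task Decomposition?"),
            AIMessage(content="Task decomposition breaks a task into smaller steps."),
        ],
    }
)
print(len(docs), docs[0].page_content[:100])  # a plain list of Documents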
@@ -270,16 +246,21 @@
  "\n",
  "And now we can build our full QA chain. \n",
  "\n",
- "Notice we add some routing functionality to only run the \"condense question chain\" when our chat history isn't empty. Here we're taking advantage of the fact that if a function in an LCEL chain returns another chain, that chain will itself be invoked."
+ "Here we use [create_stuff_documents_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) to generate a `question_answer_chain`, with input keys `context`, `chat_history`, and `input`; it accepts the retrieved context alongside the conversation history and query to generate an answer.\n",
+ "\n",
+ "We build our final `rag_chain` with [create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html). This chain applies the `history_aware_retriever` and `question_answer_chain` in sequence, retaining intermediate outputs such as the retrieved context for convenience. It has input keys `input` and `chat_history`, and includes `input`, `chat_history`, `context`, and `answer` in its output."
  ]
  },
  {
  "cell_type": "code",
- "execution_count": 30,
+ "execution_count": 6,
  "id": "66f275f3-ddef-4678-b90d-ee64576878f9",
  "metadata": {},
  "outputs": [],
  "source": [
+ "from langchain.chains import create_retrieval_chain\n",
+ "from langchain.chains.combine_documents import create_stuff_documents_chain\n",
+ "\n",
  "qa_system_prompt = \"\"\"You are an assistant for question-answering tasks. \\\n",
  "Use the following pieces of retrieved context to answer the question. \\\n",
  "If you don't know the answer, just say that you don't know. \\\n",
@@ -289,54 +270,44 @@
  "qa_prompt = ChatPromptTemplate.from_messages(\n",
  "    [\n",
  "        (\"system\", qa_system_prompt),\n",
- "        MessagesPlaceholder(variable_name=\"chat_history\"),\n",
- "        (\"human\", \"{question}\"),\n",
+ "        MessagesPlaceholder(\"chat_history\"),\n",
+ "        (\"human\", \"{input}\"),\n",
  "    ]\n",
  ")\n",
  "\n",
- "\n",
- "def contextualized_question(input: dict):\n",
- "    if input.get(\"chat_history\"):\n",
- "        return contextualize_q_chain\n",
- "    else:\n",
- "        return input[\"question\"]\n",
+ "question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)\n",
  "\n",
- "\n",
- "rag_chain = (\n",
- "    RunnablePassthrough.assign(\n",
- "        context=contextualized_question | retriever | format_docs\n",
- "    )\n",
- "    | qa_prompt\n",
- "    | llm\n",
- ")"
+ "rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)"
  ]
  },
  {
  "cell_type": "code",
- "execution_count": 31,
- "id": "51fd0e54-5bb4-4a9a-b012-87a18ebe2bef",
+ "execution_count": 7,
+ "id": "0005810b-1b95-4666-a795-08d80e478b83",
  "metadata": {},
  "outputs": [
  {
- "data": {
- "text/plain": [
- "AIMessage(content='Common ways of task decomposition include:\\n\\n1. Using Chain of Thought (CoT): CoT is a prompting technique that instructs the model to \"think step by step\" and decompose complex tasks into smaller and simpler steps. This approach utilizes more computation at test-time and sheds light on the model\\'s thinking process.\\n\\n2. Prompting with LLM: Language Model (LLM) can be used to prompt the model with simple instructions like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\" This method guides the model to break down the task into manageable steps.\\n\\n3. Task-specific instructions: For certain tasks, task-specific instructions can be provided to guide the model in decomposing the task. For example, for writing a novel, the instruction \"Write a story outline\" can be given to help the model break down the task into smaller components.\\n\\n4. Human inputs: In some cases, human inputs can be used to assist in task decomposition. Humans can provide insights, expertise, and domain knowledge to help break down complex tasks into smaller subtasks.\\n\\nThese approaches aim to simplify complex tasks and enable more effective problem-solving and planning.')"
- ]
- },
- "execution_count": 31,
- "metadata": {},
- "output_type": "execute_result"
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Task decomposition can be done in several common ways, including using Language Model (LLM) with simple prompting like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\", providing task-specific instructions tailored to the specific task at hand, or incorporating human inputs to guide the decomposition process. These methods help in breaking down complex tasks into smaller, more manageable subtasks for efficient execution.\n"
+ ]
  }
  ],
  "source": [
  "from langchain_core.messages import HumanMessage\n",
  "\n",
  "chat_history = []\n",
  "\n",
  "question = \"What is Task Decomposition?\"\n",
- "ai_msg = rag_chain.invoke({\"question\": question, \"chat_history\": chat_history})\n",
- "chat_history.extend([HumanMessage(content=question), ai_msg])\n",
+ "ai_msg_1 = rag_chain.invoke({\"input\": question, \"chat_history\": chat_history})\n",
+ "chat_history.extend([HumanMessage(content=question), ai_msg_1[\"answer\"]])\n",
  "\n",
  "second_question = \"What are common ways of doing it?\"\n",
- "rag_chain.invoke({\"question\": second_question, \"chat_history\": chat_history})"
+ "ai_msg_2 = rag_chain.invoke({\"input\": second_question, \"chat_history\": chat_history})\n",
+ "\n",
+ "print(ai_msg_2[\"answer\"])"
  ]
  },
  {
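The output schema described in the markdown above can be checked directly; a quick sketch against the `rag_chain` just built (printed values are illustrative):

result = rag_chain.invoke({"input": "What is Task Decomposition?", "chat_history": []})

# create_retrieval_chain passes the inputs through and adds intermediate outputs:
print(sorted(result.keys()))    # ['answer', 'chat_history', 'context', 'input']
print(type(result["context"]))  # the list of retrieved Documents
print(result["answer"])         # the generated answer string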
@@ -346,16 +317,27 @@
  "source": [
  ":::tip\n",
  "\n",
- "Check out the [LangSmith trace](https://smith.langchain.com/public/b3001782-bb30-476a-886b-12da17ec258f/r) \n",
+ "Check out the [LangSmith trace](https://smith.langchain.com/public/243301e4-4cc5-4e52-a6e7-8cfe9208398d/r) \n",
  "\n",
  ":::"
  ]
  },
  {
  "cell_type": "markdown",
- "id": "fdf6c7e0-84f8-4747-b2ae-e84315152bd9",
+ "id": "0ab1ded4-76d9-453f-9b9b-db9a4560c737",
  "metadata": {},
  "source": [
+ "## Tying it together"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8a08a5ea-df5b-4547-93c6-2a3940dd5c3e",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "\n",
  "Here we've gone over how to add application logic for incorporating historical outputs, but we're still manually updating the chat history and inserting it into each input. In a real Q&A application we'll want some way of persisting chat history and some way of automatically inserting and updating it.\n",
  "\n",
  "For this we can use:\n",
@@ -363,23 +345,166 @@
  "- [BaseChatMessageHistory](/docs/modules/memory/chat_messages/): Store chat history.\n",
  "- [RunnableWithMessageHistory](/docs/expression_language/how_to/message_history): Wrapper for an LCEL chain and a `BaseChatMessageHistory` that handles injecting chat history into inputs and updating it after each invocation.\n",
  "\n",
- "For a detailed walkthrough of how to use these classes together to create a stateful conversational chain, head to the [How to add message history (memory)](/docs/expression_language/how_to/message_history) LCEL page."
+ "For a detailed walkthrough of how to use these classes together to create a stateful conversational chain, head to the [How to add message history (memory)](/docs/expression_language/how_to/message_history) LCEL page.\n",
+ "\n",
+ "Below, we implement a simple example of the second option, in which chat histories are stored in a plain dict.\n",
+ "\n",
+ "For convenience, we tie together all of the necessary steps in a single code cell:"
  ]
  },
  {
  "cell_type": "code",
- "execution_count": null,
- "id": "1f67a60a-0a31-4315-9cce-19c78d658f6a",
+ "execution_count": 1,
+ "id": "71c32048-1a41-465f-a9e2-c4affc332fd9",
  "metadata": {},
  "outputs": [],
- "source": []
+ "source": [
+ "import bs4\n",
+ "from langchain import hub\n",
+ "from langchain.chains import create_history_aware_retriever, create_retrieval_chain\n",
+ "from langchain.chains.combine_documents import create_stuff_documents_chain\n",
+ "from langchain_community.chat_message_histories import ChatMessageHistory\n",
+ "from langchain_community.document_loaders import WebBaseLoader\n",
+ "from langchain_community.vectorstores import Chroma\n",
+ "from langchain_core.chat_history import BaseChatMessageHistory\n",
+ "from langchain_core.output_parsers import StrOutputParser\n",
+ "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
+ "from langchain_core.runnables import RunnablePassthrough\n",
+ "from langchain_core.runnables.history import RunnableWithMessageHistory\n",
+ "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
+ "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
+ "\n",
+ "llm = ChatOpenAI(model_name=\"gpt-3.5-turbo\", temperature=0)\n",
+ "\n",
+ "\n",
+ "### Construct retriever ###\n",
+ "loader = WebBaseLoader(\n",
+ "    web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n",
+ "    bs_kwargs=dict(\n",
+ "        parse_only=bs4.SoupStrainer(\n",
+ "            class_=(\"post-content\", \"post-title\", \"post-header\")\n",
+ "        )\n",
+ "    ),\n",
+ ")\n",
+ "docs = loader.load()\n",
+ "\n",
+ "text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n",
+ "splits = text_splitter.split_documents(docs)\n",
+ "vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())\n",
+ "retriever = vectorstore.as_retriever()\n",
+ "\n",
+ "\n",
+ "### Contextualize question ###\n",
+ "contextualize_q_system_prompt = \"\"\"Given a chat history and the latest user question \\\n",
+ "which might reference context in the chat history, formulate a standalone question \\\n",
+ "which can be understood without the chat history. Do NOT answer the question, \\\n",
+ "just reformulate it if needed and otherwise return it as is.\"\"\"\n",
+ "contextualize_q_prompt = ChatPromptTemplate.from_messages(\n",
+ "    [\n",
+ "        (\"system\", contextualize_q_system_prompt),\n",
+ "        MessagesPlaceholder(\"chat_history\"),\n",
+ "        (\"human\", \"{input}\"),\n",
+ "    ]\n",
+ ")\n",
+ "history_aware_retriever = create_history_aware_retriever(\n",
+ "    llm, retriever, contextualize_q_prompt\n",
+ ")\n",
+ "\n",
+ "\n",
+ "### Answer question ###\n",
+ "qa_system_prompt = \"\"\"You are an assistant for question-answering tasks. \\\n",
+ "Use the following pieces of retrieved context to answer the question. \\\n",
+ "If you don't know the answer, just say that you don't know. \\\n",
+ "Use three sentences maximum and keep the answer concise.\\\n",
+ "\n",
+ "{context}\"\"\"\n",
+ "qa_prompt = ChatPromptTemplate.from_messages(\n",
+ "    [\n",
+ "        (\"system\", qa_system_prompt),\n",
+ "        MessagesPlaceholder(\"chat_history\"),\n",
+ "        (\"human\", \"{input}\"),\n",
+ "    ]\n",
+ ")\n",
+ "question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)\n",
+ "\n",
+ "rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)\n",
+ "\n",
+ "\n",
+ "### Statefully manage chat history ###\n",
+ "store = {}\n",
+ "\n",
+ "\n",
+ "def get_session_history(session_id: str) -> BaseChatMessageHistory:\n",
+ "    if session_id not in store:\n",
+ "        store[session_id] = ChatMessageHistory()\n",
+ "    return store[session_id]\n",
+ "\n",
+ "\n",
+ "conversational_rag_chain = RunnableWithMessageHistory(\n",
+ "    rag_chain,\n",
+ "    get_session_history,\n",
+ "    input_messages_key=\"input\",\n",
+ "    history_messages_key=\"chat_history\",\n",
+ "    output_messages_key=\"answer\",\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "6d0a7a73-d151-47d9-9e99-b4f3291c0322",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "'Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps agents or models handle difficult tasks by dividing them into more manageable subtasks. It can be achieved through methods like Chain of Thought (CoT) or Tree of Thoughts, which guide the model in thinking step by step or exploring multiple reasoning possibilities at each step.'"
+ ]
+ },
+ "execution_count": 2,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "conversational_rag_chain.invoke(\n",
+ "    {\"input\": \"What is Task Decomposition?\"},\n",
+ "    config={\n",
+ "        \"configurable\": {\"session_id\": \"abc123\"}\n",
+ "    },  # constructs a key \"abc123\" in `store`.\n",
+ ")[\"answer\"]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "17021822-896a-4513-a17d-1d20b1c5381c",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "'Task decomposition can be done in common ways such as using Language Model (LLM) with simple prompting, task-specific instructions, or human inputs. For example, LLM can be guided with prompts like \"Steps for XYZ\" to break down tasks, or specific instructions like \"Write a story outline\" can be given for task decomposition. Additionally, human inputs can also be utilized to decompose tasks into smaller, more manageable steps.'"
+ ]
+ },
+ "execution_count": 3,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "conversational_rag_chain.invoke(\n",
+ "    {\"input\": \"What are common ways of doing it?\"},\n",
+ "    config={\"configurable\": {\"session_id\": \"abc123\"}},\n",
+ ")[\"answer\"]"
+ ]
+ }
  ],
  "metadata": {
  "kernelspec": {
- "display_name": "poetry-venv",
+ "display_name": "Python 3 (ipykernel)",
  "language": "python",
- "name": "poetry-venv"
+ "name": "python3"
  },
  "language_info": {
  "codemirror_mode": {
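Because chat histories live in the plain `store` dict here, the accumulated conversation for a session can be inspected directly; a small sketch assuming the two invocations above have run:

from langchain_core.messages import AIMessage

for message in store["abc123"].messages:
    # ChatMessageHistory keeps the raw message objects in order.
    role = "AI" if isinstance(message, AIMessage) else "User"
    print(f"{role}: {message.content}\n")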
@@ -391,7 +516,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.9.1"
+ "version": "3.10.4"
  }
  },
  "nbformat": 4,
BIN docs/static/img/conversational_retrieval_chain.png (vendored, new file)
Binary file not shown. After: 91 KiB