{
|
||
"cells": [
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "5630b0ca",
|
||
"metadata": {},
|
||
"source": [
|
||
"# Build a Retrieval Augmented Generation (RAG) App\n",
|
||
"\n",
|
||
"One of the most powerful applications enabled by LLMs is sophisticated question-answering (Q&A) chatbots. These are applications that can answer questions about specific source information. These applications use a technique known as Retrieval Augmented Generation, or RAG.\n",
|
||
"\n",
|
||
"This tutorial will show how to build a simple Q&A application\n",
|
||
"over a text data source. Along the way we’ll go over a typical Q&A\n",
|
||
"architecture and highlight additional resources for more advanced Q&A techniques. We’ll also see\n",
|
||
"how LangSmith can help us trace and understand our application.\n",
|
||
"LangSmith will become increasingly helpful as our application grows in\n",
|
||
"complexity.\n",
|
||
"\n",
|
||
"If you're already familiar with basic retrieval, you might also be interested in\n",
|
||
"this [high-level overview of different retrieval techinques](/docs/concepts/#retrieval).\n",
|
||
"\n",
|
||
"## What is RAG?\n",
|
||
"\n",
|
||
"RAG is a technique for augmenting LLM knowledge with additional data.\n",
|
||
"\n",
|
||
"LLMs can reason about wide-ranging topics, but their knowledge is limited to the public data up to a specific point in time that they were trained on. If you want to build AI applications that can reason about private data or data introduced after a model's cutoff date, you need to augment the knowledge of the model with the specific information it needs. The process of bringing the appropriate information and inserting it into the model prompt is known as Retrieval Augmented Generation (RAG).\n",
|
||
"\n",
|
||
"LangChain has a number of components designed to help build Q&A applications, and RAG applications more generally. \n",
|
||
"\n",
|
||
"**Note**: Here we focus on Q&A for unstructured data. If you are interested for RAG over structured data, check out our tutorial on doing [question/answering over SQL data](/docs/tutorials/sql_qa).\n",
|
||
"\n",
|
||
"## Concepts\n",
|
||
"A typical RAG application has two main components:\n",
|
||
"\n",
|
||
"**Indexing**: a pipeline for ingesting data from a source and indexing it. *This usually happens offline.*\n",
|
||
"\n",
|
||
"**Retrieval and generation**: the actual RAG chain, which takes the user query at run time and retrieves the relevant data from the index, then passes that to the model.\n",
|
||
"\n",
|
||
"The most common full sequence from raw data to answer looks like:\n",
|
||
"\n",
|
||
"### Indexing\n",
|
||
"1. **Load**: First we need to load our data. This is done with [Document Loaders](/docs/concepts/#document-loaders).\n",
|
||
"2. **Split**: [Text splitters](/docs/concepts/#text-splitters) break large `Documents` into smaller chunks. This is useful both for indexing data and for passing it in to a model, since large chunks are harder to search over and won't fit in a model's finite context window.\n",
|
||
"3. **Store**: We need somewhere to store and index our splits, so that they can later be searched over. This is often done using a [VectorStore](/docs/concepts/#vector-stores) and [Embeddings](/docs/concepts/#embedding-models) model.\n",
|
||
"\n",
|
||
"\n",
|
||
"\n",
|
||
"### Retrieval and generation\n",
|
||
"4. **Retrieve**: Given a user input, relevant splits are retrieved from storage using a [Retriever](/docs/concepts/#retrievers).\n",
|
||
"5. **Generate**: A [ChatModel](/docs/concepts/#chat-models) / [LLM](/docs/concepts/#llms) produces an answer using a prompt that includes the question and the retrieved data\n",
|
||
"\n",
|
||
"\n",
|
||
"\n",
|
||
"\n",
|
||
"## Setup\n",
|
||
"\n",
|
||
"### Jupyter Notebook\n",
|
||
"\n",
|
||
"This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc) and going through guides in an interactive environment is a great way to better understand them.\n",
|
||
"\n",
|
||
"This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install.\n",
|
||
"\n",
|
||
"### Installation\n",
|
||
"\n",
|
||
"This tutorial requires these langchain dependencies:\n",
|
||
"\n",
|
||
"```{=mdx}\n",
|
||
"import Tabs from '@theme/Tabs';\n",
|
||
"import TabItem from '@theme/TabItem';\n",
|
||
"import CodeBlock from \"@theme/CodeBlock\";\n",
|
||
"\n",
|
||
"<Tabs>\n",
|
||
" <TabItem value=\"pip\" label=\"Pip\" default>\n",
|
||
" <CodeBlock language=\"bash\">pip install langchain langchain_community langchain_chroma</CodeBlock>\n",
|
||
" </TabItem>\n",
|
||
" <TabItem value=\"conda\" label=\"Conda\">\n",
|
||
" <CodeBlock language=\"bash\">conda install langchain langchain_community langchain_chroma -c conda-forge</CodeBlock>\n",
|
||
" </TabItem>\n",
|
||
"</Tabs>\n",
|
||
"\n",
|
||
"```\n",
|
||
"\n",
|
||
"\n",
|
||
"For more details, see our [Installation guide](/docs/how_to/installation).\n",
|
||
"\n",
|
||
"### LangSmith\n",
|
||
"\n",
|
||
"Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.\n",
|
||
"As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n",
|
||
"The best way to do this is with [LangSmith](https://smith.langchain.com).\n",
|
||
"\n",
|
||
"After you sign up at the link above, make sure to set your environment variables to start logging traces:\n",
|
||
"\n",
|
||
"```shell\n",
|
||
"export LANGCHAIN_TRACING_V2=\"true\"\n",
|
||
"export LANGCHAIN_API_KEY=\"...\"\n",
|
||
"```\n",
|
||
"\n",
|
||
"Or, if in a notebook, you can set them with:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"import getpass\n",
|
||
"import os\n",
|
||
"\n",
|
||
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
|
||
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
|
||
"```\n",
|
||
"## Preview\n",
|
||
"\n",
|
||
"In this guide we’ll build a QA app over as website. The specific website we will use is the [LLM Powered Autonomous\n",
|
||
"Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post\n",
|
||
"by Lilian Weng, which allows us to ask questions about the contents of\n",
|
||
"the post.\n",
|
||
"\n",
|
||
"We can create a simple indexing pipeline and RAG chain to do this in ~20\n",
|
||
"lines of code:\n",
|
||
"\n",
|
||
"```{=mdx}\n",
|
||
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
|
||
"\n",
|
||
"<ChatModelTabs customVarName=\"llm\" />\n",
|
||
"```"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "26ef9d35",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# | output: false\n",
|
||
"# | echo: false\n",
|
||
"\n",
|
||
"from langchain_openai import ChatOpenAI\n",
|
||
"\n",
|
||
"llm = ChatOpenAI(model=\"gpt-4\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 2,
|
||
"id": "6281ec7b",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"'Task Decomposition is a process where a complex task is broken down into smaller, simpler steps or subtasks. This technique is utilized to enhance model performance on complex tasks by making them more manageable. It can be done by using language models with simple prompting, task-specific instructions, or with human inputs.'"
|
||
]
|
||
},
|
||
"execution_count": 2,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"import bs4\n",
|
||
"from langchain import hub\n",
|
||
"from langchain_chroma import Chroma\n",
|
||
"from langchain_community.document_loaders import WebBaseLoader\n",
|
||
"from langchain_core.output_parsers import StrOutputParser\n",
|
||
"from langchain_core.runnables import RunnablePassthrough\n",
|
||
"from langchain_openai import OpenAIEmbeddings\n",
|
||
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
|
||
"\n",
|
||
"# Load, chunk and index the contents of the blog.\n",
|
||
"loader = WebBaseLoader(\n",
|
||
" web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n",
|
||
" bs_kwargs=dict(\n",
|
||
" parse_only=bs4.SoupStrainer(\n",
|
||
" class_=(\"post-content\", \"post-title\", \"post-header\")\n",
|
||
" )\n",
|
||
" ),\n",
|
||
")\n",
|
||
"docs = loader.load()\n",
|
||
"\n",
|
||
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n",
|
||
"splits = text_splitter.split_documents(docs)\n",
|
||
"vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())\n",
|
||
"\n",
|
||
"# Retrieve and generate using the relevant snippets of the blog.\n",
|
||
"retriever = vectorstore.as_retriever()\n",
|
||
"prompt = hub.pull(\"rlm/rag-prompt\")\n",
|
||
"\n",
|
||
"\n",
|
||
"def format_docs(docs):\n",
|
||
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
|
||
"\n",
|
||
"\n",
|
||
"rag_chain = (\n",
|
||
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
|
||
" | prompt\n",
|
||
" | llm\n",
|
||
" | StrOutputParser()\n",
|
||
")\n",
|
||
"\n",
|
||
"rag_chain.invoke(\"What is Task Decomposition?\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 4,
|
||
"id": "3d56d203",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# cleanup\n",
|
||
"vectorstore.delete_collection()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "c9d51135",
|
||
"metadata": {},
|
||
"source": [
|
||
"Check out the [LangSmith\n",
|
||
"trace](https://smith.langchain.com/public/1c6ca97e-445b-4d00-84b4-c7befcbc59fe/r).\n",
|
||
"\n",
|
||
"## Detailed walkthrough\n",
|
||
"\n",
|
||
"Let’s go through the above code step-by-step to really understand what’s\n",
|
||
"going on.\n",
|
||
"\n",
|
||
"## 1. Indexing: Load {#indexing-load}\n",
|
||
"\n",
|
||
"We need to first load the blog post contents. We can use\n",
|
||
"[DocumentLoaders](/docs/concepts#document-loaders)\n",
|
||
"for this, which are objects that load in data from a source and return a\n",
|
||
"list of\n",
|
||
"[Documents](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html).\n",
|
||
"A `Document` is an object with some `page_content` (str) and `metadata`\n",
|
||
"(dict).\n",
|
||
"\n",
|
||
"In this case we’ll use the\n",
|
||
"[WebBaseLoader](/docs/integrations/document_loaders/web_base),\n",
|
||
"which uses `urllib` to load HTML from web URLs and `BeautifulSoup` to\n",
|
||
"parse it to text. We can customize the HTML -\\> text parsing by passing\n",
|
||
"in parameters to the `BeautifulSoup` parser via `bs_kwargs` (see\n",
|
||
"[BeautifulSoup\n",
|
||
"docs](https://beautiful-soup-4.readthedocs.io/en/latest/#beautifulsoup)).\n",
|
||
"In this case only HTML tags with class “post-content”, “post-title”, or\n",
|
||
"“post-header” are relevant, so we’ll remove all others."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 3,
|
||
"id": "f5ba0122-8c92-4895-b5ef-f03a634e3fdf",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"43131"
|
||
]
|
||
},
|
||
"execution_count": 3,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"import bs4\n",
|
||
"from langchain_community.document_loaders import WebBaseLoader\n",
|
||
"\n",
|
||
"# Only keep post title, headers, and content from the full HTML.\n",
|
||
"bs4_strainer = bs4.SoupStrainer(class_=(\"post-title\", \"post-header\", \"post-content\"))\n",
|
||
"loader = WebBaseLoader(\n",
|
||
" web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n",
|
||
" bs_kwargs={\"parse_only\": bs4_strainer},\n",
|
||
")\n",
|
||
"docs = loader.load()\n",
|
||
"\n",
|
||
"len(docs[0].page_content)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 4,
|
||
"id": "5cf74be6-5f40-4f6d-8689-b6b42ced8b70",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\n",
|
||
"\n",
|
||
" LLM Powered Autonomous Agents\n",
|
||
" \n",
|
||
"Date: June 23, 2023 | Estimated Reading Time: 31 min | Author: Lilian Weng\n",
|
||
"\n",
|
||
"\n",
|
||
"Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\n",
|
||
"Agent System Overview#\n",
|
||
"In\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"print(docs[0].page_content[:500])"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "07845e7a",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Go deeper\n",
|
||
"\n",
|
||
"`DocumentLoader`: Object that loads data from a source as list of\n",
|
||
"`Documents`.\n",
|
||
"\n",
|
||
"- [Docs](/docs/how_to#document-loaders):\n",
|
||
" Detailed documentation on how to use `DocumentLoaders`.\n",
|
||
"- [Integrations](/docs/integrations/document_loaders/): 160+\n",
|
||
" integrations to choose from.\n",
|
||
"- [Interface](https://api.python.langchain.com/en/latest/document_loaders/langchain_core.document_loaders.base.BaseLoader.html):\n",
|
||
" API reference for the base interface.\n",
|
||
"\n",
|
||
"\n",
|
||
"## 2. Indexing: Split {#indexing-split}\n",
|
||
"\n",
|
||
"\n",
|
||
"Our loaded document is over 42k characters long. This is too long to fit\n",
|
||
"in the context window of many models. Even for those models that could\n",
|
||
"fit the full post in their context window, models can struggle to find\n",
|
||
"information in very long inputs.\n",
|
||
"\n",
|
||
"To handle this we’ll split the `Document` into chunks for embedding and\n",
|
||
"vector storage. This should help us retrieve only the most relevant bits\n",
|
||
"of the blog post at run time.\n",
|
||
"\n",
|
||
"In this case we’ll split our documents into chunks of 1000 characters\n",
|
||
"with 200 characters of overlap between chunks. The overlap helps\n",
|
||
"mitigate the possibility of separating a statement from important\n",
|
||
"context related to it. We use the\n",
|
||
"[RecursiveCharacterTextSplitter](/docs/how_to/recursive_text_splitter),\n",
|
||
"which will recursively split the document using common separators like\n",
|
||
"new lines until each chunk is the appropriate size. This is the\n",
|
||
"recommended text splitter for generic text use cases.\n",
|
||
"\n",
|
||
"We set `add_start_index=True` so that the character index at which each\n",
|
||
"split Document starts within the initial Document is preserved as\n",
|
||
"metadata attribute “start_index”."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 5,
|
||
"id": "6aa3f8c0-5113-4c36-9706-ee702407173a",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"66"
|
||
]
|
||
},
|
||
"execution_count": 5,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
|
||
"\n",
|
||
"text_splitter = RecursiveCharacterTextSplitter(\n",
|
||
" chunk_size=1000, chunk_overlap=200, add_start_index=True\n",
|
||
")\n",
|
||
"all_splits = text_splitter.split_documents(docs)\n",
|
||
"\n",
|
||
"len(all_splits)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 6,
|
||
"id": "2257752c-bed2-4d57-be8e-d275bfe70ace",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"969"
|
||
]
|
||
},
|
||
"execution_count": 6,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"len(all_splits[0].page_content)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 7,
|
||
"id": "325fdc48-4a24-4645-9d08-0d22f5be5e13",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"{'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/',\n",
|
||
" 'start_index': 7056}"
|
||
]
|
||
},
|
||
"execution_count": 7,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"all_splits[10].metadata"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "7046d580",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Go deeper\n",
|
||
"\n",
|
||
"`TextSplitter`: Object that splits a list of `Document`s into smaller\n",
|
||
"chunks. Subclass of `DocumentTransformer`s.\n",
|
||
"\n",
|
||
"- Learn more about splitting text using different methods by reading the [how-to docs](/docs/how_to#text-splitters)\n",
|
||
"- [Code (py or js)](/docs/integrations/document_loaders/source_code)\n",
|
||
"- [Scientific papers](/docs/integrations/document_loaders/grobid)\n",
|
||
"- [Interface](https://api.python.langchain.com/en/latest/base/langchain_text_splitters.base.TextSplitter.html): API reference for the base interface.\n",
|
||
"\n",
|
||
"`DocumentTransformer`: Object that performs a transformation on a list\n",
|
||
"of `Document` objects.\n",
|
||
"\n",
|
||
"- [Docs](/docs/how_to#text-splitters): Detailed documentation on how to use `DocumentTransformers`\n",
|
||
"- [Integrations](/docs/integrations/document_transformers/)\n",
|
||
"- [Interface](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.transformers.BaseDocumentTransformer.html): API reference for the base interface.\n",
|
||
"\n",
|
||
"## 3. Indexing: Store {#indexing-store}\n",
|
||
"\n",
|
||
"Now we need to index our 66 text chunks so that we can search over them\n",
|
||
"at runtime. The most common way to do this is to embed the contents of\n",
|
||
"each document split and insert these embeddings into a vector database\n",
|
||
"(or vector store). When we want to search over our splits, we take a\n",
|
||
"text search query, embed it, and perform some sort of “similarity”\n",
|
||
"search to identify the stored splits with the most similar embeddings to\n",
|
||
"our query embedding. The simplest similarity measure is cosine\n",
|
||
"similarity — we measure the cosine of the angle between each pair of\n",
|
||
"embeddings (which are high dimensional vectors).\n",
|
||
"\n",
|
||
"We can embed and store all of our document splits in a single command\n",
|
||
"using the [Chroma](/docs/integrations/vectorstores/chroma)\n",
|
||
"vector store and\n",
|
||
"[OpenAIEmbeddings](/docs/integrations/text_embedding/openai)\n",
|
||
"model.\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 8,
|
||
"id": "0b44b41a-8b25-42ad-9e37-7baf82a058cd",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"from langchain_chroma import Chroma\n",
|
||
"from langchain_openai import OpenAIEmbeddings\n",
|
||
"\n",
|
||
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "dbddc12e",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Go deeper\n",
|
||
"\n",
|
||
"`Embeddings`: Wrapper around a text embedding model, used for converting\n",
|
||
"text to embeddings.\n",
|
||
"\n",
|
||
"- [Docs](/docs/how_to/embed_text): Detailed documentation on how to use embeddings.\n",
|
||
"- [Integrations](/docs/integrations/text_embedding/): 30+ integrations to choose from.\n",
|
||
"- [Interface](https://api.python.langchain.com/en/latest/embeddings/langchain_core.embeddings.Embeddings.html): API reference for the base interface.\n",
|
||
"\n",
|
||
"`VectorStore`: Wrapper around a vector database, used for storing and\n",
|
||
"querying embeddings.\n",
|
||
"\n",
|
||
"- [Docs](/docs/how_to/vectorstores): Detailed documentation on how to use vector stores.\n",
|
||
"- [Integrations](/docs/integrations/vectorstores/): 40+ integrations to choose from.\n",
|
||
"- [Interface](https://api.python.langchain.com/en/latest/vectorstores/langchain_core.vectorstores.VectorStore.html): API reference for the base interface.\n",
|
||
"\n",
|
||
"This completes the **Indexing** portion of the pipeline. At this point\n",
|
||
"we have a query-able vector store containing the chunked contents of our\n",
|
||
"blog post. Given a user question, we should ideally be able to return\n",
|
||
"the snippets of the blog post that answer the question.\n",
|
||
"\n",
|
||
"## 4. Retrieval and Generation: Retrieve {#retrieval-and-generation-retrieve}\n",
|
||
"\n",
|
||
"Now let’s write the actual application logic. We want to create a simple\n",
|
||
"application that takes a user question, searches for documents relevant\n",
|
||
"to that question, passes the retrieved documents and initial question to\n",
|
||
"a model, and returns an answer.\n",
|
||
"\n",
|
||
"First we need to define our logic for searching over documents.\n",
|
||
"LangChain defines a\n",
|
||
"[Retriever](/docs/concepts#retrievers/) interface\n",
|
||
"which wraps an index that can return relevant `Documents` given a string\n",
|
||
"query.\n",
|
||
"\n",
|
||
"The most common type of `Retriever` is the\n",
|
||
"[VectorStoreRetriever](/docs/how_to/vectorstore_retriever),\n",
|
||
"which uses the similarity search capabilities of a vector store to\n",
|
||
"facilitate retrieval. Any `VectorStore` can easily be turned into a\n",
|
||
"`Retriever` with `VectorStore.as_retriever()`:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 9,
|
||
"id": "1a0d25f8-8a45-4ec7-b419-c36e231fde13",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"6"
|
||
]
|
||
},
|
||
"execution_count": 9,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"retriever = vectorstore.as_retriever(search_type=\"similarity\", search_kwargs={\"k\": 6})\n",
|
||
"\n",
|
||
"retrieved_docs = retriever.invoke(\"What are the approaches to Task Decomposition?\")\n",
|
||
"\n",
|
||
"len(retrieved_docs)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 10,
|
||
"id": "58db0a6a-f1ad-4d28-acf8-98be9ed3c968",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\n",
|
||
"Task decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"print(retrieved_docs[0].page_content)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "8bb602b0",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Go deeper\n",
|
||
"\n",
|
||
"Vector stores are commonly used for retrieval, but there are other ways\n",
|
||
"to do retrieval, too.\n",
|
||
"\n",
|
||
"`Retriever`: An object that returns `Document`s given a text query\n",
|
||
"\n",
|
||
"- [Docs](/docs/how_to#retrievers): Further\n",
|
||
" documentation on the interface and built-in retrieval techniques.\n",
|
||
" Some of which include:\n",
|
||
" - `MultiQueryRetriever` [generates variants of the input\n",
|
||
" question](/docs/how_to/MultiQueryRetriever)\n",
|
||
" to improve retrieval hit rate.\n",
|
||
" - `MultiVectorRetriever` instead generates\n",
|
||
" [variants of the\n",
|
||
" embeddings](/docs/how_to/multi_vector),\n",
|
||
" also in order to improve retrieval hit rate.\n",
|
||
" - `Max marginal relevance` selects for [relevance and\n",
|
||
" diversity](https://www.cs.cmu.edu/~jgc/publication/The_Use_MMR_Diversity_Based_LTMIR_1998.pdf)\n",
|
||
" among the retrieved documents to avoid passing in duplicate\n",
|
||
" context.\n",
|
||
" - Documents can be filtered during vector store retrieval using\n",
|
||
" metadata filters, such as with a [Self Query\n",
|
||
" Retriever](/docs/how_to/self_query).\n",
|
||
"- [Integrations](/docs/integrations/retrievers/): Integrations\n",
|
||
" with retrieval services.\n",
|
||
"- [Interface](https://api.python.langchain.com/en/latest/retrievers/langchain_core.retrievers.BaseRetriever.html):\n",
|
||
" API reference for the base interface.\n",
|
||
"\n",
|
||
"## 5. Retrieval and Generation: Generate {#retrieval-and-generation-generate}\n",
|
||
"\n",
|
||
"Let’s put it all together into a chain that takes a question, retrieves\n",
|
||
"relevant documents, constructs a prompt, passes that to a model, and\n",
|
||
"parses the output.\n",
|
||
"\n",
|
||
"We’ll use the gpt-3.5-turbo OpenAI chat model, but any LangChain `LLM`\n",
|
||
"or `ChatModel` could be substituted in.\n",
|
||
"\n",
|
||
"```{=mdx}\n",
|
||
"<ChatModelTabs\n",
|
||
" customVarName=\"llm\"\n",
|
||
" anthropicParams={`\"model=\"claude-3-sonnet-20240229\", temperature=0.2, max_tokens=1024\"`}\n",
|
||
"/>\n",
|
||
"```\n",
|
||
"\n",
|
||
"We’ll use a prompt for RAG that is checked into the LangChain prompt hub\n",
|
||
"([here](https://smith.langchain.com/hub/rlm/rag-prompt))."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 11,
|
||
"id": "ff01d415-7b0f-469d-bfda-b9cb672da611",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"[HumanMessage(content=\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\\nQuestion: filler question \\nContext: filler context \\nAnswer:\")]"
|
||
]
|
||
},
|
||
"execution_count": 11,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain import hub\n",
|
||
"\n",
|
||
"prompt = hub.pull(\"rlm/rag-prompt\")\n",
|
||
"\n",
|
||
"example_messages = prompt.invoke(\n",
|
||
" {\"context\": \"filler context\", \"question\": \"filler question\"}\n",
|
||
").to_messages()\n",
|
||
"\n",
|
||
"example_messages"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 12,
|
||
"id": "2885ed99-31a0-4d7e-b9b0-af49c462caf4",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\n",
|
||
"Question: filler question \n",
|
||
"Context: filler context \n",
|
||
"Answer:\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"print(example_messages[0].content)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "4516200c",
|
||
"metadata": {},
|
||
"source": [
|
||
"We’ll use the [LCEL Runnable](/docs/concepts#langchain-expression-language-lcel)\n",
|
||
"protocol to define the chain, allowing us to \n",
|
||
"\n",
|
||
"- pipe together components and functions in a transparent way \n",
|
||
"- automatically trace our chain in LangSmith \n",
|
||
"- get streaming, async, and batched calling out of the box.\n",
|
||
"\n",
|
||
"Here is the implementation:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 13,
|
||
"id": "d6820cf3-e14d-4275-bd00-aa1b8262b1ae",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"Task Decomposition is a process where a complex task is broken down into smaller, more manageable steps or parts. This is often done using techniques like \"Chain of Thought\" or \"Tree of Thoughts\", which instruct a model to \"think step by step\" and transform large tasks into multiple simple tasks. Task decomposition can be prompted in a model, guided by task-specific instructions, or influenced by human inputs."
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.output_parsers import StrOutputParser\n",
|
||
"from langchain_core.runnables import RunnablePassthrough\n",
|
||
"\n",
|
||
"\n",
|
||
"def format_docs(docs):\n",
|
||
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
|
||
"\n",
|
||
"\n",
|
||
"rag_chain = (\n",
|
||
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
|
||
" | prompt\n",
|
||
" | llm\n",
|
||
" | StrOutputParser()\n",
|
||
")\n",
|
||
"\n",
|
||
"for chunk in rag_chain.stream(\"What is Task Decomposition?\"):\n",
|
||
" print(chunk, end=\"\", flush=True)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "3dacf214-0803-46f1-960d-42336a545e39",
|
||
"metadata": {},
|
||
"source": [
|
||
"Let's dissect the LCEL to understand what's going on.\n",
|
||
"\n",
|
||
"First: each of these components (`retriever`, `prompt`, `llm`, etc.) are instances of [Runnable](/docs/concepts#langchain-expression-language-lcel). This means that they implement the same methods-- such as sync and async `.invoke`, `.stream`, or `.batch`-- which makes them easier to connect together. They can be connected into a [RunnableSequence](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableSequence.html)-- another Runnable-- via the `|` operator.\n",
|
||
"\n",
|
||
"LangChain will automatically cast certain objects to runnables when met with the `|` operator. Here, `format_docs` is cast to a [RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html), and the dict with `\"context\"` and `\"question\"` is cast to a [RunnableParallel](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableParallel.html). The details are less important than the bigger point, which is that each object is a Runnable.\n",
|
||
"\n",
|
||
"Let's trace how the input question flows through the above runnables.\n",
|
||
"\n",
|
||
"As we've seen above, the input to `prompt` is expected to be a dict with keys `\"context\"` and `\"question\"`. So the first element of this chain builds runnables that will calculate both of these from the input question:\n",
|
||
"- `retriever | format_docs` passes the question through the retriever, generating [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects, and then to `format_docs` to generate strings;\n",
|
||
"- `RunnablePassthrough()` passes through the input question unchanged.\n",
|
||
"\n",
|
||
"That is, if you constructed\n",
|
||
"```python\n",
|
||
"chain = (\n",
|
||
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
|
||
" | prompt\n",
|
||
")\n",
|
||
"```\n",
|
||
"Then `chain.invoke(question)` would build a formatted prompt, ready for inference. (Note: when developing with LCEL, it can be practical to test with sub-chains like this.)\n",
|
||
"\n",
|
||
"The last steps of the chain are `llm`, which runs the inference, and `StrOutputParser()`, which just plucks the string content out of the LLM's output message.\n",
|
||
"\n",
|
||
"You can analyze the individual steps of this chain via its [LangSmith\n",
|
||
"trace](https://smith.langchain.com/public/1799e8db-8a6d-4eb2-84d5-46e8d7d5a99b/r).\n",
|
||
"\n",
|
||
"### Built-in chains\n",
|
||
"\n",
|
||
"If preferred, LangChain includes convenience functions that implement the above LCEL. We compose two functions:\n",
|
||
"\n",
|
||
"- [create_stuff_documents_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) specifies how retrieved context is fed into a prompt and LLM. In this case, we will \"stuff\" the contents into the prompt -- i.e., we will include all retrieved context without any summarization or other processing. It largely implements our above `rag_chain`, with input keys `context` and `input`-- it generates an answer using retrieved context and query.\n",
|
||
"- [create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) adds the retrieval step and propagates the retrieved context through the chain, providing it alongside the final answer. It has input key `input`, and includes `input`, `context`, and `answer` in its output."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 37,
|
||
"id": "e75bfe98-d9e4-4868-bae1-5811437d859b",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"Task Decomposition is a process in which complex tasks are broken down into smaller and simpler steps. Techniques like Chain of Thought (CoT) and Tree of Thoughts are used to enhance model performance on these tasks. The CoT method instructs the model to think step by step, decomposing hard tasks into manageable ones, while Tree of Thoughts extends CoT by exploring multiple reasoning possibilities at each step, creating a tree structure of thoughts.\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain.chains import create_retrieval_chain\n",
|
||
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"\n",
|
||
"system_prompt = (\n",
|
||
" \"You are an assistant for question-answering tasks. \"\n",
|
||
" \"Use the following pieces of retrieved context to answer \"\n",
|
||
" \"the question. If you don't know the answer, say that you \"\n",
|
||
" \"don't know. Use three sentences maximum and keep the \"\n",
|
||
" \"answer concise.\"\n",
|
||
" \"\\n\\n\"\n",
|
||
" \"{context}\"\n",
|
||
")\n",
|
||
"\n",
|
||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", system_prompt),\n",
|
||
" (\"human\", \"{input}\"),\n",
|
||
" ]\n",
|
||
")\n",
|
||
"\n",
|
||
"\n",
|
||
"question_answer_chain = create_stuff_documents_chain(llm, prompt)\n",
|
||
"rag_chain = create_retrieval_chain(retriever, question_answer_chain)\n",
|
||
"\n",
|
||
"response = rag_chain.invoke({\"input\": \"What is Task Decomposition?\"})\n",
|
||
"print(response[\"answer\"])"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "0fe711ea-592b-44a1-89b3-cee33c81aca4",
|
||
"metadata": {},
|
||
"source": [
|
||
"#### Returning sources\n",
|
||
"Often in Q&A applications it's important to show users the sources that were used to generate the answer. LangChain's built-in `create_retrieval_chain` will propagate retrieved source documents through to the output in the `\"context\"` key:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 41,
|
||
"id": "9d4cec1a-75d6-4479-929f-72cadb2dcde8",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.' metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}\n",
|
||
"\n",
|
||
"page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.' metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'start_index': 1585}\n",
|
||
"\n",
|
||
"page_content='Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.' metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'start_index': 2192}\n",
|
||
"\n",
|
||
"page_content='Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.' metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}\n",
|
||
"\n",
|
||
"page_content='Resources:\\n1. Internet access for searches and information gathering.\\n2. Long Term memory management.\\n3. GPT-3.5 powered Agents for delegation of simple tasks.\\n4. File output.\\n\\nPerformance Evaluation:\\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\\n2. Constructively self-criticize your big-picture behavior constantly.\\n3. Reflect on past decisions and strategies to refine your approach.\\n4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.' metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}\n",
|
||
"\n",
|
||
"page_content='Resources:\\n1. Internet access for searches and information gathering.\\n2. Long Term memory management.\\n3. GPT-3.5 powered Agents for delegation of simple tasks.\\n4. File output.\\n\\nPerformance Evaluation:\\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\\n2. Constructively self-criticize your big-picture behavior constantly.\\n3. Reflect on past decisions and strategies to refine your approach.\\n4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.' metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'start_index': 29630}\n",
|
||
"\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"for document in response[\"context\"]:\n",
|
||
" print(document)\n",
|
||
" print()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "7cd57618",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Go deeper\n",
|
||
"\n",
|
||
"#### Choosing a model\n",
|
||
"\n",
|
||
"`ChatModel`: An LLM-backed chat model. Takes in a sequence of messages\n",
|
||
"and returns a message.\n",
|
||
"\n",
|
||
"- [Docs](/docs/how_to#chat-models)\n",
|
||
"- [Integrations](/docs/integrations/chat/): 25+ integrations to choose from.\n",
|
||
"- [Interface](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.BaseChatModel.html): API reference for the base interface.\n",
|
||
"\n",
|
||
"`LLM`: A text-in-text-out LLM. Takes in a string and returns a string.\n",
|
||
"\n",
|
||
"- [Docs](/docs/how_to#llms)\n",
|
||
"- [Integrations](/docs/integrations/llms): 75+ integrations to choose from.\n",
|
||
"- [Interface](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.llms.BaseLLM.html): API reference for the base interface.\n",
|
||
"\n",
|
||
"See a guide on RAG with locally-running models\n",
|
||
"[here](/docs/tutorials/local_rag).\n",
|
||
"\n",
|
||
"#### Customizing the prompt\n",
|
||
"\n",
|
||
"As shown above, we can load prompts (e.g., [this RAG\n",
|
||
"prompt](https://smith.langchain.com/hub/rlm/rag-prompt)) from the prompt\n",
|
||
"hub. The prompt can also be easily customized:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 17,
|
||
"id": "2ac552b6",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"'Task decomposition is the process of breaking down a complex task into smaller, more manageable parts. Techniques like Chain of Thought (CoT) and Tree of Thoughts allow an agent to \"think step by step\" and explore multiple reasoning possibilities, respectively. This process can be executed by a Language Model with simple prompts, task-specific instructions, or human inputs. Thanks for asking!'"
|
||
]
|
||
},
|
||
"execution_count": 17,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import PromptTemplate\n",
|
||
"\n",
|
||
"template = \"\"\"Use the following pieces of context to answer the question at the end.\n",
|
||
"If you don't know the answer, just say that you don't know, don't try to make up an answer.\n",
|
||
"Use three sentences maximum and keep the answer as concise as possible.\n",
|
||
"Always say \"thanks for asking!\" at the end of the answer.\n",
|
||
"\n",
|
||
"{context}\n",
|
||
"\n",
|
||
"Question: {question}\n",
|
||
"\n",
|
||
"Helpful Answer:\"\"\"\n",
|
||
"custom_rag_prompt = PromptTemplate.from_template(template)\n",
|
||
"\n",
|
||
"rag_chain = (\n",
|
||
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
|
||
" | custom_rag_prompt\n",
|
||
" | llm\n",
|
||
" | StrOutputParser()\n",
|
||
")\n",
|
||
"\n",
|
||
"rag_chain.invoke(\"What is Task Decomposition?\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "82e4d779",
|
||
"metadata": {},
|
||
"source": [
|
||
"Check out the [LangSmith\n",
|
||
"trace](https://smith.langchain.com/public/da23c4d8-3b33-47fd-84df-a3a582eedf84/r)\n",
|
||
"\n",
|
||
"## Next steps\n",
|
||
"\n",
|
||
"We've covered the steps to build a basic Q&A app over data:\n",
|
||
"\n",
|
||
"- Loading data with a [Document Loader](/docs/concepts#document-loaders)\n",
|
||
"- Chunking the indexed data with a [Text Splitter](/docs/concepts#text-splitters) to make it more easily usable by a model\n",
|
||
"- [Embedding the data](/docs/concepts#embedding-models) and storing the data in a [vectorstore](/docs/how_to/vectorstores)\n",
|
||
"- [Retrieving](/docs/concepts#retrievers) the previously stored chunks in response to incoming questions\n",
|
||
"- Generating an answer using the retrieved chunks as context\n",
|
||
"\n",
|
||
"There’s plenty of features, integrations, and extensions to explore in each of\n",
|
||
"the above sections. Along from the **Go deeper** sources mentioned\n",
|
||
"above, good next steps include:\n",
|
||
"\n",
|
||
"- [Return sources](/docs/how_to/qa_sources): Learn how to return source documents\n",
|
||
"- [Streaming](/docs/how_to/streaming): Learn how to stream outputs and intermediate steps\n",
|
||
"- [Add chat history](/docs/how_to/message_history): Learn how to add chat history to your app\n",
|
||
"- [Retrieval conceptual guide](/docs/concepts/#retrieval): A high-level overview of specific retrieval techniques"
|
||
]
|
||
}
|
||
],
|
||
"metadata": {
|
||
"kernelspec": {
|
||
"display_name": "Python 3 (ipykernel)",
|
||
"language": "python",
|
||
"name": "python3"
|
||
},
|
||
"language_info": {
|
||
"codemirror_mode": {
|
||
"name": "ipython",
|
||
"version": 3
|
||
},
|
||
"file_extension": ".py",
|
||
"mimetype": "text/x-python",
|
||
"name": "python",
|
||
"nbconvert_exporter": "python",
|
||
"pygments_lexer": "ipython3",
|
||
"version": "3.10.4"
|
||
}
|
||
},
|
||
"nbformat": 4,
|
||
"nbformat_minor": 5
|
||
}
|