{
"cells": [
{
"cell_type": "markdown",
"id": "d9172545",
"metadata": {},
"source": [
"# How to retrieve using multiple vectors per document\n",
"\n",
"It can often be useful to store multiple vectors per document. There are multiple use cases where this is beneficial. For example, we can embed multiple chunks of a document and associate those embeddings with the parent document, allowing retriever hits on the chunks to return the larger document.\n",
"\n",
"LangChain implements a base [MultiVectorRetriever](https://python.langchain.com/v0.2/api_reference/langchain/retrievers/langchain.retrievers.multi_vector.MultiVectorRetriever.html), which simplifies this process. Much of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the `MultiVectorRetriever`.\n",
"\n",
"The methods to create multiple vectors per document include:\n",
"\n",
"- Smaller chunks: split a document into smaller chunks, and embed those (this is [ParentDocumentRetriever](https://python.langchain.com/v0.2/api_reference/langchain/retrievers/langchain.retrievers.parent_document_retriever.ParentDocumentRetriever.html)).\n",
"- Summary: create a summary for each document, embed that along with (or instead of) the document.\n",
"- Hypothetical questions: create hypothetical questions that each document would be appropriate to answer, embed those along with (or instead of) the document.\n",
"\n",
"Note that this also enables another method of adding embeddings - manually. This is useful because you can explicitly add questions or queries that should lead to a document being recovered, giving you more control.\n",
"\n",
"Below we walk through an example. First we instantiate some documents. We will index them in an (in-memory) [Chroma](/docs/integrations/providers/chroma/) vector store using [OpenAI](https://python.langchain.com/v0.2/docs/integrations/text_embedding/openai/) embeddings, but any LangChain vector store or embeddings model will suffice."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "09cecd95-3499-465a-895a-944627ffb77f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-chroma langchain langchain-openai > /dev/null"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "18c1421a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.storage import InMemoryByteStore\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"loaders = [\n",
" TextLoader(\"paul_graham_essay.txt\"),\n",
" TextLoader(\"state_of_the_union.txt\"),\n",
"]\n",
"docs = []\n",
"for loader in loaders:\n",
" docs.extend(loader.load())\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000)\n",
"docs = text_splitter.split_documents(docs)\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
" collection_name=\"full_documents\", embedding_function=OpenAIEmbeddings()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "fa17beda",
"metadata": {},
"source": [
"## Smaller chunks\n",
"\n",
"Often times it can be useful to retrieve larger chunks of information, but embed smaller chunks. This allows for embeddings to capture the semantic meaning as closely as possible, but for as much context as possible to be passed downstream. Note that this is what the [ParentDocumentRetriever](https://python.langchain.com/v0.2/api_reference/langchain/retrievers/langchain.retrievers.parent_document_retriever.ParentDocumentRetriever.html) does. Here we show what is going on under the hood.\n",
"\n",
"We will make a distinction between the vector store, which indexes embeddings of the (sub) documents, and the document store, which houses the \"parent\" documents and associates them with an identifier."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0e7b6b45",
"metadata": {},
"outputs": [],
"source": [
"import uuid\n",
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"\n",
"# The storage layer for the parent documents\n",
"store = InMemoryByteStore()\n",
"id_key = \"doc_id\"\n",
"\n",
"# The retriever (empty to start)\n",
"retriever = MultiVectorRetriever(\n",
" vectorstore=vectorstore,\n",
" byte_store=store,\n",
" id_key=id_key,\n",
")\n",
"\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]"
]
},
{
"cell_type": "markdown",
"id": "d4feded4-856a-4282-91c3-53aabc62e6ff",
"metadata": {},
"source": [
"We next generate the \"sub\" documents by splitting the original documents. Note that we store the document identifier in the `metadata` of the corresponding [Document](https://python.langchain.com/v0.2/api_reference/core/documents/langchain_core.documents.base.Document.html) object."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5d23247d",
"metadata": {},
"outputs": [],
"source": [
"# The splitter to use to create smaller chunks\n",
"child_text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)\n",
"\n",
"sub_docs = []\n",
"for i, doc in enumerate(docs):\n",
" _id = doc_ids[i]\n",
" _sub_docs = child_text_splitter.split_documents([doc])\n",
" for _doc in _sub_docs:\n",
" _doc.metadata[id_key] = _id\n",
" sub_docs.extend(_sub_docs)"
]
},
{
"cell_type": "markdown",
"id": "8e0634f8-90d5-4250-981a-5257c8a6d455",
"metadata": {},
"source": [
"Finally, we index the documents in our vector store and document store:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "92ed5861",
"metadata": {},
"outputs": [],
"source": [
"retriever.vectorstore.add_documents(sub_docs)\n",
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
},
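{
"cell_type": "markdown",
"id": "3a1f5c2e-8b7d-4f6a-9c1e-0d2b4e6f8a1c",
"metadata": {},
"source": [
"The document store now associates each identifier with its parent document. As a quick illustrative sketch, a parent document can be looked up by id directly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4c8d1e5a-9b2f-4d7c-a3e6-5f0b8c1d2e93",
"metadata": {},
"outputs": [],
"source": [
"# Look up the first parent document by its identifier\n",
"# (doc_ids[0] is simply the first id we generated above)\n",
"retriever.docstore.mget([doc_ids[0]])[0].metadata"
]
},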
{
"cell_type": "markdown",
"id": "14c48c6d-850c-4317-9b6e-1ade92f2f710",
"metadata": {},
"source": [
"The vector store alone will retrieve small chunks:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8afed60c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.', metadata={'doc_id': '064eca46-a4c4-4789-8e3b-583f9597e54f', 'source': 'state_of_the_union.txt'})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.vectorstore.similarity_search(\"justice breyer\")[0]"
]
},
{
"cell_type": "markdown",
"id": "717097c7-61d9-4306-8625-ef8f1940c127",
"metadata": {},
"source": [
"Whereas the retriever will return the larger parent document:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "3c9017f1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"9875"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(retriever.invoke(\"justice breyer\")[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "cdef8339-f9fa-4b3b-955f-ad9dbdf2734f",
"metadata": {},
"source": [
"The default search type the retriever performs on the vector database is a similarity search. LangChain vector stores also support searching via [Max Marginal Relevance](https://python.langchain.com/v0.2/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.max_marginal_relevance_search). This can be controlled via the `search_type` parameter of the retriever:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "36739460-a737-4a8e-b70f-50bf8c8eaae7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"9875"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.retrievers.multi_vector import SearchType\n",
"\n",
"retriever.search_type = SearchType.mmr\n",
"\n",
"len(retriever.invoke(\"justice breyer\")[0].page_content)"
]
},
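{
"cell_type": "markdown",
"id": "5e7c9a1b-3d2f-4b8e-a6c4-1f0d9e8b7a65",
"metadata": {},
"source": [
"As noted in the introduction, we can also add embeddings manually, by indexing hand-written queries that should lead to a given document being recovered. Below is a minimal sketch of this idea; the query text and the choice of `doc_ids[0]` are just illustrative assumptions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f8d0b2c-4e3a-4c9f-b7d5-2a1e0f9c8b76",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.documents import Document\n",
"\n",
"# A hand-written query we expect the first parent document to answer\n",
"manual_query = \"What did the author work on before college?\"\n",
"\n",
"# Embed the query and associate it with an existing parent document id\n",
"retriever.vectorstore.add_documents(\n",
"    [Document(page_content=manual_query, metadata={id_key: doc_ids[0]})]\n",
")\n",
"\n",
"# Searches similar to the manual query now surface the full parent document\n",
"len(retriever.invoke(manual_query)[0].page_content)"
]
},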
{
"cell_type": "markdown",
"id": "d6a7ae0d",
"metadata": {},
"source": [
"## Associating summaries with a document for retrieval\n",
"\n",
"A summary may be able to distill more accurately what a chunk is about, leading to better retrieval. Here we show how to create summaries, and then embed those.\n",
"\n",
"We construct a simple [chain](/docs/how_to/sequence) that will receive an input [Document](https://python.langchain.com/v0.2/api_reference/core/documents/langchain_core.documents.base.Document.html) object and generate a summary using a LLM.\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "6589291f-55bb-4e9a-b4ff-08f2506ed641",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1433dff4",
"metadata": {},
"outputs": [],
"source": [
"import uuid\n",
"\n",
"from langchain_core.documents import Document\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"chain = (\n",
" {\"doc\": lambda x: x.page_content}\n",
" | ChatPromptTemplate.from_template(\"Summarize the following document:\\n\\n{doc}\")\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3faa9fde-1b09-4849-a815-8b2e89c30a02",
"metadata": {},
"source": [
"Note that we can [batch](https://python.langchain.com/v0.2/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) the chain accross documents:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "41a2a738",
"metadata": {},
"outputs": [],
"source": [
"summaries = chain.batch(docs, {\"max_concurrency\": 5})"
]
},
{
"cell_type": "markdown",
"id": "73ef599e-140b-4905-8b62-6c52cdde1852",
"metadata": {},
"source": [
"We can then initialize a `MultiVectorRetriever` as before, indexing the summaries in our vector store, and retaining the original documents in our document store:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "7ac5e4b1",
"metadata": {},
"outputs": [],
"source": [
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(collection_name=\"summaries\", embedding_function=OpenAIEmbeddings())\n",
"# The storage layer for the parent documents\n",
"store = InMemoryByteStore()\n",
"id_key = \"doc_id\"\n",
"# The retriever (empty to start)\n",
"retriever = MultiVectorRetriever(\n",
" vectorstore=vectorstore,\n",
" byte_store=store,\n",
" id_key=id_key,\n",
")\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]\n",
"\n",
"summary_docs = [\n",
" Document(page_content=s, metadata={id_key: doc_ids[i]})\n",
" for i, s in enumerate(summaries)\n",
"]\n",
"\n",
"retriever.vectorstore.add_documents(summary_docs)\n",
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "862ae920",
"metadata": {},
"outputs": [],
"source": [
"# # We can also add the original chunks to the vectorstore if we so want\n",
"# for i, doc in enumerate(docs):\n",
"# doc.metadata[id_key] = doc_ids[i]\n",
"# retriever.vectorstore.add_documents(docs)"
]
},
{
"cell_type": "markdown",
"id": "f0274892-29c1-4616-9040-d23f9d537526",
"metadata": {},
"source": [
"Querying the vector store will return summaries:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "299232d6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content=\"President Biden recently nominated Judge Ketanji Brown Jackson to serve on the United States Supreme Court, emphasizing her qualifications and broad support. The President also outlined a plan to secure the border, fix the immigration system, protect women's rights, support LGBTQ+ Americans, and advance mental health services. He highlighted the importance of bipartisan unity in passing legislation, such as the Violence Against Women Act. The President also addressed supporting veterans, particularly those impacted by exposure to burn pits, and announced plans to expand benefits for veterans with respiratory cancers. Additionally, he proposed a plan to end cancer as we know it through the Cancer Moonshot initiative. President Biden expressed optimism about the future of America and emphasized the strength of the American people in overcoming challenges.\", metadata={'doc_id': '84015b1b-980e-400a-94d8-cf95d7e079bd'})"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sub_docs = retriever.vectorstore.similarity_search(\"justice breyer\")\n",
"\n",
"sub_docs[0]"
]
},
{
"cell_type": "markdown",
"id": "e4f77ac5-2926-4f60-aad5-b2067900dff9",
"metadata": {},
"source": [
"Whereas the retriever will return the larger source document:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "e4cce5c2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"9194"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retrieved_docs = retriever.invoke(\"justice breyer\")\n",
"\n",
"len(retrieved_docs[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "097a5396",
"metadata": {},
"source": [
"## Hypothetical Queries\n",
"\n",
"An LLM can also be used to generate a list of hypothetical questions that could be asked of a particular document, which might bear close semantic similarity to relevant queries in a [RAG](/docs/tutorials/rag) application. These questions can then be embedded and associated with the documents to improve retrieval.\n",
"\n",
"Below, we use the [with_structured_output](/docs/how_to/structured_output/) method to structure the LLM output into a list of strings."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "03d85234-c33a-4a43-861d-47328e1ec2ea",
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class HypotheticalQuestions(BaseModel):\n",
" \"\"\"Generate hypothetical questions.\"\"\"\n",
"\n",
" questions: List[str] = Field(..., description=\"List of questions\")\n",
"\n",
"\n",
"chain = (\n",
" {\"doc\": lambda x: x.page_content}\n",
" # Only asking for 3 hypothetical questions, but this could be adjusted\n",
" | ChatPromptTemplate.from_template(\n",
" \"Generate a list of exactly 3 hypothetical questions that the below document could be used to answer:\\n\\n{doc}\"\n",
" )\n",
" | ChatOpenAI(max_retries=0, model=\"gpt-4o\").with_structured_output(\n",
" HypotheticalQuestions\n",
" )\n",
" | (lambda x: x.questions)\n",
")"
]
},
{
"cell_type": "markdown",
"id": "6dddc40f-62af-413c-b944-f94a5e1f2f4e",
"metadata": {},
"source": [
"Invoking the chain on a single document demonstrates that it outputs a list of questions:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "11d30554",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[\"What impact did the IBM 1401 have on the author's early programming experiences?\",\n",
" \"How did the transition from using the IBM 1401 to microcomputers influence the author's programming journey?\",\n",
" \"What role did Lisp play in shaping the author's understanding and approach to AI?\"]"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(docs[0])"
]
},
{
"cell_type": "markdown",
"id": "dcffc572-7b20-4b77-857a-90ec360a8f7e",
"metadata": {},
"source": [
"We can batch then batch the chain over all documents and assemble our vector store and document store as before:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "b2cd6e75",
"metadata": {},
"outputs": [],
"source": [
"# Batch chain over documents to generate hypothetical questions\n",
"hypothetical_questions = chain.batch(docs, {\"max_concurrency\": 5})\n",
"\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
" collection_name=\"hypo-questions\", embedding_function=OpenAIEmbeddings()\n",
")\n",
"# The storage layer for the parent documents\n",
"store = InMemoryByteStore()\n",
"id_key = \"doc_id\"\n",
"# The retriever (empty to start)\n",
"retriever = MultiVectorRetriever(\n",
" vectorstore=vectorstore,\n",
" byte_store=store,\n",
" id_key=id_key,\n",
")\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]\n",
"\n",
"\n",
"# Generate Document objects from hypothetical questions\n",
"question_docs = []\n",
"for i, question_list in enumerate(hypothetical_questions):\n",
" question_docs.extend(\n",
" [Document(page_content=s, metadata={id_key: doc_ids[i]}) for s in question_list]\n",
" )\n",
"\n",
"\n",
"retriever.vectorstore.add_documents(question_docs)\n",
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
},
{
"cell_type": "markdown",
"id": "75cba8ab-a06f-4545-85fc-cf49d0204b5e",
"metadata": {},
"source": [
"Note that querying the underlying vector store will retrieve hypothetical questions that are semantically similar to the input query:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "7b442b90",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='What might be the potential benefits of nominating Circuit Court of Appeals Judge Ketanji Brown Jackson to the United States Supreme Court?', metadata={'doc_id': '43292b74-d1b8-4200-8a8b-ea0cb57fbcdb'}),\n",
" Document(page_content='How might the Bipartisan Infrastructure Law impact the economic competition between the U.S. and China?', metadata={'doc_id': '66174780-d00c-4166-9791-f0069846e734'}),\n",
" Document(page_content='What factors led to the creation of Y Combinator?', metadata={'doc_id': '72003c4e-4cc9-4f09-a787-0b541a65b38c'}),\n",
" Document(page_content='How did the ability to publish essays online change the landscape for writers and thinkers?', metadata={'doc_id': 'e8d2c648-f245-4bcc-b8d3-14e64a164b64'})]"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sub_docs = retriever.vectorstore.similarity_search(\"justice breyer\")\n",
"\n",
"sub_docs"
]
},
{
"cell_type": "markdown",
"id": "63c32e43-5f4a-463b-a0c2-2101986f70e6",
"metadata": {},
"source": [
"And invoking the retriever will return the corresponding document:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "7594b24e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"9194"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retrieved_docs = retriever.invoke(\"justice breyer\")\n",
"len(retrieved_docs[0].page_content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}