mirror of
https://github.com/hwchase17/langchain.git
synced 2026-04-04 03:14:33 +00:00
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "bf4061ce",
   "metadata": {},
   "source": [
    "# Caching\n",
    "\n",
    "Embeddings can be stored or temporarily cached to avoid needing to recompute them.\n",
    "\n",
    "Caching embeddings can be done using a `CacheBackedEmbeddings`. The cache-backed embedder is a wrapper around an embedder that caches\n",
    "embeddings in a key-value store. The text is hashed and the hash is used as the key in the cache.\n",
    "\n",
    "The main supported way to initialize a `CacheBackedEmbeddings` is `from_bytes_store`. It takes the following parameters:\n",
    "\n",
    "- underlying_embedder: The embedder to use for embedding.\n",
    "- document_embedding_cache: Any [`ByteStore`](/docs/integrations/stores/) for caching document embeddings.\n",
    "- batch_size: (optional, defaults to `None`) The number of documents to embed between store updates.\n",
    "- namespace: (optional, defaults to `\"\"`) The namespace to use for the document cache. This namespace is used to avoid collisions with other caches. For example, set it to the name of the embedding model used.\n",
    "\n",
    "**Attention**:\n",
    "\n",
    "- Be sure to set the `namespace` parameter to avoid collisions of the same text embedded using different embedding models.\n",
    "- Currently, `CacheBackedEmbeddings` does not cache embeddings created with the `embed_query()` and `aembed_query()` methods."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "a463c3c2-749b-40d1-a433-84f68a1cd1c7",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.embeddings import CacheBackedEmbeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9ddf07dd-3e72-41de-99d4-78e9521e272f",
   "metadata": {},
   "source": [
    "## Using with a Vector Store\n",
    "\n",
    "First, let's see an example that uses the local file system for storing embeddings and the FAISS vector store for retrieval."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "50183825",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install --upgrade --quiet langchain-openai faiss-cpu"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "9e4314d8-88ef-4f52-81ae-0be771168bb6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.storage import LocalFileStore\n",
    "from langchain_community.document_loaders import TextLoader\n",
    "from langchain_community.vectorstores import FAISS\n",
    "from langchain_openai import OpenAIEmbeddings\n",
    "from langchain_text_splitters import CharacterTextSplitter\n",
    "\n",
    "underlying_embeddings = OpenAIEmbeddings()\n",
    "\n",
    "store = LocalFileStore(\"./cache/\")\n",
    "\n",
    "cached_embedder = CacheBackedEmbeddings.from_bytes_store(\n",
    "    underlying_embeddings, store, namespace=underlying_embeddings.model\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f8cdf33c-321d-4d2c-b76b-d6f5f8b42a92",
   "metadata": {},
   "source": [
    "The cache is empty prior to embedding:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "f9ad627f-ced2-4277-b336-2434f22f2c8a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[]"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "list(store.yield_keys())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a4effe04-b40f-42f8-a449-72fe6991cf20",
   "metadata": {},
   "source": [
    "Load the document, split it into chunks, embed each chunk, and load it into the vector store."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "cf958ac2-e60e-4668-b32c-8bb2d78b3c61",
   "metadata": {},
   "outputs": [],
   "source": [
    "raw_documents = TextLoader(\"../../state_of_the_union.txt\").load()\n",
    "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
    "documents = text_splitter.split_documents(raw_documents)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f526444b-93f8-423f-b6d1-dab539450921",
   "metadata": {},
   "source": [
    "Create the vector store:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "3a1d7bb8-3b72-4bb5-9013-cf7729caca61",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 218 ms, sys: 29.7 ms, total: 248 ms\n",
      "Wall time: 1.02 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "db = FAISS.from_documents(documents, cached_embedder)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "64fc53f5-d559-467f-bf62-5daef32ffbc0",
   "metadata": {},
   "source": [
    "If we try to create the vector store again, it'll be much faster since it does not need to recompute any embeddings."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "714cb2e2-77ba-41a8-bb83-84e75342af2d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 15.7 ms, sys: 2.22 ms, total: 18 ms\n",
      "Wall time: 17.2 ms\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "db2 = FAISS.from_documents(documents, cached_embedder)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1acc76b9-9c70-4160-b593-5f932c75e2b6",
   "metadata": {},
   "source": [
    "And here are some of the embeddings that got created:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "f2ca32dd-3712-4093-942b-4122f3dc8a8e",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['text-embedding-ada-00217a6727d-8916-54eb-b196-ec9c9d6ca472',\n",
       " 'text-embedding-ada-0025fc0d904-bd80-52da-95c9-441015bfb438',\n",
       " 'text-embedding-ada-002e4ad20ef-dfaa-5916-9459-f90c6d8e8159',\n",
       " 'text-embedding-ada-002ed199159-c1cd-5597-9757-f80498e8f17b',\n",
       " 'text-embedding-ada-0021297d37a-2bc1-5e19-bf13-6c950f075062']"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "list(store.yield_keys())[:5]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c1a7fafd",
   "metadata": {},
   "source": [
    "## Swapping the `ByteStore`\n",
    "\n",
    "To use a different `ByteStore`, just pass it when creating your `CacheBackedEmbeddings`. Below, we create an equivalent cached embedder, using the non-persistent `InMemoryByteStore` instead:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "336a0538",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.embeddings import CacheBackedEmbeddings\n",
    "from langchain.storage import InMemoryByteStore\n",
    "\n",
    "store = InMemoryByteStore()\n",
    "\n",
    "cached_embedder = CacheBackedEmbeddings.from_bytes_store(\n",
    "    underlying_embeddings, store, namespace=underlying_embeddings.model\n",
    ")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}