update chroma notebook (#6664)

@rlancemartin I updated the notebook for Chroma to hopefully be a lot
easier for users.
Jeff Huber 2023-06-23 15:03:06 -07:00 committed by GitHub
parent 48381f1f78
commit 2acf109c4b

@ -8,100 +8,75 @@
"source": [
"# Chroma\n",
"\n",
">[Chroma](https://docs.trychroma.com/getting-started) is a database for building AI applications with embeddings.\n",
[Chroma](https">
">[Chroma](https://docs.trychroma.com/getting-started) is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.\n",
"\n",
"This notebook shows how to use functionality related to the `Chroma` vector database."
"<a href=\"https://discord.gg/MMeYNTmh3x\" target=\"_blank\">\n",
" <img src=\"https://img.shields.io/discord/1073293645303795742\" alt=\"Discord\">\n",
" </a>&nbsp;&nbsp;\n",
" <a href=\"https://github.com/chroma-core/chroma/blob/master/LICENSE\" target=\"_blank\">\n",
" <img src=\"https://img.shields.io/static/v1?label=license&message=Apache 2.0&color=white\" alt=\"License\">\n",
" </a>&nbsp;&nbsp;\n",
" <img src=\"https://github.com/chroma-core/chroma/actions/workflows/chroma-integration-test.yml/badge.svg?branch=main\" alt=\"Integration Tests\">\n",
"\n",
"- [Website](https://www.trychroma.com/)\n",
"- [Documentation](https://docs.trychroma.com/)\n",
"- [Twitter](https://twitter.com/trychroma)\n",
"- [Discord](https://discord.gg/MMeYNTmh3x)\n",
"\n",
"Chroma is fully-typed, fully-tested and fully-documented.\n",
"\n",
"Install Chroma with:\n",
"\n",
"```sh\n",
"pip install chromadb\n",
"```\n",
"\n",
"Chroma runs in various modes. See below for examples of each integrated with LangChain.\n",
"- `in-memory` - in a python script or jupyter notebook\n",
"- `in-memory with persistance` - in a script or notebook and save/load to disk\n",
"- `in a docker container` - as a server running your local machine or in the cloud\n",
"\n",
"Like any other database, you can: \n",
"- `.add` \n",
"- `.get` \n",
"- `.update`\n",
"- `.upsert`\n",
"- `.delete`\n",
"- `.peek`\n",
"- and `.query` runs the similarity search.\n",
"\n",
"View full docs at [docs](https://docs.trychroma.com/reference/Collection). To access these methods directly, you can do `._collection_.method()`\n"
]
},
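For instance, a minimal sketch of direct collection access, assuming a `db` instance like the one created in the basic example below:

```python
# a minimal sketch of direct collection access
# (assumes `db` was created with Chroma.from_documents as in the examples below)
print(db._collection.count())  # number of stored embeddings
print(db._collection.peek())   # a small sample of the stored records
```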
{
"cell_type": "code",
"execution_count": null,
"id": "0825fa4a-d950-4e78-8bba-20cfcc347765",
"metadata": {
"tags": []
},
"id": "12e83df7",
"metadata": {},
"outputs": [],
"source": [
"!pip install chromadb"
"# first install dependencies\n",
"!pip install langchain\n",
"!pip install langchainplus_sdk\n",
"!pip install chromadb\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "42080f37-8fd1-4cec-acd9-15d2b03b2f4d",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"attachments": {},
"cell_type": "markdown",
"id": "2b5ffbf8",
"metadata": {},
"source": [
"# get a token: https://platform.openai.com/account/api-keys\n",
"## Basic Example\n",
"\n",
"from getpass import getpass\n",
"\n",
"OPENAI_API_KEY = getpass()"
"In this basic example, we take the most recent State of the Union Address, split it into chunks, embed it using an open-source embedding model, load it into Chroma, and then query it."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "c7a94d6c-b4d4-4498-9bdd-eb50c92b85c5",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "aac9563e",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.document_loaders import TextLoader"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a3c3999a",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"loader = TextLoader(\"../../../state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "5eabdb75",
"metadata": {
"tags": []
},
"execution_count": 14,
"id": "ae9fcf3e",
"metadata": {},
"outputs": [
{
"name": "stderr",
@ -109,21 +84,7 @@
"text": [
"Using embedded DuckDB without persistence: data will be transient\n"
]
}
],
"source": [
"db = Chroma.from_documents(docs, embeddings)\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = db.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4b172de8",
"metadata": {},
"outputs": [
},
{
"name": "stdout",
"output_type": "stream",
@ -139,16 +100,315 @@
}
],
"source": [
"# import\n",
"from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.document_loaders import TextLoader\n",
"\n",
"# load the document and split it into chunks\n",
"loader = TextLoader(\"../../../state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"\n",
"# split it into chunks\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"# create the open-source embedding function\n",
"embedding_function = SentenceTransformerEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
"\n",
"# load it into Chroma\n",
"db = Chroma.from_documents(docs, embedding_function)\n",
"\n",
"# query it\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = db.similarity_search(query)\n",
"\n",
"# print results\n",
"print(docs[0].page_content)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "5c9a11cc",
"metadata": {},
"source": [
"## Basic Example (including saving to disk)\n",
"\n",
"Extending the previous example, if you want to save to disk, simply initialize the Chroma client and pass the directory where you want the data to be saved to. \n",
"\n",
"`Caution`: Chroma makes a best-effort to automatically save data to disk, however multiple in-memory clients can stomp each other's work. As a best practice, only have one client per path running at any given time.\n",
"\n",
"`Protip`: Sometimes you can call `db.persist()` to force a save. "
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "49f9bd49",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Using embedded DuckDB with persistence: data will be stored in: ./chroma_db\n",
"Using embedded DuckDB with persistence: data will be stored in: ./chroma_db\n",
"No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n"
]
}
],
"source": [
"# save to disk\n",
"db2 = Chroma.from_documents(docs, embedding_function, persist_directory=\"./chroma_db\")\n",
"db2.persist()\n",
"docs = db.similarity_search(query)\n",
"\n",
"# load from disk\n",
"db3 = Chroma(persist_directory=\"./chroma_db\")\n",
"docs = db.similarity_search(query)\n",
"print(docs[0].page_content)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "e9cf6d70",
"metadata": {},
"source": [
"## Basic Example (using the Docker Container)\n",
"\n",
"You can also run the Chroma Server in a Docker container separately, create a Client to connect to it, and then pass that to LangChain. \n",
"\n",
"Chroma has the ability to handle multiple `Collections` of documents, but the LangChain interface expects one, so we need to specify the collection name. The default collection name used by LangChain is \"langchain\".\n",
"\n",
"Here is how to clone, build, and run the Docker Image:\n",
"```\n",
"git clone git@github.com:chroma-core/chroma.git\n",
"docker-compose up -d --build\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "74aee70e",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction\n",
"No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n"
]
}
],
"source": [
"# create the chroma client\n",
"import chromadb\n",
"import uuid\n",
"from chromadb.config import Settings\n",
"client = chromadb.Client(Settings(chroma_api_impl=\"rest\",\n",
" chroma_server_host=\"localhost\",\n",
" chroma_server_http_port=\"8000\"\n",
" ))\n",
"client.reset() # resets the database\n",
"collection = client.create_collection(\"my_collection\")\n",
"for doc in docs:\n",
" collection.add(ids=[str(uuid.uuid1())], metadatas=doc.metadata, documents=doc.page_content)\n",
"\n",
"# tell LangChain to use our client and collection name\n",
"db4 = Chroma(client=client, collection_name=\"my_collection\")\n",
"docs = db.similarity_search(query)\n",
"print(docs[0].page_content)\n"
]
},
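As an alternative to adding documents through the raw `collection.add` call above, you can also let LangChain handle embedding and insertion; a minimal sketch, assuming the `client`, `docs`, `embedding_function`, and `query` objects from the cells above:

```python
# a sketch: let LangChain embed and insert the documents itself
# (assumes `client`, `docs`, `embedding_function`, and `query` from the cells above)
db4 = Chroma(
    client=client,
    collection_name="my_collection",
    embedding_function=embedding_function,
)
db4.add_documents(docs)
print(db4.similarity_search(query)[0].page_content)
```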
{
"attachments": {},
"cell_type": "markdown",
"id": "9ed3ec50",
"metadata": {},
"source": [
"## Update and Delete\n",
"\n",
"While building toward a real application, you want to go beyond adding data, and also update and delete data. \n",
"\n",
"Chroma has users provide `ids` to simplify the bookkeeping here. `ids` can be the name of the file, or a combined has like `filename_paragraphNumber`, etc.\n",
"\n",
"Chroma supports all these operations - though some of them are still being integrated all the way through the LangChain interface. Additional workflow improvements will be added soon.\n",
"\n",
"Here is a basic example showing how to do various operations:"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "81a02810",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Using embedded DuckDB without persistence: data will be transient\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'source': '../../../state_of_the_union.txt', 'new_value': 'hello world'}\n",
"{'ids': ['1'], 'embeddings': None, 'documents': ['Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.'], 'metadatas': [{'source': '../../../state_of_the_union.txt', 'new_value': 'hello world'}]}\n",
"count before 4\n",
"count after 3\n"
]
}
],
"source": [
"# create simple ids\n",
"ids = [str(i) for i in range(1, len(docs)+1)]\n",
"\n",
"# add data\n",
"example_db = Chroma.from_documents(docs, embedding_function, ids=ids)\n",
"docs = example_db.similarity_search(query)\n",
"print(docs[0].metadata)\n",
"\n",
"# update the metadata for a document\n",
"docs[0].metadata = {'source': '../../../state_of_the_union.txt', 'new_value': 'hello world'}\n",
"example_db.update_document(ids[0], docs[0])\n",
"print(example_db._collection.get(ids=[ids[0]]))\n",
"\n",
"# delete the last document\n",
"print(\"count before\", example_db._collection.count())\n",
"example_db._collection.delete(ids=[ids[-1]])\n",
"print(\"count after\", example_db._collection.count())\n"
]
},
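The example above uses simple numeric `ids`; a minimal sketch of the `filename_paragraphNumber`-style convention mentioned earlier, assuming each chunk carries a `source` entry in its metadata:

```python
# illustrative only: derive ids from the source file plus the chunk index
ids = [f"{doc.metadata['source']}_{i}" for i, doc in enumerate(docs)]
```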
{
"attachments": {},
"cell_type": "markdown",
"id": "ac6bc71a",
"metadata": {},
"source": [
"## Use OpenAI Embeddings\n",
"\n",
"Many people like to use OpenAIEmbeddings, here is how to set that up."
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "42080f37-8fd1-4cec-acd9-15d2b03b2f4d",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# get a token: https://platform.openai.com/account/api-keys\n",
"\n",
"from getpass import getpass\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"\n",
"OPENAI_API_KEY = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "c7a94d6c-b4d4-4498-9bdd-eb50c92b85c5",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "5eabdb75",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Using embedded DuckDB without persistence: data will be transient\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n"
]
}
],
"source": [
"embeddings = OpenAIEmbeddings()\n",
"db5 = Chroma.from_documents(docs, embeddings)\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = db.similarity_search(query)\n",
"print(docs[0].page_content)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "6d9c28ad",
"metadata": {},
"source": [
"***\n",
"\n",
"## Other Information"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "18152965",
"metadata": {},
"source": [
"## Similarity search with score"
"### Similarity search with score"
]
},
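Similarity search can also return a distance score alongside each document; a minimal sketch, assuming the `db` instance and `query` from the basic example:

```python
# a minimal sketch (assumes `db` and `query` from the basic example above)
docs_and_scores = db.similarity_search_with_score(query)
doc, score = docs_and_scores[0]
print(score)
print(doc.page_content)
```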
{
@ -196,128 +456,17 @@
"docs[0]"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "8061454b",
"metadata": {},
"source": [
"## Persistance\n",
"\n",
"The below steps cover how to persist a ChromaDB instance"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "2b76db26",
"metadata": {},
"source": [
"### Initialize PeristedChromaDB\n",
"Create embeddings for each chunk and insert into the Chroma vector database. The persist_directory argument tells ChromaDB where to store the database when it's persisted.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "cdb86e0d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"No existing DB found in db, skipping load\n",
"No existing DB found in db, skipping load\n"
]
}
],
"source": [
"# Embed and store the texts\n",
"# Supplying a persist_directory will store the embeddings on disk\n",
"persist_directory = \"db\"\n",
"\n",
"embedding = OpenAIEmbeddings()\n",
"vectordb = Chroma.from_documents(\n",
" documents=docs, embedding=embedding, persist_directory=persist_directory\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "f568a322",
"metadata": {},
"source": [
"### Persist the Database\n",
"We should call persist() to ensure the embeddings are written to disk."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "74b08cb4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Persisting DB to disk, putting it in the save folder db\n",
"PersistentDuckDB del, about to run persist\n",
"Persisting DB to disk, putting it in the save folder db\n"
]
}
],
"source": [
"vectordb.persist()\n",
"vectordb = None"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "cc9ed900",
"metadata": {},
"source": [
"### Load the Database from disk, and create the chain\n",
"Be sure to pass the same persist_directory and embedding_function as you did when you instantiated the database. Initialize the chain we will use for question answering."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "31fecfe9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"loaded in 4 embeddings\n",
"loaded in 1 collections\n"
]
}
],
"source": [
"# Now we can load the persisted database from disk, and use it as normal.\n",
"vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "794a7552",
"metadata": {},
"source": [
"## Retriever options\n",
"### Retriever options\n",
"\n",
"This section goes over different options for how to use Chroma as a retriever.\n",
"\n",
"### MMR\n",
"#### MMR\n",
"\n",
"In addition to using similarity search in the retriever object, you can also use `mmr`."
]
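A minimal sketch of creating an MMR retriever, assuming the `db` instance and `query` from the examples above:

```python
# a minimal sketch (assumes `db` and `query` from the examples above)
retriever = db.as_retriever(search_type="mmr")
retriever.get_relevant_documents(query)[0]
```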
@ -352,82 +501,6 @@
"source": [
"retriever.get_relevant_documents(query)[0]"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "2a877f08",
"metadata": {},
"source": [
"## Updating a Document\n",
"The `update_document` function allows you to modify the content of a document in the Chroma instance after it has been added. Let's see an example of how to use this function."
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "a559c3f1",
"metadata": {},
"outputs": [],
"source": [
"# Import Document class\n",
"from langchain.docstore.document import Document\n",
"\n",
"# Initial document content and id\n",
"initial_content = \"This is an initial document content\"\n",
"document_id = \"doc1\"\n",
"\n",
"# Create an instance of Document with initial content and metadata\n",
"original_doc = Document(page_content=initial_content, metadata={\"page\": \"0\"})\n",
"\n",
"# Initialize a Chroma instance with the original document\n",
"new_db = Chroma.from_documents(\n",
" collection_name=\"test_collection\",\n",
" documents=[original_doc],\n",
" embedding=OpenAIEmbeddings(), # using the same embeddings as before\n",
" ids=[document_id],\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "60a7c273",
"metadata": {},
"source": [
"At this point, we have a new Chroma instance with a single document \"This is an initial document content\" with id \"doc1\". Now, let's update the content of the document."
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "55e48056",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This is the updated document content {'page': '1'}\n"
]
}
],
"source": [
"# Updated document content\n",
"updated_content = \"This is the updated document content\"\n",
"\n",
"# Create a new Document instance with the updated content\n",
"updated_doc = Document(page_content=updated_content, metadata={\"page\": \"1\"})\n",
"\n",
"# Update the document in the Chroma instance by passing the document id and the updated document\n",
"new_db.update_document(document_id=document_id, document=updated_doc)\n",
"\n",
"# Now, let's retrieve the updated document using similarity search\n",
"output = new_db.similarity_search(updated_content, k=1)\n",
"\n",
"# Print the content of the retrieved document\n",
"print(output[0].page_content, output[0].metadata)"
]
}
],
"metadata": {
@ -446,7 +519,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.10.10"
}
},
"nbformat": 4,