diff --git a/docs/extras/modules/data_connection/vectorstores/integrations/qdrant.ipynb b/docs/extras/modules/data_connection/vectorstores/integrations/qdrant.ipynb index 1b981466e2e..e95c33ce8c3 100644 --- a/docs/extras/modules/data_connection/vectorstores/integrations/qdrant.ipynb +++ b/docs/extras/modules/data_connection/vectorstores/integrations/qdrant.ipynb @@ -1,752 +1,739 @@ { - "cells": [ - { - "attachments": {}, - "cell_type": "markdown", - "id": "683953b3", - "metadata": {}, - "source": [ - "# Qdrant\n", - "\n", - ">[Qdrant](https://qdrant.tech/documentation/) (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. `Qdrant` is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.\n", - "\n", - "\n", - "This notebook shows how to use functionality related to the `Qdrant` vector database. \n", - "\n", - "There are various modes of how to run `Qdrant`, and depending on the chosen one, there will be some subtle differences. The options include:\n", - "- Local mode, no server required\n", - "- On-premise server deployment\n", - "- Qdrant Cloud\n", - "\n", - "See the [installation instructions](https://qdrant.tech/documentation/install/)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "e03e8460-8f32-4d1f-bb93-4f7636a476fa", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "!pip install qdrant-client" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "7b2f111b-357a-4f42-9730-ef0603bdc1b5", - "metadata": {}, - "source": [ - "We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key." 
- ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "082e7e8b-ac52-430c-98d6-8f0924457642", - "metadata": { - "tags": [] - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "OpenAI API Key: \u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\n" - ] - } - ], - "source": [ - "import os\n", - "import getpass\n", - "\n", - "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "aac9563e", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:22.282884Z", - "start_time": "2023-04-04T10:51:21.408077Z" - }, - "tags": [] - }, - "outputs": [], - "source": [ - "from langchain.embeddings.openai import OpenAIEmbeddings\n", - "from langchain.text_splitter import CharacterTextSplitter\n", - "from langchain.vectorstores import Qdrant\n", - "from langchain.document_loaders import TextLoader" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "a3c3999a", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:22.520144Z", - "start_time": "2023-04-04T10:51:22.285826Z" - }, - "tags": [] - }, - "outputs": [], - "source": [ - "loader = TextLoader(\"../../../state_of_the_union.txt\")\n", - "documents = loader.load()\n", - "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", - "docs = text_splitter.split_documents(documents)\n", - "\n", - "embeddings = OpenAIEmbeddings()" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "eeead681", - "metadata": {}, - "source": [ - "## Connecting to Qdrant from LangChain\n", - "\n", - "### Local mode\n", - "\n", - "Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging or if you plan to store just a small amount of vectors. The embeddings might be fully kepy in memory or persisted on disk.\n", - "\n", - "#### In-memory\n", - "\n", - "For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook." - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "8429667e", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:22.525091Z", - "start_time": "2023-04-04T10:51:22.522015Z" - }, - "tags": [] - }, - "outputs": [], - "source": [ - "qdrant = Qdrant.from_documents(\n", - " docs,\n", - " embeddings,\n", - " location=\":memory:\", # Local mode with in-memory storage only\n", - " collection_name=\"my_documents\",\n", - ")" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "59f0b954", - "metadata": {}, - "source": [ - "#### On-disk storage\n", - "\n", - "Local mode, without using the Qdrant server, may also store your vectors on disk so they're persisted between runs." 
- ] - }, - { - "cell_type": "code", - "execution_count": 6, - "id": "24b370e2", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:24.827567Z", - "start_time": "2023-04-04T10:51:22.529080Z" - }, - "tags": [] - }, - "outputs": [], - "source": [ - "qdrant = Qdrant.from_documents(\n", - " docs,\n", - " embeddings,\n", - " path=\"/tmp/local_qdrant\",\n", - " collection_name=\"my_documents\",\n", - ")" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "749658ce", - "metadata": {}, - "source": [ - "### On-premise server deployment\n", - "\n", - "No matter if you choose to launch Qdrant locally with [a Docker container](https://qdrant.tech/documentation/install/), or select a Kubernetes deployment with [the official Helm chart](https://github.com/qdrant/qdrant-helm), the way you're going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service." - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "91e7f5ce", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:24.832708Z", - "start_time": "2023-04-04T10:51:24.829905Z" - } - }, - "outputs": [], - "source": [ - "url = \"<---qdrant url here --->\"\n", - "qdrant = Qdrant.from_documents(\n", - " docs,\n", - " embeddings,\n", - " url,\n", - " prefer_grpc=True,\n", - " collection_name=\"my_documents\",\n", - ")" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "c9e21ce9", - "metadata": {}, - "source": [ - "### Qdrant Cloud\n", - "\n", - "If you prefer not to keep yourself busy with managing the infrastructure, you can choose to set up a fully-managed Qdrant cluster on [Qdrant Cloud](https://cloud.qdrant.io/). There is a free forever 1GB cluster included for trying out. The main difference with using a managed version of Qdrant is that you'll need to provide an API key to secure your deployment from being accessed publicly." - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "id": "dcf88bdf", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:24.837599Z", - "start_time": "2023-04-04T10:51:24.834690Z" - } - }, - "outputs": [], - "source": [ - "url = \"<---qdrant cloud cluster url here --->\"\n", - "api_key = \"<---api key here--->\"\n", - "qdrant = Qdrant.from_documents(\n", - " docs,\n", - " embeddings,\n", - " url,\n", - " prefer_grpc=True,\n", - " api_key=api_key,\n", - " collection_name=\"my_documents\",\n", - ")" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "93540013", - "metadata": {}, - "source": [ - "## Reusing the same collection\n", - "\n", - "Both `Qdrant.from_texts` and `Qdrant.from_documents` methods are great to start using Qdrant with LangChain, but **they are going to destroy the collection and create it from scratch**! If you want to reuse the existing collection, you can always create an instance of `Qdrant` on your own and pass the `QdrantClient` instance with the connection details." 
- ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "b7b432d7", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:24.843090Z", - "start_time": "2023-04-04T10:51:24.840041Z" - } - }, - "outputs": [], - "source": [ - "del qdrant" - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "30a87570", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:24.854117Z", - "start_time": "2023-04-04T10:51:24.845385Z" - } - }, - "outputs": [], - "source": [ - "import qdrant_client\n", - "\n", - "client = qdrant_client.QdrantClient(path=\"/tmp/local_qdrant\", prefer_grpc=True)\n", - "qdrant = Qdrant(client=client, collection_name=\"my_documents\", embeddings=embeddings)" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "1f9215c8", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T09:27:29.920258Z", - "start_time": "2023-04-04T09:27:29.913714Z" - } - }, - "source": [ - "## Similarity search\n", - "\n", - "The simplest scenario for using Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded with the `embedding_function` and used to find similar documents in Qdrant collection." - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "a8c513ab", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:25.204469Z", - "start_time": "2023-04-04T10:51:24.855618Z" - }, - "tags": [] - }, - "outputs": [], - "source": [ - "query = \"What did the president say about Ketanji Brown Jackson\"\n", - "found_docs = qdrant.similarity_search(query)" - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "fc516993", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:25.220984Z", - "start_time": "2023-04-04T10:51:25.213943Z" - }, - "tags": [] - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \n", - "\n", - "Tonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", - "\n", - "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", - "\n", - "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\n" - ] - } - ], - "source": [ - "print(found_docs[0].page_content)" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "1bda9bf5", - "metadata": {}, - "source": [ - "## Similarity search with score\n", - "\n", - "Sometimes we might want to perform the search, but also obtain a relevancy score to know how good is a particular result. \n", - "The returned distance score is cosine distance. Therefore, a lower score is better." 
- ] - }, - { - "cell_type": "code", - "execution_count": 11, - "id": "8804a21d", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:25.631585Z", - "start_time": "2023-04-04T10:51:25.227384Z" - } - }, - "outputs": [], - "source": [ - "query = \"What did the president say about Ketanji Brown Jackson\"\n", - "found_docs = qdrant.similarity_search_with_score(query)" - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "id": "756a6887", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:25.642282Z", - "start_time": "2023-04-04T10:51:25.635947Z" - } - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \n", - "\n", - "Tonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", - "\n", - "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", - "\n", - "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\n", - "\n", - "Score: 0.8153784913324512\n" - ] - } - ], - "source": [ - "document, score = found_docs[0]\n", - "print(document.page_content)\n", - "print(f\"\\nScore: {score}\")" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "525e3582", - "metadata": {}, - "source": [ - "### Metadata filtering\n", - "\n", - "Qdrant has an [extensive filtering system](https://qdrant.tech/documentation/concepts/filtering/) with rich type support. It is also possible to use the filters in Langchain, by passing an additional param to both the `similarity_search_with_score` and `similarity_search` methods." - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "1c2c58dc", - "metadata": {}, - "source": [ - "```python\n", - "from qdrant_client.http import models as rest\n", - "\n", - "query = \"What did the president say about Ketanji Brown Jackson\"\n", - "found_docs = qdrant.similarity_search_with_score(query, filter=rest.Filter(...))\n", - "```" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "c58c30bf", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:39:53.032744Z", - "start_time": "2023-04-04T10:39:53.028673Z" - } - }, - "source": [ - "## Maximum marginal relevance search (MMR)\n", - "\n", - "If you'd like to look up for some similar documents, but you'd also like to receive diverse results, MMR is method you should consider. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents." 
- ] - }, - { - "cell_type": "code", - "execution_count": 13, - "id": "76810fb6", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:26.010947Z", - "start_time": "2023-04-04T10:51:25.647687Z" - } - }, - "outputs": [], - "source": [ - "query = \"What did the president say about Ketanji Brown Jackson\"\n", - "found_docs = qdrant.max_marginal_relevance_search(query, k=2, fetch_k=10)" - ] - }, - { - "cell_type": "code", - "execution_count": 14, - "id": "80c6db11", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:26.016979Z", - "start_time": "2023-04-04T10:51:26.013329Z" - } - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \n", - "\n", - "Tonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", - "\n", - "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", - "\n", - "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence. \n", - "\n", - "2. We can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \n", - "\n", - "I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n", - "\n", - "They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n", - "\n", - "Officer Mora was 27 years old. \n", - "\n", - "Officer Rivera was 22. \n", - "\n", - "Both Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. \n", - "\n", - "I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n", - "\n", - "I\u2019ve worked on these issues a long time. \n", - "\n", - "I know what works: Investing in crime preventionand community police officers who\u2019ll walk the beat, who\u2019ll know the neighborhood, and who can restore trust and safety. \n", - "\n" - ] - } - ], - "source": [ - "for i, doc in enumerate(found_docs):\n", - " print(f\"{i + 1}.\", doc.page_content, \"\\n\")" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "691a82d6", - "metadata": {}, - "source": [ - "## Qdrant as a Retriever\n", - "\n", - "Qdrant, as all the other vector stores, is a LangChain Retriever, by using cosine similarity. 
" - ] - }, - { - "cell_type": "code", - "execution_count": 15, - "id": "9427195f", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:26.031451Z", - "start_time": "2023-04-04T10:51:26.018763Z" - } - }, - "outputs": [ - { - "data": { - "text/plain": [ - "VectorStoreRetriever(vectorstore=, search_type='similarity', search_kwargs={})" - ] - }, - "execution_count": 15, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "retriever = qdrant.as_retriever()\n", - "retriever" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "0c851b4f", - "metadata": {}, - "source": [ - "It might be also specified to use MMR as a search strategy, instead of similarity." - ] - }, - { - "cell_type": "code", - "execution_count": 16, - "id": "64348f1b", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:26.043909Z", - "start_time": "2023-04-04T10:51:26.034284Z" - } - }, - "outputs": [ - { - "data": { - "text/plain": [ - "VectorStoreRetriever(vectorstore=, search_type='mmr', search_kwargs={})" - ] - }, - "execution_count": 16, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "retriever = qdrant.as_retriever(search_type=\"mmr\")\n", - "retriever" - ] - }, - { - "cell_type": "code", - "execution_count": 17, - "id": "f3c70c31", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:26.495652Z", - "start_time": "2023-04-04T10:51:26.046407Z" - } - }, - "outputs": [ - { - "data": { - "text/plain": [ - "Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})" - ] - }, - "execution_count": 17, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "query = \"What did the president say about Ketanji Brown Jackson\"\n", - "retriever.get_relevant_documents(query)[0]" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "0358ecde", - "metadata": {}, - "source": [ - "## Customizing Qdrant\n", - "\n", - "There are some options to use an existing Qdrant collection within your Langchain application. In such cases you may need to define how to map Qdrant point into the Langchain `Document`.\n", - "\n", - "### Named vectors\n", - "\n", - "Qdrant supports [multiple vectors per point](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors) by named vectors. Langchain requires just a single embedding per document and, by default, uses a single vector. 
However, if you work with a collection created externally or want to have the named vector used, you can configure it by providing its name.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "outputs": [], - "source": [ - "Qdrant.from_documents(\n", - " docs,\n", - " embeddings,\n", - " location=\":memory:\",\n", - " collection_name=\"my_documents_2\",\n", - " vector_name=\"custom_vector\",\n", - ")" - ], - "metadata": { - "collapsed": false - }, - "id": "3166ff99" - }, - { - "cell_type": "markdown", - "source": [ - "As a Langchain user, you won't see any difference whether you use named vectors or not. Qdrant integration will handle the conversion under the hood." - ], - "metadata": { - "collapsed": false - }, - "id": "79848e80" - }, - { - "cell_type": "markdown", - "source": [ - "### Metadata\n", - "\n", - "Qdrant stores your vector embeddings along with the optional JSON-like payload. Payloads are optional, but since LangChain assumes the embeddings are generated from the documents, we keep the context data, so you can extract the original texts as well.\n", - "\n", - "By default, your document is going to be stored in the following payload structure:\n", - "\n", - "```json\n", - "{\n", - " \"page_content\": \"Lorem ipsum dolor sit amet\",\n", - " \"metadata\": {\n", - " \"foo\": \"bar\"\n", - " }\n", - "}\n", - "```\n", - "\n", - "You can, however, decide to use different keys for the page content and metadata. That's useful if you already have a collection that you'd like to reuse." - ], - "metadata": { - "collapsed": false - }, - "id": "daeafd6d" - }, - { - "cell_type": "code", - "execution_count": 19, - "id": "e4d6baf9", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T11:08:31.739141Z", - "start_time": "2023-04-04T11:08:30.229748Z" - } - }, - "outputs": [ - { - "data": { - "text/plain": [ - "" - ] - }, - "execution_count": 19, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "Qdrant.from_documents(\n", - " docs,\n", - " embeddings,\n", - " location=\":memory:\",\n", - " collection_name=\"my_documents_2\",\n", - " content_payload_key=\"my_page_content_key\",\n", - " metadata_payload_key=\"my_meta\",\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "2300e785", - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.3" - } + "cells": [ + { + "attachments": {}, + "cell_type": "markdown", + "id": "683953b3", + "metadata": {}, + "source": [ + "# Qdrant\n", + "\n", + ">[Qdrant](https://qdrant.tech/documentation/) (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. `Qdrant` is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.\n", + "\n", + "\n", + "This notebook shows how to use functionality related to the `Qdrant` vector database. \n", + "\n", + "There are various modes of how to run `Qdrant`, and depending on the chosen one, there will be some subtle differences. 
The options include:\n", + "- Local mode, no server required\n", + "- On-premise server deployment\n", + "- Qdrant Cloud\n", + "\n", + "See the [installation instructions](https://qdrant.tech/documentation/install/)." + ] }, - "nbformat": 4, - "nbformat_minor": 5 -} \ No newline at end of file + { + "cell_type": "code", + "execution_count": null, + "id": "e03e8460-8f32-4d1f-bb93-4f7636a476fa", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "!pip install qdrant-client" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "7b2f111b-357a-4f42-9730-ef0603bdc1b5", + "metadata": {}, + "source": [ + "We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "082e7e8b-ac52-430c-98d6-8f0924457642", + "metadata": { + "tags": [] + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "OpenAI API Key: ········\n" + ] + } + ], + "source": [ + "import os\n", + "import getpass\n", + "\n", + "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "aac9563e", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:22.282884Z", + "start_time": "2023-04-04T10:51:21.408077Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "from langchain.embeddings.openai import OpenAIEmbeddings\n", + "from langchain.text_splitter import CharacterTextSplitter\n", + "from langchain.vectorstores import Qdrant\n", + "from langchain.document_loaders import TextLoader" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "a3c3999a", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:22.520144Z", + "start_time": "2023-04-04T10:51:22.285826Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "loader = TextLoader(\"../../../state_of_the_union.txt\")\n", + "documents = loader.load()\n", + "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", + "docs = text_splitter.split_documents(documents)\n", + "\n", + "embeddings = OpenAIEmbeddings()" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "eeead681", + "metadata": {}, + "source": [ + "## Connecting to Qdrant from LangChain\n", + "\n", + "### Local mode\n", + "\n", + "Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging or if you plan to store just a small amount of vectors. The embeddings might be fully kepy in memory or persisted on disk.\n", + "\n", + "#### In-memory\n", + "\n", + "For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "8429667e", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:22.525091Z", + "start_time": "2023-04-04T10:51:22.522015Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "qdrant = Qdrant.from_documents(\n", + " docs,\n", + " embeddings,\n", + " location=\":memory:\", # Local mode with in-memory storage only\n", + " collection_name=\"my_documents\",\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "59f0b954", + "metadata": {}, + "source": [ + "#### On-disk storage\n", + "\n", + "Local mode, without using the Qdrant server, may also store your vectors on disk so they're persisted between runs." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "24b370e2", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:24.827567Z", + "start_time": "2023-04-04T10:51:22.529080Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "qdrant = Qdrant.from_documents(\n", + " docs,\n", + " embeddings,\n", + " path=\"/tmp/local_qdrant\",\n", + " collection_name=\"my_documents\",\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "749658ce", + "metadata": {}, + "source": [ + "### On-premise server deployment\n", + "\n", + "No matter if you choose to launch Qdrant locally with [a Docker container](https://qdrant.tech/documentation/install/), or select a Kubernetes deployment with [the official Helm chart](https://github.com/qdrant/qdrant-helm), the way you're going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service." + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "91e7f5ce", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:24.832708Z", + "start_time": "2023-04-04T10:51:24.829905Z" + } + }, + "outputs": [], + "source": [ + "url = \"<---qdrant url here --->\"\n", + "qdrant = Qdrant.from_documents(\n", + " docs,\n", + " embeddings,\n", + " url,\n", + " prefer_grpc=True,\n", + " collection_name=\"my_documents\",\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "c9e21ce9", + "metadata": {}, + "source": [ + "### Qdrant Cloud\n", + "\n", + "If you prefer not to keep yourself busy with managing the infrastructure, you can choose to set up a fully-managed Qdrant cluster on [Qdrant Cloud](https://cloud.qdrant.io/). There is a free forever 1GB cluster included for trying out. The main difference with using a managed version of Qdrant is that you'll need to provide an API key to secure your deployment from being accessed publicly." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "dcf88bdf", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:24.837599Z", + "start_time": "2023-04-04T10:51:24.834690Z" + } + }, + "outputs": [], + "source": [ + "url = \"<---qdrant cloud cluster url here --->\"\n", + "api_key = \"<---api key here--->\"\n", + "qdrant = Qdrant.from_documents(\n", + " docs,\n", + " embeddings,\n", + " url,\n", + " prefer_grpc=True,\n", + " api_key=api_key,\n", + " collection_name=\"my_documents\",\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "93540013", + "metadata": {}, + "source": [ + "## Recreating the collection\n", + "\n", + "Both `Qdrant.from_texts` and `Qdrant.from_documents` methods are great to start using Qdrant with Langchain. In the previous versions the collection was recreated every time you called any of them. 
That behaviour has changed. Currently, the collection is going to be reused if it already exists. Setting `force_recreate` to `True` allows to remove the old collection and start from scratch." + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "30a87570", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:24.854117Z", + "start_time": "2023-04-04T10:51:24.845385Z" + } + }, + "outputs": [], + "source": [ + "url = \"<---qdrant url here --->\"\n", + "qdrant = Qdrant.from_documents(\n", + " docs,\n", + " embeddings,\n", + " url,\n", + " prefer_grpc=True,\n", + " collection_name=\"my_documents\",\n", + " force_recreate=True,\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "1f9215c8", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T09:27:29.920258Z", + "start_time": "2023-04-04T09:27:29.913714Z" + } + }, + "source": [ + "## Similarity search\n", + "\n", + "The simplest scenario for using Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded with the `embedding_function` and used to find similar documents in Qdrant collection." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "a8c513ab", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:25.204469Z", + "start_time": "2023-04-04T10:51:24.855618Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "query = \"What did the president say about Ketanji Brown Jackson\"\n", + "found_docs = qdrant.similarity_search(query)" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "fc516993", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:25.220984Z", + "start_time": "2023-04-04T10:51:25.213943Z" + }, + "tags": [] + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n", + "\n", + "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", + "\n", + "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", + "\n", + "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.\n" + ] + } + ], + "source": [ + "print(found_docs[0].page_content)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "1bda9bf5", + "metadata": {}, + "source": [ + "## Similarity search with score\n", + "\n", + "Sometimes we might want to perform the search, but also obtain a relevancy score to know how good is a particular result. \n", + "The returned distance score is cosine distance. Therefore, a lower score is better." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": "8804a21d", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:25.631585Z", + "start_time": "2023-04-04T10:51:25.227384Z" + } + }, + "outputs": [], + "source": [ + "query = \"What did the president say about Ketanji Brown Jackson\"\n", + "found_docs = qdrant.similarity_search_with_score(query)" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "756a6887", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:25.642282Z", + "start_time": "2023-04-04T10:51:25.635947Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n", + "\n", + "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", + "\n", + "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", + "\n", + "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.\n", + "\n", + "Score: 0.8153784913324512\n" + ] + } + ], + "source": [ + "document, score = found_docs[0]\n", + "print(document.page_content)\n", + "print(f\"\\nScore: {score}\")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "525e3582", + "metadata": {}, + "source": [ + "### Metadata filtering\n", + "\n", + "Qdrant has an [extensive filtering system](https://qdrant.tech/documentation/concepts/filtering/) with rich type support. It is also possible to use the filters in Langchain, by passing an additional param to both the `similarity_search_with_score` and `similarity_search` methods." + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "1c2c58dc", + "metadata": {}, + "source": [ + "```python\n", + "from qdrant_client.http import models as rest\n", + "\n", + "query = \"What did the president say about Ketanji Brown Jackson\"\n", + "found_docs = qdrant.similarity_search_with_score(query, filter=rest.Filter(...))\n", + "```" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "c58c30bf", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:39:53.032744Z", + "start_time": "2023-04-04T10:39:53.028673Z" + } + }, + "source": [ + "## Maximum marginal relevance search (MMR)\n", + "\n", + "If you'd like to look up for some similar documents, but you'd also like to receive diverse results, MMR is method you should consider. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents." 
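Before the MMR example that follows, here is one concrete way the filter sketched above could be filled in. It is only an illustration: the field name and value are assumptions based on this notebook's loader, and LangChain nests document metadata under the `metadata` payload key by default.

```python
from qdrant_client.http import models as rest

query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search_with_score(
    query,
    filter=rest.Filter(
        must=[
            rest.FieldCondition(
                # LangChain stores document metadata under the "metadata" payload
                # key, so nested fields are addressed as "metadata.<field>".
                key="metadata.source",
                match=rest.MatchValue(value="../../../state_of_the_union.txt"),
            )
        ]
    ),
)
```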
+ ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "76810fb6", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:26.010947Z", + "start_time": "2023-04-04T10:51:25.647687Z" + } + }, + "outputs": [], + "source": [ + "query = \"What did the president say about Ketanji Brown Jackson\"\n", + "found_docs = qdrant.max_marginal_relevance_search(query, k=2, fetch_k=10)" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "80c6db11", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:26.016979Z", + "start_time": "2023-04-04T10:51:26.013329Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n", + "\n", + "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", + "\n", + "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", + "\n", + "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. \n", + "\n", + "2. We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n", + "\n", + "I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n", + "\n", + "They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n", + "\n", + "Officer Mora was 27 years old. \n", + "\n", + "Officer Rivera was 22. \n", + "\n", + "Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n", + "\n", + "I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n", + "\n", + "I’ve worked on these issues a long time. \n", + "\n", + "I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. \n", + "\n" + ] + } + ], + "source": [ + "for i, doc in enumerate(found_docs):\n", + " print(f\"{i + 1}.\", doc.page_content, \"\\n\")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "691a82d6", + "metadata": {}, + "source": [ + "## Qdrant as a Retriever\n", + "\n", + "Qdrant, as all the other vector stores, is a LangChain Retriever, by using cosine similarity. 
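The next cells create the retriever with `as_retriever()`. As a quick illustration of where such a retriever typically ends up, here is a minimal sketch of wiring it into a question-answering chain; the `OpenAI` LLM and the `RetrievalQA` chain are assumptions layered on top of this notebook, not part of it.

```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Assumes `qdrant` is the vector store built earlier in this notebook.
retriever = qdrant.as_retriever()
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),        # any LLM supported by LangChain would do here
    chain_type="stuff",  # stuff the retrieved documents into a single prompt
    retriever=retriever,
)
qa.run("What did the president say about Ketanji Brown Jackson")
```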
" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "id": "9427195f", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:26.031451Z", + "start_time": "2023-04-04T10:51:26.018763Z" + } + }, + "outputs": [ + { + "data": { + "text/plain": [ + "VectorStoreRetriever(vectorstore=, search_type='similarity', search_kwargs={})" + ] + }, + "execution_count": 15, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "retriever = qdrant.as_retriever()\n", + "retriever" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "0c851b4f", + "metadata": {}, + "source": [ + "It might be also specified to use MMR as a search strategy, instead of similarity." + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "id": "64348f1b", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:26.043909Z", + "start_time": "2023-04-04T10:51:26.034284Z" + } + }, + "outputs": [ + { + "data": { + "text/plain": [ + "VectorStoreRetriever(vectorstore=, search_type='mmr', search_kwargs={})" + ] + }, + "execution_count": 16, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "retriever = qdrant.as_retriever(search_type=\"mmr\")\n", + "retriever" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "f3c70c31", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:26.495652Z", + "start_time": "2023-04-04T10:51:26.046407Z" + } + }, + "outputs": [ + { + "data": { + "text/plain": [ + "Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})" + ] + }, + "execution_count": 17, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "query = \"What did the president say about Ketanji Brown Jackson\"\n", + "retriever.get_relevant_documents(query)[0]" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "0358ecde", + "metadata": {}, + "source": [ + "## Customizing Qdrant\n", + "\n", + "There are some options to use an existing Qdrant collection within your Langchain application. In such cases you may need to define how to map Qdrant point into the Langchain `Document`.\n", + "\n", + "### Named vectors\n", + "\n", + "Qdrant supports [multiple vectors per point](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors) by named vectors. Langchain requires just a single embedding per document and, by default, uses a single vector. 
However, if you work with a collection created externally or want to have the named vector used, you can configure it by providing its name.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "outputs": [], + "source": [ + "Qdrant.from_documents(\n", + " docs,\n", + " embeddings,\n", + " location=\":memory:\",\n", + " collection_name=\"my_documents_2\",\n", + " vector_name=\"custom_vector\",\n", + ")" + ], + "metadata": { + "collapsed": false + } + }, + { + "cell_type": "markdown", + "source": [ + "As a Langchain user, you won't see any difference whether you use named vectors or not. Qdrant integration will handle the conversion under the hood." + ], + "metadata": { + "collapsed": false + } + }, + { + "cell_type": "markdown", + "source": [ + "### Metadata\n", + "\n", + "Qdrant stores your vector embeddings along with the optional JSON-like payload. Payloads are optional, but since LangChain assumes the embeddings are generated from the documents, we keep the context data, so you can extract the original texts as well.\n", + "\n", + "By default, your document is going to be stored in the following payload structure:\n", + "\n", + "```json\n", + "{\n", + " \"page_content\": \"Lorem ipsum dolor sit amet\",\n", + " \"metadata\": {\n", + " \"foo\": \"bar\"\n", + " }\n", + "}\n", + "```\n", + "\n", + "You can, however, decide to use different keys for the page content and metadata. That's useful if you already have a collection that you'd like to reuse." + ], + "metadata": { + "collapsed": false + } + }, + { + "cell_type": "code", + "execution_count": 19, + "id": "e4d6baf9", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T11:08:31.739141Z", + "start_time": "2023-04-04T11:08:30.229748Z" + } + }, + "outputs": [ + { + "data": { + "text/plain": [ + "" + ] + }, + "execution_count": 19, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "Qdrant.from_documents(\n", + " docs,\n", + " embeddings,\n", + " location=\":memory:\",\n", + " collection_name=\"my_documents_2\",\n", + " content_payload_key=\"my_page_content_key\",\n", + " metadata_payload_key=\"my_meta\",\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2300e785", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.3" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/langchain/vectorstores/qdrant.py b/langchain/vectorstores/qdrant.py index bfc4ffbd742..0271396b187 100644 --- a/langchain/vectorstores/qdrant.py +++ b/langchain/vectorstores/qdrant.py @@ -34,6 +34,10 @@ if TYPE_CHECKING: MetadataFilter = Union[DictFilter, common_types.Filter] +class QdrantException(Exception): + """Base class for all the Qdrant related exceptions""" + + class Qdrant(VectorStore): """Wrapper around Qdrant vector database. @@ -552,6 +556,7 @@ class Qdrant(VectorStore): wal_config: Optional[common_types.WalConfigDiff] = None, quantization_config: Optional[common_types.QuantizationConfig] = None, init_from: Optional[common_types.InitFrom] = None, + force_recreate: bool = False, **kwargs: Any, ) -> Qdrant: """Construct Qdrant wrapper from a list of texts. 
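The new `force_recreate` flag changes what `from_texts` does when the target collection already exists: by default the collection is validated and reused, and only `force_recreate=True` drops it. A minimal sketch of the caller-facing behaviour, mirroring the integration tests further down in this diff (the texts and the local path are placeholders):

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

embeddings = OpenAIEmbeddings()

# First call creates the collection (local on-disk mode).
store = Qdrant.from_texts(
    ["lorem", "ipsum", "dolor"],
    embeddings,
    path="/tmp/qdrant_reuse_demo",
    collection_name="my_documents",
)
del store  # release the local storage lock before reopening the same path

# A second call with a compatible configuration now appends to the existing
# collection instead of recreating it; passing force_recreate=True restores
# the old drop-and-recreate behaviour.
store = Qdrant.from_texts(
    ["sit", "amet"],
    embeddings,
    path="/tmp/qdrant_reuse_demo",
    collection_name="my_documents",
)
```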
@@ -636,6 +641,8 @@ class Qdrant(VectorStore): Params for quantization, if None - quantization will be disabled init_from: Use data stored in another collection to initialize this collection + force_recreate: + Force recreating the collection **kwargs: Additional arguments passed directly into REST client initialization @@ -663,7 +670,9 @@ class Qdrant(VectorStore): "Please install it with `pip install qdrant-client`." ) + from grpc import RpcError from qdrant_client.http import models as rest + from qdrant_client.http.exceptions import UnexpectedResponse # Just do a single quick embedding to get vector size partial_embeddings = embedding.embed_documents(texts[:1]) @@ -687,62 +696,98 @@ class Qdrant(VectorStore): **kwargs, ) - vectors_config = rest.VectorParams( - size=vector_size, - distance=rest.Distance[distance_func], - ) + try: + # Skip any validation in case of forced collection recreate. + if force_recreate: + raise ValueError - # If vector name was provided, we're going to use the named vectors feature - # with just a single vector. - if vector_name is not None: - vectors_config = { # type: ignore[assignment] - vector_name: vectors_config, - } + # Get the vector configuration of the existing collection and vector, if it + # was specified. If the old configuration does not match the current one, + # an exception is being thrown. + collection_info = client.get_collection(collection_name=collection_name) + current_vector_config = collection_info.config.params.vectors + if isinstance(current_vector_config, dict) and vector_name is not None: + if vector_name not in current_vector_config: + raise QdrantException( + f"Existing Qdrant collection {collection_name} does not " + f"contain vector named {vector_name}. Did you mean one of the " + f"existing vectors: {', '.join(current_vector_config.keys())}? " + f"If you want to recreate the collection, set `force_recreate` " + f"parameter to `True`." + ) + current_vector_config = current_vector_config.get( + vector_name + ) # type: ignore[assignment] + elif isinstance(current_vector_config, dict) and vector_name is None: + raise QdrantException( + f"Existing Qdrant collection {collection_name} uses named vectors. " + f"If you want to reuse it, please set `vector_name` to any of the " + f"existing named vectors: " + f"{', '.join(current_vector_config.keys())}." # noqa + f"If you want to recreate the collection, set `force_recreate` " + f"parameter to `True`." + ) + elif ( + not isinstance(current_vector_config, dict) and vector_name is not None + ): + raise QdrantException( + f"Existing Qdrant collection {collection_name} doesn't use named " + f"vectors. If you want to reuse it, please set `vector_name` to " + f"`None`. If you want to recreate the collection, set " + f"`force_recreate` parameter to `True`." + ) - client.recreate_collection( - collection_name=collection_name, - vectors_config=vectors_config, - shard_number=shard_number, - replication_factor=replication_factor, - write_consistency_factor=write_consistency_factor, - on_disk_payload=on_disk_payload, - hnsw_config=hnsw_config, - optimizers_config=optimizers_config, - wal_config=wal_config, - quantization_config=quantization_config, - init_from=init_from, - timeout=timeout, # type: ignore[arg-type] - ) + # Check if the vector configuration has the same dimensionality. 
+ if current_vector_config.size != vector_size: # type: ignore[union-attr] + raise QdrantException( + f"Existing Qdrant collection is configured for vectors with " + f"{current_vector_config.size} " # type: ignore[union-attr] + f"dimensions. Selected embeddings are {vector_size}-dimensional. " + f"If you want to recreate the collection, set `force_recreate` " + f"parameter to `True`." + ) - texts_iterator = iter(texts) - metadatas_iterator = iter(metadatas or []) - ids_iterator = iter(ids or [uuid.uuid4().hex for _ in iter(texts)]) - while batch_texts := list(islice(texts_iterator, batch_size)): - # Take the corresponding metadata and id for each text in a batch - batch_metadatas = list(islice(metadatas_iterator, batch_size)) or None - batch_ids = list(islice(ids_iterator, batch_size)) - - # Generate the embeddings for all the texts in a batch - batch_embeddings = embedding.embed_documents(batch_texts) - if vector_name is not None: - batch_embeddings = { # type: ignore[assignment] - vector_name: batch_embeddings - } - - points = rest.Batch.construct( - ids=batch_ids, - vectors=batch_embeddings, - payloads=cls._build_payloads( - batch_texts, - batch_metadatas, - content_payload_key, - metadata_payload_key, - ), + current_distance_func = ( + current_vector_config.distance.name.upper() # type: ignore[union-attr] + ) + if current_distance_func != distance_func: + raise QdrantException( + f"Existing Qdrant collection is configured for " + f"{current_vector_config.distance} " # type: ignore[union-attr] + f"similarity. Please set `distance_func` parameter to " + f"`{distance_func}` if you want to reuse it. If you want to " + f"recreate the collection, set `force_recreate` parameter to " + f"`True`." + ) + except (UnexpectedResponse, RpcError, ValueError): + vectors_config = rest.VectorParams( + size=vector_size, + distance=rest.Distance[distance_func], ) - client.upsert(collection_name=collection_name, points=points) + # If vector name was provided, we're going to use the named vectors feature + # with just a single vector. 
+ if vector_name is not None: + vectors_config = { # type: ignore[assignment] + vector_name: vectors_config, + } - return cls( + client.recreate_collection( + collection_name=collection_name, + vectors_config=vectors_config, + shard_number=shard_number, + replication_factor=replication_factor, + write_consistency_factor=write_consistency_factor, + on_disk_payload=on_disk_payload, + hnsw_config=hnsw_config, + optimizers_config=optimizers_config, + wal_config=wal_config, + quantization_config=quantization_config, + init_from=init_from, + timeout=timeout, # type: ignore[arg-type] + ) + + qdrant = cls( client=client, collection_name=collection_name, embeddings=embedding, @@ -751,6 +796,8 @@ class Qdrant(VectorStore): distance_strategy=distance_func, vector_name=vector_name, ) + qdrant.add_texts(texts, metadatas, ids, batch_size) + return qdrant @classmethod def _build_payloads( diff --git a/tests/integration_tests/vectorstores/fake_embeddings.py b/tests/integration_tests/vectorstores/fake_embeddings.py index c818b35dce3..1b299bba18d 100644 --- a/tests/integration_tests/vectorstores/fake_embeddings.py +++ b/tests/integration_tests/vectorstores/fake_embeddings.py @@ -27,8 +27,9 @@ class ConsistentFakeEmbeddings(FakeEmbeddings): """Fake embeddings which remember all the texts seen so far to return consistent vectors for the same texts.""" - def __init__(self) -> None: + def __init__(self, dimensionality: int = 10) -> None: self.known_texts: List[str] = [] + self.dimensionality = dimensionality def embed_documents(self, texts: List[str]) -> List[List[float]]: """Return consistent embeddings for each text seen so far.""" @@ -36,7 +37,9 @@ class ConsistentFakeEmbeddings(FakeEmbeddings): for text in texts: if text not in self.known_texts: self.known_texts.append(text) - vector = [float(1.0)] * 9 + [float(self.known_texts.index(text))] + vector = [float(1.0)] * (self.dimensionality - 1) + [ + float(self.known_texts.index(text)) + ] out_vectors.append(vector) return out_vectors @@ -44,8 +47,10 @@ class ConsistentFakeEmbeddings(FakeEmbeddings): """Return consistent embeddings for the text, if seen before, or a constant one if the text is unknown.""" if text not in self.known_texts: - return [float(1.0)] * 9 + [float(0.0)] - return [float(1.0)] * 9 + [float(self.known_texts.index(text))] + return [float(1.0)] * (self.dimensionality - 1) + [float(0.0)] + return [float(1.0)] * (self.dimensionality - 1) + [ + float(self.known_texts.index(text)) + ] class AngularTwoDimensionalEmbeddings(Embeddings): diff --git a/tests/integration_tests/vectorstores/test_qdrant.py b/tests/integration_tests/vectorstores/test_qdrant.py index e46a6e36e64..5a9686ab631 100644 --- a/tests/integration_tests/vectorstores/test_qdrant.py +++ b/tests/integration_tests/vectorstores/test_qdrant.py @@ -9,6 +9,7 @@ from qdrant_client.http import models as rest from langchain.docstore.document import Document from langchain.embeddings.base import Embeddings from langchain.vectorstores import Qdrant +from langchain.vectorstores.qdrant import QdrantException from tests.integration_tests.vectorstores.fake_embeddings import ( ConsistentFakeEmbeddings, ) @@ -537,3 +538,148 @@ def test_qdrant_similarity_search_with_relevance_scores( assert all( (1 >= score or np.isclose(score, 1)) and score >= 0 for _, score in output ) + + +@pytest.mark.parametrize("vector_name", [None, "custom-vector"]) +def test_qdrant_from_texts_reuses_same_collection(vector_name: Optional[str]) -> None: + """Test if Qdrant.from_texts reuses the same collection""" + from 
qdrant_client import QdrantClient + + collection_name = "test" + embeddings = ConsistentFakeEmbeddings() + with tempfile.TemporaryDirectory() as tmpdir: + vec_store = Qdrant.from_texts( + ["lorem", "ipsum", "dolor", "sit", "amet"], + embeddings, + collection_name=collection_name, + path=str(tmpdir), + vector_name=vector_name, + ) + del vec_store + + vec_store = Qdrant.from_texts( + ["foo", "bar"], + embeddings, + collection_name=collection_name, + path=str(tmpdir), + vector_name=vector_name, + ) + del vec_store + + client = QdrantClient(path=str(tmpdir)) + assert 7 == client.count(collection_name).count + + +@pytest.mark.parametrize("vector_name", [None, "custom-vector"]) +def test_qdrant_from_texts_raises_error_on_different_dimensionality( + vector_name: Optional[str], +) -> None: + """Test if Qdrant.from_texts raises an exception if dimensionality does not match""" + collection_name = "test" + with tempfile.TemporaryDirectory() as tmpdir: + vec_store = Qdrant.from_texts( + ["lorem", "ipsum", "dolor", "sit", "amet"], + ConsistentFakeEmbeddings(dimensionality=10), + collection_name=collection_name, + path=str(tmpdir), + vector_name=vector_name, + ) + del vec_store + + with pytest.raises(QdrantException): + Qdrant.from_texts( + ["foo", "bar"], + ConsistentFakeEmbeddings(dimensionality=5), + collection_name=collection_name, + path=str(tmpdir), + vector_name=vector_name, + ) + + +@pytest.mark.parametrize( + ["first_vector_name", "second_vector_name"], + [ + (None, "custom-vector"), + ("custom-vector", None), + ("my-first-vector", "my-second_vector"), + ], +) +def test_qdrant_from_texts_raises_error_on_different_vector_name( + first_vector_name: Optional[str], + second_vector_name: Optional[str], +) -> None: + """Test if Qdrant.from_texts raises an exception if vector name does not match""" + collection_name = "test" + with tempfile.TemporaryDirectory() as tmpdir: + vec_store = Qdrant.from_texts( + ["lorem", "ipsum", "dolor", "sit", "amet"], + ConsistentFakeEmbeddings(dimensionality=10), + collection_name=collection_name, + path=str(tmpdir), + vector_name=first_vector_name, + ) + del vec_store + + with pytest.raises(QdrantException): + Qdrant.from_texts( + ["foo", "bar"], + ConsistentFakeEmbeddings(dimensionality=5), + collection_name=collection_name, + path=str(tmpdir), + vector_name=second_vector_name, + ) + + +def test_qdrant_from_texts_raises_error_on_different_distance() -> None: + """Test if Qdrant.from_texts raises an exception if distance does not match""" + collection_name = "test" + with tempfile.TemporaryDirectory() as tmpdir: + vec_store = Qdrant.from_texts( + ["lorem", "ipsum", "dolor", "sit", "amet"], + ConsistentFakeEmbeddings(dimensionality=10), + collection_name=collection_name, + path=str(tmpdir), + distance_func="Cosine", + ) + del vec_store + + with pytest.raises(QdrantException): + Qdrant.from_texts( + ["foo", "bar"], + ConsistentFakeEmbeddings(dimensionality=5), + collection_name=collection_name, + path=str(tmpdir), + distance_func="Euclid", + ) + + +@pytest.mark.parametrize("vector_name", [None, "custom-vector"]) +def test_qdrant_from_texts_recreates_collection_on_force_recreate( + vector_name: Optional[str], +) -> None: + """Test if Qdrant.from_texts recreates the collection even if config mismatches""" + from qdrant_client import QdrantClient + + collection_name = "test" + with tempfile.TemporaryDirectory() as tmpdir: + vec_store = Qdrant.from_texts( + ["lorem", "ipsum", "dolor", "sit", "amet"], + ConsistentFakeEmbeddings(dimensionality=10), + 
collection_name=collection_name, + path=str(tmpdir), + vector_name=vector_name, + ) + del vec_store + + vec_store = Qdrant.from_texts( + ["foo", "bar"], + ConsistentFakeEmbeddings(dimensionality=5), + collection_name=collection_name, + path=str(tmpdir), + vector_name=vector_name, + force_recreate=True, + ) + del vec_store + + client = QdrantClient(path=str(tmpdir)) + assert 2 == client.count(collection_name).count
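When the existing collection does not match the requested configuration (vector size, vector name, or distance function), `from_texts` now raises `QdrantException` rather than silently replacing the data. A minimal sketch of how a caller might handle that, assuming `texts`, `embeddings`, and `url` are defined elsewhere:

```python
from langchain.vectorstores import Qdrant
from langchain.vectorstores.qdrant import QdrantException

try:
    store = Qdrant.from_texts(
        texts,
        embeddings,
        url=url,
        collection_name="my_documents",
    )
except QdrantException:
    # The collection exists but is incompatible with the selected embeddings.
    # Recreate it only if losing the stored points is acceptable.
    store = Qdrant.from_texts(
        texts,
        embeddings,
        url=url,
        collection_name="my_documents",
        force_recreate=True,
    )
```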