Compare commits

...

4 Commits

Author SHA1 Message Date
Dev 2049
fb99d23b15 merge 2023-06-28 18:13:50 -07:00
Dev 2049
2f4a680c05 Merge branch 'master' into harrison/marqo 2023-06-28 18:11:27 -07:00
Harrison Chase
bb2cd74967 cr 2023-06-28 14:00:14 -07:00
OwenElliott
76b85d485d Adding Marqo to vectorstore ecosystem (#2807)
This PR brings in a vectorstore interface for
[Marqo](https://www.marqo.ai/).

The Marqo vectorstore exposes some of Marqo's functionality in addition
to the VectorStore base class. The Marqo vectorstore also makes the
embedding parameter optional because inference for embeddings is an
inherent part of Marqo.

Docs, notebook examples and integration tests included.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-04-24 16:16:26 -07:00
7 changed files with 1090 additions and 3 deletions

View File

@@ -0,0 +1,31 @@
# Marqo
This page covers how to use the Marqo ecosystem within LangChain.
### **What is Marqo?**
Marqo is a tensor search engine that uses embeddings stored in in-memory HNSW indexes to achieve cutting-edge search speeds. Marqo can scale to hundred-million document indexes with horizontal index sharding and allows for async and non-blocking data upload and search. Marqo uses the latest machine learning models from PyTorch, Huggingface, OpenAI and more. You can start with a pre-configured model or bring your own. The built-in ONNX support and conversion allows for faster inference and higher throughput on both CPU and GPU.
Because Marqo includes its own inference, your documents can have a mix of text and images, and you can bring Marqo indexes with data from your other systems into the langchain ecosystem without having to worry about your embeddings being compatible.
Deployment of Marqo is flexible: you can get started yourself with our docker image or [contact us about our managed cloud offering!](https://www.marqo.ai/pricing)
To run Marqo locally with our docker image, [see our getting started guide.](https://docs.marqo.ai/latest/)
## Installation and Setup
- Install the Python SDK with `pip install marqo`
## Wrappers
### VectorStore
There exists a wrapper around Marqo indexes, allowing you to use them within the vectorstore framework. Marqo lets you select from a range of models for generating embeddings and exposes some preprocessing configurations.
The Marqo vectorstore can also work with existing multimodal indexes where your documents have a mix of images and text; for more information refer to [our documentation](https://docs.marqo.ai/latest/#multi-modal-and-cross-modal-search). Note that instantiating the Marqo vectorstore with an existing multimodal index will disable the ability to add any new documents to it via the langchain vectorstore `add_texts` method.
To import this vectorstore:
```python
from langchain.vectorstores import Marqo
```
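A minimal quick-start sketch is shown below; it assumes a Marqo instance is running locally via the docker image on the default endpoint (`http://localhost:8882`), and the index name is just a placeholder:
```python
from langchain.vectorstores import Marqo

# Index a few texts - Marqo performs the embedding inference itself,
# so no embedding object needs to be supplied.
docsearch = Marqo.from_texts(
    texts=["Smartphone", "Telephone"],
    index_name="langchain-quickstart",  # placeholder index name
    url="http://localhost:8882",
    api_key="",
)

# A plain string query
docs = docsearch.similarity_search("modern communications devices", k=1)
print(docs[0].page_content)

# A weighted query composed of multiple terms
docs = docsearch.similarity_search(
    {"communications devices": 1.0, "old technology": -1.0}, k=1
)
print(docs[0].page_content)
```
To use an index created outside of langchain (for example a multimodal index), instantiate `Marqo` directly with a `marqo.Client` and the index name, and supply a `page_content_builder` function that turns a Marqo hit into the text used as `page_content`.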
For a more detailed walkthrough of the Marqo wrapper and some of its unique features, see [this notebook](../modules/indexes/vectorstores/examples/marqo.ipynb)

View File

@@ -0,0 +1,442 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
"source": [
"# Marqo\n",
"\n",
"This notebook shows how to use functionality related to the Marqo database."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "aac9563e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Marqo\n",
"from langchain.document_loaders import TextLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a3c3999a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6e104aee",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Index langchain-demo exists.\n"
]
}
],
"source": [
"import marqo \n",
"\n",
"# initialize marqo\n",
"marqo_url = \"http://localhost:8882\" # if using marqo cloud replace with your endpoint (console.marqo.ai)\n",
"marqo_api_key = \"\" # if using marqo cloud replace with your api key (console.marqo.ai)\n",
"\n",
"client = marqo.Client(url=marqo_url, api_key=marqo_api_key)\n",
"\n",
"index_name = \"langchain-demo\"\n",
"\n",
"docsearch = Marqo.from_documents(docs, index_name=index_name)\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result_docs = docsearch.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "9c608226",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n"
]
}
],
"source": [
"print(result_docs[0].page_content)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "98704b27",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"0.68647254\n"
]
}
],
"source": [
"result_docs = docsearch.similarity_search_with_score(query)\n",
"print(result_docs[0][0].page_content, result_docs[0][1], sep=\"\\n\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "eb3395b6",
"metadata": {},
"source": [
"## Additional features\n",
"\n",
"One of the powerful features of Marqo as a vectorstore is that you can use indexes created externally. For example:\n",
"\n",
"+ If you had a database of image and text pairs from another application, you can simply just use it in langchain with the Marqo vectorstore. Note that bringing your own multimodal indexes will disable the `add_texts` method.\n",
"\n",
"+ If you had a database of text documents, you can bring it into the langchain framework and add more texts through `add_texts`.\n",
"\n",
"The documents that are returned are customised by passing your own function to the `page_content_builder` callback in the search methods."
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "35b99fef",
"metadata": {},
"source": [
"#### Multimodal Example"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a359ed74",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'errors': False,\n",
" 'processingTimeMs': 4675.6921890009835,\n",
" 'index_name': 'langchain-multimodal-demo',\n",
" 'items': [{'_id': '7af25f35-5d41-4ff5-95fa-ab6bd6755176',\n",
" 'result': 'created',\n",
" 'status': 201},\n",
" {'_id': '70434d17-2680-4e33-b060-a37b9b8b6959',\n",
" 'result': 'created',\n",
" 'status': 201}]}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\n",
"# use a new index\n",
"index_name = \"langchain-multimodal-demo\"\n",
"\n",
"# incase the demo is re-run\n",
"try:\n",
" client.delete_index(index_name)\n",
"except Exception:\n",
" print(f\"Creating {index_name}\")\n",
" \n",
"# This index could have been created by another system\n",
"settings = {\"treat_urls_and_pointers_as_images\": True, \"model\": \"ViT-L/14\"}\n",
"client.create_index(index_name, **settings)\n",
"client.index(index_name).add_documents(\n",
" [ \n",
" # image of a bus\n",
" {\n",
" \"caption\": \"Bus\",\n",
" \"image\": \"https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image4.jpg\"\n",
" },\n",
" # image of a plane\n",
" { \n",
" \"caption\": \"Plane\", \n",
" \"image\": \"https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image2.jpg\"\n",
" }\n",
" ],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "368d1fab",
"metadata": {},
"outputs": [],
"source": [
"def get_content(res):\n",
" \"\"\"Helper to format Marqo's documents into text to be used as page_content\"\"\"\n",
" return f\"{res['caption']}: {res['image']}\"\n",
"\n",
"docsearch = Marqo(client, index_name, page_content_builder=get_content)\n",
"\n",
"\n",
"query = \"vehicles that fly\"\n",
"doc_results = docsearch.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "eef4edf9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Plane: https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image2.jpg\n",
"Bus: https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image4.jpg\n"
]
}
],
"source": [
"for doc in doc_results:\n",
" print(doc.page_content)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c255f603",
"metadata": {},
"source": [
"#### Text only example"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "9e9a2b20",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'errors': False,\n",
" 'processingTimeMs': 500.1302719992964,\n",
" 'index_name': 'langchain-byo-index-demo',\n",
" 'items': [{'_id': 'cbad6f9e-a4ea-45c6-9a85-1b9c0a59827c',\n",
" 'result': 'created',\n",
" 'status': 201},\n",
" {'_id': 'c0be68cb-8847-4e95-a4c9-4791b54f772c',\n",
" 'result': 'created',\n",
" 'status': 201}]}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\n",
"# use a new index\n",
"index_name = \"langchain-byo-index-demo\"\n",
"\n",
"# incase the demo is re-run\n",
"try:\n",
" client.delete_index(index_name)\n",
"except Exception:\n",
" print(f\"Creating {index_name}\")\n",
"\n",
"# This index could have been created by another system\n",
"client.index(index_name).add_documents(\n",
" [ \n",
" {\n",
" \"Title\": \"Smartphone\",\n",
" \"Description\": \"A smartphone is a portable computer device that combines mobile telephone \"\n",
" \"functions and computing functions into one unit.\",\n",
" },\n",
" { \n",
" \"Title\": \"Telephone\",\n",
" \"Description\": \"A telephone is a telecommunications device that permits two or more users to\"\n",
" \"conduct a conversation when they are too far apart to be easily heard directly.\",\n",
" }\n",
" ],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b2943ea9",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['484d8436-cb09-49f2-8f9d-39671c7ebfaa']"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Note text indexes retain the ability to use add_texts despite different field names in documents\n",
"# this is because the page_content_builder callback lets you handle these document fields as required\n",
"\n",
"def get_content(res):\n",
" \"\"\"Helper to format Marqo's documents into text to be used as page_content\"\"\"\n",
" if 'text' in res:\n",
" return res['text']\n",
" return res['Description']\n",
"\n",
"\n",
"docsearch = Marqo(client, index_name, page_content_builder=get_content)\n",
"\n",
"docsearch.add_texts([\"This is a document that is about elephants\"])\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "851450e9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"A smartphone is a portable computer device that combines mobile telephone functions and computing functions into one unit.\n"
]
}
],
"source": [
"query = \"modern communications devices\"\n",
"doc_results = docsearch.similarity_search(query)\n",
"\n",
"print(doc_results[0].page_content)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "9a438fec",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This is a document that is about elephants\n"
]
}
],
"source": [
"query = \"elephants\"\n",
"doc_results = docsearch.similarity_search(query, page_content_builder=get_content)\n",
"\n",
"print(doc_results[0].page_content)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "0d04c9d4",
"metadata": {},
"source": [
"## Weighted Queries\n",
"\n",
"We also expose marqos weighted queries which are a powerful way to compose complex semantic searches."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "d42ba0d6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"A smartphone is a portable computer device that combines mobile telephone functions and computing functions into one unit.\n"
]
}
],
"source": [
"query = {\"communications devices\" : 1.0}\n",
"doc_results = docsearch.similarity_search(query)\n",
"print(doc_results[0].page_content)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "b5918a16",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"A telephone is a telecommunications device that permits two or more users toconduct a conversation when they are too far apart to be easily heard directly.\n"
]
}
],
"source": [
"query = {\"communications devices\" : 1.0, \"technology post 2000\": -1.0}\n",
"doc_results = docsearch.similarity_search(query)\n",
"print(doc_results[0].page_content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -19,6 +19,7 @@ from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores.faiss import FAISS
from langchain.vectorstores.hologres import Hologres
from langchain.vectorstores.lancedb import LanceDB
from langchain.vectorstores.marqo import Marqo
from langchain.vectorstores.matching_engine import MatchingEngine
from langchain.vectorstores.milvus import Milvus
from langchain.vectorstores.mongodb_atlas import MongoDBAtlasVectorSearch
@@ -56,6 +57,7 @@ __all__ = [
"DocArrayInMemorySearch",
"ElasticVectorSearch",
"FAISS",
"Marqo",
"Hologres",
"LanceDB",
"MatchingEngine",

View File

@@ -0,0 +1,420 @@
"""Wrapper around weaviate vector database."""
from __future__ import annotations
import json
import uuid
import warnings
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
Iterable,
List,
Optional,
Tuple,
Union,
)
from langchain.docstore.document import Document
from langchain.vectorstores.base import VectorStore
if TYPE_CHECKING:
import marqo
class Marqo(VectorStore):
"""Wrapper around Marqo database.
Marqo indexes have their own models associated with them to generate your embeddings. This means that
you can select from a range of different models and also use CLIP models to create multimodal indexes
with images and text together.
Marqo also supports more advanced queries with multiple weighted terms, see https://docs.marqo.ai/latest/#searching-using-weights-in-queries.
This class can flexibly take strings or dictionaries for weighted queries in its similarity search methods.
To use, you should have the `marqo` python package installed; you can do this with `pip install marqo`.
Example:
.. code-block:: python
import marqo
from langchain.vectorstores import Marqo
client = marqo.Client(url=os.environ["MARQO_URL"], ...)
vectorstore = Marqo(client, index_name)
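# An illustrative weighted query; plain strings are also accepted
results = vectorstore.similarity_search({"flying vehicles": 1.0})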
"""
def __init__(
self,
client: marqo.Client,
index_name: str,
add_documents_settings: Optional[Dict[str, Any]] = {},
searchable_attributes: Optional[List[str]] = None,
page_content_builder: Optional[Callable[[dict], str]] = None,
):
"""Initialize with Marqo client."""
try:
import marqo
except ImportError:
raise ValueError(
"Could not import marqo python package. "
"Please install it with `pip install marqo`."
)
if not isinstance(client, marqo.Client):
raise ValueError(
f"client should be an instance of marqo.Client, got {type(client)}"
)
self._client = client
self._index_name = index_name
self._add_documents_settings = add_documents_settings
self._searchable_attributes = searchable_attributes
self.page_content_builder = page_content_builder
self._non_tensor_fields = ["metadata"]
self._document_batch_size = 1024
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
continue_on_failure: bool = True,
) -> List[str]:
"""Upload texts with metadata (properties) to Marqo.
You can either have marqo generate ids for each document or you can provide your own by including
a "_id" field in the metadata objects.
Args:
texts (Iterable[str]): an iterable of texts - assumed to preserve an order that matches the metadatas.
metadatas (Optional[List[dict]], optional): a list of metadatas.
Raises:
ValueError: if metadatas is provided and the number of metadatas differs from the number of texts.
Returns:
List[str]: The list of ids that were added.
"""
if self._client.index(self._index_name).get_settings()["index_defaults"][
"treat_urls_and_pointers_as_images"
]:
raise ValueError(
"Marqo.add_texts is disabled for multimodal indexes. To add documents with a multimodal index use the Python client for Marqo directly."
)
if metadatas and len(texts) != len(metadatas):
raise ValueError(
f"The lengths of texts and metadatas must be the same, {len(texts)} texts were provided and {len(metadatas)} metadatas were provided."
)
documents: List[Dict[str, Union[str, int, float, List[str]]]] = []
for i, text in enumerate(texts):
doc = {"text": text}
doc["metadata"] = json.dumps(metadatas[i]) if metadatas else json.dumps({})
documents.append(doc)
ids = []
for i in range(0, len(documents), self._document_batch_size):
response = self._client.index(self._index_name).add_documents(
documents[i : i + self._document_batch_size],
non_tensor_fields=self._non_tensor_fields,
**self._add_documents_settings,
)
if response["errors"]:
err_msg = f"Error in upload for documents in index range [{i},{i+self._document_batch_size}], check Marqo logs."
if continue_on_failure:
warnings.warn(err_msg)
else:
raise RuntimeError(err_msg)
ids += [item["_id"] for item in response["items"]]
return ids
def similarity_search(
self,
query: Union[str, Dict[str, float]],
k: int = 4,
**kwargs: Any,
) -> List[Document]:
"""Search the marqo index for the most similar documents.
Args:
query (Union[str, Dict[str, float]]): The query for the search, either as a string or a weighted query.
k (int, optional): The number of documents to return. Defaults to 4.
Returns:
List[Document]: k documents ordered from best to worst match.
"""
results = self.marqo_similarity_search(query=query, k=k)
documents = self._construct_documents_from_results(
results, include_scores=False
)
return documents
def similarity_search_with_score(
self,
query: Union[str, Dict[str, float]],
k: int = 4,
) -> List[Tuple[Document, float]]:
"""Return documents from Marqo that are similar to the query as well as their scores.
Args:
query (str): The query to search with, either as a string or a weighted query.
k (int, optional): The number of documents to return. Defaults to 4.
Returns:
List[Tuple[Document, float]]: The matching documents and their scores, ordered by descending score.
"""
results = self.marqo_similarity_search(query=query, k=k)
scored_documents = self._construct_documents_from_results(
results, include_scores=True
)
return scored_documents
def bulk_similarity_search(
self,
queries: Iterable[Union[str, Dict[str, float]]],
k: int = 4,
**kwargs: Any,
) -> List[List[Document]]:
"""Search the marqo index for the most similar documents in bulk with multiple queries.
Args:
queries (Iterable[Union[str, Dict[str, float]]]): An iterable of queries to execute in bulk, queries in the list can be strings or dictionaries of weighted queries.
k (int, optional): The number of documents to return for each query. Defaults to 4.
Returns:
List[List[Document]]: A list of results for each query.
"""
bulk_results = self.marqo_bulk_similarity_search(queries=queries, k=k)
bulk_documents: List[List[Document]] = []
for results in bulk_results["result"]:
documents = self._construct_documents_from_results(
results, include_scores=False
)
bulk_documents.append(documents)
return bulk_documents
def bulk_similarity_search_with_score(
self,
queries: Iterable[Union[str, Dict[str, float]]],
k: int = 4,
**kwargs: Any,
) -> List[List[Tuple[Document, float]]]:
"""Return documents from Marqo that are similar to the query as well as their scores using
a batch of queries.
Args:
queries (Iterable[Union[str, Dict[str, float]]]): An iterable of queries to execute in bulk, queries in the list can be strings or dictionaries of weighted queries.
k (int, optional): The number of documents to return. Defaults to 4.
Returns:
List[List[Tuple[Document, float]]]: A list of lists of the matching documents and their scores for each query.
"""
bulk_results = self.marqo_bulk_similarity_search(queries=queries, k=k)
bulk_documents: List[List[Tuple[Document, float]]] = []
for results in bulk_results["result"]:
documents = self._construct_documents_from_results(
results, include_scores=True
)
bulk_documents.append(documents)
return bulk_documents
def _construct_documents_from_results(
self,
results: List[dict],
include_scores: bool = False,
) -> Union[List[Document], List[Tuple[Document, float]]]:
"""Helper to convert Marqo results into documents.
Args:
results (List[dict]): A marqo results object with the 'hits'.
include_scores (bool, optional): Include scores alongside documents. Defaults to False.
Returns:
Union[List[Document], List[Tuple[Document, float]]]: The documents or document score pairs if `include_scores` is true.
"""
documents: Union[List[Document], List[Tuple[Document, float]]] = []
for res in results["hits"]:
if self.page_content_builder is None:
text = res["text"]
else:
text = self.page_content_builder(res)
metadata = json.loads(res.get("metadata", "{}"))
if include_scores:
documents.append(
(Document(page_content=text, metadata=metadata), res["_score"])
)
else:
documents.append(Document(page_content=text, metadata=metadata))
return documents
def marqo_similarity_search(
self,
query: Union[str, Dict[str, float]],
k: int = 4,
) -> List[Dict[str, Any]]:
"""Return documents from Marqo exposing Marqo's output directly
Args:
query (str): The query to search with.
k (int, optional): The number of documents to return. Defaults to 4.
Returns:
List[Dict[str, Any]]: The hits from Marqo.
"""
results = self._client.index(self._index_name).search(
q=query, searchable_attributes=self._searchable_attributes, limit=k
)
return results
def marqo_bulk_similarity_search(
self, queries: Iterable[Union[str, Dict[str, float]]], k: int = 4
) -> Dict[str, List[Dict[str, Dict[str, Any]]]]:
"""Return documents from Marqo using a bulk search, exposes Marqo's output directly
Args:
queries (Iterable[Union[str, Dict[str, float]]]): A list of queries.
k (int, optional): The number of documents to return for each query. Defaults to 4.
Returns:
Dict[str, List[Dict[str, Dict[str, Any]]]]: A bulk search results object
"""
bulk_results = self._client.bulk_search(
[
{
"index": self._index_name,
"q": query,
"searchableAttributes": self._searchable_attributes,
"limit": k,
}
for query in queries
]
)
return bulk_results
@classmethod
def from_documents(
cls: Marqo,
documents: List[Document],
embedding: Any = None,
**kwargs: Any,
) -> Marqo:
"""Return VectorStore initialized from documents. Note that Marqo does not need embeddings, we retain the
parameter to adhere to the Liskov substitution principle.
Args:
documents (List[Document]): Input documents
embedding (Any, optional): Embeddings (not required). Defaults to None.
Returns:
VectorStore: A Marqo vectorstore
"""
texts = [d.page_content for d in documents]
metadatas = [d.metadata for d in documents]
return cls.from_texts(texts, metadatas=metadatas, **kwargs)
@classmethod
def from_texts(
cls: Marqo,
texts: List[str],
embedding: Any = None,
metadatas: Optional[List[dict]] = None,
index_name: str = None,
url: str = "http://localhost:8882",
api_key: str = "",
marqo_device: str = "cpu",
add_documents_settings: Optional[Dict[str, Any]] = {},
searchable_attributes: Optional[List[str]] = None,
page_content_builder: Optional[Callable[[dict], str]] = None,
index_settings: Optional[Dict[str, Any]] = {},
verbose: bool = True,
**kwargs: Any,
) -> Marqo:
"""Return Marqo initialized from texts. Note that Marqo does not need embeddings, we retain the
parameter to adhere to the Liskov substitution principle.
This is a quick way to get started with marqo - simply provide your texts and metadatas and this
will create an instance of the data store and index the provided data.
To know the ids of your documents with this approach you will need to include them under the key "_id"
in your metadatas for each text.
Example:
.. code-block:: python
from langchain.vectorstores import Marqo
datastore = Marqo.from_texts(texts=['text'], index_name='my-first-index', url='http://localhost:8882')
Args:
texts (List[str]): A list of texts to index into marqo upon creation.
embedding (Any, optional): Embeddings (not required). Defaults to None.
index_name (str, optional): The name of the index to use, if none is provided then one will be created with a UUID. Defaults to None.
url (str, optional): The URL for Marqo. Defaults to "http://localhost:8882".
api_key (str, optional): The API key for Marqo. Defaults to "".
metadatas (Optional[List[dict]], optional): A list of metadatas, to accompany the texts. Defaults to None.
marqo_device (str, optional): The device for Marqo to use on the server; this is only used when a new index is being created. Defaults to "cpu". Can be "cpu" or "cuda".
add_documents_settings (Optional[Dict[str, Any]], optional): Settings for adding documents, see https://docs.marqo.ai/0.0.16/API-Reference/documents/#query-parameters. Defaults to {}.
index_settings (Optional[Dict[str, Any]], optional): Index settings if the index doesn't exist, see https://docs.marqo.ai/0.0.16/API-Reference/indexes/#index-defaults-object. Defaults to {}.
Returns:
Marqo: An instance of the Marqo vector store
"""
try:
import marqo
except ImportError:
raise ValueError(
"Could not import marqo python package. "
"Please install it with `pip install marqo`."
)
if not index_name:
index_name = str(uuid.uuid4())
client = marqo.Client(url=url, api_key=api_key, indexing_device=marqo_device)
try:
client.create_index(index_name, settings_dict=index_settings)
if verbose:
print(f"Created {index_name} successfully.")
except Exception:
if verbose:
print(f"Index {index_name} exists.")
instance: Marqo = cls(
client,
index_name,
searchable_attributes=searchable_attributes,
add_documents_settings=add_documents_settings,
page_content_builder=page_content_builder,
)
instance.add_texts(texts, metadatas)
return instance
def get_indexes(self) -> List[Dict[str, str]]:
"""Helper to see your available indexes in marqo, useful if the
from_texts method was used without an index name specified
Returns:
List[Dict[str, str]]: The list of indexes
"""
return self._client.get_indexes()["results"]
def get_number_of_documents(self) -> int:
"""Helper to see the number of documents in the index
Returns:
int: The number of documents
"""
return self._client.index(self._index_name).get_stats()["numberOfDocuments"]

poetry.lock (generated)
View File

@@ -4270,7 +4270,6 @@ optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
files = [
{file = "jsonpointer-2.4-py2.py3-none-any.whl", hash = "sha256:15d51bba20eea3165644553647711d150376234112651b4f1811022aecad7d7a"},
{file = "jsonpointer-2.4.tar.gz", hash = "sha256:585cee82b70211fa9e6043b7bb89db6e1aa49524340dde8ad6b63206ea689d88"},
]
[[package]]
@@ -4968,6 +4967,23 @@ files = [
{file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"},
]
[[package]]
name = "marqo"
version = "0.9.1"
description = "Tensor search for humans"
category = "main"
optional = true
python-versions = ">=3"
files = [
{file = "marqo-0.9.1-py3-none-any.whl", hash = "sha256:5e84c68d20b65723daaada65f9a4780d4ce20f3c679522258f826fde12333bbf"},
{file = "marqo-0.9.1.tar.gz", hash = "sha256:11af3ef7b4612aca3f4e65ba8379a8685f852d6960ff2eefe6788a0734d00975"},
]
[package.dependencies]
pydantic = "*"
requests = "*"
urllib3 = "*"
[[package]]
name = "marshmallow"
version = "3.19.0"
@@ -12275,7 +12291,7 @@ cffi = {version = ">=1.11", markers = "platform_python_implementation == \"PyPy\
cffi = ["cffi (>=1.11)"]
[extras]
all = ["O365", "aleph-alpha-client", "anthropic", "arxiv", "atlassian-python-api", "awadb", "azure-ai-formrecognizer", "azure-ai-vision", "azure-cognitiveservices-speech", "azure-cosmos", "azure-identity", "beautifulsoup4", "clarifai", "clickhouse-connect", "cohere", "deeplake", "docarray", "duckduckgo-search", "elasticsearch", "esprima", "faiss-cpu", "google-api-python-client", "google-auth", "google-search-results", "gptcache", "html2text", "huggingface_hub", "jina", "jinja2", "jq", "lancedb", "langkit", "lark", "lxml", "manifest-ml", "momento", "nebula3-python", "neo4j", "networkx", "nlpcloud", "nltk", "nomic", "openai", "openlm", "opensearch-py", "pdfminer-six", "pexpect", "pgvector", "pinecone-client", "pinecone-text", "psycopg2-binary", "pymongo", "pyowm", "pypdf", "pytesseract", "pyvespa", "qdrant-client", "redis", "requests-toolbelt", "sentence-transformers", "singlestoredb", "spacy", "steamship", "tensorflow-text", "tigrisdb", "tiktoken", "torch", "transformers", "weaviate-client", "wikipedia", "wolframalpha"]
all = ["O365", "aleph-alpha-client", "anthropic", "arxiv", "atlassian-python-api", "awadb", "azure-ai-formrecognizer", "azure-ai-vision", "azure-cognitiveservices-speech", "azure-cosmos", "azure-identity", "beautifulsoup4", "clarifai", "clickhouse-connect", "cohere", "deeplake", "docarray", "duckduckgo-search", "elasticsearch", "esprima", "faiss-cpu", "google-api-python-client", "google-auth", "google-search-results", "gptcache", "html2text", "huggingface_hub", "jina", "jinja2", "jq", "lancedb", "langkit", "lark", "lxml", "manifest-ml", "marqo", "momento", "nebula3-python", "neo4j", "networkx", "nlpcloud", "nltk", "nomic", "openai", "openlm", "opensearch-py", "pdfminer-six", "pexpect", "pgvector", "pinecone-client", "pinecone-text", "psycopg2-binary", "pymongo", "pyowm", "pypdf", "pytesseract", "pyvespa", "qdrant-client", "redis", "requests-toolbelt", "sentence-transformers", "singlestoredb", "spacy", "steamship", "tensorflow-text", "tigrisdb", "tiktoken", "torch", "transformers", "weaviate-client", "wikipedia", "wolframalpha"]
azure = ["azure-ai-formrecognizer", "azure-ai-vision", "azure-cognitiveservices-speech", "azure-core", "azure-cosmos", "azure-identity", "azure-search-documents", "openai"]
clarifai = ["clarifai"]
cohere = ["cohere"]
@@ -12291,4 +12307,4 @@ text-helpers = ["chardet"]
[metadata]
lock-version = "2.0"
python-versions = ">=3.8.1,<4.0"
content-hash = "57b4476162421fde16357804a4436cf01af1c0d22d799251ec4320a4216fd566"
content-hash = "86aee270da56d7dd9659c552b44be62a845be47111eade2fe139348eeea4f64f"

View File

@@ -38,6 +38,7 @@ pinecone-text = {version = "^0.4.2", optional = true}
pymongo = {version = "^4.3.3", optional = true}
clickhouse-connect = {version="^0.5.14", optional=true}
weaviate-client = {version = "^3", optional = true}
marqo = {version = "^0.9.1", optional=true}
google-api-python-client = {version = "2.70.0", optional = true}
google-auth = {version = "^2.18.1", optional = true}
wolframalpha = {version = "5.0.0", optional = true}
@@ -305,6 +306,7 @@ all = [
"tigrisdb",
"nebula3-python",
"awadb",
"marqo",
"esprima",
]

View File

@@ -0,0 +1,174 @@
"""Test Marqo functionality."""
import marqo
import pytest
from langchain.docstore.document import Document
from langchain.vectorstores.marqo import Marqo
DEFAULT_MARQO_URL = "http://localhost:8882"
DEFAULT_MARQO_API_KEY = ""
INDEX_NAME = "langchain-integration-tests"
@pytest.fixture
def client():
# fixture for marqo client to be used throughout testing, resets the index
client = marqo.Client(url=DEFAULT_MARQO_URL, api_key=DEFAULT_MARQO_API_KEY)
try:
client.index(INDEX_NAME).delete()
except Exception:
pass
client.create_index(INDEX_NAME)
return client
def test_marqo(client) -> None:
"""Test end to end construction and search."""
texts = ["foo", "bar", "baz"]
marqo_search = Marqo.from_texts(
texts=texts,
index_name=INDEX_NAME,
url=DEFAULT_MARQO_URL,
api_key=DEFAULT_MARQO_API_KEY,
verbose=False,
)
results = marqo_search.similarity_search("foo", k=1)
assert results == [Document(page_content="foo")]
def test_marqo_with_metadatas(client) -> None:
"""Test end to end construction and search."""
texts = ["foo", "bar", "baz"]
metadatas = [{"page": i} for i in range(len(texts))]
marqo_search = Marqo.from_texts(
texts=texts,
metadatas=metadatas,
index_name=INDEX_NAME,
url=DEFAULT_MARQO_URL,
api_key=DEFAULT_MARQO_API_KEY,
verbose=False,
)
results = marqo_search.similarity_search("foo", k=1)
assert results == [Document(page_content="foo", metadata={"page": 0})]
def test_marqo_with_scores(client) -> None:
"""Test end to end construction and search with scores and IDs."""
texts = ["foo", "bar", "baz"]
metadatas = [{"page": i} for i in range(len(texts))]
marqo_search = Marqo.from_texts(
texts=texts,
metadatas=metadatas,
index_name=INDEX_NAME,
url=DEFAULT_MARQO_URL,
api_key=DEFAULT_MARQO_API_KEY,
verbose=False,
)
results = marqo_search.similarity_search_with_score("foo", k=3)
docs = [r[0] for r in results]
scores = [r[1] for r in results]
assert docs == [
Document(page_content="foo", metadata={"page": 0}),
Document(page_content="bar", metadata={"page": 1}),
Document(page_content="baz", metadata={"page": 2}),
]
assert scores[0] > scores[1] > scores[2]
def test_marqo_add_texts(client) -> None:
marqo_search = Marqo(client=client, index_name=INDEX_NAME)
ids1 = marqo_search.add_texts(["1", "2", "3"])
assert len(ids1) == 3
ids2 = marqo_search.add_texts(["1", "2", "3"])
assert len(ids2) == 3
assert len(set(ids1).union(set(ids2))) == 6
def test_marqo_search(client) -> None:
marqo_search = Marqo(client=client, index_name=INDEX_NAME)
input_documents = ["This is document 1", "2", "3"]
ids = marqo_search.add_texts(input_documents)
results = marqo_search.marqo_similarity_search("What is the first document?", k=3)
assert len(ids) == len(input_documents)
assert ids[0] == results["hits"][0]["_id"]
def test_marqo_bulk(client) -> None:
marqo_search = Marqo(client=client, index_name=INDEX_NAME)
input_documents = ["This is document 1", "2", "3"]
ids = marqo_search.add_texts(input_documents)
bulk_results = marqo_search.bulk_similarity_search(
["What is the first document?", "2", "3"], k=3
)
assert len(ids) == len(input_documents)
assert bulk_results[0][0].page_content == input_documents[0]
assert bulk_results[1][0].page_content == input_documents[1]
assert bulk_results[2][0].page_content == input_documents[2]
def test_marqo_weighted_query(client) -> None:
"""Test end to end construction and search."""
texts = ["Smartphone", "Telephone"]
marqo_search = Marqo.from_texts(
texts=texts,
index_name=INDEX_NAME,
url=DEFAULT_MARQO_URL,
api_key=DEFAULT_MARQO_API_KEY,
verbose=False,
)
results = marqo_search.similarity_search(
{"communications device": 1.0, "Old technology": -5.0}, k=1
)
assert results == [Document(page_content="Smartphone")]
def test_marqo_multimodal():
client = marqo.Client(url=DEFAULT_MARQO_URL, api_key=DEFAULT_MARQO_API_KEY)
try:
client.index(INDEX_NAME).delete()
except Exception:
pass
# reset the index for this example
client.delete_index(INDEX_NAME)
# This index could have been created by another system
settings = {"treat_urls_and_pointers_as_images": True, "model": "ViT-L/14"}
client.create_index(INDEX_NAME, **settings)
client.index(INDEX_NAME).add_documents(
[
# image of a bus
{
"caption": "Bus",
"image": "https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image4.jpg",
},
# image of a plane
{
"caption": "Plane",
"image": "https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image2.jpg",
},
],
)
def get_content(res):
if "text" in res:
return res["text"]
return f"{res['caption']}: {res['image']}"
marqo_search = Marqo(client, INDEX_NAME, page_content_builder=get_content)
query = "vehicles that fly"
docs = marqo_search.similarity_search(query)
assert docs[0].page_content.split(":")[0] == "Plane"
raised_value_error = False
try:
marqo_search.add_texts(["text"])
except ValueError:
raised_value_error = True
assert raised_value_error