Merge branch 'master' into sqldocstore_postgres_compat

This commit is contained in:
Alex Lee 2025-03-19 08:33:33 -07:00 committed by GitHub
commit 51ddb322f1
13 changed files with 397 additions and 95 deletions

View File

@ -35,7 +35,7 @@ from a template, which can be edited to implement your LangChain components.
- [GitHub](https://github.com) account
- [PyPi](https://pypi.org/) account
### Boostrapping a new Python package with langchain-cli
### Bootstrapping a new Python package with langchain-cli
First, install `langchain-cli` and `poetry`:
@ -104,7 +104,7 @@ dependency management and packaging, and you're welcome to use any other tools y
- [GitHub](https://github.com) account
- [PyPi](https://pypi.org/) account
### Boostrapping a new Python package with Poetry
### Bootstrapping a new Python package with Poetry
First, install Poetry:

View File

@ -0,0 +1,244 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Dell PowerScale Document Loader\n",
"\n",
"[Dell PowerScale](https://www.dell.com/en-us/shop/powerscale-family/sf/powerscale) is an enterprise scale out storage system that hosts industry leading OneFS filesystem that can be hosted on-prem or deployed in the cloud.\n",
"\n",
"This document loader utilizes unique capabilities from PowerScale that can determine what files that have been modified since an application's last run and only returns modified files for processing. This will eliminate the need to re-process (chunk and embed) files that have not been changed, improving the overall data ingestion workflow.\n",
"\n",
"This loader requires PowerScale's MetadataIQ feature enabled. Additional information can be found on our GitHub Repo: [https://github.com/dell/powerscale-rag-connector](https://github.com/dell/powerscale-rag-connector)\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/document_loaders/web_loaders/__module_name___loader)|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [PowerScaleDocumentLoader](https://github.com/dell/powerscale-rag-connector/blob/main/src/powerscale_rag_connector/PowerScaleDocumentLoader.py) | [powerscale-rag-connector](https://github.com/dell/powerscale-rag-connector) | ✅ | ❌ | ❌ | \n",
"| [PowerScaleUnstructuredLoader](https://github.com/dell/powerscale-rag-connector/blob/main/src/powerscale_rag_connector/PowerScaleUnstructuredLoader.py) | [powerscale-rag-connector](https://github.com/dell/powerscale-rag-connector) | ✅ | ❌ | ❌ | \n",
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | \n",
"| PowerScaleDocumentLoader | ✅ | ✅ | \n",
"| PowerScaleUnstructuredLoader | ✅ | ✅ | \n",
"\n",
"## Setup\n",
"\n",
"This document loader requires the use of a Dell PowerScale system with MetadataIQ enabled. Additional information can be found on our github page: [https://github.com/dell/powerscale-rag-connector](https://github.com/dell/powerscale-rag-connector)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The document loader lives in an external pip package and can be installed using standard tooling"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet powerscale-rag-connector"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"Now we can instantiate document loader:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generic Document Loader\n",
"\n",
"Our generic document loader can be used to incrementally load all files from PowerScale in the following manner:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from powerscale_rag_connector import PowerScaleDocumentLoader\n",
"\n",
"loader = PowerScaleDocumentLoader(\n",
" es_host_url=\"http://elasticsearch:9200\",\n",
" es_index_name=\"metadataiq\",\n",
" es_api_key=\"your-api-key\",\n",
" folder_path=\"/ifs/data\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### UnstructuredLoader Loader\n",
"\n",
"Optionally, the `PowerScaleUnstructuredLoader` can be used to locate the changed files _and_ automatically process the files producing elements of the source file. This is done using LangChain's `UnstructuredLoader` class."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from powerscale_rag_connector import PowerScaleUnstructuredLoader\n",
"\n",
"# Or load files with the Unstructured Loader\n",
"loader = PowerScaleUnstructuredLoader(\n",
" es_host_url=\"http://elasticsearch:9200\",\n",
" es_index_name=\"metadataiq\",\n",
" es_api_key=\"your-api-key\",\n",
" folder_path=\"/ifs/data\",\n",
" # 'elements' mode splits the document into more granular chunks\n",
" # Use 'single' mode if you want the entire document as a single chunk\n",
" mode=\"elements\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The fields:\n",
" - `es_host_url` is the endpoint to to MetadataIQ Elasticsearch database\n",
" - `es_index_index` is the name of the index where PowerScale writes it file system metadata\n",
" - `es_api_key` is the **encoded** version of your elasticsearch API key\n",
" - `folder_path` is the path on PowerScale to be queried for changes"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load\n",
"\n",
"Internally, all code is asynchronous with PowerScale and MetadataIQ and the load and lazy load methods will return a python generator. We recommend using the lazy load function."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='' metadata={'source': '/ifs/pdfs/1994-Graph.Theoretic.Obstacles.to.Perfect.Hashing.TR0257.pdf', 'snapshot': 20834, 'change_types': ['ENTRY_ADDED']}),\n",
"Document(page_content='' metadata={'source': '/ifs/pdfs/New.sendfile-FreeBSD.20.Feb.2015.pdf', 'snapshot': 20920, 'change_types': ['ENTRY_MODIFIED']}),\n",
"Document(page_content='' metadata={'source': '/ifs/pdfs/FAST-Fast.Architecture.Sensitive.Tree.Search.on.Modern.CPUs.and.GPUs-Slides.pdf', 'snapshot': 20924, 'change_types': ['ENTRY_ADDED']})]\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"for doc in loader.load():\n",
" print(doc)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Returned Object\n",
"\n",
"Both document loaders will keep track of what files were previously returned to your application. When called again, the document loader will only return new or modified files since your previous run.\n",
"\n",
" - The `metadata` fields in the returned `Document` will return the path on PowerScale that contains the modified file. You will use this path to read the data via NFS (or S3) and process the data in your application (e.g.: create chunks and embedding). \n",
" - The `source` field is the path on PowerScale and not necessarily on your local system (depending on your mount strategy); OneFS expresses the entire storage system as a single tree rooted at `/ifs`.\n",
" - The `change_types` property will inform you on what change occurred since the last one - e.g.: new, modified or delete.\n",
"\n",
"Your RAG application can use the information from `change_types` to add, update or delete entries your chunk and vector store.\n",
"\n",
"When using `PowerScaleUnstructuredLoader` the `page_content` field will be filled with data from the Unstructured Loader"
]
},
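{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch, an ingestion loop might branch on `change_types` like this. The `ENTRY_ADDED` and `ENTRY_MODIFIED` values appear in the sample output above; check the connector repo for the full set of change type values, and swap in your own chunking, embedding, and vector store calls:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for doc in loader.lazy_load():\n",
"    path = doc.metadata[\"source\"]  # path on PowerScale; read via your NFS/S3 mount\n",
"    changes = doc.metadata[\"change_types\"]\n",
"    if \"ENTRY_ADDED\" in changes:\n",
"        pass  # read the file at `path`, chunk, embed, and add entries to your vector store\n",
"    elif \"ENTRY_MODIFIED\" in changes:\n",
"        pass  # re-process the file and update its existing entries\n",
"    else:\n",
"        pass  # other change types, e.g. deletions: remove stale entries"
]
},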
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load\n",
"\n",
"Internally, all code is asynchronous with PowerScale and MetadataIQ and the load and lazy load methods will return a python generator. We recommend using the lazy load function."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for doc in loader.lazy_load():\n",
" print(doc) # do something specific with the document"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same `Document` is returned as the load function with all the same properties mentioned above."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Additional Examples\n",
"\n",
"Additional examples and code can be found on our public github webpage: [https://github.com/dell/powerscale-rag-connector/tree/main/examples](https://github.com/dell/powerscale-rag-connector/tree/main/examples) that provide full working examples. \n",
"\n",
" - [PowerScale LangChain Document Loader](https://github.com/dell/powerscale-rag-connector/blob/main/examples/powerscale_langchain_doc_loader.py) - Working example of our standard document loader\n",
" - [PowerScale LangChain Unstructured Loader](https://github.com/dell/powerscale-rag-connector/blob/main/examples/powerscale_langchain_unstructured_loader.py) - Working example of our standard document loader using unstructured loader for chunking and embedding\n",
" - [PowerScale NVIDIA Retriever Microservice Loader](https://github.com/dell/powerscale-rag-connector/blob/main/examples/powerscale_nvingest_example.py) - Working example of our document loader with NVIDIA NeMo Retriever microservices for chunking and embedding"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all PowerScale Document Loader features and configurations head to the github page: [https://github.com/dell/powerscale-rag-connector/](https://github.com/dell/powerscale-rag-connector/)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@ -0,0 +1,22 @@
# Dell
Dell is a global technology company that provides a range of hardware, software, and
services, including AI solutions. Its AI portfolio includes purpose-built
infrastructure for AI workloads, such as Dell PowerScale storage systems optimized
for AI data management.
## PowerScale
Dell [PowerScale](https://www.dell.com/en-us/shop/powerscale-family/sf/powerscale) is
an enterprise scale-out storage system that hosts the industry-leading OneFS
filesystem, which can be deployed on-prem or in the cloud.
### Installation and Setup
```bash
pip install powerscale-rag-connector
```
### Document loaders
See detail on available loaders [here](/docs/integrations/document_loaders/powerscale).
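Both loaders are importable from the package installed above; the class names below
match those used in the document loader guide:
```python
from powerscale_rag_connector import (
    PowerScaleDocumentLoader,
    PowerScaleUnstructuredLoader,
)
```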

View File

@ -171,7 +171,7 @@ print(result)
# Prompt declarations
By default the prompt is is the whole function docs, unless you mark your prompt
By default the prompt is the whole function docs, unless you mark your prompt
## Documenting your prompt

View File

@ -62,25 +62,23 @@
},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, OpenAIFunctionsAgent\n",
"from langchain.tools import Tool\n",
"from langchain.agents import AgentType, initialize_agent, load_tools\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",
"tools = Tool(\n",
" name=\"email-sender\",\n",
" description=\"Sends an email with the specified content to test@testing123.com\",\n",
" func=lambda input_text: f\"Email sent to test@testing123.com with content: {input_text}\",\n",
"tools = load_tools(\n",
" [\"awslambda\"],\n",
" awslambda_tool_name=\"email-sender\",\n",
" awslambda_tool_description=\"sends an email with the specified content to test@testing123.com\",\n",
" function_name=\"testFunction1\",\n",
")\n",
"\n",
"agent = OpenAIFunctionsAgent(llm=llm, tools=[tools])\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")\n",
"\n",
"agent_executor = AgentExecutor(agent=agent, tools=[tools], verbose=True)\n",
"\n",
"agent_executor.invoke(\n",
" {\"input\": \" Send an email to test@testing123.com saying hello world.\"}\n",
")"
"agent.run(\"Send an email to test@testing123.com saying hello world.\")"
]
},
{

View File

@ -8,11 +8,9 @@
"source": [
"# Qdrant\n",
"\n",
">[Qdrant](https://qdrant.tech/documentation/) (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage vectors with additional payload and extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.\n",
">[Qdrant](https://qdrant.tech/documentation/) (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage vectors with additional payload and extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.\n",
"\n",
"This documentation demonstrates how to use Qdrant with Langchain for dense/sparse and hybrid retrieval.\n",
"\n",
"> This page documents the `QdrantVectorStore` class that supports multiple retrieval modes via Qdrant's new [Query API](https://qdrant.tech/blog/qdrant-1.10.x/). It requires you to run Qdrant v1.10.0 or above.\n",
"This documentation demonstrates how to use Qdrant with LangChain for dense (i.e., embedding-based), sparse (i.e., text search) and hybrid retrieval. The `QdrantVectorStore` class supports multiple retrieval modes via Qdrant's new [Query API](https://qdrant.tech/blog/qdrant-1.10.x/). It requires you to run Qdrant v1.10.0 or above.\n",
"\n",
"\n",
"## Setup\n",
@ -22,7 +20,7 @@
"- Docker deployments\n",
"- Qdrant Cloud\n",
"\n",
"See the [installation instructions](https://qdrant.tech/documentation/install/)."
"Please see the installation instructions [here](https://qdrant.tech/documentation/install/)."
]
},
{
@ -70,11 +68,11 @@
"\n",
"### Local mode\n",
"\n",
"Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging or storing just a small amount of vectors. The embeddings might be fully kept in memory or persisted on disk.\n",
"The Python client provides the option to run the code in local mode without running the Qdrant server. This is great for testing things out and debugging or storing just a small amount of vectors. The embeddings can be kept fully in-memory or persisted on-disk.\n",
"\n",
"#### In-memory\n",
"\n",
"For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook.\n",
"For some testing scenarios and quick experiments, you may prefer to keep all the data in-memory only, so it gets removed when the client is destroyed - usually at the end of your script/notebook.\n",
"\n",
"\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
@ -135,7 +133,7 @@
"source": [
"#### On-disk storage\n",
"\n",
"Local mode, without using the Qdrant server, may also store your vectors on disk so they persist between runs."
"Local mode, without using the Qdrant server, may also store your vectors on-disk so they persist between runs."
]
},
{
@ -173,7 +171,7 @@
"source": [
"### On-premise server deployment\n",
"\n",
"No matter if you choose to launch Qdrant locally with [a Docker container](https://qdrant.tech/documentation/install/), or select a Kubernetes deployment with [the official Helm chart](https://github.com/qdrant/qdrant-helm), the way you're going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service."
"No matter if you choose to launch Qdrant locally with [a Docker container](https://qdrant.tech/documentation/install/) or select a Kubernetes deployment with [the official Helm chart](https://github.com/qdrant/qdrant-helm), the way you're going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service."
]
},
{
@ -280,42 +278,22 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"id": "7697a362",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['c04134c3-273d-4766-949a-eee46052ad32',\n",
" '9e6ba50c-794f-4b88-94e5-411f15052a02',\n",
" 'd3202666-6f2b-4186-ac43-e35389de8166',\n",
" '50d8d6ee-69bf-4173-a6a2-b254e9928965',\n",
" 'bd2eae02-74b5-43ec-9fcf-09e9d9db6fd3',\n",
" '6dae6b37-826d-4f14-8376-da4603b35de3',\n",
" 'b0964ab5-5a14-47b4-a983-37fa5c5bd154',\n",
" '91ed6c56-fe53-49e2-8199-c3bb3c33c3eb',\n",
" '42a580cb-7469-4324-9927-0febab57ce92',\n",
" 'ff774e5c-f158-4d12-94e2-0a0162b22f27']"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" page_content=\"I had chocolate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_2 = Document(\n",
" page_content=\"The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.\",\n",
" page_content=\"The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees Fahrenheit.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
@ -371,8 +349,16 @@
" document_9,\n",
" document_10,\n",
"]\n",
"uuids = [str(uuid4()) for _ in range(len(documents))]\n",
"\n",
"uuids = [str(uuid4()) for _ in range(len(documents))]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "413c3d9a",
"metadata": {},
"outputs": [],
"source": [
"vector_store.add_documents(documents=documents, ids=uuids)"
]
},
@ -418,11 +404,11 @@
"source": [
"## Query vector store\n",
"\n",
"Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent. \n",
"Once your vector store has been created and the relevant documents have been added, you will most likely wish to query it during the running of your chain or agent. \n",
"\n",
"### Query directly\n",
"\n",
"The simplest scenario for using Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded into vector embeddings and used to find similar documents in Qdrant collection."
"The simplest scenario for using the Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded into vector embeddings and used to find similar documents in a Qdrant collection."
]
},
{
@ -459,17 +445,17 @@
"id": "79bcb0ce",
"metadata": {},
"source": [
"`QdrantVectorStore` supports 3 modes for similarity searches. They can be configured using the `retrieval_mode` parameter when setting up the class.\n",
"`QdrantVectorStore` supports 3 modes for similarity searches. They can be configured using the `retrieval_mode` parameter.\n",
"\n",
"- Dense Vector Search(Default)\n",
"- Dense Vector Search (default)\n",
"- Sparse Vector Search\n",
"- Hybrid Search\n",
"\n",
"### Dense Vector Search\n",
"\n",
"To search with only dense vectors,\n",
"Dense vector search involves calculating similarity via vector-based embeddings. To search with only dense vectors:\n",
"\n",
"- The `retrieval_mode` parameter should be set to `RetrievalMode.DENSE`(default).\n",
"- The `retrieval_mode` parameter should be set to `RetrievalMode.DENSE`. This is the default behavior.\n",
"- A [dense embeddings](https://python.langchain.com/docs/integrations/text_embedding/) value should be provided to the `embedding` parameter."
]
},
@ -480,18 +466,31 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_qdrant import RetrievalMode\n",
"from langchain_qdrant import QdrantVectorStore, RetrievalMode\n",
"from qdrant_client import QdrantClient\n",
"from qdrant_client.http.models import Distance, VectorParams\n",
"\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embedding=embeddings,\n",
" location=\":memory:\",\n",
"# Create a Qdrant client for local storage\n",
"client = QdrantClient(path=\"/tmp/langchain_qdrant\")\n",
"\n",
"# Create a collection with dense vectors\n",
"client.create_collection(\n",
" collection_name=\"my_documents\",\n",
" vectors_config=VectorParams(size=3072, distance=Distance.COSINE),\n",
")\n",
"\n",
"qdrant = QdrantVectorStore(\n",
" client=client,\n",
" collection_name=\"my_documents\",\n",
" embedding=embeddings,\n",
" retrieval_mode=RetrievalMode.DENSE,\n",
")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = qdrant.similarity_search(query)"
"qdrant.add_documents(documents=documents, ids=uuids)\n",
"\n",
"query = \"How much money did the robbers steal?\"\n",
"found_docs = qdrant.similarity_search(query)\n",
"found_docs"
]
},
{
@ -501,10 +500,10 @@
"source": [
"### Sparse Vector Search\n",
"\n",
"To search with only sparse vectors,\n",
"To search with only sparse vectors:\n",
"\n",
"- The `retrieval_mode` parameter should be set to `RetrievalMode.SPARSE`.\n",
"- An implementation of the [`SparseEmbeddings`](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) interface using any sparse embeddings provider has to be provided as value to the `sparse_embedding` parameter.\n",
"- An implementation of the [`SparseEmbeddings`](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) interface using any sparse embeddings provider has to be provided as a value to the `sparse_embedding` parameter.\n",
"\n",
"The `langchain-qdrant` package provides a [FastEmbed](https://github.com/qdrant/fastembed) based implementation out of the box.\n",
"\n",
@ -518,7 +517,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install fastembed"
"%pip install -qU fastembed"
]
},
{
@ -528,20 +527,37 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_qdrant import FastEmbedSparse, RetrievalMode\n",
"from langchain_qdrant import FastEmbedSparse, QdrantVectorStore, RetrievalMode\n",
"from qdrant_client import QdrantClient, models\n",
"from qdrant_client.http.models import Distance, SparseVectorParams, VectorParams\n",
"\n",
"sparse_embeddings = FastEmbedSparse(model_name=\"Qdrant/bm25\")\n",
"\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" sparse_embedding=sparse_embeddings,\n",
" location=\":memory:\",\n",
"# Create a Qdrant client for local storage\n",
"client = QdrantClient(path=\"/tmp/langchain_qdrant\")\n",
"\n",
"# Create a collection with sparse vectors\n",
"client.create_collection(\n",
" collection_name=\"my_documents\",\n",
" retrieval_mode=RetrievalMode.SPARSE,\n",
" vectors_config={\"dense\": VectorParams(size=3072, distance=Distance.COSINE)},\n",
" sparse_vectors_config={\n",
" \"sparse\": SparseVectorParams(index=models.SparseIndexParams(on_disk=False))\n",
" },\n",
")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = qdrant.similarity_search(query)"
"qdrant = QdrantVectorStore(\n",
" client=client,\n",
" collection_name=\"my_documents\",\n",
" sparse_embedding=sparse_embeddings,\n",
" retrieval_mode=RetrievalMode.SPARSE,\n",
" sparse_vector_name=\"sparse\",\n",
")\n",
"\n",
"qdrant.add_documents(documents=documents, ids=uuids)\n",
"\n",
"query = \"How much money did the robbers steal?\"\n",
"found_docs = qdrant.similarity_search(query)\n",
"found_docs"
]
},
{
@ -555,9 +571,9 @@
"\n",
"- The `retrieval_mode` parameter should be set to `RetrievalMode.HYBRID`.\n",
"- A [dense embeddings](https://python.langchain.com/docs/integrations/text_embedding/) value should be provided to the `embedding` parameter.\n",
"- An implementation of the [`SparseEmbeddings`](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) interface using any sparse embeddings provider has to be provided as value to the `sparse_embedding` parameter.\n",
"- An implementation of the [`SparseEmbeddings`](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) interface using any sparse embeddings provider has to be provided as a value to the `sparse_embedding` parameter.\n",
"\n",
"Note that if you've added documents with the `HYBRID` mode, you can switch to any retrieval mode when searching. Since both the dense and sparse vectors are available in the collection."
"Note that if you've added documents with the `HYBRID` mode, you can switch to any retrieval mode when searching, since both the dense and sparse vectors are available in the collection."
]
},
{
@ -567,21 +583,39 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_qdrant import FastEmbedSparse, RetrievalMode\n",
"from langchain_qdrant import FastEmbedSparse, QdrantVectorStore, RetrievalMode\n",
"from qdrant_client import QdrantClient, models\n",
"from qdrant_client.http.models import Distance, SparseVectorParams, VectorParams\n",
"\n",
"sparse_embeddings = FastEmbedSparse(model_name=\"Qdrant/bm25\")\n",
"\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embedding=embeddings,\n",
" sparse_embedding=sparse_embeddings,\n",
" location=\":memory:\",\n",
"# Create a Qdrant client for local storage\n",
"client = QdrantClient(path=\"/tmp/langchain_qdrant\")\n",
"\n",
"# Create a collection with both dense and sparse vectors\n",
"client.create_collection(\n",
" collection_name=\"my_documents\",\n",
" retrieval_mode=RetrievalMode.HYBRID,\n",
" vectors_config={\"dense\": VectorParams(size=3072, distance=Distance.COSINE)},\n",
" sparse_vectors_config={\n",
" \"sparse\": SparseVectorParams(index=models.SparseIndexParams(on_disk=False))\n",
" },\n",
")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = qdrant.similarity_search(query)"
"qdrant = QdrantVectorStore(\n",
" client=client,\n",
" collection_name=\"my_documents\",\n",
" embedding=embeddings,\n",
" sparse_embedding=sparse_embeddings,\n",
" retrieval_mode=RetrievalMode.HYBRID,\n",
" vector_name=\"dense\",\n",
" sparse_vector_name=\"sparse\",\n",
")\n",
"\n",
"qdrant.add_documents(documents=documents, ids=uuids)\n",
"\n",
"query = \"How much money did the robbers steal?\"\n",
"found_docs = qdrant.similarity_search(query)\n",
"found_docs"
]
},
{
@ -728,7 +762,7 @@
"source": [
"## Customizing Qdrant\n",
"\n",
"There are options to use an existing Qdrant collection within your Langchain application. In such cases, you may need to define how to map Qdrant point into the Langchain `Document`.\n",
"There are options to use an existing Qdrant collection within your LangChain application. In such cases, you may need to define how to map Qdrant point into the LangChain `Document`.\n",
"\n",
"### Named vectors\n",
"\n",

View File

@ -259,7 +259,7 @@ class ChatLiteLLM(BaseChatModel):
organization: Optional[str] = None
custom_llm_provider: Optional[str] = None
request_timeout: Optional[Union[float, Tuple[float, float]]] = None
temperature: Optional[float] = 1
temperature: Optional[float] = None
"""Run inference with this temperature. Must be in the closed
interval [0.0, 1.0]."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
@ -270,12 +270,12 @@ class ChatLiteLLM(BaseChatModel):
top_k: Optional[int] = None
"""Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive."""
n: int = 1
n: Optional[int] = None
"""Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated."""
max_tokens: Optional[int] = None
max_retries: int = 6
max_retries: int = 1
@property
def _default_params(self) -> Dict[str, Any]:

View File

@ -448,6 +448,7 @@ class ConfluenceLoader(BaseLoader):
content_format,
ocr_languages,
keep_markdown_format,
keep_newlines=keep_newlines,
)
def load(self, **kwargs: Any) -> List[Document]:

View File

@ -99,7 +99,6 @@ def test_config_traceable_handoff() -> None:
rt = get_current_run_tree()
assert rt
assert rt.session_name == "another-flippin-project"
assert rt.parent_run and rt.parent_run.name == "my_parent_function"
return my_child_function(a)
def my_parent_function(a: int) -> int:

View File

@ -519,4 +519,8 @@ packages:
- name: langchain-xinference
path: .
repo: TheSongg/langchain-xinference
- name: powerscale-rag-connector
name_title: PowerScale RAG Connector
path: .
repo: dell/powerscale-rag-connector
provider_page: dell

View File

@ -6,9 +6,9 @@ build-backend = "pdm.backend"
authors = []
license = { text = "MIT" }
requires-python = "<4.0,>=3.9"
dependencies = ["langchain-core<1.0.0,>=0.3.42", "groq<1,>=0.4.1"]
dependencies = ["langchain-core<1.0.0,>=0.3.45", "groq<1,>=0.4.1"]
name = "langchain-groq"
version = "0.2.5"
version = "0.3.0"
description = "An integration package connecting Groq and LangChain"
readme = "README.md"

View File

@ -313,7 +313,7 @@ wheels = [
[[package]]
name = "langchain-core"
version = "0.3.42"
version = "0.3.45"
source = { editable = "../../core" }
dependencies = [
{ name = "jsonpatch" },

View File

@ -527,7 +527,7 @@ class BaseChatOpenAI(BaseChatModel):
used, or it's a list of disabled values for the parameter.
For example, older models may not support the 'parallel_tool_calls' parameter at
all, in which case ``disabled_params={"parallel_tool_calls": None}`` can ben passed
all, in which case ``disabled_params={"parallel_tool_calls": None}`` can be passed
in.
If a parameter is disabled then it will not be used by default in any methods, e.g.