docs: self-query consistency (#10502)

The [`self-querying`
navbar](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/)
has `self-querying` repeated in each menu item. I've simplified it to be more readable:
- removed `self-querying` from the title of each page;
- added a description to the vector stores;
- added a description and a link to the Integration Card
(`integrations/providers`) of the vector stores where they were missing.
Author: Leonid Ganeline
Date: 2023-09-13 14:43:04 -07:00
Committed by: GitHub
Parent: 415d38ae62
Commit: f4e6eac3b6
18 changed files with 336 additions and 214 deletions


@@ -1,15 +1,20 @@
 # Milvus
-This page covers how to use the Milvus ecosystem within LangChain.
-It is broken into two parts: installation and setup, and then references to specific Milvus wrappers.
+>[Milvus](https://milvus.io/docs/overview.md) is a database that stores, indexes, and manages
+> massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
 ## Installation and Setup
-- Install the Python SDK with `pip install pymilvus`
-## Wrappers
-### VectorStore
-There exists a wrapper around Milvus indexes, allowing you to use it as a vectorstore,
+Install the Python SDK:
+```bash
+pip install pymilvus
+```
+## Vector Store
+There exists a wrapper around `Milvus` indexes, allowing you to use it as a vectorstore,
 whether for semantic search or example selection.
 To import this vectorstore:
@@ -17,4 +22,4 @@ To import this vectorstore:
 from langchain.vectorstores import Milvus
 ```
-For a more detailed walkthrough of the Miluvs wrapper, see [this notebook](/docs/integrations/vectorstores/milvus.html)
+For a more detailed walkthrough of the `Miluvs` wrapper, see [this notebook](/docs/integrations/vectorstores/milvus.html)
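For context, here is a minimal sketch of using the wrapper touched by this diff. It is illustrative rather than part of the commit: the sample text is made up, and it assumes `pymilvus` is installed, an OpenAI key is configured, and a Milvus server is reachable at the default local address.

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Milvus

# Assumed setup: a Milvus instance listening on localhost:19530
vector_store = Milvus.from_texts(
    texts=["LangChain integrates with many vector stores."],  # illustrative data
    embedding=OpenAIEmbeddings(),
    connection_args={"host": "localhost", "port": "19530"},
)
docs = vector_store.similarity_search("vector store integrations", k=1)
```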


@@ -1,16 +1,18 @@
 # Pinecone
-This page covers how to use the Pinecone ecosystem within LangChain.
-It is broken into two parts: installation and setup, and then references to specific Pinecone wrappers.
+>[Pinecone](https://docs.pinecone.io/docs/overview) is a vector database with broad functionality.
 ## Installation and Setup
 Install the Python SDK:
 ```bash
 pip install pinecone-client
 ```
-## Vectorstore
+## Vector store
 There exists a wrapper around Pinecone indexes, allowing you to use it as a vectorstore,
 whether for semantic search or example selection.
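As with the other providers in this commit, here is a short hedged sketch of the Pinecone wrapper; the API key, environment, and index name below are placeholders, and the index is assumed to exist already.

```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

# Placeholder credentials; the index "langchain-demo" is assumed to exist
pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

vector_store = Pinecone.from_texts(
    texts=["LangChain integrates with many vector stores."],  # illustrative data
    embedding=OpenAIEmbeddings(),
    index_name="langchain-demo",
)
docs = vector_store.similarity_search("vector store integrations", k=1)
```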


@@ -1,15 +1,22 @@
 # Qdrant
-This page covers how to use the Qdrant ecosystem within LangChain.
-It is broken into two parts: installation and setup, and then references to specific Qdrant wrappers.
+>[Qdrant](https://qdrant.tech/documentation/) (read: quadrant) is a vector similarity search engine.
+> It provides a production-ready service with a convenient API to store, search, and manage
+> points - vectors with an additional payload. `Qdrant` is tailored to extended filtering support.
 ## Installation and Setup
-- Install the Python SDK with `pip install qdrant-client`
-## Wrappers
-### VectorStore
-There exists a wrapper around Qdrant indexes, allowing you to use it as a vectorstore,
+Install the Python SDK:
+```bash
+pip install qdrant-client
+```
+## Vector Store
+There exists a wrapper around `Qdrant` indexes, allowing you to use it as a vectorstore,
 whether for semantic search or example selection.
 To import this vectorstore:
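A minimal sketch for the Qdrant wrapper; since the Qdrant client supports an in-process mode, this one needs no running server. The collection name and sample text are illustrative.

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

# ":memory:" runs Qdrant in-process, so no server is required for this sketch
vector_store = Qdrant.from_texts(
    texts=["LangChain integrates with many vector stores."],  # illustrative data
    embedding=OpenAIEmbeddings(),
    location=":memory:",
    collection_name="demo",  # illustrative collection name
)
docs = vector_store.similarity_search("vector store integrations", k=1)
```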


@@ -1,18 +1,26 @@
 # Redis
-This page covers how to use the [Redis](https://redis.com) ecosystem within LangChain.
-It is broken into two parts: installation and setup, and then references to specific Redis wrappers.
+>[Redis](https://redis.com) is an open-source key-value store that can be used as a cache,
+> message broker, database, vector database and more.
 ## Installation and Setup
-- Install the Redis Python SDK with `pip install redis`
+Install the Python SDK:
+```bash
+pip install redis
+```
 ## Wrappers
-All wrappers needing a redis url connection string to connect to the database support either a stand alone Redis server
+All wrappers need a redis url connection string to connect to the database support either a stand alone Redis server
 or a High-Availability setup with Replication and Redis Sentinels.
 ### Redis Standalone connection url
-For standalone Redis server the official redis connection url formats can be used as describe in the python redis modules
+For standalone `Redis` server, the official redis connection url formats can be used as describe in the python redis modules
 "from_url()" method [Redis.from_url](https://redis-py.readthedocs.io/en/stable/connections.html#redis.Redis.from_url)
 Example: `redis_url = "redis://:secret-pass@localhost:6379/0"`
@@ -20,7 +28,7 @@ Example: `redis_url = "redis://:secret-pass@localhost:6379/0"`
 ### Redis Sentinel connection url
 For [Redis sentinel setups](https://redis.io/docs/management/sentinel/) the connection scheme is "redis+sentinel".
-This is an un-offical extensions to the official IANA registered protocol schemes as long as there is no connection url
+This is an unofficial extensions to the official IANA registered protocol schemes as long as there is no connection url
 for Sentinels available.
 Example: `redis_url = "redis+sentinel://:secret-pass@sentinel-host:26379/mymaster/0"`


@@ -1,17 +1,18 @@
 # Vectara
-What is Vectara?
+>[Vectara](https://docs.vectara.com/docs/) is a GenAI platform for developers. It provides a simple API to build Grounded Generation
+>(aka Retrieval-augmented-generation or RAG) applications.
 **Vectara Overview:**
-- Vectara is developer-first API platform for building GenAI applications
+- `Vectara` is developer-first API platform for building GenAI applications
 - To use Vectara - first [sign up](https://console.vectara.com/signup) and create an account. Then create a corpus and an API key for indexing and searching.
 - You can use Vectara's [indexing API](https://docs.vectara.com/docs/indexing-apis/indexing) to add documents into Vectara's index
 - You can use Vectara's [Search API](https://docs.vectara.com/docs/search-apis/search) to query Vectara's index (which also supports Hybrid search implicitly).
 - You can use Vectara's integration with LangChain as a Vector store or using the Retriever abstraction.
 ## Installation and Setup
-To use Vectara with LangChain no special installation steps are required.
+To use `Vectara` with LangChain no special installation steps are required.
 To get started, follow our [quickstart](https://docs.vectara.com/docs/quickstart) guide to create an account, a corpus and an API key.
 Once you have these, you can provide them as arguments to the Vectara vectorstore, or you can set them as environment variables.
@@ -19,9 +20,8 @@ Once you have these, you can provide them as arguments to the Vectara vectorstor
 - export `VECTARA_CORPUS_ID`="your_corpus_id"
 - export `VECTARA_API_KEY`="your-vectara-api-key"
-## Usage
-### VectorStore
+## Vector Store
 There exists a wrapper around the Vectara platform, allowing you to use it as a vectorstore, whether for semantic search or example selection.
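A minimal sketch of that wrapper; the credentials are the same placeholders used above, and Vectara computes embeddings server-side, so no embedding model is passed.

```python
from langchain.vectorstores import Vectara

# Placeholder credentials; these can also come from the environment
# variables listed above instead of constructor arguments
vectara = Vectara(
    vectara_customer_id="your_customer_id",
    vectara_corpus_id="your_corpus_id",
    vectara_api_key="your-vectara-api-key",
)
vectara.add_texts(["LangChain integrates with many vector stores."])  # illustrative data
docs = vectara.similarity_search("vector store integrations")
```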


@@ -1,10 +1,10 @@
 # Weaviate
-This page covers how to use the Weaviate ecosystem within LangChain.
+>[Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store data objects and vector embeddings from
+>your favorite ML models, and scale seamlessly into billions of data objects.
-What is Weaviate?
+What is `Weaviate`?
 **Weaviate in a nutshell:**
 - Weaviate is an open-source database of the type vector search engine.
 - Weaviate allows you to store JSON documents in a class property-like fashion while attaching machine learning vectors to these documents to represent them in vector space.
 - Weaviate can be used stand-alone (aka bring your vectors) or with a variety of modules that can do the vectorization for you and extend the core capabilities.
@@ -14,15 +14,20 @@ What is Weaviate?
 **Weaviate in detail:**
-Weaviate is a low-latency vector search engine with out-of-the-box support for different media types (text, images, etc.). It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), etc. Built from scratch in Go, Weaviate stores both objects and vectors, allowing for combining vector search with structured filtering and the fault tolerance of a cloud-native database. It is all accessible through GraphQL, REST, and various client-side programming languages.
+`Weaviate` is a low-latency vector search engine with out-of-the-box support for different media types (text, images, etc.). It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), etc. Built from scratch in Go, Weaviate stores both objects and vectors, allowing for combining vector search with structured filtering and the fault tolerance of a cloud-native database. It is all accessible through GraphQL, REST, and various client-side programming languages.
 ## Installation and Setup
-- Install the Python SDK with `pip install weaviate-client`
-## Wrappers
-### VectorStore
-There exists a wrapper around Weaviate indexes, allowing you to use it as a vectorstore,
+Install the Python SDK:
+```bash
+pip install weaviate-client
+```
+## Vector Store
+There exists a wrapper around `Weaviate` indexes, allowing you to use it as a vectorstore,
 whether for semantic search or example selection.
 To import this vectorstore:
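And a final hedged sketch for the Weaviate wrapper; the server URL, class name, and property name are illustrative, and `by_text=False` makes the search use the supplied embedding model rather than a server-side vectorizer.

```python
import weaviate
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Weaviate

# Assumed setup: a Weaviate instance at this illustrative URL
client = weaviate.Client(url="http://localhost:8080")

# "Document"/"text" are an illustrative class and property name;
# by_text=False searches with the supplied embedding model
vector_store = Weaviate(
    client,
    index_name="Document",
    text_key="text",
    embedding=OpenAIEmbeddings(),
    by_text=False,
)
vector_store.add_texts(["LangChain integrates with many vector stores."])  # illustrative data
docs = vector_store.similarity_search("vector store integrations", k=1)
```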