# Redis

>[Redis (Remote Dictionary Server)](https://en.wikipedia.org/wiki/Redis) is an open-source in-memory storage,
> used as a distributed, in-memory key–value database, cache and message broker, with optional durability.
> Because it holds all data in memory and because of its design, `Redis` offers low-latency reads and writes,
> making it particularly suitable for use cases that require a cache. Redis is the most popular NoSQL database,
> and one of the most popular databases overall.

This page covers how to use the [Redis](https://redis.com) ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Redis wrappers.

## Installation and Setup

Install the Python SDK:

```bash
pip install redis
```

## Wrappers

All wrappers need a Redis URL connection string to connect to the database and support either a standalone Redis server
or a High-Availability setup with Replication and Redis Sentinels.

### Redis Standalone connection url

For a standalone `Redis` server, the official Redis connection URL formats can be used, as described in the Python redis module's
[Redis.from_url()](https://redis-py.readthedocs.io/en/stable/connections.html#redis.Redis.from_url) method documentation.

Example: `redis_url = "redis://:secret-pass@localhost:6379/0"`

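For illustration, a minimal connection sketch with the redis-py client, assuming a local server configured with the password from the example above:

```python
import redis

# Standalone connection URL from the example above (hypothetical credentials).
redis_url = "redis://:secret-pass@localhost:6379/0"

# redis-py parses the URL into host, port, password and database number.
redis_client = redis.Redis.from_url(redis_url)
redis_client.ping()  # returns True when the server is reachable
```
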
### Redis Sentinel connection url

For [Redis sentinel setups](https://redis.io/docs/management/sentinel/) the connection scheme is "redis+sentinel".
This is an unofficial extension to the official IANA-registered protocol schemes, used as long as no official connection URL
scheme for Sentinels is available.

Example: `redis_url = "redis+sentinel://:secret-pass@sentinel-host:26379/mymaster/0"`

The format is `redis+sentinel://[[username]:[password]]@[host-or-ip]:[port]/[service-name]/[db-number]`
with the default values of "service-name = mymaster" and "db-number = 0" if not set explicitly.
The service-name is the Redis server monitoring group name as configured within the Sentinel.

The current URL format limits the connection string to one sentinel host only (no list can be given), and
both the Redis server and the sentinel must have the same password set (if used).

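For reference, a minimal sketch of the equivalent connection made directly with redis-py's `Sentinel` class (hypothetical host, password and service name):

```python
from redis.sentinel import Sentinel

# A single sentinel host, mirroring the one-host limitation of the URL format above.
sentinel = Sentinel(
    [("sentinel-host", 26379)],
    sentinel_kwargs={"password": "secret-pass"},
)

# "mymaster" is the monitoring group name configured within the Sentinel;
# the same password is used for the Redis server, as the URL format requires.
redis_client = sentinel.master_for("mymaster", db=0, password="secret-pass")
```
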
### Redis Cluster connection url

Redis Cluster is currently not supported by the methods requiring a "redis_url" parameter.
The only way to use a Redis Cluster is with LangChain classes accepting a preconfigured Redis client like `RedisCache`
(example below).

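Such a preconfigured client could be created with redis-py's cluster support; a minimal sketch, assuming a cluster node reachable on localhost:7000:

```python
from redis.cluster import RedisCluster

# Connect to any one known node; redis-py discovers the rest of the cluster from it.
redis_client = RedisCluster(host="localhost", port=7000)
```

This client can then be passed to classes such as `RedisCache`, as in the Standard Cache example below.
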
### Cache

The Cache wrapper allows [Redis](https://redis.io) to be used as a remote, low-latency, in-memory cache for LLM prompts and responses.

#### Standard Cache

The standard cache is the bread and butter of Redis use cases in production, for both [open source](https://redis.io) and [enterprise](https://redis.com) users globally.

To import this cache:
```python
from langchain.cache import RedisCache
```

To use this cache with your LLMs:
```python
import langchain
import redis

# Create a redis-py client from your connection URL (see the connection url sections above).
redis_client = redis.Redis.from_url(...)
langchain.llm_cache = RedisCache(redis_client)
```

#### Semantic Cache

Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it blends Redis as both a cache and a vectorstore.

To import this cache:
```python
from langchain.cache import RedisSemanticCache
```

To use this cache with your LLMs:
```python
import langchain

# use any embedding provider...
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings

redis_url = "redis://localhost:6379"

langchain.llm_cache = RedisSemanticCache(
    embedding=FakeEmbeddings(),
    redis_url=redis_url
)
```

### VectorStore

The vectorstore wrapper turns Redis into a low-latency [vector database](https://redis.com/solutions/use-cases/vector-database/) for semantic search or LLM content retrieval.

To import this vectorstore:
```python
from langchain.vectorstores import Redis
```

For a more detailed walkthrough of the Redis vectorstore wrapper, see [this notebook](/docs/integrations/vectorstores/redis.html).

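As a brief sketch of typical usage, assuming an embedding provider such as `OpenAIEmbeddings`, a local server, and a hypothetical index name (the notebook above is the authoritative walkthrough):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Redis

# Index a few texts in Redis under a hypothetical index name.
vectorstore = Redis.from_texts(
    ["foo", "bar", "baz"],
    OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
    index_name="example",
)

# Low-latency semantic search over the indexed texts.
docs = vectorstore.similarity_search("foo")
```
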
### Retriever

The Redis vector store retriever wrapper generalizes the vectorstore class to perform low-latency document retrieval. To create the retriever, simply call `.as_retriever()` on the base vectorstore class.

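Continuing the hypothetical vectorstore sketch above:

```python
# Expose the vectorstore as a retriever for use in chains.
retriever = vectorstore.as_retriever()
docs = retriever.get_relevant_documents("foo")
```
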
### Memory

Redis can be used to persist LLM conversations.

#### Vector Store Retriever Memory

For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](/docs/modules/memory/types/vectorstore_retriever_memory.html).

#### Chat Message History Memory

For a detailed example of using Redis to cache conversation message history, see [this notebook](/docs/integrations/memory/redis_chat_message_history.html).
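A minimal sketch, assuming a local server and a hypothetical session id (see the notebook for the full walkthrough):

```python
from langchain.memory import RedisChatMessageHistory

# Messages are stored in Redis under a key derived from the session id.
history = RedisChatMessageHistory(session_id="my-session", url="redis://localhost:6379/0")

history.add_user_message("hi!")
history.add_ai_message("hello, how can I help?")
```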