docs: CrateDB: Educate readers about full and semantic cache components (#29000)
Dear @ccurme and @efriis,

following up on our initial patch adding documentation about CrateDB [^1]: with version 0.1.0 just released, the [CrateDB provider](https://python.langchain.com/docs/integrations/providers/cratedb/) now provides the `CrateDBCache` and `CrateDBSemanticCache` classes. This little patch updates the documentation accordingly.

Happy New Year! With kind regards, Andreas.

[^1]: Thanks for merging https://github.com/langchain-ai/langchain/pull/28877 so quickly.

/cc @kneth, @simonprickett

#### Preview

- [Full Cache](https://langchain-git-fork-crate-workbench-docs-cratedb-cache-langchain.vercel.app/docs/integrations/providers/cratedb/#full-cache)
- [Semantic Cache](https://langchain-git-fork-crate-workbench-docs-cratedb-cache-langchain.vercel.app/docs/integrations/providers/cratedb/#semantic-cache)
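As a quick, non-authoritative check that both newly announced classes are importable (assuming `langchain-cratedb>=0.1.0` is installed locally), a minimal sketch:

```python
# Minimal sketch: verify the classes announced above can be imported.
# Assumes `pip install "langchain-cratedb>=0.1.0"` has been run.
from langchain_cratedb import CrateDBCache, CrateDBSemanticCache

print(CrateDBCache.__name__, CrateDBSemanticCache.__name__)
```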
This commit is contained in: parent b258ff1930, commit e493e227c9
@@ -27,7 +27,7 @@ docker run --name=cratedb --rm \
[free trial][CrateDB Cloud Console].

### Install Client

-Install the most recent version of the `langchain-cratedb` package
+Install the most recent version of the [langchain-cratedb] package
and a few others that are needed for this tutorial.
```bash
pip install --upgrade langchain-cratedb langchain-openai unstructured
@@ -120,13 +120,78 @@ message_history = CrateDBChatMessageHistory(
message_history.add_user_message("hi!")
```

### Full Cache

The standard / full cache avoids invoking the LLM when the supplied
prompt is exactly the same as one it has already encountered.
See also [CrateDBCache Example].

To use the full cache in your applications:
```python
import sqlalchemy as sa
from langchain.globals import set_llm_cache
from langchain_openai import ChatOpenAI
from langchain_cratedb import CrateDBCache

# Configure cache.
engine = sa.create_engine("crate://crate@localhost:4200/?schema=testdrive")
set_llm_cache(CrateDBCache(engine))

# Invoke LLM conversation.
llm = ChatOpenAI(
    model_name="chatgpt-4o-latest",
    temperature=0.7,
)
print()
print("Asking with full cache:")
answer = llm.invoke("What is the answer to everything?")
print(answer.content)
```
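To confirm the cache is effective, one rough check is to send the exact same prompt twice and compare timings: the second, identical invocation should be answered from the CrateDB-backed cache instead of calling the OpenAI API again. A minimal sketch, assuming the same example connection string as above, a reachable CrateDB instance, and a valid OpenAI API key; the timing is purely illustrative.

```python
import time

import sqlalchemy as sa
from langchain.globals import set_llm_cache
from langchain_cratedb import CrateDBCache
from langchain_openai import ChatOpenAI

# Same cache setup as above (connection string is an example).
engine = sa.create_engine("crate://crate@localhost:4200/?schema=testdrive")
set_llm_cache(CrateDBCache(engine))

llm = ChatOpenAI(model_name="chatgpt-4o-latest")

# The first call populates the cache; the identical second call should be
# served from CrateDB without another LLM round trip.
for label in ("first call (uncached)", "second call (cached)"):
    started = time.perf_counter()
    answer = llm.invoke("What is the answer to everything?")
    print(f"{label}: {time.perf_counter() - started:.2f}s")
    print(answer.content)
```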

### Semantic Cache

The semantic cache allows users to retrieve cached prompts based on semantic
similarity between the user input and previously cached inputs, so the LLM
is only invoked when no sufficiently similar prompt has been answered before.
See also [CrateDBSemanticCache Example].

To use the semantic cache in your applications:
```python
import sqlalchemy as sa
from langchain.globals import set_llm_cache
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_cratedb import CrateDBSemanticCache

# Configure embeddings.
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# Configure cache.
engine = sa.create_engine("crate://crate@localhost:4200/?schema=testdrive")
set_llm_cache(
    CrateDBSemanticCache(
        embedding=embeddings,
        connection=engine,
        search_threshold=1.0,
    )
)

# Invoke LLM conversation.
llm = ChatOpenAI(model_name="chatgpt-4o-latest")
print()
print("Asking with semantic cache:")
answer = llm.invoke("What is the answer to everything?")
print(answer.content)
```
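Because lookups are based on embedding similarity rather than exact string matches, a semantically similar rephrasing of a cached prompt can also be served from the cache, subject to the configured `search_threshold`. A small, hypothetical continuation of the snippet above; whether it actually hits the cache depends on the embeddings and the threshold.

```python
# Continuation of the snippet above: `llm` and the semantic cache are already
# configured. This paraphrased prompt may be answered from the cache if its
# embedding is close enough to the previously cached question, as governed
# by search_threshold; otherwise the LLM is invoked and the cache grows.
print("Asking a paraphrased question with semantic cache:")
answer = llm.invoke("Tell me the answer to life, the universe, and everything.")
print(answer.content)
```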

[all features of CrateDB]: https://cratedb.com/docs/guide/feature/
[CrateDB]: https://cratedb.com/database
[CrateDB Cloud]: https://cratedb.com/database/cloud
[CrateDB Cloud Console]: https://console.cratedb.cloud/?utm_source=langchain&utm_content=documentation
[CrateDB installation options]: https://cratedb.com/docs/guide/install/
[CrateDBCache Example]: https://github.com/crate/langchain-cratedb/blob/main/examples/basic/cache.py
[CrateDBSemanticCache Example]: https://github.com/crate/langchain-cratedb/blob/main/examples/basic/cache.py
[CrateDBChatMessageHistory Tutorial]: https://github.com/crate/cratedb-examples/blob/main/topic/machine-learning/llm-langchain/conversational_memory.ipynb
[CrateDBLoader Tutorial]: https://github.com/crate/cratedb-examples/blob/main/topic/machine-learning/llm-langchain/document_loader.ipynb
[CrateDBVectorStore Tutorial]: https://github.com/crate/cratedb-examples/blob/main/topic/machine-learning/llm-langchain/vector_search.ipynb
[langchain-cratedb]: https://pypi.org/project/langchain-cratedb/
[using LangChain with CrateDB]: https://cratedb.com/docs/guide/integrate/langchain/