Mirror of https://github.com/hwchase17/langchain.git (synced 2025-06-24 07:35:18 +00:00)
templates: LanceDB RAG template (#17809)
Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
  - Where "package" is whichever of langchain, community, core, experimental, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
  - Example: "community: add foobar LLM"
- [x] **PR message**: ***Delete this entire checklist*** and replace with
  - **Description:** a description of the change
  - **Issue:** the issue # it fixes, if applicable
  - **Dependencies:** any dependencies required for this change
  - **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!
- [x] **Add tests and docs**: If you're adding a new integration, please include
  1. a test for the integration, preferably unit tests that do not rely on network access,
  2. an example notebook showing its use. It lives in the `docs/docs/integrations` directory.
- [x] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.

If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
This commit is contained in: parent 760a16ff32, commit b641be2edf
templates/rag-lancedb/LICENSE (new file, 21 lines)
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 LangChain, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
templates/rag-lancedb/README.md (new file, 66 lines)
@@ -0,0 +1,66 @@
# rag-lancedb

This template performs RAG using LanceDB and OpenAI.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
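If you prefer to set this from Python, for example in a notebook, a minimal sketch (the key value is a placeholder):

```python
import os

# Placeholder only; use your real OpenAI API key.
os.environ["OPENAI_API_KEY"] = "sk-..."
```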
## Usage
|
||||
|
||||
To use this package, you should first have the LangChain CLI installed:
|
||||
|
||||
```shell
|
||||
pip install -U langchain-cli
|
||||
```
|
||||
|
||||
To create a new LangChain project and install this as the only package, you can do:
|
||||
|
||||
```shell
|
||||
langchain app new my-app --package rag-lancedb
|
||||
```
|
||||
|
||||
If you want to add this to as existing project, you can just run:
|
||||
|
||||
```shell
|
||||
langchain app add rag-lancedb
|
||||
```
|
||||
|
||||
And add the following code to your `server.py` file:
|
||||
```python
|
||||
from rag_lancedb import chain as rag_lancedb_chain
|
||||
|
||||
add_routes(app, rag_lancedb_chain, path="/rag-lancedb")
|
||||
```
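For reference, here is a minimal sketch of what the surrounding `server.py` could look like. The FastAPI `app` object and the `add_routes` import are assumptions based on the standard LangServe scaffold that `langchain app new` generates; adapt it to your project.

```python
from fastapi import FastAPI
from langserve import add_routes

from rag_lancedb import chain as rag_lancedb_chain

app = FastAPI()

# Expose the RAG chain under /rag-lancedb
add_routes(app, rag_lancedb_chain, path="/rag-lancedb")

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```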
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-lancedb/playground](http://127.0.0.1:8000/rag-lancedb/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-lancedb")
```
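Once the server is running, the remote chain behaves like any other runnable; a minimal sketch of querying it (the question string simply matches the sample text embedded in `chain.py`):

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-lancedb")

# The chain takes a plain question string and returns the model's answer as a string.
answer = runnable.invoke("Where did Harrison work?")
print(answer)
```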
templates/rag-lancedb/poetry.lock (generated, new file, 2097 lines)
File diff suppressed because it is too large.
templates/rag-lancedb/pyproject.toml (new file, 34 lines)
@@ -0,0 +1,34 @@
[tool.poetry]
name = "rag-lancedb"
version = "0.0.1"
description = "RAG using LanceDB"
authors = ["akash desai"]
readme = "README.md"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
langchain-core = ">=0.1.5"
langchain-openai = ">=0.0.1"
lancedb = "^0.5.5"
openai = "<2"
tiktoken = "^0.5.2"

[tool.poetry.group.dev.dependencies]
langchain-cli = ">=0.0.21"
fastapi = "^0.104.0"
sse-starlette = "^1.6.5"

[tool.langserve]
export_module = "rag_lancedb"
export_attr = "chain"

[tool.templates-hub]
use-case = "rag"
author = "LangChain"
integrations = ["OpenAI", "LanceDB"]
tags = ["vectordbs"]

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
templates/rag-lancedb/rag_lancedb/__init__.py (new file, 3 lines)
@@ -0,0 +1,3 @@
from rag_lancedb.chain import chain

__all__ = ["chain"]
templates/rag-lancedb/rag_lancedb/chain.py (new file, 60 lines)
@@ -0,0 +1,60 @@
from langchain_community.vectorstores import LanceDB
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Example for document loading (from URL), splitting, and creating the vectorstore

"""
# Load
from langchain_community.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()

# Split
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)

# Add to vectorDB
vectorstore = LanceDB.from_documents(
    documents=all_splits,
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()
"""

# Embed a single document for test
vectorstore = LanceDB.from_texts(
    ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
)

retriever = vectorstore.as_retriever()

# RAG prompt
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

# LLM
model = ChatOpenAI()

# RAG chain
chain = (
    RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
    | prompt
    | model
    | StrOutputParser()
)


class Question(BaseModel):
    __root__: str


chain = chain.with_types(input_type=Question)
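As a quick local sanity check of the file above, the exported `chain` can be invoked directly; a minimal sketch (assumes `OPENAI_API_KEY` is set, since it calls the OpenAI embedding and chat APIs):

```python
from rag_lancedb import chain

# The chain accepts a plain question string (see the Question input type above).
print(chain.invoke("Where did Harrison work?"))
```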
templates/rag-lancedb/tests/__init__.py (new, empty file)