mirror of https://github.com/hwchase17/langchain.git (synced 2025-06-24 23:54:14 +00:00)

Gemini multi-modal RAG template (#14678)

commit 3449fce273 (parent 7234335a9a)
templates/rag-gemini-multi-modal/.gitignore (vendored, new file, 1 line)
@@ -0,0 +1 @@
docs/img_*.jpg
templates/rag-gemini-multi-modal/LICENSE (new file, 21 lines)
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 LangChain, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
templates/rag-gemini-multi-modal/README.md (new file, 106 lines)
@@ -0,0 +1,106 @@
# rag-gemini-multi-modal

Presentations (slide decks, etc.) contain visual content that challenges conventional RAG.

Multi-modal LLMs unlock new ways to build apps over visual content like presentations.

This template performs multi-modal RAG using Chroma with multi-modal OpenCLIP embeddings and [Google Gemini](https://deepmind.google/technologies/gemini/#introduction).

## Input

Supply a slide deck as a PDF in the `/docs` directory.

Create your vectorstore with:

```shell
poetry install
python ingest.py
```
## Embeddings

This template uses [OpenCLIP](https://github.com/mlfoundations/open_clip) multi-modal embeddings.

You can select from different embedding model options (see results [here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv)).

The first time you run the app, it will automatically download the multi-modal embedding model.

By default, LangChain will use an embedding model with reasonably strong performance, `ViT-H-14`.

You can choose alternative `OpenCLIPEmbeddings` models in `ingest.py`:

```python
vectorstore_mmembd = Chroma(
    collection_name="multi-modal-rag",
    persist_directory=str(re_vectorstore_path),
    embedding_function=OpenCLIPEmbeddings(
        model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
    ),
)
```
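For example, to trade some retrieval quality for a faster download, you could swap in a smaller model (a sketch; `ViT-B-32` with the `laion2b_s34b_b79k` checkpoint is one of the standard OpenCLIP pairings, not a pairing used by this template):

```python
from langchain_experimental.open_clip import OpenCLIPEmbeddings

# Smaller OpenCLIP model: quicker to download and embed, but lower
# retrieval quality than the default ViT-H-14.
embedding = OpenCLIPEmbeddings(model_name="ViT-B-32", checkpoint="laion2b_s34b_b79k")
```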
## LLM

The app will retrieve images using multi-modal embeddings, and pass them to Google Gemini.
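Under the hood, each retrieved image is passed to Gemini as a base64-encoded `image_url` message part (a simplified sketch of what `chain.py` does; it assumes `ingest.py` has been run so `docs/img_1.jpg` exists and `GOOGLE_API_KEY` is set):

```python
import base64

from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

# Encode one extracted slide image (written to docs/ by ingest.py).
b64_image = base64.b64encode(open("docs/img_1.jpg", "rb").read()).decode("utf-8")

model = ChatGoogleGenerativeAI(model="gemini-pro-vision")
msg = HumanMessage(
    content=[
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64_image}"}},
        {"type": "text", "text": "What does this slide show?"},
    ]
)
print(model.invoke([msg]).content)
```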
## Environment Setup

Set the `GOOGLE_API_KEY` environment variable to access Gemini.
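For example:

```shell
export GOOGLE_API_KEY=<your-api-key>
```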
## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U langchain-cli
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-gemini-multi-modal
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-gemini-multi-modal
```

And add the following code to your `server.py` file:

```python
from rag_gemini_multi_modal import chain as rag_gemini_multi_modal_chain

add_routes(app, rag_gemini_multi_modal_chain, path="/rag-gemini-multi-modal")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by running:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-gemini-multi-modal/playground](http://127.0.0.1:8000/rag-gemini-multi-modal/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-gemini-multi-modal")
```
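A quick smoke test of the running server (a sketch; the question mirrors the packaged DDOG earnings deck):

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-gemini-multi-modal")
print(
    runnable.invoke(
        "What is the projected TAM for observability expected for each year through 2026?"
    )
)
```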
templates/rag-gemini-multi-modal/docs/DDOG_Q3_earnings_deck.pdf (new binary file, not shown)
templates/rag-gemini-multi-modal/ingest.py (new file, 58 lines)
@@ -0,0 +1,58 @@
import os
from pathlib import Path

import pypdfium2 as pdfium
from langchain.vectorstores import Chroma
from langchain_experimental.open_clip import OpenCLIPEmbeddings


def get_images_from_pdf(pdf_path, img_dump_path):
    """
    Extract images from each page of a PDF document and save as JPEG files.

    :param pdf_path: A string representing the path to the PDF file.
    :param img_dump_path: A string representing the path to dump images.
    """
    pdf = pdfium.PdfDocument(pdf_path)
    n_pages = len(pdf)
    for page_number in range(n_pages):
        page = pdf.get_page(page_number)
        bitmap = page.render(scale=1, rotation=0, crop=(0, 0, 0, 0))
        pil_image = bitmap.to_pil()
        pil_image.save(f"{img_dump_path}/img_{page_number + 1}.jpg", format="JPEG")


# Load PDF and dump each page as a JPEG into docs/
doc_path = Path(__file__).parent / "docs/DDOG_Q3_earnings_deck.pdf"
img_dump_path = Path(__file__).parent / "docs/"
rel_doc_path = doc_path.relative_to(Path.cwd())
rel_img_dump_path = img_dump_path.relative_to(Path.cwd())
print("Extracting slide images from PDF")
get_images_from_pdf(rel_doc_path, rel_img_dump_path)
print("Done")
vectorstore = Path(__file__).parent / "chroma_db_multi_modal"
re_vectorstore_path = vectorstore.relative_to(Path.cwd())

# Load embedding function
print("Loading embedding function")
embedding = OpenCLIPEmbeddings(model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k")

# Create chroma
vectorstore_mmembd = Chroma(
    collection_name="multi-modal-rag",
    persist_directory=str(Path(__file__).parent / "chroma_db_multi_modal"),
    embedding_function=embedding,
)

# Get image URIs
image_uris = sorted(
    [
        os.path.join(rel_img_dump_path, image_name)
        for image_name in os.listdir(rel_img_dump_path)
        if image_name.endswith(".jpg")
    ]
)

# Add images
print("Embedding images")
vectorstore_mmembd.add_images(uris=image_uris)
templates/rag-gemini-multi-modal/poetry.lock (generated, new file, 3714 lines)
File diff suppressed because it is too large
templates/rag-gemini-multi-modal/pyproject.toml (new file, 39 lines)
@@ -0,0 +1,39 @@
[tool.poetry]
name = "rag-gemini-multi-modal"
version = "0.1.0"
description = "Multi-modal RAG using Gemini and OpenCLIP embeddings"
authors = [
    "Lance Martin <lance@langchain.dev>",
]
readme = "README.md"

[tool.poetry.dependencies]
python = ">=3.9,<4.0"
langchain = ">=0.0.350"
openai = "<2"
tiktoken = ">=0.5.1"
chromadb = ">=0.4.14"
open-clip-torch = ">=2.23.0"
torch = ">=2.1.0"
pypdfium2 = ">=4.20.0"
langchain-experimental = "^0.0.43"
langchain-google-genai = ">=0.0.1"

[tool.poetry.group.dev.dependencies]
langchain-cli = ">=0.0.15"

[tool.langserve]
export_module = "rag_gemini_multi_modal"
export_attr = "chain"

[tool.templates-hub]
use-case = "rag"
author = "LangChain"
integrations = ["OpenAI", "Chroma"]
tags = ["vectordbs"]

[build-system]
requires = [
    "poetry-core",
]
build-backend = "poetry.core.masonry.api"
@@ -0,0 +1,52 @@
{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "681a5d1e",
   "metadata": {},
   "source": [
    "## Run Template\n",
    "\n",
    "In `server.py`, set -\n",
    "```\n",
    "add_routes(app, chain_rag_conv, path=\"/rag-gemini-multi-modal\")\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d774be2a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langserve.client import RemoteRunnable\n",
    "\n",
    "rag_app = RemoteRunnable(\"http://localhost:8001/rag-gemini-multi-modal\")\n",
    "rag_app.invoke(\"What is the projected TAM for observability expected for each year through 2026?\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
templates/rag-gemini-multi-modal/rag_gemini_multi_modal/__init__.py (new file, 3 lines)
@@ -0,0 +1,3 @@
from rag_gemini_multi_modal.chain import chain

__all__ = ["chain"]
templates/rag-gemini-multi-modal/rag_gemini_multi_modal/chain.py (new file, 122 lines)
@@ -0,0 +1,122 @@
import base64
import io
from pathlib import Path

from langchain.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_core.messages import HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_experimental.open_clip import OpenCLIPEmbeddings
from langchain_google_genai import ChatGoogleGenerativeAI
from PIL import Image


def resize_base64_image(base64_string, size=(128, 128)):
    """
    Resize an image encoded as a Base64 string.

    :param base64_string: A Base64 encoded string of the image to be resized.
    :param size: A tuple representing the new size (width, height) for the image.
    :return: A Base64 encoded string of the resized image.
    """
    img_data = base64.b64decode(base64_string)
    img = Image.open(io.BytesIO(img_data))
    resized_img = img.resize(size, Image.LANCZOS)
    buffered = io.BytesIO()
    resized_img.save(buffered, format=img.format)
    return base64.b64encode(buffered.getvalue()).decode("utf-8")


def get_resized_images(docs):
    """
    Resize images from base64-encoded strings.

    :param docs: A list of base64-encoded images to be resized.
    :return: Dict containing a list of resized base64-encoded strings.
    """
    b64_images = []
    for doc in docs:
        if isinstance(doc, Document):
            doc = doc.page_content
        resized_image = resize_base64_image(doc, size=(1280, 720))
        b64_images.append(resized_image)
    return {"images": b64_images}


def img_prompt_func(data_dict, num_images=2):
    """
    Gemini prompt for image analysis.

    :param data_dict: A dict with images and a user-provided question.
    :param num_images: Number of images to include in the prompt.
    :return: A list containing message objects for each image and the text prompt.
    """
    messages = []
    if data_dict["context"]["images"]:
        for image in data_dict["context"]["images"][:num_images]:
            image_message = {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{image}"},
            }
            messages.append(image_message)
    text_message = {
        "type": "text",
        "text": (
            "You are an analyst tasked with answering questions about visual content.\n"
            "You will be given a set of image(s) from a slide deck / presentation.\n"
            "Use this information to answer the user question. \n"
            f"User-provided question: {data_dict['question']}\n\n"
        ),
    }
    messages.append(text_message)
    return [HumanMessage(content=messages)]


def multi_modal_rag_chain(retriever):
    """
    Multi-modal RAG chain.

    :param retriever: A function that retrieves the necessary context for the model.
    :return: A chain of functions representing the multi-modal RAG process.
    """
    # Initialize the multi-modal Large Language Model with specific parameters
    model = ChatGoogleGenerativeAI(model="gemini-pro-vision")

    # Define the RAG pipeline
    chain = (
        {
            "context": retriever | RunnableLambda(get_resized_images),
            "question": RunnablePassthrough(),
        }
        | RunnableLambda(img_prompt_func)
        | model
        | StrOutputParser()
    )

    return chain


# Load chroma
vectorstore_mmembd = Chroma(
    collection_name="multi-modal-rag",
    persist_directory=str(Path(__file__).parent.parent / "chroma_db_multi_modal"),
    embedding_function=OpenCLIPEmbeddings(
        model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
    ),
)

# Make retriever
retriever_mmembd = vectorstore_mmembd.as_retriever()

# Create RAG chain
chain = multi_modal_rag_chain(retriever_mmembd)


# Add typing for input
class Question(BaseModel):
    __root__: str


chain = chain.with_types(input_type=Question)
templates/rag-gemini-multi-modal/tests/__init__.py (new empty file)