mirror of
https://github.com/hwchase17/langchain.git
synced 2025-09-07 22:11:51 +00:00
docs:Correcting spelling mistakes in readme (#22664)
Signed-off-by: zhangwangda <zhangwangda94@163.com>
This commit is contained in:
@@ -7,7 +7,7 @@ This template create a visual assistant for slide decks, which often contain vis
 
 It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.
 
-Given a question, relevat slides are retrieved and passed to [Google Gemini](https://deepmind.google/technologies/gemini/#introduction) for answer synthesis.
+Given a question, relevant slides are retrieved and passed to [Google Gemini](https://deepmind.google/technologies/gemini/#introduction) for answer synthesis.
 
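The embed-store-retrieve flow that this hunk describes can be sketched in plain Python, with a toy bag-of-characters `embed` function standing in for OpenCLIP and an in-memory list standing in for Chroma (the slide titles and the whole embedding scheme here are illustrative stand-ins, not the template's actual code):

```python
import math

def embed(text: str) -> list[float]:
    # Toy stand-in for an OpenCLIP embedding: a fixed-size
    # bag-of-characters vector. In the template, a multimodal model
    # embeds slide images and questions into the same space.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "Ingest": embed each slide (hypothetical titles) and store the pairs,
# the way Chroma would store image embeddings.
slides = ["Q3 revenue growth summary", "engineering roadmap", "hiring plan"]
store = [(slide, embed(slide)) for slide in slides]

def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank stored slides by cosine similarity to the question embedding;
    # the top-k slides are what would be passed to Gemini for synthesis.
    q = embed(question)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [slide for slide, _ in ranked[:k]]

print(retrieve("What was revenue growth in Q3?"))
```

The same question-to-nearest-slide ranking happens in the template, just with learned multimodal embeddings and a persistent vector store.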
@@ -15,7 +15,7 @@ Given a question, relevat slides are retrieved and passed to [Google Gemini](htt
 
 Supply a slide deck as pdf in the `/docs` directory.
 
-By default, this template has a slide deck about Q3 earnings from DataDog, a public techologyy company.
+By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
 
 Example questions to ask can be:
 
 ```
@@ -37,7 +37,7 @@ You can select different embedding model options (see results [here](https://git
 
 The first time you run the app, it will automatically download the multimodal embedding model.
 
-By default, LangChain will use an embedding model with moderate performance but lower memory requirments, `ViT-H-14`.
+By default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.
 
 You can choose alternative `OpenCLIPEmbeddings` models in `rag_chroma_multi_modal/ingest.py`:
 
 ```
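The model switch this hunk documents amounts to changing which OpenCLIP variant `ingest.py` instantiates. A minimal sketch of such a selection table follows; the `"small"` tier, its model/checkpoint pair, and the tier labels themselves are this sketch's own assumptions, not identifiers taken from the template (only `ViT-H-14` appears in the README text above):

```python
# Illustrative mapping from a tier label to (model_name, checkpoint)
# arguments one might pass to OpenCLIPEmbeddings. Only "default" reflects
# the README's stated model; the rest is hypothetical.
OPENCLIP_OPTIONS = {
    "small": ("ViT-B-32", "laion2b_s34b_b79k"),   # assumed smaller variant
    "default": ("ViT-H-14", "laion2b_s32b_b79k"), # the template's default
}

def embedding_config(tier: str = "default") -> dict:
    # Return keyword arguments for the embedding constructor; a larger
    # model generally trades higher memory use for retrieval quality.
    model_name, checkpoint = OPENCLIP_OPTIONS[tier]
    return {"model_name": model_name, "checkpoint": checkpoint}

print(embedding_config())
```

Centralizing the choice in one table keeps the memory/quality trade-off in a single place instead of scattered through the ingest script.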