Mirror of https://github.com/hwchase17/langchain.git, synced 2025-04-28 03:51:50 +00:00
docs:Correcting spelling mistakes in readme (#22664)
Signed-off-by: zhangwangda <zhangwangda94@163.com>
This commit is contained in:
parent 6f54abc252
commit 28e956735c
@@ -97,7 +97,7 @@ We will first follow the standard MongoDB Atlas setup instructions [here](https:
 2. Create a new project (if not already done)
 3. Locate your MongoDB URI.
 
-This can be done by going to the deployement overview page and connecting to you database
+This can be done by going to the deployment overview page and connecting to you database
 
 
 
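For context on the step above: a minimal sketch of checking the URI copied from the deployment overview page, assuming `pymongo` is installed and the connection string is exported as `MONGODB_ATLAS_CLUSTER_URI` (an illustrative name, not one prescribed by the template):

```python
import os

from pymongo import MongoClient

# Connection string copied from the Atlas deployment overview page and
# exported beforehand; the variable name is chosen here for illustration.
uri = os.environ["MONGODB_ATLAS_CLUSTER_URI"]

client = MongoClient(uri)
client.admin.command("ping")  # raises if the URI or network-access rules are wrong
print("Connected to MongoDB Atlas")
```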
@@ -7,7 +7,7 @@ This template create a visual assistant for slide decks, which often contain vis
 
 It uses GPT-4V to create image summaries for each slide, embeds the summaries, and stores them in Chroma.
 
-Given a question, relevat slides are retrieved and passed to GPT-4V for answer synthesis.
+Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
 
 
 
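As a rough sketch of the summarize-then-embed flow this hunk describes (not the template's actual chain; the slide directory, model name, and collection name below are assumptions):

```python
import base64
from pathlib import Path

from langchain_community.vectorstores import Chroma
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Hypothetical directory of slide images extracted from the PDF in /docs.
slide_images_b64 = [
    base64.b64encode(p.read_bytes()).decode()
    for p in sorted(Path("docs_img").glob("*.png"))
]

vision_llm = ChatOpenAI(model="gpt-4-vision-preview", max_tokens=512)

def summarize_slide(image_b64: str) -> str:
    """Ask GPT-4V for a short text summary of one slide image."""
    msg = HumanMessage(content=[
        {"type": "text", "text": "Summarize the content of this slide."},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
    ])
    return vision_llm.invoke([msg]).content

# Embed the text summaries and store them in Chroma for retrieval.
summaries = [summarize_slide(img) for img in slide_images_b64]
vectorstore = Chroma.from_texts(summaries, OpenAIEmbeddings(), collection_name="slide-summaries")
retriever = vectorstore.as_retriever(search_kwargs={"k": 1})
```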
@@ -15,7 +15,7 @@ Given a question, relevat slides are retrieved and passed to GPT-4V for answer s
 
 Supply a slide deck as pdf in the `/docs` directory.
 
-By default, this template has a slide deck about Q3 earnings from DataDog, a public techologyy company.
+By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
 
 Example questions to ask can be:
 ```
@@ -7,7 +7,7 @@ This template create a visual assistant for slide decks, which often contain vis
 
 It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.
 
-Given a question, relevat slides are retrieved and passed to GPT-4V for answer synthesis.
+Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
 
 
 
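A hedged sketch of embedding the slide images directly with OpenCLIP and Chroma, as this hunk describes; the directory name and `k` value are illustrative rather than taken from the template:

```python
from pathlib import Path

from langchain_community.vectorstores import Chroma
from langchain_experimental.open_clip import OpenCLIPEmbeddings

# Hypothetical location of slide images produced from the PDF in /docs.
image_uris = sorted(str(p) for p in Path("docs_img").glob("*.png"))

vectorstore = Chroma(
    collection_name="slide-images",
    embedding_function=OpenCLIPEmbeddings(),
)
vectorstore.add_images(uris=image_uris)  # Chroma embeds the image files via their URIs

retriever = vectorstore.as_retriever(search_kwargs={"k": 3})
```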
@@ -15,7 +15,7 @@ Given a question, relevat slides are retrieved and passed to GPT-4V for answer s
 
 Supply a slide deck as pdf in the `/docs` directory.
 
-By default, this template has a slide deck about Q3 earnings from DataDog, a public techologyy company.
+By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
 
 Example questions to ask can be:
 ```
@@ -37,7 +37,7 @@ You can select different embedding model options (see results [here](https://git
 
 The first time you run the app, it will automatically download the multimodal embedding model.
 
-By default, LangChain will use an embedding model with moderate performance but lower memory requirments, `ViT-H-14`.
+By default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.
 
 You can choose alternative `OpenCLIPEmbeddings` models in `rag_chroma_multi_modal/ingest.py`:
 ```
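The code block referenced here is truncated in this view; purely as an illustration of what swapping the model looks like, an alternative `model_name`/`checkpoint` pair might be passed as below (the specific names are assumptions, not the template's defaults):

```python
from langchain_experimental.open_clip import OpenCLIPEmbeddings

# Illustrative alternative to the default ViT-H-14: a larger OpenCLIP model
# trades memory for retrieval quality. Names are assumptions, not from ingest.py.
embedding = OpenCLIPEmbeddings(
    model_name="ViT-g-14",
    checkpoint="laion2b_s34b_b88k",
)
```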
@@ -7,7 +7,7 @@ This template create a visual assistant for slide decks, which often contain vis
 
 It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.
 
-Given a question, relevat slides are retrieved and passed to [Google Gemini](https://deepmind.google/technologies/gemini/#introduction) for answer synthesis.
+Given a question, relevant slides are retrieved and passed to [Google Gemini](https://deepmind.google/technologies/gemini/#introduction) for answer synthesis.
 
 
 
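For the Gemini answer-synthesis step, a minimal sketch assuming `GOOGLE_API_KEY` is set and that a retrieved slide image is available on disk (the path and question are illustrative):

```python
import base64
from pathlib import Path

from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

# Hypothetical: a slide the retriever has already selected as relevant.
slide_b64 = base64.b64encode(Path("docs_img/slide_12.png").read_bytes()).decode()

llm = ChatGoogleGenerativeAI(model="gemini-pro-vision")
message = HumanMessage(content=[
    {"type": "text", "text": "How many customers does DataDog have?"},
    {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{slide_b64}"}},
])
print(llm.invoke([message]).content)
```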
@@ -15,7 +15,7 @@ Given a question, relevat slides are retrieved and passed to [Google Gemini](htt
 
 Supply a slide deck as pdf in the `/docs` directory.
 
-By default, this template has a slide deck about Q3 earnings from DataDog, a public techologyy company.
+By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
 
 Example questions to ask can be:
 ```
@@ -37,7 +37,7 @@ You can select different embedding model options (see results [here](https://git
 
 The first time you run the app, it will automatically download the multimodal embedding model.
 
-By default, LangChain will use an embedding model with moderate performance but lower memory requirments, `ViT-H-14`.
+By default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.
 
 You can choose alternative `OpenCLIPEmbeddings` models in `rag_chroma_multi_modal/ingest.py`:
 ```
@@ -95,7 +95,7 @@ We will first follow the standard MongoDB Atlas setup instructions [here](https:
 2. Create a new project (if not already done)
 3. Locate your MongoDB URI.
 
-This can be done by going to the deployement overview page and connecting to you database
+This can be done by going to the deployment overview page and connecting to you database
 
 
 
@@ -9,7 +9,7 @@ This template demonstrates how to perform private visual search and question-ans
 
 It uses an open source multi-modal LLM of your choice to create image summaries for each photos, embeds the summaries, and stores them in Chroma.
 
-Given a question, relevat photos are retrieved and passed to the multi-modal LLM for answer synthesis.
+Given a question, relevant photos are retrieved and passed to the multi-modal LLM for answer synthesis.
 
 
 
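A minimal local sketch of the photo question-answering step, assuming Ollama is running with a multimodal model such as `bakllava` already pulled and that a retrieved photo is available on disk (both assumptions, not requirements set by this diff):

```python
import base64
from pathlib import Path

from langchain_community.llms import Ollama

# Hypothetical: one of the retrieved photos, read from disk and base64-encoded.
photo_b64 = base64.b64encode(Path("photos/IMG_0001.jpg").read_bytes()).decode()

llm = Ollama(model="bakllava")  # any locally pulled multimodal Ollama model works here
answer = llm.bind(images=[photo_b64]).invoke("What is happening in this photo?")
print(answer)
```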