Mirror of https://github.com/hwchase17/langchain.git, synced 2025-09-16 06:53:16 +00:00
docs: templates updated titles (#25646)

Updated titles into a consistent format. Fixed links to the diagrams. Fixed typos.

Note: The Templates menu in the navbar is now sorted by the file names. I'll try sorting the navbar menus by the page titles, not the page file names.
@@ -1,7 +1,6 @@
-# RAG - Ollama, Nomic, Chroma - multi-modal, local
+# rag-multi-modal-local
-Visual search is a famililar application to many with iPhones or Android devices. It allows user to search photos using natural language.
+Visual search is a familiar application to many with iPhones or Android devices. It allows user to search photos using natural language.
With the release of open source, multi-modal LLMs it's possible to build this kind of application for yourself for your own private photo collection.
@@ -11,7 +10,7 @@ It uses [`nomic-embed-vision-v1`](https://huggingface.co/nomic-ai/nomic-embed-vi
Given a question, relevant photos are retrieved and passed to an open source multi-modal LLM of your choice for answer synthesis.

 "Visual Search Process Diagram"
## Input
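The README lines quoted in the diff describe a retrieve-then-synthesize flow: photos are embedded (the template uses `nomic-embed-vision-v1` with Chroma), a question is embedded into the same space, the nearest photos are retrieved, and those photos are passed to a multi-modal LLM for answer synthesis. A minimal sketch of the retrieval step, with toy vectors standing in for the real embeddings and a brute-force cosine search standing in for Chroma (all names and vectors here are hypothetical, not from the template):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical photo index: photo id -> embedding. In the template, Chroma
# stores nomic-embed-vision-v1 embeddings; these toy vectors are stand-ins.
photo_index = {
    "beach.jpg":  [0.9, 0.1, 0.0],
    "dog.jpg":    [0.1, 0.9, 0.2],
    "sunset.jpg": [0.8, 0.2, 0.1],
}

def retrieve(query_embedding, k=2):
    """Return the k photo ids whose embeddings are closest to the query."""
    ranked = sorted(
        photo_index,
        key=lambda pid: cosine(query_embedding, photo_index[pid]),
        reverse=True,
    )
    return ranked[:k]

# A text query embedded into the same space is matched against the photo
# embeddings; the retrieved photos would then go to the multi-modal LLM.
print(retrieve([1.0, 0.0, 0.0]))  # -> ['beach.jpg', 'sunset.jpg']
```

In the actual template, Chroma performs this nearest-neighbor search over the stored image embeddings instead of the brute-force loop above.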