Mirror of https://github.com/imartinez/privateGPT.git, synced 2025-06-27 07:49:55 +00:00
Update README.md

commit 34cb82c784, parent ab30465be7
@@ -59,8 +59,8 @@ Type `exit` to finish the script.

By selecting the right local models and leveraging the power of `LangChain`, you can run the entire pipeline locally, without any data leaving your environment, and with reasonable performance.

- `ingest.py` uses `LangChain` tools to parse the document and create embeddings locally using `LlamaCppEmbeddings`. It then stores the result in a local vector database using the `Chroma` vector store (see the ingestion sketch after this list).
-- `privateGPT.py` uses a local LLM based on `GPT4All` to understand questions and create answers. The context for the answers is extracted from the local vector store using a similarity search to locate the right piece of context from the docs.
+- `privateGPT.py` uses a local LLM based on `GPT4All-J` to understand questions and create answers. The context for the answers is extracted from the local vector store using a similarity search to locate the right piece of context from the docs (see the question-answering sketch after this list).
-- `gpt4all_j.py` is a wrapper to support `GPT4All-J` models within `LangChain`. It was created because such support didn't exist at the time this project was created (only `GPT4All` models were supported). It will be proposed as a contribution to the official `LangChain` repo soon.
+- The `GPT4All-J` wrapper was introduced in LangChain 0.0.162.
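
As a rough illustration of the ingestion step, here is a minimal sketch using the LangChain APIs of that era. The document path, model path, and chunking parameters are placeholders, not the project's actual configuration:

```python
# Minimal ingestion sketch: parse a document, embed it locally, persist to Chroma.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Chroma

# Load the source document and split it into overlapping chunks
# (chunk sizes here are illustrative, not the project's settings).
documents = TextLoader("source_documents/sample.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(documents)

# Create the embeddings locally with a llama.cpp model; no data leaves the machine.
embeddings = LlamaCppEmbeddings(model_path="models/ggml-model-q4_0.bin")

# Store the embedded chunks in a local, persistent Chroma database.
db = Chroma.from_documents(chunks, embeddings, persist_directory="db")
db.persist()
```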
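
The question-answering side can be sketched the same way, assuming the same placeholder paths; in LangChain's `GPT4All` wrapper of that era, a `backend="gptj"` argument selected the GPT4All-J model family (treat the exact parameters as assumptions):

```python
# Minimal question-answering sketch: retrieve relevant chunks, answer locally.
from langchain.llms import GPT4All
from langchain.chains import RetrievalQA
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Chroma

# Reopen the vector store produced by the ingestion step.
embeddings = LlamaCppEmbeddings(model_path="models/ggml-model-q4_0.bin")
db = Chroma(persist_directory="db", embedding_function=embeddings)

# Local GPT4All-J model (path and backend argument are assumptions).
llm = GPT4All(model="models/ggml-gpt4all-j-v1.3-groovy.bin", backend="gptj", verbose=False)

# "stuff" chain type: retrieved chunks are stuffed into the prompt as context.
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=db.as_retriever())
print(qa.run("What does the document say about privacy?"))
```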

# Disclaimer

This is a test project to validate the feasibility of a fully private solution for question answering using LLMs and vector embeddings. It is not production ready, and it is not meant to be used in production. The model selection is optimized for privacy rather than performance, but it is possible to use different models and vector stores to improve performance.