Mirror of https://github.com/imartinez/privateGPT.git, synced 2025-06-26 07:22:42 +00:00
Update README.md

Add demo screenshot

parent bdd8c8748b
commit ab30465be7
@@ -3,6 +3,8 @@ Ask questions to your documents without an internet connection, using the power
Built with [LangChain](https://github.com/hwchase17/langchain) and [GPT4All](https://github.com/nomic-ai/gpt4all)
<img width="902" alt="demo" src="https://user-images.githubusercontent.com/721666/236942256-985801c9-25b9-48ef-80be-3acbb4575164.png">
# Environment Setup
To set up your environment to run the code here, first install all requirements:
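The diff truncates the command that follows; a sketch of the usual pip-based setup, assuming the repository ships a `requirements.txt` at its root:

```shell
# Install the project's Python dependencies (assumes a requirements.txt
# in the repository root; a virtual environment is recommended first).
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```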
@@ -61,4 +63,4 @@ Selecting the right local models and the power of `LangChain` you can run the en
- `gpt4all_j.py` is a wrapper to support `GPT4All-J` models within LangChain. It was created because such support didn't exist when this project was started (only `GPT4All` models were supported). It will be proposed as a contribution to the official `LangChain` repo soon.
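The wrapper pattern described above can be sketched as follows. This is not the actual `gpt4all_j.py`; the base class is a minimal stand-in for LangChain's `LLM` interface, and generation is stubbed out so the sketch stays self-contained:

```python
from abc import ABC, abstractmethod
from typing import List, Optional


class BaseLLM(ABC):
    """Minimal stand-in for LangChain's LLM base class (illustrative only)."""

    @property
    @abstractmethod
    def _llm_type(self) -> str:
        """Short identifier for the model family."""

    @abstractmethod
    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        """Run the model on a prompt and return the generated text."""

    def __call__(self, prompt: str) -> str:
        return self._call(prompt)


class GPT4AllJ(BaseLLM):
    """Hypothetical wrapper exposing a GPT4All-J model through the LLM interface.

    A real implementation would load the model weights from model_path and
    call the model's generation routine inside _call(); here it is stubbed.
    """

    def __init__(self, model_path: str):
        self.model_path = model_path  # path to the .bin model weights

    @property
    def _llm_type(self) -> str:
        return "gpt4all-j"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # Stub: a real wrapper would invoke the GPT4All-J model here.
        return f"[gpt4all-j response to: {prompt}]"
```

The value of this shape is that any class implementing `_llm_type` and `_call` can be dropped into a LangChain-style pipeline without the caller knowing which backend generates the text.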
# Disclaimer
This is a test project to validate the feasibility of a fully private solution for question answering using LLMs and vector embeddings. It is not production ready, and it is not meant to be used in production. The model selection is optimized for privacy rather than performance, but it is possible to use different models and vector stores to improve performance.