## Gradio UI user manual

Gradio UI is a ready-to-use way of testing most of the PrivateGPT API functionalities.

### Execution Modes

It has 3 modes of execution (you can select them in the top-left); the raw API call
behind each mode is sketched right after this list:

* Query Docs: uses the context from the
  ingested documents to answer the questions posted in the chat. It also takes
  into account previous chat messages as context.
  * Makes use of `/chat/completions` API with `use_context=true` and no
    `context_filter`.
* Search in Docs: fast search that returns the 4 most related text
  chunks, together with their source document and page.
  * Makes use of `/chunks` API with no `context_filter`, `limit=4` and
    `prev_next_chunks=0`.
* LLM Chat: simple, non-contextual chat with the LLM. The ingested documents won't
  be taken into account, only the previous messages.
  * Makes use of `/chat/completions` API with `use_context=false`.
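
To make the mapping concrete, here is a minimal Python sketch of the three modes
expressed as raw API calls. It is only a sketch: it assumes the server is reachable
at `http://localhost:8001` and that the endpoints are mounted under `/v1`, so adjust
the base URL and request bodies to your deployment.

```python
# Sketch of the three Gradio modes as raw API calls (base URL is an assumption).
import requests

BASE_URL = "http://localhost:8001/v1"  # assumption: default local server

# Query Docs: /chat/completions with use_context=true and no context_filter
query_docs = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "messages": [{"role": "user", "content": "What is stated about termination?"}],
        "use_context": True,
    },
)

# Search in Docs: /chunks with limit=4, prev_next_chunks=0 and no context_filter
search_docs = requests.post(
    f"{BASE_URL}/chunks",
    json={"text": "termination clause", "limit": 4, "prev_next_chunks": 0},
)

# LLM Chat: /chat/completions with use_context=false (ingested docs are ignored)
llm_chat = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Tell me a joke."}],
        "use_context": False,
    },
)

print(query_docs.json(), search_docs.json(), llm_chat.json(), sep="\n")
```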
### Document Ingestion

Ingest documents by using the `Upload a File` button. You can check the progress of
the ingestion in the console logs of the server.

The list of ingested files is shown below the button.

If you want to delete the ingested documents, refer to the *Reset Local documents
database* section in the documentation.
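
The button is the simplest path, but ingestion can also be scripted. The sketch below
posts a file to the ingest endpoint; the path (`/v1/ingest`) and the multipart field
name (`file`) are assumptions, so check the API reference of your PrivateGPT version.

```python
# Hypothetical sketch: ingesting a document by API instead of the UI button.
# Assumptions: server at http://localhost:8001, ingest endpoint at /v1/ingest
# accepting a multipart "file" field; verify both against the API reference.
import requests

BASE_URL = "http://localhost:8001/v1"

with open("my_document.pdf", "rb") as f:
    response = requests.post(f"{BASE_URL}/ingest", files={"file": f})

# On success the response describes the document(s) created from the upload;
# ingestion progress still shows up in the server's console logs.
print(response.status_code, response.json())
```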
### Chat

Normal chat interface, self-explanatory ;)

You can check the actual prompt being passed to the LLM by looking at the logs of
the server. We'll add better observability in future releases.