## Gradio UI user manual

Gradio UI is a ready-to-use way of testing most of PrivateGPT's API functionality.

### Execution Modes
It has 3 modes of execution (you can select in the top-left):

* Query Docs: uses the context from the ingested documents to answer the
  questions posted in the chat. It also takes into account previous chat
  messages as context.
  * Makes use of `/chat/completions` API with `use_context=true` and no
    `context_filter`.
* Search in Docs: fast search that returns the 4 most related text chunks,
  together with their source document and page.
  * Makes use of `/chunks` API with no `context_filter`, `limit=4` and
    `prev_next_chunks=0`.
* LLM Chat: simple, non-contextual chat with the LLM. The ingested documents
  won't be taken into account, only the previous messages.
  * Makes use of `/chat/completions` API with `use_context=false`.
### Document Ingestion

Ingest documents by using the `Upload a File` button. You can check the
progress of the ingestion in the console logs of the server.

The list of ingested files is shown below the button.
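
The same ingestion can be scripted against the API. This is a sketch: the `/v1/ingest/file` path and the `localhost:8001` base URL are assumptions about a default deployment, and the helper only builds the multipart upload without sending it:

```python
# Sketch: building the upload that the "Upload a File" button performs.
# Assumed server base URL and ingest path; verify against your deployment.
import pathlib

BASE_URL = "http://localhost:8001"

def ingest_request(path: str) -> dict:
    # Return keyword arguments for a multipart POST of the file,
    # so the call site stays trivial: requests.post(**ingest_request(p))
    p = pathlib.Path(path)
    return {
        "url": f"{BASE_URL}/v1/ingest/file",
        "files": {"file": (p.name, p.read_bytes())},
    }
```

With the `requests` package installed, `requests.post(**ingest_request("manual.pdf"))` performs the upload; the file should then appear in the ingested-files list below the button.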

If you want to delete the ingested documents, refer to the *Reset Local
documents database* section in the documentation.

### Chat
Normal chat interface, self-explanatory ;)
#### System Prompt

You can view and change the system prompt being passed to the LLM by clicking
"Additional Inputs" in the chat interface. The system prompt is also logged on
the server.

By default, the `Query Docs` mode uses the setting value `ui.default_query_system_prompt`.

The `LLM Chat` mode attempts to use the optional settings value
`ui.default_chat_system_prompt`.

If no system prompt is entered, the UI will display the default system prompt
being used for the active mode.
##### System Prompt Examples

The system prompt can effectively give your chat bot specialized roles, and
results tailored to the prompt you have given the model. Examples of system
prompts can be found
[here](https://www.w3schools.com/gen_ai/chatgpt-3-5/chatgpt-3-5_roles.php).
Some interesting examples to try include:
* You are -X-. You have all the knowledge and personality of -X-. Answer as if
  you were -X- using their manner of speaking and vocabulary.
  * Example: You are Shakespeare. You have all the knowledge and personality of
    Shakespeare. Answer as if you were Shakespeare using their manner of
    speaking and vocabulary.
* You are an expert (at) -role-. Answer all questions using your expertise on
  -specific domain topic-.
  * Example: You are an expert software engineer. Answer all questions using
    your expertise on Python.
* You are a -role- bot, respond with -response criteria needed-. If no
  -response criteria- is needed, respond with -alternate response-.
  * Example: You are a grammar checking bot, respond with any grammatical
    corrections needed. If no corrections are needed, respond with "verified".