Adam Treat
27599d4a0a
Fix some usage events.
2023-05-09 23:43:16 -04:00
Adam Treat
7094fd0788
Gracefully handle the case where the model used by a previous chat has gone away.
2023-05-08 20:51:03 -04:00
Adam Treat
9da4fac023
Fix gptj to lower the memory requirements of the kv cache, and add versioning to the internal state so such fixes can be handled smoothly in the future.
2023-05-08 17:23:02 -04:00
Adam Treat
4bcc88b051
Convert the old format properly.
2023-05-08 05:53:16 -04:00
Adam Treat
01e582f15b
First attempt at providing a persistent chat list experience.
...
Limitations:
1) Context is not restored for gpt-j models
2) When you switch between different model types in an existing chat,
the context and the entire conversation are lost
3) The settings are not chat or conversation specific
4) The persisted chat files are very large because of how much data
the llama.cpp backend tries to persist. Need to investigate how
we can shrink this.
2023-05-04 15:31:41 -04:00
Adam Treat
02c9bb4ac7
Restore the model when switching chats.
2023-05-03 12:45:14 -04:00
Adam Treat
db094c5b92
More extensive usage stats to help diagnose errors and problems in the UI.
2023-05-02 20:31:17 -04:00
Adam Treat
a7c02a52ca
Don't block the GUI when reloading via combobox.
2023-05-02 15:02:25 -04:00
Adam Treat
c217b7538a
Generate names via the LLM.
2023-05-02 11:19:17 -04:00
Adam Treat
d91dd567e2
Hot-swapping of conversations. Destroys context for now.
2023-05-01 20:27:07 -04:00
Adam Treat
8b94a23253
Continue to shrink the API space for QML and the backend.
2023-05-01 12:30:54 -04:00
Adam Treat
385743b302
Consolidate these into a single API from QML to the backend.
2023-05-01 12:24:51 -04:00
Adam Treat
414a12c33d
Major refactor in prep for multiple conversations.
2023-05-01 09:10:05 -04:00
Adam Treat
bbffa7364b
Add new C++ version of the chat model. Getting ready for chat history.
2023-04-30 20:28:43 -04:00