Adam Treat
5f372bd881
Gracefully handle a previous chat whose model has since gone away.
2023-05-08 20:51:03 -04:00
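A minimal sketch of the kind of check this commit implies, assuming hypothetical helper names (the actual patch may differ): prefer the model the chat was saved with, fall back to any installed model if that file no longer exists.

```cpp
#include <QFileInfo>
#include <QString>
#include <QStringList>
#include <QDebug>

// Pick a model for a restored chat: prefer the one it was saved with,
// but fall back to any installed model if that file has gone away.
// 'savedModelPath' and 'installedModels' are hypothetical inputs.
QString resolveModelForRestoredChat(const QString &savedModelPath,
                                    const QStringList &installedModels)
{
    if (QFileInfo::exists(savedModelPath))
        return savedModelPath;

    qWarning() << "Model for restored chat is missing:" << savedModelPath;
    if (!installedModels.isEmpty())
        return installedModels.first();   // graceful fallback
    return QString();                     // caller shows a "no model" state
}
```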
Adam Treat
8c4b8f215f
Fix gptj to have lower memory requirements for the kv cache, and add versioning to the internal state to smoothly handle such fixes in the future.
2023-05-08 17:23:02 -04:00
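As a rough back-of-the-envelope for why the kv cache dominates memory (a sketch, not the actual gptj change; the f32-vs-f16 comparison is an assumption about the kind of saving involved):

```cpp
#include <cstddef>
#include <cstdio>

// Rough kv-cache footprint for a GPT-J-style decoder:
// one K and one V vector of size n_embd per layer per context position.
static size_t kvCacheBytes(size_t n_layer, size_t n_ctx, size_t n_embd,
                           size_t bytesPerElement)
{
    return 2 * n_layer * n_ctx * n_embd * bytesPerElement;
}

int main()
{
    // GPT-J 6B-ish shape: 28 layers, 4096-wide embeddings, 2048-token context.
    const size_t f32 = kvCacheBytes(28, 2048, 4096, 4); // ~1.75 GiB
    const size_t f16 = kvCacheBytes(28, 2048, 4096, 2); // ~0.88 GiB
    std::printf("kv cache: f32 %.2f GiB, f16 %.2f GiB\n",
                f32 / double(1 << 30), f16 / double(1 << 30));
    return 0;
}
```

Halving the element size halves the cache, which is why a change like this is worth a state-format version bump.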
Adam Treat
ccbd16cf18
Fix the version.
2023-05-08 16:50:21 -04:00
Adam Treat
3c30310539
Convert the old format properly.
2023-05-08 05:53:16 -04:00
Adam Treat
7b66cb7119
Add debug for chatllm model loading and fix order of getting rid of the dummy chat when no models are restored.
2023-05-07 14:40:02 -04:00
Adam Treat
9bd5609ba0
Deserialize chats one at a time instead of blocking the GUI until all of them are done.
2023-05-07 09:20:09 -04:00
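One way to get this behavior (not necessarily how the app does it) is to restore one saved chat per event-loop pass, so the UI can repaint and handle input between files. The directory layout, "*.chat" suffix, and payload format below are placeholders.

```cpp
#include <QTimer>
#include <QFile>
#include <QDataStream>
#include <QStringList>
#include <QDebug>

// Restore saved chats one per event-loop pass so the UI stays responsive.
void restoreNextChat(QStringList pending)
{
    if (pending.isEmpty())
        return;                          // all chats restored
    const QString path = pending.takeFirst();

    QFile file(path);
    if (file.open(QIODevice::ReadOnly)) {
        QDataStream in(&file);
        QByteArray payload;
        in >> payload;                   // stand-in for the real chat fields
        qDebug() << "restored" << path << payload.size() << "bytes";
        // ...hand the chat to the chat list model here...
    }

    // Schedule the next file after returning to the event loop, so the GUI
    // is never blocked for the whole batch.
    QTimer::singleShot(0, [pending] { restoreNextChat(pending); });
}
```

A worker thread that emits one queued signal per restored chat would achieve the same incremental effect while also keeping the per-file work off the GUI thread.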
Adam Treat
86da175e1c
Use the last LTS for this.
2023-05-07 06:39:32 -04:00
Adam Treat
ab13148430
The GUI should come up immediately and not wait on deserializing from disk.
2023-05-06 20:01:14 -04:00
Adam Treat
eb7b61a76d
Move the chat files to the model download directory and add a magic number and version.
2023-05-06 18:51:49 -04:00
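The magic+version idea can be sketched with QDataStream as below; the magic constant, version number, and stream version are placeholders, not the values the app actually uses.

```cpp
#include <QFile>
#include <QDataStream>

// Placeholder values for illustration only.
static const quint32 kChatMagic   = 0x43484154; // "CHAT"
static const qint32  kChatVersion = 1;

// 'file' is assumed to already be open for writing.
bool writeChatHeader(QFile &file)
{
    QDataStream out(&file);
    out.setVersion(QDataStream::Qt_6_2);
    out << kChatMagic << kChatVersion;
    return out.status() == QDataStream::Ok;
}

// Returns the on-disk version, or -1 if the magic does not match,
// so old or foreign files can be rejected or migrated.
qint32 readChatHeader(QFile &file)
{
    QDataStream in(&file);
    in.setVersion(QDataStream::Qt_6_2);
    quint32 magic = 0;
    qint32 version = -1;
    in >> magic >> version;
    return (magic == kChatMagic) ? version : -1;
}
```

A header like this is what makes migrations such as "Convert the old format properly" above possible: the reader can branch on the version it finds.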
Adam Treat
8d2c8c8cb0
Turn off saving chats to disk by default as it eats so much disk space.
2023-05-05 12:30:11 -04:00
Adam Treat
06bb6960d4
Add an About dialog.
2023-05-05 10:47:05 -04:00
Adam Treat
f291853e51
First attempt at providing a persistent chat list experience.
Limitations:
1) Context is not restored for gpt-j models
2) When you switch between different model types in an existing chat,
the context and the entire conversation are lost
3) The settings are not chat or conversation specific
4) The sizes of the persisted chat files are very large due to how much
data the llama.cpp backend tries to persist. Need to investigate how
we can shrink this (see the sketch after this entry).
2023-05-04 15:31:41 -04:00
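On limitation 4, the bulk of a persisted chat is the llama.cpp context state, which is dominated by the kv cache. A sketch of measuring and capturing it, assuming a spring-2023 llama.cpp build that exposes llama_get_state_size / llama_copy_state_data (names may differ in other versions):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>
#include "llama.h"

// Serialize the llama.cpp context state into a buffer and report its size.
// 'ctx' is an already-initialized llama_context.
std::vector<uint8_t> snapshotLlamaState(llama_context *ctx)
{
    const size_t size = llama_get_state_size(ctx);
    std::printf("llama state to persist: %.1f MiB\n", size / double(1 << 20));

    std::vector<uint8_t> buffer(size);
    const size_t written = llama_copy_state_data(ctx, buffer.data());
    buffer.resize(written);
    return buffer; // roughly what would end up inside a persisted chat file
}
```

Shrinking the files would mean persisting less of this state, or compressing it, rather than changing anything on the GUI side.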