expose n_gpu_layers parameter of llama.cpp (#1890)
Also dynamically limit the GPU layers and context length fields to the maximum supported by the model.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
@@ -63,5 +63,9 @@ int main(int argc, char *argv[])
     }
 #endif
 
+    // Make sure ChatLLM threads are joined before global destructors run.
+    // Otherwise, we can get a heap-use-after-free inside of llama.cpp.
+    ChatListModel::globalInstance()->clearChats();
+
     return app.exec();
 }
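As a rough illustration of the "dynamically limit" behavior described in the commit message, the sketch below clamps a user-supplied n_gpu_layers value and context length to the maximums a model supports. The ModelInfo struct, its field values, and the clamp helpers are hypothetical stand-ins, not the actual gpt4all settings code; only the convention that a negative n_gpu_layers means "offload everything" comes from llama.cpp.

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical stand-in for model metadata; in practice these limits would
// be read from the loaded model (its layer count and trained context length).
struct ModelInfo {
    int maxGpuLayers;     // total layers available to offload to the GPU
    int maxContextLength; // context length the model was trained with
};

// Clamp the user's n_gpu_layers request to what the model actually has.
// llama.cpp treats a negative value as "offload every layer".
int clampGpuLayers(int requested, const ModelInfo &info) {
    if (requested < 0)
        return info.maxGpuLayers;
    return std::min(requested, info.maxGpuLayers);
}

// Clamp the requested context length into a valid range for the model.
// The lower bound of 8 is an arbitrary illustrative floor.
int clampContextLength(int requested, const ModelInfo &info) {
    return std::clamp(requested, 8, info.maxContextLength);
}

int main() {
    ModelInfo model{33, 4096}; // e.g. a 7B model: 32 layers plus output layer
    std::printf("n_gpu_layers: %d\n", clampGpuLayers(100, model));     // prints 33
    std::printf("n_ctx:        %d\n", clampContextLength(2048, model)); // prints 2048
    return 0;
}
```

Clamping at the settings layer like this means the UI can never hand llama.cpp a layer count or context size the model cannot honor.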