add requiredMem method to llmodel impls
Most of these can just shortcut out of the model loading logic. llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams; after that I just use the size on disk as an estimate for the mem size (which seems reasonable, since we mmap() the llama files anyway).
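For the llama case, the estimate described above amounts to validating the file enough to parse and then returning its on-disk size. A minimal sketch of that idea using a POSIX stat() call; the function name and the elided hparams check are illustrative, not the actual gpt4all implementation:

#include <stddef.h>
#include <sys/stat.h>

/* Illustrative only: report the on-disk size as the RAM estimate,
 * which is reasonable when the weights are mmap()ed at load time. */
static size_t required_mem_estimate(const char *model_path)
{
    struct stat st;
    if (stat(model_path, &st) != 0)
        return 0;  /* 0 signals "file could not be parsed" */
    /* The real implementation also parses the llama hparams here and
     * would return 0 on a malformed file; omitted for brevity. */
    return (size_t)st.st_size;
}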
@@ -107,6 +107,14 @@ llmodel_model llmodel_model_create2(const char *model_path, const char *build_va
  */
 void llmodel_model_destroy(llmodel_model model);
 
+/**
+ * Estimate RAM requirement for a model file
+ * @param model A pointer to the llmodel_model instance.
+ * @param model_path A string representing the path to the model file.
+ * @return size greater than 0 if the model was parsed successfully, 0 if file could not be parsed.
+ */
+size_t llmodel_required_mem(llmodel_model model, const char *model_path);
+
 /**
  * Load a model from a file.
  * @param model A pointer to the llmodel_model instance.
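For context, a caller would typically query the new estimate before committing to a full load. A minimal sketch under a few assumptions: the "auto" build variant string and the default path are placeholders, llmodel_model_create2 takes an llmodel_error out-parameter as in the header at this commit, and llmodel_loadModel uses the single-path signature of this era (later versions add more parameters):

#include <stdio.h>
#include "llmodel_c.h"

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "ggml-model.bin"; /* placeholder */

    llmodel_error err = {0};
    llmodel_model model = llmodel_model_create2(path, "auto", &err);
    if (!model) {
        fprintf(stderr, "failed to create model\n");
        return 1;
    }

    /* New API from this commit: 0 means the file could not be parsed. */
    size_t mem = llmodel_required_mem(model, path);
    if (mem == 0) {
        fprintf(stderr, "could not parse model file: %s\n", path);
        llmodel_model_destroy(model);
        return 1;
    }
    printf("estimated RAM requirement: %zu bytes\n", mem);

    if (!llmodel_loadModel(model, path)) {
        fprintf(stderr, "failed to load model\n");
        llmodel_model_destroy(model);
        return 1;
    }

    /* ... run inference ... */
    llmodel_model_destroy(model);
    return 0;
}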