python: do not print GPU name with verbose=False, expose this info via properties (#2222)

* llamamodel: only print device used in verbose mode

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* python: expose backend and device via GPT4All properties

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* backend: const correctness fixes

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* python: bump version

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* python: typing fixups

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* python: fix segfault with closed GPT4All

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

---------

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Author: Jared Van Bortel
Date: 2024-04-18 14:52:02 -04:00 (committed by GitHub)
Commit: ba53ab5da0, parent: 271d752701
8 changed files with 91 additions and 18 deletions


@@ -295,6 +295,16 @@ bool llmodel_gpu_init_gpu_device_by_int(llmodel_model model, int device);
  */
 bool llmodel_has_gpu_device(llmodel_model model);
 
+/**
+ * @return The name of the llama.cpp backend currently in use. One of "cpu", "kompute", or "metal".
+ */
+const char *llmodel_model_backend_name(llmodel_model model);
+
+/**
+ * @return The name of the GPU device currently in use, or NULL for backends other than Kompute.
+ */
+const char *llmodel_model_gpu_device_name(llmodel_model model);
+
 #ifdef __cplusplus
 }
 #endif
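
For reference, a minimal sketch of how a caller might use the two new accessors. It assumes the declarations above live in llmodel_c.h and that `model` is a valid llmodel_model handle that was already created and loaded elsewhere; the loading path is not part of this diff.

#include <stdio.h>
#include "llmodel_c.h"

/* Sketch only: `model` must be a loaded llmodel_model handle. */
static void print_device_info(llmodel_model model) {
    /* Per the header docs: one of "cpu", "kompute", or "metal". */
    const char *backend = llmodel_model_backend_name(model);
    /* Per the header docs: NULL for backends other than Kompute. */
    const char *device = llmodel_model_gpu_device_name(model);

    printf("backend: %s\n", backend);
    printf("device:  %s\n", device != NULL ? device : "(none)");
}

These are the same values the Python bindings surface through the new GPT4All properties mentioned in the commit message, so verbose console output is no longer the only way to see which device was selected.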