mirror of https://github.com/nomic-ai/gpt4all.git, synced 2025-12-23 12:11:18 +00:00
Most of these can just shortcut out of the model-loading logic. llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams; then I just use the size on disk as an estimate for the memory size (which seems reasonable, since we mmap() the llama files anyway).