backend: bump llama.cpp for VRAM leak fix when switching models

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Author: Jared Van Bortel <jared@nomic.ai>
Date:   2024-01-31 17:24:01 -05:00
Parent: 6db5307730
Commit: eadc3b8d80

@@ -1 +1 @@
-Subproject commit e18ff04f9fcff1c56fa50e455e3da6807a057612
+Subproject commit 47aec1bcc09e090f0b8f196dc0a4e43b89507e4a