From eadc3b8d80d69f3414ead0cc970483f98d5a7b1b Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Wed, 31 Jan 2024 17:24:01 -0500
Subject: [PATCH] backend: bump llama.cpp for VRAM leak fix when switching
 models

Signed-off-by: Jared Van Bortel
---
 gpt4all-backend/llama.cpp-mainline | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gpt4all-backend/llama.cpp-mainline b/gpt4all-backend/llama.cpp-mainline
index e18ff04f..47aec1bc 160000
--- a/gpt4all-backend/llama.cpp-mainline
+++ b/gpt4all-backend/llama.cpp-mainline
@@ -1 +1 @@
-Subproject commit e18ff04f9fcff1c56fa50e455e3da6807a057612
+Subproject commit 47aec1bcc09e090f0b8f196dc0a4e43b89507e4a