From 19c95060ec1269e5259e70b8e0ccbf8a44813804 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Thu, 30 May 2024 16:34:55 -0400
Subject: [PATCH] llama.cpp: update submodule for CUDA exceptions and CPU skip

Signed-off-by: Jared Van Bortel
---
 gpt4all-backend/llama.cpp-mainline | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gpt4all-backend/llama.cpp-mainline b/gpt4all-backend/llama.cpp-mainline
index f67f4651..ed126312 160000
--- a/gpt4all-backend/llama.cpp-mainline
+++ b/gpt4all-backend/llama.cpp-mainline
@@ -1 +1 @@
-Subproject commit f67f4651fac0b2f377dc53fe853b1dafa96f9aa9
+Subproject commit ed12631213e1069d27d6a88913d489301ae9b1a1