From 1047c5e03878776f879d08b27f9e151e59aa3b11 Mon Sep 17 00:00:00 2001
From: Ikko Eltociear Ashimine
Date: Tue, 24 Sep 2024 05:12:52 +0900
Subject: [PATCH] docs: update README.md (#2979)

Signed-off-by: Ikko Eltociear Ashimine
Signed-off-by: AT
Co-authored-by: AT
---
 gpt4all-backend/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gpt4all-backend/README.md b/gpt4all-backend/README.md
index bb8a6727..d364538a 100644
--- a/gpt4all-backend/README.md
+++ b/gpt4all-backend/README.md
@@ -27,7 +27,7 @@ Unfortunately, no for three reasons:
 
 # What is being done to make them more compatible?
 
-A few things. Number one, we are maintaining compatibility with our current model zoo by way of the submodule pinning. However, we are also exploring how we can update to newer versions of llama.cpp without breaking our current models. This might involve an additional magic header check or it could possibly involve keeping the currently pinned submodule and also adding a new submodule with later changes and differienting them with namespaces or some other manner. Investigations continue.
+A few things. Number one, we are maintaining compatibility with our current model zoo by way of the submodule pinning. However, we are also exploring how we can update to newer versions of llama.cpp without breaking our current models. This might involve an additional magic header check or it could possibly involve keeping the currently pinned submodule and also adding a new submodule with later changes and differentiating them with namespaces or some other manner. Investigations continue.
 
 # What about GPU inference?