From f0dfdc28cced24ec6eb64237529cb5d08d222fc7 Mon Sep 17 00:00:00 2001
From: Gaurav Shukla
Date: Fri, 2 May 2025 21:42:26 +0530
Subject: [PATCH] Updated Docs Llama-CPP Linux NVIDIA GPU support and
 Windows-WSL

---
 fern/docs/pages/installation/installation.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fern/docs/pages/installation/installation.mdx b/fern/docs/pages/installation/installation.mdx
index e7f80c87..09e09933 100644
--- a/fern/docs/pages/installation/installation.mdx
+++ b/fern/docs/pages/installation/installation.mdx
@@ -340,7 +340,7 @@ Some tips:
 After that running the following command in the repository will install llama.cpp with GPU support:
 
 ```bash
-CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python numpy==1.26.0
+CMAKE_ARGS='-DGGML_CUDA=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python==0.2.90 numpy==1.26.4 markupsafe==2.1.5
 ```
 
 If your installation was correct, you should see a message similar to the following next