mirror of
https://github.com/hwchase17/langchain.git
synced 2025-06-22 14:49:29 +00:00
docs: Fix llama.cpp GPU Installation in llamacpp.ipynb (Deprecated Env Variable) (#29659)
- **Description:** The llamacpp.ipynb notebook used a deprecated environment variable, LLAMA_CUBLAS, for installing llama.cpp with GPU support. This commit updates the notebook to use the correct GGML_CUDA variable, fixing the installation error.
- **Issue:** none
- **Dependencies:** none
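For reference, this is the corrected install command as it appears in the updated notebook (the GGML_CUDA cmake flag superseded LLAMA_CUBLAS upstream in llama.cpp; an NVIDIA CUDA toolkit and a working C/C++ compiler are assumed to be available):

```shell
# Build llama-cpp-python from source with CUDA support.
# GGML_CUDA replaces the deprecated LLAMA_CUBLAS cmake flag.
CMAKE_ARGS="-DGGML_CUDA=on" FORCE_CMAKE=1 pip install llama-cpp-python
```

FORCE_CMAKE=1 forces pip to rebuild the wheel from source so the cmake flags actually take effect, rather than installing a prebuilt CPU-only wheel.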
This commit is contained in:
  parent: 3645181d0e
  commit: 1b064e198f
@@ -65,7 +65,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
+    "!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
    ]
   },
   {
@@ -81,7 +81,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
+    "!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
    ]
   },
   {
@@ -149,9 +149,9 @@
    "\n",
    "```\n",
    "set FORCE_CMAKE=1\n",
-    "set CMAKE_ARGS=-DLLAMA_CUBLAS=OFF\n",
+    "set CMAKE_ARGS=-DGGML_CUDA=OFF\n",
    "```\n",
-    "If you have an NVIDIA GPU make sure `DLLAMA_CUBLAS` is set to `ON`\n",
+    "If you have an NVIDIA GPU make sure `DGGML_CUDA` is set to `ON`\n",
    "\n",
    "#### Compiling and installing\n",
    "\n",
|