Mirror of https://github.com/hwchase17/langchain.git (synced 2026-01-24 22:05:39 +00:00)
The `n_gpu_layers` parameter in `llama.cpp` accepts `-1`, which offloads all model layers to the GPU, so the documentation has been updated accordingly. Refs: `35918873b4/llama_cpp/server/settings.py` (L29C22-L29C117), `35918873b4/llama_cpp/llama.py` (L125)
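As a minimal sketch of the `-1` semantics described above (a hypothetical helper for illustration, not code from `llama.cpp` itself), a negative value is resolved to "offload every layer":

```python
def resolve_gpu_layers(n_gpu_layers: int, total_layers: int) -> int:
    """Resolve an n_gpu_layers setting to an actual layer count.

    -1 (or any negative value) means offload all layers to the GPU;
    0 keeps everything on the CPU; larger positive values are capped
    at the model's total layer count.
    (Hypothetical helper illustrating the documented semantics.)
    """
    if n_gpu_layers < 0:
        return total_layers
    return min(n_gpu_layers, total_layers)


# For a 32-layer model:
print(resolve_gpu_layers(-1, 32))  # -> 32 (all layers on GPU)
print(resolve_gpu_layers(20, 32))  # -> 20
print(resolve_gpu_layers(0, 32))   # -> 0  (CPU only)
```

In practice this is why passing `n_gpu_layers=-1` is a convenient way to request full GPU offload without knowing the model's layer count in advance.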
LangChain Documentation
For more information on contributing to our documentation, see the Documentation Contributing Guide.