From 5b826175c907759b61c2a51b97de448fa947ec6b Mon Sep 17 00:00:00 2001
From: Subrat Lima <74418100+subrat-lima@users.noreply.github.com>
Date: Fri, 31 Jan 2025 21:48:24 +0530
Subject: [PATCH] docs: Update local_llms.ipynb - fixed a typo (#29520)

Description: fixed a typo in the how to > local llms > llamafile section
description.
---
 docs/docs/how_to/local_llms.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/how_to/local_llms.ipynb b/docs/docs/how_to/local_llms.ipynb
index dbd7c341909..8ddc3c8b4cf 100644
--- a/docs/docs/how_to/local_llms.ipynb
+++ b/docs/docs/how_to/local_llms.ipynb
@@ -477,7 +477,7 @@
     "2) Make the file executable\n",
     "3) Run the file\n",
     "\n",
-    "llamafiles bundle model weights and a [specially-compiled](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#technical-details) version of [`llama.cpp`](https://github.com/ggerganov/llama.cpp) into a single file that can run on most computers any additional dependencies. They also come with an embedded inference server that provides an [API](https://github.com/Mozilla-Ocho/llamafile/blob/main/llama.cpp/server/README.md#api-endpoints) for interacting with your model. \n",
+    "llamafiles bundle model weights and a [specially-compiled](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#technical-details) version of [`llama.cpp`](https://github.com/ggerganov/llama.cpp) into a single file that can run on most computers without any additional dependencies. They also come with an embedded inference server that provides an [API](https://github.com/Mozilla-Ocho/llamafile/blob/main/llama.cpp/server/README.md#api-endpoints) for interacting with your model. \n",
     "\n",
     "Here's a simple bash script that shows all 3 setup steps:\n",
     "\n",
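
The hunk's trailing context refers to the notebook's three-step setup script but cuts off before it. For orientation, here is a minimal sketch of what such a script looks like; the specific model (TinyLlama) and download URL are illustrative assumptions, not taken from this patch:

```bash
# Sketch of the three llamafile setup steps described above.
# Model name and URL are example choices, not the notebook's exact script.

# 1) Download a llamafile from HuggingFace
wget https://huggingface.co/jartine/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile

# 2) Make the file executable (on Windows, rename the file to end in ".exe" instead)
chmod +x TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile

# 3) Run the file, which starts the embedded inference server
#    (listens on http://localhost:8080 by default)
./TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile --server --nobrowser

# Once the server is up, the embedded llama.cpp API linked in the diff
# can be queried directly, e.g. the /completion endpoint:
curl --request POST \
    --url http://localhost:8080/completion \
    --header "Content-Type: application/json" \
    --data '{"prompt": "Name three llamafile features:", "n_predict": 64}'
```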