From c28ee329c9086f2dfb20e350c3a3e11fbdb3a6d5 Mon Sep 17 00:00:00 2001
From: Joe Ferrucci <2114494+JoeFerrucci@users.noreply.github.com>
Date: Thu, 20 Feb 2025 08:03:10 -0600
Subject: [PATCH] Fix typo in local_llms.ipynb docs (#29903)

Change `tailed` to `tailored`

`Docs > How-To > Local LLMs:`

https://python.langchain.com/docs/how_to/local_llms/#:~:text=use%20a%20prompt-,tailed,-for%20your%20specific
---
 docs/docs/how_to/local_llms.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/how_to/local_llms.ipynb b/docs/docs/how_to/local_llms.ipynb
index 8ddc3c8b4cf..8fde36a2c67 100644
--- a/docs/docs/how_to/local_llms.ipynb
+++ b/docs/docs/how_to/local_llms.ipynb
@@ -68,7 +68,7 @@
     "\n",
     "### Formatting prompts\n",
     "\n",
-    "Some providers have [chat model](/docs/concepts/chat_models) wrappers that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/text_llms) wrapper, you may need to use a prompt tailed for your specific model.\n",
+    "Some providers have [chat model](/docs/concepts/chat_models) wrappers that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/text_llms) wrapper, you may need to use a prompt tailored for your specific model.\n",
     "\n",
     "This can [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n",
     "\n",
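
For context on the sentence being corrected: when prompting a local model through a text-in/text-out LLM wrapper, the prompt string itself must carry the model's special tokens. Below is a minimal sketch of the LLaMA 2 chat template described in the Hugging Face prompting guide linked in the docs; it is illustrative only, not part of this patch, and the helper name format_llama2_prompt is hypothetical.

    # Illustrative sketch: wrap a system prompt and user message in the
    # published LLaMA 2 special-token format ([INST], <<SYS>>). Adapt the
    # template to whichever local model you actually run.
    def format_llama2_prompt(system_prompt: str, user_message: str) -> str:
        """Return a prompt string tailored to LLaMA 2's chat format."""
        return (
            "<s>[INST] <<SYS>>\n"
            f"{system_prompt}\n"
            "<</SYS>>\n\n"
            f"{user_message} [/INST]"
        )

    prompt = format_llama2_prompt(
        "Answer concisely using the provided context.",
        "What are the benefits of running LLMs locally?",
    )
    # `prompt` can now be passed to any text-in/text-out LLM wrapper,
    # e.g. a local llama.cpp or GPT4All binding.
    print(prompt)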