diff --git a/docs/docs/how_to/local_llms.ipynb b/docs/docs/how_to/local_llms.ipynb
index 8ddc3c8b4cf..8fde36a2c67 100644
--- a/docs/docs/how_to/local_llms.ipynb
+++ b/docs/docs/how_to/local_llms.ipynb
@@ -68,7 +68,7 @@
     "\n",
     "### Formatting prompts\n",
     "\n",
-    "Some providers have [chat model](/docs/concepts/chat_models) wrappers that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/text_llms) wrapper, you may need to use a prompt tailed for your specific model.\n",
+    "Some providers have [chat model](/docs/concepts/chat_models) wrappers that take care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/text_llms) wrapper, you may need to use a prompt tailored for your specific model.\n",
     "\n",
     "This can [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n",
     "\n",
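As an illustration of the model-specific template the corrected sentence refers to, here is a minimal sketch of a LLaMA 2-style prompt built with a LangChain `PromptTemplate`. The token layout follows the HuggingFace post linked in the docs; the variable names (`system_prompt`, `user_prompt`) are placeholders, not part of any LangChain API.

```python
from langchain_core.prompts import PromptTemplate

# LLaMA 2 chat models expect their special tokens ([INST], <<SYS>>) around
# the system and user messages; a text-in/text-out LLM wrapper will not add
# these for you, so the template must include them explicitly.
llama2_prompt = PromptTemplate.from_template(
    "<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_prompt} [/INST]"
)

formatted = llama2_prompt.format(
    system_prompt="You are a helpful assistant.",
    user_prompt="Explain what a context window is.",
)
print(formatted)
```

A chat model wrapper for the same provider would apply this formatting internally, which is why the distinction between the two wrapper types matters here.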