mirror of
https://github.com/hwchase17/langchain.git
synced 2025-06-27 00:48:45 +00:00
Fix typo in local_llms.ipynb docs (#29903)
Change `tailed` to `tailored` in `Docs > How-To > Local LLMs`: https://python.langchain.com/docs/how_to/local_llms/#:~:text=use%20a%20prompt-,tailed,-for%20your%20specific
This commit is contained in:
parent
ed3c2bd557
commit
c28ee329c9
@@ -68,7 +68,7 @@
 "\n",
 "### Formatting prompts\n",
 "\n",
-"Some providers have [chat model](/docs/concepts/chat_models) wrappers that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/text_llms) wrapper, you may need to use a prompt tailed for your specific model.\n",
+"Some providers have [chat model](/docs/concepts/chat_models) wrappers that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/text_llms) wrapper, you may need to use a prompt tailored for your specific model.\n",
 "\n",
 "This can [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n",
 "\n",
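The doc text in this diff notes that text-in/text-out LLM wrappers may need a prompt tailored to the model, including its special tokens. A minimal sketch of what that looks like for LLaMA 2's chat format (the `[INST]` / `<<SYS>>` markers described in the linked Hugging Face post); the template wording and the `format_prompt` helper here are illustrative, not part of the commit:

```python
# Illustrative LLaMA 2-style prompt template. The special tokens
# <s>, [INST], and <<SYS>> are part of the model's expected chat format;
# a plain text-in/text-out wrapper will not add them for you.
TEMPLATE = (
    "<s>[INST] <<SYS>>\n"
    "You are a helpful assistant. Answer using the provided context.\n"
    "<</SYS>>\n\n"
    "Context: {context}\n"
    "Question: {question} [/INST]"
)

def format_prompt(context: str, question: str) -> str:
    # Fill the template before sending the string to the local model.
    return TEMPLATE.format(context=context, question=question)
```

A chat-model wrapper for the same provider would apply this formatting automatically, which is why the doc recommends preferring one when available.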