mirror of
https://github.com/hwchase17/langchain.git
synced 2025-09-06 21:43:44 +00:00
docs:misc fixes (#9671)

Improve internal consistency in LangChain documentation:
- Change occurrences of eg and eg. to e.g.
- Fix headers containing unnecessary capital letters.
- Change instances of "few shot" to "few-shot".
- Add periods to end of sentences where missing.
- Minor spelling and grammar fixes.
@@ -1,3 +1,3 @@
 # Tags
 
-You can add tags to your callbacks by passing a `tags` argument to the `call()`/`run()`/`apply()` methods. This is useful for filtering your logs, eg. if you want to log all requests made to a specific LLMChain, you can add a tag, and then filter your logs by that tag. You can pass tags to both constructor and request callbacks, see the examples above for details. These tags are then passed to the `tags` argument of the "start" callback methods, ie. `on_llm_start`, `on_chat_model_start`, `on_chain_start`, `on_tool_start`.
+You can add tags to your callbacks by passing a `tags` argument to the `call()`/`run()`/`apply()` methods. This is useful for filtering your logs, e.g. if you want to log all requests made to a specific LLMChain, you can add a tag, and then filter your logs by that tag. You can pass tags to both constructor and request callbacks, see the examples above for details. These tags are then passed to the `tags` argument of the "start" callback methods, ie. `on_llm_start`, `on_chat_model_start`, `on_chain_start`, `on_tool_start`.
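The hunk above describes how tags flow from `run()`-style methods into the "start" callback methods. A minimal self-contained sketch of that tag-filtering idea (this is not LangChain's implementation; the handler class and `run_chain` helper are hypothetical stand-ins):

```python
# Sketch: tags passed at invocation time reach each "start" callback,
# letting a handler keep only the runs it cares about.
class TagFilterHandler:
    def __init__(self, wanted_tag):
        self.wanted_tag = wanted_tag
        self.logged = []

    def on_chain_start(self, chain_name, inputs, tags=None):
        # Only record runs that carry the tag we care about.
        if tags and self.wanted_tag in tags:
            self.logged.append((chain_name, inputs))

def run_chain(chain_name, inputs, handlers, tags=None):
    # Stand-in for `run()`/`apply()`: forwards tags to each start callback.
    for handler in handlers:
        handler.on_chain_start(chain_name, inputs, tags=tags)
    return inputs  # no real chain logic in this sketch

handler = TagFilterHandler("billing")
run_chain("llm_math", {"q": "2+2"}, [handler], tags=["billing"])
run_chain("llm_math", {"q": "3+3"}, [handler], tags=["debug"])
print(len(handler.logged))  # prints 1: only the tagged run was recorded
```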
@@ -5,7 +5,7 @@ In this tutorial, we'll create a custom example selector that selects every alte
 An `ExampleSelector` must implement two methods:
 
 1. An `add_example` method which takes in an example and adds it into the ExampleSelector
-2. A `select_examples` method which takes in input variables (which are meant to be user input) and returns a list of examples to use in the few shot prompt.
+2. A `select_examples` method which takes in input variables (which are meant to be user input) and returns a list of examples to use in the few-shot prompt.
 
 Let's implement a custom `ExampleSelector` that just selects two examples at random.
@@ -35,7 +35,7 @@
    "source": [
     "### Load Feast Store\n",
     "\n",
-    "Again, this should be set up according to the instructions in the Feast README"
+    "Again, this should be set up according to the instructions in the Feast README."
    ]
   },
   {
@@ -160,7 +160,7 @@
    "source": [
     "### Use in a chain\n",
     "\n",
-    "We can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature store"
+    "We can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature store."
    ]
   },
   {
@@ -243,7 +243,7 @@
     "tags": []
    },
    "source": [
-    "### Define and Load Features\n",
+    "### Define and load features\n",
     "\n",
     "We will use the user_transaction_counts Feature View from the [Tecton tutorial](https://docs.tecton.ai/docs/tutorials/tecton-fundamentals) as part of a Feature Service. For simplicity, we are only using a single Feature View; however, more sophisticated applications may require more feature views to retrieve the features needed for its prompt.\n",
     "\n",
@@ -394,7 +394,7 @@
    "source": [
     "### Use in a chain\n",
     "\n",
-    "We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Tecton Feature Platform"
+    "We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Tecton Feature Platform."
    ]
   },
   {
@@ -460,7 +460,7 @@
    "source": [
     "## Featureform\n",
     "\n",
-    "Finally, we will use [Featureform](https://github.com/featureform/featureform) an open-source and enterprise-grade feature store to run the same example. Featureform allows you to work with your infrastructure like Spark or locally to define your feature transformations."
+    "Finally, we will use [Featureform](https://github.com/featureform/featureform), an open-source and enterprise-grade feature store, to run the same example. Featureform allows you to work with your infrastructure like Spark or locally to define your feature transformations."
    ]
   },
   {
@@ -564,7 +564,7 @@
    "source": [
     "### Use in a chain\n",
     "\n",
-    "We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Featureform Feature Platform"
+    "We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Featureform Feature Platform."
    ]
   },
   {
@@ -605,7 +605,7 @@
    "source": [
     "## AzureML Managed Feature Store\n",
     "\n",
-    "We will use [AzureML Managed Feature Store](https://learn.microsoft.com/en-us/azure/machine-learning/concept-what-is-managed-feature-store) to run the below example. "
+    "We will use [AzureML Managed Feature Store](https://learn.microsoft.com/en-us/azure/machine-learning/concept-what-is-managed-feature-store) to run the example below. "
    ]
   },
   {
@@ -768,7 +768,7 @@
    "source": [
     "### Use in a chain\n",
     "\n",
-    "We can now use this in a chain, successfully creating a chain that achieves personalization backed by the AzureML Managed Feature Store"
+    "We can now use this in a chain, successfully creating a chain that achieves personalization backed by the AzureML Managed Feature Store."
    ]
   },
   {
@@ -11,9 +11,7 @@
     "\n",
     "## Why are custom prompt templates needed?\n",
     "\n",
-    "LangChain provides a set of default prompt templates that can be used to generate prompts for a variety of tasks. However, there may be cases where the default prompt templates do not meet your needs. For example, you may want to create a prompt template with specific dynamic instructions for your language model. In such cases, you can create a custom prompt template.\n",
-    "\n",
-    "Take a look at the current set of default prompt templates [here](/docs/modules/model_io/prompts/prompt_templates/)."
+    "LangChain provides a set of [default prompt templates](/docs/modules/model_io/prompts/prompt_templates/) that can be used to generate prompts for a variety of tasks. However, there may be cases where the default prompt templates do not meet your needs. For example, you may want to create a prompt template with specific dynamic instructions for your language model. In such cases, you can create a custom prompt template."
    ]
   },
   {
@@ -21,7 +19,7 @@
    "id": "5d56ce86",
    "metadata": {},
    "source": [
-    "## Creating a Custom Prompt Template\n",
+    "## Creating a custom prompt template\n",
     "\n",
     "There are essentially two distinct prompt templates available - string prompt templates and chat prompt templates. String prompt templates provides a simple prompt in string format, while chat prompt templates produces a more structured prompt to be used with a chat API.\n",
     "\n",
@@ -29,7 +27,7 @@
     "\n",
     "To create a custom string prompt template, there are two requirements:\n",
     "1. It has an input_variables attribute that exposes what input variables the prompt template expects.\n",
-    "2. It exposes a format method that takes in keyword arguments corresponding to the expected input_variables and returns the formatted prompt.\n",
+    "2. It defines a format method that takes in keyword arguments corresponding to the expected input_variables and returns the formatted prompt.\n",
     "\n",
     "We will create a custom prompt template that takes in the function name as input and formats the prompt to provide the source code of the function. To achieve this, let's first create a function that will return the source code of a function given its name."
    ]
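The two requirements in this hunk (an `input_variables` attribute and a `format` method) can be sketched without LangChain's `StringPromptTemplate` base class. The class name and prompt wording below are illustrative, not the notebook's exact code; `inspect.getsource` is guarded because it fails outside file-backed modules:

```python
import inspect

# Sketch of a custom string prompt template: exposes input_variables
# and a format() that builds the prompt from a function's source code.
class FunctionExplainerPromptTemplate:
    input_variables = ["function_name"]

    def format(self, **kwargs):
        fn = kwargs["function_name"]
        try:
            source = inspect.getsource(fn)
        except OSError:
            source = "<source unavailable>"
        return (
            "Given the function name and source code, explain the function.\n"
            f"Function name: {fn.__name__}\n"
            f"Source code:\n{source}"
        )

def hello(name):
    return f"Hello, {name}!"

prompt = FunctionExplainerPromptTemplate().format(function_name=hello)
print(prompt)
```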
@@ -5,9 +5,9 @@
    "id": "bb0735c0",
    "metadata": {},
    "source": [
-    "# Few shot examples for chat models\n",
+    "# Few-shot examples for chat models\n",
     "\n",
-    "This notebook covers how to use few shot examples in chat models. There does not appear to be solid consensus on how best to do few shot prompting, and the optimal prompt compilation will likely vary by model. Because of this, we provide few-shot prompt templates like the [FewShotChatMessagePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotChatMessagePromptTemplate.html) as a flexible starting point, and you can modify or replace them as you see fit.\n",
+    "This notebook covers how to use few-shot examples in chat models. There does not appear to be solid consensus on how best to do few-shot prompting, and the optimal prompt compilation will likely vary by model. Because of this, we provide few-shot prompt templates like the [FewShotChatMessagePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotChatMessagePromptTemplate.html) as a flexible starting point, and you can modify or replace them as you see fit.\n",
     "\n",
     "The goal of few-shot prompt templates are to dynamically select examples based on an input, and then format the examples in a final prompt to provide for the model.\n",
     "\n",
@@ -133,7 +133,7 @@
    "source": [
     "final_prompt = ChatPromptTemplate.from_messages(\n",
     "    [\n",
-    "        (\"system\", \"You are wonderous wizard of math.\"),\n",
+    "        (\"system\", \"You are a wondrous wizard of math.\"),\n",
     "        few_shot_prompt,\n",
     "        (\"human\", \"{input}\"),\n",
     "    ]\n",
@@ -172,7 +172,7 @@
    "id": "70ab7114-f07f-46be-8874-3705a25aba5f",
    "metadata": {},
    "source": [
-    "## Dynamic Few-shot Prompting\n",
+    "## Dynamic few-shot prompting\n",
     "\n",
     "Sometimes you may want to condition which examples are shown based on the input. For this, you can replace the `examples` with an `example_selector`. The other components remain the same as above! To review, the dynamic few-shot prompt template would look like:\n",
     "\n",
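The dynamic-selection idea this hunk describes (swap a fixed `examples` list for an `example_selector` conditioned on the input) can be sketched with a toy selector. LangChain's own notebook uses vector-store semantic similarity; this stand-in scores by word overlap so it needs no dependencies:

```python
# Toy example_selector: rank stored examples by word overlap with the
# user input and keep the top k. A real setup would use embeddings.
class OverlapExampleSelector:
    def __init__(self, examples, k=2):
        self.examples = examples
        self.k = k

    def select_examples(self, input_variables):
        words = set(input_variables["input"].lower().split())

        def score(example):
            return len(words & set(example["input"].lower().split()))

        return sorted(self.examples, key=score, reverse=True)[: self.k]

examples = [
    {"input": "2+2", "output": "4"},
    {"input": "what did the cow say to the moon", "output": "nothing at all"},
    {"input": "2+3", "output": "5"},
]
selector = OverlapExampleSelector(examples, k=1)
print(selector.select_examples({"input": "what did the cow say"}))
```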
@@ -357,7 +357,7 @@
    "source": [
     "final_prompt = ChatPromptTemplate.from_messages(\n",
     "    [\n",
-    "        (\"system\", \"You are wonderous wizard of math.\"),\n",
+    "        (\"system\", \"You are a wondrous wizard of math.\"),\n",
     "        few_shot_prompt,\n",
     "        (\"human\", \"{input}\"),\n",
     "    ]\n",
@@ -1,6 +1,6 @@
 # Format template output
 
-The output of the format method is available as string, list of messages and `ChatPromptValue`
+The output of the format method is available as a string, list of messages and `ChatPromptValue`
 
 As string:
 
@@ -26,22 +26,7 @@ output_2 = chat_prompt.format_prompt(input_language="English", output_language="
 assert output == output_2
 ```
 
-As `ChatPromptValue`
-
-
-```python
-chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.")
-```
-
-<CodeOutputBlock lang="python">
-
-```
-ChatPromptValue(messages=[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}), HumanMessage(content='I love programming.', additional_kwargs={})])
-```
-
-</CodeOutputBlock>
-
-As list of Message objects
+As list of Message objects:
 
 
 ```python
@@ -57,3 +42,17 @@ chat_prompt.format_prompt(input_language="English", output_language="French", te
 
 </CodeOutputBlock>
+
+As `ChatPromptValue`:
+
+
+```python
+chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.")
+```
+
+<CodeOutputBlock lang="python">
+
+```
+ChatPromptValue(messages=[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}), HumanMessage(content='I love programming.', additional_kwargs={})])
+```
+
+</CodeOutputBlock>
@@ -1,4 +1,4 @@
-# Template Formats
+# Template formats
 
 `PromptTemplate` by default uses Python f-string as its template format. However, it can also use other formats like `jinja2`, specified through the `template_format` argument.
 
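The contrast this page draws between the `f-string` and `jinja2` template formats can be illustrated without third-party packages. The `render` helper below is a hypothetical stand-in for `PromptTemplate`: the f-string branch uses `str.format`, and the jinja2 branch only emulates jinja2's `{{ name }}` substitution with a regex (real code would use the jinja2 package):

```python
import re

# Sketch: dispatch on template_format, mirroring the two syntaxes
# mentioned in the doc ({name} for f-string, {{ name }} for jinja2).
def render(template, template_format="f-string", **kwargs):
    if template_format == "f-string":
        return template.format(**kwargs)
    elif template_format == "jinja2":
        # Minimal emulation of jinja2 variable substitution only.
        return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                      lambda m: str(kwargs[m.group(1)]), template)
    raise ValueError(f"unknown template_format: {template_format}")

print(render("Tell me a {adjective} joke", adjective="funny"))
print(render("Tell me a {{ adjective }} joke",
             template_format="jinja2", adjective="funny"))
# both print: Tell me a funny joke
```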
@@ -11,7 +11,7 @@
     "\n",
     "At a high level, the following design principles are applied to serialization:\n",
     "\n",
-    "1. Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like Examples, different serialization methods may be supported.\n",
+    "1. Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like examples, different serialization methods may be supported.\n",
     "\n",
     "2. We support specifying everything in one file, or storing different components (templates, examples, etc) in different files and referencing them. For some cases, storing everything in file makes the most sense, but for others it is preferrable to split up some of the assets (long templates, large examples, reusable components). LangChain supports both.\n",
     "\n",
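Principle 2 above (one self-contained config versus components split into referenced files) can be sketched with a toy loader. This is not LangChain's `load_prompt`; the `template_path` key mirrors the convention the notebook describes, but the loader itself is hypothetical:

```python
import json
import os
import tempfile

# Sketch: a config either embeds the template or points at a file.
def load_prompt_config(config):
    if "template_path" in config:
        with open(config["template_path"]) as f:
            config = {**config, "template": f.read()}
    return config["template"], config["input_variables"]

one_file = {"_type": "prompt",
            "input_variables": ["name"],
            "template": "Hello {name}"}

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "template.txt")
    with open(path, "w") as f:
        f.write("Hello {name}")
    split = {"_type": "prompt",
             "input_variables": ["name"],
             "template_path": path}
    # Both storage styles load to the same prompt.
    assert load_prompt_config(one_file) == load_prompt_config(split)

print(load_prompt_config(one_file))
```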
@@ -144,7 +144,7 @@
    "id": "d788a83c",
    "metadata": {},
    "source": [
-    "### Loading Template from a File\n",
+    "### Loading template from a file\n",
     "This shows an example of storing the template in a separate file and then referencing it in the config. Notice that the key changes from `template` to `template_path`."
    ]
   },
@@ -214,7 +214,7 @@
    "source": [
     "## FewShotPromptTemplate\n",
     "\n",
-    "This section covers examples for loading few shot prompt templates."
+    "This section covers examples for loading few-shot prompt templates."
    ]
   },
   {
@@ -282,7 +282,7 @@
    "metadata": {},
    "source": [
     "### Loading from YAML\n",
-    "This shows an example of loading a few shot example from YAML."
+    "This shows an example of loading a few-shot example from YAML."
    ]
   },
   {
@@ -419,7 +419,7 @@
    "metadata": {},
    "source": [
     "### Loading from JSON\n",
-    "This shows an example of loading a few shot example from JSON."
+    "This shows an example of loading a few-shot example from JSON."
    ]
   },
   {
@@ -484,7 +484,7 @@
    "id": "9d23faf4",
    "metadata": {},
    "source": [
-    "### Examples in the Config\n",
+    "### Examples in the config\n",
     "This shows an example of referencing the examples directly in the config."
    ]
   },
@@ -553,7 +553,7 @@
    "id": "2e86139e",
    "metadata": {},
    "source": [
-    "### Example Prompt from a File\n",
+    "### Example prompt from a file\n",
     "This shows an example of loading the PromptTemplate that is used to format the examples from a separate file. Note that the key changes from `example_prompt` to `example_prompt_path`."
    ]
   },
@@ -637,7 +637,7 @@
    "id": "c6e3f9fe",
    "metadata": {},
    "source": [
-    "## PromptTempalte with OutputParser\n",
+    "## PromptTemplate with OutputParser\n",
     "This shows an example of loading a prompt along with an OutputParser from a file."
    ]
   },
@@ -5,9 +5,9 @@
    "id": "4de4e022",
    "metadata": {},
    "source": [
-    "# Prompt Pipelining\n",
+    "# Prompt pipelining\n",
     "\n",
-    "The idea behind prompt pipelining is to expose a user friendly interface for composing different parts of prompts together. You can do this with either string prompts or chat prompts. Constructing prompts this way allows for easy reuse of components."
+    "The idea behind prompt pipelining is to provide a user friendly interface for composing different parts of prompts together. You can do this with either string prompts or chat prompts. Constructing prompts this way allows for easy reuse of components."
    ]
   },
   {
@@ -15,26 +15,17 @@
    "id": "c3190650",
    "metadata": {},
    "source": [
-    "## String Prompt Pipelining\n",
+    "## String prompt pipelining\n",
     "\n",
     "When working with string prompts, each template is joined togther. You can work with either prompts directly or strings (the first element in the list needs to be a prompt)."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 1,
+   "execution_count": null,
    "id": "69b17f05",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stderr",
-     "output_type": "stream",
-     "text": [
-      "/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.12) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n",
-      " warnings.warn(\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "from langchain.prompts import PromptTemplate"
    ]
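The string-pipelining behavior this hunk's notebook covers (templates joined with `+`, where plain strings are absorbed into the template) can be sketched with a tiny class. `TinyTemplate` is a hypothetical stand-in, not LangChain's `PromptTemplate`:

```python
import re

# Sketch: __add__ accepts another template or a bare string, so
# templates compose left-to-right as the notebook describes.
class TinyTemplate:
    def __init__(self, template):
        self.template = template

    @property
    def input_variables(self):
        # Derive the variables from {name} placeholders.
        return sorted(set(re.findall(r"\{(\w+)\}", self.template)))

    def __add__(self, other):
        other_t = other.template if isinstance(other, TinyTemplate) else str(other)
        return TinyTemplate(self.template + other_t)

    def format(self, **kwargs):
        return self.template.format(**kwargs)

prompt = (TinyTemplate("Tell me a joke about {topic}")
          + ", make it funny"
          + " and in {language}")
print(prompt.format(topic="sports", language="spanish"))
# prints: Tell me a joke about sports, make it funny and in spanish
```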
@@ -160,7 +151,7 @@
    "id": "4e4f6a8a",
    "metadata": {},
    "source": [
-    "## Chat Prompt Pipelining"
+    "## Chat prompt pipelining"
    ]
   },
   {
@@ -173,19 +164,10 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 1,
+   "execution_count": null,
    "id": "2a180f75",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stderr",
-     "output_type": "stream",
-     "text": [
-      "/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.10) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n",
-      " warnings.warn(\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate\n",
     "from langchain.schema import HumanMessage, AIMessage, SystemMessage"
@@ -214,8 +196,8 @@
    "id": "30656ef8",
    "metadata": {},
    "source": [
-    "You can then easily create a pipeline combining it with other messages OR message templates.\n",
-    "Use a `Message` when there is no variables to be formatted, use a `MessageTemplate` when there are variables to be formatted. You can also use just a string -> note that this will automatically get inferred as a HumanMessagePromptTemplate."
+    "You can then easily create a pipeline combining it with other messages *or* message templates.\n",
+    "Use a `Message` when there is no variables to be formatted, use a `MessageTemplate` when there are variables to be formatted. You can also use just a string (note: this will automatically get inferred as a HumanMessagePromptTemplate.)"
    ]
   },
   {
@@ -270,7 +252,7 @@
    "id": "850357c0",
    "metadata": {},
    "source": [
-    "You can also use it in an LLMChain, just like before"
+    "You can also use it in an LLMChain, just like before."
    ]
   },
   {
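The chat-pipelining behavior edited above (concrete messages combine with message templates, and a bare string is inferred as a human message template) can be sketched in miniature. `Msg` and `ChatPipeline` are hypothetical stand-ins for LangChain's message classes, kept just detailed enough to show the composition rule:

```python
# Sketch: a pipeline of chat messages where `+ "some string"` is
# treated as a human message template, as the hunk above describes.
class Msg:
    def __init__(self, role, content):
        self.role, self.content = role, content

    def format(self, **kwargs):
        return Msg(self.role, self.content.format(**kwargs))

class ChatPipeline:
    def __init__(self, parts):
        self.parts = parts

    def __add__(self, other):
        if isinstance(other, str):
            other = Msg("human", other)  # bare strings become human messages
        return ChatPipeline(self.parts + [other])

    def format(self, **kwargs):
        return [(m.role, m.format(**kwargs).content) for m in self.parts]

pipeline = ChatPipeline([Msg("system", "You are a nice pirate.")]) + "{input}"
print(pipeline.format(input="i said hi"))
# prints: [('system', 'You are a nice pirate.'), ('human', 'i said hi')]
```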