docs:misc fixes (#9671)

Improve internal consistency in LangChain documentation
- Change occurrences of eg and eg. to e.g.
- Fix headers containing unnecessary capital letters.
- Change instances of "few shot" to "few-shot".
- Add periods to end of sentences where missing.
- Minor spelling and grammar fixes.
seamusp 2023-08-23 22:36:54 -07:00 committed by GitHub
parent 6283f3b63c
commit 25f2c82ae8
25 changed files with 85 additions and 106 deletions

View File

@@ -156,7 +156,7 @@ html_context = {
html_static_path = ["_static"]
# These paths are either relative to html_static_path
# or fully qualified paths (eg. https://...)
# or fully qualified paths (e.g. https://...)
html_css_files = [
"css/custom.css",
]

View File

@@ -107,7 +107,7 @@ import PromptTemplateChatModel from "@snippets/get_started/quickstart/prompt_tem
<PromptTemplateLLM/>
However, using these offers several advantages over raw string formatting.
You can "partial" out variables - eg you can format only some of the variables at a time.
You can "partial" out variables - e.g. you can format only some of the variables at a time.
You can compose them together, easily combining different templates into a single prompt.
For explanations of these functionalities, see the [section on prompts](/docs/modules/model_io/prompts) for more detail.
@@ -121,12 +121,12 @@ Let's take a look at this below:
ChatPromptTemplates can also include other things besides ChatMessageTemplates - see the [section on prompts](/docs/modules/model_io/prompts) for more detail.
## Output Parsers
## Output parsers
OutputParsers convert the raw output of an LLM into a format that can be used downstream.
There are a few main types of OutputParsers, including:
- Convert text from LLM -> structured information (eg JSON)
- Convert text from LLM -> structured information (e.g. JSON)
- Convert a ChatMessage into just a string
- Convert the extra information returned from a call besides the message (like OpenAI function invocation) into a string.
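For instance, the first kind of conversion can be sketched with the built-in `CommaSeparatedListOutputParser` (a minimal illustration, assuming the `langchain.output_parsers` module of this era):

```python
from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()

# Instructions to embed in the prompt so the LLM emits parseable text.
print(parser.get_format_instructions())

# Raw LLM text in, structured Python data out.
parser.parse("red, green, blue")  # -> ['red', 'green', 'blue']
```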
@@ -149,7 +149,7 @@ import LLMChain from "@snippets/get_started/quickstart/llm_chain.mdx"
<LLMChain/>
## Next Steps
## Next steps
This is it!
We've now gone over how to create the core building block of LangChain applications - LLMChains.

View File

@@ -1,6 +1,6 @@
# Few-shot prompt templates
In this tutorial, we'll learn how to create a prompt template that uses few shot examples. A few shot prompt template can be constructed from either a set of examples, or from an Example Selector object.
In this tutorial, we'll learn how to create a prompt template that uses few-shot examples. A few-shot prompt template can be constructed from either a set of examples, or from an Example Selector object.
import Example from "@snippets/modules/model_io/prompts/prompt_templates/few_shot_examples.mdx"
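For reference, a minimal sketch of constructing a few-shot prompt template from a set of examples (assuming the standard `FewShotPromptTemplate` API):

```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]

# How each individual example is rendered.
example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)

few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)

print(few_shot_prompt.format(input="big"))
```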

View File

@@ -6,7 +6,7 @@ sidebar_position: 0
Prompt templates are pre-defined recipes for generating prompts for language models.
A template may include instructions, few shot examples, and specific context and
A template may include instructions, few-shot examples, and specific context and
questions appropriate for a given task.
LangChain provides tooling to create and work with prompt templates.

View File

@@ -1,6 +1,6 @@
# Partial prompt templates
Like other methods, it can make sense to "partial" a prompt template - eg pass in a subset of the required values, as to create a new prompt template which expects only the remaining subset of values.
Like other methods, it can make sense to "partial" a prompt template - e.g. pass in a subset of the required values, as to create a new prompt template which expects only the remaining subset of values.
LangChain supports this in two ways:
1. Partial formatting with string values.

View File

@@ -2,8 +2,8 @@
This notebook goes over how to compose multiple prompts together. This can be useful when you want to reuse parts of prompts. This can be done with a PipelinePrompt. A PipelinePrompt consists of two main parts:
- Final prompt: This is the final prompt that is returned
- Pipeline prompts: This is a list of tuples, consisting of a string name and a prompt template. Each prompt template will be formatted and then passed to future prompt templates as a variable with the same name.
- Final prompt: The final prompt that is returned
- Pipeline prompts: A list of tuples, consisting of a string name and a prompt template. Each prompt template will be formatted and then passed to future prompt templates as a variable with the same name.
import Example from "@snippets/modules/model_io/prompts/prompt_templates/prompt_composition.mdx"
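For reference, a minimal sketch of the PipelinePrompt pattern described above (assuming the `PipelinePromptTemplate` class from `langchain.prompts.pipeline`):

```python
from langchain.prompts import PromptTemplate
from langchain.prompts.pipeline import PipelinePromptTemplate

# Final prompt: the prompt that is ultimately returned.
final_prompt = PromptTemplate.from_template("{introduction}\n\n{start}")

# Pipeline prompts: (name, template) pairs formatted in order.
introduction_prompt = PromptTemplate.from_template("You are impersonating {person}.")
start_prompt = PromptTemplate.from_template("Q: {question}\nA:")

pipeline_prompt = PipelinePromptTemplate(
    final_prompt=final_prompt,
    pipeline_prompts=[("introduction", introduction_prompt), ("start", start_prompt)],
)

print(pipeline_prompt.format(person="Ada Lovelace", question="What is your favorite machine?"))
```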

View File

@@ -1318,7 +1318,7 @@
"source": [
"template = \"\"\"Write some python code to solve the user's problem. \n",
"\n",
"Return only python code in Markdown format, eg:\n",
"Return only python code in Markdown format, e.g.:\n",
"\n",
"```python\n",
"....\n",

View File

@@ -11,7 +11,7 @@
"\n",
"[PromptLayer](https://promptlayer.com) is a an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide we will go over how to setup the `PromptLayerCallbackHandler`. \n",
"\n",
"While PromptLayer does have LLMs that integrate directly with LangChain (eg [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), this callback is the recommended way to integrate PromptLayer with LangChain.\n",
"While PromptLayer does have LLMs that integrate directly with LangChain (e.g. [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), this callback is the recommended way to integrate PromptLayer with LangChain.\n",
"\n",
"See [our docs](https://docs.promptlayer.com/languages/langchain) for more information."
]

View File

@@ -173,7 +173,7 @@
"source": [
"from langchain.document_loaders import GitLoader\n",
"\n",
"# eg. loading only python files\n",
"# e.g. loading only python files\n",
"loader = GitLoader(\n",
" repo_path=\"./example_data/test_repo1/\",\n",
" file_filter=lambda file_path: file_path.endswith(\".py\"),\n",

View File

@@ -52,7 +52,7 @@ Note that using `ddtrace-run` or `patch_all()` will also enable the `requests` a
from ddtrace import config, patch
# Note: be sure to configure the integration before calling ``patch()``!
# eg. config.langchain["logs_enabled"] = True
# e.g. config.langchain["logs_enabled"] = True
patch(langchain=True)

View File

@@ -1,3 +1,3 @@
# Tags
You can add tags to your callbacks by passing a `tags` argument to the `call()`/`run()`/`apply()` methods. This is useful for filtering your logs, eg. if you want to log all requests made to a specific LLMChain, you can add a tag, and then filter your logs by that tag. You can pass tags to both constructor and request callbacks, see the examples above for details. These tags are then passed to the `tags` argument of the "start" callback methods, ie. `on_llm_start`, `on_chat_model_start`, `on_chain_start`, `on_tool_start`.
You can add tags to your callbacks by passing a `tags` argument to the `call()`/`run()`/`apply()` methods. This is useful for filtering your logs, e.g. if you want to log all requests made to a specific LLMChain, you can add a tag, and then filter your logs by that tag. You can pass tags to both constructor and request callbacks; see the examples above for details. These tags are then passed to the `tags` argument of the "start" callback methods, i.e. `on_llm_start`, `on_chat_model_start`, `on_chain_start`, `on_tool_start`.
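A minimal sketch of both styles of tagging (`llm` and `prompt` are assumed to be defined elsewhere):

```python
from langchain.chains import LLMChain

# Constructor tags: attached to every call made with this chain.
chain = LLMChain(llm=llm, prompt=prompt, tags=["my-chain"])

# Request tags: attached to this one call (and its sub-calls) only.
chain.run("colors", tags=["one-off-request"])
```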

View File

@@ -5,7 +5,7 @@ In this tutorial, we'll create a custom example selector that selects every alte
An `ExampleSelector` must implement two methods:
1. An `add_example` method which takes in an example and adds it into the ExampleSelector
2. A `select_examples` method which takes in input variables (which are meant to be user input) and returns a list of examples to use in the few shot prompt.
2. A `select_examples` method which takes in input variables (which are meant to be user input) and returns a list of examples to use in the few-shot prompt.
Let's implement a custom `ExampleSelector` that just selects two examples at random.
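A sketch of such a selector (mirroring the two required methods; `BaseExampleSelector` is assumed to live at `langchain.prompts.example_selector.base`):

```python
import numpy as np
from langchain.prompts.example_selector.base import BaseExampleSelector

class CustomExampleSelector(BaseExampleSelector):
    def __init__(self, examples):
        self.examples = examples

    def add_example(self, example):
        """Add a new example to the pool."""
        self.examples.append(example)

    def select_examples(self, input_variables):
        """Ignore the input and pick two examples at random."""
        return np.random.choice(self.examples, size=2, replace=False)
```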

View File

@@ -35,7 +35,7 @@
"source": [
"### Load Feast Store\n",
"\n",
"Again, this should be set up according to the instructions in the Feast README"
"Again, this should be set up according to the instructions in the Feast README."
]
},
{
@@ -160,7 +160,7 @@
"source": [
"### Use in a chain\n",
"\n",
"We can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature store"
"We can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature store."
]
},
{
@@ -243,7 +243,7 @@
"tags": []
},
"source": [
"### Define and Load Features\n",
"### Define and load features\n",
"\n",
"We will use the user_transaction_counts Feature View from the [Tecton tutorial](https://docs.tecton.ai/docs/tutorials/tecton-fundamentals) as part of a Feature Service. For simplicity, we are only using a single Feature View; however, more sophisticated applications may require more feature views to retrieve the features needed for its prompt.\n",
"\n",
@@ -394,7 +394,7 @@
"source": [
"### Use in a chain\n",
"\n",
"We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Tecton Feature Platform"
"We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Tecton Feature Platform."
]
},
{
@@ -460,7 +460,7 @@
"source": [
"## Featureform\n",
"\n",
"Finally, we will use [Featureform](https://github.com/featureform/featureform) an open-source and enterprise-grade feature store to run the same example. Featureform allows you to work with your infrastructure like Spark or locally to define your feature transformations."
"Finally, we will use [Featureform](https://github.com/featureform/featureform), an open-source and enterprise-grade feature store, to run the same example. Featureform allows you to work with your infrastructure like Spark or locally to define your feature transformations."
]
},
{
@@ -564,7 +564,7 @@
"source": [
"### Use in a chain\n",
"\n",
"We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Featureform Feature Platform"
"We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Featureform Feature Platform."
]
},
{
@@ -605,7 +605,7 @@
"source": [
"## AzureML Managed Feature Store\n",
"\n",
"We will use [AzureML Managed Feature Store](https://learn.microsoft.com/en-us/azure/machine-learning/concept-what-is-managed-feature-store) to run the below example. "
"We will use [AzureML Managed Feature Store](https://learn.microsoft.com/en-us/azure/machine-learning/concept-what-is-managed-feature-store) to run the example below. "
]
},
{
@@ -768,7 +768,7 @@
"source": [
"### Use in a chain\n",
"\n",
"We can now use this in a chain, successfully creating a chain that achieves personalization backed by the AzureML Managed Feature Store"
"We can now use this in a chain, successfully creating a chain that achieves personalization backed by the AzureML Managed Feature Store."
]
},
{

View File

@@ -11,9 +11,7 @@
"\n",
"## Why are custom prompt templates needed?\n",
"\n",
"LangChain provides a set of default prompt templates that can be used to generate prompts for a variety of tasks. However, there may be cases where the default prompt templates do not meet your needs. For example, you may want to create a prompt template with specific dynamic instructions for your language model. In such cases, you can create a custom prompt template.\n",
"\n",
"Take a look at the current set of default prompt templates [here](/docs/modules/model_io/prompts/prompt_templates/)."
"LangChain provides a set of [default prompt templates](/docs/modules/model_io/prompts/prompt_templates/) that can be used to generate prompts for a variety of tasks. However, there may be cases where the default prompt templates do not meet your needs. For example, you may want to create a prompt template with specific dynamic instructions for your language model. In such cases, you can create a custom prompt template."
]
},
{
@@ -21,7 +19,7 @@
"id": "5d56ce86",
"metadata": {},
"source": [
"## Creating a Custom Prompt Template\n",
"## Creating a custom prompt template\n",
"\n",
"There are essentially two distinct prompt templates available - string prompt templates and chat prompt templates. String prompt templates provides a simple prompt in string format, while chat prompt templates produces a more structured prompt to be used with a chat API.\n",
"\n",
@@ -29,7 +27,7 @@
"\n",
"To create a custom string prompt template, there are two requirements:\n",
"1. It has an input_variables attribute that exposes what input variables the prompt template expects.\n",
"2. It exposes a format method that takes in keyword arguments corresponding to the expected input_variables and returns the formatted prompt.\n",
"2. It defines a format method that takes in keyword arguments corresponding to the expected input_variables and returns the formatted prompt.\n",
"\n",
"We will create a custom prompt template that takes in the function name as input and formats the prompt to provide the source code of the function. To achieve this, let's first create a function that will return the source code of a function given its name."
]
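A rough sketch of what such a template can look like (illustrative only; the class and helper names here are hypothetical):

```python
import inspect

from langchain.prompts import StringPromptTemplate

def get_source_code(function):
    # Return the source code of the given function object.
    return inspect.getsource(function)

class FunctionExplainerPromptTemplate(StringPromptTemplate):
    """Renders a prompt containing the source code of a given function."""

    def format(self, **kwargs) -> str:
        source_code = get_source_code(kwargs["function_name"])
        return f"Explain the following function:\n{source_code}\nExplanation:"

# input_variables satisfies requirement 1; format() satisfies requirement 2.
prompt = FunctionExplainerPromptTemplate(input_variables=["function_name"])
print(prompt.format(function_name=get_source_code))
```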

View File

@@ -5,9 +5,9 @@
"id": "bb0735c0",
"metadata": {},
"source": [
"# Few shot examples for chat models\n",
"# Few-shot examples for chat models\n",
"\n",
"This notebook covers how to use few shot examples in chat models. There does not appear to be solid consensus on how best to do few shot prompting, and the optimal prompt compilation will likely vary by model. Because of this, we provide few-shot prompt templates like the [FewShotChatMessagePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotChatMessagePromptTemplate.html) as a flexible starting point, and you can modify or replace them as you see fit.\n",
"This notebook covers how to use few-shot examples in chat models. There does not appear to be solid consensus on how best to do few-shot prompting, and the optimal prompt compilation will likely vary by model. Because of this, we provide few-shot prompt templates like the [FewShotChatMessagePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotChatMessagePromptTemplate.html) as a flexible starting point, and you can modify or replace them as you see fit.\n",
"\n",
"The goal of few-shot prompt templates are to dynamically select examples based on an input, and then format the examples in a final prompt to provide for the model.\n",
"\n",
@@ -133,7 +133,7 @@
"source": [
"final_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are wonderous wizard of math.\"),\n",
" (\"system\", \"You are a wondrous wizard of math.\"),\n",
" few_shot_prompt,\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
@@ -172,7 +172,7 @@
"id": "70ab7114-f07f-46be-8874-3705a25aba5f",
"metadata": {},
"source": [
"## Dynamic Few-shot Prompting\n",
"## Dynamic few-shot prompting\n",
"\n",
"Sometimes you may want to condition which examples are shown based on the input. For this, you can replace the `examples` with an `example_selector`. The other components remain the same as above! To review, the dynamic few-shot prompt template would look like:\n",
"\n",
@@ -357,7 +357,7 @@
"source": [
"final_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are wonderous wizard of math.\"),\n",
" (\"system\", \"You are a wondrous wizard of math.\"),\n",
" few_shot_prompt,\n",
" (\"human\", \"{input}\"),\n",
" ]\n",

View File

@@ -1,6 +1,6 @@
# Format template output
The output of the format method is available as string, list of messages and `ChatPromptValue`
The output of the format method is available as a string, a list of messages, and a `ChatPromptValue`.
As a string:
@@ -26,22 +26,7 @@ output_2 = chat_prompt.format_prompt(input_language="English", output_language="
assert output == output_2
```
As `ChatPromptValue`
```python
chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.")
```
<CodeOutputBlock lang="python">
```
ChatPromptValue(messages=[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}), HumanMessage(content='I love programming.', additional_kwargs={})])
```
</CodeOutputBlock>
As list of Message objects
As a list of Message objects:
```python
@@ -57,3 +42,17 @@ chat_prompt.format_prompt(input_language="English", output_language="French", te
</CodeOutputBlock>
As `ChatPromptValue`:
```python
chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.")
```
<CodeOutputBlock lang="python">
```
ChatPromptValue(messages=[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}), HumanMessage(content='I love programming.', additional_kwargs={})])
```
</CodeOutputBlock>

View File

@@ -1,4 +1,4 @@
# Template Formats
# Template formats
`PromptTemplate` by default uses Python f-string as its template format. However, it can also use other formats like `jinja2`, specified through the `template_format` argument.
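For reference, a short sketch of switching the format (assumes the `jinja2` package is installed):

```python
from langchain.prompts import PromptTemplate

jinja2_template = "Tell me a {{ adjective }} joke about {{ content }}"
prompt = PromptTemplate.from_template(jinja2_template, template_format="jinja2")

prompt.format(adjective="funny", content="chickens")
# -> "Tell me a funny joke about chickens"
```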

View File

@@ -11,7 +11,7 @@
"\n",
"At a high level, the following design principles are applied to serialization:\n",
"\n",
"1. Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like Examples, different serialization methods may be supported.\n",
"1. Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like examples, different serialization methods may be supported.\n",
"\n",
"2. We support specifying everything in one file, or storing different components (templates, examples, etc) in different files and referencing them. For some cases, storing everything in file makes the most sense, but for others it is preferrable to split up some of the assets (long templates, large examples, reusable components). LangChain supports both.\n",
"\n",
@@ -144,7 +144,7 @@
"id": "d788a83c",
"metadata": {},
"source": [
"### Loading Template from a File\n",
"### Loading template from a file\n",
"This shows an example of storing the template in a separate file and then referencing it in the config. Notice that the key changes from `template` to `template_path`."
]
},
@@ -214,7 +214,7 @@
"source": [
"## FewShotPromptTemplate\n",
"\n",
"This section covers examples for loading few shot prompt templates."
"This section covers examples for loading few-shot prompt templates."
]
},
{
@@ -282,7 +282,7 @@
"metadata": {},
"source": [
"### Loading from YAML\n",
"This shows an example of loading a few shot example from YAML."
"This shows an example of loading a few-shot example from YAML."
]
},
{
@@ -419,7 +419,7 @@
"metadata": {},
"source": [
"### Loading from JSON\n",
"This shows an example of loading a few shot example from JSON."
"This shows an example of loading a few-shot example from JSON."
]
},
{
@@ -484,7 +484,7 @@
"id": "9d23faf4",
"metadata": {},
"source": [
"### Examples in the Config\n",
"### Examples in the config\n",
"This shows an example of referencing the examples directly in the config."
]
},
@@ -553,7 +553,7 @@
"id": "2e86139e",
"metadata": {},
"source": [
"### Example Prompt from a File\n",
"### Example prompt from a file\n",
"This shows an example of loading the PromptTemplate that is used to format the examples from a separate file. Note that the key changes from `example_prompt` to `example_prompt_path`."
]
},
@@ -637,7 +637,7 @@
"id": "c6e3f9fe",
"metadata": {},
"source": [
"## PromptTempalte with OutputParser\n",
"## PromptTemplate with OutputParser\n",
"This shows an example of loading a prompt along with an OutputParser from a file."
]
},

View File

@@ -5,9 +5,9 @@
"id": "4de4e022",
"metadata": {},
"source": [
"# Prompt Pipelining\n",
"# Prompt pipelining\n",
"\n",
"The idea behind prompt pipelining is to expose a user friendly interface for composing different parts of prompts together. You can do this with either string prompts or chat prompts. Constructing prompts this way allows for easy reuse of components."
"The idea behind prompt pipelining is to provide a user friendly interface for composing different parts of prompts together. You can do this with either string prompts or chat prompts. Constructing prompts this way allows for easy reuse of components."
]
},
{
@@ -15,26 +15,17 @@
"id": "c3190650",
"metadata": {},
"source": [
"## String Prompt Pipelining\n",
"## String prompt pipelining\n",
"\n",
"When working with string prompts, each template is joined togther. You can work with either prompts directly or strings (the first element in the list needs to be a prompt)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "69b17f05",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.12) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n",
" warnings.warn(\n"
]
}
],
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate"
]
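For reference, a minimal sketch of joining string prompts with `+` (assuming the pipelining API this notebook demonstrates):

```python
from langchain.prompts import PromptTemplate

# Templates and plain strings are concatenated into one template.
prompt = (
    PromptTemplate.from_template("Tell me a joke about {topic}")
    + ", make it funny"
    + "\n\nand in {language}"
)

prompt.format(topic="sports", language="spanish")
```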
@@ -160,7 +151,7 @@
"id": "4e4f6a8a",
"metadata": {},
"source": [
"## Chat Prompt Pipelining"
"## Chat prompt pipelining"
]
},
{
@@ -173,19 +164,10 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "2a180f75",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.10) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n",
" warnings.warn(\n"
]
}
],
"outputs": [],
"source": [
"from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate\n",
"from langchain.schema import HumanMessage, AIMessage, SystemMessage"
@@ -214,8 +196,8 @@
"id": "30656ef8",
"metadata": {},
"source": [
"You can then easily create a pipeline combining it with other messages OR message templates.\n",
"Use a `Message` when there is no variables to be formatted, use a `MessageTemplate` when there are variables to be formatted. You can also use just a string -> note that this will automatically get inferred as a HumanMessagePromptTemplate."
"You can then easily create a pipeline combining it with other messages *or* message templates.\n",
"Use a `Message` when there is no variables to be formatted, use a `MessageTemplate` when there are variables to be formatted. You can also use just a string (note: this will automatically get inferred as a HumanMessagePromptTemplate.)"
]
},
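For reference, a compact sketch of the pattern (assuming messages and templates support `+` composition as described):

```python
from langchain.schema import AIMessage, HumanMessage, SystemMessage

prompt = SystemMessage(content="You are a nice pirate")

# Message + Message + plain string (inferred as a HumanMessagePromptTemplate).
new_prompt = prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}"

new_prompt.format_messages(input="i said hi")
```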
{
@@ -270,7 +252,7 @@
"id": "850357c0",
"metadata": {},
"source": [
"You can also use it in an LLMChain, just like before"
"You can also use it in an LLMChain, just like before."
]
},
{

View File

@@ -130,10 +130,10 @@ chain.run(number=2, callbacks=[handler])
The `callbacks` argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) in two different places:
- **Constructor callbacks**: defined in the constructor, eg. `LLMChain(callbacks=[handler], tags=['a-tag'])`, which will be used for all calls made on that object, and will be scoped to that object only, eg. if you pass a handler to the `LLMChain` constructor, it will not be used by the Model attached to that chain.
- **Request callbacks**: defined in the `run()`/`apply()` methods used for issuing a request, eg. `chain.run(input, callbacks=[handler])`, which will be used for that specific request only, and all sub-requests that it contains (eg. a call to an LLMChain triggers a call to a Model, which uses the same handler passed in the `call()` method).
- **Constructor callbacks**: defined in the constructor, e.g. `LLMChain(callbacks=[handler], tags=['a-tag'])`, which will be used for all calls made on that object, and will be scoped to that object only, e.g. if you pass a handler to the `LLMChain` constructor, it will not be used by the Model attached to that chain.
- **Request callbacks**: defined in the `run()`/`apply()` methods used for issuing a request, e.g. `chain.run(input, callbacks=[handler])`, which will be used for that specific request only, and all sub-requests that it contains (e.g. a call to an LLMChain triggers a call to a Model, which uses the same handler passed in the `call()` method).
The `verbose` argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) as a constructor argument, eg. `LLMChain(verbose=True)`, and it is equivalent to passing a `ConsoleCallbackHandler` to the `callbacks` argument of that object and all child objects. This is useful for debugging, as it will log all events to the console.
The `verbose` argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) as a constructor argument, e.g. `LLMChain(verbose=True)`, and it is equivalent to passing a `ConsoleCallbackHandler` to the `callbacks` argument of that object and all child objects. This is useful for debugging, as it will log all events to the console.
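A minimal sketch contrasting the two, plus `verbose` (`llm` and `prompt` are assumed to be defined elsewhere):

```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain

handler = StdOutCallbackHandler()

# Constructor callbacks: scoped to this object only (not the attached Model).
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])

# Request callbacks: used for this request and all of its sub-requests.
chain.run("colors", callbacks=[handler])

# verbose=True: shorthand for a console handler on this object and its children.
verbose_chain = LLMChain(llm=llm, prompt=prompt, verbose=True)
```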
### When do you want to use each of these?

View File

@@ -628,7 +628,7 @@ local_chain("How many customers are there?")
</CodeOutputBlock>
Even this relatively large model will most likely fail to generate more complicated SQL by itself. However, you can log its inputs and outputs so that you can hand-correct them and use the corrected examples for few shot prompt examples later. In practice, you could log any executions of your chain that raise exceptions (as shown in the example below) or get direct user feedback in cases where the results are incorrect (but did not raise an exception).
Even this relatively large model will most likely fail to generate more complicated SQL by itself. However, you can log its inputs and outputs so that you can hand-correct them and use the corrected examples for few-shot prompt examples later. In practice, you could log any executions of your chain that raise exceptions (as shown in the example below) or get direct user feedback in cases where the results are incorrect (but did not raise an exception).
```bash
@@ -878,7 +878,7 @@ YAML_EXAMPLES = """
"""
```
Now that you have some examples (with manually corrected output SQL), you can do few shot prompt seeding the usual way:
Now that you have some examples (with manually corrected output SQL), you can do few-shot prompt seeding the usual way:
```python
@@ -925,7 +925,7 @@ few_shot_prompt = FewShotPromptTemplate(
</CodeOutputBlock>
The model should do better now with this few shot prompt, especially for inputs similar to the examples you have seeded it with.
The model should do better now with this few-shot prompt, especially for inputs similar to the examples you have seeded it with.
```python

View File

@@ -4,7 +4,7 @@ In addition to controlling which characters you can split on, you can also contr
- `length_function`: how the length of chunks is calculated. Defaults to just counting the number of characters, but it's pretty common to pass a token counter here.
- `chunk_size`: the maximum size of your chunks (as measured by the length function).
- `chunk_overlap`: the maximum overlap between chunks. It can be nice to have some overlap to maintain some continuity between chunks (eg do a sliding window).
- `chunk_overlap`: the maximum overlap between chunks. It can be nice to have some overlap to maintain some continuity between chunks (e.g. do a sliding window).
- `add_start_index`: whether to include the starting position of each chunk within the original document in the metadata.
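For reference, a sketch wiring all four parameters together (assuming `RecursiveCharacterTextSplitter`, which accepts them):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,        # maximum chunk size, as measured by length_function
    chunk_overlap=20,      # sliding-window overlap between adjacent chunks
    length_function=len,   # character count; pass a token counter if preferred
    add_start_index=True,  # record each chunk's offset in the document metadata
)

docs = text_splitter.create_documents(["...some long document text..."])
```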

View File

@@ -34,7 +34,7 @@ chat(chat_prompt.format_prompt(input_language="English", output_language="French
</CodeOutputBlock>
If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, eg:
If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, e.g.:
```python

View File

@@ -1,13 +1,13 @@
### Use case
In this tutorial, we'll configure few shot examples for self-ask with search.
In this tutorial, we'll configure few-shot examples for self-ask with search.
## Using an example set
### Create the example set
To get started, create a list of few shot examples. Each example should be a dictionary with the keys being the input variables and the values being the values for those input variables.
To get started, create a list of few-shot examples. Each example should be a dictionary with the keys being the input variables and the values being the values for those input variables.
```python
from langchain.prompts.few_shot import FewShotPromptTemplate
@@ -69,9 +69,9 @@ So the final answer is: No
]
```
### Create a formatter for the few shot examples
### Create a formatter for the few-shot examples
Configure a formatter that will format the few shot examples into a string. This formatter should be a `PromptTemplate` object.
Configure a formatter that will format the few-shot examples into a string. This formatter should be a `PromptTemplate` object.
```python
@@ -98,7 +98,7 @@ print(example_prompt.format(**examples[0]))
### Feed examples and formatter to `FewShotPromptTemplate`
Finally, create a `FewShotPromptTemplate` object. This object takes in the few shot examples and the formatter for the few shot examples.
Finally, create a `FewShotPromptTemplate` object. This object takes in the few-shot examples and the formatter for the few-shot examples.
```python
@@ -171,7 +171,7 @@ print(prompt.format(input="Who was the father of Mary Ball Washington?"))
We will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the `FewShotPromptTemplate` object, we will feed them into an `ExampleSelector` object.
In this tutorial, we will use the `SemanticSimilarityExampleSelector` class. This class selects few shot examples based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few shot examples, as well as a vector store to perform the nearest neighbor search.
In this tutorial, we will use the `SemanticSimilarityExampleSelector` class. This class selects few-shot examples based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few-shot examples, as well as a vector store to perform the nearest neighbor search.
```python
@@ -224,7 +224,7 @@ for example in selected_examples:
### Feed example selector into `FewShotPromptTemplate`
Finally, create a `FewShotPromptTemplate` object. This object takes in the example selector and the formatter for the few shot examples.
Finally, create a `FewShotPromptTemplate` object. This object takes in the example selector and the formatter for the few-shot examples.
```python

View File

@@ -1,4 +1,4 @@
## Partial With Strings
## Partial with strings
One common use case for wanting to partial a prompt template is when you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, `foo` and `baz`. If you get the `foo` value early on in the chain, but the `baz` value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the `foo` value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:
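In sketch form (the full example is elided from this hunk; this assumes the standard `partial` method):

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(template="{foo}{bar}", input_variables=["foo", "bar"])

# Partial with the value available now...
partial_prompt = prompt.partial(foo="foo")

# ...and supply the rest later.
print(partial_prompt.format(bar="baz"))  # -> "foobaz"
```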
@@ -40,7 +40,7 @@ print(prompt.format(bar="baz"))
</CodeOutputBlock>
## Partial With Functions
## Partial with functions
The other common use is to partial with a function. The use case for this is when you have a variable that you know you always want to fetch in a common way. A prime example of this is date or time. Imagine you have a prompt that you always want to include the current date. You can't hard-code it in the prompt, and passing it along with the other input variables is a bit annoying. In this case, it's very handy to be able to partial the prompt with a function that always returns the current date.
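In sketch form (again assuming the standard `partial` method; the date helper is illustrative):

```python
from datetime import datetime

from langchain.prompts import PromptTemplate

def _get_datetime():
    return datetime.now().strftime("%m/%d/%Y, %H:%M:%S")

prompt = PromptTemplate(
    template="Tell me a {adjective} joke about the day {date}",
    input_variables=["adjective", "date"],
)

# Partial with a function: it is called each time the prompt is formatted.
partial_prompt = prompt.partial(date=_get_datetime)
print(partial_prompt.format(adjective="funny"))
```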