docs: misc modelIO fixes (#9734)

Various improvements to the Model I/O section of the documentation

- Changed "Chat Model" to "chat model" in a few spots for internal
consistency
- Minor spelling & grammar fixes to improve readability & comprehension
Author: seamusp
Date: 2023-09-03 20:33:20 -07:00
Committed by: GitHub
Parent: c585351bdc
Commit: 43c4c6dfcc
28 changed files with 62 additions and 64 deletions

View File

@@ -47,7 +47,7 @@ from langchain.embeddings import integration_class_REPLACE_ME
```
## Chat Models
## Chat models
See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME)

View File

@@ -93,7 +93,7 @@
"metadata": {},
"source": [
"## Usage\n",
"### Using the Context callback within a Chat Model\n",
"### Using the Context callback within a chat model\n",
"\n",
"The Context callback handler can be used to directly record transcripts between users and AI assistants.\n",
"\n",

View File

@@ -11,7 +11,7 @@ pip install python-arango
## Graph QA Chain
Connect your ArangoDB Database with a Chat Model to get insights on your data.
Connect your ArangoDB Database with a chat model to get insights on your data.
See the notebook example [here](/docs/use_cases/more/graph/graph_arangodb_qa.html).
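A rough sketch of the pattern that notebook walks through, assuming `ArangoGraph` and `ArangoGraphQAChain` from langchain and a locally running ArangoDB instance (connection details here are illustrative):

```python
from arango import ArangoClient
from langchain.chains import ArangoGraphQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import ArangoGraph

# Connect to a local ArangoDB instance (credentials are illustrative).
db = ArangoClient(hosts="http://localhost:8529").db(
    "_system", username="root", password="", verify=True
)
graph = ArangoGraph(db)

# The chain translates natural-language questions into AQL queries,
# runs them against the graph, and summarizes the results.
chain = ArangoGraphQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True
)
chain.run("Who starred in Pulp Fiction?")
```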

View File

@@ -4,12 +4,12 @@
Key features of the ddtrace integration for LangChain:
- Traces: Capture LangChain requests, parameters, prompt-completions, and help visualize LangChain operations.
- Metrics: Capture LangChain request latency, errors, and token/cost usage (for OpenAI LLMs and Chat Models).
- Metrics: Capture LangChain request latency, errors, and token/cost usage (for OpenAI LLMs and chat models).
- Logs: Store prompt completion data for each LangChain operation.
- Dashboard: Combine metrics, logs, and trace data into a single plane to monitor LangChain requests.
- Monitors: Provide alerts in response to spikes in LangChain request latency or error rate.
Note: The ddtrace LangChain integration currently provides tracing for LLMs, Chat Models, Text Embedding Models, Chains, and Vectorstores.
Note: The ddtrace LangChain integration currently provides tracing for LLMs, chat models, Text Embedding Models, Chains, and Vectorstores.
## Installation and Setup

View File

@@ -5,9 +5,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Human input Chat Model\n",
"# Human input chat model\n",
"\n",
"Along with HumanInputLLM, LangChain also provides a pseudo Chat Model class that can be used for testing, debugging, or educational purposes. This allows you to mock out calls to the Chat Model and simulate how a human would respond if they received the messages.\n",
"Along with HumanInputLLM, LangChain also provides a pseudo chat model class that can be used for testing, debugging, or educational purposes. This allows you to mock out calls to the chat model and simulate how a human would respond if they received the messages.\n",
"\n",
"In this notebook, we go over how to use this.\n",
"\n",

View File

@@ -11,13 +11,13 @@
"\n",
"There is only one required thing that a custom LLM needs to implement:\n",
"\n",
"1. A `_call` method that takes in a string, some optional stop words, and returns a string\n",
"- A `_call` method that takes in a string, some optional stop words, and returns a string\n",
"\n",
"There is a second optional thing it can implement:\n",
"\n",
"1. An `_identifying_params` property that is used to help with printing of this class. Should return a dictionary.\n",
"- An `_identifying_params` property that is used to help with printing of this class. Should return a dictionary.\n",
"\n",
"Let's implement a very simple custom LLM that just returns the first N characters of the input."
"Let's implement a very simple custom LLM that just returns the first n characters of the input."
]
},
{
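For reference, the kind of implementation this notebook builds — a sketch against the `LLM` base class of this era (interfaces may have shifted since):

```python
from typing import Any, List, Mapping, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class CustomLLM(LLM):
    """A custom LLM that echoes the first n characters of the prompt."""

    n: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[: self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Identifying parameters, used when printing this class."""
        return {"n": self.n}


llm = CustomLLM(n=10)
print(llm("This is a foobar thing"))  # -> "This is a "
```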

View File

@@ -6,7 +6,7 @@
"metadata": {},
"source": [
"# Fake LLM\n",
"We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.\n",
"LangChain provides a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.\n",
"\n",
"In this notebook we go over how to use this.\n",
"\n",

View File

@@ -7,7 +7,7 @@
"source": [
"# Datetime parser\n",
"\n",
"This OutputParser shows out to parse LLM output into datetime format."
"This OutputParser can be used to parse LLM output into datetime format."
]
},
{
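A short sketch of the parser in action, assuming `DatetimeOutputParser` from `langchain.output_parsers`:

```python
from langchain.output_parsers import DatetimeOutputParser

parser = DatetimeOutputParser()

# The format instructions tell the LLM to answer as a datetime string,
# e.g. in the pattern %Y-%m-%dT%H:%M:%S.%fZ.
print(parser.get_format_instructions())

# Parsing a well-formed LLM response yields a datetime object.
dt = parser.parse("2009-01-03T18:15:05.000000Z")
print(dt)  # -> 2009-01-03 18:15:05
```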

View File

@@ -7,7 +7,7 @@
"source": [
"# Enum parser\n",
"\n",
"This notebook shows how to use an Enum output parser"
"This notebook shows how to use an Enum output parser."
]
},
{
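A minimal sketch, assuming `EnumOutputParser` from `langchain.output_parsers.enum`:

```python
from enum import Enum

from langchain.output_parsers.enum import EnumOutputParser


class Colors(Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"


parser = EnumOutputParser(enum=Colors)
print(parser.parse("red"))       # -> Colors.RED
print(parser.parse(" green\n"))  # whitespace is stripped -> Colors.GREEN
```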

View File

@@ -10,7 +10,7 @@
"\n",
"Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed JSON. In the OpenAI family, DaVinci can do reliably but Curie's ability already drops off dramatically. \n",
"\n",
"Use Pydantic to declare your data model. Pydantic's BaseModel like a Python dataclass, but with actual type checking + coercion."
"Use Pydantic to declare your data model. Pydantic's BaseModel is like a Python dataclass, but with actual type checking + coercion."
]
},
{
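A sketch of the pattern, assuming `PydanticOutputParser` from `langchain.output_parsers` (this era of langchain uses pydantic v1):

```python
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field


class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")


parser = PydanticOutputParser(pydantic_object=Joke)

# The parser validates and coerces the LLM's JSON output into a Joke.
joke = parser.parse(
    '{"setup": "Why did the chicken cross the road?",'
    ' "punchline": "To get to the other side!"}'
)
print(joke.punchline)
```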

View File

@@ -7,7 +7,7 @@
"source": [
"# Retry parser\n",
"\n",
"While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it can't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example."
"While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example."
]
},
{
@@ -93,7 +93,7 @@
"id": "25631465",
"metadata": {},
"source": [
"If we try to parse this response as is, we will get an error"
"If we try to parse this response as is, we will get an error:"
]
},
{
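A sketch of the retry pattern this notebook covers, assuming `RetryWithErrorOutputParser` (which re-prompts the LLM with the original prompt, the bad output, and the parsing error):

```python
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser, RetryWithErrorOutputParser
from langchain.prompts import PromptTemplate
from pydantic import BaseModel, Field


class Action(BaseModel):
    action: str = Field(description="action to take")
    action_input: str = Field(description="input to the action")


parser = PydanticOutputParser(pydantic_object=Action)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
prompt_value = prompt.format_prompt(query="who is leo di caprio's girlfriend?")

# A partially complete response: the format is right but a field is
# missing, so parser.parse(bad_response) would raise an exception.
bad_response = '{"action": "search"}'

# The retry parser re-prompts the LLM with the original prompt, the bad
# output, and the error, then parses the corrected completion.
retry_parser = RetryWithErrorOutputParser.from_llm(parser=parser, llm=ChatOpenAI())
fixed = retry_parser.parse_with_prompt(bad_response, prompt_value)
```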

View File

@@ -1,6 +1,6 @@
# Custom example selector
In this tutorial, we'll create a custom example selector that selects every alternate example from a given list of examples.
In this tutorial, we'll create a custom example selector that selects examples randomly from a given list of examples.
An `ExampleSelector` must implement two methods:
@@ -9,9 +9,8 @@ An `ExampleSelector` must implement two methods:
Let's implement a custom `ExampleSelector` that just selects two examples at random.
:::{note}
**Note:**
Take a look at the current set of example selector implementations supported in LangChain [here](/docs/modules/model_io/prompts/example_selectors/).
:::
<!-- TODO(shreya): Add the correct link. -->
@@ -52,7 +51,6 @@ examples = [
# Initialize example selector.
example_selector = CustomExampleSelector(examples)
# Select examples
example_selector.select_examples({"foo": "foo"})
# -> array([{'foo': '2'}, {'foo': '3'}], dtype=object)
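A sketch of the selector described above; the random-choice implementation via `np.random.choice` matches the object-array output shown:

```python
from typing import Dict, List

import numpy as np
from langchain.prompts.example_selector.base import BaseExampleSelector


class CustomExampleSelector(BaseExampleSelector):
    def __init__(self, examples: List[Dict[str, str]]):
        self.examples = examples

    def add_example(self, example: Dict[str, str]) -> None:
        """Add a new example to the pool available for selection."""
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        """Select two examples at random, ignoring the inputs."""
        return np.random.choice(self.examples, size=2, replace=False)
```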

View File

@@ -30,7 +30,7 @@
" template=\"Input: {input}\\nOutput: {output}\",\n",
")\n",
"\n",
"# These are a lot of examples of a pretend task of creating antonyms.\n",
"# Examples of a pretend task of creating antonyms.\n",
"examples = [\n",
" {\"input\": \"happy\", \"output\": \"sad\"},\n",
" {\"input\": \"tall\", \"output\": \"short\"},\n",
@@ -48,13 +48,13 @@
"outputs": [],
"source": [
"example_selector = MaxMarginalRelevanceExampleSelector.from_examples(\n",
" # This is the list of examples available to select from.\n",
" # The list of examples available to select from.\n",
" examples,\n",
" # This is the embedding class used to produce embeddings which are used to measure semantic similarity.\n",
" # The embedding class used to produce embeddings which are used to measure semantic similarity.\n",
" OpenAIEmbeddings(),\n",
" # This is the VectorStore class that is used to store the embeddings and do a similarity search over.\n",
" # The VectorStore class that is used to store the embeddings and do a similarity search over.\n",
" FAISS,\n",
" # This is the number of examples to produce.\n",
" # The number of examples to produce.\n",
" k=2,\n",
")\n",
"mmr_prompt = FewShotPromptTemplate(\n",
@@ -122,13 +122,13 @@
"# Let's compare this to what we would just get if we went solely off of similarity,\n",
"# by using SemanticSimilarityExampleSelector instead of MaxMarginalRelevanceExampleSelector.\n",
"example_selector = SemanticSimilarityExampleSelector.from_examples(\n",
" # This is the list of examples available to select from.\n",
" # The list of examples available to select from.\n",
" examples,\n",
" # This is the embedding class used to produce embeddings which are used to measure semantic similarity.\n",
" # The embedding class used to produce embeddings which are used to measure semantic similarity.\n",
" OpenAIEmbeddings(),\n",
" # This is the VectorStore class that is used to store the embeddings and do a similarity search over.\n",
" # The VectorStore class that is used to store the embeddings and do a similarity search over.\n",
" FAISS,\n",
" # This is the number of examples to produce.\n",
" # The number of examples to produce.\n",
" k=2,\n",
")\n",
"similar_prompt = FewShotPromptTemplate(\n",

View File

@@ -28,7 +28,7 @@
" template=\"Input: {input}\\nOutput: {output}\",\n",
")\n",
"\n",
"# These are a lot of examples of a pretend task of creating antonyms.\n",
"# Examples of a pretend task of creating antonyms.\n",
"examples = [\n",
" {\"input\": \"happy\", \"output\": \"sad\"},\n",
" {\"input\": \"tall\", \"output\": \"short\"},\n",
@@ -45,7 +45,7 @@
"metadata": {},
"outputs": [],
"source": [
"# These are examples of a fictional translation task.\n",
"# Examples of a fictional translation task.\n",
"examples = [\n",
" {\"input\": \"See Spot run.\", \"output\": \"Ver correr a Spot.\"},\n",
" {\"input\": \"My dog barks.\", \"output\": \"Mi perro ladra.\"},\n",
@@ -65,11 +65,11 @@
" template=\"Input: {input}\\nOutput: {output}\",\n",
")\n",
"example_selector = NGramOverlapExampleSelector(\n",
" # These are the examples it has available to choose from.\n",
" # The examples it has available to choose from.\n",
" examples=examples,\n",
" # This is the PromptTemplate being used to format the examples.\n",
" # The PromptTemplate being used to format the examples.\n",
" example_prompt=example_prompt,\n",
" # This is the threshold, at which selector stops.\n",
" # The threshold, at which selector stops.\n",
" # It is set to -1.0 by default.\n",
" threshold=-1.0,\n",
" # For negative threshold:\n",

View File

@@ -1,6 +1,6 @@
# Validate template
By default, `PromptTemplate` will validate the `template` string by checking whether the `input_variables` match the variables defined in `template`. You can disable this behavior by setting `validate_template` to `False`
By default, `PromptTemplate` will validate the `template` string by checking whether the `input_variables` match the variables defined in `template`. You can disable this behavior by setting `validate_template` to `False`.
```python
template = "I am learning langchain because {reason}."