docs: misc modelIO fixes (#9734)
Various improvements to the Model I/O section of the documentation:

- Changed "Chat Model" to "chat model" in a few spots for internal consistency
- Minor spelling & grammar fixes to improve readability & comprehension
@@ -19,7 +19,7 @@ from langchain.chat_models import ChatOpenAI
 chat = ChatOpenAI(openai_api_key="...")
 ```
 
-otherwise you can initialize without any params:
+Otherwise you can initialize without any params:
 
 ```python
 from langchain.chat_models import ChatOpenAI
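For context, a minimal sketch of the two initialization styles this hunk documents (explicit key versus environment variable), assuming `OPENAI_API_KEY` is set in the environment for the second form:

```python
from langchain.chat_models import ChatOpenAI

# Pass the API key explicitly...
chat = ChatOpenAI(openai_api_key="...")

# ...or omit it and let the OPENAI_API_KEY environment variable be picked up.
chat = ChatOpenAI()
```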
@@ -101,7 +101,7 @@ result
 
 </CodeOutputBlock>
 
-You can recover things like token usage from this LLMResult
+You can recover things like token usage from this LLMResult:
 
 
 ```python
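This hunk sits on the chat model getting-started page, where `result` comes from `chat.generate(...)`. As a rough sketch of recovering token usage (the keys inside `llm_output` are provider defined; `token_usage` is what OpenAI returns):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI()
result = chat.generate([[HumanMessage(content="Tell me a joke")]])

# result is an LLMResult; for OpenAI, llm_output carries a token_usage dict.
print(result.llm_output["token_usage"])
```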
@@ -1,6 +1,6 @@
 You can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplates`. You can use `ChatPromptTemplate`'s `format_prompt` -- this returns a `PromptValue`, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.
 
-For convenience, there is a `from_template` method exposed on the template. If you were to use this template, this is what it would look like:
+For convenience, there is a `from_template` method defined on the template. If you were to use this template, this is what it would look like:
 
 
 ```python
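A sketch of the `from_template` convenience method the hunk refers to, with an illustrative template string:

```python
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate

# Build a message template, then a chat prompt template from it.
human_template = HumanMessagePromptTemplate.from_template("{text}")
chat_prompt = ChatPromptTemplate.from_messages([human_template])

# format_prompt returns a PromptValue that converts either way.
prompt_value = chat_prompt.format_prompt(text="Translate this sentence.")
messages = prompt_value.to_messages()   # input for a chat model
as_string = prompt_value.to_string()    # input for an LLM
```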
@@ -90,7 +90,7 @@ llm_result.generations[-1]
 
 </CodeOutputBlock>
 
-You can also access provider specific information that is returned. This information is NOT standardized across providers.
+You can also access provider specific information that is returned. This information is **not** standardized across providers.
 
 
 ```python
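A sketch of what accessing that provider-specific payload looks like with the OpenAI LLM (other providers populate `llm_output` differently, or not at all):

```python
from langchain.llms import OpenAI

llm = OpenAI()
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"])

# llm_output is whatever the provider chose to return; for OpenAI it includes
# aggregate token counts and the model name. Its shape is not portable.
print(llm_result.llm_output)
```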
@@ -97,8 +97,8 @@ llm.predict("Tell me a joke")
 
 </CodeOutputBlock>
 
-## Optional Caching in Chains
-You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, its often easier to construct the chain first, and then edit the LLM afterwards.
+## Optional caching in chains
+You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, it's often easier to construct the chain first, and then edit the LLM afterwards.
 
 As an example, we will load a summarizer map-reduce chain. We will cache results for the map-step, but then not freeze it for the combine step.
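A sketch of the construct-then-edit pattern this hunk describes, assuming the summarize chain's `reduce_llm` parameter is the hook for swapping in an uncached LLM on the combine step:

```python
from langchain.llms import OpenAI
from langchain.chains.summarize import load_summarize_chain

# Cached LLM for the repeated map step...
llm = OpenAI(model_name="text-davinci-002")
# ...and an uncached twin for the combine (reduce) step.
no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)

chain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)
```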
@@ -7,4 +7,4 @@ class BaseExampleSelector(ABC):
         """Select which examples to use based on the inputs."""
 ```
 
-The only method it needs to expose is a ``select_examples`` method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected. Let's take a look at some below.
+The only method it needs to define is a ``select_examples`` method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected.
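To make the interface concrete, a hypothetical implementation sketch (the class name, `k` attribute, and `add_example` body are illustrative, not part of the change above):

```python
from typing import Dict, List

from langchain.prompts.example_selector.base import BaseExampleSelector


class FirstKExampleSelector(BaseExampleSelector):
    """Hypothetical selector that always returns the first k stored examples."""

    def __init__(self, examples: List[dict], k: int = 2):
        self.examples = examples
        self.k = k

    def add_example(self, example: Dict[str, str]) -> None:
        # Included for completeness; the base class also declares this method.
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        # A real implementation would inspect input_variables here.
        return self.examples[: self.k]
```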
@@ -4,7 +4,7 @@ from langchain.prompts import FewShotPromptTemplate
 from langchain.prompts.example_selector import LengthBasedExampleSelector
 
 
-# These are a lot of examples of a pretend task of creating antonyms.
+# Examples of a pretend task of creating antonyms.
 examples = [
     {"input": "happy", "output": "sad"},
     {"input": "tall", "output": "short"},
@@ -17,14 +17,14 @@ example_prompt = PromptTemplate(
     template="Input: {input}\nOutput: {output}",
 )
 example_selector = LengthBasedExampleSelector(
-    # These are the examples it has available to choose from.
+    # The examples it has available to choose from.
     examples=examples,
-    # This is the PromptTemplate being used to format the examples.
+    # The PromptTemplate being used to format the examples.
     example_prompt=example_prompt,
-    # This is the maximum length that the formatted examples should be.
+    # The maximum length that the formatted examples should be.
     # Length is measured by the get_text_length function below.
     max_length=25,
-    # This is the function used to get the length of a string, which is used
+    # The function used to get the length of a string, which is used
     # to determine which examples to include. It is commented out because
     # it is provided as a default value if none is specified.
     # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x))
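Pulling the two hunks above together, a runnable sketch of this selector feeding a few-shot prompt (the prefix and suffix strings are illustrative):

```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
example_selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=25,
)
dynamic_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
print(dynamic_prompt.format(adjective="big"))
```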
@@ -9,7 +9,7 @@ example_prompt = PromptTemplate(
     template="Input: {input}\nOutput: {output}",
 )
 
-# These are a lot of examples of a pretend task of creating antonyms.
+# Examples of a pretend task of creating antonyms.
 examples = [
     {"input": "happy", "output": "sad"},
     {"input": "tall", "output": "short"},
@@ -22,13 +22,13 @@ examples = [
 
 ```python
 example_selector = SemanticSimilarityExampleSelector.from_examples(
-    # This is the list of examples available to select from.
+    # The list of examples available to select from.
     examples,
-    # This is the embedding class used to produce embeddings which are used to measure semantic similarity.
+    # The embedding class used to produce embeddings which are used to measure semantic similarity.
     OpenAIEmbeddings(),
-    # This is the VectorStore class that is used to store the embeddings and do a similarity search over.
+    # The VectorStore class that is used to store the embeddings and do a similarity search over.
     Chroma,
-    # This is the number of examples to produce.
+    # The number of examples to produce.
     k=1
 )
 similar_prompt = FewShotPromptTemplate(
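Likewise, a runnable sketch around this hunk (it assumes the `openai` and `chromadb` packages plus an API key are available; prefix and suffix are illustrative):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), Chroma, k=1
)
similar_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
print(similar_prompt.format(adjective="worried"))
```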
@@ -55,7 +55,7 @@ For more information, see [Custom Prompt Templates](./custom_prompt_template.htm
 
 ## Chat prompt template
 
-The prompt to [Chat Models](../models/chat) is a list of chat messages.
+The prompt to [chat models](../models/chat) is a list of chat messages.
 
 Each chat message is associated with content, and an additional parameter called `role`.
 For example, in the OpenAI [Chat Completions API](https://platform.openai.com/docs/guides/chat/introduction), a chat message can be associated with an AI assistant, a human or a system role.
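A sketch of building such a role-tagged message list with the chat prompt classes this page covers (the template strings are illustrative):

```python
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

system = SystemMessagePromptTemplate.from_template(
    "You are a helpful assistant that translates {input_language} to {output_language}."
)
human = HumanMessagePromptTemplate.from_template("{text}")

chat_prompt = ChatPromptTemplate.from_messages([system, human])

# Each formatted message carries a role: system, human (user), or AI (assistant).
messages = chat_prompt.format_prompt(
    input_language="English", output_language="French", text="I love programming."
).to_messages()
```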