mirror of
https://github.com/hwchase17/langchain.git
synced 2025-09-06 21:43:44 +00:00
docs: chains & memory fixes (#9895)
Various improvements to the Chains & Memory sections of the documentation including formatting, spelling, and grammar fixes to improve readability.
@@ -19,7 +19,7 @@ llm_chain("colorful socks")
</CodeOutputBlock>
-## Additional ways of running LLM Chain
+## Additional ways of running `LLMChain`
Aside from `__call__` and `run` methods shared by all `Chain` objects, `LLMChain` offers a few more ways of calling the chain logic:
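For orientation, here is a minimal sketch of those extra entry points (`predict`, `apply`, `generate`), assuming the 0.0.x-era LangChain API used throughout these docs; it is illustrative only, not part of the patch:

```python
# Illustrative sketch, not part of the diff; assumes an OpenAI API key is configured.
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

llm_chain.predict(product="colorful socks")          # keyword-style single call
llm_chain.apply([{"product": "colorful socks"}])     # batch of input dicts
llm_chain.generate([{"product": "colorful socks"}])  # returns an LLMResult with generation metadata
```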
@@ -139,7 +139,7 @@ llm_chain.predict_and_parse()
## Initialize from string
-You can also construct an LLMChain from a string template directly.
+You can also construct an `LLMChain` from a string template directly.
```python
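# Illustrative sketch only (not the snippet elided from this hunk), assuming the
# 0.0.x-era API: LLMChain.from_string builds the PromptTemplate for you.
from langchain.chains import LLMChain
from langchain.llms import OpenAI

template = "Tell me a {adjective} joke about {subject}."
llm_chain = LLMChain.from_string(llm=OpenAI(temperature=0), template=template)

llm_chain.predict(adjective="sad", subject="ducks")
```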
@@ -89,7 +89,7 @@ print(review)
## Sequential Chain
Of course, not all sequential chains will be as simple as passing a single string as an argument and getting a single string as output for all steps in the chain. In this next example, we will experiment with more complex chains that involve multiple inputs, and where there are also multiple final outputs.
-Of particular importance is how we name the input/output variable names. In the above example we didn't have to think about that because we were just passing the output of one chain directly as input to the next, but here we do have worry about that because we have multiple inputs.
+Of particular importance is how we name the input/output variables. In the above example we didn't have to think about that because we were just passing the output of one chain directly as input to the next, but here we do have to worry about that because we have multiple inputs.
```python
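# Illustrative sketch only (not the snippet elided from this hunk), assuming the
# 0.0.x-era API: two LLMChains with explicit output_key values, wired into a
# SequentialChain that takes several named inputs and returns several named outputs.
from langchain.chains import LLMChain, SequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.7)

synopsis_prompt = PromptTemplate(
    input_variables=["title", "era"],
    template=("You are a playwright. Given the title of a play and the era it is set in, "
              "write a synopsis.\nTitle: {title}\nEra: {era}\nSynopsis:"),
)
synopsis_chain = LLMChain(llm=llm, prompt=synopsis_prompt, output_key="synopsis")

review_prompt = PromptTemplate(
    input_variables=["synopsis"],
    template="You are a play critic. Given a synopsis, write a review.\nSynopsis:\n{synopsis}\nReview:",
)
review_chain = LLMChain(llm=llm, prompt=review_prompt, output_key="review")

overall_chain = SequentialChain(
    chains=[synopsis_chain, review_chain],
    input_variables=["title", "era"],
    output_variables=["synopsis", "review"],  # both named outputs are returned to the caller
    verbose=True,
)
overall_chain({"title": "Tragedy at sunset on the beach", "era": "Victorian England"})
```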
@@ -158,7 +158,7 @@ overall_chain({"title":"Tragedy at sunset on the beach", "era": "Victorian Engla
### Memory in Sequential Chains
Sometimes you may want to pass along some context to use in each step of the chain or in a later part of the chain, but maintaining and chaining together the input/output variables can quickly get messy. Using `SimpleMemory` is a convenient way to manage this and clean up your chains.
-For example, using the previous playwright SequentialChain, lets say you wanted to include some context about date, time and location of the play, and using the generated synopsis and review, create some social media post text. You could add these new context variables as `input_variables`, or we can add a `SimpleMemory` to the chain to manage this context:
+For example, using the previous playwright `SequentialChain`, let's say you wanted to include some context about the date, time and location of the play, and using the generated synopsis and review, create some social media post text. You could add these new context variables as `input_variables`, or we can add a `SimpleMemory` to the chain to manage this context:
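As a rough sketch of what that looks like (assuming the 0.0.x-era `SimpleMemory` API and reusing the `llm`, `synopsis_chain`, and `review_chain` from the sketch above; `social_chain` here is a hypothetical third step, not part of the patch):

```python
from langchain.chains import LLMChain, SequentialChain
from langchain.memory import SimpleMemory
from langchain.prompts import PromptTemplate

social_prompt = PromptTemplate(
    input_variables=["synopsis", "review", "time", "location"],
    template=("Write a social media post announcing the play.\n"
              "Time: {time}\nLocation: {location}\nSynopsis: {synopsis}\nReview: {review}\nPost:"),
)
social_chain = LLMChain(llm=llm, prompt=social_prompt, output_key="social_post_text")

overall_chain = SequentialChain(
    # static context made available to every step without threading it
    # through input/output variables by hand
    memory=SimpleMemory(memories={"time": "December 25th, 8pm PST",
                                  "location": "Theater in the Park"}),
    chains=[synopsis_chain, review_chain, social_chain],
    input_variables=["era", "title"],
    output_variables=["social_post_text"],
    verbose=True,
)
overall_chain({"title": "Tragedy at sunset on the beach", "era": "Victorian England"})
```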
@@ -1,5 +1,5 @@
-Let's take a look at how to use ConversationBufferMemory in chains.
-ConversationBufferMemory is an extremely simple form of memory that just keeps a list of chat messages in a buffer
+Let's take a look at how to use `ConversationBufferMemory` in chains.
+`ConversationBufferMemory` is an extremely simple form of memory that just keeps a list of chat messages in a buffer
and passes those into the prompt template.
```python
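# Illustrative sketch only (not the snippet elided from this hunk), assuming the
# 0.0.x-era API: the buffer's contents are injected into the prompt via {chat_history}.
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

template = """You are a chatbot having a conversation with a human.

{chat_history}
Human: {human_input}
Chatbot:"""
prompt = PromptTemplate(input_variables=["chat_history", "human_input"], template=template)

memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt, memory=memory)

llm_chain.predict(human_input="Hi there, my friend!")
llm_chain.predict(human_input="What did I just say?")  # the first turn is now in the buffer
```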
@@ -16,7 +16,7 @@ Each individual memory type may very well have its own parameters and concepts t
### What variables get returned from memory
Before going into the chain, various variables are read from memory.
-This have specific names which need to align with the variables the chain expects.
+These have specific names which need to align with the variables the chain expects.
You can see what these variables are by calling `memory.load_memory_variables({})`.
Note that the empty dictionary that we pass in is just a placeholder for real variables.
If the memory type you are using is dependent upon the input variables, you may need to pass some in.
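A compact sketch of that check (assuming the 0.0.x-era `ConversationBufferMemory`; illustrative only, and the exact formatting of the returned string may differ):

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "hi"}, {"output": "whats up"})

# The empty dict is a placeholder; this memory type ignores the inputs.
memory.load_memory_variables({})
# -> {'history': 'Human: hi\nAI: whats up'}
```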
@@ -34,7 +34,7 @@ memory.load_memory_variables({})
</CodeOutputBlock>
In this case, you can see that `load_memory_variables` returns a single key, `history`.
-This means that your chain (and likely your prompt) should expect and input named `history`.
+This means that your chain (and likely your prompt) should expect an input named `history`.
You can usually control this variable through parameters on the memory class.
For example, if you want the memory variables to be returned in the key `chat_history` you can do:
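For instance, a minimal sketch of renaming the returned key (assuming the 0.0.x-era API; illustrative only):

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history")
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("whats up?")

memory.load_memory_variables({})
# -> {'chat_history': 'Human: hi!\nAI: whats up?'}
```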
@@ -51,12 +51,12 @@ memory.chat_memory.add_ai_message("whats up?")
</CodeOutputBlock>
-The parameter name to control these keys may vary per memory type, but it's important to understand that (1) this is controllable, (2) how to control it.
+The parameter name to control these keys may vary per memory type, but it's important to understand (1) that this is controllable, and (2) how to control it.
### Whether memory is a string or a list of messages
One of the most common types of memory involves returning a list of chat messages.
-These can either be returned as a single string, all concatenated together (useful when they will be passed in LLMs)
+These can either be returned as a single string, all concatenated together (useful when they will be passed into LLMs)
or a list of ChatMessages (useful when passed into ChatModels).
By default, they are returned as a single string.
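A small sketch of flipping that default (assuming the 0.0.x-era `return_messages` flag; illustrative only):

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(return_messages=True)
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("whats up?")

memory.load_memory_variables({})
# -> {'history': [HumanMessage(content='hi!'), AIMessage(content='whats up?')]}
```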
@@ -81,13 +81,13 @@ memory.chat_memory.add_ai_message("whats up?")
Often times chains take in or return multiple input/output keys.
In these cases, how can we know which keys we want to save to the chat message history?
This is generally controllable by `input_key` and `output_key` parameters on the memory types.
-These default to None - and if there is only one input/output key it is known to just use that.
-However, if there are multiple input/output keys then you MUST specify the name of which one to use
+These default to `None` - and if there is only one input/output key it is known to just use that.
+However, if there are multiple input/output keys then you MUST specify the name of which one to use.
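A brief sketch of pinning those keys (assuming the 0.0.x-era API; the `question`, `context`, and `answer` key names here are hypothetical and not from the patch):

```python
from langchain.memory import ConversationBufferMemory

# Two input keys, but only one holds the user's turn, so point the memory at it.
memory = ConversationBufferMemory(input_key="question")
memory.save_context(
    {"question": "What colour is the sky?", "context": "(retrieved documents would go here)"},
    {"answer": "Blue, most of the time."},
)
memory.load_memory_variables({})
# roughly -> {'history': 'Human: What colour is the sky?\nAI: Blue, most of the time.'}
```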
### End to end example
Finally, let's take a look at using this in a chain.
-We'll use an LLMChain, and show working with both an LLM and a ChatModel.
+We'll use an `LLMChain`, and show working with both an LLM and a ChatModel.
#### Using an LLM
@@ -153,5 +153,3 @@ conversation.predict(input="Tell me about yourself.")
```
</CodeOutputBlock>
And that's it for the getting started guide! There are plenty of different types of memory; check out our examples to see them all.
@@ -62,7 +62,7 @@ memory.predict_new_summary(messages, previous_summary)
## Initializing with messages/existing summary
-If you have messages outside this class, you can easily initialize the class with ChatMessageHistory. During loading, a summary will be calculated.
+If you have messages outside this class, you can easily initialize the class with `ChatMessageHistory`. During loading, a summary will be calculated.
```python
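# Illustrative sketch only (not the snippet elided from this hunk), assuming the
# 0.0.x-era API; from_messages summarizes the supplied history with the given LLM.
from langchain.llms import OpenAI
from langchain.memory import ChatMessageHistory, ConversationSummaryMemory

history = ChatMessageHistory()
history.add_user_message("hi")
history.add_ai_message("hi there!")

memory = ConversationSummaryMemory.from_messages(
    llm=OpenAI(temperature=0),
    chat_memory=history,
    return_messages=True,
)
memory.buffer  # the summary calculated during loading
```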
@@ -7,9 +7,9 @@ from langchain.chains import ConversationChain
from langchain.prompts import PromptTemplate
```
-### Initialize your VectorStore
+### Initialize your vector store
-Depending on the store you choose, this step may look different. Consult the relevant VectorStore documentation for more details.
+Depending on the store you choose, this step may look different. Consult the relevant vector store documentation for more details.
```python
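# Illustrative sketch only (assuming the 0.0.x-era API); the FAISS lines mirror the
# context visible in the next hunk. Any other vector store can be substituted here.
import faiss

from langchain.docstore import InMemoryDocstore
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embedding_size = 1536  # dimensionality of OpenAI embeddings
index = faiss.IndexFlatL2(embedding_size)
embedding_fn = OpenAIEmbeddings().embed_query
vectorstore = FAISS(embedding_fn, index, InMemoryDocstore({}), {})
```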
@@ -25,9 +25,9 @@ embedding_fn = OpenAIEmbeddings().embed_query
vectorstore = FAISS(embedding_fn, index, InMemoryDocstore({}), {})
```
-### Create your the VectorStoreRetrieverMemory
+### Create your `VectorStoreRetrieverMemory`
-The memory object is instantiated from any VectorStoreRetriever.
+The memory object is instantiated from any vector store retriever.
```python
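# Illustrative sketch only (assuming the 0.0.x-era API), continuing from the
# vectorstore initialized above.
from langchain.memory import VectorStoreRetrieverMemory

# k=1 keeps the demo small; in practice you would usually fetch more snippets.
retriever = vectorstore.as_retriever(search_kwargs=dict(k=1))
memory = VectorStoreRetrieverMemory(retriever=retriever)

# Seed the store with a few past exchanges.
memory.save_context({"input": "My favorite food is pizza"}, {"output": "that's good to know"})
memory.save_context({"input": "My favorite sport is soccer"}, {"output": "noted"})

# Only the most relevant snippet comes back for the new prompt.
print(memory.load_memory_variables({"prompt": "what sport should i watch?"})["history"])
```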