docs: memory types menu (#9949)

The [Memory
Types](https://python.langchain.com/docs/modules/memory/types/) menu is
clogged with unnecessary wording.
I've made it more concise by simplifying the titles of the example
notebooks.
As a result, the menu is shorter and easier to comprehend.
Bagatur, 2023-08-29 15:05:23 -07:00 (committed by GitHub)
9 changed files with 40 additions and 16 deletions

View File

@@ -5,11 +5,17 @@
"id": "44c9933a",
"metadata": {},
"source": [
"# Conversation Knowledge Graph Memory\n",
"# Conversation Knowledge Graph\n",
"\n",
"This type of memory uses a knowledge graph to recreate memory.\n",
"\n",
"Let's first walk through how to use the utilities"
"This type of memory uses a knowledge graph to recreate memory.\n"
]
},
{
"cell_type": "markdown",
"id": "0c798006-ca04-4de3-83eb-cf167fb2bd01",
"metadata": {},
"source": [
"## Using memory with LLM"
]
},
{
@@ -162,6 +168,7 @@
"metadata": {},
"source": [
"## Using in a chain\n",
"\n",
"Let's now use this in a chain!"
]
},
@@ -348,7 +355,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -5,13 +5,22 @@
"id": "ff4be5f3",
"metadata": {},
"source": [
"# ConversationSummaryBufferMemory\n",
"# Conversation Summary Buffer\n",
"\n",
"`ConversationSummaryBufferMemory` combines the last two ideas. It keeps a buffer of recent interactions in memory, but rather than just completely flushing old interactions it compiles them into a summary and uses both. Unlike the previous implementation though, it uses token length rather than number of interactions to determine when to flush interactions.\n",
"`ConversationSummaryBufferMemory` combines the two ideas. It keeps a buffer of recent interactions in memory, but rather than just completely flushing old interactions it compiles them into a summary and uses both. \n",
"It uses token length rather than number of interactions to determine when to flush interactions.\n",
"\n",
"Let's first walk through how to use the utilities"
]
},
{
"cell_type": "markdown",
"id": "0309636e-a530-4d2a-ba07-0916ea18bb20",
"metadata": {},
"source": [
"## Using memory with LLM"
]
},
{
"cell_type": "code",
"execution_count": 1,
@@ -320,7 +329,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -5,13 +5,21 @@
"id": "ff4be5f3",
"metadata": {},
"source": [
"# ConversationTokenBufferMemory\n",
"# Conversation Token Buffer\n",
"\n",
"`ConversationTokenBufferMemory` keeps a buffer of recent interactions in memory, and uses token length rather than number of interactions to determine when to flush interactions.\n",
"\n",
"Let's first walk through how to use the utilities"
]
},
{
"cell_type": "markdown",
"id": "0e528ef0-7b04-4a4a-8ff2-493c02027e83",
"metadata": {},
"source": [
"## Using memory with LLM"
]
},
{
"cell_type": "code",
"execution_count": 1,
@@ -286,7 +294,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,