documentation (#191)

commit b0feb3608b
parent b913df3774
@@ -8,6 +8,4 @@ The examples here are all end-to-end agents for specific applications.
    :glob:
    :caption: Agents
 
-   agents/mrkl.ipynb
-   agents/react.ipynb
-   agents/self_ask_with_search.ipynb
+   agents/*
@@ -8,8 +8,4 @@ The examples here are all end-to-end chains for specific applications.
    :glob:
    :caption: Chains
 
-   chains/llm_chain.ipynb
-   chains/llm_math.ipynb
-   chains/map_reduce.ipynb
-   chains/sqlite.ipynb
-   chains/vector_db_qa.ipynb
+   chains/*
@@ -1,5 +1,25 @@
 {
  "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "d31df93e",
+   "metadata": {},
+   "source": [
+    "# Memory\n",
+    "So far, all the chains and agents we've gone through have been stateless. But often, you may want a chain or agent to have some concept of \"memory\" so that it can remember information about its previous interactions. The clearest and simplest example of this is when designing a chatbot - you want it to remember previous messages so it can use that context to have a better conversation. This would be a type of \"short-term memory\". On the more complex side, you could imagine a chain/agent remembering key pieces of information over time - this would be a form of \"long-term memory\".\n",
+    "\n",
+    "LangChain provides several specially created chains just for this purpose. This notebook walks through using one of those chains (the `ConversationChain`) with two different types of memory."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "d051c1da",
+   "metadata": {},
+   "source": [
+    "### ConversationChain with default memory\n",
+    "By default, the `ConversationChain` has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed. Let's take a look at using this chain (setting `verbose=True` so we can see the prompt)."
+   ]
+  },
  {
   "cell_type": "code",
   "execution_count": 1,
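The default-memory cells added above boil down to the following sketch. The imports, `OpenAI(temperature=0)`, the `ConversationChain(llm=llm, verbose=True)` constructor, and the "Tell me about yourself." input are taken from the notebook cells in this diff; the first `predict()` input and the API key setup are illustrative assumptions, and the actual model outputs will vary.

# Sketch of the default-memory example; requires an OpenAI API key in the
# environment (OPENAI_API_KEY). Outputs are not guaranteed to match the notebook.
from langchain import OpenAI, ConversationChain

llm = OpenAI(temperature=0)

# With no explicit memory argument, ConversationChain keeps all previous
# inputs/outputs and prepends that history to every new prompt.
conversation = ConversationChain(llm=llm, verbose=True)

conversation.predict(input="Hi there!")                 # illustrative first turn
conversation.predict(input="Tell me about yourself.")   # uses the earlier turn as context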
@@ -37,7 +57,6 @@
    ],
    "source": [
     "from langchain import OpenAI, ConversationChain\n",
-    "from langchain.chains.conversation.memory import ConversationSummaryMemory\n",
     "\n",
     "llm = OpenAI(temperature=0)\n",
     "conversation = ConversationChain(llm=llm, verbose=True)\n",
@@ -129,9 +148,30 @@
     "conversation.predict(input=\"Tell me about yourself.\")"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "4fad9448",
+   "metadata": {},
+   "source": [
+    "### ConversationChain with ConversationSummaryMemory\n",
+    "Now let's take a look at using a slightly more complex type of memory - `ConversationSummaryMemory`. This type of memory creates a summary of the conversation as it goes, which can be useful for condensing information from the conversation over time.\n",
+    "\n",
+    "Let's walk through an example, again setting `verbose=True` so we can see the prompt."
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": 4,
+   "id": "f60a2fe8",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from langchain.chains.conversation.memory import ConversationSummaryMemory"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
    "id": "b7274f2c",
    "metadata": {},
    "outputs": [
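The ConversationSummaryMemory cells sketch out to roughly the following. The import path matches the new cell above; wiring the memory in through a `memory=` argument and constructing `ConversationSummaryMemory(llm=llm)` is an assumption inferred from the summarized "Current conversation" prompts in the hunks below, not something shown verbatim in this diff.

from langchain import OpenAI, ConversationChain
from langchain.chains.conversation.memory import ConversationSummaryMemory

llm = OpenAI(temperature=0)

# ConversationSummaryMemory uses the LLM itself to keep a rolling summary of
# the dialogue instead of replaying every message verbatim.
conversation_with_summary = ConversationChain(
    llm=llm,
    memory=ConversationSummaryMemory(llm=llm),  # assumed constructor signature
    verbose=True,
)

conversation_with_summary.predict(input="Hi, what's up?")            # illustrative opener
conversation_with_summary.predict(input="Tell me more about it!")    # input shown in the verbose prompt below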
@@ -159,7 +199,7 @@
        "\"\\n\\nI'm doing well, thank you for asking. I'm currently working on a project that I'm really excited about.\""
       ]
      },
-     "execution_count": 4,
+     "execution_count": 5,
      "metadata": {},
      "output_type": "execute_result"
     }
@@ -171,7 +211,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 5,
+   "execution_count": 6,
    "id": "a6b6b88f",
    "metadata": {},
    "outputs": [
@@ -187,7 +227,7 @@
       "\n",
       "Current conversation:\n",
       "\n",
-      "The human greets the AI and asks how it is doing. The AI responds that it is doing well and is currently working on a project that it is excited about.\n",
+      "The human and artificial intelligence are talking. The human asked the AI what it is doing, and the AI said that it is working on a project that it is excited about.\n",
       "Human: Tell me more about it!\n",
       "AI:\u001b[0m\n",
       "\n",
@@ -197,10 +237,10 @@
     {
      "data": {
       "text/plain": [
-       "\"\\n\\nI'm working on a project that involves helping people to better understand and use artificial intelligence. I'm really excited about it because I think it has the potential to make a big difference in people's lives.\""
+       "\"\\n\\nI'm working on a project that I'm really excited about. It's a lot of work, but I think it's going to be really great when it's finished. I can't wait to show it to you!\""
       ]
      },
-     "execution_count": 5,
+     "execution_count": 6,
      "metadata": {},
      "output_type": "execute_result"
     }
@@ -211,7 +251,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 6,
+   "execution_count": 7,
    "id": "dad869fe",
    "metadata": {},
    "outputs": [
@@ -228,7 +268,7 @@
       "Current conversation:\n",
       "\n",
       "\n",
-      "The human greets the AI and asks how it is doing. The AI responds that it is doing well and is currently working on a project that it is excited about - a project that involves helping people to better understand and use artificial intelligence.\n",
+      "The human and artificial intelligence are talking. The human asked the AI what it is doing, and the AI said that it is working on a project that it is excited about. The AI said that the project is a lot of work, but it is going to be great when it is finished.\n",
       "Human: Very cool -- what is the scope of the project?\n",
       "AI:\u001b[0m\n",
       "\n",
@@ -238,10 +278,10 @@
     {
      "data": {
       "text/plain": [
-       "'\\n\\nThe project is still in the early stages, but the goal is to create a resource that will help people to understand artificial intelligence and how to use it effectively.'"
+       "'\\n\\nThe project is quite large in scope. It involves a lot of data analysis and work with artificial intelligence algorithms.'"
       ]
      },
-     "execution_count": 6,
+     "execution_count": 7,
      "metadata": {},
      "output_type": "execute_result"
     }
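To see the condensed history directly rather than through the verbose prompt, the running summary can be read off the memory object. This continues the sketch above; exposing the summary as a `buffer` attribute on `ConversationSummaryMemory` is an assumption about this library version rather than something shown in the diff.

# Prints the rolling summary that appears after "Current conversation:" in the
# verbose prompts above (attribute name assumed).
print(conversation_with_summary.memory.buffer)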
@@ -91,6 +91,7 @@ The documentation is structured into the following sections:
    getting_started/llm_chain.md
    getting_started/sequential_chains.md
    getting_started/agents.ipynb
+   getting_started/memory.ipynb
 
 Goes over a simple walkthrough and tutorial for getting started: setting up a simple chain that generates a company name based on what the company makes.
 Covers installation, environment setup, calling LLMs, and using prompts.