mirror of https://github.com/hwchase17/langchain.git
synced 2025-06-29 01:48:57 +00:00

docs: update migration guide (#24835)

Move to its own section in the sidebar.

This commit is contained in:
parent 957b05b8d5
commit c123cb2b30
@@ -90,7 +90,7 @@ LCEL aims to provide consistency around behavior and customization over legacy s
 `ConversationalRetrievalChain`. Many of these legacy chains hide important details like prompts, and as a wider variety
 of viable models emerge, customization has become more and more important.
 
-If you are currently using one of these legacy chains, please see [this guide for guidance on how to migrate](/docs/how_to/migrate_chains/).
+If you are currently using one of these legacy chains, please see [this guide for guidance on how to migrate](/docs/versions/migrating_chains).
 
 For guides on how to do specific tasks with LCEL, check out [the relevant how-to guides](/docs/how_to/#langchain-expression-language-lcel).
@@ -31,6 +31,8 @@ This highlights functionality that is core to using LangChain.
 
 [**LCEL cheatsheet**](/docs/how_to/lcel_cheatsheet/): For a quick overview of how to use the main LCEL primitives.
 
+[**Migration guide**](/docs/versions/migrating_chains): For migrating legacy chain abstractions to LCEL.
+
 - [How to: chain runnables](/docs/how_to/sequence)
 - [How to: stream runnables](/docs/how_to/streaming)
 - [How to: invoke runnables in parallel](/docs/how_to/parallel/)
@@ -43,7 +45,6 @@ This highlights functionality that is core to using LangChain.
 - [How to: create a dynamic (self-constructing) chain](/docs/how_to/dynamic_chain/)
 - [How to: inspect runnables](/docs/how_to/inspect)
 - [How to: add fallbacks to a runnable](/docs/how_to/fallbacks)
-- [How to: migrate chains to LCEL](/docs/how_to/migrate_chains)
 - [How to: pass runtime secrets to a runnable](/docs/how_to/runnable_runtime_secrets)
 
 ## Components

262 docs/docs/versions/migrating_chains/conversation_chain.ipynb Normal file
@@ -0,0 +1,262 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "030d95bc-2f9d-492b-8245-b791b866936b",
"metadata": {},
"source": [
"---\n",
"title: Migrating from ConversationChain\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "d20aeaad-b3ca-4a7d-b02d-3267503965af",
"metadata": {},
"source": [
"[`ConversationChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html) incorporates a memory of previous messages to sustain a stateful conversation.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
"\n",
"- Innate support for threads/separate sessions. To make this work with `ConversationChain`, you'd need to instantiate a separate memory class outside the chain.\n",
"- More explicit parameters. `ConversationChain` contains a hidden default prompt, which can cause confusion.\n",
"- Streaming support. `ConversationChain` only supports streaming via callbacks.\n",
"\n",
"`RunnableWithMessageHistory` implements sessions via configuration parameters. It should be instantiated with a callable that returns a [chat message history](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html). By default, it expects this function to take a single argument `session_id`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b99b47ec",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "717c8673",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "00df631d-5121-4918-94aa-b88acce9b769",
"metadata": {},
"source": [
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Legacy\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "4f2cc6dc-d70a-4c13-9258-452f14290da6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'how are you?',\n",
" 'history': '',\n",
" 'response': \"Arr matey, I be doin' well on the high seas, plunderin' and pillagin' as usual. How be ye?\"}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import ConversationChain\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"template = \"\"\"\n",
"You are a pirate. Answer the following questions as best you can.\n",
"Chat history: {history}\n",
"Question: {input}\n",
"\"\"\"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"memory = ConversationBufferMemory()\n",
"\n",
"chain = ConversationChain(\n",
"    llm=ChatOpenAI(),\n",
"    memory=memory,\n",
"    prompt=prompt,\n",
")\n",
"\n",
"chain({\"input\": \"how are you?\"})"
]
},
{
"cell_type": "markdown",
"id": "f8e36b0e-c7dc-4130-a51b-189d4b756c7f",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "666c92a0-b555-4418-a465-6490c1b92570",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Arr, me matey! I be doin' well, sailin' the high seas and searchin' for treasure. How be ye?\""
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.chat_history import InMemoryChatMessageHistory\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", \"You are a pirate. Answer the following questions as best you can.\"),\n",
"        (\"placeholder\", \"{chat_history}\"),\n",
"        (\"human\", \"{input}\"),\n",
"    ]\n",
")\n",
"\n",
"history = InMemoryChatMessageHistory()\n",
"\n",
"\n",
"def get_history():\n",
"    return history\n",
"\n",
"\n",
"chain = prompt | ChatOpenAI() | StrOutputParser()\n",
"\n",
"wrapped_chain = RunnableWithMessageHistory(\n",
"    chain,\n",
"    get_history,\n",
"    history_messages_key=\"chat_history\",\n",
")\n",
"\n",
"wrapped_chain.invoke({\"input\": \"how are you?\"})"
]
},
{
"cell_type": "markdown",
"id": "6b386ce6-895e-442c-88f3-7bec0ab9f401",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"The above example uses the same `history` for all sessions. The example below shows how to use a different chat history for each session."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "96152263-98d7-4e06-8c73-d0c0abf3e8e9",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Ahoy there, me hearty! What can this old pirate do for ye today?'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.chat_history import BaseChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"store = {}\n",
"\n",
"\n",
"def get_session_history(session_id: str) -> BaseChatMessageHistory:\n",
"    if session_id not in store:\n",
"        store[session_id] = InMemoryChatMessageHistory()\n",
"    return store[session_id]\n",
"\n",
"\n",
"chain = prompt | ChatOpenAI() | StrOutputParser()\n",
"\n",
"wrapped_chain = RunnableWithMessageHistory(\n",
"    chain,\n",
"    get_session_history,\n",
"    history_messages_key=\"chat_history\",\n",
")\n",
"\n",
"wrapped_chain.invoke(\n",
"    {\"input\": \"Hello!\"},\n",
"    config={\"configurable\": {\"session_id\": \"abc123\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "b2717810",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"See [this tutorial](/docs/tutorials/chatbot) for a more end-to-end guide on building with [`RunnableWithMessageHistory`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html).\n",
"\n",
"Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
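The `get_session_history` factory above is the key moving part: `RunnableWithMessageHistory` calls it with the `session_id` taken from the invocation config and reads and writes the returned history. A minimal standalone sketch of that store pattern, runnable without an API key (the session ids are illustrative):

```python
from langchain_core.chat_history import InMemoryChatMessageHistory

store = {}


def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    # Lazily create one independent history per session id.
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]


# Distinct session ids map to distinct, isolated histories.
get_session_history("alice").add_user_message("hi, I'm Alice")
get_session_history("bob").add_user_message("hi, I'm Bob")
assert get_session_history("alice").messages != get_session_history("bob").messages
```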

289 docs/docs/versions/migrating_chains/conversation_retrieval_chain.ipynb Normal file
@@ -0,0 +1,289 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9e279999-6bf0-4a48-9e06-539b916dc705",
"metadata": {},
"source": [
"---\n",
"title: Migrating from ConversationalRetrievalChain\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "292a3c83-44a9-4426-bbec-f1a778d00d93",
"metadata": {},
"source": [
"The [`ConversationalRetrievalChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html) was an all-in-one way that combined retrieval-augmented generation with chat history, allowing you to \"chat with\" your documents.\n",
"\n",
"Advantages of switching to the LCEL implementation are similar to the `RetrievalQA` section above:\n",
"\n",
"- Clearer internals. The `ConversationalRetrievalChain` chain hides an entire question rephrasing step which dereferences the initial query against the chat history.\n",
"  - This means the class contains two sets of configurable prompts, LLMs, etc.\n",
"- More easily return source documents.\n",
"- Support for runnable methods like streaming and async operations.\n",
"\n",
"Here are side-by-side implementations with custom prompts. We'll reuse the loaded documents and vector store from the previous section:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b99b47ec",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community langchain langchain-openai faiss-cpu"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "717c8673",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "44119498-5a98-4077-9e2f-c75500e7eace",
"metadata": {},
"outputs": [],
"source": [
"# Load docs\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai.chat_models import ChatOpenAI\n",
"from langchain_openai.embeddings import OpenAIEmbeddings\n",
"\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"data = loader.load()\n",
"\n",
"# Split\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Store splits\n",
"vectorstore = FAISS.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())\n",
"\n",
"# LLM\n",
"llm = ChatOpenAI()"
]
},
{
"cell_type": "markdown",
"id": "8bc06416",
"metadata": {},
"source": [
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8b471e7d-3ccb-4ab3-bc09-304c4b14a908",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'What are autonomous agents?',\n",
" 'chat_history': '',\n",
" 'answer': 'Autonomous agents are entities empowered with capabilities like planning, task decomposition, and memory to perform complex tasks independently. These agents can leverage tools like browsing the internet, reading documentation, executing code, and calling APIs to achieve their objectives. They are designed to handle tasks like scientific discovery and experimentation autonomously.'}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"condense_question_template = \"\"\"\n",
"Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n",
"\n",
"Chat History:\n",
"{chat_history}\n",
"Follow Up Input: {question}\n",
"Standalone question:\"\"\"\n",
"\n",
"condense_question_prompt = ChatPromptTemplate.from_template(condense_question_template)\n",
"\n",
"qa_template = \"\"\"\n",
"You are an assistant for question-answering tasks.\n",
"Use the following pieces of retrieved context to answer\n",
"the question. If you don't know the answer, say that you\n",
"don't know. Use three sentences maximum and keep the\n",
"answer concise.\n",
"\n",
"Chat History:\n",
"{chat_history}\n",
"\n",
"Other context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"\n",
"qa_prompt = ChatPromptTemplate.from_template(qa_template)\n",
"\n",
"convo_qa_chain = ConversationalRetrievalChain.from_llm(\n",
"    llm,\n",
"    vectorstore.as_retriever(),\n",
"    condense_question_prompt=condense_question_prompt,\n",
"    combine_docs_chain_kwargs={\n",
"        \"prompt\": qa_prompt,\n",
"    },\n",
")\n",
"\n",
"convo_qa_chain(\n",
"    {\n",
"        \"question\": \"What are autonomous agents?\",\n",
"        \"chat_history\": \"\",\n",
"    }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "43a8a23c",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "35657a13-ad67-4af1-b1f9-f58606ae43b4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'What are autonomous agents?',\n",
" 'chat_history': [],\n",
" 'context': [Document(metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.\\nFor example, when requested to \"develop a novel anticancer drug\", the model came up with the following reasoning steps:'),\n",
"  Document(metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Weng, Lilian. (Jun 2023). “LLM-powered Autonomous Agents”. Lil’Log. https://lilianweng.github.io/posts/2023-06-23-agent/.'),\n",
"  Document(metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#'),\n",
"  Document(metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content=\"LLM Powered Autonomous Agents | Lil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nPosts\\n\\n\\n\\n\\nArchive\\n\\n\\n\\n\\nSearch\\n\\n\\n\\n\\nTags\\n\\n\\n\\n\\nFAQ\\n\\n\\n\\n\\nemojisearch.app\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n LLM Powered Autonomous Agents\\n \\nDate: June 23, 2023 | Estimated Reading Time: 31 min | Author: Lilian Weng\\n\\n\\n \\n\\n\\nTable of Contents\\n\\n\\n\\nAgent System Overview\\n\\nComponent One: Planning\\n\\nTask Decomposition\\n\\nSelf-Reflection\\n\\n\\nComponent Two: Memory\\n\\nTypes of Memory\\n\\nMaximum Inner Product Search (MIPS)\")],\n",
" 'answer': 'Autonomous agents are entities that can act independently to achieve specific goals or tasks without direct human intervention. These agents have the ability to perceive their environment, make decisions, and take actions based on their programming or learning. They can perform tasks such as planning, execution, and problem-solving autonomously.'}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import create_history_aware_retriever, create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"\n",
"condense_question_system_template = (\n",
"    \"Given a chat history and the latest user question \"\n",
"    \"which might reference context in the chat history, \"\n",
"    \"formulate a standalone question which can be understood \"\n",
"    \"without the chat history. Do NOT answer the question, \"\n",
"    \"just reformulate it if needed and otherwise return it as is.\"\n",
")\n",
"\n",
"condense_question_prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", condense_question_system_template),\n",
"        (\"placeholder\", \"{chat_history}\"),\n",
"        (\"human\", \"{input}\"),\n",
"    ]\n",
")\n",
"history_aware_retriever = create_history_aware_retriever(\n",
"    llm, vectorstore.as_retriever(), condense_question_prompt\n",
")\n",
"\n",
"system_prompt = (\n",
"    \"You are an assistant for question-answering tasks. \"\n",
"    \"Use the following pieces of retrieved context to answer \"\n",
"    \"the question. If you don't know the answer, say that you \"\n",
"    \"don't know. Use three sentences maximum and keep the \"\n",
"    \"answer concise.\"\n",
"    \"\\n\\n\"\n",
"    \"{context}\"\n",
")\n",
"\n",
"qa_prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", system_prompt),\n",
"        (\"placeholder\", \"{chat_history}\"),\n",
"        (\"human\", \"{input}\"),\n",
"    ]\n",
")\n",
"qa_chain = create_stuff_documents_chain(llm, qa_prompt)\n",
"\n",
"convo_qa_chain = create_retrieval_chain(history_aware_retriever, qa_chain)\n",
"\n",
"convo_qa_chain.invoke(\n",
"    {\n",
"        \"input\": \"What are autonomous agents?\",\n",
"        \"chat_history\": [],\n",
"    }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "b2717810",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"</ColumnContainer>\n",
"\n",
"## Next steps\n",
"\n",
"You've now seen how to migrate existing usage of some legacy chains to LCEL.\n",
"\n",
"Next, check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
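Since `create_retrieval_chain` returns a dict with `input`, `chat_history`, `context`, and `answer` keys, a common follow-up is to project out just the answer. A small sketch, assuming the `convo_qa_chain` defined in the LCEL cell above (LCEL coerces plain callables like `itemgetter` into runnables):

```python
from operator import itemgetter

# Pipe the dict output through itemgetter to keep only the answer string.
answer_only_chain = convo_qa_chain | itemgetter("answer")

answer_only_chain.invoke(
    {"input": "What are autonomous agents?", "chat_history": []}
)
```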

34 docs/docs/versions/migrating_chains/index.mdx Normal file
@@ -0,0 +1,34 @@
---
sidebar_position: 1
---

# How to migrate chains to LCEL

:::info Prerequisites

This guide assumes familiarity with the following concepts:
- [LangChain Expression Language](/docs/concepts#langchain-expression-language-lcel)

:::

LCEL is designed to streamline the process of building useful apps with LLMs and combining related components. It does this by providing:

1. **A unified interface**: Every LCEL object implements the `Runnable` interface, which defines a common set of invocation methods (`invoke`, `batch`, `stream`, `ainvoke`, ...). This makes it possible to also automatically and consistently support useful operations like streaming of intermediate steps and batching, since every chain composed of LCEL objects is itself an LCEL object.
2. **Composition primitives**: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more.

LangChain maintains a number of legacy abstractions. Many of these can be reimplemented via short combinations of LCEL primitives. Doing so confers some general advantages:

- The resulting chains typically implement the full `Runnable` interface, including streaming and asynchronous support where appropriate;
- The chains may be more easily extended or modified;
- The parameters of the chain are typically surfaced for easier customization (e.g., prompts) over previous versions, which tended to be subclasses and had opaque parameters and internals.

The LCEL implementations can be slightly more verbose, but there are significant benefits in transparency and customizability.

The below pages assist with migration from various specific chains to LCEL:

- [LLMChain](/docs/versions/migrating_chains/llm_chain)
- [ConversationChain](/docs/versions/migrating_chains/conversation_chain)
- [RetrievalQA](/docs/versions/migrating_chains/retrieval_qa)
- [ConversationalRetrievalChain](/docs/versions/migrating_chains/conversation_retrieval_chain)

Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information.
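The "unified interface" point in the index above can be demonstrated without any LLM at all; a toy sketch using `RunnableLambda` (the function bodies are arbitrary placeholders):

```python
from langchain_core.runnables import RunnableLambda

# Composing two runnables with `|` yields a RunnableSequence, which is
# itself a Runnable and so exposes the same invocation methods.
add_one = RunnableLambda(lambda x: x + 1)
double = RunnableLambda(lambda x: x * 2)
chain = add_one | double

chain.invoke(1)  # -> 4
chain.batch([1, 2, 3])  # -> [4, 6, 8]
list(chain.stream(1))  # -> [4]; async variants (ainvoke, ...) also exist
```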

213 docs/docs/versions/migrating_chains/llm_chain.ipynb Normal file
@@ -0,0 +1,213 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b57124cc-60a0-4c18-b7ce-3e483d1024a2",
"metadata": {},
"source": [
"---\n",
"title: Migrating from LLMChain\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f",
"metadata": {},
"source": [
"[`LLMChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm.LLMChain.html) combined a prompt template, LLM, and output parser into a class.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
"\n",
"- Clarity around contents and parameters. The legacy `LLMChain` contains a default output parser and other options.\n",
"- Easier streaming. `LLMChain` only supports streaming via callbacks.\n",
"- Easier access to raw message outputs if desired. `LLMChain` only exposes these via a parameter or via callback."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b99b47ec",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "717c8673",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "e3621b62-a037-42b8-8faa-59575608bb8b",
"metadata": {},
"source": [
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f91c9809-8ee7-4e38-881d-0ace4f6ea883",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'adjective': 'funny',\n",
" 'text': \"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\"}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [(\"user\", \"Tell me a {adjective} joke\")],\n",
")\n",
"\n",
"chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)\n",
"\n",
"chain({\"adjective\": \"funny\"})"
]
},
{
"cell_type": "markdown",
"id": "cdc3b527-c09e-4c77-9711-c3cc4506cd95",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f0903025-9aa8-4a53-8336-074341c00e59",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Why was the math book sad?\\n\\nBecause it had too many problems.'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [(\"user\", \"Tell me a {adjective} joke\")],\n",
")\n",
"\n",
"chain = prompt | ChatOpenAI() | StrOutputParser()\n",
"\n",
"chain.invoke({\"adjective\": \"funny\"})"
]
},
{
"cell_type": "markdown",
"id": "3c0b0513-77b8-4371-a20e-3e487cec7e7f",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"Note that `LLMChain` by default returns a `dict` containing both the input and the output. If this behavior is desired, we can replicate it using another LCEL primitive, [`RunnablePassthrough`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html):"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "20f11321-834a-485a-a8ad-85734d572902",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'adjective': 'funny',\n",
" 'text': 'Why did the scarecrow win an award? Because he was outstanding in his field!'}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"outer_chain = RunnablePassthrough().assign(text=chain)\n",
"\n",
"outer_chain.invoke({\"adjective\": \"funny\"})"
]
},
{
"cell_type": "markdown",
"id": "b2717810",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"See [this tutorial](/docs/tutorials/llm_chain) for more detail on building with prompt templates, LLMs, and output parsers.\n",
"\n",
"Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
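The "easier streaming" advantage is visible directly on the LCEL composition; a sketch assuming the `chain = prompt | ChatOpenAI() | StrOutputParser()` from the LCEL cell above:

```python
# Because the chain ends in StrOutputParser, each streamed chunk is a
# plain string fragment of the joke.
for chunk in chain.stream({"adjective": "funny"}):
    print(chunk, end="", flush=True)
```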

261 docs/docs/versions/migrating_chains/retrieval_qa.ipynb Normal file
@@ -0,0 +1,261 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "eddcd5c1-cbe9-4a7d-8903-7d1ab29f9094",
"metadata": {},
"source": [
"---\n",
"title: Migrating from RetrievalQA\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "b2d37868-dd01-4814-a76a-256f36cf66f7",
"metadata": {},
"source": [
"The [`RetrievalQA`](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html) chain performed natural-language question answering over a data source using retrieval-augmented generation.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
"\n",
"- Easier customizability. Details such as the prompt and how documents are formatted are only configurable via specific parameters in the `RetrievalQA` chain.\n",
"- More easily return source documents.\n",
"- Support for runnable methods like streaming and async operations.\n",
"\n",
"Now let's look at them side-by-side. We'll use the same ingestion code to load a [blog post by Lilian Weng](https://lilianweng.github.io/posts/2023-06-23-agent/) on autonomous agents into a local vector store:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b99b47ec",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community langchain langchain-openai faiss-cpu"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "717c8673",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1efbe16e",
"metadata": {},
"outputs": [],
"source": [
"# Load docs\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai.chat_models import ChatOpenAI\n",
"from langchain_openai.embeddings import OpenAIEmbeddings\n",
"\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"data = loader.load()\n",
"\n",
"# Split\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Store splits\n",
"vectorstore = FAISS.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())\n",
"\n",
"# LLM\n",
"llm = ChatOpenAI()"
]
},
{
"cell_type": "markdown",
"id": "c7e16438",
"metadata": {},
"source": [
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "2d0ddc98-75e5-4c1c-a1b5-7ef612516dc9",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'query': 'What are autonomous agents?',\n",
" 'result': 'Autonomous agents are LLM-empowered agents capable of handling autonomous design, planning, and performance of complex scientific experiments. These agents can browse the Internet, read documentation, execute code, call robotics experimentation APIs, and leverage other LLMs. They can generate reasoning steps, such as developing a novel anticancer drug, based on requested tasks.'}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"from langchain.chains import RetrievalQA\n",
"\n",
"# See full prompt at https://smith.langchain.com/hub/rlm/rag-prompt\n",
"prompt = hub.pull(\"rlm/rag-prompt\")\n",
"\n",
"qa_chain = RetrievalQA.from_llm(\n",
"    llm, retriever=vectorstore.as_retriever(), prompt=prompt\n",
")\n",
"\n",
"qa_chain(\"What are autonomous agents?\")"
]
},
{
"cell_type": "markdown",
"id": "081948e5",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "91ae87cc-7b2f-4d0e-a6ae-a7a4c8c5ba41",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Autonomous agents are agents empowered by large language models (LLMs) that can handle autonomous design, planning, and performance of complex tasks such as scientific experiments. These agents can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs, and leverage other LLMs for their tasks. The model can come up with reasoning steps when given a specific task, such as developing a novel anticancer drug.'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"# See full prompt at https://smith.langchain.com/hub/rlm/rag-prompt\n",
"prompt = hub.pull(\"rlm/rag-prompt\")\n",
"\n",
"\n",
"def format_docs(docs):\n",
"    return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"qa_chain = (\n",
"    {\n",
"        \"context\": vectorstore.as_retriever() | format_docs,\n",
"        \"question\": RunnablePassthrough(),\n",
"    }\n",
"    | prompt\n",
"    | llm\n",
"    | StrOutputParser()\n",
")\n",
"\n",
"qa_chain.invoke(\"What are autonomous agents?\")"
]
},
{
"cell_type": "markdown",
"id": "d6f44fe8",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"The LCEL implementation exposes the internals of what's happening around retrieving, formatting documents, and passing them through a prompt to the LLM, but it is more verbose. You can customize and wrap this composition logic in a helper function, or use the higher-level [`create_retrieval_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) and [`create_stuff_documents_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) helper methods:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c448a74c-1f0a-445b-b629-51bc151ab620",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'What are autonomous agents?',\n",
" 'context': [Document(metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.\\nFor example, when requested to \"develop a novel anticancer drug\", the model came up with the following reasoning steps:'),\n",
"  Document(metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Weng, Lilian. (Jun 2023). “LLM-powered Autonomous Agents”. Lil’Log. https://lilianweng.github.io/posts/2023-06-23-agent/.'),\n",
"  Document(metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#'),\n",
"  Document(metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Or\\n@article{weng2023agent,\\n title = \"LLM-powered Autonomous Agents\",\\n author = \"Weng, Lilian\",\\n journal = \"lilianweng.github.io\",\\n year = \"2023\",\\n month = \"Jun\",\\n url = \"https://lilianweng.github.io/posts/2023-06-23-agent/\"\\n}\\nReferences#\\n[1] Wei et al. “Chain of thought prompting elicits reasoning in large language models.” NeurIPS 2022\\n[2] Yao et al. “Tree of Thoughts: Dliberate Problem Solving with Large Language Models.” arXiv preprint arXiv:2305.10601 (2023).')],\n",
" 'answer': 'Autonomous agents are entities capable of operating independently to perform tasks or make decisions without direct human intervention. In the context provided, autonomous agents empowered by Large Language Models (LLMs) are used for scientific discovery, including tasks like autonomous design, planning, and executing complex scientific experiments.'}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"\n",
"# See full prompt at https://smith.langchain.com/hub/langchain-ai/retrieval-qa-chat\n",
"retrieval_qa_chat_prompt = hub.pull(\"langchain-ai/retrieval-qa-chat\")\n",
"\n",
"combine_docs_chain = create_stuff_documents_chain(llm, retrieval_qa_chat_prompt)\n",
"rag_chain = create_retrieval_chain(vectorstore.as_retriever(), combine_docs_chain)\n",
"\n",
"rag_chain.invoke({\"input\": \"What are autonomous agents?\"})"
]
},
{
"cell_type": "markdown",
"id": "b2717810",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
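Batch and async support likewise come for free with the Runnable interface; a sketch assuming the manual LCEL `qa_chain` defined above, which accepts a plain question string:

```python
# Answer several questions in one call; inputs are processed concurrently
# up to a configurable limit.
answers = qa_chain.batch(
    [
        "What are autonomous agents?",
        "What is task decomposition?",
    ]
)

# Async variant, e.g. inside an async web handler:
# answer = await qa_chain.ainvoke("What are autonomous agents?")
```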
@@ -568,7 +568,7 @@ Removal: 0.3.0
 
 Alternative: [RunnableSequence](/docs/how_to/sequence/), e.g., `prompt | llm`
 
-This [migration guide](/docs/how_to/migrate_chains/#llmchain) has a side-by-side comparison.
+This [migration guide](/docs/versions/migrating_chains/llm_chain) has a side-by-side comparison.
 
 
 #### LLMSingleActionAgent
@@ -756,7 +756,7 @@ Removal: 0.3.0
 
 Alternative: [create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain-chains-retrieval-create-retrieval-chain)
 
-This [migration guide](/docs/how_to/migrate_chains/#retrievalqa) has a side-by-side comparison.
+This [migration guide](/docs/versions/migrating_chains/retrieval_qa) has a side-by-side comparison.
 
 
 #### load_agent_from_config
@@ -823,7 +823,7 @@ Removal: 0.3.0
 
 Alternative: [create_history_aware_retriever](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) together with [create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain-chains-retrieval-create-retrieval-chain) (see example in docstring)
 
-This [migration guide](/docs/how_to/migrate_chains/#conversationalretrievalchain) has a side-by-side comparison.
+This [migration guide](/docs/versions/migrating_chains/conversation_retrieval_chain) has a side-by-side comparison.
 
 
 #### create_extraction_chain_pydantic
@@ -11,7 +11,7 @@ LangChain v0.2 was released in May 2024. This release includes a number of [brea
 :::note Reference
 
 - [Breaking Changes & Deprecations](/docs/versions/v0_2/deprecations)
-- [Migrating legacy chains to LCEL](/docs/how_to/migrate_chains/)
+- [Migrating legacy chains to LCEL](/docs/versions/migrating_chains)
 - [Migrating to Astream Events v2](/docs/versions/v0_2/migrating_astream_events)
 
 :::
@@ -92,6 +92,18 @@ module.exports = {
         className: 'hidden',
       }],
     },
+    {
+      type: "category",
+      label: "Migrating to LCEL",
+      link: {type: 'doc', id: 'versions/migrating_chains/index'},
+      collapsible: false,
+      collapsed: false,
+      items: [{
+        type: 'autogenerated',
+        dirName: 'versions/migrating_chains',
+        className: 'hidden',
+      }],
+    },
   ],
 },
 "security"
@@ -65,6 +65,10 @@
     {
       "source": "/docs/integrations/toolkits/document_comparison_toolkit(/?)",
       "destination": "/docs/tutorials/rag/"
+    },
+    {
+      "source": "/docs/how_to/migrate_chains(/?)",
+      "destination": "/docs/versions/migrating_chains"
     }
   ]
 }