docs: migration guide nits (#25600)

Erick Friis 2024-08-21 21:24:34 -07:00 committed by GitHub
parent 3981d736df
commit e958f76160
14 changed files with 175 additions and 177 deletions

View File

@@ -1,20 +1,12 @@
 {
 "cells": [
-{
-"cell_type": "markdown",
-"id": "b57124cc-60a0-4c18-b7ce-3e483d1024a2",
-"metadata": {},
-"source": [
-"---\n",
-"title: Migrating from ConstitutionalChain\n",
-"---"
-]
-},
 {
 "cell_type": "markdown",
 "id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f",
 "metadata": {},
 "source": [
+"# Migrating from ConstitutionalChain\n",
+"\n",
 "[ConstitutionalChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html) allowed for a LLM to critique and revise generations based on [principles](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.models.ConstitutionalPrinciple.html), structured as combinations of critique and revision requests. For example, a principle might include a request to identify harmful content, and a request to rewrite the content.\n",
 "\n",
 "`Constitutional AI principles` are based on the [Constitutional AI: Harmlessness from AI Feedback](https://arxiv.org/pdf/2212.08073) paper.\n",

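As context for the cell above: the critique-and-revise loop that `ConstitutionalChain` wrapped can be expressed directly with LCEL. The sketch below is illustrative only; the prompts and the `ChatOpenAI` model are assumptions, not the guide's code.

```python
# Minimal critique-and-revise sketch in LCEL; prompts and model are illustrative.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

critique_chain = (
    ChatPromptTemplate.from_template(
        "Identify any harmful or unethical content in this response:\n\n{response}"
    )
    | llm
    | StrOutputParser()
)
revision_chain = (
    ChatPromptTemplate.from_template(
        "Response: {response}\n\nCritique: {critique}\n\n"
        "Rewrite the response to address the critique."
    )
    | llm
    | StrOutputParser()
)


def critique_and_revise(response: str) -> str:
    """One critique/revision round over a candidate response."""
    critique = critique_chain.invoke({"response": response})
    return revision_chain.invoke({"response": response, "critique": critique})
```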
View File

@@ -1,21 +1,13 @@
 {
 "cells": [
-{
-"cell_type": "markdown",
-"id": "030d95bc-2f9d-492b-8245-b791b866936b",
-"metadata": {},
-"source": [
-"---\n",
-"title: Migrating from ConversationalChain\n",
-"---"
-]
-},
 {
 "cell_type": "markdown",
 "id": "d20aeaad-b3ca-4a7d-b02d-3267503965af",
 "metadata": {},
 "source": [
-"[`ConversationChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html) incorporates a memory of previous messages to sustain a stateful conversation.\n",
+"# Migrating from ConversationalChain\n",
+"\n",
+"[`ConversationChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html) incorporated a memory of previous messages to sustain a stateful conversation.\n",
 "\n",
 "Some advantages of switching to the LCEL implementation are:\n",
 "\n",

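For reference, the stateful-conversation behavior that `ConversationChain` provided can be reproduced with `RunnableWithMessageHistory`. The snippet below is a minimal sketch; the model choice and in-memory session store are assumptions.

```python
# Stateful chat without ConversationChain, via RunnableWithMessageHistory.
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        MessagesPlaceholder("history"),
        ("human", "{input}"),
    ]
)
chain = prompt | ChatOpenAI(model="gpt-4o-mini")

store = {}  # session_id -> chat history


def get_history(session_id: str) -> InMemoryChatMessageHistory:
    return store.setdefault(session_id, InMemoryChatMessageHistory())


chat = RunnableWithMessageHistory(
    chain,
    get_history,
    input_messages_key="input",
    history_messages_key="history",
)

chat.invoke({"input": "Hi, I'm Bob."}, config={"configurable": {"session_id": "1"}})
```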
View File

@@ -1,30 +1,29 @@
 {
 "cells": [
-{
-"cell_type": "markdown",
-"id": "9e279999-6bf0-4a48-9e06-539b916dc705",
-"metadata": {},
-"source": [
-"---\n",
-"title: Migrating from ConversationalRetrievalChain\n",
-"---"
-]
-},
 {
 "cell_type": "markdown",
 "id": "292a3c83-44a9-4426-bbec-f1a778d00d93",
 "metadata": {},
 "source": [
+"# Migrating from ConversationalRetrievalChain\n",
+"\n",
 "The [`ConversationalRetrievalChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html) was an all-in one way that combined retrieval-augmented generation with chat history, allowing you to \"chat with\" your documents.\n",
 "\n",
-"Advantages of switching to the LCEL implementation are similar to the `RetrievalQA` section above:\n",
+"Advantages of switching to the LCEL implementation are similar to the [`RetrievalQA` migration guide](./retrieval_qa.ipynb):\n",
 "\n",
 "- Clearer internals. The `ConversationalRetrievalChain` chain hides an entire question rephrasing step which dereferences the initial query against the chat history.\n",
 " - This means the class contains two sets of configurable prompts, LLMs, etc.\n",
 "- More easily return source documents.\n",
 "- Support for runnable methods like streaming and async operations.\n",
 "\n",
-"Here are side-by-side implementations with custom prompts. We'll reuse the loaded documents and vector store from the previous section:"
+"Here are equivalent implementations with custom prompts.\n",
+"We'll use the following ingestion code to load a [blog post by Lilian Weng](https://lilianweng.github.io/posts/2023-06-23-agent/) on autonomous agents into a local vector store:\n",
+"\n",
+"## Shared setup\n",
+"\n",
+"For both versions, we'll need to load the data with the `WebBaseLoader` document loader, split it with `RecursiveCharacterTextSplitter`, and add it to an in-memory `FAISS` vector store.\n",
+"\n",
+"We will also instantiate a chat model to use."
 ]
 },
 {
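The "Shared setup" cell added above describes ingestion with `WebBaseLoader`, `RecursiveCharacterTextSplitter`, and an in-memory `FAISS` store. A sketch of what that setup typically looks like follows; the embedding and chat model choices are assumptions, not the guide's exact code.

```python
# Load the blog post, split it, and index it in an in-memory FAISS vector store.
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
docs = loader.load()

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = splitter.split_documents(docs)

vectorstore = FAISS.from_documents(splits, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

llm = ChatOpenAI(model="gpt-4o-mini")
```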

View File

@@ -0,0 +1,89 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 1\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to migrate from v0.0 chains\n",
"\n",
"LangChain has evolved since its initial release, and many of the original \"Chain\" classes \n",
"have been deprecated in favor of the more flexible and powerful frameworks of LCEL and LangGraph. \n",
"\n",
"This guide will help you migrate your existing v0.0 chains to the new abstractions.\n",
"\n",
":::info How deprecated implementations work\n",
"Even though many of these implementations are deprecated, they are **still supported** in the codebase. \n",
"However, they are not recommended for new development, and we recommend re-implementing them using the following guides!\n",
"\n",
"To see the planned removal version for each deprecated implementation, check their API reference.\n",
":::\n",
"\n",
":::info Prerequisites\n",
"\n",
"These guides assume some familiarity with the following concepts:\n",
"- [LangChain Expression Language](/docs/concepts#langchain-expression-language-lcel)\n",
"- [LangGraph](https://langchain-ai.github.io/langgraph/)\n",
":::\n",
"\n",
"LangChain maintains a number of legacy abstractions. Many of these can be reimplemented via short combinations of LCEL and LangGraph primitives.\n",
"\n",
"### LCEL\n",
"[LCEL](/docs/concepts/#langchain-expression-language-lcel) is designed to streamline the process of building useful apps with LLMs and combining related components. It does this by providing:\n",
"\n",
"1. **A unified interface**: Every LCEL object implements the `Runnable` interface, which defines a common set of invocation methods (`invoke`, `batch`, `stream`, `ainvoke`, ...). This makes it possible to also automatically and consistently support useful operations like streaming of intermediate steps and batching, since every chain composed of LCEL objects is itself an LCEL object.\n",
"2. **Composition primitives**: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more.\n",
"\n",
"### LangGraph\n",
"[LangGraph](https://langchain-ai.github.io/langgraph/), built on top of LCEL, allows for performant orchestrations of application components while maintaining concise and readable code. It includes built-in persistence, support for cycles, and prioritizes controllability.\n",
"If LCEL grows unwieldy for larger or more complex chains, they may benefit from a LangGraph implementation.\n",
"\n",
"### Advantages\n",
"Using these frameworks for existing v0.0 chains confers some advantages:\n",
"\n",
"- The resulting chains typically implement the full `Runnable` interface, including streaming and asynchronous support where appropriate;\n",
"- The chains may be more easily extended or modified;\n",
"- The parameters of the chain are typically surfaced for easier customization (e.g., prompts) over previous versions, which tended to be subclasses and had opaque parameters and internals.\n",
"- If using LangGraph, the chain supports built-in persistence, allowing for conversational experiences via a \"memory\" of the chat history.\n",
"- If using LangGraph, the steps of the chain can be streamed, allowing for greater control and customizability.\n",
"\n",
"\n",
"The below pages assist with migration from various specific chains to LCEL and LangGraph:\n",
"\n",
"- [LLMChain](./llm_chain.ipynb)\n",
"- [ConversationChain](./conversation_chain.ipynb)\n",
"- [RetrievalQA](./retrieval_qa.ipynb)\n",
"- [ConversationalRetrievalChain](./conversation_retrieval_chain.ipynb)\n",
"- [StuffDocumentsChain](./stuff_docs_chain.ipynb)\n",
"- [MapReduceDocumentsChain](./map_reduce_chain.ipynb)\n",
"- [MapRerankDocumentsChain](./map_rerank_docs_chain.ipynb)\n",
"- [RefineDocumentsChain](./refine_docs_chain.ipynb)\n",
"- [LLMRouterChain](./llm_router_chain.ipynb)\n",
"- [MultiPromptChain](./multi_prompt_chain.ipynb)\n",
"- [LLMMathChain](./llm_math_chain.ipynb)\n",
"- [ConstitutionalChain](./constitutional_chain.ipynb)\n",
"\n",
"Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) and [LangGraph docs](https://langchain-ai.github.io/langgraph/) for more background information."
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
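To make the LCEL points in the new index page concrete (a single `Runnable` interface, composition via primitives), here is a small illustrative chain; the model choice is an assumption.

```python
# Composing Runnables with | gives invoke/batch/stream/ainvoke for free.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

chain = (
    ChatPromptTemplate.from_template("Tell me a {adjective} joke")
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

chain.invoke({"adjective": "funny"})                          # single call
chain.batch([{"adjective": "funny"}, {"adjective": "dry"}])   # batched calls
for chunk in chain.stream({"adjective": "funny"}):            # streamed tokens
    print(chunk, end="")
```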

View File

@@ -1,51 +0,0 @@
---
sidebar_position: 1
---
# How to migrate from v0.0 chains
:::info Prerequisites
This guide assumes familiarity with the following concepts:
- [LangChain Expression Language](/docs/concepts#langchain-expression-language-lcel)
- [LangGraph](https://langchain-ai.github.io/langgraph/)
:::
LangChain maintains a number of legacy abstractions. Many of these can be reimplemented via short combinations of LCEL and LangGraph primitives.
### LCEL
[LCEL](/docs/concepts/#langchain-expression-language-lcel) is designed to streamline the process of building useful apps with LLMs and combining related components. It does this by providing:
1. **A unified interface**: Every LCEL object implements the `Runnable` interface, which defines a common set of invocation methods (`invoke`, `batch`, `stream`, `ainvoke`, ...). This makes it possible to also automatically and consistently support useful operations like streaming of intermediate steps and batching, since every chain composed of LCEL objects is itself an LCEL object.
2. **Composition primitives**: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more.
### LangGraph
[LangGraph](https://langchain-ai.github.io/langgraph/), built on top of LCEL, allows for performant orchestrations of application components while maintaining concise and readable code. It includes built-in persistence, support for cycles, and prioritizes controllability.
If LCEL grows unwieldy for larger or more complex chains, they may benefit from a LangGraph implementation.
### Advantages
Using these frameworks for existing v0.0 chains confers some advantages:
- The resulting chains typically implement the full `Runnable` interface, including streaming and asynchronous support where appropriate;
- The chains may be more easily extended or modified;
- The parameters of the chain are typically surfaced for easier customization (e.g., prompts) over previous versions, which tended to be subclasses and had opaque parameters and internals.
- If using LangGraph, the chain supports built-in persistence, allowing for conversational experiences via a "memory" of the chat history.
- If using LangGraph, the steps of the chain can be streamed, allowing for greater control and customizability.
The below pages assist with migration from various specific chains to LCEL and LangGraph:
- [LLMChain](/docs/versions/migrating_chains/llm_chain)
- [ConversationChain](/docs/versions/migrating_chains/conversation_chain)
- [RetrievalQA](/docs/versions/migrating_chains/retrieval_qa)
- [ConversationalRetrievalChain](/docs/versions/migrating_chains/conversation_retrieval_chain)
- [StuffDocumentsChain](/docs/versions/migrating_chains/stuff_docs_chain)
- [MapReduceDocumentsChain](/docs/versions/migrating_chains/map_reduce_chain)
- [MapRerankDocumentsChain](/docs/versions/migrating_chains/map_rerank_docs_chain)
- [RefineDocumentsChain](/docs/versions/migrating_chains/refine_docs_chain)
- [LLMRouterChain](/docs/versions/migrating_chains/llm_router_chain)
- [MultiPromptChain](/docs/versions/migrating_chains/multi_prompt_chain)
- [LLMMathChain](/docs/versions/migrating_chains/llm_math_chain)
- [ConstitutionalChain](/docs/versions/migrating_chains/constitutional_chain)
Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) and [LangGraph docs](https://langchain-ai.github.io/langgraph/) for more background information.

View File

@@ -1,20 +1,12 @@
 {
 "cells": [
-{
-"cell_type": "markdown",
-"id": "b57124cc-60a0-4c18-b7ce-3e483d1024a2",
-"metadata": {},
-"source": [
-"---\n",
-"title: Migrating from LLMChain\n",
-"---"
-]
-},
 {
 "cell_type": "markdown",
 "id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f",
 "metadata": {},
 "source": [
+"# Migrating from LLMChain\n",
+"\n",
 "[`LLMChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm.LLMChain.html) combined a prompt template, LLM, and output parser into a class.\n",
 "\n",
 "Some advantages of switching to the LCEL implementation are:\n",
@@ -36,7 +28,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 2,
+"execution_count": 1,
 "id": "717c8673",
 "metadata": {},
 "outputs": [],
@@ -44,7 +36,8 @@
 "import os\n",
 "from getpass import getpass\n",
 "\n",
-"os.environ[\"OPENAI_API_KEY\"] = getpass()"
+"if \"OPENAI_API_KEY\" not in os.environ:\n",
+" os.environ[\"OPENAI_API_KEY\"] = getpass()"
 ]
 },
 {
@@ -59,7 +52,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 2,
+"execution_count": 5,
 "id": "f91c9809-8ee7-4e38-881d-0ace4f6ea883",
 "metadata": {},
 "outputs": [
@@ -70,7 +63,7 @@
 " 'text': \"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\"}"
 ]
 },
-"execution_count": 2,
+"execution_count": 5,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -84,9 +77,39 @@
 " [(\"user\", \"Tell me a {adjective} joke\")],\n",
 ")\n",
 "\n",
-"chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)\n",
+"legacy_chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)\n",
 "\n",
-"chain({\"adjective\": \"funny\"})"
+"legacy_result = legacy_chain({\"adjective\": \"funny\"})\n",
+"legacy_result"
+]
+},
+{
+"cell_type": "markdown",
+"id": "9f89e97b",
+"metadata": {},
+"source": [
+"Note that `LLMChain` by default returned a `dict` containing both the input and the output from `StrOutputParser`, so to extract the output, you need to access the `\"text\"` key."
+]
+},
+{
+"cell_type": "code",
+"execution_count": 6,
+"id": "c7fa1618",
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/plain": [
+"\"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\""
+]
+},
+"execution_count": 6,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
+"source": [
+"legacy_result[\"text\"]"
 ]
 },
 {
@@ -137,7 +160,7 @@
 "id": "3c0b0513-77b8-4371-a20e-3e487cec7e7f",
 "metadata": {},
 "source": [
-"Note that `LLMChain` by default returns a `dict` containing both the input and the output. If this behavior is desired, we can replicate it using another LCEL primitive, [`RunnablePassthrough`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html):"
+"If you'd like to mimic the `dict` packaging of input and output in `LLMChain`, you can use a [`RunnablePassthrough.assign`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) like:"
 ]
 },
 {
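The revised cell points at `RunnablePassthrough.assign` for mimicking `LLMChain`'s dict output. A sketch of that pattern follows; variable names are illustrative.

```python
# Reproduce LLMChain's {"input...": ..., "text": ...} output shape with LCEL.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([("user", "Tell me a {adjective} joke")])

chain = RunnablePassthrough.assign(text=prompt | ChatOpenAI() | StrOutputParser())

chain.invoke({"adjective": "funny"})
# -> {'adjective': 'funny', 'text': "..."}
```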
@@ -197,7 +220,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.4"
+"version": "3.11.4"
 }
 },
 "nbformat": 4,

View File

@@ -1,20 +1,12 @@
 {
 "cells": [
-{
-"cell_type": "markdown",
-"id": "b57124cc-60a0-4c18-b7ce-3e483d1024a2",
-"metadata": {},
-"source": [
-"---\n",
-"title: Migrating from LLMMathChain\n",
-"---"
-]
-},
 {
 "cell_type": "markdown",
 "id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f",
 "metadata": {},
 "source": [
+"# Migrating from LLMMathChain\n",
+"\n",
 "[`LLMMathChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_math.base.LLMMathChain.html) enabled the evaluation of mathematical expressions generated by a LLM. Instructions for generating the expressions were formatted into the prompt, and the expressions were parsed out of the string response before evaluation using the [numexpr](https://numexpr.readthedocs.io/en/latest/user_guide.html) library.\n",
 "\n",
 "This is more naturally achieved via [tool calling](/docs/concepts/#functiontool-calling). We can equip a chat model with a simple calculator tool leveraging `numexpr` and construct a simple chain around it using [LangGraph](https://langchain-ai.github.io/langgraph/). Some advantages of this approach include:\n",

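The tool-calling approach described above, a calculator tool backed by `numexpr` bound to a chat model, can be sketched roughly as follows; the model choice is an assumption.

```python
# A numexpr-backed calculator tool bound to a chat model.
import math

import numexpr
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def calculator(expression: str) -> str:
    """Evaluate a single mathematical expression, e.g. '37593 * 67'."""
    result = numexpr.evaluate(
        expression.strip(),
        global_dict={},
        local_dict={"pi": math.pi, "e": math.e},
    )
    return str(result)


llm_with_tools = ChatOpenAI(model="gpt-4o-mini").bind_tools([calculator])
ai_msg = llm_with_tools.invoke("What is 551368 divided by 82?")
ai_msg.tool_calls  # e.g. [{'name': 'calculator', 'args': {'expression': '551368 / 82'}, ...}]
```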
View File

@@ -1,20 +1,12 @@
 {
 "cells": [
-{
-"cell_type": "markdown",
-"id": "575befea-4d98-4941-8e55-1581b169a674",
-"metadata": {},
-"source": [
-"---\n",
-"title: Migrating from LLMRouterChain\n",
-"---"
-]
-},
 {
 "cell_type": "markdown",
 "id": "14625d35-efca-41cf-b203-be9f4c375700",
 "metadata": {},
 "source": [
+"# Migrating from LLMRouterChain\n",
+"\n",
 "The [`LLMRouterChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.router.llm_router.LLMRouterChain.html) routed an input query to one of multiple destinations-- that is, given an input query, it used a LLM to select from a list of destination chains, and passed its inputs to the selected chain.\n",
 "\n",
 "`LLMRouterChain` does not support common [chat model](/docs/concepts/#chat-models) features, such as message roles and [tool calling](/docs/concepts/#functiontool-calling). Under the hood, `LLMRouterChain` routes a query by instructing the LLM to generate JSON-formatted text, and parsing out the intended destination.\n",

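In contrast to prompting for raw JSON and parsing it, routing with a structured-output chat model looks roughly like this; the destination schema and model are illustrative assumptions.

```python
# Structured-output routing sketch; destinations and model are illustrative.
from typing import Literal

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class RouteQuery(BaseModel):
    """Choose the destination best suited to answer the query."""

    destination: Literal["animals", "vegetables"] = Field(
        description="Which destination chain should handle the query."
    )


router = (
    ChatPromptTemplate.from_messages(
        [("system", "Classify the user's question."), ("human", "{query}")]
    )
    | ChatOpenAI(model="gpt-4o-mini").with_structured_output(RouteQuery)
)

route = router.invoke({"query": "What color are carrots?"})
route.destination  # -> "vegetables"
```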
View File

@@ -1,20 +1,12 @@
 {
 "cells": [
-{
-"cell_type": "markdown",
-"id": "3270b34b-8958-425c-886a-ea4b9e26b475",
-"metadata": {},
-"source": [
-"---\n",
-"title: Migrating from MapReduceDocumentsChain\n",
-"---"
-]
-},
 {
 "cell_type": "markdown",
 "id": "2c7bdc91-9b89-4e59-bc27-89508b024635",
 "metadata": {},
 "source": [
+"# Migrating from MapReduceDocumentsChain\n",
+"\n",
 "[MapReduceDocumentsChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html) implements a map-reduce strategy over (potentially long) texts. The strategy is as follows:\n",
 "\n",
 "- Split a text into smaller documents;\n",
@@ -37,11 +29,9 @@
 "\n",
 "Let's first load a chat model:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />"
 ]
 },
 {
@@ -66,7 +56,7 @@
 "source": [
 "## Basic example (short documents)\n",
 "\n",
-"Let's generate some simple documents for illustrative purposes."
+"Let's use the following 3 documents for illustrative purposes."
 ]
 },
 {
@@ -206,7 +196,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"pip install -qU langgraph"
+"% pip install -qU langgraph"
 ]
 },
 {
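As a compact illustration of the map-reduce strategy this guide covers: the guide itself installs LangGraph for its fuller implementation, while the snippet below is a simplified LCEL stand-in with illustrative documents, prompts, and model.

```python
# Map: summarize each document; reduce: combine the summaries.
from langchain_core.documents import Document
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

map_chain = (
    ChatPromptTemplate.from_template("Write a concise summary of:\n\n{context}")
    | llm
    | StrOutputParser()
)
reduce_chain = (
    ChatPromptTemplate.from_template(
        "Combine these summaries into a single summary:\n\n{summaries}"
    )
    | llm
    | StrOutputParser()
)

docs = [Document(page_content=t) for t in ["Apples are red.", "Blueberries are blue."]]
summaries = map_chain.batch([{"context": d.page_content} for d in docs])  # map step
final_summary = reduce_chain.invoke({"summaries": "\n".join(summaries)})  # reduce step
```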

View File

@@ -5,9 +5,7 @@
 "id": "9db5ad7a-857e-46ea-9d0c-ba3fbe62fc81",
 "metadata": {},
 "source": [
-"---\n",
-"title: Migrating from MapRerankDocumentsChain\n",
-"---\n",
+"# Migrating from MapRerankDocumentsChain\n",
 "\n",
 "[MapRerankDocumentsChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_rerank.MapRerankDocumentsChain.html) implements a strategy for analyzing long texts. The strategy is as follows:\n",
 "\n",
@@ -27,7 +25,7 @@
 "source": [
 "## Example\n",
 "\n",
-"Let's go through an example where we analyze a set of documents. We first generate some simple documents for illustrative purposes:"
+"Let's go through an example where we analyze a set of documents. Let's use the following 3 documents:"
 ]
 },
 {
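A simplified sketch of the map-rerank idea: answer against each document with a relevance score, then keep the best-scoring answer. The schema, documents, and model below are illustrative assumptions, not the guide's code.

```python
# Map each document to a scored answer, then rerank by score.
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class ScoredAnswer(BaseModel):
    answer: str = Field(description="Answer based only on the given context.")
    score: int = Field(description="Relevance of the context to the question, 0-10.")


prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n\n{context}\n\nQuestion: {question}"
)
chain = prompt | ChatOpenAI(model="gpt-4o-mini").with_structured_output(ScoredAnswer)

docs = [
    Document(page_content="Alice has blue eyes."),
    Document(page_content="Bob has brown eyes."),
    Document(page_content="Charlie has green eyes."),
]
question = "What color are Bob's eyes?"

results = chain.batch([{"context": d.page_content, "question": question} for d in docs])
best = max(results, key=lambda r: r.score)  # keep the highest-scoring answer
```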

View File

@@ -1,20 +1,12 @@
 {
 "cells": [
-{
-"cell_type": "markdown",
-"id": "575befea-4d98-4941-8e55-1581b169a674",
-"metadata": {},
-"source": [
-"---\n",
-"title: Migrating from MultiPromptChain\n",
-"---"
-]
-},
 {
 "cell_type": "markdown",
 "id": "14625d35-efca-41cf-b203-be9f4c375700",
 "metadata": {},
 "source": [
+"# Migrating from MultiPromptChain\n",
+"\n",
 "The [`MultiPromptChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.router.multi_prompt.MultiPromptChain.html) routed an input query to one of multiple LLMChains-- that is, given an input query, it used a LLM to select from a list of prompts, formatted the query into the prompt, and generated a response.\n",
 "\n",
 "`MultiPromptChain` does not support common [chat model](/docs/concepts/#chat-models) features, such as message roles and [tool calling](/docs/concepts/#functiontool-calling).\n",
@@ -321,7 +313,7 @@
 "\n",
 "## Overview:\n",
 "\n",
-"- Under the hood, `MultiPromptChain` routes the query by instructing the LLM to generate JSON-formatted text, and parses out the intended destination. It takes a registry of string prompt templates as input.\n",
+"- Under the hood, `MultiPromptChain` routed the query by instructing the LLM to generate JSON-formatted text, and parses out the intended destination. It took a registry of string prompt templates as input.\n",
 "- The LangGraph implementation, implemented above via lower-level primitives, uses tool-calling to route to arbitrary chains. In this example, the chains include chat model templates and chat models."
 ]
 },
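A rough sketch of prompt routing without `MultiPromptChain`: a structured-output router selects a prompt from a registry, and the query is run through the selected chain. The prompt registry and model below are illustrative, and this is not the guide's tool-calling LangGraph implementation.

```python
# Route to one of several prompt chains based on a structured routing decision.
from typing import Literal

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

llm = ChatOpenAI(model="gpt-4o-mini")

prompts = {
    "math": ChatPromptTemplate.from_template("You are a math tutor. {query}"),
    "history": ChatPromptTemplate.from_template("You are a historian. {query}"),
}


class Route(BaseModel):
    destination: Literal["math", "history"]


router = (
    ChatPromptTemplate.from_template("Pick the best expert for: {query}")
    | llm.with_structured_output(Route)
)


def answer(query: str) -> str:
    route = router.invoke({"query": query})                    # select a prompt
    chain = prompts[route.destination] | llm | StrOutputParser()
    return chain.invoke({"query": query})                      # run the selected chain
```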

View File

@@ -5,9 +5,7 @@
 "id": "32eee276-7847-45d8-b303-dccc330c8a1a",
 "metadata": {},
 "source": [
-"---\n",
-"title: Migrating from RefineDocumentsChain\n",
-"---\n",
+"# Migrating from RefineDocumentsChain\n",
 "\n",
 "[RefineDocumentsChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.refine.RefineDocumentsChain.html) implements a strategy for analyzing long texts. The strategy is as follows:\n",
 "\n",
@@ -28,11 +26,9 @@
 "\n",
 "Let's first load a chat model:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />"
 ]
 },
 {
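The refine strategy mentioned above can be sketched as a simple loop: summarize the first chunk, then fold in each subsequent chunk. Prompts and model below are assumptions, not the guide's code.

```python
# Iteratively refine a running summary with each new document.
from langchain_core.documents import Document
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

initial_chain = (
    ChatPromptTemplate.from_template("Summarize this text:\n\n{context}")
    | llm
    | StrOutputParser()
)
refine_chain = (
    ChatPromptTemplate.from_template(
        "Here is an existing summary:\n\n{summary}\n\n"
        "Refine it (only if needed) with this new context:\n\n{context}"
    )
    | llm
    | StrOutputParser()
)


def refine_summarize(docs: list[Document]) -> str:
    summary = initial_chain.invoke({"context": docs[0].page_content})
    for doc in docs[1:]:
        summary = refine_chain.invoke({"summary": summary, "context": doc.page_content})
    return summary
```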

View File

@@ -1,21 +1,13 @@
 {
 "cells": [
-{
-"cell_type": "markdown",
-"id": "eddcd5c1-cbe9-4a7d-8903-7d1ab29f9094",
-"metadata": {},
-"source": [
-"---\n",
-"title: Migrating from RetrievalQA\n",
-"---"
-]
-},
 {
 "cell_type": "markdown",
 "id": "b2d37868-dd01-4814-a76a-256f36cf66f7",
 "metadata": {},
 "source": [
-"The [`RetrievalQA`](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html) chain performed natural-language question answering over a data source using retrieval-augmented generation.\n",
+"# Migrating from RetrievalQA\n",
+"\n",
+"The [`RetrievalQA` chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html) performed natural-language question answering over a data source using retrieval-augmented generation.\n",
 "\n",
 "Some advantages of switching to the LCEL implementation are:\n",
 "\n",
@@ -23,7 +15,13 @@
 "- More easily return source documents.\n",
 "- Support for runnable methods like streaming and async operations.\n",
 "\n",
-"Now let's look at them side-by-side. We'll use the same ingestion code to load a [blog post by Lilian Weng](https://lilianweng.github.io/posts/2023-06-23-agent/) on autonomous agents into a local vector store:"
+"Now let's look at them side-by-side. We'll use the following ingestion code to load a [blog post by Lilian Weng](https://lilianweng.github.io/posts/2023-06-23-agent/) on autonomous agents into a local vector store:\n",
+"\n",
+"## Shared setup\n",
+"\n",
+"For both versions, we'll need to load the data with the `WebBaseLoader` document loader, split it with `RecursiveCharacterTextSplitter`, and add it to an in-memory `FAISS` vector store.\n",
+"\n",
+"We will also instantiate a chat model to use."
 ]
 },
 {
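Building on the shared setup described above, a typical LCEL replacement for `RetrievalQA` looks roughly like this; it assumes a `retriever` and `llm` such as those from the setup, and the prompt is illustrative.

```python
# Retrieve context, stuff it into a prompt, and generate an answer.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)


def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


qa_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

qa_chain.invoke("What are the main components of an autonomous agent?")
```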
@@ -227,7 +225,7 @@
 "\n",
 "## Next steps\n",
 "\n",
-"Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information."
+"Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information on the LangChain expression language."
 ]
 }
 ],

View File

@@ -5,9 +5,7 @@
 "id": "ed78c53c-55ad-4ea2-9cc2-a39a1963c098",
 "metadata": {},
 "source": [
-"---\n",
-"title: Migrating from StuffDocumentsChain\n",
-"---\n",
+"# Migrating from StuffDocumentsChain\n",
 "\n",
 "[StuffDocumentsChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.StuffDocumentsChain.html) combines documents by concatenating them into a single context window. It is a straightforward and effective strategy for combining documents for question-answering, summarization, and other purposes.\n",
 "\n",
@@ -17,11 +15,9 @@
 "\n",
 "Let's first load a chat model:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />"
 ]
 },
 {
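For reference, the modern counterpart to `StuffDocumentsChain` is `create_stuff_documents_chain`, which stuffs documents into a single prompt. A minimal sketch follows; the documents and model are illustrative assumptions.

```python
# Stuff a list of documents into one prompt and summarize them.
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template(
    "Summarize the following documents:\n\n{context}"
)

chain = create_stuff_documents_chain(llm, prompt)

docs = [
    Document(page_content="Apples are red."),
    Document(page_content="Blueberries are blue."),
]
chain.invoke({"context": docs})
```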