langchain[patch]: add deprecations (#24792)

This commit is contained in:
ccurme
2024-08-09 10:34:43 -04:00
committed by GitHub
parent 02300471be
commit 4825dc0d76
38 changed files with 3314 additions and 108 deletions

View File

@@ -54,12 +54,9 @@
"id": "00df631d-5121-4918-94aa-b88acce9b769",
"metadata": {},
"source": [
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"## Legacy\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Legacy\n"
"<details open>"
]
},
{
@@ -111,12 +108,11 @@
"id": "f8e36b0e-c7dc-4130-a51b-189d4b756c7f",
"metadata": {},
"source": [
"</Column>\n",
"</details>\n",
"\n",
"<Column>\n",
"## LCEL\n",
"\n",
"#### LCEL\n",
"\n"
"<details open>"
]
},
{
@@ -174,10 +170,6 @@
"id": "6b386ce6-895e-442c-88f3-7bec0ab9f401",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"The above example uses the same `history` for all sessions. The example below shows how to use a different chat history for each session."
]
},
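For reference, a minimal sketch of the per-session pattern the cell above describes, using `RunnableWithMessageHistory`. The chain, prompt placeholder names, and session ids here are illustrative assumptions, not taken from this diff:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ]
)
chain = prompt | ChatOpenAI(model="gpt-4o-mini")

# One chat history object per session id.
store = {}


def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]


chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history",
)

# Each session id gets its own independent history.
chain_with_history.invoke(
    {"input": "Hi, I'm Bob."},
    config={"configurable": {"session_id": "session-1"}},
)
```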
@@ -230,6 +222,8 @@
"id": "b2717810",
"metadata": {},
"source": [
"</details>\n",
"\n",
"## Next steps\n",
"\n",
"See [this tutorial](/docs/tutorials/chatbot) for a more end-to-end guide on building with [`RunnableWithMessageHistory`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html).\n",

View File

@@ -83,13 +83,9 @@
"id": "8bc06416",
"metadata": {},
"source": [
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"## Legacy\n",
"\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy"
"<details open>"
]
},
{
@@ -165,12 +161,11 @@
"id": "43a8a23c",
"metadata": {},
"source": [
"</Column>\n",
"</details>\n",
"\n",
"<Column>\n",
"## LCEL\n",
"\n",
"#### LCEL\n",
"\n"
"<details open>"
]
},
{
@@ -253,9 +248,7 @@
"id": "b2717810",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"</ColumnContainer>\n",
"</details>\n",
"\n",
"## Next steps\n",
"\n",
@@ -263,6 +256,14 @@
"\n",
"Next, check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7bfc38bd-0ff8-40ee-83a3-9d7553364fd7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@@ -2,33 +2,48 @@
sidebar_position: 1
---
# How to migrate chains to LCEL
# How to migrate from v0.0 chains
:::info Prerequisites
This guide assumes familiarity with the following concepts:
- [LangChain Expression Language](/docs/concepts#langchain-expression-language-lcel)
- [LangGraph](https://langchain-ai.github.io/langgraph/)
:::
LCEL is designed to streamline the process of building useful apps with LLMs and combining related components. It does this by providing:
LangChain maintains a number of legacy abstractions. Many of these can be reimplemented via short combinations of LCEL and LangGraph primitives.
### LCEL
[LCEL](/docs/concepts/#langchain-expression-language-lcel) is designed to streamline the process of building useful apps with LLMs and combining related components. It does this by providing:
1. **A unified interface**: Every LCEL object implements the `Runnable` interface, which defines a common set of invocation methods (`invoke`, `batch`, `stream`, `ainvoke`, ...). This makes it possible to automatically and consistently support useful operations like streaming of intermediate steps and batching, since every chain composed of LCEL objects is itself an LCEL object.
2. **Composition primitives**: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more.
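As a minimal sketch of these two points (an assumed illustration, not part of the migration guide itself):

```python
from langchain_core.runnables import RunnableLambda

# Any function can be wrapped as a Runnable and composed with `|`.
add_one = RunnableLambda(lambda x: x + 1)
double = RunnableLambda(lambda x: x * 2)

chain = add_one | double  # the composition is itself a Runnable

chain.invoke(3)  # -> 8
chain.batch([1, 2, 3])  # -> [4, 6, 8]
```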
LangChain maintains a number of legacy abstractions. Many of these can be reimplemented via short combinations of LCEL primitives. Doing so confers some general advantages:
### LangGraph
[LangGraph](https://langchain-ai.github.io/langgraph/), built on top of LCEL, allows for performant orchestration of application components while maintaining concise and readable code. It includes built-in persistence and support for cycles, and prioritizes controllability.
If LCEL grows unwieldy for larger or more complex chains, those chains may benefit from a LangGraph implementation.
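As a rough sketch of the LangGraph style (the state schema and node names here are illustrative assumptions):

```python
from typing_extensions import TypedDict

from langgraph.graph import END, START, StateGraph


class State(TypedDict):
    count: int


def increment(state: State) -> dict:
    return {"count": state["count"] + 1}


builder = StateGraph(State)
builder.add_node("increment", increment)
builder.add_edge(START, "increment")
builder.add_edge("increment", END)

graph = builder.compile()
graph.invoke({"count": 0})  # -> {"count": 1}
```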
### Advantages
Using these frameworks for existing v0.0 chains confers some advantages:
- The resulting chains typically implement the full `Runnable` interface, including streaming and asynchronous support where appropriate;
- The chains may be more easily extended or modified;
- The parameters of the chain (e.g., prompts) are typically surfaced for easier customization, whereas previous versions tended to be subclasses with opaque parameters and internals;
- If using LangGraph, the chain supports built-in persistence, allowing for conversational experiences via a "memory" of the chat history (see the sketch below);
- If using LangGraph, the steps of the chain can be streamed, allowing for greater control and customizability.
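The persistence point can be sketched with a LangGraph checkpointer (reusing the assumed `builder` from the sketch above; the thread id is arbitrary):

```python
from langgraph.checkpoint.memory import MemorySaver

# Compile with a checkpointer to persist state across invocations.
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "thread-1"}}
graph.invoke({"count": 0}, config)
# State for "thread-1" is checkpointed between calls, which is what
# enables a "memory" of chat history in conversational applications.
```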
The LCEL implementations can be slightly more verbose, but there are significant benefits in transparency and customizability.
The below pages assist with migration from various specific chains to LCEL:
The below pages assist with migration from various specific chains to LCEL and LangGraph:
- [LLMChain](/docs/versions/migrating_chains/llm_chain)
- [ConversationChain](/docs/versions/migrating_chains/conversation_chain)
- [RetrievalQA](/docs/versions/migrating_chains/retrieval_qa)
- [ConversationalRetrievalChain](/docs/versions/migrating_chains/conversation_retrieval_chain)
- [StuffDocumentsChain](/docs/versions/migrating_chains/stuff_docs_chain)
- [MapReduceDocumentsChain](/docs/versions/migrating_chains/map_reduce_chain)
- [MapRerankDocumentsChain](/docs/versions/migrating_chains/map_rerank_docs_chain)
- [RefineDocumentsChain](/docs/versions/migrating_chains/refine_docs_chain)
- [LLMRouterChain](/docs/versions/migrating_chains/llm_router_chain)
- [MultiPromptChain](/docs/versions/migrating_chains/multi_prompt_chain)
Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information.
Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) and [LangGraph docs](https://langchain-ai.github.io/langgraph/) for more background information.

View File

@@ -52,13 +52,9 @@
"id": "e3621b62-a037-42b8-8faa-59575608bb8b",
"metadata": {},
"source": [
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"## Legacy\n",
"\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy\n"
"<details open>"
]
},
{
@@ -98,13 +94,11 @@
"id": "cdc3b527-c09e-4c77-9711-c3cc4506cd95",
"metadata": {},
"source": [
"</details>\n",
"\n",
"</Column>\n",
"## LCEL\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
"<details open>"
]
},
{
@@ -143,10 +137,6 @@
"id": "3c0b0513-77b8-4371-a20e-3e487cec7e7f",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"Note that `LLMChain` by default returns a `dict` containing both the input and the output. If this behavior is desired, we can replicate it using another LCEL primitive, [`RunnablePassthrough`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html):"
]
},
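A minimal sketch of that `RunnablePassthrough` pattern (the prompt text and model are illustrative assumptions):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a {adjective} joke")
llm = ChatOpenAI(model="gpt-4o-mini")

# `assign` merges the generated text into the input dict, mimicking
# LLMChain's default behavior of returning both inputs and outputs.
outer_chain = RunnablePassthrough.assign(text=prompt | llm | StrOutputParser())

outer_chain.invoke({"adjective": "funny"})
# -> {"adjective": "funny", "text": "..."}
```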
@@ -181,6 +171,8 @@
"id": "b2717810",
"metadata": {},
"source": [
"</details>\n",
"\n",
"## Next steps\n",
"\n",
"See [this tutorial](/docs/tutorials/llm_chain) for more detail on building with prompt templates, LLMs, and output parsers.\n",

View File

@@ -0,0 +1,283 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "575befea-4d98-4941-8e55-1581b169a674",
"metadata": {},
"source": [
"---\n",
"title: Migrating from LLMRouterChain\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "14625d35-efca-41cf-b203-be9f4c375700",
"metadata": {},
"source": [
"The [`LLMRouterChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.router.llm_router.LLMRouterChain.html) routed an input query to one of multiple destinations-- that is, given an input query, it used a LLM to select from a list of destination chains, and passed its inputs to the selected chain.\n",
"\n",
"`LLMRouterChain` does not support common [chat model](/docs/concepts/#chat-models) features, such as message roles and [tool calling](/docs/concepts/#functiontool-calling). Under the hood, `LLMRouterChain` routes a query by instructing the LLM to generate JSON-formatted text, and parsing out the intended destination.\n",
"\n",
"Consider an example from a [MultiPromptChain](/docs/versions/migrating_chains/multi_prompt_chain), which uses `LLMRouterChain`. Below is an (example) default prompt:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "364814a5-d15c-41bb-bf3f-581df51a4721",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Given a raw text input to a language model select the model prompt best suited for the input. You will be given the names of the available prompts and a description of what the prompt is best suited for. You may also revise the original input if you think that revising it will ultimately lead to a better response from the language model.\n",
"\n",
"<< FORMATTING >>\n",
"Return a markdown code snippet with a JSON object formatted to look like:\n",
"'''json\n",
"{{\n",
" \"destination\": string \\ name of the prompt to use or \"DEFAULT\"\n",
" \"next_inputs\": string \\ a potentially modified version of the original input\n",
"}}\n",
"'''\n",
"\n",
"REMEMBER: \"destination\" MUST be one of the candidate prompt names specified below OR it can be \"DEFAULT\" if the input is not well suited for any of the candidate prompts.\n",
"REMEMBER: \"next_inputs\" can just be the original input if you don't think any modifications are needed.\n",
"\n",
"<< CANDIDATE PROMPTS >>\n",
"\n",
"animals: prompt for animal expert\n",
"vegetables: prompt for a vegetable expert\n",
"\n",
"\n",
"<< INPUT >>\n",
"{input}\n",
"\n",
"<< OUTPUT (must include '''json at the start of the response) >>\n",
"<< OUTPUT (must end with ''') >>\n",
"\n"
]
}
],
"source": [
"from langchain.chains.router.multi_prompt import MULTI_PROMPT_ROUTER_TEMPLATE\n",
"\n",
"destinations = \"\"\"\n",
"animals: prompt for animal expert\n",
"vegetables: prompt for a vegetable expert\n",
"\"\"\"\n",
"\n",
"router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations)\n",
"\n",
"print(router_template.replace(\"`\", \"'\")) # for rendering purposes"
]
},
{
"cell_type": "markdown",
"id": "934937d1-fc0a-4d3f-b297-29f96e6a8f5e",
"metadata": {},
"source": [
"Most of the behavior is determined via a single natural language prompt. Chat models that support [tool calling](/docs/how_to/tool_calling/) features confer a number of advantages for this task:\n",
"\n",
"- Supports chat prompt templates, including messages with `system` and other roles;\n",
"- Tool-calling models are fine-tuned to generate structured output;\n",
"- Support for runnable methods like streaming and async operations.\n",
"\n",
"Now let's look at `LLMRouterChain` side-by-side with an LCEL implementation that uses tool-calling. Note that for this guide we will `langchain-openai >= 0.1.20`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ed12b22b-5452-4776-aee3-b67d9f965082",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-core langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b0edbba1-a497-49ef-ade7-4fe7967360eb",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "5d4dc41c-3fdc-4093-ba5e-31a9ebb54e13",
"metadata": {},
"source": [
"## Legacy\n",
"\n",
"<details open>"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c58c9269-5a1d-4234-88b5-7168944618bf",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"\n",
"router_prompt = PromptTemplate(\n",
" # Note: here we use the prompt template from above. Generally this would need\n",
" # to be customized.\n",
" template=router_template,\n",
" input_variables=[\"input\"],\n",
" output_parser=RouterOutputParser(),\n",
")\n",
"\n",
"chain = LLMRouterChain.from_llm(llm, router_prompt)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a22ebdca-5f53-459e-9cff-a97b2354ffe0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"vegetables\n"
]
}
],
"source": [
"result = chain.invoke({\"input\": \"What color are carrots?\"})\n",
"\n",
"print(result[\"destination\"])"
]
},
{
"cell_type": "markdown",
"id": "6fd48120-056f-4c58-a04f-da5198c23068",
"metadata": {},
"source": [
"</details>\n",
"\n",
"## LCEL\n",
"\n",
"<details open>"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "5bbebac2-df19-4f59-8a69-f61cd7286e59",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"from typing import Literal\n",
"\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"from typing_extensions import TypedDict\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"\n",
"route_system = \"Route the user's query to either the animal or vegetable expert.\"\n",
"route_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", route_system),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"\n",
"# Define schema for output:\n",
"class RouteQuery(TypedDict):\n",
" \"\"\"Route query to destination expert.\"\"\"\n",
"\n",
" destination: Literal[\"animal\", \"vegetable\"]\n",
"\n",
"\n",
"# Instead of writing formatting instructions into the prompt, we\n",
"# leverage .with_structured_output to coerce the output into a simple\n",
"# schema.\n",
"chain = route_prompt | llm.with_structured_output(RouteQuery)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "88012e10-8def-44fa-833f-989935824182",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"vegetable\n"
]
}
],
"source": [
"result = chain.invoke({\"input\": \"What color are carrots?\"})\n",
"\n",
"print(result[\"destination\"])"
]
},
{
"cell_type": "markdown",
"id": "baf7ba9e-65b4-48af-8a39-453c01a7b7cb",
"metadata": {},
"source": [
"</details>\n",
"\n",
"## Next steps\n",
"\n",
"See [this tutorial](/docs/tutorials/llm_chain) for more detail on building with prompt templates, LLMs, and output parsers.\n",
"\n",
"Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "353e4bab-3b8a-4e89-89e2-200a8d8eb8dd",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -82,13 +82,9 @@
"id": "c7e16438",
"metadata": {},
"source": [
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"## Legacy\n",
"\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy"
"<details open>"
]
},
{
@@ -128,12 +124,11 @@
"id": "081948e5",
"metadata": {},
"source": [
"</Column>\n",
"</details>\n",
"\n",
"<Column>\n",
"## LCEL\n",
"\n",
"#### LCEL\n",
"\n"
"<details open>"
]
},
{
@@ -184,9 +179,6 @@
"id": "d6f44fe8",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"The LCEL implementation exposes the internals of what's happening around retrieving, formatting documents, and passing them through a prompt to the LLM, but it is more verbose. You can customize and wrap this composition logic in a helper function, or use the higher-level [`create_retrieval_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) and [`create_stuff_documents_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) helper method:"
]
},
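A rough sketch of those two helpers (the stand-in `retriever` and the model are illustrative assumptions, not taken from this diff):

```python
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

# Stand-in retriever for illustration; normally this would be a vector
# store retriever. A plain Runnable here receives the full input dict.
retriever = RunnableLambda(lambda _: [Document(page_content="Carrots are orange.")])

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Answer using the provided context:\n\n{context}"),
        ("human", "{input}"),
    ]
)

combine_docs_chain = create_stuff_documents_chain(llm, prompt)
rag_chain = create_retrieval_chain(retriever, combine_docs_chain)

rag_chain.invoke({"input": "What color are carrots?"})
# -> dict with "input", retrieved "context", and "answer" keys
```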
@@ -231,6 +223,8 @@
"id": "b2717810",
"metadata": {},
"source": [
"</details>\n",
"\n",
"## Next steps\n",
"\n",
"Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information."

View File

@@ -0,0 +1,281 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ed78c53c-55ad-4ea2-9cc2-a39a1963c098",
"metadata": {},
"source": [
"---\n",
"title: Migrating from StuffDocumentsChain\n",
"---\n",
"\n",
"[StuffDocumentsChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.StuffDocumentsChain.html) combines documents by concatenating them into a single context window. It is a straightforward and effective strategy for combining documents for question-answering, summarization, and other purposes.\n",
"\n",
"[create_stuff_documents_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) is the recommended alternative. It functions the same as `StuffDocumentsChain`, with better support for streaming and batch functionality. Because it is a simple combination of [LCEL primitives](/docs/concepts/#langchain-expression-language-lcel), it is also easier to extend and incorporate into other LangChain applications.\n",
"\n",
"Below we will go through both `StuffDocumentsChain` and `create_stuff_documents_chain` on a simple example for illustrative purposes.\n",
"\n",
"Let's first load a chat model:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "dac0bef2-9453-46f2-a893-f7569b6a0170",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "d4022d03-7b5e-4c81-98ff-5b82a2a4eaae",
"metadata": {},
"source": [
"## Example\n",
"\n",
"Let's go through an example where we analyze a set of documents. We first generate some simple documents for illustrative purposes:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "24fa0ba9-e245-47d1-bc2e-6286dd884117",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.documents import Document\n",
"\n",
"documents = [\n",
" Document(page_content=\"Apples are red\", metadata={\"title\": \"apple_book\"}),\n",
" Document(page_content=\"Blueberries are blue\", metadata={\"title\": \"blueberry_book\"}),\n",
" Document(page_content=\"Bananas are yelow\", metadata={\"title\": \"banana_book\"}),\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "3a769128-205f-417d-a25d-519e7cb03be7",
"metadata": {},
"source": [
"### Legacy\n",
"\n",
"<details open>\n",
"\n",
"Below we show an implementation with `StuffDocumentsChain`. We define the prompt template for a summarization task and instantiate a [LLMChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm.LLMChain.html) object for this purpose. We define how documents are formatted into the prompt and ensure consistency among the keys in the various prompts."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "9734c0f3-64e7-4ae6-8578-df03b3dabb26",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain, StuffDocumentsChain\n",
"from langchain_core.prompts import ChatPromptTemplate, PromptTemplate\n",
"\n",
"# This controls how each document will be formatted. Specifically,\n",
"# it will be passed to `format_document` - see that function for more\n",
"# details.\n",
"document_prompt = PromptTemplate(\n",
" input_variables=[\"page_content\"], template=\"{page_content}\"\n",
")\n",
"document_variable_name = \"context\"\n",
"# The prompt here should take as an input variable the\n",
"# `document_variable_name`\n",
"prompt = ChatPromptTemplate.from_template(\"Summarize this content: {context}\")\n",
"\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"chain = StuffDocumentsChain(\n",
" llm_chain=llm_chain,\n",
" document_prompt=document_prompt,\n",
" document_variable_name=document_variable_name,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "0cb733bf-eb71-4fae-a8f4-d522924020cb",
"metadata": {},
"source": [
"We can now invoke our chain:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "d7d1ce10-bbee-4cb0-879d-7de4f69191c4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'This content describes the colors of different fruits: apples are red, blueberries are blue, and bananas are yellow.'"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result = chain.invoke(documents)\n",
"result[\"output_text\"]"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "79b10d40-1521-433b-9026-6ec836ffeeb3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'input_documents': [Document(metadata={'title': 'apple_book'}, page_content='Apples are red'), Document(metadata={'title': 'blueberry_book'}, page_content='Blueberries are blue'), Document(metadata={'title': 'banana_book'}, page_content='Bananas are yelow')], 'output_text': 'This content describes the colors of different fruits: apples are red, blueberries are blue, and bananas are yellow.'}\n"
]
}
],
"source": [
"for chunk in chain.stream(documents):\n",
" print(chunk)"
]
},
{
"cell_type": "markdown",
"id": "b4cb6a5b-37ea-48cc-a096-b948d3ff7e9f",
"metadata": {},
"source": [
"</details>\n",
"\n",
"### LCEL\n",
"\n",
"<details open>\n",
"\n",
"Below we show an implementation using `create_stuff_documents_chain`:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "de38f27a-c648-44be-8c37-0a458c2920a9",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"Summarize this content: {context}\")\n",
"chain = create_stuff_documents_chain(llm, prompt)"
]
},
{
"cell_type": "markdown",
"id": "9d0e6996-9bf8-4097-9c1a-1c539eac3ed1",
"metadata": {},
"source": [
"Invoking the chain, we obtain a similar result as before:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "f2d2bdfb-3a6a-464b-b4c2-e4252b2e53a0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'This content describes the colors of different fruits: apples are red, blueberries are blue, and bananas are yellow.'"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result = chain.invoke({\"context\": documents})\n",
"result"
]
},
{
"cell_type": "markdown",
"id": "493e6270-c61d-46c5-91b3-0cf7740a88f9",
"metadata": {},
"source": [
"Note that this implementation supports streaming of output tokens:"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "b5adcabd-9bc1-4c91-a12b-7be82d64e457",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" | This | content | describes | the | colors | of | different | fruits | : | apples | are | red | , | blue | berries | are | blue | , | and | bananas | are | yellow | . | | "
]
}
],
"source": [
"for chunk in chain.stream({\"context\": documents}):\n",
" print(chunk, end=\" | \")"
]
},
{
"cell_type": "markdown",
"id": "181c5633-38ea-4692-a869-32f4f78398e4",
"metadata": {},
"source": [
"</details>\n",
"\n",
"## Next steps\n",
"\n",
"Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information.\n",
"\n",
"See these [how-to guides](/docs/how_to/#qa-with-rag) for more on question-answering tasks with RAG.\n",
"\n",
"See [this tutorial](/docs/tutorials/summarization/) for more LLM-based summarization strategies."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -93,7 +93,7 @@ module.exports = {
},
{
type: "category",
label: "Migrating to LCEL",
label: "Migrating from v0.0 chains",
link: {type: 'doc', id: 'versions/migrating_chains/index'},
collapsible: false,
collapsed: false,