{
 "cells": [
  {
   "cell_type": "raw",
   "id": "e2596041-9b76-4e74-836f-e6235086bbf0",
   "metadata": {},
   "source": [
    "---\n",
    "sidebar_position: 1\n",
    "keywords: [RunnableParallel, RunnableMap, LCEL]\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2",
   "metadata": {},
   "source": [
    "# How to invoke runnables in parallel\n",
    "\n",
    ":::info Prerequisites\n",
    "\n",
    "This guide assumes familiarity with the following concepts:\n",
    "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n",
    "- [Chaining runnables](/docs/how_to/sequence)\n",
    "\n",
    ":::\n",
    "\n",
    "The [`RunnableParallel`](https://python.langchain.com/v0.2/api_reference/core/runnables/langchain_core.runnables.base.RunnableParallel.html) primitive is essentially a dict whose values are runnables (or things that can be coerced to runnables, like functions). It runs all of its values in parallel, and each value is called with the overall input of the `RunnableParallel`. The final return value is a dict with the results of each value under its appropriate key.\n",
    "\n",
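    "For example, here is a minimal self-contained sketch (the key names and lambdas are purely illustrative):\n",
    "\n",
    "```python\n",
    "from langchain_core.runnables import RunnableParallel\n",
    "\n",
    "# Both values receive the same input (here, 3) and run in parallel\n",
    "mapper = RunnableParallel(doubled=lambda x: x * 2, tripled=lambda x: x * 3)\n",
    "\n",
    "mapper.invoke(3)  # -> {'doubled': 6, 'tripled': 9}\n",
    "```\n",
    "\n",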
    "## Formatting with `RunnableParallels`\n",
    "\n",
    "`RunnableParallels` are useful for parallelizing operations, but they can also be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence. You can use them to split or fork the chain so that multiple components can process the input in parallel. Later, other components can join or merge the results to synthesize a final response. This type of chain creates a computation graph that looks like the following:\n",
    "\n",
    "```text\n",
    "     Input\n",
    "      / \\\n",
    "     /   \\\n",
    " Branch1 Branch2\n",
    "     \\   /\n",
    "      \\ /\n",
    "     Combine\n",
    "```\n",
    "\n",
    "Below, the input to the prompt is expected to be a map with keys `\"context\"` and `\"question\"`. The user input is just the question, so we need to get the context using our retriever and pass the user input through under the `\"question\"` key.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2627ffd7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# | output: false\n",
    "# | echo: false\n",
    "\n",
    "%pip install -qU langchain langchain_openai\n",
    "\n",
    "import os\n",
    "from getpass import getpass\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = getpass()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "267d1460-53c1-4fdb-b2c3-b6a1eb7fccff",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Harrison worked at Kensho.'"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_community.vectorstores import FAISS\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_core.runnables import RunnablePassthrough\n",
    "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
    "\n",
    "vectorstore = FAISS.from_texts(\n",
    "    [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
    ")\n",
    "retriever = vectorstore.as_retriever()\n",
    "template = \"\"\"Answer the question based only on the following context:\n",
    "{context}\n",
    "\n",
    "Question: {question}\n",
    "\"\"\"\n",
    "\n",
    "# The prompt expects input with keys for \"context\" and \"question\"\n",
    "prompt = ChatPromptTemplate.from_template(template)\n",
    "\n",
    "model = ChatOpenAI()\n",
    "\n",
    "retrieval_chain = (\n",
    "    {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
    "    | prompt\n",
    "    | model\n",
    "    | StrOutputParser()\n",
    ")\n",
    "\n",
    "retrieval_chain.invoke(\"where did harrison work?\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1",
   "metadata": {},
   "source": [
    "::: {.callout-tip}\n",
    "Note that when composing a `RunnableParallel` with another Runnable, we don't even need to wrap our dictionary in the `RunnableParallel` class; the type conversion is handled for us. In the context of a chain, these are all equivalent:\n",
    ":::\n",
    "\n",
    "```\n",
    "{\"context\": retriever, \"question\": RunnablePassthrough()}\n",
    "```\n",
    "\n",
    "```\n",
    "RunnableParallel({\"context\": retriever, \"question\": RunnablePassthrough()})\n",
    "```\n",
    "\n",
    "```\n",
    "RunnableParallel(context=retriever, question=RunnablePassthrough())\n",
    "```\n",
    "\n",
    "See the section on [coercion](/docs/how_to/sequence/#coercion) for more."
   ]
  },
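  {
   "cell_type": "markdown",
   "id": "c0e4d5f6-coercion-sketch-note",
   "metadata": {},
   "source": [
    "As a small self-contained sketch of this coercion (the keys and lambdas here are stand-ins, not part of the chain above), piping a plain dict into another runnable converts it into a `RunnableParallel` automatically:\n",
    "\n",
    "```python\n",
    "from langchain_core.runnables import RunnableLambda\n",
    "\n",
    "# The dict on the left is coerced to a RunnableParallel when piped into a Runnable\n",
    "chain = {\"orig\": RunnableLambda(lambda x: x), \"squared\": lambda x: x * x} | RunnableLambda(\n",
    "    lambda d: f\"{d['orig']} squared is {d['squared']}\"\n",
    ")\n",
    "\n",
    "chain.invoke(4)  # -> '4 squared is 16'\n",
    "```"
   ]
  },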
  {
   "cell_type": "markdown",
   "id": "7c1b8baa-3a80-44f0-bb79-d22f79815d3d",
   "metadata": {},
   "source": [
    "## Using itemgetter as shorthand\n",
    "\n",
    "Note that you can use Python's `itemgetter` as shorthand to extract data from the map when combining it with a `RunnableParallel`. You can find more information about `itemgetter` in the [Python documentation](https://docs.python.org/3/library/operator.html#operator.itemgetter).\n",
    "\n",
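    "As a quick refresher, `itemgetter(\"question\")` builds a callable that pulls the `\"question\"` key out of a dict, which is why it can start a branch of the map:\n",
    "\n",
    "```python\n",
    "from operator import itemgetter\n",
    "\n",
    "get_question = itemgetter(\"question\")\n",
    "get_question({\"question\": \"where did harrison work\", \"language\": \"italian\"})\n",
    "# -> 'where did harrison work'\n",
    "```\n",
    "\n",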
    "In the example below, we use `itemgetter` to extract specific keys from the map:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "84fc49e1-2daf-4700-ae33-a0a6ed47d5f6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Harrison ha lavorato a Kensho.'"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from operator import itemgetter\n",
    "\n",
    "from langchain_community.vectorstores import FAISS\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_core.runnables import RunnablePassthrough\n",
    "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
    "\n",
    "vectorstore = FAISS.from_texts(\n",
    "    [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
    ")\n",
    "retriever = vectorstore.as_retriever()\n",
    "\n",
    "template = \"\"\"Answer the question based only on the following context:\n",
    "{context}\n",
    "\n",
    "Question: {question}\n",
    "\n",
    "Answer in the following language: {language}\n",
    "\"\"\"\n",
    "prompt = ChatPromptTemplate.from_template(template)\n",
    "\n",
    "# Instantiate the model here so this cell is self-contained\n",
    "model = ChatOpenAI()\n",
    "\n",
    "chain = (\n",
    "    {\n",
    "        \"context\": itemgetter(\"question\") | retriever,\n",
    "        \"question\": itemgetter(\"question\"),\n",
    "        \"language\": itemgetter(\"language\"),\n",
    "    }\n",
    "    | prompt\n",
    "    | model\n",
    "    | StrOutputParser()\n",
    ")\n",
    "\n",
    "chain.invoke({\"question\": \"where did harrison work\", \"language\": \"italian\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bc2f9847-39aa-4fe4-9049-3a8969bc4bce",
   "metadata": {},
   "source": [
    "## Parallelize steps\n",
    "\n",
    "RunnableParallels make it easy to execute multiple Runnables in parallel and to return the output of these Runnables as a map."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "31f18442-f837-463f-bef4-8729368f5f8b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'joke': AIMessage(content=\"Why don't bears like fast food? Because they can't catch it!\", response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 13, 'total_tokens': 28}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'stop', 'logprobs': None}, id='run-fe024170-c251-4b7a-bfd4-64a3737c67f2-0'),\n",
       " 'poem': AIMessage(content='In the quiet of the forest, the bear roams free\\nMajestic and wild, a sight to see.', response_metadata={'token_usage': {'completion_tokens': 24, 'prompt_tokens': 15, 'total_tokens': 39}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-2707913e-a743-4101-b6ec-840df4568a76-0')}"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_core.runnables import RunnableParallel\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "model = ChatOpenAI()\n",
    "joke_chain = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | model\n",
    "poem_chain = (\n",
    "    ChatPromptTemplate.from_template(\"write a 2-line poem about {topic}\") | model\n",
    ")\n",
    "\n",
    "map_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)\n",
    "\n",
    "map_chain.invoke({\"topic\": \"bear\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "833da249-c0d4-4e5b-b3f8-cab549f0f7e1",
   "metadata": {},
   "source": [
    "## Parallelism\n",
    "\n",
    "RunnableParallels are also useful for running independent processes in parallel, since each Runnable in the map is executed concurrently. For example, we can see that our earlier `joke_chain`, `poem_chain` and `map_chain` all have about the same runtime, even though `map_chain` executes both of the other two."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "38e47834-45af-4281-991f-86f150001510",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "610 ms ± 64 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
     ]
    }
   ],
   "source": [
    "%%timeit\n",
    "\n",
    "joke_chain.invoke({\"topic\": \"bear\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "d0cd40de-b37e-41fa-a2f6-8aaa49f368d6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "599 ms ± 73.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
     ]
    }
   ],
   "source": [
    "%%timeit\n",
    "\n",
    "poem_chain.invoke({\"topic\": \"bear\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "799894e1-8e18-4a73-b466-f6aea6af3920",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "643 ms ± 77.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
     ]
    }
   ],
   "source": [
    "%%timeit\n",
    "\n",
    "map_chain.invoke({\"topic\": \"bear\"})"
   ]
  },
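  {
   "cell_type": "markdown",
   "id": "f0e1d2c3-async-parallel-note",
   "metadata": {},
   "source": [
    "The same concurrency applies in async code. As a minimal sketch (assuming an environment with top-level `await`, such as a Jupyter notebook), `ainvoke` runs both branches concurrently on the event loop:\n",
    "\n",
    "```python\n",
    "# joke_chain and poem_chain are awaited concurrently\n",
    "await map_chain.ainvoke({\"topic\": \"bear\"})\n",
    "```"
   ]
  },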
  {
   "cell_type": "markdown",
   "id": "7d4492e1",
   "metadata": {},
   "source": [
    "## Next steps\n",
    "\n",
    "You now know some ways to format and parallelize chain steps with `RunnableParallel`.\n",
    "\n",
    "To learn more, see the other how-to guides on runnables in this section."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}