mirror of
https://github.com/hwchase17/langchain.git
synced 2026-02-21 06:33:41 +00:00
{
|
|
"cells": [
|
|
{
|
|
"cell_type": "raw",
|
|
"id": "366a0e68-fd67-4fe5-a292-5c33733339ea",
|
|
"metadata": {},
|
|
"source": [
|
|
"---\n",
|
|
"sidebar_position: 1\n",
|
|
"title: Runnable interface\n",
|
|
"---"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "9a9acd2e",
|
|
"metadata": {},
|
|
"source": [
|
|
|
|
"To make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about [in this section](/docs/expression_language/primitives).\n",
|
|
"\n",
|
|
"This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. \n",
|
|
"The standard interface includes:\n",
|
|
"\n",
|
|
"- [`stream`](#stream): stream back chunks of the response\n",
|
|
"- [`invoke`](#invoke): call the chain on an input\n",
|
|
"- [`batch`](#batch): call the chain on a list of inputs\n",
|
|
"\n",
|
|
"These also have corresponding async methods that should be used with [asyncio](https://docs.python.org/3/library/asyncio.html) `await` syntax for concurrency:\n",
|
|
"\n",
|
|
"- [`astream`](#async-stream): stream back chunks of the response async\n",
|
|
"- [`ainvoke`](#async-invoke): call the chain on an input async\n",
|
|
"- [`abatch`](#async-batch): call the chain on a list of inputs async\n",
|
|
"- [`astream_log`](#async-stream-intermediate-steps): stream back intermediate steps as they happen, in addition to the final response\n",
|
|
"- [`astream_events`](#async-stream-events): **beta** stream events as they happen in the chain (introduced in `langchain-core` 0.1.14)\n",
|
|
"\n",
|
|
"The **input type** and **output type** varies by component:\n",
|
|
"\n",
|
|
"| Component | Input Type | Output Type |\n",
|
|
"| --- | --- | --- |\n",
|
|
"| Prompt | Dictionary | PromptValue |\n",
|
|
"| ChatModel | Single string, list of chat messages or a PromptValue | ChatMessage |\n",
|
|
"| LLM | Single string, list of chat messages or a PromptValue | String |\n",
|
|
"| OutputParser | The output of an LLM or ChatModel | Depends on the parser |\n",
|
|
"| Retriever | Single string | List of Documents |\n",
|
|
"| Tool | Single string or dictionary, depending on the tool | Depends on the tool |\n",
|
|
"\n",
|
|
"\n",
|
|
"All runnables expose input and output **schemas** to inspect the inputs and outputs:\n",
|
|
"- [`input_schema`](#input-schema): an input Pydantic model auto-generated from the structure of the Runnable\n",
|
|
"- [`output_schema`](#output-schema): an output Pydantic model auto-generated from the structure of the Runnable\n",
|
|
"\n",
|
|
"Let's take a look at these methods. To do so, we'll create a super simple PromptTemplate + ChatModel chain."
|
|
]
|
|
},
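The method contract above can be sketched as a minimal, dependency-free Python class. This is an illustrative toy, not the real `Runnable` protocol from `langchain_core` (which additionally handles configs, callbacks, and schema generation); it exists only to make the sync/async method pairs concrete.

```python
import asyncio
from typing import Any, AsyncIterator, Iterator, List


class ToyRunnable:
    """Illustrative sketch of the Runnable method contract (not the real API)."""

    def invoke(self, input: Any) -> Any:
        # Call the chain on a single input.
        return f"echo: {input}"

    def batch(self, inputs: List[Any]) -> List[Any]:
        # Call the chain on a list of inputs.
        return [self.invoke(i) for i in inputs]

    def stream(self, input: Any) -> Iterator[Any]:
        # Stream back chunks of the response.
        for token in self.invoke(input).split():
            yield token

    async def ainvoke(self, input: Any) -> Any:
        # Async variant; real implementations run I/O concurrently.
        return self.invoke(input)

    async def abatch(self, inputs: List[Any]) -> List[Any]:
        return await asyncio.gather(*(self.ainvoke(i) for i in inputs))

    async def astream(self, input: Any) -> AsyncIterator[Any]:
        for token in self.stream(input):
            yield token


r = ToyRunnable()
print(r.invoke("hi"))                 # echo: hi
print(r.batch(["a", "b"]))            # ['echo: a', 'echo: b']
print(list(r.stream("hello world")))  # ['echo:', 'hello', 'world']
```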
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "57768739",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"%pip install --upgrade --quiet langchain-core langchain-community langchain-openai"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 1,
|
|
"id": "466b65b3",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"from langchain_core.prompts import ChatPromptTemplate\n",
|
|
"from langchain_openai import ChatOpenAI\n",
|
|
"\n",
|
|
"model = ChatOpenAI()\n",
|
|
"prompt = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\")\n",
|
|
"chain = prompt | model"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "5cccdf0b-2d89-4f74-9530-bf499610e9a5",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Input Schema\n",
|
|
"\n",
|
|
"A description of the inputs accepted by a Runnable.\n",
|
|
"This is a Pydantic model dynamically generated from the structure of any Runnable.\n",
|
|
"You can call `.schema()` on it to obtain a JSONSchema representation."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 2,
|
|
"id": "25e146d4-60da-40a2-9026-b5dfee106a3f",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"{'title': 'PromptInput',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'topic': {'title': 'Topic', 'type': 'string'}}}"
|
|
]
|
|
},
|
|
"execution_count": 2,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"# The input schema of the chain is the input schema of its first part, the prompt.\n",
|
|
"chain.input_schema.schema()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 3,
|
|
"id": "ad130546-4c14-4f6c-95af-c56ea19b12ac",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"{'title': 'PromptInput',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'topic': {'title': 'Topic', 'type': 'string'}}}"
|
|
]
|
|
},
|
|
"execution_count": 3,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"prompt.input_schema.schema()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 4,
|
|
"id": "49d34744-d6db-4fdf-a0d6-261522b7f251",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"{'title': 'ChatOpenAIInput',\n",
|
|
" 'anyOf': [{'type': 'string'},\n",
|
|
" {'$ref': '#/definitions/StringPromptValue'},\n",
|
|
" {'$ref': '#/definitions/ChatPromptValueConcrete'},\n",
|
|
" {'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'$ref': '#/definitions/AIMessage'},\n",
|
|
" {'$ref': '#/definitions/HumanMessage'},\n",
|
|
" {'$ref': '#/definitions/ChatMessage'},\n",
|
|
" {'$ref': '#/definitions/SystemMessage'},\n",
|
|
" {'$ref': '#/definitions/FunctionMessage'},\n",
|
|
" {'$ref': '#/definitions/ToolMessage'}]}}],\n",
|
|
" 'definitions': {'StringPromptValue': {'title': 'StringPromptValue',\n",
|
|
" 'description': 'String prompt value.',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'text': {'title': 'Text', 'type': 'string'},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'StringPromptValue',\n",
|
|
" 'enum': ['StringPromptValue'],\n",
|
|
" 'type': 'string'}},\n",
|
|
" 'required': ['text']},\n",
|
|
" 'AIMessage': {'title': 'AIMessage',\n",
|
|
" 'description': 'A Message from an AI.',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'content': {'title': 'Content',\n",
|
|
" 'anyOf': [{'type': 'string'},\n",
|
|
" {'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},\n",
|
|
" 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'ai',\n",
|
|
" 'enum': ['ai'],\n",
|
|
" 'type': 'string'},\n",
|
|
" 'example': {'title': 'Example', 'default': False, 'type': 'boolean'}},\n",
|
|
" 'required': ['content']},\n",
|
|
" 'HumanMessage': {'title': 'HumanMessage',\n",
|
|
" 'description': 'A Message from a human.',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'content': {'title': 'Content',\n",
|
|
" 'anyOf': [{'type': 'string'},\n",
|
|
" {'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},\n",
|
|
" 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'human',\n",
|
|
" 'enum': ['human'],\n",
|
|
" 'type': 'string'},\n",
|
|
" 'example': {'title': 'Example', 'default': False, 'type': 'boolean'}},\n",
|
|
" 'required': ['content']},\n",
|
|
" 'ChatMessage': {'title': 'ChatMessage',\n",
|
|
" 'description': 'A Message that can be assigned an arbitrary speaker (i.e. role).',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'content': {'title': 'Content',\n",
|
|
" 'anyOf': [{'type': 'string'},\n",
|
|
" {'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},\n",
|
|
" 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'chat',\n",
|
|
" 'enum': ['chat'],\n",
|
|
" 'type': 'string'},\n",
|
|
" 'role': {'title': 'Role', 'type': 'string'}},\n",
|
|
" 'required': ['content', 'role']},\n",
|
|
" 'SystemMessage': {'title': 'SystemMessage',\n",
|
|
" 'description': 'A Message for priming AI behavior, usually passed in as the first of a sequence\\nof input messages.',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'content': {'title': 'Content',\n",
|
|
" 'anyOf': [{'type': 'string'},\n",
|
|
" {'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},\n",
|
|
" 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'system',\n",
|
|
" 'enum': ['system'],\n",
|
|
" 'type': 'string'}},\n",
|
|
" 'required': ['content']},\n",
|
|
" 'FunctionMessage': {'title': 'FunctionMessage',\n",
|
|
" 'description': 'A Message for passing the result of executing a function back to a model.',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'content': {'title': 'Content',\n",
|
|
" 'anyOf': [{'type': 'string'},\n",
|
|
" {'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},\n",
|
|
" 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'function',\n",
|
|
" 'enum': ['function'],\n",
|
|
" 'type': 'string'},\n",
|
|
" 'name': {'title': 'Name', 'type': 'string'}},\n",
|
|
" 'required': ['content', 'name']},\n",
|
|
" 'ToolMessage': {'title': 'ToolMessage',\n",
|
|
" 'description': 'A Message for passing the result of executing a tool back to a model.',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'content': {'title': 'Content',\n",
|
|
" 'anyOf': [{'type': 'string'},\n",
|
|
" {'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},\n",
|
|
" 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'tool',\n",
|
|
" 'enum': ['tool'],\n",
|
|
" 'type': 'string'},\n",
|
|
" 'tool_call_id': {'title': 'Tool Call Id', 'type': 'string'}},\n",
|
|
" 'required': ['content', 'tool_call_id']},\n",
|
|
" 'ChatPromptValueConcrete': {'title': 'ChatPromptValueConcrete',\n",
|
|
" 'description': 'Chat prompt value which explicitly lists out the message types it accepts.\\nFor use in external schemas.',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'messages': {'title': 'Messages',\n",
|
|
" 'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'$ref': '#/definitions/AIMessage'},\n",
|
|
" {'$ref': '#/definitions/HumanMessage'},\n",
|
|
" {'$ref': '#/definitions/ChatMessage'},\n",
|
|
" {'$ref': '#/definitions/SystemMessage'},\n",
|
|
" {'$ref': '#/definitions/FunctionMessage'},\n",
|
|
" {'$ref': '#/definitions/ToolMessage'}]}},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'ChatPromptValueConcrete',\n",
|
|
" 'enum': ['ChatPromptValueConcrete'],\n",
|
|
" 'type': 'string'}},\n",
|
|
" 'required': ['messages']}}}"
|
|
]
|
|
},
|
|
"execution_count": 4,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"model.input_schema.schema()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "5059a5dc-d544-4add-85bd-78a3f2b78b9a",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Output Schema\n",
|
|
"\n",
|
|
"A description of the outputs produced by a Runnable.\n",
|
|
"This is a Pydantic model dynamically generated from the structure of any Runnable.\n",
|
|
"You can call `.schema()` on it to obtain a JSONSchema representation."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 5,
|
|
"id": "a0e41fd3-77d8-4911-af6a-d4d3aad5f77b",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"{'title': 'ChatOpenAIOutput',\n",
|
|
" 'anyOf': [{'$ref': '#/definitions/AIMessage'},\n",
|
|
" {'$ref': '#/definitions/HumanMessage'},\n",
|
|
" {'$ref': '#/definitions/ChatMessage'},\n",
|
|
" {'$ref': '#/definitions/SystemMessage'},\n",
|
|
" {'$ref': '#/definitions/FunctionMessage'},\n",
|
|
" {'$ref': '#/definitions/ToolMessage'}],\n",
|
|
" 'definitions': {'AIMessage': {'title': 'AIMessage',\n",
|
|
" 'description': 'A Message from an AI.',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'content': {'title': 'Content',\n",
|
|
" 'anyOf': [{'type': 'string'},\n",
|
|
" {'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},\n",
|
|
" 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'ai',\n",
|
|
" 'enum': ['ai'],\n",
|
|
" 'type': 'string'},\n",
|
|
" 'example': {'title': 'Example', 'default': False, 'type': 'boolean'}},\n",
|
|
" 'required': ['content']},\n",
|
|
" 'HumanMessage': {'title': 'HumanMessage',\n",
|
|
" 'description': 'A Message from a human.',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'content': {'title': 'Content',\n",
|
|
" 'anyOf': [{'type': 'string'},\n",
|
|
" {'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},\n",
|
|
" 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'human',\n",
|
|
" 'enum': ['human'],\n",
|
|
" 'type': 'string'},\n",
|
|
" 'example': {'title': 'Example', 'default': False, 'type': 'boolean'}},\n",
|
|
" 'required': ['content']},\n",
|
|
" 'ChatMessage': {'title': 'ChatMessage',\n",
|
|
" 'description': 'A Message that can be assigned an arbitrary speaker (i.e. role).',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'content': {'title': 'Content',\n",
|
|
" 'anyOf': [{'type': 'string'},\n",
|
|
" {'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},\n",
|
|
" 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'chat',\n",
|
|
" 'enum': ['chat'],\n",
|
|
" 'type': 'string'},\n",
|
|
" 'role': {'title': 'Role', 'type': 'string'}},\n",
|
|
" 'required': ['content', 'role']},\n",
|
|
" 'SystemMessage': {'title': 'SystemMessage',\n",
|
|
" 'description': 'A Message for priming AI behavior, usually passed in as the first of a sequence\\nof input messages.',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'content': {'title': 'Content',\n",
|
|
" 'anyOf': [{'type': 'string'},\n",
|
|
" {'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},\n",
|
|
" 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'system',\n",
|
|
" 'enum': ['system'],\n",
|
|
" 'type': 'string'}},\n",
|
|
" 'required': ['content']},\n",
|
|
" 'FunctionMessage': {'title': 'FunctionMessage',\n",
|
|
" 'description': 'A Message for passing the result of executing a function back to a model.',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'content': {'title': 'Content',\n",
|
|
" 'anyOf': [{'type': 'string'},\n",
|
|
" {'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},\n",
|
|
" 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'function',\n",
|
|
" 'enum': ['function'],\n",
|
|
" 'type': 'string'},\n",
|
|
" 'name': {'title': 'Name', 'type': 'string'}},\n",
|
|
" 'required': ['content', 'name']},\n",
|
|
" 'ToolMessage': {'title': 'ToolMessage',\n",
|
|
" 'description': 'A Message for passing the result of executing a tool back to a model.',\n",
|
|
" 'type': 'object',\n",
|
|
" 'properties': {'content': {'title': 'Content',\n",
|
|
" 'anyOf': [{'type': 'string'},\n",
|
|
" {'type': 'array',\n",
|
|
" 'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]}}]},\n",
|
|
" 'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
|
|
" 'type': {'title': 'Type',\n",
|
|
" 'default': 'tool',\n",
|
|
" 'enum': ['tool'],\n",
|
|
" 'type': 'string'},\n",
|
|
" 'tool_call_id': {'title': 'Tool Call Id', 'type': 'string'}},\n",
|
|
" 'required': ['content', 'tool_call_id']}}}"
|
|
]
|
|
},
|
|
"execution_count": 5,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"# The output schema of the chain is the output schema of its last part, in this case a ChatModel, which outputs a ChatMessage\n",
|
|
"chain.output_schema.schema()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "daf2b2b2",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Stream"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 6,
|
|
"id": "bea9639d",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Sure, here's a bear-themed joke for you:\n",
|
|
"\n",
|
|
"Why don't bears wear shoes?\n",
|
|
"\n",
|
|
"Because they already have bear feet!"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"for s in chain.stream({\"topic\": \"bears\"}):\n",
|
|
" print(s.content, end=\"\", flush=True)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "cbf1c782",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Invoke"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 7,
|
|
"id": "470e483f",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"AIMessage(content=\"Why don't bears wear shoes? \\n\\nBecause they have bear feet!\")"
|
|
]
|
|
},
|
|
"execution_count": 7,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"chain.invoke({\"topic\": \"bears\"})"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "88f0c279",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Batch"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 8,
|
|
"id": "9685de67",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"[AIMessage(content=\"Sure, here's a bear joke for you:\\n\\nWhy don't bears wear shoes?\\n\\nBecause they already have bear feet!\"),\n",
|
|
" AIMessage(content=\"Why don't cats play poker in the wild?\\n\\nToo many cheetahs!\")]"
|
|
]
|
|
},
|
|
"execution_count": 8,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"chain.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}])"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "2434ab15",
|
|
"metadata": {},
|
|
"source": [
|
|
"You can set the number of concurrent requests by using the `max_concurrency` parameter"
|
|
]
|
|
},
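`max_concurrency` caps how many inputs are processed at once. Conceptually this is the semaphore pattern: gate each per-input call so that at most N run concurrently. A dependency-free sketch (the `call_model` coroutine is a hypothetical stand-in for one chain invocation):

```python
import asyncio
from typing import List


async def call_model(topic: str) -> str:
    # Hypothetical stand-in for one chain invocation.
    await asyncio.sleep(0.01)
    return f"joke about {topic}"


async def bounded_batch(topics: List[str], max_concurrency: int) -> List[str]:
    # A semaphore limits how many calls run at the same time,
    # mirroring what the `max_concurrency` config option does.
    sem = asyncio.Semaphore(max_concurrency)

    async def one(topic: str) -> str:
        async with sem:
            return await call_model(topic)

    return await asyncio.gather(*(one(t) for t in topics))


print(asyncio.run(bounded_batch(["bears", "cats", "dogs"], max_concurrency=2)))
```

Results come back in input order because `asyncio.gather` preserves the order of its awaitables regardless of completion order.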
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 9,
|
|
"id": "a08522f6",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"[AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\"),\n",
|
|
" AIMessage(content=\"Why don't cats play poker in the wild? Too many cheetahs!\")]"
|
|
]
|
|
},
|
|
"execution_count": 9,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"chain.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}], config={\"max_concurrency\": 5})"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "b960cbfe",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Async Stream"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 10,
|
|
"id": "ea35eee4",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Why don't bears wear shoes?\n",
|
|
"\n",
|
|
"Because they have bear feet!"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"async for s in chain.astream({\"topic\": \"bears\"}):\n",
|
|
" print(s.content, end=\"\", flush=True)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "04cb3324",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Async Invoke"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 11,
|
|
"id": "ef8c9b20",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"AIMessage(content=\"Why don't bears ever wear shoes?\\n\\nBecause they already have bear feet!\")"
|
|
]
|
|
},
|
|
"execution_count": 11,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"await chain.ainvoke({\"topic\": \"bears\"})"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "3da288d5",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Async Batch"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 12,
|
|
"id": "eba2a103",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"[AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\")]"
|
|
]
|
|
},
|
|
"execution_count": 12,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"await chain.abatch([{\"topic\": \"bears\"}])"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "c2d58e3f-2b2e-4dac-820b-5e9c263b1868",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Async Stream Events (beta)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "53d365e5-dc14-4bb7-aa6a-7762c3af16a4",
|
|
"metadata": {},
|
|
"source": [
|
|
"Event Streaming is a **beta** API, and may change a bit based on feedback.\n",
|
|
"\n",
|
|
"Note: Introduced in langchain-core 0.2.0\n",
|
|
"\n",
|
|
"For now, when using the astream_events API, for everything to work properly please:\n",
|
|
"\n",
|
|
"* Use `async` throughout the code (including async tools etc)\n",
|
|
"* Propagate callbacks if defining custom functions / runnables. \n",
|
|
"* Whenever using runnables without LCEL, make sure to call `.astream()` on LLMs rather than `.ainvoke` to force the LLM to stream tokens.\n",
|
|
"\n",
|
|
"### Event Reference\n",
|
|
"\n",
|
|
"\n",
|
|
"Here is a reference table that shows some events that might be emitted by the various Runnable objects.\n",
|
|
"Definitions for some of the Runnable are included after the table.\n",
|
|
"\n",
|
|
"⚠️ When streaming the inputs for the runnable will not be available until the input stream has been entirely consumed This means that the inputs will be available at for the corresponding `end` hook rather than `start` event.\n",
|
|
"\n",
|
|
"\n",
|
|
"| event | name | chunk | input | output |\n",
|
|
"|----------------------|------------------|---------------------------------|-----------------------------------------------|-------------------------------------------------|\n",
|
|
"| on_chat_model_start | [model name] | | {\"messages\": [[SystemMessage, HumanMessage]]} | |\n",
|
|
"| on_chat_model_stream | [model name] | AIMessageChunk(content=\"hello\") | | |\n",
|
|
"| on_chat_model_end | [model name] | | {\"messages\": [[SystemMessage, HumanMessage]]} | {\"generations\": [...], \"llm_output\": None, ...} |\n",
|
|
"| on_llm_start | [model name] | | {'input': 'hello'} | |\n",
|
|
"| on_llm_stream | [model name] | 'Hello' | | |\n",
|
|
"| on_llm_end | [model name] | | 'Hello human!' |\n",
|
|
"| on_chain_start | format_docs | | | |\n",
|
|
"| on_chain_stream | format_docs | \"hello world!, goodbye world!\" | | |\n",
|
|
"| on_chain_end | format_docs | | [Document(...)] | \"hello world!, goodbye world!\" |\n",
|
|
"| on_tool_start | some_tool | | {\"x\": 1, \"y\": \"2\"} | |\n",
|
|
"| on_tool_stream | some_tool | {\"x\": 1, \"y\": \"2\"} | | |\n",
|
|
"| on_tool_end | some_tool | | | {\"x\": 1, \"y\": \"2\"} |\n",
|
|
"| on_retriever_start | [retriever name] | | {\"query\": \"hello\"} | |\n",
|
|
"| on_retriever_chunk | [retriever name] | {documents: [...]} | | |\n",
|
|
"| on_retriever_end | [retriever name] | | {\"query\": \"hello\"} | {documents: [...]} |\n",
|
|
"| on_prompt_start | [template_name] | | {\"question\": \"hello\"} | |\n",
|
|
"| on_prompt_end | [template_name] | | {\"question\": \"hello\"} | ChatPromptValue(messages: [SystemMessage, ...]) |\n",
|
|
"\n",
|
|
"\n",
|
|
"Here are declarations associated with the events shown above:\n",
|
|
"\n",
|
|
"`format_docs`:\n",
|
|
"\n",
|
|
"```python\n",
|
|
"def format_docs(docs: List[Document]) -> str:\n",
|
|
" '''Format the docs.'''\n",
|
|
" return \", \".join([doc.page_content for doc in docs])\n",
|
|
"\n",
|
|
"format_docs = RunnableLambda(format_docs)\n",
|
|
"```\n",
|
|
"\n",
|
|
"`some_tool`:\n",
|
|
"\n",
|
|
"```python\n",
|
|
"@tool\n",
|
|
"def some_tool(x: int, y: str) -> dict:\n",
|
|
" '''Some_tool.'''\n",
|
|
" return {\"x\": x, \"y\": y}\n",
|
|
"```\n",
|
|
"\n",
|
|
"`prompt`:\n",
|
|
"\n",
|
|
"```python\n",
|
|
"template = ChatPromptTemplate.from_messages(\n",
|
|
" [(\"system\", \"You are Cat Agent 007\"), (\"human\", \"{question}\")]\n",
|
|
").with_config({\"run_name\": \"my_template\", \"tags\": [\"my_template\"]})\n",
|
|
"```\n",
|
|
"\n"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "108cf792-a372-4626-bbef-9d7be23dde33",
|
|
"metadata": {},
|
|
"source": [
|
|
"Let's define a new chain to make it more interesting to show off the `astream_events` interface (and later the `astream_log` interface)."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 13,
|
|
"id": "92eeb4da-0aae-457b-bd8f-8c35a024d4d1",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"from langchain_community.vectorstores import FAISS\n",
|
|
"from langchain_core.output_parsers import StrOutputParser\n",
|
|
"from langchain_core.runnables import RunnablePassthrough\n",
|
|
"from langchain_openai import OpenAIEmbeddings\n",
|
|
"\n",
|
|
"template = \"\"\"Answer the question based only on the following context:\n",
|
|
"{context}\n",
|
|
"\n",
|
|
"Question: {question}\n",
|
|
"\"\"\"\n",
|
|
"prompt = ChatPromptTemplate.from_template(template)\n",
|
|
"\n",
|
|
"vectorstore = FAISS.from_texts(\n",
|
|
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
|
|
")\n",
|
|
"retriever = vectorstore.as_retriever()\n",
|
|
"\n",
|
|
"retrieval_chain = (\n",
|
|
" {\n",
|
|
" \"context\": retriever.with_config(run_name=\"Docs\"),\n",
|
|
" \"question\": RunnablePassthrough(),\n",
|
|
" }\n",
|
|
" | prompt\n",
|
|
" | model.with_config(run_name=\"my_llm\")\n",
|
|
" | StrOutputParser()\n",
|
|
")"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "1167e8f2-cab7-45b4-8922-7518b58a7d8d",
|
|
"metadata": {},
|
|
"source": [
|
|
"Now let's use `astream_events` to get events from the retriever and the LLM."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 14,
|
|
"id": "0742d723-5b00-4a44-961e-dd4a3ec6d557",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: This API is in beta and may change in the future.\n",
|
|
" warn_beta(\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"--\n",
|
|
"Retrieved the following documents:\n",
|
|
"[Document(page_content='harrison worked at kensho')]\n",
|
|
"\n",
|
|
"Streaming LLM:\n",
|
|
"|H|arrison| worked| at| Kens|ho|.||\n",
|
|
"Done streaming LLM.\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"async for event in retrieval_chain.astream_events(\n",
|
|
" \"where did harrison work?\", version=\"v1\", include_names=[\"Docs\", \"my_llm\"]\n",
|
|
"):\n",
|
|
" kind = event[\"event\"]\n",
|
|
" if kind == \"on_chat_model_stream\":\n",
|
|
" print(event[\"data\"][\"chunk\"].content, end=\"|\")\n",
|
|
" elif kind in {\"on_chat_model_start\"}:\n",
|
|
" print()\n",
|
|
" print(\"Streaming LLM:\")\n",
|
|
" elif kind in {\"on_chat_model_end\"}:\n",
|
|
" print()\n",
|
|
" print(\"Done streaming LLM.\")\n",
|
|
" elif kind == \"on_retriever_end\":\n",
|
|
" print(\"--\")\n",
|
|
" print(\"Retrieved the following documents:\")\n",
|
|
" print(event[\"data\"][\"output\"][\"documents\"])\n",
|
|
" elif kind == \"on_tool_end\":\n",
|
|
" print(f\"Ended tool: {event['name']}\")\n",
|
|
" else:\n",
|
|
" pass"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "f9cef104",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Async Stream Intermediate Steps\n",
|
|
"\n",
|
|
"All runnables also have a method `.astream_log()` which is used to stream (as they happen) all or part of the intermediate steps of your chain/sequence. \n",
|
|
"\n",
|
|
"This is useful to show progress to the user, to use intermediate results, or to debug your chain.\n",
|
|
"\n",
|
|
"You can stream all steps (default) or include/exclude steps by name, tags or metadata.\n",
|
|
"\n",
|
|
"This method yields [JSONPatch](https://jsonpatch.com) ops that when applied in the same order as received build up the RunState.\n",
|
|
"\n",
|
|
"```python\n",
|
|
"class LogEntry(TypedDict):\n",
|
|
" id: str\n",
|
|
" \"\"\"ID of the sub-run.\"\"\"\n",
|
|
" name: str\n",
|
|
" \"\"\"Name of the object being run.\"\"\"\n",
|
|
" type: str\n",
|
|
" \"\"\"Type of the object being run, eg. prompt, chain, llm, etc.\"\"\"\n",
|
|
" tags: List[str]\n",
|
|
" \"\"\"List of tags for the run.\"\"\"\n",
|
|
" metadata: Dict[str, Any]\n",
|
|
" \"\"\"Key-value pairs of metadata for the run.\"\"\"\n",
|
|
" start_time: str\n",
|
|
" \"\"\"ISO-8601 timestamp of when the run started.\"\"\"\n",
|
|
"\n",
|
|
" streamed_output_str: List[str]\n",
|
|
" \"\"\"List of LLM tokens streamed by this run, if applicable.\"\"\"\n",
|
|
" final_output: Optional[Any]\n",
|
|
" \"\"\"Final output of this run.\n",
|
|
" Only available after the run has finished successfully.\"\"\"\n",
|
|
" end_time: Optional[str]\n",
|
|
" \"\"\"ISO-8601 timestamp of when the run ended.\n",
|
|
" Only available after the run has finished.\"\"\"\n",
|
|
"\n",
|
|
"\n",
|
|
"class RunState(TypedDict):\n",
|
|
" id: str\n",
|
|
" \"\"\"ID of the run.\"\"\"\n",
|
|
" streamed_output: List[Any]\n",
|
|
" \"\"\"List of output chunks streamed by Runnable.stream()\"\"\"\n",
|
|
" final_output: Optional[Any]\n",
|
|
" \"\"\"Final output of the run, usually the result of aggregating (`+`) streamed_output.\n",
|
|
" Only available after the run has finished successfully.\"\"\"\n",
|
|
"\n",
|
|
" logs: Dict[str, LogEntry]\n",
|
|
" \"\"\"Map of run names to sub-runs. If filters were supplied, this list will\n",
|
|
" contain only the runs that matched the filters.\"\"\"\n",
|
|
"```"
|
|
]
|
|
},
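The `RunLogPatch` objects emitted by `.astream_log()` carry plain JSONPatch ops. A minimal sketch of applying them to rebuild the run state, covering only the `add` and `replace` op types that appear in this notebook's output (including the RFC 6902 `/-` convention for appending to a list):

```python
from typing import Any, Dict, List


def apply_patch(state: Dict[str, Any], ops: List[Dict[str, Any]]) -> Dict[str, Any]:
    # Minimal JSONPatch applier covering only the 'add' and 'replace'
    # ops emitted by astream_log; a trailing '-' appends to a list.
    for op in ops:
        parts = [p for p in op["path"].split("/") if p]
        if not parts:
            state = op["value"]  # 'replace' at the root path ''
            continue
        target = state
        for key in parts[:-1]:
            target = target[key]
        last = parts[-1]
        if last == "-":
            target.append(op["value"])
        else:
            target[last] = op["value"]
    return state


state: Dict[str, Any] = {}
state = apply_patch(
    state,
    [{"op": "replace", "path": "", "value": {"final_output": None, "logs": {}, "streamed_output": []}}],
)
state = apply_patch(
    state,
    [
        {"op": "add", "path": "/streamed_output/-", "value": "H"},
        {"op": "replace", "path": "/final_output", "value": "H"},
    ],
)
print(state)  # {'final_output': 'H', 'logs': {}, 'streamed_output': ['H']}
```

For production use, the `jsonpatch` package (which LangChain itself depends on) implements the full RFC 6902 op set.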
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "a146a5df-25be-4fa2-a7e4-df8ebe55a35e",
|
|
"metadata": {},
|
|
"source": [
|
|
"### Streaming JSONPatch chunks\n",
|
|
"\n",
|
|
"This is useful eg. to stream the `JSONPatch` in an HTTP server, and then apply the ops on the client to rebuild the run state there. See [LangServe](https://github.com/langchain-ai/langserve) for tooling to make it easier to build a webserver from any Runnable."
|
|
]
|
|
},
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "21c9019e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "----------------------------------------\n",
      "RunLogPatch({'op': 'replace',\n",
      "             'path': '',\n",
      "             'value': {'final_output': None,\n",
      "                       'id': '82e9b4b1-3dd6-4732-8db9-90e79c4da48c',\n",
      "                       'logs': {},\n",
      "                       'name': 'RunnableSequence',\n",
      "                       'streamed_output': [],\n",
      "                       'type': 'chain'}})\n",
      "----------------------------------------\n",
      "RunLogPatch({'op': 'add',\n",
      "             'path': '/logs/Docs',\n",
      "             'value': {'end_time': None,\n",
      "                       'final_output': None,\n",
      "                       'id': '9206e94a-57bd-48ee-8c5e-fdd1c52a6da2',\n",
      "                       'metadata': {},\n",
      "                       'name': 'Docs',\n",
      "                       'start_time': '2024-01-19T22:33:55.902+00:00',\n",
      "                       'streamed_output': [],\n",
      "                       'streamed_output_str': [],\n",
      "                       'tags': ['map:key:context', 'FAISS', 'OpenAIEmbeddings'],\n",
      "                       'type': 'retriever'}})\n",
      "----------------------------------------\n",
      "RunLogPatch({'op': 'add',\n",
      "             'path': '/logs/Docs/final_output',\n",
      "             'value': {'documents': [Document(page_content='harrison worked at kensho')]}},\n",
      "            {'op': 'add',\n",
      "             'path': '/logs/Docs/end_time',\n",
      "             'value': '2024-01-19T22:33:56.064+00:00'})\n",
      "----------------------------------------\n",
      "RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ''},\n",
      "            {'op': 'replace', 'path': '/final_output', 'value': ''})\n",
      "----------------------------------------\n",
      "RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': 'H'},\n",
      "            {'op': 'replace', 'path': '/final_output', 'value': 'H'})\n",
      "----------------------------------------\n",
      "RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': 'arrison'},\n",
      "            {'op': 'replace', 'path': '/final_output', 'value': 'Harrison'})\n",
      "----------------------------------------\n",
      "RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' worked'},\n",
      "            {'op': 'replace', 'path': '/final_output', 'value': 'Harrison worked'})\n",
      "----------------------------------------\n",
      "RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' at'},\n",
      "            {'op': 'replace', 'path': '/final_output', 'value': 'Harrison worked at'})\n",
      "----------------------------------------\n",
      "RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' Kens'},\n",
      "            {'op': 'replace', 'path': '/final_output', 'value': 'Harrison worked at Kens'})\n",
      "----------------------------------------\n",
      "RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': 'ho'},\n",
      "            {'op': 'replace',\n",
      "             'path': '/final_output',\n",
      "             'value': 'Harrison worked at Kensho'})\n",
      "----------------------------------------\n",
      "RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': '.'},\n",
      "            {'op': 'replace',\n",
      "             'path': '/final_output',\n",
      "             'value': 'Harrison worked at Kensho.'})\n",
      "----------------------------------------\n",
      "RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ''})\n"
     ]
    }
   ],
   "source": [
    "async for chunk in retrieval_chain.astream_log(\n",
    "    \"where did harrison work?\", include_names=[\"Docs\"]\n",
    "):\n",
    "    print(\"-\" * 40)\n",
    "    print(chunk)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "19570f36-7126-4fe2-b209-0cc6178b4582",
   "metadata": {},
   "source": [
    "### Streaming the incremental RunState\n",
    "\n",
    "You can simply pass `diff=False` to get incremental values of `RunState` instead of patches.\n",
    "This produces more verbose output, since each chunk repeats the parts of the state that haven't changed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "5c26b731-b4eb-4967-a42a-dec813249ecb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "----------------------------------------------------------------------\n",
      "RunLog({'final_output': None,\n",
      "        'id': '431d1c55-7c50-48ac-b3a2-2f5ba5f35172',\n",
      "        'logs': {},\n",
      "        'name': 'RunnableSequence',\n",
      "        'streamed_output': [],\n",
      "        'type': 'chain'})\n",
      "----------------------------------------------------------------------\n",
      "RunLog({'final_output': None,\n",
      "        'id': '431d1c55-7c50-48ac-b3a2-2f5ba5f35172',\n",
      "        'logs': {'Docs': {'end_time': None,\n",
      "                          'final_output': None,\n",
      "                          'id': '8de10b49-d6af-4cb7-a4e7-fbadf6efa01e',\n",
      "                          'metadata': {},\n",
      "                          'name': 'Docs',\n",
      "                          'start_time': '2024-01-19T22:33:56.939+00:00',\n",
      "                          'streamed_output': [],\n",
      "                          'streamed_output_str': [],\n",
      "                          'tags': ['map:key:context', 'FAISS', 'OpenAIEmbeddings'],\n",
      "                          'type': 'retriever'}},\n",
      "        'name': 'RunnableSequence',\n",
      "        'streamed_output': [],\n",
      "        'type': 'chain'})\n",
      "----------------------------------------------------------------------\n",
      "RunLog({'final_output': None,\n",
      "        'id': '431d1c55-7c50-48ac-b3a2-2f5ba5f35172',\n",
      "        'logs': {'Docs': {'end_time': '2024-01-19T22:33:57.120+00:00',\n",
      "                          'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},\n",
      "                          'id': '8de10b49-d6af-4cb7-a4e7-fbadf6efa01e',\n",
      "                          'metadata': {},\n",
      "                          'name': 'Docs',\n",
      "                          'start_time': '2024-01-19T22:33:56.939+00:00',\n",
      "                          'streamed_output': [],\n",
      "                          'streamed_output_str': [],\n",
      "                          'tags': ['map:key:context', 'FAISS', 'OpenAIEmbeddings'],\n",
      "                          'type': 'retriever'}},\n",
      "        'name': 'RunnableSequence',\n",
      "        'streamed_output': [],\n",
      "        'type': 'chain'})\n",
      "----------------------------------------------------------------------\n",
      "RunLog({'final_output': '',\n",
      "        'id': '431d1c55-7c50-48ac-b3a2-2f5ba5f35172',\n",
      "        'logs': {'Docs': {'end_time': '2024-01-19T22:33:57.120+00:00',\n",
      "                          'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},\n",
      "                          'id': '8de10b49-d6af-4cb7-a4e7-fbadf6efa01e',\n",
      "                          'metadata': {},\n",
      "                          'name': 'Docs',\n",
      "                          'start_time': '2024-01-19T22:33:56.939+00:00',\n",
      "                          'streamed_output': [],\n",
      "                          'streamed_output_str': [],\n",
      "                          'tags': ['map:key:context', 'FAISS', 'OpenAIEmbeddings'],\n",
      "                          'type': 'retriever'}},\n",
      "        'name': 'RunnableSequence',\n",
      "        'streamed_output': [''],\n",
      "        'type': 'chain'})\n",
"----------------------------------------------------------------------\n",
|
|
"RunLog({'final_output': 'H',\n",
|
|
" 'id': '431d1c55-7c50-48ac-b3a2-2f5ba5f35172',\n",
|
|
" 'logs': {'Docs': {'end_time': '2024-01-19T22:33:57.120+00:00',\n",
|
|
" 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},\n",
|
|
" 'id': '8de10b49-d6af-4cb7-a4e7-fbadf6efa01e',\n",
|
|
" 'metadata': {},\n",
|
|
" 'name': 'Docs',\n",
|
|
" 'start_time': '2024-01-19T22:33:56.939+00:00',\n",
|
|
" 'streamed_output': [],\n",
|
|
" 'streamed_output_str': [],\n",
|
|
" 'tags': ['map:key:context', 'FAISS', 'OpenAIEmbeddings'],\n",
|
|
" 'type': 'retriever'}},\n",
|
|
" 'name': 'RunnableSequence',\n",
|
|
" 'streamed_output': ['', 'H'],\n",
|
|
" 'type': 'chain'})\n",
|
|
"----------------------------------------------------------------------\n",
|
|
"RunLog({'final_output': 'Harrison',\n",
|
|
" 'id': '431d1c55-7c50-48ac-b3a2-2f5ba5f35172',\n",
|
|
" 'logs': {'Docs': {'end_time': '2024-01-19T22:33:57.120+00:00',\n",
|
|
" 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},\n",
|
|
" 'id': '8de10b49-d6af-4cb7-a4e7-fbadf6efa01e',\n",
|
|
" 'metadata': {},\n",
|
|
" 'name': 'Docs',\n",
|
|
" 'start_time': '2024-01-19T22:33:56.939+00:00',\n",
|
|
" 'streamed_output': [],\n",
|
|
" 'streamed_output_str': [],\n",
|
|
" 'tags': ['map:key:context', 'FAISS', 'OpenAIEmbeddings'],\n",
|
|
" 'type': 'retriever'}},\n",
|
|
" 'name': 'RunnableSequence',\n",
|
|
" 'streamed_output': ['', 'H', 'arrison'],\n",
|
|
" 'type': 'chain'})\n",
|
|
"----------------------------------------------------------------------\n",
|
|
"RunLog({'final_output': 'Harrison worked',\n",
|
|
" 'id': '431d1c55-7c50-48ac-b3a2-2f5ba5f35172',\n",
|
|
" 'logs': {'Docs': {'end_time': '2024-01-19T22:33:57.120+00:00',\n",
|
|
" 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},\n",
|
|
" 'id': '8de10b49-d6af-4cb7-a4e7-fbadf6efa01e',\n",
|
|
" 'metadata': {},\n",
|
|
" 'name': 'Docs',\n",
|
|
" 'start_time': '2024-01-19T22:33:56.939+00:00',\n",
|
|
" 'streamed_output': [],\n",
|
|
" 'streamed_output_str': [],\n",
|
|
" 'tags': ['map:key:context', 'FAISS', 'OpenAIEmbeddings'],\n",
|
|
" 'type': 'retriever'}},\n",
|
|
" 'name': 'RunnableSequence',\n",
|
|
" 'streamed_output': ['', 'H', 'arrison', ' worked'],\n",
|
|
" 'type': 'chain'})\n",
|
|
"----------------------------------------------------------------------\n",
|
|
"RunLog({'final_output': 'Harrison worked at',\n",
|
|
" 'id': '431d1c55-7c50-48ac-b3a2-2f5ba5f35172',\n",
|
|
" 'logs': {'Docs': {'end_time': '2024-01-19T22:33:57.120+00:00',\n",
|
|
" 'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},\n",
|
|
" 'id': '8de10b49-d6af-4cb7-a4e7-fbadf6efa01e',\n",
|
|
" 'metadata': {},\n",
|
|
" 'name': 'Docs',\n",
|
|
" 'start_time': '2024-01-19T22:33:56.939+00:00',\n",
|
|
" 'streamed_output': [],\n",
|
|
" 'streamed_output_str': [],\n",
|
|
" 'tags': ['map:key:context', 'FAISS', 'OpenAIEmbeddings'],\n",
|
|
" 'type': 'retriever'}},\n",
|
|
" 'name': 'RunnableSequence',\n",
|
|
" 'streamed_output': ['', 'H', 'arrison', ' worked', ' at'],\n",
|
|
" 'type': 'chain'})\n",
|
|
"----------------------------------------------------------------------\n",
      "RunLog({'final_output': 'Harrison worked at Kens',\n",
      "        'id': '431d1c55-7c50-48ac-b3a2-2f5ba5f35172',\n",
      "        'logs': {'Docs': {'end_time': '2024-01-19T22:33:57.120+00:00',\n",
      "                          'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},\n",
      "                          'id': '8de10b49-d6af-4cb7-a4e7-fbadf6efa01e',\n",
      "                          'metadata': {},\n",
      "                          'name': 'Docs',\n",
      "                          'start_time': '2024-01-19T22:33:56.939+00:00',\n",
      "                          'streamed_output': [],\n",
      "                          'streamed_output_str': [],\n",
      "                          'tags': ['map:key:context', 'FAISS', 'OpenAIEmbeddings'],\n",
      "                          'type': 'retriever'}},\n",
      "        'name': 'RunnableSequence',\n",
      "        'streamed_output': ['', 'H', 'arrison', ' worked', ' at', ' Kens'],\n",
      "        'type': 'chain'})\n",
      "----------------------------------------------------------------------\n",
      "RunLog({'final_output': 'Harrison worked at Kensho',\n",
      "        'id': '431d1c55-7c50-48ac-b3a2-2f5ba5f35172',\n",
      "        'logs': {'Docs': {'end_time': '2024-01-19T22:33:57.120+00:00',\n",
      "                          'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},\n",
      "                          'id': '8de10b49-d6af-4cb7-a4e7-fbadf6efa01e',\n",
      "                          'metadata': {},\n",
      "                          'name': 'Docs',\n",
      "                          'start_time': '2024-01-19T22:33:56.939+00:00',\n",
      "                          'streamed_output': [],\n",
      "                          'streamed_output_str': [],\n",
      "                          'tags': ['map:key:context', 'FAISS', 'OpenAIEmbeddings'],\n",
      "                          'type': 'retriever'}},\n",
      "        'name': 'RunnableSequence',\n",
      "        'streamed_output': ['', 'H', 'arrison', ' worked', ' at', ' Kens', 'ho'],\n",
      "        'type': 'chain'})\n",
      "----------------------------------------------------------------------\n",
      "RunLog({'final_output': 'Harrison worked at Kensho.',\n",
      "        'id': '431d1c55-7c50-48ac-b3a2-2f5ba5f35172',\n",
      "        'logs': {'Docs': {'end_time': '2024-01-19T22:33:57.120+00:00',\n",
      "                          'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},\n",
      "                          'id': '8de10b49-d6af-4cb7-a4e7-fbadf6efa01e',\n",
      "                          'metadata': {},\n",
      "                          'name': 'Docs',\n",
      "                          'start_time': '2024-01-19T22:33:56.939+00:00',\n",
      "                          'streamed_output': [],\n",
      "                          'streamed_output_str': [],\n",
      "                          'tags': ['map:key:context', 'FAISS', 'OpenAIEmbeddings'],\n",
      "                          'type': 'retriever'}},\n",
      "        'name': 'RunnableSequence',\n",
      "        'streamed_output': ['', 'H', 'arrison', ' worked', ' at', ' Kens', 'ho', '.'],\n",
      "        'type': 'chain'})\n",
      "----------------------------------------------------------------------\n",
      "RunLog({'final_output': 'Harrison worked at Kensho.',\n",
      "        'id': '431d1c55-7c50-48ac-b3a2-2f5ba5f35172',\n",
      "        'logs': {'Docs': {'end_time': '2024-01-19T22:33:57.120+00:00',\n",
      "                          'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},\n",
      "                          'id': '8de10b49-d6af-4cb7-a4e7-fbadf6efa01e',\n",
      "                          'metadata': {},\n",
      "                          'name': 'Docs',\n",
      "                          'start_time': '2024-01-19T22:33:56.939+00:00',\n",
      "                          'streamed_output': [],\n",
      "                          'streamed_output_str': [],\n",
      "                          'tags': ['map:key:context', 'FAISS', 'OpenAIEmbeddings'],\n",
      "                          'type': 'retriever'}},\n",
      "        'name': 'RunnableSequence',\n",
      "        'streamed_output': ['',\n",
      "                            'H',\n",
      "                            'arrison',\n",
      "                            ' worked',\n",
      "                            ' at',\n",
      "                            ' Kens',\n",
      "                            'ho',\n",
      "                            '.',\n",
      "                            ''],\n",
      "        'type': 'chain'})\n"
     ]
    }
   ],
   "source": [
    "async for chunk in retrieval_chain.astream_log(\n",
    "    \"where did harrison work?\", include_names=[\"Docs\"], diff=False\n",
    "):\n",
    "    print(\"-\" * 70)\n",
    "    print(chunk)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7006f1aa",
   "metadata": {},
   "source": [
    "## Parallelism\n",
    "\n",
    "Let's take a look at how LangChain Expression Language supports parallel requests.\n",
    "For example, when using a `RunnableParallel` (often written as a dictionary), it executes each element in parallel."
   ]
  },
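  {
   "cell_type": "markdown",
   "id": "parallel-sketch-note",
   "metadata": {},
   "source": [
    "Conceptually (this sketch is an illustration in plain Python, not LangChain's actual implementation), the sync `invoke` of a `RunnableParallel` behaves like fanning the same input out to every branch on a thread pool and collecting the results into a dict:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "parallel-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "from concurrent.futures import ThreadPoolExecutor\n",
    "\n",
    "\n",
    "def slow_branch(tag):\n",
    "    def run(value):\n",
    "        time.sleep(0.2)  # stand-in for a model call\n",
    "        return f\"{tag}: {value}\"\n",
    "\n",
    "    return run\n",
    "\n",
    "\n",
    "def invoke_parallel(branches, value):\n",
    "    # Run every branch on the same input concurrently; total wall time is\n",
    "    # roughly the slowest branch, not the sum of all branches.\n",
    "    with ThreadPoolExecutor() as pool:\n",
    "        futures = {key: pool.submit(fn, value) for key, fn in branches.items()}\n",
    "        return {key: fut.result() for key, fut in futures.items()}\n",
    "\n",
    "\n",
    "invoke_parallel({\"joke\": slow_branch(\"joke\"), \"poem\": slow_branch(\"poem\")}, \"bears\")"
   ]
  },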
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "0a1c409d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.runnables import RunnableParallel\n",
    "\n",
    "chain1 = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | model\n",
    "chain2 = (\n",
    "    ChatPromptTemplate.from_template(\"write a short (2 line) poem about {topic}\")\n",
    "    | model\n",
    ")\n",
    "combined = RunnableParallel(joke=chain1, poem=chain2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "08044c0a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 18 ms, sys: 1.27 ms, total: 19.3 ms\n",
      "Wall time: 692 ms\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they already have bear feet!\")"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "%%time\n",
    "chain1.invoke({\"topic\": \"bears\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "22c56804",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 10.5 ms, sys: 166 µs, total: 10.7 ms\n",
      "Wall time: 579 ms\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"In forest's embrace,\\nMajestic bears pace.\")"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "%%time\n",
    "chain2.invoke({\"topic\": \"bears\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "4fff4cbb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 32 ms, sys: 2.59 ms, total: 34.6 ms\n",
      "Wall time: 816 ms\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'joke': AIMessage(content=\"Sure, here's a bear-related joke for you:\\n\\nWhy did the bear bring a ladder to the bar?\\n\\nBecause he heard the drinks were on the house!\"),\n",
       " 'poem': AIMessage(content=\"In wilderness they roam,\\nMajestic strength, nature's throne.\")}"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "%%time\n",
    "combined.invoke({\"topic\": \"bears\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80164216-0abd-439b-8407-409539e104b6",
   "metadata": {},
   "source": [
    "### Parallelism on batches\n",
    "\n",
    "Parallelism can be combined with other runnables.\n",
    "Let's try to use parallelism with batches."
   ]
  },
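  {
   "cell_type": "markdown",
   "id": "max-concurrency-note",
   "metadata": {},
   "source": [
    "`batch` also accepts a `config` dict; setting `max_concurrency` caps how many inputs are processed at once, which helps with rate-limited APIs. A sketch (not executed here, reusing `chain1` from above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "max-concurrency-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Cap parallelism at 2 concurrent calls via RunnableConfig's max_concurrency.\n",
    "chain1.batch(\n",
    "    [{\"topic\": \"bears\"}, {\"topic\": \"cats\"}, {\"topic\": \"owls\"}],\n",
    "    config={\"max_concurrency\": 2},\n",
    ")"
   ]
  },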
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "f67d2268-c766-441b-8d64-57b8219ccb34",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 17.3 ms, sys: 4.84 ms, total: 22.2 ms\n",
      "Wall time: 628 ms\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\"),\n",
       " AIMessage(content=\"Why don't cats play poker in the wild?\\n\\nToo many cheetahs!\")]"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "%%time\n",
    "chain1.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "83c8d511-9563-403e-9c06-cae986cf5dee",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 15.8 ms, sys: 3.83 ms, total: 19.7 ms\n",
      "Wall time: 718 ms\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[AIMessage(content='In the wild, bears roam,\\nMajestic guardians of ancient home.'),\n",
       " AIMessage(content='Whiskers grace, eyes gleam,\\nCats dance through the moonbeam.')]"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "%%time\n",
    "chain2.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "07a81230-8db8-4b96-bdcb-99ae1d171f2f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 44.8 ms, sys: 3.17 ms, total: 48 ms\n",
      "Wall time: 721 ms\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[{'joke': AIMessage(content=\"Sure, here's a bear joke for you:\\n\\nWhy don't bears wear shoes?\\n\\nBecause they have bear feet!\"),\n",
       "  'poem': AIMessage(content=\"Majestic bears roam,\\nNature's strength, beauty shown.\")},\n",
       " {'joke': AIMessage(content=\"Why don't cats play poker in the wild?\\n\\nToo many cheetahs!\"),\n",
       "  'poem': AIMessage(content=\"Whiskers dance, eyes aglow,\\nCats embrace the night's gentle flow.\")}]"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "%%time\n",
    "combined.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}])"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}