---
sidebar_position: 0
title: Interface
---
In an effort to make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.Runnable.html#langchain.schema.runnable.Runnable) protocol that most components implement. This is a standard interface with a few different methods, which makes it easy to define custom chains and to invoke them in a standard way. The standard interface includes:

- [`stream`](#stream): stream back chunks of the response
- [`invoke`](#invoke): call the chain on an input
- [`batch`](#batch): call the chain on a list of inputs

These also have corresponding async methods:

- [`astream`](#async-stream): stream back chunks of the response async
- [`ainvoke`](#async-invoke): call the chain on an input async
- [`abatch`](#async-batch): call the chain on a list of inputs async
- [`astream_log`](#async-stream-intermediate-steps): stream back intermediate steps as they happen, in addition to the final response

The type of the input varies by component:

| Component | Input Type |
| --- | --- |
| Prompt | Dictionary |
| Retriever | Single string |
| LLM, ChatModel | Single string, list of chat messages or a PromptValue |
| Tool | Single string or dictionary, depending on the tool |
| OutputParser | The output of an LLM or ChatModel |

The output type also varies by component:

| Component | Output Type |
| --- | --- |
| LLM | String |
| ChatModel | ChatMessage |
| Prompt | PromptValue |
| Retriever | List of documents |
| Tool | Depends on the tool |
| OutputParser | Depends on the parser |

All runnables expose properties to inspect the input and output types:

- [`input_schema`](#input-schema): an input Pydantic model auto-generated from the structure of the Runnable
- [`output_schema`](#output-schema): an output Pydantic model auto-generated from the structure of the Runnable

Let's take a look at these methods! To do so, we'll create a super simple PromptTemplate + ChatModel chain.
```python
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
```

```python
model = ChatOpenAI()
```

```python
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
```

```python
chain = prompt | model
```
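As a quick aside on the "easy to define custom chains" point above: a plain Python function can be dropped into the same `|` pipeline and then driven through the standard interface. This is a minimal sketch, not part of the original notebook, assuming `RunnableLambda` is importable from `langchain.schema.runnable`.

```python
# Sketch: wrap an ordinary function so it composes with `|` like any Runnable.
from langchain.schema.runnable import RunnableLambda

shout = RunnableLambda(lambda msg: msg.content.upper())

custom_chain = prompt | model | shout
# custom_chain exposes the same invoke/stream/batch (and async) methods;
# custom_chain.invoke({"topic": "bears"}) would return the joke in upper case.
```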
## Input Schema

A description of the inputs accepted by a Runnable.
This is a Pydantic model dynamically generated from the structure of any Runnable.
You can call `.schema()` on it to obtain a JSONSchema representation.

```python
# The input schema of the chain is the input schema of its first part, the prompt.
chain.input_schema.schema()
```

```
{'title': 'PromptInput',
 'type': 'object',
 'properties': {'topic': {'title': 'Topic', 'type': 'string'}}}
```
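Because the chain's input schema comes from its first step, you can inspect the sub-components directly to see where it originates. A small sketch (not run in the original notebook) using the same property on the individual pieces:

```python
# The prompt is the first step, so its input schema should match the chain's.
prompt.input_schema.schema()

# Every component exposes the same property, e.g. the model's own input schema.
model.input_schema.schema()
```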
## Output Schema

A description of the outputs produced by a Runnable.
This is a Pydantic model dynamically generated from the structure of any Runnable.
You can call `.schema()` on it to obtain a JSONSchema representation.

```python
# The output schema of the chain is the output schema of its last part, in this case a ChatModel, which outputs a ChatMessage
chain.output_schema.schema()
```

```
{'title': 'ChatOpenAIOutput',
 'anyOf': [{'$ref': '#/definitions/HumanMessageChunk'},
  {'$ref': '#/definitions/AIMessageChunk'},
  {'$ref': '#/definitions/ChatMessageChunk'},
  {'$ref': '#/definitions/FunctionMessageChunk'},
  {'$ref': '#/definitions/SystemMessageChunk'}],
 'definitions': {'HumanMessageChunk': {'title': 'HumanMessageChunk',
   'description': 'A Human Message chunk.',
   'type': 'object',
   'properties': {'content': {'title': 'Content', 'type': 'string'},
    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
    'type': {'title': 'Type',
     'default': 'human',
     'enum': ['human'],
     'type': 'string'},
    'example': {'title': 'Example', 'default': False, 'type': 'boolean'},
    'is_chunk': {'title': 'Is Chunk',
     'default': True,
     'enum': [True],
     'type': 'boolean'}},
   'required': ['content']},
  'AIMessageChunk': {'title': 'AIMessageChunk',
   'description': 'A Message chunk from an AI.',
   'type': 'object',
   'properties': {'content': {'title': 'Content', 'type': 'string'},
    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
    'type': {'title': 'Type',
     'default': 'ai',
     'enum': ['ai'],
     'type': 'string'},
    'example': {'title': 'Example', 'default': False, 'type': 'boolean'},
    'is_chunk': {'title': 'Is Chunk',
     'default': True,
     'enum': [True],
     'type': 'boolean'}},
   'required': ['content']},
  'ChatMessageChunk': {'title': 'ChatMessageChunk',
   'description': 'A Chat Message chunk.',
   'type': 'object',
   'properties': {'content': {'title': 'Content', 'type': 'string'},
    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
    'type': {'title': 'Type',
     'default': 'chat',
     'enum': ['chat'],
     'type': 'string'},
    'role': {'title': 'Role', 'type': 'string'},
    'is_chunk': {'title': 'Is Chunk',
     'default': True,
     'enum': [True],
     'type': 'boolean'}},
   'required': ['content', 'role']},
  'FunctionMessageChunk': {'title': 'FunctionMessageChunk',
   'description': 'A Function Message chunk.',
   'type': 'object',
   'properties': {'content': {'title': 'Content', 'type': 'string'},
    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
    'type': {'title': 'Type',
     'default': 'function',
     'enum': ['function'],
     'type': 'string'},
    'name': {'title': 'Name', 'type': 'string'},
    'is_chunk': {'title': 'Is Chunk',
     'default': True,
     'enum': [True],
     'type': 'boolean'}},
   'required': ['content', 'name']},
  'SystemMessageChunk': {'title': 'SystemMessageChunk',
   'description': 'A System Message chunk.',
   'type': 'object',
   'properties': {'content': {'title': 'Content', 'type': 'string'},
    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},
    'type': {'title': 'Type',
     'default': 'system',
     'enum': ['system'],
     'type': 'string'},
    'is_chunk': {'title': 'Is Chunk',
     'default': True,
     'enum': [True],
     'type': 'boolean'}},
   'required': ['content']}}}
```
## Stream

```python
for s in chain.stream({"topic": "bears"}):
    print(s.content, end="", flush=True)
```

```
Why don't bears wear shoes? 

Because they have bear feet!
```
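The chunks yielded by `stream` are message chunks, which can be concatenated with `+`. A small sketch (an addition to the notebook, assuming the standard chunk-addition behaviour) that prints tokens as they arrive while also keeping the full message:

```python
# Accumulate the streamed chunks into one message while printing them.
full = None
for s in chain.stream({"topic": "bears"}):
    print(s.content, end="", flush=True)
    full = s if full is None else full + s  # chunk + chunk -> combined chunk

# full.content now holds the complete joke that was streamed above.
```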
## Invoke

```python
chain.invoke({"topic": "bears"})
```

```
AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!")
```

## Batch

```python
chain.batch([{"topic": "bears"}, {"topic": "cats"}])
```

```
[AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!"),
 AIMessage(content="Why don't cats play poker in the wild?\n\nToo many cheetahs!")]
```
You can set the number of concurrent requests by using the `max_concurrency` parameter.

```python
chain.batch([{"topic": "bears"}, {"topic": "cats"}], config={"max_concurrency": 5})
```

```
[AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!"),
 AIMessage(content="Sure, here's a cat joke for you:\n\nWhy don't cats play poker in the wild?\n\nToo many cheetahs!")]
```
## Async Stream

```python
async for s in chain.astream({"topic": "bears"}):
    print(s.content, end="", flush=True)
```

```
Sure, here's a bear joke for you:

Why don't bears wear shoes?

Because they have bear feet!
```

## Async Invoke

```python
await chain.ainvoke({"topic": "bears"})
```

```
AIMessage(content="Why don't bears wear shoes? \n\nBecause they have bear feet!")
```

## Async Batch

```python
await chain.abatch([{"topic": "bears"}])
```

```
[AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!")]
```
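The bare `await` calls above rely on the notebook's running event loop. In a plain Python script you would drive the same async methods yourself; a minimal sketch using only the standard library:

```python
import asyncio

async def main() -> None:
    # Same async interface as above, just driven by an explicit event loop.
    message = await chain.ainvoke({"topic": "bears"})
    print(message.content)

asyncio.run(main())
```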
## Async Stream Intermediate Steps

All runnables also have a method `.astream_log()` which can be used to stream (as they happen) all or part of the intermediate steps of your chain/sequence.

This is useful e.g. to show progress to the user, to use intermediate results, or even just to debug your chain.

You can choose to stream all steps (default), or include/exclude steps by name, tags or metadata.

This method yields [JSONPatch](https://jsonpatch.com) ops that, when applied in the same order as received, build up the RunState.

```python
class LogEntry(TypedDict):
    id: str
    """ID of the sub-run."""
    name: str
    """Name of the object being run."""
    type: str
    """Type of the object being run, eg. prompt, chain, llm, etc."""
    tags: List[str]
    """List of tags for the run."""
    metadata: Dict[str, Any]
    """Key-value pairs of metadata for the run."""
    start_time: str
    """ISO-8601 timestamp of when the run started."""

    streamed_output_str: List[str]
    """List of LLM tokens streamed by this run, if applicable."""
    final_output: Optional[Any]
    """Final output of this run.
    Only available after the run has finished successfully."""
    end_time: Optional[str]
    """ISO-8601 timestamp of when the run ended.
    Only available after the run has finished."""


class RunState(TypedDict):
    id: str
    """ID of the run."""
    streamed_output: List[Any]
    """List of output chunks streamed by Runnable.stream()"""
    final_output: Optional[Any]
    """Final output of the run, usually the result of aggregating (`+`) streamed_output.
    Only available after the run has finished successfully."""

    logs: Dict[str, LogEntry]
    """Map of run names to sub-runs. If filters were supplied, this list will
    contain only the runs that matched the filters."""
```
### Streaming JSONPatch chunks

This is useful e.g. to stream the JSONPatch in an HTTP server and then apply the ops on the client to rebuild the run state there. See [LangServe](https://github.com/langchain-ai/langserve) for tooling to make it easier to build a webserver from any Runnable.
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
from langchain.vectorstores import FAISS

template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

vectorstore = FAISS.from_texts(["harrison worked at kensho"], embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

retrieval_chain = (
    {"context": retriever.with_config(run_name='Docs'), "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

async for chunk in retrieval_chain.astream_log("where did harrison work?", include_names=['Docs']):
    print(chunk)
```

```
RunLogPatch({'op': 'replace',
  'path': '',
  'value': {'final_output': None,
            'id': 'fd6fcf62-c92c-4edf-8713-0fc5df000f62',
            'logs': {},
            'streamed_output': []}})
RunLogPatch({'op': 'add',
  'path': '/logs/Docs',
  'value': {'end_time': None,
            'final_output': None,
            'id': '8c998257-1ec8-4546-b744-c3fdb9728c41',
            'metadata': {},
            'name': 'Docs',
            'start_time': '2023-10-05T12:52:35.668',
            'streamed_output_str': [],
            'tags': ['map:key:context', 'FAISS'],
            'type': 'retriever'}})
RunLogPatch({'op': 'add',
  'path': '/logs/Docs/final_output',
  'value': {'documents': [Document(page_content='harrison worked at kensho')]}},
 {'op': 'add',
  'path': '/logs/Docs/end_time',
  'value': '2023-10-05T12:52:36.033'})
RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ''})
RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': 'H'})
RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': 'arrison'})
RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' worked'})
RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' at'})
RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' Kens'})
RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': 'ho'})
RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': '.'})
RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ''})
RunLogPatch({'op': 'replace',
  'path': '/final_output',
  'value': {'output': 'Harrison worked at Kensho.'}})
```
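If you want to rebuild the run state from these chunks yourself (for example on the receiving end of an HTTP stream), you can apply the ops in arrival order. This is a minimal sketch, assuming each `RunLogPatch` exposes its raw operations as an `ops` list and that the `jsonpatch` package is installed; it is not part of the original notebook.

```python
# Hypothetical client-side accumulation of the streamed patches into a RunState dict.
import jsonpatch

state = {}
async for chunk in retrieval_chain.astream_log("where did harrison work?", include_names=['Docs']):
    # Each chunk carries a list of JSONPatch operations; applying them in the
    # order received incrementally builds up the RunState shown above.
    state = jsonpatch.apply_patch(state, chunk.ops)

print(state["final_output"])
```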
### Streaming the incremental RunState

You can simply pass `diff=False` to get incremental values of `RunState`.
```python
async for chunk in retrieval_chain.astream_log("where did harrison work?", include_names=['Docs'], diff=False):
    print(chunk)
```

```
RunLog({'final_output': None,
 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185',
 'logs': {},
 'streamed_output': []})
RunLog({'final_output': None,
 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185',
 'logs': {'Docs': {'end_time': None,
                   'final_output': None,
                   'id': '621597dd-d716-4532-938d-debc21a453d1',
                   'metadata': {},
                   'name': 'Docs',
                   'start_time': '2023-10-05T12:52:36.935',
                   'streamed_output_str': [],
                   'tags': ['map:key:context', 'FAISS'],
                   'type': 'retriever'}},
 'streamed_output': []})
RunLog({'final_output': None,
 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185',
 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217',
                   'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},
                   'id': '621597dd-d716-4532-938d-debc21a453d1',
                   'metadata': {},
                   'name': 'Docs',
                   'start_time': '2023-10-05T12:52:36.935',
                   'streamed_output_str': [],
                   'tags': ['map:key:context', 'FAISS'],
                   'type': 'retriever'}},
 'streamed_output': []})
RunLog({'final_output': None,
 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185',
 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217',
                   'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},
                   'id': '621597dd-d716-4532-938d-debc21a453d1',
                   'metadata': {},
                   'name': 'Docs',
                   'start_time': '2023-10-05T12:52:36.935',
                   'streamed_output_str': [],
                   'tags': ['map:key:context', 'FAISS'],
                   'type': 'retriever'}},
 'streamed_output': ['']})
RunLog({'final_output': None,
 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185',
 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217',
                   'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},
                   'id': '621597dd-d716-4532-938d-debc21a453d1',
                   'metadata': {},
                   'name': 'Docs',
                   'start_time': '2023-10-05T12:52:36.935',
                   'streamed_output_str': [],
                   'tags': ['map:key:context', 'FAISS'],
                   'type': 'retriever'}},
 'streamed_output': ['', 'H']})
RunLog({'final_output': None,
 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185',
 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217',
                   'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},
                   'id': '621597dd-d716-4532-938d-debc21a453d1',
                   'metadata': {},
                   'name': 'Docs',
                   'start_time': '2023-10-05T12:52:36.935',
                   'streamed_output_str': [],
                   'tags': ['map:key:context', 'FAISS'],
                   'type': 'retriever'}},
 'streamed_output': ['', 'H', 'arrison']})
RunLog({'final_output': None,
 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185',
 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217',
                   'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},
                   'id': '621597dd-d716-4532-938d-debc21a453d1',
                   'metadata': {},
                   'name': 'Docs',
                   'start_time': '2023-10-05T12:52:36.935',
                   'streamed_output_str': [],
                   'tags': ['map:key:context', 'FAISS'],
                   'type': 'retriever'}},
 'streamed_output': ['', 'H', 'arrison', ' worked']})
RunLog({'final_output': None,
 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185',
 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217',
                   'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},
                   'id': '621597dd-d716-4532-938d-debc21a453d1',
                   'metadata': {},
                   'name': 'Docs',
                   'start_time': '2023-10-05T12:52:36.935',
                   'streamed_output_str': [],
                   'tags': ['map:key:context', 'FAISS'],
                   'type': 'retriever'}},
 'streamed_output': ['', 'H', 'arrison', ' worked', ' at']})
RunLog({'final_output': None,
 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185',
 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217',
                   'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},
                   'id': '621597dd-d716-4532-938d-debc21a453d1',
                   'metadata': {},
                   'name': 'Docs',
                   'start_time': '2023-10-05T12:52:36.935',
                   'streamed_output_str': [],
                   'tags': ['map:key:context', 'FAISS'],
                   'type': 'retriever'}},
 'streamed_output': ['', 'H', 'arrison', ' worked', ' at', ' Kens']})
RunLog({'final_output': None,
 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185',
 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217',
                   'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},
                   'id': '621597dd-d716-4532-938d-debc21a453d1',
                   'metadata': {},
                   'name': 'Docs',
                   'start_time': '2023-10-05T12:52:36.935',
                   'streamed_output_str': [],
                   'tags': ['map:key:context', 'FAISS'],
                   'type': 'retriever'}},
 'streamed_output': ['', 'H', 'arrison', ' worked', ' at', ' Kens', 'ho']})
RunLog({'final_output': None,
 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185',
 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217',
                   'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},
                   'id': '621597dd-d716-4532-938d-debc21a453d1',
                   'metadata': {},
                   'name': 'Docs',
                   'start_time': '2023-10-05T12:52:36.935',
                   'streamed_output_str': [],
                   'tags': ['map:key:context', 'FAISS'],
                   'type': 'retriever'}},
 'streamed_output': ['', 'H', 'arrison', ' worked', ' at', ' Kens', 'ho', '.']})
RunLog({'final_output': None,
 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185',
 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217',
                   'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},
                   'id': '621597dd-d716-4532-938d-debc21a453d1',
                   'metadata': {},
                   'name': 'Docs',
                   'start_time': '2023-10-05T12:52:36.935',
                   'streamed_output_str': [],
                   'tags': ['map:key:context', 'FAISS'],
                   'type': 'retriever'}},
 'streamed_output': ['',
                     'H',
                     'arrison',
                     ' worked',
                     ' at',
                     ' Kens',
                     'ho',
                     '.',
                     '']})
RunLog({'final_output': {'output': 'Harrison worked at Kensho.'},
 'id': 'f95ccb87-31f1-48ea-a51c-d2dadde44185',
 'logs': {'Docs': {'end_time': '2023-10-05T12:52:37.217',
                   'final_output': {'documents': [Document(page_content='harrison worked at kensho')]},
                   'id': '621597dd-d716-4532-938d-debc21a453d1',
                   'metadata': {},
                   'name': 'Docs',
                   'start_time': '2023-10-05T12:52:36.935',
                   'streamed_output_str': [],
                   'tags': ['map:key:context', 'FAISS'],
                   'type': 'retriever'}},
 'streamed_output': ['',
                     'H',
                     'arrison',
                     ' worked',
                     ' at',
                     ' Kens',
                     'ho',
                     '.',
                     '']})
```
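If you only care about the fully-built state, you can keep the last value from the stream. This is a small sketch, not from the original notebook, and it assumes the final `RunLog` exposes the accumulated `RunState` dict via a `.state` attribute.

```python
# Hypothetical: keep only the last RunLog and read the accumulated state from it.
final_run = None
async for chunk in retrieval_chain.astream_log("where did harrison work?", include_names=['Docs'], diff=False):
    final_run = chunk

# Assumed .state attribute holding the RunState dict shown above.
print(final_run.state["final_output"])                    # {'output': 'Harrison worked at Kensho.'}
print(final_run.state["logs"]["Docs"]["final_output"])    # the retrieved documents
```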
## Parallelism

Let's take a look at how LangChain Expression Language supports parallel requests as much as possible. For example, when using a `RunnableMap` (often written as a dictionary) it executes each element in parallel.
```python
from langchain.schema.runnable import RunnableMap

chain1 = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
chain2 = ChatPromptTemplate.from_template("write a short (2 line) poem about {topic}") | model
combined = RunnableMap({
    "joke": chain1,
    "poem": chain2,
})
```

```python
%%time
chain1.invoke({"topic": "bears"})
```

```
CPU times: user 31.7 ms, sys: 8.59 ms, total: 40.3 ms
Wall time: 1.05 s

AIMessage(content="Why don't bears like fast food?\n\nBecause they can't catch it!", additional_kwargs={}, example=False)
```

```python
%%time
chain2.invoke({"topic": "bears"})
```

```
CPU times: user 42.9 ms, sys: 10.2 ms, total: 53 ms
Wall time: 1.93 s

AIMessage(content="In forest's embrace, bears roam free,\nSilent strength, nature's majesty.", additional_kwargs={}, example=False)
```

```python
%%time
combined.invoke({"topic": "bears"})
```

```
CPU times: user 96.3 ms, sys: 20.4 ms, total: 117 ms
Wall time: 1.1 s

{'joke': AIMessage(content="Why don't bears wear socks?\n\nBecause they have bear feet!", additional_kwargs={}, example=False),
 'poem': AIMessage(content="In forest's embrace,\nMajestic bears leave their trace.", additional_kwargs={}, example=False)}
```
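The same parallelism composes with the batch methods, so each input in a batch fans out across the map. A short sketch (not timed in the original notebook) using the `combined` map defined above:

```python
# Each input dict runs through both chain1 and chain2; batching adds another
# level of parallelism across the inputs.
combined.batch([{"topic": "bears"}, {"topic": "cats"}])
```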