{
 "cells": [
  {
   "cell_type": "raw",
   "id": "366a0e68-fd67-4fe5-a292-5c33733339ea",
   "metadata": {},
   "source": [
    "---\n",
    "sidebar_position: 0\n",
    "title: Interface\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9a9acd2e",
   "metadata": {},
   "source": [
    "In an effort to make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.Runnable.html#langchain.schema.runnable.Runnable) protocol that most components implement. This is a standard interface with a few different methods, which makes it easy to define custom chains and to invoke them in a standard way. The standard interface includes:\n",
    "\n",
    "- `stream`: stream back chunks of the response\n",
    "- `invoke`: call the chain on an input\n",
    "- `batch`: call the chain on a list of inputs\n",
    "\n",
    "These also have corresponding async methods:\n",
    "\n",
    "- `astream`: stream back chunks of the response asynchronously\n",
    "- `ainvoke`: call the chain on an input asynchronously\n",
    "- `abatch`: call the chain on a list of inputs asynchronously\n",
    "\n",
    "The input type varies by component:\n",
    "\n",
    "| Component | Input Type |\n",
    "| --- | --- |\n",
    "| Prompt | Dictionary |\n",
    "| Retriever | Single string |\n",
    "| Model | Single string, list of chat messages, or a PromptValue |\n",
    "\n",
    "The output type also varies by component:\n",
    "\n",
    "| Component | Output Type |\n",
    "| --- | --- |\n",
    "| LLM | String |\n",
    "| ChatModel | ChatMessage |\n",
    "| Prompt | PromptValue |\n",
    "| Retriever | List of documents |\n",
    "\n",
    "Let's take a look at these methods! To do so, we'll create a super simple PromptTemplate + ChatModel chain."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "466b65b3",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.prompts import ChatPromptTemplate\n",
    "from langchain.chat_models import ChatOpenAI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "3c634ef0",
   "metadata": {},
   "outputs": [],
   "source": [
    "model = ChatOpenAI()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "d1850a1f",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "56d0669f",
   "metadata": {},
   "outputs": [],
   "source": [
    "chain = prompt | model"
   ]
  },
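  {
   "cell_type": "markdown",
   "id": "0d2a7acf",
   "metadata": {},
   "source": [
    "Note that each component in this chain is itself a Runnable, so the same interface works at the component level too. As a quick sanity check (output omitted here), invoking the prompt on its own should return a `PromptValue`, matching the input/output tables above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1f3c9d2e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The prompt is a Runnable in its own right: it takes a dictionary of\n",
    "# template variables and should return a PromptValue (a ChatPromptValue here).\n",
    "prompt.invoke({\"topic\": \"bears\"})"
   ]
  },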
  {
   "cell_type": "markdown",
   "id": "daf2b2b2",
   "metadata": {},
   "source": [
    "## Stream"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "bea9639d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sure, here's a bear-themed joke for you:\n",
      "\n",
      "Why don't bears wear shoes?\n",
      "\n",
      "Because they have bear feet!"
     ]
    }
   ],
   "source": [
    "for s in chain.stream({\"topic\": \"bears\"}):\n",
    "    print(s.content, end=\"\", flush=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cbf1c782",
   "metadata": {},
   "source": [
    "## Invoke"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "470e483f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they already have bear feet!\", additional_kwargs={}, example=False)"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.invoke({\"topic\": \"bears\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88f0c279",
   "metadata": {},
   "source": [
    "## Batch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "9685de67",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[AIMessage(content=\"Why don't bears ever wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False),\n",
       " AIMessage(content=\"Why don't cats play poker in the wild?\\n\\nToo many cheetahs!\", additional_kwargs={}, example=False)]"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2434ab15",
   "metadata": {},
   "source": [
    "You can set the number of concurrent requests with the `max_concurrency` entry of the `config` argument:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "a08522f6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False),\n",
       " AIMessage(content=\"Why don't cats play poker in the wild?\\n\\nToo many cheetahs!\", additional_kwargs={}, example=False)]"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}], config={\"max_concurrency\": 5})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b960cbfe",
   "metadata": {},
   "source": [
    "## Async Stream"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "ea35eee4",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Why don't bears wear shoes?\n",
      "\n",
      "Because they have bear feet!"
     ]
    }
   ],
   "source": [
    "async for s in chain.astream({\"topic\": \"bears\"}):\n",
    "    print(s.content, end=\"\", flush=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "04cb3324",
   "metadata": {},
   "source": [
    "## Async Invoke"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "ef8c9b20",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"Sure, here you go:\\n\\nWhy don't bears wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False)"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "await chain.ainvoke({\"topic\": \"bears\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3da288d5",
   "metadata": {},
   "source": [
    "## Async Batch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "eba2a103",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False)]"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "await chain.abatch([{\"topic\": \"bears\"}])"
   ]
  },
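  {
   "cell_type": "markdown",
   "id": "6a0d4f2b",
   "metadata": {},
   "source": [
    "The async cells above rely on Jupyter's support for top-level `await`. In a plain Python script, the same async methods would typically be driven with `asyncio`; a minimal, self-contained sketch (rebuilding the same chain as above):\n",
    "\n",
    "```python\n",
    "import asyncio\n",
    "\n",
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain.prompts import ChatPromptTemplate\n",
    "\n",
    "chain = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | ChatOpenAI()\n",
    "\n",
    "\n",
    "async def main():\n",
    "    # ainvoke/astream/abatch are awaited inside an async function\n",
    "    return await chain.ainvoke({\"topic\": \"bears\"})\n",
    "\n",
    "\n",
    "print(asyncio.run(main()))\n",
    "```"
   ]
  },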
  {
   "cell_type": "markdown",
   "id": "0a1c409d",
   "metadata": {},
   "source": [
    "## Parallelism\n",
    "\n",
    "Let's take a look at how LangChain Expression Language supports parallel requests wherever possible. For example, when using a `RunnableMap` (often written as a dictionary literal), each element is executed in parallel."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "e3014c7a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.schema.runnable import RunnableMap\n",
    "chain1 = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | model\n",
    "chain2 = ChatPromptTemplate.from_template(\"write a short (2 line) poem about {topic}\") | model\n",
    "combined = RunnableMap({\n",
    "    \"joke\": chain1,\n",
    "    \"poem\": chain2,\n",
    "})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "08044c0a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 31.7 ms, sys: 8.59 ms, total: 40.3 ms\n",
      "Wall time: 1.05 s\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"Why don't bears like fast food?\\n\\nBecause they can't catch it!\", additional_kwargs={}, example=False)"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "%%time\n",
    "chain1.invoke({\"topic\": \"bears\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "22c56804",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 42.9 ms, sys: 10.2 ms, total: 53 ms\n",
      "Wall time: 1.93 s\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"In forest's embrace, bears roam free,\\nSilent strength, nature's majesty.\", additional_kwargs={}, example=False)"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "%%time\n",
    "chain2.invoke({\"topic\": \"bears\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "4fff4cbb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 96.3 ms, sys: 20.4 ms, total: 117 ms\n",
      "Wall time: 1.1 s\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'joke': AIMessage(content=\"Why don't bears wear socks?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False),\n",
       " 'poem': AIMessage(content=\"In forest's embrace,\\nMajestic bears leave their trace.\", additional_kwargs={}, example=False)}"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "%%time\n",
    "combined.invoke({\"topic\": \"bears\"})"
   ]
  },
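  {
   "cell_type": "markdown",
   "id": "5c8f21b0",
   "metadata": {},
   "source": [
    "In this run, `combined.invoke` takes about as long as a single call rather than the two calls back to back, since the map runs both sub-chains in parallel. And because `combined` is itself a Runnable, the rest of the interface shown above (`batch`, `ainvoke`, `astream`, ...) applies to it as well; for example (output omitted here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7b3d5a61",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Each input dictionary is routed through both sub-chains, producing a list\n",
    "# of {\"joke\": ..., \"poem\": ...} dictionaries, one per input.\n",
    "combined.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}])"
   ]
  },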
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fab75d1d",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}