diff --git a/docs/docs/expression_language/how_to/configure.ipynb b/docs/docs/expression_language/how_to/configure.ipynb index a4edaf959de..4afe3b05290 100644 --- a/docs/docs/expression_language/how_to/configure.ipynb +++ b/docs/docs/expression_language/how_to/configure.ipynb @@ -5,7 +5,7 @@ "id": "39eaf61b", "metadata": {}, "source": [ - "# Configuration\n", + "# Configure chain internals at runtime\n", "\n", "Oftentimes you may want to experiment with, or even expose to the end user, multiple different ways of doing things.\n", "In order to make this experience as easy as possible, we have defined two methods.\n", @@ -594,7 +594,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.1" + "version": "3.9.1" } }, "nbformat": 4, diff --git a/docs/docs/expression_language/how_to/functions.ipynb b/docs/docs/expression_language/how_to/functions.ipynb index f1fba6744b5..fea416cf30c 100644 --- a/docs/docs/expression_language/how_to/functions.ipynb +++ b/docs/docs/expression_language/how_to/functions.ipynb @@ -5,7 +5,7 @@ "id": "fbc4bf6e", "metadata": {}, "source": [ - "# Run arbitrary functions\n", + "# Run custom functions\n", "\n", "You can use arbitrary functions in the pipeline\n", "\n", @@ -175,7 +175,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.1" + "version": "3.9.1" } }, "nbformat": 4, diff --git a/docs/docs/expression_language/how_to/generators.ipynb b/docs/docs/expression_language/how_to/generators.ipynb index 4c53428865e..bf70f3a94e2 100644 --- a/docs/docs/expression_language/how_to/generators.ipynb +++ b/docs/docs/expression_language/how_to/generators.ipynb @@ -4,7 +4,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Custom generator functions\n", + "# Stream custom generator functions\n", "\n", "You can use generator functions (ie. 
functions that use the `yield` keyword, and behave like iterators) in a LCEL pipeline.\n", "\n", @@ -21,15 +21,7 @@ "cell_type": "code", "execution_count": 1, "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "lion, tiger, wolf, gorilla, panda\n" - ] - } - ], + "outputs": [], "source": [ "from typing import Iterator, List\n", "\n", @@ -43,16 +35,51 @@ ")\n", "model = ChatOpenAI(temperature=0.0)\n", "\n", - "\n", - "str_chain = prompt | model | StrOutputParser()\n", - "\n", - "print(str_chain.invoke({\"animal\": \"bear\"}))" + "str_chain = prompt | model | StrOutputParser()" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "lion, tiger, wolf, gorilla, panda" + ] + } + ], + "source": [ + "for chunk in str_chain.stream({\"animal\": \"bear\"}):\n", + " print(chunk, end=\"\", flush=True)" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'lion, tiger, wolf, gorilla, panda'" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "str_chain.invoke({\"animal\": \"bear\"})" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, "outputs": [], "source": [ "# This is a custom parser that splits an iterator of llm tokens\n", @@ -77,22 +104,61 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 5, + "metadata": {}, + "outputs": [], + "source": [ + "list_chain = str_chain | split_into_list" + ] + }, + { + "cell_type": "code", + "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ - "['lion', 'tiger', 'wolf', 'gorilla', 'panda']\n" + "['lion']\n", + "['tiger']\n", + "['wolf']\n", + "['gorilla']\n", + "['panda']\n" ] } ], "source": [ - "list_chain = str_chain | split_into_list\n", - "\n", - 
"print(list_chain.invoke({\"animal\": \"bear\"}))" + "for chunk in list_chain.stream({\"animal\": \"bear\"}):\n", + " print(chunk, flush=True)" ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "['lion', 'tiger', 'wolf', 'gorilla', 'panda']" + ] + }, + "execution_count": 7, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "list_chain.invoke({\"animal\": \"bear\"})" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { @@ -111,9 +177,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.5" + "version": "3.9.1" } }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 4 } diff --git a/docs/docs/expression_language/how_to/map.ipynb b/docs/docs/expression_language/how_to/map.ipynb index b56f52671b5..8c5aaa1cd65 100644 --- a/docs/docs/expression_language/how_to/map.ipynb +++ b/docs/docs/expression_language/how_to/map.ipynb @@ -5,7 +5,7 @@ "id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2", "metadata": {}, "source": [ - "# Use RunnableParallel/RunnableMap\n", + "# Parallelize steps\n", "\n", "RunnableParallel (aka. RunnableMap) makes it easy to execute multiple Runnables in parallel, and to return the output of these Runnables as a map." 
] @@ -195,7 +195,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.1" + "version": "3.9.1" } }, "nbformat": 4, diff --git a/docs/docs/expression_language/how_to/routing.ipynb b/docs/docs/expression_language/how_to/routing.ipynb index 6d0fe067984..63c321a3dcd 100644 --- a/docs/docs/expression_language/how_to/routing.ipynb +++ b/docs/docs/expression_language/how_to/routing.ipynb @@ -5,7 +5,7 @@ "id": "4b47436a", "metadata": {}, "source": [ - "# Route between multiple Runnables\n", + "# Dynamically route logic based on input\n", "\n", "This notebook covers how to do routing in the LangChain Expression Language.\n", "\n", diff --git a/docs/docs/expression_language/interface.ipynb b/docs/docs/expression_language/interface.ipynb index 0acf8013c04..8ffc08a45b8 100644 --- a/docs/docs/expression_language/interface.ipynb +++ b/docs/docs/expression_language/interface.ipynb @@ -8,7 +8,7 @@ "---\n", "sidebar_position: 0\n", "title: Interface\n", - "---\n" + "---" ] }, { @@ -31,26 +31,17 @@ "- [`abatch`](#async-batch): call the chain on a list of inputs async\n", "- [`astream_log`](#async-stream-intermediate-steps): stream back intermediate steps as they happen, in addition to the final response\n", "\n", - "The **input type** varies by component:\n", + "The **input type** and **output type** varies by component:\n", "\n", - "| Component | Input Type |\n", - "| --- | --- |\n", - "|Prompt|Dictionary|\n", - "|Retriever|Single string|\n", - "|LLM, ChatModel| Single string, list of chat messages or a PromptValue|\n", - "|Tool|Single string, or dictionary, depending on the tool|\n", - "|OutputParser|The output of an LLM or ChatModel|\n", + "| Component | Input Type | Output Type |\n", + "| --- | --- | --- |\n", + "| Prompt | Dictionary | PromptValue |\n", + "| ChatModel | Single string, list of chat messages or a PromptValue | ChatMessage |\n", + "| LLM | Single string, list of chat messages or a PromptValue | String |\n", + "| 
OutputParser | The output of an LLM or ChatModel | Depends on the parser |\n", + "| Retriever | Single string | List of Documents |\n", + "| Tool | Single string or dictionary, depending on the tool | Depends on the tool |\n", "\n", - "The **output type** also varies by component:\n", - "\n", - "| Component | Output Type |\n", - "| --- | --- |\n", - "| LLM | String |\n", - "| ChatModel | ChatMessage |\n", - "| Prompt | PromptValue |\n", - "| Retriever | List of documents |\n", - "| Tool | Depends on the tool |\n", - "| OutputParser | Depends on the parser |\n", "\n", "All runnables expose input and output **schemas** to inspect the inputs and outputs:\n", "- [`input_schema`](#input-schema): an input Pydantic model auto-generated from the structure of the Runnable\n", @@ -1161,7 +1152,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.12" + "version": "3.9.1" } }, "nbformat": 4,
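Reviewer note: the generators.ipynb changes above split the notebook into separate `stream` and `invoke` cells to show that the custom `split_into_list` parser emits each item as soon as its trailing comma arrives. The buffering logic can be sketched standalone, with a faked token stream standing in for `str_chain.stream(...)` (assumption: no langchain or API key available here):

```python
from typing import Iterator, List

# Stand-in for the LLM token stream; in the notebook these chunks
# come from str_chain.stream({"animal": "bear"}).
def fake_stream() -> Iterator[str]:
    for chunk in ["lion", ", ti", "ger, wolf", ", gorilla, panda"]:
        yield chunk

# Same idea as the notebook's custom parser: buffer partial input
# and yield each comma-separated item as soon as it is complete.
def split_into_list(input: Iterator[str]) -> Iterator[List[str]]:
    buffer = ""
    for chunk in input:
        buffer += chunk
        while "," in buffer:
            comma_index = buffer.index(",")
            yield [buffer[:comma_index].strip()]
            buffer = buffer[comma_index + 1:]
    yield [buffer.strip()]

streamed = list(split_into_list(fake_stream()))
# streamed == [['lion'], ['tiger'], ['wolf'], ['gorilla'], ['panda']]
```

This mirrors why the new cells print `['lion']`, `['tiger']`, … one line at a time under `.stream` but a single flat call under `.invoke`.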
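Reviewer note: the map.ipynb retitle ("Parallelize steps") describes `RunnableParallel` running multiple Runnables on the same input and returning their outputs as a map. A minimal stand-in using only the standard library, to illustrate the fan-out shape (assumption: `run_parallel` is a hypothetical helper, not the real langchain class):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(steps, value):
    """Run each named step on the same input concurrently; return a map."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, value) for name, fn in steps.items()}
        return {name: f.result() for name, f in futures.items()}

result = run_parallel(
    {"upper": str.upper, "length": len},
    "bear",
)
# result == {"upper": "BEAR", "length": 4}
```

In LCEL the same shape is written as a plain dict of Runnables piped into the next step; this sketch only shows why the output type is a map keyed by step name.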