{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Getting Started\n",
    "\n",
    "In this tutorial, we will learn how to create simple chains in LangChain: how to create a chain, add components to it, and run it.\n",
    "\n",
    "We will cover:\n",
    "- Using a simple LLM chain\n",
    "- Creating sequential chains\n",
    "- Creating a custom chain\n",
    "\n",
    "## Why do we need chains?\n",
    "\n",
    "Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Quick start: Using `LLMChain`\n",
    "\n",
    "The `LLMChain` is a simple chain that takes in a prompt template, formats it with the user input, and returns the response from an LLM.\n",
    "\n",
    "To use the `LLMChain`, first create a prompt template."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "from langchain.llms import OpenAI\n",
    "\n",
    "llm = OpenAI(temperature=0.9)\n",
    "prompt = PromptTemplate(\n",
    "    input_variables=[\"product\"],\n",
    "    template=\"What is a good name for a company that makes {product}?\",\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "Colorful Toes Co.\n"
     ]
    }
   ],
   "source": [
    "from langchain.chains import LLMChain\n",
    "chain = LLMChain(llm=llm, prompt=prompt)\n",
    "\n",
    "# Run the chain only specifying the input variable.\n",
    "print(chain.run(\"colorful socks\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If there are multiple variables, you can input them all at once using a dictionary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "Socktopia Colourful Creations.\n"
     ]
    }
   ],
   "source": [
    "prompt = PromptTemplate(\n",
    "    input_variables=[\"company\", \"product\"],\n",
    "    template=\"What is a good name for {company} that makes {product}?\",\n",
    ")\n",
    "chain = LLMChain(llm=llm, prompt=prompt)\n",
    "print(chain.run({\n",
    "    'company': \"ABC Startup\",\n",
    "    'product': \"colorful socks\"\n",
    "    }))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can use a chat model in an `LLMChain` as well:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Rainbow Socks Co.\n"
     ]
    }
   ],
   "source": [
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain.prompts.chat import (\n",
    "    ChatPromptTemplate,\n",
    "    HumanMessagePromptTemplate,\n",
    ")\n",
    "\n",
    "human_message_prompt = HumanMessagePromptTemplate(\n",
    "    prompt=PromptTemplate(\n",
    "        template=\"What is a good name for a company that makes {product}?\",\n",
    "        input_variables=[\"product\"],\n",
    "    )\n",
    ")\n",
    "chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])\n",
    "chat = ChatOpenAI(temperature=0.9)\n",
    "chain = LLMChain(llm=chat, prompt=chat_prompt_template)\n",
    "print(chain.run(\"colorful socks\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Different ways of calling chains\n",
    "\n",
    "All classes that inherit from `Chain` offer a few ways of running chain logic. The most direct one is to use `__call__`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'adjective': 'corny',\n",
       " 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat = ChatOpenAI(temperature=0)\n",
    "prompt_template = \"Tell me a {adjective} joke\"\n",
    "llm_chain = LLMChain(\n",
    "    llm=chat,\n",
    "    prompt=PromptTemplate.from_template(prompt_template)\n",
    ")\n",
    "\n",
    "llm_chain(inputs={\"adjective\":\"corny\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By default, `__call__` returns both the input and output key values. You can configure it to return only the output key values by setting `return_only_outputs` to `True`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_chain(\"corny\", return_only_outputs=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the `Chain` only outputs one output key (i.e. it only has one element in its `output_keys`), you can use the `run` method. Note that `run` outputs a string instead of a dictionary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['text']"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# llm_chain only has one output key, so we can use run\n",
    "llm_chain.output_keys"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Why did the tomato turn red? Because it saw the salad dressing!'"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_chain.run({\"adjective\":\"corny\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the case of one input key, you can input the string directly without specifying the input mapping."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'adjective': 'corny',\n",
       " 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# These two are equivalent\n",
    "llm_chain.run({\"adjective\":\"corny\"})\n",
    "llm_chain.run(\"corny\")\n",
    "\n",
    "# These two are also equivalent\n",
    "llm_chain(\"corny\")\n",
    "llm_chain({\"adjective\":\"corny\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Tip: you can easily integrate a `Chain` object as a `Tool` in your `Agent` via its `run` method. See an example [here](../agents/tools/custom_tools.ipynb)."
   ]
  },
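  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of that pattern, the chain from the quick start can be exposed as a tool by passing its `run` method as the tool's `func` (the tool name and description below are illustrative, not fixed by LangChain):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.agents import Tool\n",
    "\n",
    "# A minimal sketch: wrap an existing chain as a tool by handing its `run`\n",
    "# method to the tool's `func`. The name and description are illustrative.\n",
    "company_name_tool = Tool(\n",
    "    name=\"CompanyNameGenerator\",\n",
    "    func=chain.run,\n",
    "    description=\"Generates a company name for a given product description.\",\n",
    ")\n",
    "\n",
    "company_name_tool.run(\"colorful socks\")"
   ]
  },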
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Add memory to chains\n",
    "\n",
    "`Chain` supports taking a `BaseMemory` object as its `memory` argument, allowing a `Chain` object to persist data across multiple calls. In other words, it makes `Chain` a stateful object."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'The next four colors of a rainbow are green, blue, indigo, and violet.'"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.chains import ConversationChain\n",
    "from langchain.memory import ConversationBufferMemory\n",
    "\n",
    "conversation = ConversationChain(\n",
    "    llm=chat,\n",
    "    memory=ConversationBufferMemory()\n",
    ")\n",
    "\n",
    "conversation.run(\"Answer briefly. What are the first 3 colors of a rainbow?\")\n",
    "# -> The first three colors of a rainbow are red, orange, and yellow.\n",
    "conversation.run(\"And the next 4?\")\n",
    "# -> The next four colors of a rainbow are green, blue, indigo, and violet."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Essentially, `BaseMemory` defines the interface through which `langchain` stores memory. It allows reading stored data via the `load_memory_variables` method and storing new data via the `save_context` method. You can learn more about it in the [Memory](../memory/getting_started.ipynb) section."
   ]
  },
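  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sketch of that interface (using a `ConversationBufferMemory` directly; the example conversation here is made up), you can store a turn with `save_context` and read it back with `load_memory_variables`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "memory = ConversationBufferMemory()\n",
    "\n",
    "# save_context stores an input/output pair in the memory's buffer.\n",
    "memory.save_context({\"input\": \"Hi there!\"}, {\"output\": \"Hello! How can I help you?\"})\n",
    "\n",
    "# load_memory_variables returns the stored data as a dict of memory variables;\n",
    "# for ConversationBufferMemory this is a single 'history' string.\n",
    "memory.load_memory_variables({})"
   ]
  },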
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Debug Chain\n",
    "\n",
    "It can be hard to debug a `Chain` object solely from its output, as most `Chain` objects involve a fair amount of input prompt preprocessing and LLM output post-processing. Setting `verbose` to `True` will print out some internal states of the `Chain` object while it is being run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
      "Prompt after formatting:\n",
      "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
      "\n",
      "Current conversation:\n",
      "\n",
      "Human: What is ChatGPT?\n",
      "AI:\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'ChatGPT is an AI language model developed by OpenAI. It is based on the GPT-3 architecture and is capable of generating human-like responses to text prompts. ChatGPT has been trained on a massive amount of text data and can understand and respond to a wide range of topics. It is often used for chatbots, virtual assistants, and other conversational AI applications.'"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "conversation = ConversationChain(\n",
    "    llm=chat,\n",
    "    memory=ConversationBufferMemory(),\n",
    "    verbose=True\n",
    ")\n",
    "conversation.run(\"What is ChatGPT?\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Combine chains with the `SequentialChain`\n",
    "\n",
    "The next step after calling a single language model is to make a series of calls, feeding each result into the next. We can do this using sequential chains, which execute their links in a predefined order. Specifically, we will use the `SimpleSequentialChain`. This is the simplest type of sequential chain, where each step has a single input/output, and the output of one step is the input to the next.\n",
    "\n",
    "In this tutorial, our sequential chain will:\n",
    "1. First, create a company name for a product. We will reuse the `LLMChain` we previously initialized to create this company name.\n",
    "2. Then, create a catchphrase for the product. We will initialize a new `LLMChain` to create this catchphrase, as shown below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "second_prompt = PromptTemplate(\n",
    "    input_variables=[\"company_name\"],\n",
    "    template=\"Write a catchphrase for the following company: {company_name}\",\n",
    ")\n",
    "chain_two = LLMChain(llm=llm, prompt=second_prompt)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can combine the two `LLMChain`s, so that we can create a company name and a catchphrase in a single step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new SimpleSequentialChain chain...\u001b[0m\n",
      "\u001b[36;1m\u001b[1;3mRainbow Socks Co.\u001b[0m\n",
      "\u001b[33;1m\u001b[1;3m\n",
      "\n",
      "\"Put a little rainbow in your step!\"\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "\n",
      "\n",
      "\"Put a little rainbow in your step!\"\n"
     ]
    }
   ],
   "source": [
    "from langchain.chains import SimpleSequentialChain\n",
    "overall_chain = SimpleSequentialChain(chains=[chain, chain_two], verbose=True)\n",
    "\n",
    "# Run the chain specifying only the input variable for the first chain.\n",
    "catchphrase = overall_chain.run(\"colorful socks\")\n",
    "print(catchphrase)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create a custom chain with the `Chain` class\n",
    "\n",
    "LangChain provides many chains out of the box, but sometimes you may want to create a custom chain for your specific use case. For this example, we will create a custom chain that concatenates the outputs of two `LLMChain`s.\n",
    "\n",
    "In order to create a custom chain:\n",
    "1. Start by subclassing the `Chain` class,\n",
    "2. Fill out the `input_keys` and `output_keys` properties,\n",
    "3. Add the `_call` method that shows how to execute the chain.\n",
    "\n",
    "These steps are demonstrated in the example below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.chains import LLMChain\n",
    "from langchain.chains.base import Chain\n",
    "\n",
    "from typing import Dict, List\n",
    "\n",
    "\n",
    "class ConcatenateChain(Chain):\n",
    "    chain_1: LLMChain\n",
    "    chain_2: LLMChain\n",
    "\n",
    "    @property\n",
    "    def input_keys(self) -> List[str]:\n",
    "        # Union of the input keys of the two chains.\n",
    "        all_input_vars = set(self.chain_1.input_keys).union(set(self.chain_2.input_keys))\n",
    "        return list(all_input_vars)\n",
    "\n",
    "    @property\n",
    "    def output_keys(self) -> List[str]:\n",
    "        return ['concat_output']\n",
    "\n",
    "    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:\n",
    "        output_1 = self.chain_1.run(inputs)\n",
    "        output_2 = self.chain_2.run(inputs)\n",
    "        return {'concat_output': output_1 + output_2}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we can try running the chain that we defined."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Concatenated output:\n",
      "\n",
      "\n",
      "Funky Footwear Company\n",
      "\n",
      "\"Brighten Up Your Day with Our Colorful Socks!\"\n"
     ]
    }
   ],
   "source": [
    "prompt_1 = PromptTemplate(\n",
    "    input_variables=[\"product\"],\n",
    "    template=\"What is a good name for a company that makes {product}?\",\n",
    ")\n",
    "chain_1 = LLMChain(llm=llm, prompt=prompt_1)\n",
    "\n",
    "prompt_2 = PromptTemplate(\n",
    "    input_variables=[\"product\"],\n",
    "    template=\"What is a good slogan for a company that makes {product}?\",\n",
    ")\n",
    "chain_2 = LLMChain(llm=llm, prompt=prompt_2)\n",
    "\n",
    "concat_chain = ConcatenateChain(chain_1=chain_1, chain_2=chain_2)\n",
    "concat_output = concat_chain.run(\"colorful socks\")\n",
    "print(f\"Concatenated output:\\n{concat_output}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That's it! For more details about how to do cool things with Chains, check out the [how-to guide](how_to_guides.rst) for chains."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.6"
  },
  "vscode": {
   "interpreter": {
    "hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}