Compare commits

...

4 Commits

Author SHA1 Message Date
jacoblee93 70e28dc8c8 Update routing cookbook to include a RunnableBranch example 2023-09-18 17:09:01 -07:00
jacoblee93 e550ac7954 Merge branch 'master' of https://github.com/hwchase17/langchain into jacob/routing_cookbook 2023-09-18 16:07:50 -07:00
jacoblee93 995fd9e586 Fix lint 2023-09-18 15:53:01 -07:00
jacoblee93 0050709969 Allow 3.5-turbo instruct models in the OpenAI LLM class 2023-09-18 15:51:02 -07:00

View File

@@ -7,24 +7,42 @@
"source": [
"# Route between multiple Runnables\n",
"\n",
"This notebook covers how to do routing in the LangChain Expression Language\n",
"This notebook covers how to do routing in the LangChain Expression Language.\n",
"\n",
"Right now, the easiest way to do it is to write a function that will take in the input of a previous step and return a **runnable**. Importantly, this should return a **runnable** and NOT actually execute.\n",
"Routing allows you to create non-deterministic chains where the output of a previous step defines the next step. Routing helps provide structure and consistency around interactions with LLMs.\n",
"\n",
"Let's take a look at this with a simple example. We will create a simple example where we will first classify whether the user input is a question about LangChain, OpenAI, or other, and route to a corresponding prompt chain."
"There are two ways to perform routing:\n",
"\n",
"1. Using a `RunnableBranch`.\n",
"2. Writing custom factory function that takes the input of a previous step and returns a **runnable**. Importantly, this should return a **runnable** and NOT actually execute.\n",
"\n",
"We'll illustrate both methods using a two step sequence where the first step classifies an input question as being about `LangChain`, `Anthropic`, or `Other`, then routes to a corresponding prompt chain."
]
},
{
"cell_type": "markdown",
"id": "f885113d",
"metadata": {},
"source": [
"## Using a RunnableBranch\n",
"\n",
"A `RunnableBranch` is initialized with a list of (condition, runnable) pairs and a default runnable. It selects which branch by passing each condition the input it's invoked with. It selects the first condition to evaluate to True, and runs the corresponding runnable to that condition with the input. \n",
"\n",
"If no provided conditions match, it runs the default runnable.\n",
"\n",
"Here's an example of what it looks like in action:"
]
},
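{
"cell_type": "markdown",
"id": "sketch-branch-md",
"metadata": {},
"source": [
"First, purely to show the selection mechanics, here is a minimal sketch that uses plain Python lambdas in place of prompt chains, so it runs without any API calls (the name `sketch_branch` and the lambdas are illustrative only). The full example with real chains follows."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "sketch-branch-code",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema.runnable import RunnableBranch\n",
"\n",
"# Illustrative sketch only: plain lambdas stand in for the prompt chains used below.\n",
"sketch_branch = RunnableBranch(\n",
"    (lambda x: isinstance(x, str), lambda x: x.upper()),  # first matching condition wins\n",
"    (lambda x: isinstance(x, int), lambda x: x + 1),\n",
"    lambda x: \"goodbye\",  # default runnable, used when no condition matches\n",
")\n",
"\n",
"sketch_branch.invoke(\"hello\")  # -> 'HELLO'; sketch_branch.invoke(None) -> 'goodbye'"
]
},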
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 1,
"id": "1aa13c1d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnableLambda"
"from langchain.chat_models import ChatAnthropic\n",
"from langchain.schema.output_parser import StrOutputParser"
]
},
{
@@ -32,44 +50,46 @@
"id": "ed84c59a",
"metadata": {},
"source": [
"First, lets create a dummy chain that will return either 1 or 0, randomly"
"First, let's create a chain that will identify incoming questions as being about `LangChain`, `Anthropic`, or `Other`:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 2,
"id": "3ec03886",
"metadata": {},
"outputs": [],
"source": [
"chain = PromptTemplate.from_template(\"\"\"Given the user question below, classify it as either being about `LangChain`, `OpenAI`, or `Other`.\n",
"chain = PromptTemplate.from_template(\"\"\"Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.\n",
" \n",
"Do not respond with more than one word.\n",
"\n",
"<question>\n",
"{question}\n",
"</question>\n",
"\n",
"Classification:\"\"\") | ChatOpenAI() | StrOutputParser()"
"Classification:\"\"\") | ChatAnthropic() | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 3,
"id": "87ae7c1c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='OpenAI', additional_kwargs={}, example=False)"
"' Anthropic'"
]
},
"execution_count": 17,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"question\": \"how do I call openAI?\"})"
"chain.invoke({\"question\": \"how do I call Anthropic?\"})"
]
},
{
@@ -82,7 +102,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 4,
"id": "d479962a",
"metadata": {},
"outputs": [],
@@ -92,84 +112,82 @@
"Respond to the following question:\n",
"\n",
"Question: {question}\n",
"Answer:\"\"\") | ChatOpenAI()\n",
"openai_chain = PromptTemplate.from_template(\"\"\"You are an expert in openai. \\\n",
"Always answer questions starting with \"As Sam Altman told me\". \\\n",
"Answer:\"\"\") | ChatAnthropic()\n",
"anthropic_chain = PromptTemplate.from_template(\"\"\"You are an expert in anthropic. \\\n",
"Always answer questions starting with \"As Dario Amodei told me\". \\\n",
"Respond to the following question:\n",
"\n",
"Question: {question}\n",
"Answer:\"\"\") | ChatOpenAI()\n",
"Answer:\"\"\") | ChatAnthropic()\n",
"general_chain = PromptTemplate.from_template(\"\"\"Respond to the following question:\n",
"\n",
"Question: {question}\n",
"Answer:\"\"\") | ChatOpenAI()"
"Answer:\"\"\") | ChatAnthropic()"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "687492da",
"execution_count": 5,
"id": "593eab06",
"metadata": {},
"outputs": [],
"source": [
"def route(info):\n",
" inputs = {\"question\": lambda x: x[\"question\"]}\n",
" if info[\"topic\"] == \"OpenAI\":\n",
" return inputs | openai_chain\n",
"from langchain.schema.runnable import RunnableBranch\n",
"\n",
" elif info[\"topic\"] == \"LangChain\":\n",
" return inputs | langchain_chain\n",
" else:\n",
" return inputs | general_chain"
"branch = RunnableBranch(\n",
" (lambda x: \"anthropic\" in x[\"topic\"].lower(), (lambda x: {\"question\": x[\"question\"]}) | anthropic_chain),\n",
" (lambda x: \"langchain\" in x[\"topic\"].lower(), (lambda x: {\"question\": x[\"question\"]}) | langchain_chain),\n",
" (lambda x: {\"question\": x[\"question\"]}) | general_chain\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "02a33c86",
"execution_count": 6,
"id": "752c732e",
"metadata": {},
"outputs": [],
"source": [
"full_chain = {\n",
" \"topic\": chain,\n",
" \"question\": lambda x: x[\"question\"]\n",
"} | RunnableLambda(route)"
"} | branch"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "c2e977a4",
"execution_count": 7,
"id": "29231bb8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"As Sam Altman told me, to use OpenAI, you can start by visiting the OpenAI website and exploring the available tools and resources. OpenAI offers a range of products that you can utilize, such as the GPT-3 language model or the Codex API. You can sign up for an account, read the documentation, and access the relevant APIs to integrate OpenAI's technologies into your applications. Additionally, you can join the OpenAI community to stay updated on the latest developments and connect with other users.\", additional_kwargs={}, example=False)"
"AIMessage(content=\" As Dario Amodei told me, here is how you can use Anthropic:\\n\\n1. Sign up for an account on the Anthropic website. This will allow you to access Anthropic's AI assistant Claude and start interacting with it.\\n\\n2. Try asking Claude general knowledge questions or having natural conversations to see its conversational abilities. You can ask Claude about topics like sports, movies, current events, math problems, etc.\\n\\n3. Use Claude for writing assistance. Describe what you want written and Claude will generate draft text for you. You can give it prompts for emails, articles, stories and more. \\n\\n4. Leverage Claude's capabilities for content generation. Provide it with a topic and Claude can create original text around it. This can be helpful for research, articles, presentations and more.\\n\\n5. Integrate Claude into your own applications using Anthropic's API. This allows you to build customized AI solutions with Claude's natural language processing skills.\\n\\n6. Check out Anthropic's open source Constitutional AI framework. This is their approach to building safe AI that respects human values.\\n\\n7. Follow Anthropic's blog and social media channels to stay up to date on new\", additional_kwargs={}, example=False)"
]
},
"execution_count": 35,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"full_chain.invoke({\"question\": \"how do I use OpenAI?\"})"
"full_chain.invoke({\"question\": \"how do I use Anthropic?\"})"
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "48913dc6",
"execution_count": 8,
"id": "c67d8733",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"As Harrison Chase told me, to use LangChain, you will need to follow these steps:\\n\\n1. First, download and install the LangChain application on your device. It is available for both iOS and Android.\\n\\n2. Once installed, open the LangChain app and create an account. You will need to provide your email address and set a secure password.\\n\\n3. After creating your account, you will be prompted to select the languages you want to learn and the languages you already know. This will help tailor the learning experience to your specific needs.\\n\\n4. Once the initial setup is complete, you can start using LangChain to learn languages. The app offers various features such as interactive lessons, vocabulary exercises, and language exchange opportunities with native speakers.\\n\\n5. The app also provides personalized recommendations based on your learning progress and areas that need improvement. It tracks your performance and adjusts the difficulty level accordingly.\\n\\n6. Additionally, LangChain offers a community forum where you can interact with other language learners, ask questions, and seek advice.\\n\\n7. It is recommended to set a regular learning schedule and dedicate consistent time to practice using LangChain. Consistency is key to making progress in language learning.\\n\\nRemember, the more you use LangChain, the better your language skills will become. So, make the most of the app's features and engage actively in the learning process.\", additional_kwargs={}, example=False)"
"AIMessage(content=\" As Harrison Chase told me, here are a few tips for using LangChain:\\n\\nFirst, think carefully about what you want LangChain to do. Ask clear, specific questions that provide necessary context and details. The more information you give LangChain, the better it can understand and assist you. \\n\\nWhen entering your question or request, be polite and respectful. LangChain responds best to thoughtful, good faith inquiries. Avoid harmful, unethical, dangerous, or illegal requests.\\n\\nStart simply. Ask basic questions to get a feel for LangChain's capabilities before attempting more advanced or nuanced requests. Build up complexity gradually.\\n\\nBe patient. LangChain may take some time to generate thoughtful, high-quality responses to open-ended or ambiguous questions. Allow it a moment to compose its answers.\\n\\nProvide feedback. Let LangChain know when its responses are helpful, harmful, or off-base. This feedback helps it improve. Upvote useful responses and downvote unhelpful ones.\\n\\nEnjoy the journey. LangChain is an ever-evolving tool. The more you engage with it, the more it will learn and grow its skills. Be curious, imaginative and keep exploring new possibilities together.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 36,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -180,17 +198,17 @@
},
{
"cell_type": "code",
"execution_count": 41,
"id": "a14d0dca",
"execution_count": 9,
"id": "935ad949",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The sum of 2 plus 2 is 4.', additional_kwargs={}, example=False)"
"AIMessage(content=' 4', additional_kwargs={}, example=False)"
]
},
"execution_count": 41,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@@ -199,13 +217,110 @@
"full_chain.invoke({\"question\": \"whats 2 + 2\"})"
]
},
{
"cell_type": "markdown",
"id": "6d8d042c",
"metadata": {},
"source": [
"## Using a custom function\n",
"\n",
"You can also use a custom function to route between different outputs. Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95eff174",
"execution_count": 10,
"id": "687492da",
"metadata": {},
"outputs": [],
"source": []
"source": [
"def route(info):\n",
" inputs = {\"question\": lambda x: x[\"question\"]}\n",
" if \"anthropic\" in info[\"topic\"].lower():\n",
" return inputs | anthropic_chain\n",
" elif \"langchain\" in info[\"topic\"].lower():\n",
" return inputs | langchain_chain\n",
" else:\n",
" return inputs | general_chain"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "02a33c86",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema.runnable import RunnableLambda\n",
"\n",
"full_chain = {\n",
" \"topic\": chain,\n",
" \"question\": lambda x: x[\"question\"]\n",
"} | RunnableLambda(route)"
]
},
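{
"cell_type": "markdown",
"id": "sketch-route-md",
"metadata": {},
"source": [
"Before invoking the real chain, here is the same factory-function pattern in miniature, with plain lambdas standing in for the prompt chains so it runs without API keys (names like `sketch_route` are illustrative only). The factory only chooses and returns a runnable; `RunnableLambda` then invokes whatever runnable it returns with the same input."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "sketch-route-code",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema.runnable import RunnableLambda\n",
"\n",
"# Illustrative sketch only: route on a plain dict instead of an LLM classification.\n",
"def sketch_route(info):\n",
"    if \"anthropic\" in info[\"topic\"].lower():\n",
"        return RunnableLambda(lambda x: \"anthropic branch\")\n",
"    elif \"langchain\" in info[\"topic\"].lower():\n",
"        return RunnableLambda(lambda x: \"langchain branch\")\n",
"    return RunnableLambda(lambda x: \"general branch\")\n",
"\n",
"RunnableLambda(sketch_route).invoke({\"topic\": \"LangChain\", \"question\": \"hi\"})  # -> 'langchain branch'"
]
},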
{
"cell_type": "code",
"execution_count": 14,
"id": "c2e977a4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" As Dario Amodei told me, to use Anthropic's conversational AI system Anthropic Intelligent Personal Assistant (Anthro IPC), you simply need to chat with it through the web interface or API. Anthro IPC is designed to have natural conversations and be helpful, harmless, and honest.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"full_chain.invoke({\"question\": \"how do I use Anthroipc?\"})"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "48913dc6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" As Harrison Chase told me, here is how you use LangChain:\\n\\nLangChain is an AI assistant that can have natural conversations and provide helpful information. To use LangChain, simply chat with it in a conversational way, asking questions or making requests. It will try to understand what you are asking and provide an appropriate response. \\n\\nYou can ask LangChain open-ended questions, and it will generate thoughtful answers drawing from its broad knowledge. Or you can ask it for specifics, like definitions of words, calculations, or facts about certain topics. LangChain aims to have nuanced, high-quality conversations spanning many subjects.\\n\\nSome tips for using LangChain effectively:\\n\\n- Ask clear, straightforward questions and state requests plainly. Don't use too much slang or figures of speech that could confuse it.\\n\\n- You can follow up on its responses by asking for clarification, additional details, examples, etc. Having a back-and-forth conversation helps LangChain better understand you.\\n\\n- If you get a response that doesn't seem adequate, try rephrasing your question and asking it in a different way.\\n\\n- Let it know when its responses are helpful, incorrect, or could be improved. Providing feedback\", additional_kwargs={}, example=False)"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"full_chain.invoke({\"question\": \"how do I use LangChain?\"})"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "a14d0dca",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' 4', additional_kwargs={}, example=False)"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"full_chain.invoke({\"question\": \"whats 2 + 2\"})"
]
}
],
"metadata": {
@@ -224,7 +339,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.5"
}
},
"nbformat": 4,