Harrison/lcel configuration (#11997)

Harrison Chase 2023-10-18 16:01:38 -07:00, committed by GitHub
parent 26d0858a60
commit bdecc5bade

@@ -1,15 +1,246 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "39eaf61b",
"metadata": {},
"source": [
"# Configuration\n",
"\n",
"Oftentimes you may want to experiment with, or even expose to the end user, multiple different ways of doing things.\n",
"In order to make this experience as easy as possible, we have defined two methods.\n",
"\n",
"First, a `configurable_fields` method. \n",
"This lets you configure particular fields of a runnable.\n",
"\n",
"Second, a `configurable_alternatives` method.\n",
"With this method, you can list out alternatives for any particular runnable that can be set during runtime."
]
},
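{
"cell_type": "markdown",
"id": "cfg-overview-note",
"metadata": {},
"source": [
"As a quick sketch of the difference (both patterns are demonstrated with runnable cells below): `configurable_fields` exposes a field of a single runnable, while `configurable_alternatives` swaps out the runnable itself."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cfg-overview-code",
"metadata": {},
"outputs": [],
"source": [
"# Sketch only; the sections below walk through each pattern for real.\n",
"from langchain.chat_models import ChatAnthropic, ChatOpenAI\n",
"from langchain.schema.runnable import ConfigurableField\n",
"\n",
"# configurable_fields: tune one field of this runnable at runtime\n",
"model = ChatOpenAI(temperature=0).configurable_fields(\n",
"    temperature=ConfigurableField(id=\"llm_temperature\")\n",
")\n",
"\n",
"# configurable_alternatives: swap this runnable for another at runtime\n",
"llm = ChatAnthropic(temperature=0).configurable_alternatives(\n",
"    ConfigurableField(id=\"llm\"),\n",
"    default_key=\"anthropic\",\n",
"    openai=ChatOpenAI(),\n",
")"
]
},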
{
"cell_type": "markdown",
"id": "f2347a11",
"metadata": {},
"source": [
"## Configuration Fields"
]
},
{
"cell_type": "markdown",
"id": "a06f6e2d",
"metadata": {},
"source": [
"### With LLMs\n",
"With LLMs we can configure things like temperature"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "7ba735f4",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"\n",
"model = ChatOpenAI(temperature=0).configurable_fields(\n",
" temperature=ConfigurableField(\n",
" id=\"llm_temperature\",\n",
" name=\"LLM Temperature\",\n",
" description=\"The temperature of the LLM\",\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "63a71165",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='7')"
]
},
"execution_count": 38,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.invoke(\"pick a random number\")"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "4f83245c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='34')"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.with_config(configurable={\"llm_temperature\": .9}).invoke(\"pick a random number\")"
]
},
{
"cell_type": "markdown",
"id": "9da1fcd2",
"metadata": {},
"source": [
"We can also do this when its used as part of a chain"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "e75ae678",
"metadata": {},
"outputs": [],
"source": [
"prompt = PromptTemplate.from_template(\"Pick a random number above {x}\")\n",
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 41,
"id": "44886071",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='57')"
]
},
"execution_count": 41,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"x\": 0})"
]
},
{
"cell_type": "code",
"execution_count": 42,
"id": "c09fac15",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='6')"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.with_config(configurable={\"llm_temperature\": .9}).invoke({\"x\": 0})"
]
},
{
"cell_type": "markdown",
"id": "fb9637d0",
"metadata": {},
"source": [
"### With HubRunnables\n",
"\n",
"This is useful to allow for switching of prompts"
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "7d5836b2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.runnables.hub import HubRunnable"
]
},
{
"cell_type": "code",
"execution_count": 46,
"id": "9a9ea077",
"metadata": {},
"outputs": [],
"source": [
"prompt = HubRunnable(\"rlm/rag-prompt\").configurable_fields(\n",
" owner_repo_commit=ConfigurableField(\n",
" id=\"hub_commit\",\n",
" name=\"Hub Commit\",\n",
" description=\"The Hub commit to pull from\",\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "c4a62cee",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[HumanMessage(content=\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\\nQuestion: foo \\nContext: bar \\nAnswer:\")])"
]
},
"execution_count": 47,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt.invoke({\"question\": \"foo\", \"context\": \"bar\"})"
]
},
{
"cell_type": "code",
"execution_count": 49,
"id": "f33f3cf2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[HumanMessage(content=\"[INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>> \\nQuestion: foo \\nContext: bar \\nAnswer: [/INST]\")])"
]
},
"execution_count": 49,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt.with_config(configurable={\"hub_commit\": \"rlm/rag-prompt-llama\"}).invoke({\"question\": \"foo\", \"context\": \"bar\"})"
]
},
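{
"cell_type": "markdown",
"id": "cfg-compose-note",
"metadata": {},
"source": [
"Configurable fields compose across a chain. As a minimal sketch (not part of the original notebook), the `hub_commit` and `llm_temperature` fields defined above can be set together in one `with_config` call:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cfg-compose-code",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: a single config dict resolves fields anywhere in the chain by id.\n",
"rag_chain = prompt | model\n",
"rag_chain.with_config(\n",
"    configurable={\"hub_commit\": \"rlm/rag-prompt-llama\", \"llm_temperature\": 0.9}\n",
").invoke({\"question\": \"foo\", \"context\": \"bar\"})"
]
},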
{
"cell_type": "markdown",
"id": "79d51519",
"metadata": {},
"source": [
"## Configurable Alternatives\n",
"\n"
]
},
@@ -17,7 +248,7 @@
{
"id": "ac733d35", "id": "ac733d35",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## With LLMs\n", "### With LLMs\n",
"\n", "\n",
"Let's take a look at doing this with LLMs" "Let's take a look at doing this with LLMs"
] ]
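{
"cell_type": "markdown",
"id": "alt-llm-note",
"metadata": {},
"source": [
"The demo cells for this section are untouched by the diff (only the heading level changed) and are elided above. A minimal sketch of the pattern, assuming the same-era `langchain` API:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "alt-llm-code",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: register whole-LLM alternatives, selectable via the \"llm\" key.\n",
"from langchain.chat_models import ChatAnthropic, ChatOpenAI\n",
"from langchain.schema.runnable import ConfigurableField\n",
"\n",
"llm = ChatAnthropic(temperature=0).configurable_alternatives(\n",
"    ConfigurableField(id=\"llm\"),\n",
"    default_key=\"anthropic\",\n",
"    openai=ChatOpenAI(),\n",
")\n",
"llm.with_config(configurable={\"llm\": \"openai\"}).invoke(\"tell me a joke\")"
]
},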
@@ -129,7 +360,7 @@
"id": "a9134559",
"metadata": {},
"source": [
"### With Prompts\n",
"\n",
"We can do a similar thing, but alternate between prompts\n"
]
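{
"cell_type": "markdown",
"id": "alt-prompt-note",
"metadata": {},
"source": [
"Again the demo cells are elided above; a minimal sketch, reusing `llm` from the sketch in the previous section:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "alt-prompt-code",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: register whole-prompt alternatives under the \"prompt\" key.\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema.runnable import ConfigurableField\n",
"\n",
"prompt = PromptTemplate.from_template(\n",
"    \"Tell me a joke about {topic}\"\n",
").configurable_alternatives(\n",
"    ConfigurableField(id=\"prompt\"),\n",
"    default_key=\"joke\",\n",
"    poem=PromptTemplate.from_template(\"Write a short poem about {topic}\"),\n",
")\n",
"chain = prompt | llm\n",
"chain.with_config(configurable={\"prompt\": \"poem\"}).invoke({\"topic\": \"bears\"})"
]
},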
@@ -205,7 +436,7 @@
"id": "0c77124e",
"metadata": {},
"source": [
"### With Prompts and LLMs\n",
"\n",
"We can also have multiple things configurable!\n",
"Here's an example doing that with both prompts and LLMs."
@@ -294,7 +525,7 @@
"id": "02fc4841",
"metadata": {},
"source": [
"### Saving configurations\n",
"\n",
"We can also easily save configured chains as their own objects"
]
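{
"cell_type": "markdown",
"id": "save-cfg-note",
"metadata": {},
"source": [
"A minimal sketch of the idea (the real demo cells are unchanged by this diff): a configured chain is itself a runnable, so it can be bound once and reused as its own object:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "save-cfg-code",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: bind the configuration once, then invoke it like any runnable.\n",
"openai_joke = chain.with_config(configurable={\"llm\": \"openai\"})\n",
"openai_joke.invoke({\"topic\": \"bears\"})"
]
}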