Mirror of https://github.com/hwchase17/langchain.git, synced 2026-02-04 16:20:16 +00:00

Compare commits: 4 commits, erick/docs...bagatur/lc

| Author | SHA1 | Date |
|---|---|---|
|  | 8882c80443 |  |
|  | abfee87050 |  |
|  | 898da75c88 |  |
|  | 7d62637a15 |  |

docs/docs/expression_language/get_started.ipynb (new file, 888 lines)
@@ -0,0 +1,888 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "366a0e68-fd67-4fe5-a292-5c33733339ea",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_position: 0\n",
|
||||
"title: Get started\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f331037f-be3f-4782-856f-d55dab952488",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"LCEL makes it easy to build complex chains from basic components, and supports out of the box functionality such as streaming, parallelism, and logging."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9a9acd2e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Basic example: prompt + model + output parser\n",
|
||||
"\n",
|
||||
"The most basic and common use case is chaining a prompt template and a model together. To see how this works, let's create a chain that takes a topic and generates a joke:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "6b6c5518-85eb-43af-afd8-d3ff4643c389",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.prompts import ChatPromptTemplate\n",
|
||||
"from langchain.schema.output_parser import StrOutputParser\n",
|
||||
"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(\"Tell me a short joke about {topic}\")\n",
|
||||
"model = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
|
||||
"output_parser = StrOutputParser()\n",
|
||||
"\n",
|
||||
"chain = prompt | model | output_parser\n",
|
||||
"\n",
|
||||
"chain.invoke({\"topic\": \"ice cream\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "ae8ca065-8479-4083-b593-5b5823ffc91a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Notice this line, where we piece together the different components into a single chain\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"chain = prompt | model | output_parser\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"The `|` symbol is similar to a unix pipe operator, creating a chain in which the output of each component is fed as input into the next component.\n",
|
||||
"\n",
|
||||
"In this chain the user input is passed to the prompt template, then the prompt template output is passed to the model, then the model output is passed to the output parser. Let's take a look at each component individually to really understand what's going on. \n",
|
||||
"\n",
|
||||
"### 1. Prompt\n",
|
||||
"\n",
|
||||
"`prompt` is a `BasePromptTemplate`, which means it takes in a dictionary of template variables and produces a `PromptValue`. A `PromptValue` is a wrapper around a completed prompt that can be passed to either an `LLM` (which takes a string as input) or `ChatModel` (which takes a sequence of messages as input). It can work with either language model type because it defines logic both for producing `BaseMessage`s and for producing a string."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "15b85a8f-0d79-49da-9132-b4554d7283e5",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"ChatPromptValue(messages=[HumanMessage(content='Tell me a short joke about ice cream')])"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"prompt_value = prompt.invoke({\"topic\": \"ice cream\"})\n",
|
||||
"prompt_value"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "d0ca55ee-1b96-4e1f-bddb-bb3b12d5e54b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[HumanMessage(content='Tell me a short joke about ice cream')]"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"prompt_value.to_messages()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "d5b345ba-48e4-4fda-873b-c92685237c52",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Human: Tell me a short joke about ice cream'"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"prompt_value.to_string()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1619c4b7-38f8-4ba4-bf46-ef6ffa92a6d6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 2. Model\n",
|
||||
"\n",
|
||||
"The `PromptValue` is then passed to `model`. In this case our `model` is a `ChatModel`, meaning it will output a `BaseMessage`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "5f99f50c-8091-4bd6-9602-6b7504575ef0",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='Why did the ice cream go to therapy? \\n\\nBecause it was feeling a little rocky road!')"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"message = model.invoke(prompt_value)\n",
|
||||
"message"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b774231e-29d4-4f22-8c7e-8fd20b756d0d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"If our `model` was an `LLM`, it would output a string."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "7d851773-25f9-4173-bb91-c1e94b61967e",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'\\n\\nWhy did the ice cream go to therapy?\\n\\nBecause it was feeling a little soft serve.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"\n",
|
||||
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\")\n",
|
||||
"llm.invoke(prompt_value)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "71d18c82-e9aa-4e5a-acda-d211aac20f1d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 3. Output parser\n",
|
||||
"\n",
|
||||
"And lastly we pass our `model` output to the `output_parser`, which is a `BaseOutputParser` meaning it takes either a string or a \n",
|
||||
"`BaseMessage` as input. The `StrOutputParser` specifically simple converts any input into a string."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "a3a0f4f3-6fa6-42de-bfaf-0bd8f3fdbd19",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Why did the ice cream go to therapy? \\n\\nBecause it was feeling a little rocky road!'"
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"output_parser.invoke(message)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "5b258fd5-22ab-4069-862f-e64c4be6c9a8",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Why use LCEL\n",
|
||||
"\n",
|
||||
"To understand the value of LCEL, let's see what we'd have to do to achieve similar functionality without it in this simple use case.\n",
|
||||
"\n",
|
||||
"### Without LCEL\n",
|
||||
"\n",
|
||||
"We could recreate our above functionality without LCEL or LangChain at all by doing something like this:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "e628905c-430e-4e4a-9d7c-c91d2f42052e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import openai\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def manual_chain(topic: str) -> str:\n",
|
||||
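" # format the prompt, call the OpenAI chat completions API, and return the reply text\n",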
" prompt_value = f\"Tell me a short joke about {topic}\"\n",
|
||||
" client = openai.OpenAI()\n",
|
||||
" response = client.chat.completions.create(\n",
|
||||
" model=\"gpt-3.5-turbo\", messages=[{\"role\": \"user\", \"content\": prompt_value}]\n",
|
||||
" )\n",
|
||||
" return response.choices[0].message.content"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3c0b0513-77b8-4371-a20e-3e487cec7e7f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Stream\n",
|
||||
"\n",
|
||||
"If we want to stream results instead, we'll need to change our function:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "4f2cc6dc-d70a-4c13-9258-452f14290da6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from typing import Iterator\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def manual_chain_stream(topic: str) -> Iterator[str]:\n",
|
||||
" prompt_value = f\"Tell me a short joke about {topic}\"\n",
|
||||
" client = openai.OpenAI()\n",
|
||||
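" # request a streaming response and yield each content delta as it arrives\n",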
" stream = client.chat.completions.create(\n",
|
||||
" model=\"gpt-3.5-turbo\",\n",
|
||||
" messages=[{\"role\": \"user\", \"content\": prompt_value}],\n",
|
||||
" stream=True,\n",
|
||||
" )\n",
|
||||
" for response in stream:\n",
|
||||
" content = response.choices[0].delta.content\n",
|
||||
" if content is not None:\n",
|
||||
" yield content"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b9b41e78-ddeb-44d0-a58b-a0ea0c99a761",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Batch\n",
|
||||
"\n",
|
||||
"If we want to run on a batch of inputs in parallel, we'll again need a new function:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "6b492f13-73a6-48ed-8d4f-9ad634da9988",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from concurrent.futures import ThreadPoolExecutor\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def manual_chain_batch(topics: list) -> list:\n",
|
||||
" with ThreadPoolExecutor(max_workers=5) as executor:\n",
|
||||
" return list(executor.map(manual_chain, topics))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "cc5ba36f-eec1-4fc1-8cfe-fa242a7f7809",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Async\n",
|
||||
"\n",
|
||||
"If you needed an asynchronous version:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 47,
|
||||
"id": "eabe6621-e815-41e3-9c9d-5aa561a69835",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"async def manual_chain_async(topic: str) -> str:\n",
|
||||
" prompt_value = f\"Tell me a short joke about {topic}\"\n",
|
||||
" client = openai.AsyncOpenAI()\n",
|
||||
" response = await client.chat.completions.create(\n",
|
||||
" model=\"gpt-3.5-turbo\", messages=[{\"role\": \"user\", \"content\": prompt_value}]\n",
|
||||
" )\n",
|
||||
" return response.choices[0].message.content"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f6888245-1ebe-4768-a53b-e1fef6a8b379",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### LLM instead of chat model\n",
|
||||
"\n",
|
||||
"If we want to use a completion endpoint instead of a chat endpoint: "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "9aca946b-acaa-4f7e-a3d0-ad8e3225e7f2",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def manual_chain_completion(topic: str) -> str:\n",
|
||||
" prompt_value = f\"Tell me a short joke about {topic}\"\n",
|
||||
" client = openai.OpenAI()\n",
|
||||
" response = client.completions.create(\n",
|
||||
" model=\"gpt-3.5-turbo-instruct\",\n",
|
||||
" prompt=prompt_value,\n",
|
||||
" )\n",
|
||||
" return response.choices[0].text"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ca115eaf-59ef-45c1-aac1-e8b0ce7db250",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Different model provider\n",
|
||||
"\n",
|
||||
"If we want to use Anthropic instead of OpenAI: "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "cde2ceb0-f65e-487b-9a32-137b0e9d79d5",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import anthropic\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def manual_chain_anthropic(topic: str) -> str:\n",
|
||||
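" # the Anthropic completions API expects a Human:/Assistant: formatted prompt\n",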
" prompt_value = f\"Human:\\n\\nTell me a short joke about {topic}\\n\\nAssistant:\"\n",
|
||||
" client = anthropic.Anthropic()\n",
|
||||
" response = client.completions.create(\n",
|
||||
" model=\"claude-2\",\n",
|
||||
" prompt=prompt_value,\n",
|
||||
" max_tokens_to_sample=256,\n",
|
||||
" )\n",
|
||||
" return response.completion"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "370dd4d7-b825-40c4-ae3c-2693cba2f22a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Logging\n",
|
||||
"\n",
|
||||
"If we want to log our intermediate results (we'll `print` here for illustrative purposes):"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "383a3c51-926d-48c6-b9ae-42bf8f14ecc8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def manual_chain_anthropic_logging(topic: str) -> str:\n",
|
||||
" print(f\"Input: {topic}\")\n",
|
||||
" prompt_value = f\"Human:\\n\\nTell me a short joke about {topic}\\n\\nAssistant:\"\n",
|
||||
" print(f\"Formatted prompt: {prompt_value}\")\n",
|
||||
" client = anthropic.Anthropic()\n",
|
||||
" response = client.completions.create(\n",
|
||||
" model=\"claude-2\",\n",
|
||||
" prompt=prompt_value,\n",
|
||||
" max_tokens_to_sample=256,\n",
|
||||
" )\n",
|
||||
" print(f\"Output: {response.completion}\")\n",
|
||||
" return response.completion"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e25ce3c5-27a7-4954-9f0e-b94313597135",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Fallbacks\n",
|
||||
"\n",
|
||||
"If you wanted to add retry or fallback logic:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "2e49d512-bc83-4c5f-b56e-934b8343b0fe",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def manual_chain_with_fallback(topic: str) -> str:\n",
|
||||
" try:\n",
|
||||
" return manual_chain(topic)\n",
|
||||
" except Exception:\n",
|
||||
" return manual_chain_anthropic(topic)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f7ef59b5-2ce3-479e-a7ac-79e1e2f30e9c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### With LCEL\n",
|
||||
"\n",
|
||||
"Now let's take a look at how all of this work with LCEL. We'll use our chain from before (and for ease of use take in a string instead of a dict):"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 48,
|
||||
"id": "dc0de76a-daf5-4ec0-ba7f-c63225821591",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.runnables import RunnablePassthrough\n",
|
||||
"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(\"Tell me a short joke about {topic}\")\n",
|
||||
"model = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
|
||||
"output_parser = StrOutputParser()\n",
|
||||
"\n",
|
||||
"chain = {\"topic\": RunnablePassthrough()} | prompt | model | output_parser"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "b0d85dda-d63c-459f-99ec-5d6d669b5b0c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain.invoke(\"ice cream\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "0c9eb899-e7c8-4ab5-aecd-d305cd716082",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Streaming"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "71f15ae5-8353-4fe6-b506-73c67ec9c27d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"for chunk in chain.stream(\"ice cream\"):\n",
|
||||
" print(chunk, end=\"\", flush=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2eff0ae2-f2ca-4463-bacb-634fc788b5bb",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Batch"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "dcf9f4a7-5ded-47fb-9057-adb04ed3382e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain.batch([\"ice cream\", \"spaghetti\", \"dumplings\"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "82c49198-3ac3-4805-b898-063c45ce89fb",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Async\n",
|
||||
"```python\n",
|
||||
"chain.ainvoke(\"ice cream)\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c184ca63-e74d-478c-980c-2c19b459cccd",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### LLM instead of chat model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "9f18118e-e901-42ec-a4a0-75d011bec10e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"\n",
|
||||
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\")\n",
|
||||
"llm_chain = {\"topic\": RunnablePassthrough()} | prompt | llm | output_parser\n",
|
||||
"llm_chain.invoke(\"ice cream\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a5de0201-3980-4f78-b89e-c8c59f1c4e7d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"If we wanted, we could even make the choice of chat model or llm runtime configurable"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "937fa94a-b019-450b-bec5-b6e3443fa903",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.runnables import ConfigurableField\n",
|
||||
"\n",
|
||||
"configurable_model = model.configurable_alternatives(\n",
|
||||
" ConfigurableField(id=\"model\"), default_key=\"chat_openai\", openai=llm\n",
|
||||
")\n",
|
||||
"configurable_chain = {\"topic\": RunnablePassthrough()} | prompt | llm | output_parser\n",
|
||||
"configurable_chain.invoke(\"ice cream\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "2187eb0b-e86b-4845-a2b3-2355781e1b8a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"configurable_chain.invoke(\"ice cream\", config={\"configurable\": {\"model\": \"openai\"}})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e900a52e-f858-4604-9413-7fa7cb04a8a5",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Different model provider\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "983b323c-f573-452a-8f81-98eb8d6906f9",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatAnthropic\n",
|
||||
"\n",
|
||||
"anthropic = ChatAnthropic(model=\"claude-2\")\n",
|
||||
"anthropic_chain = {\"topic\": RunnablePassthrough()} | prompt | anthropic | output_parser\n",
|
||||
"anthropic_chain.invoke(\"ice cream\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9c5e16de-a8db-4689-aeef-b2e76d9071cd",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Logging\n",
|
||||
"\n",
|
||||
"By turning on LangSmith, every step of every chain is automatically logged. We set these environment variables:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d6204f21-d2e7-4ac6-871f-b60b34e5bd36",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"LANGCHAIN_API_KEY\"] = \"...\"\n",
|
||||
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4842ec53-b58a-4689-97da-32ed17003981",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"And then get a trace of every chain run: {trace}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4274f4bd-3a78-4a28-a531-28ea7ac1efae",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Fallbacks"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "3d0d8a0f-66eb-4c35-9529-74bec44ce4b8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
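"# behaves like `chain`, but falls back to `anthropic_chain` if the primary chain errors\n",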
"fallback_chain = chain.with_fallbacks([anthropic_chain])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f58af836-26bd-4eab-97a0-76dd56d53430",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### With vs without LCEL"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9fb3d71d-8c69-4dc4-81b7-95cd46b271c2",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Our full code **with LCEL** looks like:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "715c469a-545e-434e-bd6e-99745dd880a7",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"from langchain.chat_models import ChatAnthropic, ChatOpenAI\n",
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"from langchain_core.output_parsers import StrOutputParser\n",
|
||||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||||
"from langchain_core.runnables import RunnablePassthrough\n",
|
||||
"\n",
|
||||
"os.environ[\"LANGCHAIN_API_KEY\"] = \"...\"\n",
|
||||
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
|
||||
"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(\"Tell me a short joke about {topic}\")\n",
|
||||
"\n",
|
||||
"chat_openai = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
|
||||
"openai = OpenAI(model=\"gpt-3.5-turbo-instruct\")\n",
|
||||
"anthropic = ChatAnthropic(model=\"claude-2\")\n",
|
||||
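"# primary chat model with an Anthropic fallback, and runtime-configurable alternatives\n",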
"model = chat_openai.with_fallbacks([anthropic]).configurable_alternatives(\n",
|
||||
" ConfigurableField(id=\"model\"),\n",
|
||||
" default_key=\"chat_openai\",\n",
|
||||
" openai=openai,\n",
|
||||
" anthropic=anthropic,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"chain = {\"topic\": RunnablePassthrough()} | prompt | model | StrOutputParser()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "0a925003-4a1f-406f-87f2-1fd8965b9f87",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Our code **without LCEL** might look something like:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "a25837c5-829b-42a3-92b4-7e25831350c6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from concurrent.futures import ThreadPoolExecutor\n",
|
||||
"from typing import Iterator, List, Tuple\n",
|
||||
"\n",
|
||||
"import openai\n",
|
||||
"\n",
|
||||
"prompt_template = \"Tell me a short joke about {topic}\"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def manual_chain(topic: str, *, model: str = \"chat_openai\") -> str:\n",
|
||||
" print(f\"Input: {topic}\")\n",
|
||||
" prompt_value = prompt_template.format(topic=topic)\n",
|
||||
"\n",
|
||||
" if model == \"chat_openai\":\n",
|
||||
" print(f\"Full prompt: {prompt_value}\")\n",
|
||||
" response = openai.OpenAI().chat.completions.create(\n",
|
||||
" model=\"gpt-3.5-turbo\", messages=[{\"role\": \"user\", \"content\": prompt_value}]\n",
|
||||
" )\n",
|
||||
" output = response.choices[0].message.content\n",
|
||||
" elif model == \"openai\":\n",
|
||||
" print(f\"Full prompt: {prompt_value}\")\n",
|
||||
" response = openai.OpenAI().completions.create(\n",
|
||||
" model=\"gpt-3.5-turbo-instruct\",\n",
|
||||
" prompt=prompt_value,\n",
|
||||
" )\n",
|
||||
" output = response.choices[0].text\n",
|
||||
" elif model == \"anthropic\":\n",
|
||||
" prompt_value = f\"Human:\\n\\n{prompt_value}\\n\\nAssistant:\"\n",
|
||||
" print(f\"Full prompt: {prompt_value}\")\n",
|
||||
" response = anthropic.Anthropic().completions.create(\n",
|
||||
" model=\"claude-2\",\n",
|
||||
" prompt=prompt_value,\n",
|
||||
" max_tokens_to_sample=256,\n",
|
||||
" )\n",
|
||||
" output = response.completion\n",
|
||||
" else:\n",
|
||||
" raise ValueError(\n",
|
||||
" f\"Invalid model {model}. Should be one of chat_openai, openai, anthropic.\"\n",
|
||||
" )\n",
|
||||
" print(f\"Output: {output}\")\n",
|
||||
" return output\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def manual_chain_with_fallbacks(\n",
|
||||
" topic: str, *, model: str = \"chat_openai\", fallbacks: Tuple[str] = (\"anthropic\",)\n",
|
||||
") -> str:\n",
|
||||
" for fallback in fallbacks:\n",
|
||||
" try:\n",
|
||||
" return manual_chain(topic, model=model)\n",
|
||||
" except Exception as e:\n",
|
||||
" print(f\"Error {e}\")\n",
|
||||
" model = fallback\n",
|
||||
" raise e\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def manual_chain_batch(\n",
|
||||
" topics: List[str],\n",
|
||||
" *,\n",
|
||||
" model: str = \"chat_openai\",\n",
|
||||
" fallbacks: Tuple[str] = (\"anthropic\",),\n",
|
||||
") -> List[str]:\n",
|
||||
" models = [model] * len(topics)\n",
|
||||
" fallbacks_list = [fallbacks] * len(topics)\n",
|
||||
" with ThreadPoolExecutor(max_workers=5) as executor:\n",
|
||||
" return list(\n",
|
||||
" executor.map(manual_chain_with_fallbacks, topics, models, fallbacks_list)\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def manual_chain_stream(topic: str, *, model: str = \"chat_openai\") -> Iterator[str]:\n",
|
||||
" print(f\"Input: {topic}\")\n",
|
||||
" prompt_value = prompt_template.format(topic=topic)\n",
|
||||
"\n",
|
||||
" if model == \"chat_openai\":\n",
|
||||
" print(f\"Full prompt: {prompt_value}\")\n",
|
||||
" stream = openai.OpenAI().chat.completions.create(\n",
|
||||
" model=\"gpt-3.5-turbo\",\n",
|
||||
" messages=[{\"role\": \"user\", \"content\": prompt_value}],\n",
|
||||
" stream=True,\n",
|
||||
" )\n",
|
||||
" for response in stream:\n",
|
||||
" content = response.choices[0].delta.content\n",
|
||||
" if content is not None:\n",
|
||||
" yield content\n",
|
||||
" elif model == \"openai\":\n",
|
||||
" print(f\"Full prompt: {prompt_value}\")\n",
|
||||
" stream = openai.OpenAI().completions.create(\n",
|
||||
" model=\"gpt-3.5-turbo-instruct\", prompt=prompt_value, stream=True\n",
|
||||
" )\n",
|
||||
" for response in stream:\n",
|
||||
" yield response.choices[0].text\n",
|
||||
" elif model == \"anthropic\":\n",
|
||||
" prompt_value = f\"Human:\\n\\n{prompt_value}\\n\\nAssistant:\"\n",
|
||||
" print(f\"Full prompt: {prompt_value}\")\n",
|
||||
" stream = anthropic.Anthropic().completions.create(\n",
|
||||
" model=\"claude-2\", prompt=prompt_value, max_tokens_to_sample=256, stream=True\n",
|
||||
" )\n",
|
||||
" for response in stream:\n",
|
||||
" yield response.completion\n",
|
||||
" else:\n",
|
||||
" raise ValueError(\n",
|
||||
" f\"Invalid model {model}. Should be one of chat_openai, openai, anthropic.\"\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"async def manual_chain_async(topic: str, *, model: str = \"chat_openai\") -> str:\n",
|
||||
" # You get the idea :)\n",
|
||||
" ...\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"async def manual_chain_async_batch(\n",
|
||||
" topics: List[str], *, model: str = \"chat_openai\"\n",
|
||||
") -> List[str]:\n",
|
||||
" ...\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"async def manual_chain_async_stream(\n",
|
||||
" topic: str, *, model: str = \"chat_openai\"\n",
|
||||
") -> Iterator[str]:\n",
|
||||
" ...\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def manual_chain_stream_with_fallbacks(\n",
|
||||
" topic: str, *, model: str = \"chat_openai\", fallbacks: Tuple[str] = (\"anthropic\",)\n",
|
||||
") -> Iterator[str]:\n",
|
||||
" ..."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "poetry-venv",
|
||||
"language": "python",
|
||||
"name": "poetry-venv"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
@@ -6,7 +6,7 @@
 "metadata": {},
 "source": [
 "---\n",
-"sidebar_position: 0\n",
+"sidebar_position: 1\n",
 "title: Interface\n",
 "---"
 ]
@@ -239,16 +239,11 @@ class ChatOpenAI(BaseChatModel):
     def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
         """Build extra kwargs from additional params that were passed in."""
         all_required_field_names = get_pydantic_field_names(cls)
-        extra = values.get("model_kwargs", {})
+        extra = values.get("extra_params", {})
         for field_name in list(values):
             if field_name in extra:
                 raise ValueError(f"Found {field_name} supplied twice.")
             if field_name not in all_required_field_names:
                 logger.warning(
                     f"""WARNING! {field_name} is not default parameter.
                     {field_name} was transferred to model_kwargs.
                     Please confirm that {field_name} is what you intended."""
                 )
                 extra[field_name] = values.pop(field_name)

         invalid_model_kwargs = all_required_field_names.intersection(extra.keys())
@@ -258,7 +253,7 @@ class ChatOpenAI(BaseChatModel):
             f"Instead they were passed in as part of `model_kwargs` parameter."
         )

-        values["model_kwargs"] = extra
+        values["extra_params"] = extra
         return values

     @root_validator()
Block a user