{
 "cells": [
  {
   "cell_type": "raw",
   "id": "63ee3f93",
   "metadata": {},
   "source": [
    "---\n",
    "sidebar_position: 0\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9316da0d",
   "metadata": {},
   "source": [
    "# Build a Simple LLM Application\n",
    "\n",
    "In this quickstart we'll show you how to build a simple LLM application with LangChain. This application will translate text from English into another language. This is a relatively simple LLM application - it's just a single LLM call plus some prompting. Still, this is a great way to get started with LangChain - a lot of features can be built with just some prompting and an LLM call!\n",
    "\n",
    "After reading this tutorial, you'll have a high level overview of:\n",
    "\n",
    "- Using [language models](/docs/concepts/chat_models)\n",
    "\n",
    "- Using [PromptTemplates](/docs/concepts/prompt_templates)\n",
    "\n",
    "- Debugging and tracing your application using [LangSmith](https://docs.smith.langchain.com/)\n",
    "\n",
    "Let's dive in!\n",
    "\n",
    "## Setup\n",
    "\n",
    "### Jupyter Notebook\n",
    "\n",
    "This and other tutorials are perhaps most conveniently run in a [Jupyter notebook](https://jupyter.org/). Going through guides in an interactive environment is a great way to better understand them. See [here](https://jupyter.org/install) for instructions on how to install.\n",
    "\n",
    "### Installation\n",
    "\n",
    "To install LangChain run:\n",
    "\n",
    "import Tabs from '@theme/Tabs';\n",
    "import TabItem from '@theme/TabItem';\n",
    "import CodeBlock from \"@theme/CodeBlock\";\n",
    "\n",
    "<Tabs>\n",
    "  <TabItem value=\"pip\" label=\"Pip\" default>\n",
    "    <CodeBlock language=\"bash\">pip install langchain</CodeBlock>\n",
    "  </TabItem>\n",
    "  <TabItem value=\"conda\" label=\"Conda\">\n",
    "    <CodeBlock language=\"bash\">conda install langchain -c conda-forge</CodeBlock>\n",
    "  </TabItem>\n",
    "</Tabs>\n",
    "\n",
    "For more details, see our [Installation guide](/docs/how_to/installation).\n",
    "\n",
    "### LangSmith\n",
    "\n",
    "Many of the applications you build with LangChain will contain multiple steps with multiple LLM calls.\n",
    "As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n",
    "The best way to do this is with [LangSmith](https://smith.langchain.com).\n",
    "\n",
    "After you sign up at the link above, make sure to set your environment variables to start logging traces:\n",
    "\n",
    "```shell\n",
    "export LANGCHAIN_TRACING_V2=\"true\"\n",
    "export LANGCHAIN_API_KEY=\"...\"\n",
    "```\n",
    "\n",
    "Or, if in a notebook, you can set them with:\n",
    "\n",
    "```python\n",
    "import getpass\n",
    "import os\n",
    "\n",
    "os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
    "os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e5558ca9",
   "metadata": {},
   "source": [
    "## Using Language Models\n",
    "\n",
    "First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably. For details on getting started with a specific model, refer to [supported integrations](/docs/integrations/chat/).\n",
    "\n",
    "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
    "\n",
    "<ChatModelTabs openaiParams={`model=\"gpt-4o-mini\"`} />\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "e4b41234",
   "metadata": {},
   "outputs": [],
   "source": [
    "# | output: false\n",
    "# | echo: false\n",
    "\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "model = ChatOpenAI(model=\"gpt-4o\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ca5642ff",
   "metadata": {},
   "source": [
    "Let's first use the model directly. [ChatModels](/docs/concepts/chat_models) are instances of LangChain [Runnables](/docs/concepts/runnables/), which means they expose a standard interface for interacting with them. To simply call the model, we can pass a list of messages to the `.invoke` method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "1b2481f0",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='Ciao!', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 3, 'prompt_tokens': 20, 'total_tokens': 23, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_9ee9e968ea', 'finish_reason': 'stop', 'logprobs': None}, id='run-ad371806-6082-45c3-b6fa-e44622848ab2-0', usage_metadata={'input_tokens': 20, 'output_tokens': 3, 'total_tokens': 23, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.messages import HumanMessage, SystemMessage\n",
    "\n",
    "messages = [\n",
    "    SystemMessage(content=\"Translate the following from English into Italian\"),\n",
    "    HumanMessage(content=\"hi!\"),\n",
    "]\n",
    "\n",
    "model.invoke(messages)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f83373db",
   "metadata": {},
   "source": [
    "If we've enabled LangSmith, we can see that this run is logged, and can inspect the [LangSmith trace](https://smith.langchain.com/public/88baa0b2-7c1a-4d09-ba30-a47985dde2ea/r). The LangSmith trace reports [token](/docs/concepts/tokens/) usage information, latency, [standard model parameters](/docs/concepts/chat_models/#standard-parameters) (such as temperature), and other information.\n",
    "\n",
    "Note that ChatModels receive [message](/docs/concepts/messages/) objects as input and generate message objects as output. In addition to text content, message objects convey conversational [roles](/docs/concepts/messages/#role) and hold important data, such as [tool calls](/docs/concepts/tool_calling/) and token usage counts."
   ]
  },
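  {
   "cell_type": "markdown",
   "id": "c3f1a2b4",
   "metadata": {},
   "source": [
    "As a quick illustration, the message returned above exposes its text via `.content` and its token counts via `.usage_metadata` (both visible in the repr above). A minimal sketch, reusing the `model` and `messages` defined above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f6a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch: inspect the fields of the returned message object.\n",
    "# Reuses `model` and `messages` from the cells above.\n",
    "response = model.invoke(messages)\n",
    "\n",
    "print(response.content)  # the translated text, e.g. \"Ciao!\"\n",
    "print(response.usage_metadata)  # token counts, when the provider reports them"
   ]
  },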
  {
   "cell_type": "markdown",
   "id": "1ab8da31",
   "metadata": {},
   "source": [
    "## Prompt Templates\n",
    "\n",
    "Right now we are passing a list of messages directly into the language model. Where does this list of messages come from? Usually, it is constructed from a combination of user input and application logic. This application logic usually takes the raw user input and transforms it into a list of messages ready to pass to the language model. Common transformations include adding a system message or formatting a template with the user input.\n",
    "\n",
    "[Prompt templates](/docs/concepts/prompt_templates/) are a concept in LangChain designed to assist with this transformation. They take in raw user input and return data (a prompt) that is ready to pass into a language model.\n",
    "\n",
    "Let's create a prompt template here. It will take in two user variables:\n",
    "\n",
    "- `language`: The language to translate text into\n",
    "\n",
    "- `text`: The text to translate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "3e73cc20",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "system_template = \"Translate the following from English into {language}\"\n",
    "\n",
    "prompt_template = ChatPromptTemplate.from_messages(\n",
    "    [(\"system\", system_template), (\"user\", \"{text}\")]\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7e876c2a",
   "metadata": {},
   "source": [
    "Note that `ChatPromptTemplate` supports multiple [message roles](/docs/concepts/messages/#role) in a single template. We format the `language` parameter into the system message, and the user `text` into a user message."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d9711ba6",
   "metadata": {},
   "source": [
    "The input to this prompt template is a dictionary. We can play around with this prompt template by itself to see what it does:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "f781b3cb",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatPromptValue(messages=[SystemMessage(content='Translate the following from English into Italian', additional_kwargs={}, response_metadata={}), HumanMessage(content='hi!', additional_kwargs={}, response_metadata={})])"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "result = prompt_template.invoke({\"language\": \"Italian\", \"text\": \"hi!\"})\n",
    "\n",
    "result"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1a49ba9e",
   "metadata": {},
   "source": [
    "We can see that it returns a `ChatPromptValue` that consists of two messages. If we want to access the messages directly, we can do:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "2159b619",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='Translate the following from English into Italian', additional_kwargs={}, response_metadata={}),\n",
       " HumanMessage(content='hi!', additional_kwargs={}, response_metadata={})]"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "result.to_messages()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5a4267a8",
   "metadata": {},
   "source": [
    "## Chaining together components with LCEL\n",
    "\n",
    "We can now combine this with the model from above using the pipe (`|`) operator:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "6c6beb4b",
   "metadata": {},
   "outputs": [],
   "source": [
    "chain = prompt_template | model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "3e45595a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Ciao!\n"
     ]
    }
   ],
   "source": [
    "response = chain.invoke({\"language\": \"Italian\", \"text\": \"hi!\"})\n",
    "print(response.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0b19cecb",
   "metadata": {},
   "source": [
    ":::tip\n",
    "Message `content` can contain both text and [content blocks](/docs/concepts/messages/#aimessage) with additional structure. See [this guide](/docs/how_to/output_parser_string/) for more information.\n",
    ":::\n",
    "\n",
    "This is a simple example of using [LangChain Expression Language (LCEL)](/docs/concepts/lcel) to chain together LangChain modules. There are several benefits to this approach, including optimized streaming and tracing support.\n",
    "\n",
    "If we take a look at the [LangSmith trace](https://smith.langchain.com/public/bc49bec0-6b13-4726-967f-dbd3448b786d/r), we can see both components show up."
   ]
  },
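  {
   "cell_type": "markdown",
   "id": "b7c8d9e0",
   "metadata": {},
   "source": [
    "One of those benefits is streaming: because the chain is itself a [Runnable](/docs/concepts/runnables/), it supports the same standard interface as the model, including a `.stream` method. A minimal sketch, assuming the `chain` defined above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e1f2a3b4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch: stream the translation as it is generated.\n",
    "# `.stream()` is part of the standard Runnable interface; each chunk\n",
    "# is a message chunk whose `.content` holds a piece of the output text.\n",
    "for chunk in chain.stream({\"language\": \"Italian\", \"text\": \"hi!\"}):\n",
    "    print(chunk.content, end=\"\")"
   ]
  },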
  {
   "cell_type": "markdown",
   "id": "befdb168",
   "metadata": {},
   "source": [
    "## Conclusion\n",
    "\n",
    "That's it! In this tutorial you've learned how to create your first simple LLM application. You've learned how to work with language models, how to create a prompt template, and how to get great observability into chains you create with LangSmith.\n",
    "\n",
    "This just scratches the surface of what you will want to learn to become a proficient AI Engineer. Luckily - we've got a lot of other resources!\n",
    "\n",
    "For further reading on the core concepts of LangChain, we've got detailed [Conceptual Guides](/docs/concepts).\n",
    "\n",
    "If you have more specific questions on these concepts, check out the following sections of the how-to guides:\n",
    "\n",
    "- [Chat models](/docs/how_to/#chat-models)\n",
    "- [Prompt templates](/docs/how_to/#prompt-templates)\n",
    "\n",
    "And the LangSmith docs:\n",
    "\n",
    "- [LangSmith](https://docs.smith.langchain.com)"
   ]
  }
 ],
"metadata": {
|
|
"kernelspec": {
|
|
"display_name": "Python 3 (ipykernel)",
|
|
"language": "python",
|
|
"name": "python3"
|
|
},
|
|
"language_info": {
|
|
"codemirror_mode": {
|
|
"name": "ipython",
|
|
"version": 3
|
|
},
|
|
"file_extension": ".py",
|
|
"mimetype": "text/x-python",
|
|
"name": "python",
|
|
"nbconvert_exporter": "python",
|
|
"pygments_lexer": "ipython3",
|
|
"version": "3.10.4"
|
|
}
|
|
},
|
|
"nbformat": 4,
|
|
"nbformat_minor": 5
|
|
}
|