{
"cells": [
{
"cell_type": "raw",
"id": "17546ebb",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [agent, agents]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "1df78a71",
"metadata": {},
"source": [
"# Build an Agent\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Chat Models](/docs/concepts/#chat-models)\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [Agents](/docs/concepts/#agents)\n",
"\n",
":::\n",
"\n",
"By themselves, language models can't take actions - they just output text.\n",
"A big use case for LangChain is creating **agents**.\n",
"Agents are systems that use LLMs as reasoning engines to determine which actions to take and the inputs to pass them.\n",
"After executing actions, the results can be fed back into the LLM to determine whether more actions are needed, or whether it is okay to finish.\n",
"\n",
"In this tutorial we will build an agent that can interact with a search engine. You will be able to ask this agent questions, watch it call the search tool, and have conversations with it.\n",
"\n",
"## End-to-end agent\n",
"\n",
"The code snippet below represents a fully functional agent that uses an LLM to decide which tools to use. It is equipped with a generic search tool. It has conversational memory - meaning that it can be used as a multi-turn chatbot.\n",
"\n",
"In the rest of the guide, we will walk through the individual components and what each part does - but if you want to just grab some code and get started, feel free to use this!"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a79bb782",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content=\"Hello Bob! Since you didn't ask a specific question, I don't need to use any tools to respond. It's nice to meet you. San Francisco is a wonderful city with lots to see and do. I hope you're enjoying living there. Please let me know if you have any other questions!\", response_metadata={'id': 'msg_01Mmfzfs9m4XMgVzsCZYMWqH', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 271, 'output_tokens': 65}}, id='run-44c57f9c-a637-4888-b7d9-6d985031ae48-0', usage_metadata={'input_tokens': 271, 'output_tokens': 65, 'total_tokens': 336})]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content=[{'text': 'To get current weather information for your location in San Francisco, let me invoke the search tool:', 'type': 'text'}, {'id': 'toolu_01BGEyQaSz3pTq8RwUUHSRoo', 'input': {'query': 'san francisco weather'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}], response_metadata={'id': 'msg_013AVSVsRLKYZjduLpJBY4us', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 347, 'output_tokens': 80}}, id='run-de7923b6-5ee2-4ebe-bd95-5aed4933d0e3-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'san francisco weather'}, 'id': 'toolu_01BGEyQaSz3pTq8RwUUHSRoo'}], usage_metadata={'input_tokens': 347, 'output_tokens': 80, 'total_tokens': 427})]}}\n",
"----\n",
"{'tools': {'messages': [ToolMessage(content='[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{\\'location\\': {\\'name\\': \\'San Francisco\\', \\'region\\': \\'California\\', \\'country\\': \\'United States of America\\', \\'lat\\': 37.78, \\'lon\\': -122.42, \\'tz_id\\': \\'America/Los_Angeles\\', \\'localtime_epoch\\': 1717238643, \\'localtime\\': \\'2024-06-01 3:44\\'}, \\'current\\': {\\'last_updated_epoch\\': 1717237800, \\'last_updated\\': \\'2024-06-01 03:30\\', \\'temp_c\\': 12.0, \\'temp_f\\': 53.6, \\'is_day\\': 0, \\'condition\\': {\\'text\\': \\'Mist\\', \\'icon\\': \\'//cdn.weatherapi.com/weather/64x64/night/143.png\\', \\'code\\': 1030}, \\'wind_mph\\': 5.6, \\'wind_kph\\': 9.0, \\'wind_degree\\': 310, \\'wind_dir\\': \\'NW\\', \\'pressure_mb\\': 1013.0, \\'pressure_in\\': 29.92, \\'precip_mm\\': 0.0, \\'precip_in\\': 0.0, \\'humidity\\': 88, \\'cloud\\': 100, \\'feelslike_c\\': 10.5, \\'feelslike_f\\': 50.8, \\'windchill_c\\': 9.3, \\'windchill_f\\': 48.7, \\'heatindex_c\\': 11.1, \\'heatindex_f\\': 51.9, \\'dewpoint_c\\': 8.8, \\'dewpoint_f\\': 47.8, \\'vis_km\\': 6.4, \\'vis_miles\\': 3.0, \\'uv\\': 1.0, \\'gust_mph\\': 12.5, \\'gust_kph\\': 20.1}}\"}, {\"url\": \"https://www.timeanddate.com/weather/usa/san-francisco/historic\", \"content\": \"Past Weather in San Francisco, California, USA \\\\u2014 Yesterday and Last 2 Weeks. Time/General. Weather. Time Zone. DST Changes. Sun & Moon. Weather Today Weather Hourly 14 Day Forecast Yesterday/Past Weather Climate (Averages) Currently: 68 \\\\u00b0F. Passing clouds.\"}]', name='tavily_search_results_json', tool_call_id='toolu_01BGEyQaSz3pTq8RwUUHSRoo')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='Based on the search results, the current weather in San Francisco is:\\n\\nTemperature: 53.6°F (12°C)\\nConditions: Misty\\nWind: 5.6 mph (9 kph) from the Northwest\\nHumidity: 88%\\nCloud Cover: 100% \\n\\nThe results provide detailed information like wind chill, heat index, visibility and more. It looks like a typical cool, foggy morning in San Francisco. Let me know if you need any other details about the weather where you live!', response_metadata={'id': 'msg_019WGLbaojuNdbCnqac7zaGW', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 1035, 'output_tokens': 120}}, id='run-1bb68bf3-b212-4ef4-8a31-10c830421c78-0', usage_metadata={'input_tokens': 1035, 'output_tokens': 120, 'total_tokens': 1155})]}}\n",
"----\n"
]
}
],
"source": [
"# Import relevant functionality\n",
"from langchain_anthropic import ChatAnthropic\n",
"from langchain_community.tools.tavily_search import TavilySearchResults\n",
"from langchain_core.messages import HumanMessage\n",
"from langgraph.checkpoint.memory import MemorySaver\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"# Create the agent\n",
"memory = MemorySaver()\n",
"model = ChatAnthropic(model_name=\"claude-3-sonnet-20240229\")\n",
"search = TavilySearchResults(max_results=2)\n",
"tools = [search]\n",
"agent_executor = create_react_agent(model, tools, checkpointer=memory)\n",
"\n",
"# Use the agent\n",
"config = {\"configurable\": {\"thread_id\": \"abc123\"}}\n",
"for chunk in agent_executor.stream(\n",
" {\"messages\": [HumanMessage(content=\"hi im bob! and i live in sf\")]}, config\n",
"):\n",
" print(chunk)\n",
" print(\"----\")\n",
"\n",
"for chunk in agent_executor.stream(\n",
" {\"messages\": [HumanMessage(content=\"whats the weather where I live?\")]}, config\n",
"):\n",
" print(chunk)\n",
" print(\"----\")"
]
},
{
"cell_type": "markdown",
"id": "f4c03f40-1328-412d-8a48-1db0cd481b77",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"### Jupyter Notebook\n",
"\n",
"This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is as well. Jupyter notebooks are perfect interactive environments for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc), and observing these cases is a great way to better understand building with LLMs.\n",
"\n",
"This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install.\n",
"\n",
"### Installation\n",
"\n",
"To install LangChain run:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "60bb3eb1",
"metadata": {},
"outputs": [],
"source": [
"%pip install -U langchain-community langgraph langchain-anthropic tavily-python langgraph-checkpoint-sqlite"
]
},
{
"cell_type": "markdown",
"id": "2ee337ae",
"metadata": {},
"source": [
"For more details, see our [Installation guide](/docs/how_to/installation).\n",
"\n",
"### LangSmith\n",
"\n",
"Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.\n",
"As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n",
"The best way to do this is with [LangSmith](https://smith.langchain.com).\n",
"\n",
"After you sign up at the link above, make sure to set your environment variables to start logging traces:\n",
"\n",
"```shell\n",
"export LANGCHAIN_TRACING_V2=\"true\"\n",
"export LANGCHAIN_API_KEY=\"...\"\n",
"```\n",
"\n",
"Or, if in a notebook, you can set them with:\n",
"\n",
"```python\n",
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
"```\n",
"\n",
"### Tavily\n",
"\n",
"We will be using [Tavily](/docs/integrations/tools/tavily_search) (a search engine) as a tool.\n",
"In order to use it, you will need to get and set an API key:\n",
"\n",
"```bash\n",
"export TAVILY_API_KEY=\"...\"\n",
"```\n",
"\n",
"Or, if in a notebook, you can set it with:\n",
"\n",
"```python\n",
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"TAVILY_API_KEY\"] = getpass.getpass()\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "c335d1bf",
"metadata": {},
"source": [
"## Define tools\n",
"\n",
"We first need to create the tools we want to use. Our main tool of choice will be [Tavily](/docs/integrations/tools/tavily_search) - a search engine. We have a built-in tool in LangChain to easily use Tavily search engine as tool.\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "482ce13d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'url': 'https://www.weatherapi.com/',\n",
" 'content': \"{'location': {'name': 'San Francisco', 'region': 'California', 'country': 'United States of America', 'lat': 37.78, 'lon': -122.42, 'tz_id': 'America/Los_Angeles', 'localtime_epoch': 1717238703, 'localtime': '2024-06-01 3:45'}, 'current': {'last_updated_epoch': 1717237800, 'last_updated': '2024-06-01 03:30', 'temp_c': 12.0, 'temp_f': 53.6, 'is_day': 0, 'condition': {'text': 'Mist', 'icon': '//cdn.weatherapi.com/weather/64x64/night/143.png', 'code': 1030}, 'wind_mph': 5.6, 'wind_kph': 9.0, 'wind_degree': 310, 'wind_dir': 'NW', 'pressure_mb': 1013.0, 'pressure_in': 29.92, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 88, 'cloud': 100, 'feelslike_c': 10.5, 'feelslike_f': 50.8, 'windchill_c': 9.3, 'windchill_f': 48.7, 'heatindex_c': 11.1, 'heatindex_f': 51.9, 'dewpoint_c': 8.8, 'dewpoint_f': 47.8, 'vis_km': 6.4, 'vis_miles': 3.0, 'uv': 1.0, 'gust_mph': 12.5, 'gust_kph': 20.1}}\"},\n",
" {'url': 'https://www.wunderground.com/hourly/us/ca/san-francisco/date/2024-01-06',\n",
" 'content': 'Current Weather for Popular Cities . San Francisco, CA 58 ° F Partly Cloudy; Manhattan, NY warning 51 ° F Cloudy; Schiller Park, IL (60176) warning 51 ° F Fair; Boston, MA warning 41 ° F ...'}]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.tools.tavily_search import TavilySearchResults\n",
"\n",
"search = TavilySearchResults(max_results=2)\n",
"search_results = search.invoke(\"what is the weather in SF\")\n",
"print(search_results)\n",
"# If we want, we can create other tools.\n",
"# Once we have all the tools we want, we can put them in a list that we will reference later.\n",
"tools = [search]"
]
},
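{
"cell_type": "markdown",
"id": "9d0b0d3e",
"metadata": {},
"source": [
"The comment above mentions creating other tools. As a rough sketch of what that could look like (the `get_word_length` tool here is purely illustrative, not part of this tutorial's agent), a custom tool can be defined with the `@tool` decorator and added to the list alongside the search tool:\n",
"\n",
"```python\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def get_word_length(word: str) -> int:\n",
"    \"\"\"Return the number of characters in a word.\"\"\"\n",
"    return len(word)\n",
"\n",
"\n",
"# It could then be included in the tool list, e.g. tools = [search, get_word_length]\n",
"```"
]
},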
{
"cell_type": "markdown",
"id": "e00068b0",
"metadata": {},
"source": [
"## Using Language Models\n",
"\n",
"Next, let's learn how to use a language model by to call tools. LangChain supports many different language models that you can use interchangably - select the one you want to use below!\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "69185491",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"model = ChatAnthropic(model=\"claude-3-sonnet-20240229\")"
]
},
{
"cell_type": "markdown",
"id": "642ed8bf",
"metadata": {},
"source": [
"You can call the language model by passing in a list of messages. By default, the response is a `content` string."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "c96c960b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hi there!'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"response = model.invoke([HumanMessage(content=\"hi!\")])\n",
"response.content"
]
},
{
"cell_type": "markdown",
"id": "47bf8210",
"metadata": {},
"source": [
"We can now see what it is like to enable this model to do tool calling. In order to enable that we use `.bind_tools` to give the language model knowledge of these tools"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ba692a74",
"metadata": {},
"outputs": [],
"source": [
"model_with_tools = model.bind_tools(tools)"
]
},
{
"cell_type": "markdown",
"id": "fd920b69",
"metadata": {},
"source": [
"We can now call the model. Let's first call it with a normal message, and see how it responds. We can look at both the `content` field as well as the `tool_calls` field."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "b6a7e925",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ContentString: Hello!\n",
"ToolCalls: []\n"
]
}
],
"source": [
"response = model_with_tools.invoke([HumanMessage(content=\"Hi!\")])\n",
"\n",
"print(f\"ContentString: {response.content}\")\n",
"print(f\"ToolCalls: {response.tool_calls}\")"
]
},
{
"cell_type": "markdown",
"id": "e8c81e76",
"metadata": {},
"source": [
"Now, let's try calling it with some input that would expect a tool to be called."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "688b465d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ContentString: \n",
"ToolCalls: [{'name': 'tavily_search_results_json', 'args': {'query': 'weather san francisco'}, 'id': 'toolu_01VTP7DUvSfgtYxsq9x4EwMp'}]\n"
]
}
],
"source": [
"response = model_with_tools.invoke([HumanMessage(content=\"What's the weather in SF?\")])\n",
"\n",
"print(f\"ContentString: {response.content}\")\n",
"print(f\"ToolCalls: {response.tool_calls}\")"
]
},
{
"cell_type": "markdown",
"id": "83c4bcd3",
"metadata": {},
"source": [
"We can see that there's now no text content, but there is a tool call! It wants us to call the Tavily Search tool.\n",
"\n",
"This isn't calling that tool yet - it's just telling us to. In order to actually call it, we'll want to create our agent."
]
},
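{
"cell_type": "markdown",
"id": "b7a52a6c",
"metadata": {},
"source": [
"To make that concrete, here is a rough sketch (assuming the `response` from the cell above contains a tool call) of what executing the proposed call by hand could look like - the agent we create next automates exactly this loop:\n",
"\n",
"```python\n",
"from langchain_core.messages import ToolMessage\n",
"\n",
"# Take the tool call the model proposed and run the Tavily tool with those arguments.\n",
"tool_call = response.tool_calls[0]\n",
"tool_output = search.invoke(tool_call[\"args\"])\n",
"\n",
"# Package the result so it can be passed back to the model as a ToolMessage.\n",
"tool_message = ToolMessage(content=str(tool_output), tool_call_id=tool_call[\"id\"])\n",
"```"
]
},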
{
"cell_type": "markdown",
"id": "40ccec80",
"metadata": {},
"source": [
"## Create the agent\n",
"\n",
"Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/#langgraph) to construct the agent. \n",
"Currently we are using a high level interface to construct the agent, but the nice thing about LangGraph is that this high-level interface is backed by a low-level, highly controllable API in case you want to modify the agent logic.\n"
]
},
{
"cell_type": "markdown",
"id": "f8014c9d",
"metadata": {},
"source": [
"Now, we can initialize the agent with the LLM and the tools.\n",
"\n",
"Note that we are passing in the `model`, not `model_with_tools`. That is because `create_react_agent` will call `.bind_tools` for us under the hood."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "89cf72b4-6046-4b47-8f27-5522d8cb8036",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent_executor = create_react_agent(model, tools)"
]
},
{
"cell_type": "markdown",
"id": "e4df0e06",
"metadata": {},
"source": [
"## Run the agent\n",
"\n",
"We can now run the agent on a few queries! Note that for now, these are all **stateless** queries (it won't remember previous interactions). Note that the agent will return the **final** state at the end of the interaction (which includes any inputs, we will see later on how to get only the outputs).\n",
"\n",
"First up, let's see how it responds when there's no need to call a tool:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "114ba50d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!', id='a820fcc5-9b87-457a-9af0-f21768143ee3'),\n",
" AIMessage(content='Hello!', response_metadata={'id': 'msg_01VbC493X1VEDyusgttiEr1z', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 264, 'output_tokens': 5}}, id='run-0e0ddae8-a85b-4bd6-947c-c36c857a4698-0', usage_metadata={'input_tokens': 264, 'output_tokens': 5, 'total_tokens': 269})]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response = agent_executor.invoke({\"messages\": [HumanMessage(content=\"hi!\")]})\n",
"\n",
"response[\"messages\"]"
]
},
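{
"cell_type": "markdown",
"id": "4f2d9c11",
"metadata": {},
"source": [
"The returned state contains the whole message list, input included. If all you want is the agent's final reply, one simple option (a sketch, not the only way) is to read the last message:\n",
"\n",
"```python\n",
"# The last message in the returned state is the agent's final response.\n",
"print(response[\"messages\"][-1].content)\n",
"```"
]
},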
{
"cell_type": "markdown",
"id": "71493a42",
"metadata": {},
"source": [
"In order to see exactly what is happening under the hood (and to make sure it's not calling a tool) we can take a look at the [LangSmith trace](https://smith.langchain.com/public/28311faa-e135-4d6a-ab6b-caecf6482aaa/r)\n",
"\n",
"Let's now try it out on an example where it should be invoking the tool"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "77c2f769",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='whats the weather in sf?', id='1d6c96bb-4ddb-415c-a579-a07d5264de0d'),\n",
" AIMessage(content=[{'id': 'toolu_01Y5EK4bw2LqsQXeaUv8iueF', 'input': {'query': 'weather in san francisco'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}], response_metadata={'id': 'msg_0132wQUcEduJ8UKVVVqwJzM4', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 269, 'output_tokens': 61}}, id='run-26d5e5e8-d4fd-46d2-a197-87b95b10e823-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in san francisco'}, 'id': 'toolu_01Y5EK4bw2LqsQXeaUv8iueF'}], usage_metadata={'input_tokens': 269, 'output_tokens': 61, 'total_tokens': 330}),\n",
" ToolMessage(content='[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{\\'location\\': {\\'name\\': \\'San Francisco\\', \\'region\\': \\'California\\', \\'country\\': \\'United States of America\\', \\'lat\\': 37.78, \\'lon\\': -122.42, \\'tz_id\\': \\'America/Los_Angeles\\', \\'localtime_epoch\\': 1717238703, \\'localtime\\': \\'2024-06-01 3:45\\'}, \\'current\\': {\\'last_updated_epoch\\': 1717237800, \\'last_updated\\': \\'2024-06-01 03:30\\', \\'temp_c\\': 12.0, \\'temp_f\\': 53.6, \\'is_day\\': 0, \\'condition\\': {\\'text\\': \\'Mist\\', \\'icon\\': \\'//cdn.weatherapi.com/weather/64x64/night/143.png\\', \\'code\\': 1030}, \\'wind_mph\\': 5.6, \\'wind_kph\\': 9.0, \\'wind_degree\\': 310, \\'wind_dir\\': \\'NW\\', \\'pressure_mb\\': 1013.0, \\'pressure_in\\': 29.92, \\'precip_mm\\': 0.0, \\'precip_in\\': 0.0, \\'humidity\\': 88, \\'cloud\\': 100, \\'feelslike_c\\': 10.5, \\'feelslike_f\\': 50.8, \\'windchill_c\\': 9.3, \\'windchill_f\\': 48.7, \\'heatindex_c\\': 11.1, \\'heatindex_f\\': 51.9, \\'dewpoint_c\\': 8.8, \\'dewpoint_f\\': 47.8, \\'vis_km\\': 6.4, \\'vis_miles\\': 3.0, \\'uv\\': 1.0, \\'gust_mph\\': 12.5, \\'gust_kph\\': 20.1}}\"}, {\"url\": \"https://www.timeanddate.com/weather/usa/san-francisco/hourly\", \"content\": \"Sun & Moon. Weather Today Weather Hourly 14 Day Forecast Yesterday/Past Weather Climate (Averages) Currently: 59 \\\\u00b0F. Passing clouds. (Weather station: San Francisco International Airport, USA). See more current weather.\"}]', name='tavily_search_results_json', id='37aa1fd9-b232-4a02-bd22-bc5b9b44a22c', tool_call_id='toolu_01Y5EK4bw2LqsQXeaUv8iueF'),\n",
" AIMessage(content='Based on the search results, here is a summary of the current weather in San Francisco:\\n\\nThe weather in San Francisco is currently misty with a temperature of around 53°F (12°C). There is complete cloud cover and moderate winds from the northwest around 5-9 mph (9-14 km/h). Humidity is high at 88%. Visibility is around 3 miles (6.4 km). \\n\\nThe results provide an hourly forecast as well as current conditions from a couple different weather sources. Let me know if you need any additional details about the San Francisco weather!', response_metadata={'id': 'msg_01BRX9mrT19nBDdHYtR7wJ92', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 920, 'output_tokens': 132}}, id='run-d0325583-3ddc-4432-b2b2-d023eb97660f-0', usage_metadata={'input_tokens': 920, 'output_tokens': 132, 'total_tokens': 1052})]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response = agent_executor.invoke(\n",
" {\"messages\": [HumanMessage(content=\"whats the weather in sf?\")]}\n",
")\n",
"response[\"messages\"]"
]
},
{
"cell_type": "markdown",
"id": "c174f838",
"metadata": {},
"source": [
"We can check out the [LangSmith trace](https://smith.langchain.com/public/f520839d-cd4d-4495-8764-e32b548e235d/r) to make sure it's calling the search tool effectively."
]
},
{
"cell_type": "markdown",
"id": "8f6ca7e4",
"metadata": {},
"source": [
"## Streaming Messages\n",
"\n",
"We've seen how the agent can be called with `.invoke` to get back a final response. If the agent is executing multiple steps, that may take a while. In order to show intermediate progress, we can stream back messages as they occur."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "532d6557",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_50Kb8zHmFqPYavQwF5TgcOH8', 'function': {'arguments': '{\\n \"query\": \"current weather in San Francisco\"\\n}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 23, 'prompt_tokens': 134, 'total_tokens': 157}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-042d5feb-c2cc-4c3f-b8fd-dbc22fd0bc07-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_50Kb8zHmFqPYavQwF5TgcOH8'}])]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content='[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{\\'location\\': {\\'name\\': \\'San Francisco\\', \\'region\\': \\'California\\', \\'country\\': \\'United States of America\\', \\'lat\\': 37.78, \\'lon\\': -122.42, \\'tz_id\\': \\'America/Los_Angeles\\', \\'localtime_epoch\\': 1714426906, \\'localtime\\': \\'2024-04-29 14:41\\'}, \\'current\\': {\\'last_updated_epoch\\': 1714426200, \\'last_updated\\': \\'2024-04-29 14:30\\', \\'temp_c\\': 17.8, \\'temp_f\\': 64.0, \\'is_day\\': 1, \\'condition\\': {\\'text\\': \\'Sunny\\', \\'icon\\': \\'//cdn.weatherapi.com/weather/64x64/day/113.png\\', \\'code\\': 1000}, \\'wind_mph\\': 23.0, \\'wind_kph\\': 37.1, \\'wind_degree\\': 290, \\'wind_dir\\': \\'WNW\\', \\'pressure_mb\\': 1019.0, \\'pressure_in\\': 30.09, \\'precip_mm\\': 0.0, \\'precip_in\\': 0.0, \\'humidity\\': 50, \\'cloud\\': 0, \\'feelslike_c\\': 17.8, \\'feelslike_f\\': 64.0, \\'vis_km\\': 16.0, \\'vis_miles\\': 9.0, \\'uv\\': 5.0, \\'gust_mph\\': 27.5, \\'gust_kph\\': 44.3}}\"}, {\"url\": \"https://world-weather.info/forecast/usa/san_francisco/april-2024/\", \"content\": \"Extended weather forecast in San Francisco. Hourly Week 10 days 14 days 30 days Year. Detailed \\\\u26a1 San Francisco Weather Forecast for April 2024 - day/night \\\\ud83c\\\\udf21\\\\ufe0f temperatures, precipitations - World-Weather.info.\"}]', name='tavily_search_results_json', id='d88320ac-3fe1-4f73-870a-3681f15f6982', tool_call_id='call_50Kb8zHmFqPYavQwF5TgcOH8')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='The current weather in San Francisco, California is sunny with a temperature of 17.8°C (64.0°F). The wind is coming from the WNW at 23.0 mph. The humidity is at 50%. [source](https://www.weatherapi.com/)', response_metadata={'token_usage': {'completion_tokens': 58, 'prompt_tokens': 602, 'total_tokens': 660}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-0cd2a507-ded5-4601-afe3-3807400e9989-0')]}}\n",
"----\n"
]
}
],
"source": [
"for chunk in agent_executor.stream(\n",
" {\"messages\": [HumanMessage(content=\"whats the weather in sf?\")]}\n",
"):\n",
" print(chunk)\n",
" print(\"----\")"
]
},
{
"cell_type": "markdown",
"id": "c72b3043",
"metadata": {},
"source": [
"## Streaming tokens\n",
"\n",
"In addition to streaming back messages, it is also useful to be streaming back tokens.\n",
"We can do this with the `.astream_events` method.\n",
"\n",
":::important\n",
"This `.astream_events` method only works with Python 3.11 or higher.\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a3fb262c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--\n",
"Starting tool: tavily_search_results_json with inputs: {'query': 'current weather in San Francisco'}\n",
"Done tool: tavily_search_results_json\n",
"Tool output was: [{'url': 'https://www.weatherapi.com/', 'content': \"{'location': {'name': 'San Francisco', 'region': 'California', 'country': 'United States of America', 'lat': 37.78, 'lon': -122.42, 'tz_id': 'America/Los_Angeles', 'localtime_epoch': 1714427052, 'localtime': '2024-04-29 14:44'}, 'current': {'last_updated_epoch': 1714426200, 'last_updated': '2024-04-29 14:30', 'temp_c': 17.8, 'temp_f': 64.0, 'is_day': 1, 'condition': {'text': 'Sunny', 'icon': '//cdn.weatherapi.com/weather/64x64/day/113.png', 'code': 1000}, 'wind_mph': 23.0, 'wind_kph': 37.1, 'wind_degree': 290, 'wind_dir': 'WNW', 'pressure_mb': 1019.0, 'pressure_in': 30.09, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 50, 'cloud': 0, 'feelslike_c': 17.8, 'feelslike_f': 64.0, 'vis_km': 16.0, 'vis_miles': 9.0, 'uv': 5.0, 'gust_mph': 27.5, 'gust_kph': 44.3}}\"}, {'url': 'https://www.weathertab.com/en/c/e/04/united-states/california/san-francisco/', 'content': 'San Francisco Weather Forecast for Apr 2024 - Risk of Rain Graph. Rain Risk Graph: Monthly Overview. Bar heights indicate rain risk percentages. Yellow bars mark low-risk days, while black and grey bars signal higher risks. Grey-yellow bars act as buffers, advising to keep at least one day clear from the riskier grey and black days, guiding ...'}]\n",
"--\n",
"The| current| weather| in| San| Francisco|,| California|,| USA| is| sunny| with| a| temperature| of| |17|.|8|°C| (|64|.|0|°F|).| The| wind| is| blowing| from| the| W|NW| at| a| speed| of| |37|.|1| k|ph| (|23|.|0| mph|).| The| humidity| level| is| at| |50|%.| [|Source|](|https|://|www|.weather|api|.com|/)|"
]
}
],
"source": [
"async for event in agent_executor.astream_events(\n",
" {\"messages\": [HumanMessage(content=\"whats the weather in sf?\")]}, version=\"v1\"\n",
"):\n",
" kind = event[\"event\"]\n",
" if kind == \"on_chain_start\":\n",
" if (\n",
" event[\"name\"] == \"Agent\"\n",
" ): # Was assigned when creating the agent with `.with_config({\"run_name\": \"Agent\"})`\n",
" print(\n",
" f\"Starting agent: {event['name']} with input: {event['data'].get('input')}\"\n",
" )\n",
" elif kind == \"on_chain_end\":\n",
" if (\n",
" event[\"name\"] == \"Agent\"\n",
" ): # Was assigned when creating the agent with `.with_config({\"run_name\": \"Agent\"})`\n",
" print()\n",
" print(\"--\")\n",
" print(\n",
" f\"Done agent: {event['name']} with output: {event['data'].get('output')['output']}\"\n",
" )\n",
" if kind == \"on_chat_model_stream\":\n",
" content = event[\"data\"][\"chunk\"].content\n",
" if content:\n",
" # Empty content in the context of OpenAI means\n",
" # that the model is asking for a tool to be invoked.\n",
" # So we only print non-empty content\n",
" print(content, end=\"|\")\n",
" elif kind == \"on_tool_start\":\n",
" print(\"--\")\n",
" print(\n",
" f\"Starting tool: {event['name']} with inputs: {event['data'].get('input')}\"\n",
" )\n",
" elif kind == \"on_tool_end\":\n",
" print(f\"Done tool: {event['name']}\")\n",
" print(f\"Tool output was: {event['data'].get('output')}\")\n",
" print(\"--\")"
]
},
{
"cell_type": "markdown",
"id": "022cbc8a",
"metadata": {},
"source": [
"## Adding in memory\n",
"\n",
"As mentioned earlier, this agent is stateless. This means it does not remember previous interactions. To give it memory we need to pass in a checkpointer. When passing in a checkpointer, we also have to pass in a `thread_id` when invoking the agent (so it knows which thread/conversation to resume from)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4073e35",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.checkpoint.memory import MemorySaver\n",
"\n",
"memory = MemorySaver()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "e64a944e-f9ac-43cf-903c-d3d28d765377",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = create_react_agent(model, tools, checkpointer=memory)\n",
"\n",
"config = {\"configurable\": {\"thread_id\": \"abc123\"}}"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "a13462d0-2d02-4474-921e-15a1ba1fa274",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content=\"Hello Bob! It's nice to meet you again.\", response_metadata={'id': 'msg_013C1z2ZySagEFwmU1EsysR2', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 1162, 'output_tokens': 14}}, id='run-f878acfd-d195-44e8-9166-e2796317e3f8-0', usage_metadata={'input_tokens': 1162, 'output_tokens': 14, 'total_tokens': 1176})]}}\n",
"----\n"
]
}
],
"source": [
"for chunk in agent_executor.stream(\n",
" {\"messages\": [HumanMessage(content=\"hi im bob!\")]}, config\n",
"):\n",
" print(chunk)\n",
" print(\"----\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "56d8028b-5dbc-40b2-86f5-ed60631d86a3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='You mentioned your name is Bob when you introduced yourself earlier. So your name is Bob.', response_metadata={'id': 'msg_01WNwnRNGwGDRw6vRdivt6i1', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 1184, 'output_tokens': 21}}, id='run-f5c0b957-8878-405a-9d4b-a7cd38efe81f-0', usage_metadata={'input_tokens': 1184, 'output_tokens': 21, 'total_tokens': 1205})]}}\n",
"----\n"
]
}
],
"source": [
"for chunk in agent_executor.stream(\n",
" {\"messages\": [HumanMessage(content=\"whats my name?\")]}, config\n",
"):\n",
" print(chunk)\n",
" print(\"----\")"
]
},
{
"cell_type": "markdown",
"id": "bda99754-0a11-4447-b408-e8db8f2e3517",
"metadata": {},
"source": [
"Example [LangSmith trace](https://smith.langchain.com/public/fa73960b-0f7d-4910-b73d-757a12f33b2b/r)"
]
},
{
"cell_type": "markdown",
"id": "ae908088",
"metadata": {},
"source": [
"If I want to start a new conversation, all I have to do is change the `thread_id` used"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "24460239",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content=\"I'm afraid I don't actually know your name. As an AI assistant without personal information about you, I don't have a specific name associated with our conversation.\", response_metadata={'id': 'msg_01NoaXNNYZKSoBncPcLkdcbo', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 267, 'output_tokens': 36}}, id='run-c9f7df3d-525a-4d8f-bbcf-a5b4a5d2e4b0-0', usage_metadata={'input_tokens': 267, 'output_tokens': 36, 'total_tokens': 303})]}}\n",
"----\n"
]
}
],
"source": [
"config = {\"configurable\": {\"thread_id\": \"xyz123\"}}\n",
"for chunk in agent_executor.stream(\n",
" {\"messages\": [HumanMessage(content=\"whats my name?\")]}, config\n",
"):\n",
" print(chunk)\n",
" print(\"----\")"
]
},
{
"cell_type": "markdown",
"id": "c029798f",
"metadata": {},
"source": [
"## Conclusion\n",
"\n",
"That's a wrap! In this quick start we covered how to create a simple agent. \n",
"We've then shown how to stream back a response - not only the intermediate steps, but also tokens!\n",
"We've also added in memory so you can have a conversation with them.\n",
"Agents are a complex topic, and there's lot to learn! \n",
"\n",
"For more information on Agents, please check out the [LangGraph](/docs/concepts/#langgraph) documentation. This has it's own set of concepts, tutorials, and how-to guides."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e3ec3244",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}