mirror of
https://github.com/hwchase17/langchain.git
synced 2026-04-03 10:55:08 +00:00
Done with a script + manual review: 1. Map unique file names to new paths; 2. Where those file names have old links, update them.
715 lines
27 KiB
Plaintext
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to use a chat model to call tools\n",
"\n",
"```{=mdx}\n",
":::info\n",
"We use the term tool calling interchangeably with function calling. Although\n",
"function calling is sometimes meant to refer to invocations of a single function,\n",
"we treat all models as though they can return multiple tool or function calls in \n",
"each message.\n",
":::\n",
"```\n",
"\n",
"Tool calling allows a chat model to respond to a given prompt by \"calling a tool\".\n",
"While the name implies that the model is performing \n",
"some action, this is actually not the case! The model only generates the \n",
"arguments to a tool, and actually running the tool (or not) is up to the user.\n",
"For example, if you want to [extract output matching some schema](/docs/how_to/structured_output/) \n",
"from unstructured text, you could give the model an \"extraction\" tool that takes \n",
"parameters matching the desired schema, then treat the generated output as your final \n",
"result.\n",
"\n",
"However, tool calling goes beyond [structured output](/docs/how_to/structured_output/)\n",
"since you can pass responses to called tools back to the model to create longer interactions.\n",
"For instance, given a search engine tool, an LLM might handle a \n",
"query by first issuing a call to the search engine with arguments. The system calling the LLM can \n",
"receive the tool call, execute it, and return the output to the LLM to inform its \n",
"response. LangChain includes a suite of [built-in tools](/docs/integrations/tools/) \n",
"and supports several methods for defining your own [custom tools](/docs/how_to/custom_tools). \n",
"\n",
"Tool calling is not universal, but many popular LLM providers, including [Anthropic](https://www.anthropic.com/), \n",
"[Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai), \n",
"[Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others, \n",
"support variants of a tool calling feature.\n",
"\n",
"LangChain implements standard interfaces for defining tools, passing them to LLMs, \n",
"and representing tool calls. This guide will show you how to use them.\n",
"\n",
"```{=mdx}\n",
"import PrerequisiteLinks from \"@theme/PrerequisiteLinks\";\n",
"\n",
"<PrerequisiteLinks content={`\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"`} />\n",
"```\n",
"\n",
"## Passing tools to chat models\n",
"\n",
"Chat models that support tool calling features implement a `.bind_tools` method, which \n",
"receives a list of LangChain [tool objects](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool) \n",
"and binds them to the chat model in its expected format. Subsequent invocations of the \n",
"chat model will include tool schemas in their calls to the LLM.\n",
"\n",
"For example, we can define the schema for custom tools using the `@tool` decorator \n",
"on Python functions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def add(a: int, b: int) -> int:\n",
"    \"\"\"Adds a and b.\"\"\"\n",
"    return a + b\n",
"\n",
"\n",
"@tool\n",
"def multiply(a: int, b: int) -> int:\n",
"    \"\"\"Multiplies a and b.\"\"\"\n",
"    return a * b\n",
"\n",
"\n",
"tools = [add, multiply]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, we can define the schema using [Pydantic](https://docs.pydantic.dev):"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"# Note that the docstrings here are crucial, as they will be passed along\n",
"# to the model along with the class name.\n",
"class Add(BaseModel):\n",
"    \"\"\"Add two integers together.\"\"\"\n",
"\n",
"    a: int = Field(..., description=\"First integer\")\n",
"    b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"class Multiply(BaseModel):\n",
"    \"\"\"Multiply two integers together.\"\"\"\n",
"\n",
"    a: int = Field(..., description=\"First integer\")\n",
"    b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"tools = [Add, Multiply]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can bind them to chat models as follows:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs\n",
"  customVarName=\"llm\"\n",
"  fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
"/>\n",
"```\n",
"\n",
"We'll use the `.bind_tools()` method to handle converting\n",
"`Multiply` to the proper format for the model, then bind it (i.e.,\n",
"pass it in each time the model is invoked)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"%pip install -qU langchain langchain_openai\n",
"\n",
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"llm_with_tools = llm.bind_tools(tools)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tool calls\n",
"\n",
"If tool calls are included in an LLM response, they are attached to the corresponding \n",
"[message](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage) \n",
"or [message chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) \n",
"as a list of [tool call](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCall.html#langchain_core.messages.tool.ToolCall) \n",
"objects in the `.tool_calls` attribute.\n",
"\n",
"Note that chat models can call multiple tools at once.\n",
"\n",
"A `ToolCall` is a typed dict that includes a \n",
"tool name, a dict of argument values, and (optionally) an identifier. Messages with no \n",
"tool calls default to an empty list for this attribute."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'Multiply',\n",
"  'args': {'a': 3, 'b': 12},\n",
"  'id': 'call_KquHA7mSbgtAkpkmRPaFnJKa'},\n",
" {'name': 'Add',\n",
"  'args': {'a': 11, 'b': 49},\n",
"  'id': 'call_Fl0hQi4IBTzlpaJYlM5kPQhE'}]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What is 3 * 12? Also, what is 11 + 49?\"\n",
"\n",
"llm_with_tools.invoke(query).tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `.tool_calls` attribute should contain valid tool calls. Note that on occasion, \n",
"model providers may output malformed tool calls (e.g., arguments that are not \n",
"valid JSON). When parsing fails in these cases, instances \n",
"of [InvalidToolCall](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.InvalidToolCall.html#langchain_core.messages.tool.InvalidToolCall) \n",
"are populated in the `.invalid_tool_calls` attribute. An `InvalidToolCall` can have \n",
"a name, string arguments, identifier, and error message.\n",
"\n",
"If desired, [output parsers](/docs/modules/model_io/output_parsers) can further \n",
"process the output. For example, we can convert back to the original Pydantic class:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Multiply(a=3, b=12), Add(a=11, b=49)]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers.openai_tools import PydanticToolsParser\n",
"\n",
"chain = llm_with_tools | PydanticToolsParser(tools=[Multiply, Add])\n",
"chain.invoke(query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"When tools are called in a streaming context, \n",
"[message chunks](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) \n",
"will be populated with [tool call chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCallChunk.html#langchain_core.messages.tool.ToolCallChunk) \n",
"objects in a list via the `.tool_call_chunks` attribute. A `ToolCallChunk` includes \n",
"optional string fields for the tool `name`, `args`, and `id`, and includes an optional \n",
"integer field `index` that can be used to join chunks together. Fields are optional \n",
"because portions of a tool call may be streamed across different chunks (e.g., a chunk \n",
"that includes a substring of the arguments may have null values for the tool name and id).\n",
"\n",
"Because message chunks inherit from their parent message class, an \n",
"[AIMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) \n",
"with tool call chunks will also include `.tool_calls` and `.invalid_tool_calls` fields. \n",
"These fields are parsed best-effort from the message's tool call chunks.\n",
"\n",
"Note that not all providers currently support streaming for tool calls:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[]\n",
"[{'name': 'Multiply', 'args': '', 'id': 'call_3aQwTP9CYlFxwOvQZPHDu6wL', 'index': 0}]\n",
"[{'name': None, 'args': '{\"a\"', 'id': None, 'index': 0}]\n",
"[{'name': None, 'args': ': 3, ', 'id': None, 'index': 0}]\n",
"[{'name': None, 'args': '\"b\": 1', 'id': None, 'index': 0}]\n",
"[{'name': None, 'args': '2}', 'id': None, 'index': 0}]\n",
"[{'name': 'Add', 'args': '', 'id': 'call_SQUoSsJz2p9Kx2x73GOgN1ja', 'index': 1}]\n",
"[{'name': None, 'args': '{\"a\"', 'id': None, 'index': 1}]\n",
"[{'name': None, 'args': ': 11,', 'id': None, 'index': 1}]\n",
"[{'name': None, 'args': ' \"b\": ', 'id': None, 'index': 1}]\n",
"[{'name': None, 'args': '49}', 'id': None, 'index': 1}]\n",
"[]\n"
]
}
],
"source": [
"async for chunk in llm_with_tools.astream(query):\n",
"    print(chunk.tool_call_chunks)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/modules/model_io/output_parsers/types/openai_tools/) support streaming.\n",
"\n",
"For example, below we accumulate tool call chunks:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[]\n",
"[{'name': 'Multiply', 'args': '', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\"', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, ', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 1', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\"', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11,', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": ', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": 49}', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": 49}', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n"
]
}
],
"source": [
"first = True\n",
"async for chunk in llm_with_tools.astream(query):\n",
"    if first:\n",
"        gathered = chunk\n",
"        first = False\n",
"    else:\n",
"        gathered = gathered + chunk\n",
"\n",
"    print(gathered.tool_call_chunks)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'str'>\n"
]
}
],
"source": [
"print(type(gathered.tool_call_chunks[0][\"args\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And below we accumulate tool calls to demonstrate partial parsing:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[]\n",
"[]\n",
"[{'name': 'Multiply', 'args': {}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 1}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n"
]
}
],
"source": [
"first = True\n",
"async for chunk in llm_with_tools.astream(query):\n",
"    if first:\n",
"        gathered = chunk\n",
"        first = False\n",
"    else:\n",
"        gathered = gathered + chunk\n",
"\n",
"    print(gathered.tool_calls)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'dict'>\n"
]
}
],
"source": [
"print(type(gathered.tool_calls[0][\"args\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Passing tool outputs to the model\n",
"\n",
"If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_svc2GLSxNFALbaCAbSjMI9J8', 'function': {'arguments': '{\"a\": 3, \"b\": 12}', 'name': 'Multiply'}, 'type': 'function'}, {'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh', 'function': {'arguments': '{\"a\": 11, \"b\": 49}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 50, 'prompt_tokens': 105, 'total_tokens': 155}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-a79ad1dd-95f1-4a46-b688-4c83f327a7b3-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_svc2GLSxNFALbaCAbSjMI9J8'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh'}]),\n",
" ToolMessage(content='36', tool_call_id='call_svc2GLSxNFALbaCAbSjMI9J8'),\n",
" ToolMessage(content='60', tool_call_id='call_r8jxte3zW6h3MEGV3zH2qzFh')]"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage, ToolMessage\n",
"\n",
"messages = [HumanMessage(query)]\n",
"ai_msg = llm_with_tools.invoke(messages)\n",
"messages.append(ai_msg)\n",
"for tool_call in ai_msg.tool_calls:\n",
"    selected_tool = {\"add\": add, \"multiply\": multiply}[tool_call[\"name\"].lower()]\n",
"    tool_output = selected_tool.invoke(tool_call[\"args\"])\n",
"    messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
"messages"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 171, 'total_tokens': 189}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'stop', 'logprobs': None}, id='run-20b52149-e00d-48ea-97cf-f8de7a255f8c-0')"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_tools.invoke(messages)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we pass back the same `id` in the `ToolMessage` as the one we received from the model in order to help the model match tool responses with tool calls.\n",
"\n",
"## Few-shot prompting\n",
"\n",
"For more complex tool use, it's very useful to add few-shot examples to the prompt. We can do this by adding `AIMessage`s with `ToolCall`s and corresponding `ToolMessage`s to our prompt.\n",
"\n",
"For example, even with some special instructions our model can get tripped up by order of operations:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'Multiply',\n",
"  'args': {'a': 119, 'b': 8},\n",
"  'id': 'call_T88XN6ECucTgbXXkyDeC2CQj'},\n",
" {'name': 'Add',\n",
"  'args': {'a': 952, 'b': -20},\n",
"  'id': 'call_licdlmGsRqzup8rhqJSb1yZ4'}]"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_tools.invoke(\n",
"    \"Whats 119 times 8 minus 20. Don't do any math yourself, only use tools for math. Respect order of operations\"\n",
").tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model shouldn't be trying to add anything yet, since it technically can't know the result of 119 * 8 yet.\n",
"\n",
"By adding a prompt with some examples we can correct this behavior:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'Multiply',\n",
"  'args': {'a': 119, 'b': 8},\n",
"  'id': 'call_9MvuwQqg7dlJupJcoTWiEsDo'}]"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import AIMessage\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"# Note that the example tool calls use the same argument names (\"a\", \"b\")\n",
"# as the Add and Multiply schemas defined above.\n",
"examples = [\n",
"    HumanMessage(\n",
"        \"What's the product of 317253 and 128472 plus four\", name=\"example_user\"\n",
"    ),\n",
"    AIMessage(\n",
"        \"\",\n",
"        name=\"example_assistant\",\n",
"        tool_calls=[\n",
"            {\"name\": \"Multiply\", \"args\": {\"a\": 317253, \"b\": 128472}, \"id\": \"1\"}\n",
"        ],\n",
"    ),\n",
"    ToolMessage(\"16505054784\", tool_call_id=\"1\"),\n",
"    AIMessage(\n",
"        \"\",\n",
"        name=\"example_assistant\",\n",
"        tool_calls=[{\"name\": \"Add\", \"args\": {\"a\": 16505054784, \"b\": 4}, \"id\": \"2\"}],\n",
"    ),\n",
"    ToolMessage(\"16505054788\", tool_call_id=\"2\"),\n",
"    AIMessage(\n",
"        \"The product of 317253 and 128472 plus four is 16505054788\",\n",
"        name=\"example_assistant\",\n",
"    ),\n",
"]\n",
"\n",
"system = \"\"\"You are bad at math but are an expert at using a calculator. \n",
"\n",
"Use past tool usage as an example of how to correctly use the tools.\"\"\"\n",
"few_shot_prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", system),\n",
"        *examples,\n",
"        (\"human\", \"{query}\"),\n",
"    ]\n",
")\n",
"\n",
"chain = {\"query\": RunnablePassthrough()} | few_shot_prompt | llm_with_tools\n",
"chain.invoke(\"Whats 119 times 8 minus 20\").tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And we get the correct output this time.\n",
"\n",
"Here's what the [LangSmith trace](https://smith.langchain.com/public/f70550a1-585f-4c9d-a643-13148ab1616f/r) looks like."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Binding model-specific formats (advanced)\n",
"\n",
"Providers adopt different conventions for formatting tool schemas. \n",
"For instance, OpenAI uses a format like this:\n",
"\n",
"- `type`: The type of the tool. At the time of writing, this is always `\"function\"`.\n",
"- `function`: An object containing tool parameters.\n",
"- `function.name`: The name of the schema to output.\n",
"- `function.description`: A high-level description of the schema to output.\n",
"- `function.parameters`: The nested details of the schema you want to extract, formatted as a [JSON schema](https://json-schema.org/) dict.\n",
"\n",
"We can bind this model-specific format directly to the model as well if preferred. Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_mn4ELw1NbuE0DFYhIeK0GrPe', 'function': {'arguments': '{\"a\":119,\"b\":8}', 'name': 'multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 62, 'total_tokens': 79}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-353e8a9a-7125-4f94-8c68-4f3da4c21120-0', tool_calls=[{'name': 'multiply', 'args': {'a': 119, 'b': 8}, 'id': 'call_mn4ELw1NbuE0DFYhIeK0GrPe'}])"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI()\n",
"\n",
"model_with_tools = model.bind(\n",
"    tools=[\n",
"        {\n",
"            \"type\": \"function\",\n",
"            \"function\": {\n",
"                \"name\": \"multiply\",\n",
"                \"description\": \"Multiply two integers together.\",\n",
"                \"parameters\": {\n",
"                    \"type\": \"object\",\n",
"                    \"properties\": {\n",
"                        \"a\": {\"type\": \"number\", \"description\": \"First integer\"},\n",
"                        \"b\": {\"type\": \"number\", \"description\": \"Second integer\"},\n",
"                    },\n",
"                    \"required\": [\"a\", \"b\"],\n",
"                },\n",
"            },\n",
"        }\n",
"    ]\n",
")\n",
"\n",
"model_with_tools.invoke(\"Whats 119 times 8?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is functionally equivalent to the `bind_tools()` calls above."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"Now you've learned how to bind tool schemas to a chat model and to call those tools. Next, check out some more specific uses of tool calling:\n",
"\n",
"- Building [tool-using chains and agents](/docs/use_cases/tool_use/)\n",
"- Getting [structured outputs](/docs/how_to/structured_output/) from models"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}