Mirror of https://github.com/hwchase17/langchain.git (synced 2025-08-07 20:15:40 +00:00)
docs[patch]: Simplify tool calling guide, improve tool calling conceptual guide (#24637)
There was lots of duplicated content from concepts, and missing pointers to the second half of the tool calling loop. Simpler + more focused + a more prominent link to the second half of the loop was what I was aiming for, but I'm down to be more conservative and just more prominently link the "passing tools back to the model" guide. I have also moved the tool calling conceptual guide out from under `Structured Output` (while leaving a small section for structured output-specific information) and added more content. The existing `#functiontool-calling` link will go to this new section.
This commit is contained in:
Parent: 4840db6892
Commit: ce067c19e9
@@ -165,7 +165,7 @@ Some important things to note:
 ChatModels also accept other parameters that are specific to that integration. To find all the parameters supported by a ChatModel, head to the API reference for that model.
 
 :::important
-**Tool Calling** Some chat models have been fine-tuned for tool calling and provide a dedicated API for tool calling.
+Some chat models have been fine-tuned for **tool calling** and provide a dedicated API for it.
 Generally, such models are better at tool calling than non-fine-tuned models, and are recommended for use cases that require tool calling.
 Please see the [tool calling section](/docs/concepts/#functiontool-calling) for more information.
 :::
@@ -255,7 +255,7 @@ This represents the result of a tool call. In addition to `role` and `content`,
 
 #### (Legacy) FunctionMessage
 
-This is a legacy message type, corresponding to OpenAI's legacy function-calling API. ToolMessage should be used instead to correspond to the updated tool-calling API.
+This is a legacy message type, corresponding to OpenAI's legacy function-calling API. `ToolMessage` should be used instead to correspond to the updated tool-calling API.
 
 This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result.
@@ -826,6 +826,61 @@ units (like words or subwords) that carry meaning, rather than individual characters
 to learn and understand the structure of the language, including grammar and context.
 Furthermore, using tokens can also improve efficiency, since the model processes fewer units of text compared to character-level processing.
 
+### Function/tool calling
+
+:::info
+We use the term tool calling interchangeably with function calling. Although
+function calling is sometimes meant to refer to invocations of a single function,
+we treat all models as though they can return multiple tool or function calls in
+each message.
+:::
+
+Tool calling allows a [chat model](/docs/concepts/#chat-models) to respond to a given prompt by generating output that
+matches a user-defined schema.
+
+While the name implies that the model is performing
+some action, this is actually not the case! The model only generates the arguments to a tool, and actually running the tool (or not) is up to the user.
+One common example where you **wouldn't** want to call a function with the generated arguments
+is if you want to [extract structured output matching some schema](/docs/concepts/#structured-output)
+from unstructured text. You would give the model an "extraction" tool that takes
+parameters matching the desired schema, then treat the generated output as your final
+result.
+
+![Diagram of a tool call in a chat model](/img/tool_call.png)
+
+Tool calling is not universal, but is supported by many popular LLM providers, including [Anthropic](/docs/integrations/chat/anthropic/),
+[Cohere](/docs/integrations/chat/cohere/), [Google](/docs/integrations/chat/google_vertex_ai_palm/),
+[Mistral](/docs/integrations/chat/mistralai/), [OpenAI](/docs/integrations/chat/openai/), and even for locally-running models via [Ollama](/docs/integrations/chat/ollama/).
+
+LangChain provides a standardized interface for tool calling that is consistent across different models.
+
+The standard interface consists of:
+
+* `ChatModel.bind_tools()`: a method for specifying which tools are available for a model to call. This method accepts [LangChain tools](/docs/concepts/#tools) as well as [Pydantic](https://pydantic.dev/) objects.
+* `AIMessage.tool_calls`: an attribute on the `AIMessage` returned from the model for accessing the tool calls requested by the model.
+
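For illustration, here is a minimal sketch of the standard interface described above (assuming an OpenAI chat model; any tool-calling-capable provider works the same way, and the `langchain_core.pydantic_v1` shim matches the code used elsewhere in this diff):

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI


class Multiply(BaseModel):
    """Multiply two integers together."""

    a: int = Field(description="First integer")
    b: int = Field(description="Second integer")


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
# bind_tools() accepts LangChain tools as well as Pydantic classes like this one.
llm_with_tools = llm.bind_tools([Multiply])

ai_msg = llm_with_tools.invoke("What is 3 * 12?")
# The generated calls live on the AIMessage, e.g.:
# [{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': '...', 'type': 'tool_call'}]
print(ai_msg.tool_calls)
```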
+#### Tool usage
+
+After the model calls tools, you can use the tool by invoking it, then passing the results back to the model.
+LangChain provides the [`Tool`](/docs/concepts/#tools) abstraction to help you handle this.
+
+The general flow is this:
+
+1. Generate tool calls with a chat model in response to a query.
+2. Invoke the appropriate tools using the generated tool calls' arguments.
+3. Format the result of the tool invocations as [`ToolMessages`](/docs/concepts/#toolmessage).
+4. Pass the entire list of messages back to the model so that it can generate a final answer (or call more tools).
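A compact sketch of steps 1-4 (assuming an OpenAI model and a single `add` tool; invoking a tool directly with a `ToolCall` requires `langchain-core >= 0.2.19`, as noted later in this diff):

```python
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b


llm_with_tools = ChatOpenAI(model="gpt-4o-mini", temperature=0).bind_tools([add])

# 1. Generate tool calls with a chat model in response to a query.
messages = [HumanMessage("What is 11 + 49?")]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

# 2./3. Invoke each requested tool; invoking a LangChain tool with a ToolCall
# returns a ready-made ToolMessage, which we append to the history.
for tool_call in ai_msg.tool_calls:
    messages.append(add.invoke(tool_call))

# 4. Pass the whole message list back so the model can produce a final answer.
final_answer = llm_with_tools.invoke(messages)
```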
+
+![Diagram of a complete tool calling flow](/img/tool_calling_flow.png)
+
+This is how tool calling [agents](/docs/concepts/#agents) perform tasks and answer queries.
+
+Check out some more focused guides below:
+
+- [How to use chat models to call tools](/docs/how_to/tool_calling/)
+- [How to pass tool outputs to chat models](/docs/how_to/tool_results_pass_to_model/)
+- [Building an agent with LangGraph](https://langchain-ai.github.io/langgraph/tutorials/introduction/)
+
 ### Structured output
 
 LLMs are capable of generating arbitrary text. This enables the model to respond appropriately to a wide
@@ -958,48 +1013,48 @@ chain.invoke({ "question": "What is the powerhouse of the cell?" })
 
 For a full list of model providers that support JSON mode, see [this table](/docs/integrations/chat/#advanced-features).
 
-#### Function/tool calling
-
-:::info
-We use the term tool calling interchangeably with function calling. Although
-function calling is sometimes meant to refer to invocations of a single function,
-we treat all models as though they can return multiple tool or function calls in
-each message
-:::
-
-Tool calling allows a model to respond to a given prompt by generating output that
-matches a user-defined schema. While the name implies that the model is performing
-some action, this is actually not the case! The model is coming up with the
-arguments to a tool, and actually running the tool (or not) is up to the user -
-for example, if you want to [extract output matching some schema](/docs/tutorials/extraction)
-from unstructured text, you could give the model an "extraction" tool that takes
-parameters matching the desired schema, then treat the generated output as your final
-result.
-
-For models that support it, tool calling can be very convenient. It removes the
-guesswork around how best to prompt schemas in favor of a built-in model feature. It can also
-more naturally support agentic flows, since you can just pass multiple tool schemas instead
-of fiddling with enums or unions.
-
-Many LLM providers, including [Anthropic](https://www.anthropic.com/),
-[Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai),
-[Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others,
-support variants of a tool calling feature. These features typically allow requests
-to the LLM to include available tools and their schemas, and for responses to include
-calls to these tools. For instance, given a search engine tool, an LLM might handle a
-query by first issuing a call to the search engine. The system calling the LLM can
-receive the tool call, execute it, and return the output to the LLM to inform its
-response. LangChain includes a suite of [built-in tools](/docs/integrations/tools/)
-and supports several methods for defining your own [custom tools](/docs/how_to/custom_tools).
-
-LangChain provides a standardized interface for tool calling that is consistent across different models.
-
-The standard interface consists of:
-
-* `ChatModel.bind_tools()`: a method for specifying which tools are available for a model to call. This method accepts [LangChain tools](/docs/concepts/#tools) here.
-* `AIMessage.tool_calls`: an attribute on the `AIMessage` returned from the model for accessing the tool calls requested by the model.
-
-The following how-to guides are good practical resources for using function/tool calling:
+#### Tool calling {#structured-output-tool-calling}
+
+For models that support it, [tool calling](/docs/concepts/#functiontool-calling) can be very convenient for structured output. It removes the
+guesswork around how best to prompt schemas in favor of a built-in model feature.
+
+It works by first binding the desired schema either directly or via a [LangChain tool](/docs/concepts/#tools) to a
+[chat model](/docs/concepts/#chat-models) using the `.bind_tools()` method. The model will then generate an `AIMessage` containing
+a `tool_calls` field containing `args` that match the desired shape.
+
+There are several acceptable formats you can use to bind tools to a model in LangChain. Here's one example:
+
+```python
+from langchain_core.pydantic_v1 import BaseModel, Field
+from langchain_openai import ChatOpenAI
+
+
+class ResponseFormatter(BaseModel):
+    """Always use this tool to structure your response to the user."""
+
+    answer: str = Field(description="The answer to the user's question")
+    followup_question: str = Field(description="A followup question the user could ask")
+
+
+model = ChatOpenAI(
+    model="gpt-4o",
+    temperature=0,
+)
+
+model_with_tools = model.bind_tools([ResponseFormatter])
+
+ai_msg = model_with_tools.invoke("What is the powerhouse of the cell?")
+
+ai_msg.tool_calls[0]["args"]
+```
+
+```
+{'answer': "The powerhouse of the cell is the mitochondrion. It generates most of the cell's supply of adenosine triphosphate (ATP), which is used as a source of chemical energy.",
+ 'followup_question': 'How do mitochondria generate ATP?'}
+```
+
+Tool calling is a generally consistent way to get a model to generate structured output, and is the default technique
+used for the [`.with_structured_output()`](/docs/concepts/#with_structured_output) method when a model supports it.
+
+The following how-to guides are good practical resources for using function/tool calling for structured output:
 
 - [How to return structured data from an LLM](/docs/how_to/structured_output/)
 - [How to use a model to call tools](/docs/how_to/tool_calling)
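A quick sketch of the `.with_structured_output()` shorthand mentioned in the added text, reusing the `ResponseFormatter` schema and `model` from the example above (the method wraps the same tool-calling mechanism and parses the result for you):

```python
structured_model = model.with_structured_output(ResponseFormatter)

# Returns a parsed ResponseFormatter instance instead of a raw AIMessage.
result = structured_model.invoke("What is the powerhouse of the cell?")
print(result.answer)
print(result.followup_question)
```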
@@ -22,57 +22,36 @@
 ":::info Prerequisites\n",
 "\n",
 "This guide assumes familiarity with the following concepts:\n",
+"\n",
 "- [Chat models](/docs/concepts/#chat-models)\n",
 "- [LangChain Tools](/docs/concepts/#tools)\n",
+"- [Tool calling](/docs/concepts/#functiontool-calling)\n",
 "- [Output parsers](/docs/concepts/#output-parsers)\n",
 "\n",
 ":::\n",
 "\n",
-":::info Tool calling vs function calling\n",
+"[Tool calling](/docs/concepts/#functiontool-calling) allows a chat model to respond to a given prompt by \"calling a tool\".\n",
 "\n",
-"We use the term tool calling interchangeably with function calling. Although\n",
-"function calling is sometimes meant to refer to invocations of a single function,\n",
-"we treat all models as though they can return multiple tool or function calls in \n",
-"each message.\n",
+"Remember, while the name \"tool calling\" implies that the model is directly performing some action, this is actually not the case! The model only generates the arguments to a tool, and actually running the tool (or not) is up to the user.\n",
+"\n",
+"Tool calling is a general technique that generates structured output from a model, and you can use it even when you don't intend to invoke any tools. An example use-case of that is [extraction from unstructured text](/docs/tutorials/extraction/).\n",
+"\n",
+"If you want to see how to use the model-generated tool call to actually run a tool function [check out this guide](/docs/how_to/tool_results_pass_to_model/).\n",
+"\n",
+":::note Supported models\n",
+"\n",
+"Tool calling is not universal, but is supported by many popular LLM providers, including [Anthropic](/docs/integrations/chat/anthropic/), \n",
+"[Cohere](/docs/integrations/chat/cohere/), [Google](/docs/integrations/chat/google_vertex_ai_palm/), \n",
+"[Mistral](/docs/integrations/chat/mistralai/), [OpenAI](/docs/integrations/chat/openai/), and even for locally-running models via [Ollama](/docs/integrations/chat/ollama/).\n",
+"\n",
+"You can find a [list of all models that support tool calling here](/docs/integrations/chat/).\n",
 "\n",
 ":::\n",
 "\n",
-":::info Supported models\n",
-"\n",
-"You can find a [list of all models that support tool calling](/docs/integrations/chat/).\n",
-"\n",
-":::\n",
-"\n",
-"Tool calling allows a chat model to respond to a given prompt by \"calling a tool\".\n",
-"While the name implies that the model is performing \n",
-"some action, this is actually not the case! The model generates the \n",
-"arguments to a tool, and actually running the tool (or not) is up to the user.\n",
-"For example, if you want to [extract output matching some schema](/docs/how_to/structured_output/) \n",
-"from unstructured text, you could give the model an \"extraction\" tool that takes \n",
-"parameters matching the desired schema, then treat the generated output as your final \n",
-"result.\n",
-"\n",
-":::note\n",
-"\n",
-"If you only need formatted values, try the [.with_structured_output()](/docs/how_to/structured_output/#the-with_structured_output-method) chat model method as a simpler entrypoint.\n",
-"\n",
-":::\n",
-"\n",
-"However, tool calling goes beyond [structured output](/docs/how_to/structured_output/)\n",
-"since you can pass responses from called tools back to the model to create longer interactions.\n",
-"For instance, given a search engine tool, an LLM might handle a \n",
-"query by first issuing a call to the search engine with arguments. The system calling the LLM can \n",
-"receive the tool call, execute it, and return the output to the LLM to inform its \n",
-"response. LangChain includes a suite of [built-in tools](/docs/integrations/tools/) \n",
-"and supports several methods for defining your own [custom tools](/docs/how_to/custom_tools). \n",
-"\n",
-"Tool calling is not universal, but many popular LLM providers, including [Anthropic](https://www.anthropic.com/), \n",
-"[Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai), \n",
-"[Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others, \n",
-"support variants of a tool calling feature.\n",
-"\n",
-"LangChain implements standard interfaces for defining tools, passing them to LLMs, \n",
-"and representing tool calls. This guide and the other How-to pages in the Tool section will show you how to use tools with LangChain."
+"LangChain implements standard interfaces for defining tools, passing them to LLMs, and representing tool calls.\n",
+"This guide will cover how to bind tools to an LLM, then invoke the LLM to generate these arguments."
 ]
 },
 {
@@ -91,7 +70,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 2,
+"execution_count": 1,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -112,14 +91,14 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"LangChain also implements a `@tool` decorator that allows for further control of the tool schema, such as tool names and argument descriptions. See the how-to guide [here](/docs/how_to/custom_tools/#creating-tools-from-functions) for detail.\n",
+"LangChain also implements a `@tool` decorator that allows for further control of the tool schema, such as tool names and argument descriptions. See the how-to guide [here](/docs/how_to/custom_tools/#creating-tools-from-functions) for details.\n",
 "\n",
-"We can also define the schema using [Pydantic](https://docs.pydantic.dev):"
+"We can also define the schemas without the accompanying functions using [Pydantic](https://docs.pydantic.dev):"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 1,
+"execution_count": 2,
 "metadata": {},
 "outputs": [],
 "source": [
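For readers skimming the diff: schema-only definitions of the kind this cell describes look roughly like the following sketch (using the `Add`/`Multiply` names referenced later in this diff; the field descriptions are illustrative):

```python
from langchain_core.pydantic_v1 import BaseModel, Field


class Add(BaseModel):
    """Add two integers together."""

    a: int = Field(description="First integer")
    b: int = Field(description="Second integer")


class Multiply(BaseModel):
    """Multiply two integers together."""

    a: int = Field(description="First integer")
    b: int = Field(description="Second integer")
```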
@@ -149,7 +128,8 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We can bind them to chat models as follows:\n",
+"To actually bind those schemas to a chat model, we'll use the `.bind_tools()` method. This handles converting\n",
+"the `Add` and `Multiply` schemas to the proper format for the model. The tool schema will then be passed in each time the model is invoked.\n",
 "\n",
 "```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
@@ -158,11 +138,7 @@
 " customVarName=\"llm\"\n",
 " fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
 "/>\n",
-"```\n",
-"\n",
-"We'll use the `.bind_tools()` method to handle converting\n",
-"`Multiply` to the proper format for the model, then and bind it (i.e.,\n",
-"passing it in each time the model is invoked)."
+"```"
 ]
 },
 {
@@ -183,7 +159,7 @@
 "\n",
 "os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
 "\n",
-"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
+"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
 ]
 },
 {
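Putting the pieces of this notebook together so far (a sketch; `Add` and `Multiply` as defined above, `llm` as in this cell):

```python
# Advertise both schemas to the model. No Python functions are attached here,
# so the model can only propose calls -- nothing is executed automatically.
llm_with_tools = llm.bind_tools([Add, Multiply])

ai_msg = llm_with_tools.invoke("What is 3 * 12?")
# The model answers with a tool call rather than plain text, as the output
# reproduced in the next hunk shows.
```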
@@ -194,7 +170,7 @@
 {
 "data": {
 "text/plain": [
-"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_g4RuAijtDcSeM96jXyCuiLSN', 'function': {'arguments': '{\"a\":3,\"b\":12}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 95, 'total_tokens': 113}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-5157d15a-7e0e-4ab1-af48-3d98010cd152-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_g4RuAijtDcSeM96jXyCuiLSN'}], usage_metadata={'input_tokens': 95, 'output_tokens': 18, 'total_tokens': 113})"
+"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_wLTBasMppAwpdiA5CD92l9x7', 'function': {'arguments': '{\"a\":3,\"b\":12}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 89, 'total_tokens': 107}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_0f03d4f0ee', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-d3f36cca-f225-416f-ac16-0217046f0b38-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_wLTBasMppAwpdiA5CD92l9x7', 'type': 'tool_call'}], usage_metadata={'input_tokens': 89, 'output_tokens': 18, 'total_tokens': 107})"
 ]
 },
 "execution_count": 4,
@@ -214,7 +190,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"As we can see, even though the prompt didn't really suggest a tool call, our LLM made one since it was forced to do so. You can look at the docs for [bind_tools()](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.BaseChatOpenAI.html#langchain_openai.chat_models.base.BaseChatOpenAI.bind_tools) to learn about all the ways to customize how your LLM selects tools."
+"As we can see, our LLM generated arguments to a tool! You can look at the docs for [bind_tools()](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.BaseChatOpenAI.html#langchain_openai.chat_models.base.BaseChatOpenAI.bind_tools) to learn about all the ways to customize how your LLM selects tools, as well as [this guide on how to force the LLM to call a tool](/docs/how_to/tool_choice/) rather than letting it decide."
 ]
 },
 {
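On the tool-choice guide linked above: with `ChatOpenAI`, forcing a particular tool can be sketched as follows (support and exact options vary by provider, so treat this as an assumption-laden example):

```python
# Force the model to call Multiply no matter what the prompt says.
llm_forced = llm.bind_tools([Add, Multiply], tool_choice="Multiply")
llm_forced.invoke("Say hello!").tool_calls  # still yields a Multiply tool call
```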
@@ -246,10 +222,12 @@
 "text/plain": [
 "[{'name': 'Multiply',\n",
 " 'args': {'a': 3, 'b': 12},\n",
-" 'id': 'call_TnadLbWJu9HwDULRb51RNSMw'},\n",
+" 'id': 'call_uqJsNrDJ8ZZnFa1BHHYAllEv',\n",
+" 'type': 'tool_call'},\n",
 " {'name': 'Add',\n",
 " 'args': {'a': 11, 'b': 49},\n",
-" 'id': 'call_Q9vt1up05sOQScXvUYWzSpCg'}]"
+" 'id': 'call_ud1uHAaYsdpWuxugwoJ63BDs',\n",
+" 'type': 'tool_call'}]"
 ]
 },
 "execution_count": 5,
@@ -308,17 +286,17 @@
 "source": [
 "## Next steps\n",
 "\n",
-"Now you've learned how to bind tool schemas to a chat model and to call those tools. Next, you can learn more about how to use tools:\n",
+"Now you've learned how to bind tool schemas to a chat model and have the model call the tool.\n",
+"\n",
+"Next, check out this guide on actually using the tool by invoking the function and passing the results back to the model:\n",
 "\n",
-"- Few shot promting [with tools](/docs/how_to/tools_few_shot/)\n",
-"- Stream [tool calls](/docs/how_to/tool_streaming/)\n",
-"- Bind [model-specific tools](/docs/how_to/tools_model_specific/)\n",
-"- Pass [runtime values to tools](/docs/how_to/tool_runtime)\n",
 "- Pass [tool results back to model](/docs/how_to/tool_results_pass_to_model)\n",
 "\n",
 "You can also check out some more specific uses of tool calling:\n",
 "\n",
-"- Building [tool-using chains and agents](/docs/how_to#tools)\n",
+"- Few shot prompting [with tools](/docs/how_to/tools_few_shot/)\n",
+"- Stream [tool calls](/docs/how_to/tool_streaming/)\n",
+"- Pass [runtime values to tools](/docs/how_to/tool_runtime)\n",
 "- Getting [structured outputs](/docs/how_to/structured_output/) from models"
 ]
 }
@@ -339,7 +317,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.4"
+"version": "3.10.5"
 }
 },
 "nbformat": 4,
@@ -9,12 +9,34 @@
 ":::info Prerequisites\n",
 "This guide assumes familiarity with the following concepts:\n",
 "\n",
-"- [Tools](/docs/concepts/#tools)\n",
+"- [LangChain Tools](/docs/concepts/#tools)\n",
 "- [Function/tool calling](/docs/concepts/#functiontool-calling)\n",
+"- [Using chat models to call tools](/docs/how_to/tool_calling)\n",
+"- [Defining custom tools](/docs/how_to/custom_tools/)\n",
 "\n",
 ":::\n",
 "\n",
-"If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s and `ToolCall`s. First, let's define our tools and our model."
+"Some models are capable of [**tool calling**](/docs/concepts/#functiontool-calling) - generating arguments that conform to a specific user-provided schema. This guide will demonstrate how to use those tool calls to actually call a function and properly pass the results back to the model.\n",
+"\n",
+"![Diagram of a tool call invocation](/img/tool_invocation.png)\n",
+"\n",
+"![Diagram of a tool call result](/img/tool_results.png)\n",
+"\n",
+"First, let's define our tools and our model:"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"```{=mdx}\n",
+"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
+"\n",
+"<ChatModelTabs\n",
+" customVarName=\"llm\"\n",
+" fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
+"/>\n",
+"```"
 ]
 },
 {
@@ -22,6 +44,25 @@
 "execution_count": 1,
 "metadata": {},
 "outputs": [],
+"source": [
+"# | output: false\n",
+"# | echo: false\n",
+"\n",
+"import os\n",
+"from getpass import getpass\n",
+"\n",
+"from langchain_openai import ChatOpenAI\n",
+"\n",
+"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
+"\n",
+"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 2,
+"metadata": {},
+"outputs": [],
 "source": [
 "from langchain_core.tools import tool\n",
 "\n",
@@ -38,23 +79,8 @@
 " return a * b\n",
 "\n",
 "\n",
-"tools = [add, multiply]"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 2,
-"metadata": {},
-"outputs": [],
-"source": [
-"import os\n",
-"from getpass import getpass\n",
+"tools = [add, multiply]\n",
 "\n",
-"from langchain_openai import ChatOpenAI\n",
-"\n",
-"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
-"\n",
-"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
 "llm_with_tools = llm.bind_tools(tools)"
 ]
 },
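For context, the `add`/`multiply` tool definitions this hunk abbreviates look roughly like the following sketch (consistent with the `return a * b` line shown above):

```python
from langchain_core.tools import tool


@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b


tools = [add, multiply]
llm_with_tools = llm.bind_tools(tools)  # `llm` from the setup cell above
```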
@@ -62,15 +88,88 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The nice thing about Tools is that if we invoke them with a ToolCall, we'll automatically get back a ToolMessage that can be fed back to the model: \n",
+"Now, let's get the model to call a tool. We'll add it to a list of messages that we'll treat as conversation history:"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 6,
+"metadata": {},
+"outputs": [
+{
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_GPGPE943GORirhIAYnWv00rK', 'type': 'tool_call'}, {'name': 'add', 'args': {'a': 11, 'b': 49}, 'id': 'call_dm8o64ZrY3WFZHAvCh1bEJ6i', 'type': 'tool_call'}]\n"
+]
+}
+],
+"source": [
+"from langchain_core.messages import HumanMessage\n",
 "\n",
-":::info Requires ``langchain-core >= 0.2.19``\n",
+"query = \"What is 3 * 12? Also, what is 11 + 49?\"\n",
 "\n",
-"This functionality was added in ``langchain-core == 0.2.19``. Please make sure your package is up to date.\n",
+"messages = [HumanMessage(query)]\n",
+"\n",
+"ai_msg = llm_with_tools.invoke(messages)\n",
+"\n",
+"print(ai_msg.tool_calls)\n",
+"\n",
+"messages.append(ai_msg)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Next let's invoke the tool functions using the args the model populated!\n",
+"\n",
+"Conveniently, if we invoke a LangChain `Tool` with a `ToolCall`, we'll automatically get back a `ToolMessage` that can be fed back to the model:\n",
+"\n",
+":::caution Compatibility\n",
+"\n",
+"This functionality was added in `langchain-core == 0.2.19`. Please make sure your package is up to date.\n",
+"\n",
+"If you are on earlier versions of `langchain-core`, you will need to extract the `args` field from the tool and construct a `ToolMessage` manually.\n",
 "\n",
 ":::"
 ]
 },
+{
+"cell_type": "code",
+"execution_count": 4,
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/plain": [
+"[HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'),\n",
+" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_loT2pliJwJe3p7nkgXYF48A1', 'function': {'arguments': '{\"a\": 3, \"b\": 12}', 'name': 'multiply'}, 'type': 'function'}, {'id': 'call_bG9tYZCXOeYDZf3W46TceoV4', 'function': {'arguments': '{\"a\": 11, \"b\": 49}', 'name': 'add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 50, 'prompt_tokens': 87, 'total_tokens': 137}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_661538dc1f', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-e3db3c46-bf9e-478e-abc1-dc9a264f4afe-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_loT2pliJwJe3p7nkgXYF48A1', 'type': 'tool_call'}, {'name': 'add', 'args': {'a': 11, 'b': 49}, 'id': 'call_bG9tYZCXOeYDZf3W46TceoV4', 'type': 'tool_call'}], usage_metadata={'input_tokens': 87, 'output_tokens': 50, 'total_tokens': 137}),\n",
+" ToolMessage(content='36', name='multiply', tool_call_id='call_loT2pliJwJe3p7nkgXYF48A1'),\n",
+" ToolMessage(content='60', name='add', tool_call_id='call_bG9tYZCXOeYDZf3W46TceoV4')]"
+]
+},
+"execution_count": 4,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
+"source": [
+"for tool_call in ai_msg.tool_calls:\n",
+"    selected_tool = {\"add\": add, \"multiply\": multiply}[tool_call[\"name\"].lower()]\n",
+"    tool_msg = selected_tool.invoke(tool_call)\n",
+"    messages.append(tool_msg)\n",
+"\n",
+"messages"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"And finally, we'll invoke the model with the tool results. The model will use this information to generate a final answer to our original query:"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 5,
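On the compatibility caution above: for older `langchain-core` versions, the manual equivalent might be sketched as follows (names as in this notebook; the `ToolMessage` constructor is the real API, the rest is illustrative):

```python
from langchain_core.messages import ToolMessage

for tool_call in ai_msg.tool_calls:
    selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
    # Invoke with the raw args and build the ToolMessage by hand, echoing the
    # model-generated id so the model can match the result to its call.
    result = selected_tool.invoke(tool_call["args"])
    messages.append(ToolMessage(content=str(result), tool_call_id=tool_call["id"]))
```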
@@ -79,10 +178,7 @@
 {
 "data": {
 "text/plain": [
-"[HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'),\n",
-" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Smg3NHJNxrKfAmd4f9GkaYn3', 'function': {'arguments': '{\"a\": 3, \"b\": 12}', 'name': 'multiply'}, 'type': 'function'}, {'id': 'call_55K1C0DmH6U5qh810gW34xZ0', 'function': {'arguments': '{\"a\": 11, \"b\": 49}', 'name': 'add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 49, 'prompt_tokens': 88, 'total_tokens': 137}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-56657feb-96dd-456c-ab8e-1857eab2ade0-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_Smg3NHJNxrKfAmd4f9GkaYn3', 'type': 'tool_call'}, {'name': 'add', 'args': {'a': 11, 'b': 49}, 'id': 'call_55K1C0DmH6U5qh810gW34xZ0', 'type': 'tool_call'}], usage_metadata={'input_tokens': 88, 'output_tokens': 49, 'total_tokens': 137}),\n",
-" ToolMessage(content='36', name='multiply', tool_call_id='call_Smg3NHJNxrKfAmd4f9GkaYn3'),\n",
-" ToolMessage(content='60', name='add', tool_call_id='call_55K1C0DmH6U5qh810gW34xZ0')]"
+"AIMessage(content='The result of \\\\(3 \\\\times 12\\\\) is 36, and the result of \\\\(11 + 49\\\\) is 60.', response_metadata={'token_usage': {'completion_tokens': 31, 'prompt_tokens': 153, 'total_tokens': 184}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_661538dc1f', 'finish_reason': 'stop', 'logprobs': None}, id='run-87d1ef0a-1223-4bb3-9310-7b591789323d-0', usage_metadata={'input_tokens': 153, 'output_tokens': 31, 'total_tokens': 184})"
 ]
 },
 "execution_count": 5,
@@ -90,37 +186,6 @@
 "output_type": "execute_result"
 }
 ],
-"source": [
-"from langchain_core.messages import HumanMessage, ToolMessage\n",
-"\n",
-"query = \"What is 3 * 12? Also, what is 11 + 49?\"\n",
-"\n",
-"messages = [HumanMessage(query)]\n",
-"ai_msg = llm_with_tools.invoke(messages)\n",
-"messages.append(ai_msg)\n",
-"for tool_call in ai_msg.tool_calls:\n",
-"    selected_tool = {\"add\": add, \"multiply\": multiply}[tool_call[\"name\"].lower()]\n",
-"    tool_msg = selected_tool.invoke(tool_call)\n",
-"    messages.append(tool_msg)\n",
-"messages"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 6,
-"metadata": {},
-"outputs": [
-{
-"data": {
-"text/plain": [
-"AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 153, 'total_tokens': 171}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-ba5032f0-f773-406d-a408-8314e66511d0-0', usage_metadata={'input_tokens': 153, 'output_tokens': 18, 'total_tokens': 171})"
-]
-},
-"execution_count": 6,
-"metadata": {},
-"output_type": "execute_result"
-}
-],
 "source": [
 "llm_with_tools.invoke(messages)"
 ]
@@ -129,15 +194,25 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Note that we pass back the same `id` in the `ToolMessage` as the what we receive from the model in order to help the model match tool responses with tool calls."
+"Note that each `ToolMessage` must include a `tool_call_id` that matches an `id` in the original tool calls that the model generates. This helps the model match tool responses with tool calls.\n",
+"\n",
+"Tool calling agents, like those in [LangGraph](https://langchain-ai.github.io/langgraph/tutorials/introduction/), use this basic flow to answer queries and solve tasks.\n",
+"\n",
+"## Related\n",
+"\n",
+"- [LangGraph quickstart](https://langchain-ai.github.io/langgraph/tutorials/introduction/)\n",
+"- Few shot prompting [with tools](/docs/how_to/tools_few_shot/)\n",
+"- Stream [tool calls](/docs/how_to/tool_streaming/)\n",
+"- Pass [runtime values to tools](/docs/how_to/tool_runtime)\n",
+"- Getting [structured outputs](/docs/how_to/structured_output/) from models"
 ]
 }
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "poetry-venv-311",
+"display_name": "Python 3",
 "language": "python",
-"name": "poetry-venv-311"
+"name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
@@ -149,7 +224,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.9"
+"version": "3.10.5"
 }
 },
 "nbformat": 4,
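On the LangGraph pointer in the hunk above: the prebuilt agent runs this same generate-call-respond loop automatically. A sketch (assuming `langgraph` is installed, with `llm`, `add`, and `multiply` as defined in the notebook):

```python
from langgraph.prebuilt import create_react_agent

# Wires the model and tools into the loop shown in this guide: generate tool
# calls, execute them, feed results back, and repeat until a final answer.
agent = create_react_agent(llm, [add, multiply])
result = agent.invoke(
    {"messages": [("human", "What is 3 * 12? Also, what is 11 + 49?")]}
)
print(result["messages"][-1].content)
```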
New binary files added (not shown in the diff):

- docs/static/img/tool_call.png (73 KiB)
- docs/static/img/tool_calling_flow.png (85 KiB)
- docs/static/img/tool_invocation.png (77 KiB)
- docs/static/img/tool_results.png (69 KiB)