Compare commits

...

2 Commits

Author      SHA1        Message                                             Date
jacoblee93  ff2203471b  Copy                                                2024-07-11 04:50:10 -07:00
jacoblee93  502845aac9  Adds guide on passing additional params to a tool   2024-07-11 04:48:57 -07:00
2 changed files with 216 additions and 0 deletions


@@ -196,6 +196,7 @@ LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to p
- [How to: handle errors when calling tools](/docs/how_to/tools_error)
- [How to: disable parallel tool calling](/docs/how_to/tool_choice)
- [How to: stream events from within a tool](/docs/how_to/tool_stream_events)
- [How to: pass additional parameters to tools](/docs/how_to/tools_additional_params)
### Multimodal


@@ -0,0 +1,215 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to pass additional parameters to tools\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"- [How to use a model to call tools](/docs/how_to/tool_calling)\n",
"- [Chaining](/docs/how_to/sequence/)\n",
"\n",
":::\n",
"\n",
"When using a chat model to call a tool in a chain or agent, it can be convenient for that tool to use information other than the arguments generated by the model.\n",
"\n",
"This guide covers how to use a technique called [currying](https://en.wikipedia.org/wiki/Currying) to predefine certain parameters passed to a tool while letting the model generate the rest. It's similar in spirit to [partially applying variables to a prompt](/docs/how_to/prompts_partial/).\n",
"\n",
"Let's mock out a function called `current_weather` as follows. Note that we aren't wrapping it with the `@tool` decorator:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"def current_weather(location: str, use_cached_results: bool):\n",
" \"\"\"Use this to get the current weather in a given location.\"\"\"\n",
" if use_cached_results:\n",
" # Use a cached response\n",
" return f\"The current weather in {location} is 50 degrees and rainy (cached).\"\n",
" else:\n",
" # Use a freshly retrieved response\n",
" return f\"The current weather in {location} is 0 degrees and sunny.\""
]
},
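{
"cell_type": "markdown",
"metadata": {},
"source": [
"Calling the mock directly exercises both code paths:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Both branches return placeholder values\n",
"print(current_weather(\"San Francisco\", use_cached_results=True))\n",
"print(current_weather(\"San Francisco\", use_cached_results=False))"
]
},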
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You probably don't want the model to decide your caching strategy by populating `use_cached_results`. Instead, you probably want to set that value yourself based on something like the recency of your cached data.\n",
"\n",
"To do this, define a `curry` function that accepts a standard Python function, binds arguments to it, and converts it into a tool:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import inspect\n",
"from functools import wraps\n",
"\n",
"from langchain_core.tools import StructuredTool\n",
"\n",
"\n",
"def curry(func, **curried_kwargs):\n",
"    \"\"\"Wrap a function with preset keyword arguments, hide them from its signature, and convert it into a tool.\"\"\"\n",
"\n",
" @wraps(func)\n",
" def wrapper(*args, **kwargs):\n",
" # Merge curried kwargs with the new kwargs\n",
" new_kwargs = {**curried_kwargs, **kwargs}\n",
" return func(*args, **new_kwargs)\n",
"\n",
" # Get the original function's signature\n",
" sig = inspect.signature(func)\n",
"\n",
" # Create a new signature without the curried parameters\n",
" new_params = [p for name, p in sig.parameters.items() if name not in curried_kwargs]\n",
" wrapper.__signature__ = sig.replace(parameters=new_params)\n",
"\n",
" return StructuredTool.from_function(wrapper)"
]
},
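{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, you can inspect the input schema of a curried tool via its `args` property, which reflects the schema LangChain infers from the modified signature. The curried parameter should no longer appear, so the model will never see it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The schema should only contain `location`, not `use_cached_results`\n",
"curry(current_weather, use_cached_results=True).args"
]
},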
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can wrap the `current_weather` function in `curry` to automatically set `use_cached_results` to `True`, ensuring that the model only generates a `location` parameter. Here's what that looks like when you bind it to a chat model:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"model\" />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"%pip install -qU langchain langchain_anthropic\n",
"\n",
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"if \"ANTHROPIC_API_KEY\" not in os.environ:\n",
" os.environ[\"ANTHROPIC_API_KEY\"] = getpass()\n",
"\n",
"model = ChatAnthropic(model=\"claude-3-5-sonnet-20240620\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'current_weather',\n",
" 'args': {'location': 'San Francisco'},\n",
" 'id': 'toolu_01KaLhdKqg8CvwCUS7Mmiom6'}]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"curried_tool = curry(current_weather, use_cached_results=True)\n",
"\n",
"llm_with_tools = model.bind_tools([curried_tool])\n",
"\n",
"res = llm_with_tools.invoke(\"What is the weather in SF?\")\n",
"\n",
"res.tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can see that the model does not generate a parameter for `use_cached_results`. If you then extend the chain to invoke the tool with the generated arguments, you'll always get a cached result rather than a fresh one:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The current weather in San Francisco is 50 degrees and rainy (cached).'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Simplified way to call the tool from model output\n",
"# Assumes the model always calls the weather tool\n",
"chain = llm_with_tools | (lambda x: x.tool_calls[0][\"args\"]) | curried_tool\n",
"\n",
"chain.invoke(\"What is the weather in SF?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You've now seen how to pass additional parameters to tools. Next, check out the following guides to learn more about tools:\n",
"\n",
"- [Build agents using LangGraph](https://langchain-ai.github.io/langgraph/tutorials/introduction/)\n",
"- [Force a tool call](/docs/how_to/tool_choice)\n",
"- [Pass tool results back to model](/docs/how_to/tool_results_pass_to_model)\n",
"\n",
"You can also check out the full list of how-to guides on tools:\n",
"\n",
"- Building [tool-using chains and agents](/docs/how_to#tools)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}