From 3adf710f1dd48de2311493582ff09667e6fd2d8d Mon Sep 17 00:00:00 2001
From: Harrison Chase
Date: Thu, 18 Jul 2024 08:52:12 -0700
Subject: [PATCH] docs: improve docs on tools (#24404)

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev
---
 docs/docs/concepts.mdx | 77 +++++++++++++++++++-----------------------
 1 file changed, 34 insertions(+), 43 deletions(-)

diff --git a/docs/docs/concepts.mdx b/docs/docs/concepts.mdx
index 32b25b89b2c..ee213c19796 100644
--- a/docs/docs/concepts.mdx
+++ b/docs/docs/concepts.mdx
@@ -236,7 +236,7 @@ This is where information like log-probs and token usage may be stored.
 These represent a decision from a language model to call a tool. They are included as part of an `AIMessage` output.
 They can be accessed from there with the `.tool_calls` property.
 
-This property returns a list of dictionaries. Each dictionary has the following keys:
+This property returns a list of `ToolCall`s. A `ToolCall` is a dictionary with the following keys:
 
 - `name`: The name of the tool that should be called.
 - `args`: The arguments to that tool.
@@ -513,67 +513,58 @@ A tool consists of:
 
 When a tool is bound to a model, the name, description and JSON schema are provided as context to the model.
 Given a list of tools and a set of instructions, a model can request to call one or more tools with specific inputs.
 
+Typical usage may look like the following:
+
+```python
+tools = [...] # Define a list of tools
+llm_with_tools = llm.bind_tools(tools)
+ai_msg = llm_with_tools.invoke("do xyz...") # AIMessage(tool_calls=[ToolCall(...), ...], ...)
+```
+
+The `AIMessage` returned from the model MAY have `tool_calls` associated with it.
+Read [this guide](/docs/concepts/#aimessage) for more information on what the response type may look like.
+
 Once the chosen tools are invoked, the results can be passed back to the model so that it can complete whatever task it's performing.
+There are generally two different ways to invoke the tool and pass back the response:
 
-#### Tool inputs
+#### Invoke with just the arguments
 
-A tool can take arbitrary arguments as input. At runtime, these arguments can be passed in either:
-
-1. As a dict of just the arguments,
-2. As a `ToolCall`, which contains the arguments along with other metadata like the tool call ID.
+When you invoke a tool with just the arguments, you will get back the raw tool output (usually a string).
+This generally looks like:
 
 ```python
-tool = ...
-llm_with_tools = llm.bind_tools([tool])
-ai_msg = llm_with_tools.invoke("do xyz...") # AIMessage(tool_calls=[ToolCall(...), ...], ...)
+# You will want to first check that the LLM returned tool calls
 tool_call = ai_msg.tool_calls[0] # ToolCall(args={...}, id=..., ...)
-
-# 1. pass in args directly
-tool.invoke(tool_call["args"])
-
-# 2. pass in the whole ToolCall
-tool.invoke(tool_call)
+tool_output = tool.invoke(tool_call["args"])
+tool_message = ToolMessage(content=tool_output, tool_call_id=tool_call["id"], name=tool_call["name"])
 ```
 
-A tool also has access to the `RunnableConfig` that's passed into whatever chain the tool is a part of. This allows you to write tool logic that can be parameterized by the chain config.
+Note that the `content` field will generally be passed back to the model.
+If you do not want the raw tool response to be passed to the model, but you still want to keep it around,
+you can transform the tool output and pass the original as an artifact (read more about [`ToolMessage.artifact` here](/docs/concepts/#toolmessage)).
 
 ```python
-config = {"configurable": {"tool_param_foo": ...}}
-tool.invoke(tool_call, config)
+... # Same code as above
+response_for_llm = transform(tool_output)
+tool_message = ToolMessage(content=response_for_llm, tool_call_id=tool_call["id"], name=tool_call["name"], artifact=tool_output)
 ```
 
-See the how-to guide for [passing in configs here](/docs/how_to/tool_configure/).
+#### Invoke with `ToolCall`
 
-#### Tool outputs
-
-The format of a tool's output depends on the format of the input. If a tool is called:
-
-1. With a dict of its arguments then it will produce an arbitrary output that we assume can be passed to a model as the `ToolMessage.content` field,
-2. A `ToolCall` then it will produce a `ToolMessage(content=..., ...)` where the tool output has already been assigned to the `ToolMessage.content` field.
+The other way to invoke a tool is to call it with the full `ToolCall` that was generated by the model.
+When you do this, the tool will return a `ToolMessage`.
+The benefit of this is that you don't have to write the logic yourself to transform the tool output into a `ToolMessage`.
+This generally looks like:
 
 ```python
-# 1. pass in args directly
-
-tool.invoke(tool_call["args"])
-# -> "tool result foobar..."
-
-# 2. pass in the whole ToolCall
-
-tool.invoke(tool_call)
+tool_call = ai_msg.tool_calls[0] # ToolCall(args={...}, id=..., ...)
+tool_message = tool.invoke(tool_call) # -> ToolMessage(content="tool result foobar...", tool_call_id=..., name="tool_name")
 ```
 
-A tool can also be defined to include an artifact when invoked with a `ToolCall`. An artifact is some element of the
-tool's execution which is useful to return but shouldn't be sent to the model. The artifact can *only* be returned
-when the tool input is a `ToolCall`:
-
-```python
-tool_with_artifact.invoke(tool_call)
-# -> ToolMessage(content="tool result foobar...", tool_call_id=..., name="tool_name", artifact=...).
-```
-
-Learn about [`ToolMessage.artifact` here](/docs/concepts/#toolmessage) and about [defining tools that return artifacts here](/docs/how_to/tool_artifacts/).
+If you are invoking the tool this way and want to include an [artifact](/docs/concepts/#toolmessage) for the `ToolMessage`, you will need to have the tool return two things: the content and the artifact.
+Read more about [defining tools that return artifacts here](/docs/how_to/tool_artifacts/).
 
 #### Best practices
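For reference, the two invocation paths this patch documents can be sketched without LangChain itself. Everything below is an illustrative stand-in, not the real `langchain_core` API: the `ToolMessage` dataclass is a simplified mock, `multiply` is a toy tool, and `invoke_with_tool_call` is a hypothetical helper playing the role that `tool.invoke(tool_call)` plays in the library.

```python
from dataclasses import dataclass
from typing import Any, Callable

# Simplified stand-in for langchain_core's ToolMessage (illustration only).
@dataclass
class ToolMessage:
    content: str
    tool_call_id: str
    name: str
    artifact: Any = None

def multiply(a: int, b: int) -> int:
    """A toy tool."""
    return a * b

# A tool call roughly as a model would emit it: a name, args, and an id.
tool_call = {"name": "multiply", "args": {"a": 6, "b": 7}, "id": "call_1"}

# 1. Invoke with just the arguments: you get the raw output back
#    and must build the ToolMessage yourself.
tool_output = multiply(**tool_call["args"])
msg1 = ToolMessage(content=str(tool_output), tool_call_id=tool_call["id"], name=tool_call["name"])

# 2. Invoke with the full tool call: a helper does the wrapping for you,
#    which is what the library's ToolCall-based invocation automates.
def invoke_with_tool_call(fn: Callable[..., Any], call: dict) -> ToolMessage:
    return ToolMessage(content=str(fn(**call["args"])), tool_call_id=call["id"], name=call["name"])

msg2 = invoke_with_tool_call(multiply, tool_call)

print(msg1.content)         # prints 42
print(msg2.tool_call_id)    # prints call_1
```

The second path is why the patch recommends invoking with the full tool call: the call id and tool name are threaded into the resulting message automatically instead of by hand.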