diff --git a/docs/docs/concepts/index.mdx b/docs/docs/concepts/index.mdx
index ad3100dfd25..a2678549369 100644
--- a/docs/docs/concepts/index.mdx
+++ b/docs/docs/concepts/index.mdx
@@ -529,26 +529,105 @@ for modifying **multiple** key-value pairs at once:
For key-value store implementations, see [this section](/docs/integrations/stores/).
-
-**Version A**
-
### Tools
-[What are tools?](/docs/concepts/tools)
+Tools are utilities designed to be called by a model: their inputs are designed to be generated by models, and their outputs are designed to be passed back to models.
+Tools are needed whenever you want a model to control parts of your code or call out to external APIs.
-OR
+
+A tool consists of:
-[What are tools?](/docs/concepts/tools): Tools are utilities designed to be called by a model: their inputs are designed to be generated by models, and their outputs are designed to be passed back to models. Tools are needed whenever you want a model to control parts of your code or call out to external APIs.
+1. The `name` of the tool.
+2. A `description` of what the tool does.
+3. A `JSON schema` defining the inputs to the tool.
+4. A `function` (and, optionally, an async variant of the function).
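As an illustration of these four parts, here is a minimal, framework-agnostic sketch; the dict layout and schema keys are assumptions for illustration, not any particular library's API:

```python
# Illustrative sketch only: the dict layout and schema keys here are
# assumptions, not a specific library's tool API.

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

multiply_tool = {
    "name": "multiply",
    "description": "Multiply two integers.",
    # JSON schema describing the inputs the model must generate
    "parameters": {
        "type": "object",
        "properties": {
            "a": {"type": "integer"},
            "b": {"type": "integer"},
        },
        "required": ["a", "b"],
    },
    "function": multiply,
}

# The model sees name/description/parameters and requests a call such as
# {"a": 3, "b": 4}; the runtime dispatches those arguments to the function.
result = multiply_tool["function"](**{"a": 3, "b": 4})
```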
+
+When a tool is bound to a model, the name, description, and JSON schema are provided as context to the model.
+Given a list of tools and a set of instructions, a model can request to call one or more tools with specific inputs.
+Typical usage may look like the following:
-**Version 2**
-### [Tools2](/docs/concepts/tools)
+
+```python
+tools = [...] # Define a list of tools
+llm_with_tools = llm.bind_tools(tools)
+ai_msg = llm_with_tools.invoke("do xyz...")
+# -> AIMessage(tool_calls=[ToolCall(...), ...], ...)
+```
+
+The `AIMessage` returned from the model may have `tool_calls` associated with it.
+Read [this guide](/docs/concepts/#aimessage) for more information on what the response type may look like.
+Once the chosen tools are invoked, the results can be passed back to the model so that it can complete whatever task
+it's performing.
+
+There are generally two different ways to invoke the tool and pass back the response:
+
+#### Invoke with just the arguments
-## Toolkits
+
+When you invoke a tool with just the arguments, you will get back the raw tool output (usually a string).
+This generally looks like:
+
+```python
+# Check first that the model actually returned tool calls
+tool_call = ai_msg.tool_calls[0]
+# ToolCall(args={...}, id=..., ...)
+tool_output = tool.invoke(tool_call["args"])
+tool_message = ToolMessage(
+ content=tool_output,
+ tool_call_id=tool_call["id"],
+ name=tool_call["name"]
+)
+```
+
+Note that the `content` field will generally be passed back to the model.
+If you do not want the raw tool response to be passed to the model but still want to keep it around,
+you can transform the tool output and pass the original output along as an artifact (read more about [`ToolMessage.artifact` here](/docs/concepts/#toolmessage)).
+
+```python
+... # Same code as above
+response_for_llm = transform(tool_output)  # transform is your own helper
+tool_message = ToolMessage(
+ content=response_for_llm,
+ tool_call_id=tool_call["id"],
+ name=tool_call["name"],
+ artifact=tool_output
+)
+```
+
+#### Invoke with `ToolCall`
+
+The other way to invoke a tool is to call it with the full `ToolCall` that was generated by the model.
+When you do this, the tool will return a `ToolMessage`.
+The benefit of this is that you don't have to write the logic yourself to transform the tool output into a `ToolMessage`.
+This generally looks like:
+
+```python
+tool_call = ai_msg.tool_calls[0]
+# -> ToolCall(args={...}, id=..., ...)
+tool_message = tool.invoke(tool_call)
+# -> ToolMessage(
+# content="tool result foobar...",
+# tool_call_id=...,
+# name="tool_name"
+# )
+```
+
+If you are invoking the tool this way and want to include an [artifact](/docs/concepts/#toolmessage) with the `ToolMessage`, you will need to have the tool return two things: the message content and the artifact.
+Read more about [defining tools that return artifacts here](/docs/how_to/tool_artifacts/).
+
+#### Best practices
+
+When designing tools to be used by a model, it is important to keep in mind that:
+
+- Chat models that support explicit [tool-calling APIs](/docs/concepts/#functiontool-calling) will be better at tool calling than models that have not been fine-tuned for it.
+- Models will perform better if the tools have well-chosen names, descriptions, and JSON schemas. This is another form of prompt engineering.
+- Simple, narrowly scoped tools are easier for models to use than complex tools.
+
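To make the naming and description points concrete, here is a hypothetical contrast between a vaguely and a well-specified tool schema (both names and schemas are invented for illustration):

```python
# Hypothetical contrast: the same capability exposed with a poor vs. a
# well-chosen name, description, and JSON schema. Only the metadata
# differs, but the second gives the model far more to work with.
vague_tool = {
    "name": "func1",
    "description": "does stuff",
    "parameters": {
        "type": "object",
        "properties": {"x": {"type": "string"}},
    },
}

clear_tool = {
    "name": "search_orders_by_email",
    "description": "Look up a customer's orders given their email address.",
    "parameters": {
        "type": "object",
        "properties": {
            "email": {
                "type": "string",
                "description": "Customer email address",
            },
        },
        "required": ["email"],
    },
}
```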
+#### Related
+
+For specifics on how to use tools, see the [tools how-to guides](/docs/how_to/#tools).
+
+To use a pre-built tool, see the [tool integration docs](/docs/integrations/tools/).
+
+### Toolkits
+
Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
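As a rough sketch of that pattern (the class and tool names here are hypothetical, not a real toolkit), a toolkit is typically a class that exposes one loading method returning its tools together:

```python
# Hypothetical sketch of the toolkit pattern: a class that groups
# related tools behind a single loading method.
class TextToolkit:  # invented name for illustration
    """Groups related text-processing tools behind one loader."""

    def get_tools(self):
        def count_words(text: str) -> int:
            """Count whitespace-separated words in text."""
            return len(text.split())

        def shout(text: str) -> str:
            """Upper-case text."""
            return text.upper()

        # The convenient loading method returns all tools at once.
        return [count_words, shout]

tools = TextToolkit().get_tools()
```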