diff --git a/docs/docs/modules/agents/agent_types/index.mdx b/docs/docs/modules/agents/agent_types/index.mdx
index 550f9bd84ca..bcb543ccb03 100644
--- a/docs/docs/modules/agents/agent_types/index.mdx
+++ b/docs/docs/modules/agents/agent_types/index.mdx
@@ -30,12 +30,12 @@ Whether this agent requires the model to support any additional parameters. Some
Our commentary on when you should consider using this agent type.
-| Agent Type | Intended Model Type | Supports Chat History | Supports Multi-Input Tools | Supports Parallel Function Calling | Required Model Params | When to Use |
+| Agent Type | Intended Model Type | Supports Chat History | Supports Multi-Input Tools | Supports Parallel Function Calling | Required Model Params | When to Use | API |
-|--------------------------------------------|---------------------|-----------------------|----------------------------|-------------------------------------|----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------|
+|--------------------------------------------|---------------------|-----------------------|----------------------------|-------------------------------------|----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------|-----|
-| [OpenAI Tools](./openai_tools) | Chat | ✅ | ✅ | ✅ | `tools` | If you are using a recent OpenAI model (`1106` onwards) |
-| [OpenAI Functions](./openai_functions_agent)| Chat | ✅ | ✅ | | `functions` | If you are using an OpenAI model, or an open-source model that has been finetuned for function calling and exposes the same `functions` parameters as OpenAI |
-| [XML](./xml_agent) | LLM | ✅ | | | | If you are using Anthropic models, or other models good at XML |
-| [Structured Chat](./structured_chat) | Chat | ✅ | ✅ | | | If you need to support tools with multiple inputs |
-| [JSON Chat](./json_agent) | Chat | ✅ | | | | If you are using a model good at JSON |
-| [ReAct](./react) | LLM | ✅ | | | | If you are using a simple model |
-| [Self Ask With Search](./self_ask_with_search)| LLM | | | | | If you are using a simple model and only have one search tool |
+| [OpenAI Tools](./openai_tools) | Chat | ✅ | ✅ | ✅ | `tools` | If you are using a recent OpenAI model (`1106` onwards) | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_tools.base.create_openai_tools_agent.html) |
+| [OpenAI Functions](./openai_functions_agent)| Chat | ✅ | ✅ | | `functions` | If you are using an OpenAI model, or an open-source model that has been fine-tuned for function calling and exposes the same `functions` parameter as OpenAI | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_functions_agent.base.create_openai_functions_agent.html) |
+| [XML](./xml_agent) | LLM | ✅ | | | | If you are using Anthropic models, or other models good at XML | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.xml.base.create_xml_agent.html) |
+| [Structured Chat](./structured_chat) | Chat | ✅ | ✅ | | | If you need to support tools with multiple inputs | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.base.create_structured_chat_agent.html) |
+| [JSON Chat](./json_agent) | Chat | ✅ | | | | If you are using a model good at JSON | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.json_chat.base.create_json_chat_agent.html) |
+| [ReAct](./react) | LLM | ✅ | | | | If you are using a simple model | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.react.agent.create_react_agent.html) |
+| [Self Ask With Search](./self_ask_with_search)| LLM | | | | | If you are using a simple model and only have one search tool | [Ref](https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.create_self_ask_with_search_agent.html) |
\ No newline at end of file
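
For orientation, here is a minimal sketch of how the constructors linked in the new API column are typically wired together, using the OpenAI Tools row as an example. It assumes `langchain-openai` is installed and `OPENAI_API_KEY` is set; the `get_word_length` tool is made up purely for illustration.

```python
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)

# A recent (`1106` onwards) OpenAI model, as the "When to Use" column recommends
llm = ChatOpenAI(model="gpt-3.5-turbo-1106")
tools = [get_word_length]

agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"input": "How many letters are in 'LangChain'?"})
```
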
diff --git a/libs/langchain/langchain/agents/json_chat/base.py b/libs/langchain/langchain/agents/json_chat/base.py
index 954dd375f25..3e34568a972 100644
--- a/libs/langchain/langchain/agents/json_chat/base.py
+++ b/libs/langchain/langchain/agents/json_chat/base.py
@@ -16,8 +16,20 @@ def create_json_chat_agent(
) -> Runnable:
"""Create an agent that uses JSON to format its logic, build for Chat Models.
- Examples:
+ Args:
+ llm: LLM to use as the agent.
+ tools: Tools this agent has access to.
+ prompt: The prompt to use, must have input keys:
+ `tools`: contains descriptions and arguments for each tool.
+ `tool_names`: contains all tool names.
+ `agent_scratchpad`: contains previous agent actions and tool outputs.
+
+ Returns:
+ A Runnable sequence representing an agent. It takes as input all the same input
+ variables as the prompt passed in does. It returns as output either an
+ AgentAction or AgentFinish.
+
+ Example:
.. code-block:: python
@@ -46,18 +58,82 @@ def create_json_chat_agent(
}
)
- Args:
- llm: LLM to use as the agent.
- tools: Tools this agent has access to.
- prompt: The prompt to use, must have input keys of
- `tools`, `tool_names`, and `agent_scratchpad`.
+ Example of creating a prompt:
+
- Returns:
- A runnable sequence representing an agent. It takes as input all the same input
- variables as the prompt passed in does. It returns as output either an
- AgentAction or AgentFinish.
+ .. code-block:: python
+
- """
+ from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
+
+ system = '''Assistant is a large language model trained by OpenAI.
+
+ Assistant is designed to be able to assist with a wide range of tasks, from answering \
+ simple questions to providing in-depth explanations and discussions on a wide range of \
+ topics. As a language model, Assistant is able to generate human-like text based on \
+ the input it receives, allowing it to engage in natural-sounding conversations and \
+ provide responses that are coherent and relevant to the topic at hand.
+
+ Assistant is constantly learning and improving, and its capabilities are constantly \
+ evolving. It is able to process and understand large amounts of text, and can use this \
+ knowledge to provide accurate and informative responses to a wide range of questions. \
+ Additionally, Assistant is able to generate its own text based on the input it \
+ receives, allowing it to engage in discussions and provide explanations and \
+ descriptions on a wide range of topics.
+
+ Overall, Assistant is a powerful system that can help with a wide range of tasks \
+ and provide valuable insights and information on a wide range of topics. Whether \
+ you need help with a specific question or just want to have a conversation about \
+ a particular topic, Assistant is here to assist.'''
+
+ human = '''TOOLS
+ ------
+ Assistant can ask the user to use tools to look up information that may be helpful in \
+ answering the user's original question. The tools the human can use are:
+
+ {tools}
+
+ RESPONSE FORMAT INSTRUCTIONS
+ ----------------------------
+
+ When responding to me, please output a response in one of two formats:
+
+ **Option 1:**
+ Use this if you want the human to use a tool.
+ Markdown code snippet formatted in the following schema:
+
+ ```json
+ {{
+ "action": string, \ The action to take. Must be one of {tool_names}
+ "action_input": string \ The input to the action
+ }}
+ ```
+
+ **Option 2:**
+ Use this if you want to respond directly to the human. Markdown code snippet formatted \
+ in the following schema:
+
+ ```json
+ {{
+ "action": "Final Answer",
+ "action_input": string \ You should put what you want to return to use here
+ }}
+ ```
+
+ USER'S INPUT
+ --------------------
+ Here is the user's input (remember to respond with a markdown code snippet of a json \
+ blob with a single action, and NOTHING else):
+
+ {input}'''
+
+ prompt = ChatPromptTemplate.from_messages(
+ [
+ ("system", system),
+ MessagesPlaceholder("chat_history", optional=True),
+ ("human", human),
+ MessagesPlaceholder("agent_scratchpad"),
+ ]
+ )
+ """ # noqa: E501
missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(
prompt.input_variables
)
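
A minimal usage sketch for `create_json_chat_agent`: the prompt can be built exactly as in the docstring example or pulled from the LangChain hub. The hub name `hwchase17/react-chat-json` (assumed to mirror that template), the `ChatOpenAI` model, and the `get_word_length` tool are illustrative assumptions.

```python
from langchain import hub  # requires the `langchainhub` package
from langchain.agents import AgentExecutor, create_json_chat_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)


# Assumed to mirror the ChatPromptTemplate constructed in the docstring example
prompt = hub.pull("hwchase17/react-chat-json")

llm = ChatOpenAI(temperature=0)
tools = [get_word_length]

agent = create_json_chat_agent(llm, tools, prompt)
# handle_parsing_errors feeds JSON parsing errors back to the model instead of raising
agent_executor = AgentExecutor(agent=agent, tools=tools, handle_parsing_errors=True)
agent_executor.invoke({"input": "How many letters are in 'agent'?"})
```
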
diff --git a/libs/langchain/langchain/agents/openai_functions_agent/base.py b/libs/langchain/langchain/agents/openai_functions_agent/base.py
index f388f5cae26..1d486dc6856 100644
--- a/libs/langchain/langchain/agents/openai_functions_agent/base.py
+++ b/libs/langchain/langchain/agents/openai_functions_agent/base.py
@@ -235,7 +235,20 @@ def create_openai_functions_agent(
) -> Runnable:
"""Create an agent that uses OpenAI function calling.
- Examples:
+ Args:
+ llm: LLM to use as the agent. Should work with OpenAI function calling,
+ so it should either be an OpenAI model that supports it or a wrapper of
+ a different model that adds equivalent support.
+ tools: Tools this agent has access to.
+ prompt: The prompt to use, must have input key `agent_scratchpad`, which will
+ contain agent action and tool output messages.
+
+ Returns:
+ A Runnable sequence representing an agent. It takes as input all the same input
+ variables as the prompt passed in does. It returns as output either an
+ AgentAction or AgentFinish.
+
+ Example:
Creating an agent with no memory
@@ -266,18 +279,20 @@ def create_openai_functions_agent(
}
)
- Args:
- llm: LLM to use as the agent. Should work with OpenAI function calling,
- so either be an OpenAI model that supports that or a wrapper of
- a different model that adds in equivalent support.
- tools: Tools this agent has access to.
- prompt: The prompt to use, must have an input key of `agent_scratchpad`.
+ Example of creating a prompt:
+
- Returns:
- A runnable sequence representing an agent. It takes as input all the same input
- variables as the prompt passed in does. It returns as output either an
- AgentAction or AgentFinish.
+ .. code-block:: python
+
+ from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
+
+ prompt = ChatPromptTemplate.from_messages(
+ [
+ ("system", "You are a helpful assistant"),
+ MessagesPlaceholder("chat_history", optional=True),
+ ("human", "{input}"),
+ MessagesPlaceholder("agent_scratchpad"),
+ ]
+ )
"""
if "agent_scratchpad" not in prompt.input_variables:
raise ValueError(
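
A hedged end-to-end sketch of the pattern this docstring describes, assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set; the `get_word_length` tool is invented for illustration.

```python
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)

# The model must expose the OpenAI `functions` parameter; ChatOpenAI does.
llm = ChatOpenAI(model="gpt-3.5-turbo")
tools = [get_word_length]

agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"input": "How many letters are in 'function'?"})
```
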
diff --git a/libs/langchain/langchain/agents/openai_tools/base.py b/libs/langchain/langchain/agents/openai_tools/base.py
index c1206ea4efc..4395fb32bbd 100644
--- a/libs/langchain/langchain/agents/openai_tools/base.py
+++ b/libs/langchain/langchain/agents/openai_tools/base.py
@@ -17,7 +17,18 @@ def create_openai_tools_agent(
) -> Runnable:
"""Create an agent that uses OpenAI tools.
- Examples:
+ Args:
+ llm: LLM to use as the agent.
+ tools: Tools this agent has access to.
+ prompt: The prompt to use, must have input key `agent_scratchpad`, which will
+ contain agent action and tool output messages.
+
+ Returns:
+ A Runnable sequence representing an agent. It takes as input all the same input
+ variables as the prompt passed in does. It returns as output either an
+ AgentAction or AgentFinish.
+
+ Example:
.. code-block:: python
@@ -46,15 +57,20 @@ def create_openai_tools_agent(
}
)
- Args:
- llm: LLM to use as the agent.
- tools: Tools this agent has access to.
- prompt: The prompt to use, must have input keys of `agent_scratchpad`.
+ Example of creating a prompt:
+
- Returns:
- A runnable sequence representing an agent. It takes as input all the same input
- variables as the prompt passed in does. It returns as output either an
- AgentAction or AgentFinish.
+ .. code-block:: python
+
+ from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
+
+ prompt = ChatPromptTemplate.from_messages(
+ [
+ ("system", "You are a helpful assistant"),
+ MessagesPlaceholder("chat_history", optional=True),
+ ("human", "{input}"),
+ MessagesPlaceholder("agent_scratchpad"),
+ ]
+ )
"""
missing_vars = {"agent_scratchpad"}.difference(prompt.input_variables)
if missing_vars:
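
A minimal, self-contained sketch combining the pieces above, including a chat-history invocation through the optional `chat_history` placeholder; the model name, API key, and `get_word_length` tool are illustrative assumptions.

```python
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)

llm = ChatOpenAI(model="gpt-3.5-turbo-1106")
tools = [get_word_length]

agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

# Because this prompt targets chat models, history is passed as message objects
agent_executor.invoke(
    {
        "input": "what's my name?",
        "chat_history": [
            HumanMessage(content="hi! my name is bob"),
            AIMessage(content="Hello Bob! How can I assist you today?"),
        ],
    }
)
```
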
diff --git a/libs/langchain/langchain/agents/react/agent.py b/libs/langchain/langchain/agents/react/agent.py
index fe27a42b6a5..23277c36540 100644
--- a/libs/langchain/langchain/agents/react/agent.py
+++ b/libs/langchain/langchain/agents/react/agent.py
@@ -17,6 +17,20 @@ def create_react_agent(
) -> Runnable:
"""Create an agent that uses ReAct prompting.
+ Args:
+ llm: LLM to use as the agent.
+ tools: Tools this agent has access to.
+ prompt: The prompt to use, must have input keys:
+ `tools`: contains descriptions and arguments for each tool.
+ `tool_names`: contains all tool names.
+ `agent_scratchpad`: contains previous agent actions and tool outputs.
+
+
+ Returns:
+ A Runnable sequence representing an agent. It takes as input all the same input
+ variables as the prompt passed in does. It returns as output either an
+ AgentAction or AgentFinish.
+
Examples:
.. code-block:: python
@@ -45,18 +59,34 @@ def create_react_agent(
}
)
- Args:
- llm: LLM to use as the agent.
- tools: Tools this agent has access to.
- prompt: The prompt to use, must have input keys of
- `tools`, `tool_names`, and `agent_scratchpad`.
+ Example of creating a prompt:
+
- Returns:
- A runnable sequence representing an agent. It takes as input all the same input
- variables as the prompt passed in does. It returns as output either an
- AgentAction or AgentFinish.
+ .. code-block:: python
+
- """
+ from langchain_core.prompts import PromptTemplate
+
+ template = '''Answer the following questions as best you can. You have access to the following tools:
+
+ {tools}
+
+ Use the following format:
+
+ Question: the input question you must answer
+ Thought: you should always think about what to do
+ Action: the action to take, should be one of [{tool_names}]
+ Action Input: the input to the action
+ Observation: the result of the action
+ ... (this Thought/Action/Action Input/Observation can repeat N times)
+ Thought: I now know the final answer
+ Final Answer: the final answer to the original input question
+
+ Begin!
+
+ Question: {input}
+ Thought:{agent_scratchpad}'''
+
+ prompt = PromptTemplate.from_template(template)
+ """ # noqa: E501
missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(
prompt.input_variables
)
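
A hedged sketch of running the ReAct agent with a plain (non-chat) LLM. It assumes the `hwchase17/react` hub prompt matches the template written out above (requires the `langchainhub` package); the `population_lookup` tool and the `OpenAI` completion model are illustrative assumptions.

```python
from langchain import hub  # requires the `langchainhub` package
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import Tool
from langchain_openai import OpenAI


def lookup_population(city: str) -> str:
    """Pretend population lookup used as a stand-in for a real tool."""
    return "881,000" if "SF" in city else "unknown"


tools = [
    Tool(
        name="population_lookup",
        func=lookup_population,
        description="Look up the population of a city.",
    )
]

# Assumed to match the ReAct template written out in the docstring above
prompt = hub.pull("hwchase17/react")

llm = OpenAI(temperature=0)  # a completion-style model; ReAct needs no tool-calling API
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, handle_parsing_errors=True)
agent_executor.invoke({"input": "What is the population of SF?"})
```
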
diff --git a/libs/langchain/langchain/agents/self_ask_with_search/base.py b/libs/langchain/langchain/agents/self_ask_with_search/base.py
index 32c482eeff2..9df14cb4335 100644
--- a/libs/langchain/langchain/agents/self_ask_with_search/base.py
+++ b/libs/langchain/langchain/agents/self_ask_with_search/base.py
@@ -91,8 +91,19 @@ def create_self_ask_with_search_agent(
) -> Runnable:
"""Create an agent that uses self-ask with search prompting.
- Examples:
+ Args:
+ llm: LLM to use as the agent.
+ tools: List of tools. Should just be of length 1, with that tool having
+ name `Intermediate Answer`.
+ prompt: The prompt to use, must have input key `agent_scratchpad`, which will
+ contain agent actions and tool outputs.
+
+ Returns:
+ A Runnable sequence representing an agent. It takes as input all the same input
+ variables as the prompt passed in does. It returns as output either an
+ AgentAction or AgentFinish.
+
+ Example:
.. code-block:: python
@@ -111,18 +122,53 @@ def create_self_ask_with_search_agent(
agent_executor.invoke({"input": "hi"})
- Args:
- llm: LLM to use as the agent.
- tools: List of tools. Should just be of length 1, with that tool having
- name `Intermediate Answer`
- prompt: The prompt to use, must have input keys of `agent_scratchpad`.
+ Example of creating a prompt:
+
- Returns:
- A runnable sequence representing an agent. It takes as input all the same input
- variables as the prompt passed in does. It returns as output either an
- AgentAction or AgentFinish.
+ .. code-block:: python
+
- """
+ from langchain_core.prompts import PromptTemplate
+
+ template = '''Question: Who lived longer, Muhammad Ali or Alan Turing?
+ Are follow up questions needed here: Yes.
+ Follow up: How old was Muhammad Ali when he died?
+ Intermediate answer: Muhammad Ali was 74 years old when he died.
+ Follow up: How old was Alan Turing when he died?
+ Intermediate answer: Alan Turing was 41 years old when he died.
+ So the final answer is: Muhammad Ali
+
+ Question: When was the founder of craigslist born?
+ Are follow up questions needed here: Yes.
+ Follow up: Who was the founder of craigslist?
+ Intermediate answer: Craigslist was founded by Craig Newmark.
+ Follow up: When was Craig Newmark born?
+ Intermediate answer: Craig Newmark was born on December 6, 1952.
+ So the final answer is: December 6, 1952
+
+ Question: Who was the maternal grandfather of George Washington?
+ Are follow up questions needed here: Yes.
+ Follow up: Who was the mother of George Washington?
+ Intermediate answer: The mother of George Washington was Mary Ball Washington.
+ Follow up: Who was the father of Mary Ball Washington?
+ Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
+ So the final answer is: Joseph Ball
+
+ Question: Are both the directors of Jaws and Casino Royale from the same country?
+ Are follow up questions needed here: Yes.
+ Follow up: Who is the director of Jaws?
+ Intermediate answer: The director of Jaws is Steven Spielberg.
+ Follow up: Where is Steven Spielberg from?
+ Intermediate answer: The United States.
+ Follow up: Who is the director of Casino Royale?
+ Intermediate answer: The director of Casino Royale is Martin Campbell.
+ Follow up: Where is Martin Campbell from?
+ Intermediate answer: New Zealand.
+ So the final answer is: No
+
+ Question: {input}
+ Are followup questions needed here:{agent_scratchpad}'''
+
+ prompt = PromptTemplate.from_template(template)
+ """ # noqa: E501
missing_vars = {"agent_scratchpad"}.difference(prompt.input_variables)
if missing_vars:
raise ValueError(f"Prompt missing required variables: {missing_vars}")
diff --git a/libs/langchain/langchain/agents/structured_chat/base.py b/libs/langchain/langchain/agents/structured_chat/base.py
index 68bb381897d..53e11b33c01 100644
--- a/libs/langchain/langchain/agents/structured_chat/base.py
+++ b/libs/langchain/langchain/agents/structured_chat/base.py
@@ -155,8 +155,20 @@ def create_structured_chat_agent(
) -> Runnable:
"""Create an agent aimed at supporting tools with multiple inputs.
- Examples:
+ Args:
+ llm: LLM to use as the agent.
+ tools: Tools this agent has access to.
+ prompt: The prompt to use, must have input keys:
+ `tools`: contains descriptions and arguments for each tool.
+ `tool_names`: contains all tool names.
+ `agent_scratchpad`: contains previous agent actions and tool outputs.
+
+ Returns:
+ A Runnable sequence representing an agent. It takes as input all the same input
+ variables as the prompt passed in does. It returns as output either an
+ AgentAction or AgentFinish.
+
+ Example:
.. code-block:: python
@@ -185,18 +197,63 @@ def create_structured_chat_agent(
}
)
- Args:
- llm: LLM to use as the agent.
- tools: Tools this agent has access to.
- prompt: The prompt to use, must have input keys of
- `tools`, `tool_names`, and `agent_scratchpad`.
+ Example of creating a prompt:
+
- Returns:
- A runnable sequence representing an agent. It takes as input all the same input
- variables as the prompt passed in does. It returns as output either an
- AgentAction or AgentFinish.
+ .. code-block:: python
+
- """
+ from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
+
+ system = '''Respond to the human as helpfully and accurately as possible. You have access to the following tools:
+
+ {tools}
+
+ Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).
+
+ Valid "action" values: "Final Answer" or {tool_names}
+
+ Provide only ONE action per $JSON_BLOB, as shown:
+
+ ```
+ {{
+ "action": $TOOL_NAME,
+ "action_input": $INPUT
+ }}
+ ```
+
+ Follow this format:
+
+ Question: input question to answer
+ Thought: consider previous and subsequent steps
+ Action:
+ ```
+ $JSON_BLOB
+ ```
+ Observation: action result
+ ... (repeat Thought/Action/Observation N times)
+ Thought: I know what to respond
+ Action:
+ ```
+ {{
+ "action": "Final Answer",
+ "action_input": "Final response to human"
+ }}
+
+ Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation'''
+
+ human = '''{input}
+
+ {agent_scratchpad}
+
+ (reminder to respond in a JSON blob no matter what)'''
+
+ prompt = ChatPromptTemplate.from_messages(
+ [
+ ("system", system),
+ MessagesPlaceholder("chat_history", optional=True),
+ ("human", human),
+ ]
+ )
+ """ # noqa: E501
missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(
prompt.input_variables
)
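
A hedged sketch showing the main reason to pick this agent: a multi-input tool. The `@tool`-decorated function below takes two arguments; the hub prompt name and the `ChatOpenAI` model are assumptions (the prompt should be equivalent to the system/human templates above).

```python
from langchain import hub  # requires the `langchainhub` package
from langchain.agents import AgentExecutor, create_structured_chat_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers together."""
    return a * b


# Assumed to match the system/human templates written out in the docstring above
prompt = hub.pull("hwchase17/structured-chat-agent")

llm = ChatOpenAI(temperature=0)
tools = [multiply]

agent = create_structured_chat_agent(llm, tools, prompt)
# handle_parsing_errors feeds malformed JSON blobs back to the model instead of raising
agent_executor = AgentExecutor(agent=agent, tools=tools, handle_parsing_errors=True)
agent_executor.invoke({"input": "What is 6 times 7?"})
```
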
diff --git a/libs/langchain/langchain/agents/xml/base.py b/libs/langchain/langchain/agents/xml/base.py
index abdacf48803..bd678979f74 100644
--- a/libs/langchain/langchain/agents/xml/base.py
+++ b/libs/langchain/langchain/agents/xml/base.py
@@ -112,8 +112,19 @@ def create_xml_agent(
) -> Runnable:
"""Create an agent that uses XML to format its logic.
- Examples:
+ Args:
+ llm: LLM to use as the agent.
+ tools: Tools this agent has access to.
+ prompt: The prompt to use, must have input keys:
+ `tools`: contains descriptions for each tool.
+ `agent_scratchpad`: contains previous agent actions and tool outputs.
+
+ Returns:
+ A Runnable sequence representing an agent. It takes as input all the same input
+ variables as the prompt passed in does. It returns as output either an
+ AgentAction or AgentFinish.
+
+ Example:
.. code-block:: python
@@ -137,22 +148,41 @@ def create_xml_agent(
"input": "what's my name?",
# Notice that chat_history is a string
# since this prompt is aimed at LLMs, not chat models
- "chat_history": "Human: My name is Bob\nAI: Hello Bob!",
+ "chat_history": "Human: My name is Bob\\nAI: Hello Bob!",
}
)
- Args:
- llm: LLM to use as the agent.
- tools: Tools this agent has access to.
- prompt: The prompt to use, must have input keys of
- `tools` and `agent_scratchpad`.
+ Example of creating a prompt:
+
- Returns:
- A runnable sequence representing an agent. It takes as input all the same input
- variables as the prompt passed in does. It returns as output either an
- AgentAction or AgentFinish.
+ .. code-block:: python
+
- """
+ from langchain_core.prompts import PromptTemplate
+
+ template = '''You are a helpful assistant. Help the user answer any questions.
+
+ You have access to the following tools:
+
+ {tools}
+
+ In order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>
+ For example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:
+
+ <tool>search</tool><tool_input>weather in SF</tool_input>
+ <observation>64 degrees</observation>
+
+ When you are done, respond with a final answer between <final_answer></final_answer>. For example:
+
+ <final_answer>The weather in SF is 64 degrees</final_answer>
+
+ Begin!
+
+ Previous Conversation:
+ {chat_history}
+
+ Question: {input}
+ {agent_scratchpad}'''
+ prompt = PromptTemplate.from_template(template)
+ """ # noqa: E501
missing_vars = {"tools", "agent_scratchpad"}.difference(prompt.input_variables)
if missing_vars:
raise ValueError(f"Prompt missing required variables: {missing_vars}")