mirror of https://github.com/hwchase17/langchain.git (synced 2025-07-31 16:39:20 +00:00)
Adding a template for Solo Performance Prompting Agent (#12627)

**Description:** This template creates an agent that transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas.
**Tag maintainer:** @hwchase17
Co-authored-by: Sayandip Sarkar <sayandip.sarkar@skypointcloud.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
parent: ae63c186af
commit: 8dbbcf0b6c
templates/solo-performance-prompting-agent/LICENSE (new file, 21 lines)

MIT License

Copyright (c) 2023 LangChain, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
templates/solo-performance-prompting-agent/README.md (new file, 70 lines)

# solo-performance-prompting-agent

This template creates an agent that transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas.

A cognitive synergist is an intelligent agent that collaborates with multiple minds, combining their individual strengths and knowledge, to enhance problem-solving and overall performance in complex tasks. By dynamically identifying and simulating different personas based on task inputs, Solo Performance Prompting (SPP) unleashes the potential of cognitive synergy in LLMs.

This template uses the `DuckDuckGo` search API.

## Environment Setup

This template uses `OpenAI` by default.
Be sure that `OPENAI_API_KEY` is set in your environment.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package solo-performance-prompting-agent
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add solo-performance-prompting-agent
```

And add the following code to your `server.py` file:

```python
from solo_performance_prompting_agent import chain as solo_performance_prompting_agent_chain

add_routes(app, solo_performance_prompting_agent_chain, path="/solo-performance-prompting-agent")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor, and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/solo-performance-prompting-agent/playground](http://127.0.0.1:8000/solo-performance-prompting-agent/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/solo-performance-prompting-agent")
```
|
templates/solo-performance-prompting-agent/poetry.lock (generated, 1827 lines): file diff suppressed because it is too large.
templates/solo-performance-prompting-agent/pyproject.toml (new file, 25 lines)

```toml
[tool.poetry]
name = "solo-performance-prompting-agent"
version = "0.0.1"
description = ""
authors = []
readme = "README.md"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
langchain = ">=0.0.313, <0.1"
openai = "^0.28.1"
duckduckgo-search = "^3.9.3"

[tool.poetry.group.dev.dependencies]
langchain-cli = {extras = ["serve"], version = "^0.0.10"}
fastapi = "^0.104.0"
sse-starlette = "^1.6.5"

[tool.langserve]
export_module = "solo_performance_prompting_agent.agent"
export_attr = "agent_executor"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
@ -0,0 +1,38 @@

```python
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_xml
from langchain.llms import OpenAI
from langchain.pydantic_v1 import BaseModel
from langchain.tools import DuckDuckGoSearchRun
from langchain.tools.render import render_text_description

from solo_performance_prompting_agent.parser import parse_output
from solo_performance_prompting_agent.prompts import conversational_prompt

_model = OpenAI()
_tools = [DuckDuckGoSearchRun()]
# Fill the {tools} / {tool_names} slots of the system prompt once, up front.
_prompt = conversational_prompt.partial(
    tools=render_text_description(_tools),
    tool_names=", ".join([t.name for t in _tools]),
)
# Stop generation at the closing tags so each completion ends at a point
# parse_output knows how to split.
_llm_with_stop = _model.bind(stop=["</tool_input>", "</final_answer>"])

agent = (
    {
        "question": lambda x: x["question"],
        "agent_scratchpad": lambda x: format_xml(x["intermediate_steps"]),
    }
    | _prompt
    | _llm_with_stop
    | parse_output
)


class AgentInput(BaseModel):
    question: str


agent_executor = AgentExecutor(
    agent=agent, tools=_tools, verbose=True, handle_parsing_errors=True
).with_types(input_type=AgentInput)

# Expose only the final answer string, not the full output dict.
agent_executor = agent_executor | (lambda x: x["output"])
```
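The `agent_scratchpad` slot above is filled by `format_xml`, which renders prior (action, observation) pairs back into the same tag language the prompt teaches. A minimal re-implementation of that rendering (a sketch of the assumed behavior, not langchain's actual code) looks like:

```python
# Sketch of how intermediate (action, observation) pairs are rendered back
# into the prompt's XML tag language. This mirrors the assumed behavior of
# langchain's format_xml; it is not a copy of its implementation.

def format_xml_sketch(intermediate_steps):
    log = ""
    for (tool, tool_input), observation in intermediate_steps:
        log += (
            f"<tool>{tool}</tool><tool_input>{tool_input}</tool_input>"
            f"<observation>{observation}</observation>"
        )
    return log


scratchpad = format_xml_sketch(
    [(("search", "Who is the father of Franklin D. Roosevelt?"), "James Roosevelt I")]
)
print(scratchpad)
```

On each loop iteration the executor re-invokes the model with this growing scratchpad, so the model sees its own earlier tool calls and their observations in-context.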
@ -0,0 +1,18 @@

```python
from langchain.schema import AgentAction, AgentFinish


def parse_output(message: str):
    FINAL_ANSWER_ACTION = "<final_answer>"
    includes_answer = FINAL_ANSWER_ACTION in message
    if includes_answer:
        answer = message.split(FINAL_ANSWER_ACTION)[1].strip()
        # The stop sequence usually removes the closing tag, but strip it
        # defensively in case the model emitted it anyway.
        if "</final_answer>" in answer:
            answer = answer.split("</final_answer>")[0].strip()
        return AgentFinish(return_values={"output": answer}, log=message)
    elif "</tool>" in message:
        tool, tool_input = message.split("</tool>")
        _tool = tool.split("<tool>")[1]
        _tool_input = tool_input.split("<tool_input>")[1]
        if "</tool_input>" in _tool_input:
            _tool_input = _tool_input.split("</tool_input>")[0]
        return AgentAction(tool=_tool, tool_input=_tool_input, log=message)
```
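The tag-splitting convention above can be exercised standalone. The following sketch uses plain tuples as stand-ins for `AgentAction`/`AgentFinish` so it runs without langchain installed; the parsing logic itself matches the function above:

```python
# Standalone sketch of the XML tag-parsing convention used by parse_output.
# Plain tuples stand in for langchain's AgentAction / AgentFinish objects.

def parse_sketch(message: str):
    if "<final_answer>" in message:
        answer = message.split("<final_answer>")[1].strip()
        if "</final_answer>" in answer:
            answer = answer.split("</final_answer>")[0].strip()
        return ("finish", answer)
    elif "</tool>" in message:
        tool, tool_input = message.split("</tool>")
        return (
            "action",
            tool.split("<tool>")[1],
            tool_input.split("<tool_input>")[1].split("</tool_input>")[0],
        )


# Because generation stops at "</tool_input>", the closing tag is absent:
print(parse_sketch("<tool>search</tool><tool_input>weather in SF"))
# Final answers are truncated at "</final_answer>" the same way:
print(parse_sketch("<final_answer>64 degrees"))
```

Note that a message containing neither tag falls through and returns `None`; the executor's `handle_parsing_errors=True` is what makes that tolerable at runtime.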
@ -0,0 +1,54 @@

```python
from langchain.prompts import ChatPromptTemplate

template = """When faced with a task, begin by identifying the participants who will contribute to solving the task. Then, initiate a multi-round collaboration process until a final solution is reached. The participants will give critical comments and detailed suggestions whenever necessary.
The experts also have access to {tools} and can use them based on their expertise.
In order to use a tool, the participants can use <tool></tool> and <tool_input></tool_input> tags. They will then get back a response in the form <observation></observation>
For example, if they have a tool called 'search' that could run a Google search, in order to search for the weather in SF they would respond:

<tool>search</tool><tool_input>weather in SF</tool_input>
<observation>64 degrees</observation>

When they are done, they can respond with the answer to the conversation.
Once the participants have reached a final solution, they can respond with the final answer in the form <final_answer></final_answer>
Here are some examples:
---
Example 1: Use numbers 6 12 1 1 and basic arithmetic operations (+ - * /) to obtain 24. You need to use all numbers, and each number can only be used once.
Participants: AI Assistant (you); Math Expert
Start collaboration!
Math Expert: Let's analyze the task in detail. You need to make sure that you meet the requirement, that you need to use exactly the four numbers (6 12 1 1) to construct 24. To reach 24, you can think of the common divisors of 24 such as 4, 6, 8, 3 and try to construct these first. Also you need to think of potential additions that can reach 24, such as 12 + 12.
AI Assistant (you): Thanks for the hints! Here's one initial solution: (12 / (1 + 1)) * 6 = 24
Math Expert: Let's check the answer step by step. (1+1) = 2, (12 / 2) = 6, 6 * 6 = 36 which is not 24! The answer is not correct. Can you fix this by considering other combinations? Please do not make similar mistakes.
AI Assistant (you): Thanks for pointing out the mistake. Here is a revised solution considering 24 can also be reached by 3 * 8: (6 + 1 + 1) * (12 / 4) = 24.
Math Expert: Let's first check if the calculation is correct. (6 + 1 + 1) = 8, 12 / 4 = 3, 8 * 3 = 24. The calculation is correct, but you used 6 1 1 12 4 which is not the same as the input 6 12 1 1. Can you avoid using a number that is not part of the input?
AI Assistant (you): You are right, here is a revised solution considering 24 can be reached by 12 + 12 and without using any additional numbers: 6 * (1 - 1) + 12 = 24.
Math Expert: Let's check the answer again. 1 - 1 = 0, 6 * 0 = 0, 0 + 12 = 12. I believe you are very close, here is a hint: try to change the "1 - 1" to "1 + 1".
AI Assistant (you): Sure, here is the corrected answer: 6 * (1 + 1) + 12 = 24
Math Expert: Let's verify the solution. 1 + 1 = 2, 6 * 2 = 12, 12 + 12 = 24. You used 1 1 6 12 which is identical to the input 6 12 1 1. Everything looks good!
Finish collaboration!
<final_answer>6 * (1 + 1) + 12 = 24</final_answer>

---
Example 2: Who is the father of the longest serving US president?
Participants: AI Assistant (you); History Expert
Start collaboration!
History Expert: The longest serving US president is Franklin D. Roosevelt. He served for 12 years and 39 days. We need to run a search to find out who his father is.
AI Assistant (you): Thanks for the hints! Let me run a search: <tool>search</tool><tool_input>Who is the father of Franklin D. Roosevelt?</tool_input>
<observation>James Roosevelt I</observation>
AI Assistant (you): James Roosevelt I is the father of Franklin D. Roosevelt, the longest serving US President.
History Expert: Everything looks good!
Finish collaboration!
<final_answer>James Roosevelt I is the father of Franklin D. Roosevelt, the longest serving US President.</final_answer>
---
Now, identify the participants and collaboratively solve the following task step by step."""  # noqa: E501

conversational_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", template),
        ("user", "{question}"),
        ("ai", "{agent_scratchpad}"),
    ]
)
```
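The `{tools}` and `{tool_names}` slots in the system template are filled once at startup by `conversational_prompt.partial(...)` in `agent.py`. A plain-string sketch of that substitution (the tool description string here is hypothetical, standing in for `render_text_description`'s output) looks like:

```python
# Plain-string sketch of the partial substitution agent.py performs.
# The tool description below is a hypothetical stand-in for what
# render_text_description would produce for the DuckDuckGo tool.
template_fragment = (
    "The experts also have access to {tools} and may call them by name: "
    "{tool_names}."
)

tools_description = "duckduckgo_search: Useful for searching the web."
system_fragment = template_fragment.format(
    tools=tools_description,
    tool_names="duckduckgo_search",
)
print(system_fragment)
```

Doing the substitution once with `partial` means only `{question}` and `{agent_scratchpad}` remain to be filled per call, which is why the agent dict in `agent.py` supplies exactly those two keys.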