Compare commits


23 Commits

Author SHA1 Message Date
vowelparrot
babb5e5c7d Add Chat Agent Regression Tests 2023-04-26 20:01:41 -07:00
Tim Asp
539142f8d5 Add way to get serpapi results async (#3604)
Sometimes it's nice to get the raw results from serpapi, and we're
missing the async version of this function.
2023-04-26 16:37:03 -07:00
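A minimal sketch of the new usage, assuming the async variant is exposed as `aresults`, mirroring the existing synchronous `results` (the exact name is not verified here):

```python
import asyncio

from langchain.utilities import SerpAPIWrapper


async def main() -> None:
    search = SerpAPIWrapper()  # expects SERPAPI_API_KEY in the environment
    # Raw SerpAPI JSON rather than a distilled answer string.
    raw = await search.aresults("LangChain")
    print(list(raw)[:5])  # top-level keys of the response dict


asyncio.run(main())
```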
Zander Chase
443a893ffd Align names of search tools (#3620)
Tools for Bing, DDG, and Google weren't consistent even though the
underlying implementations were.
All three services now share the same tools and implementations, making it
easy to switch and experiment when building chains.
2023-04-26 16:21:34 -07:00
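A short sketch of the aligned naming, grounded in the rename shown in the diff further down (`DuckDuckGoSearchTool` becomes `DuckDuckGoSearchRun`):

```python
from langchain.tools import DuckDuckGoSearchRun

# The DuckDuckGo tool now follows the same naming scheme as the Bing and
# Google equivalents, so swapping search backends is a one-line change.
search = DuckDuckGoSearchRun()
print(search.run("LangChain"))
```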
Maciej Bryński
aa345a4bb7 Add get_text_separator parameter to BSHTMLLoader (#3551)
By default, get_text doesn't separate the content of different HTML tags.
Adding an option to specify a separator helps with document splitting.
2023-04-26 16:10:16 -07:00
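A minimal sketch of the new parameter ("example.html" is a placeholder path):

```python
from langchain.document_loaders import BSHTMLLoader

# Without a separator, get_text() runs the contents of adjacent tags
# together; a newline keeps them apart for cleaner document splitting.
loader = BSHTMLLoader("example.html", get_text_separator="\n")
docs = loader.load()
print(docs[0].page_content[:200])
```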
Bhupendra Aole
568c4f0d81 Closely spaced dataframe column names are treated as one by the LLM (#3611)
We send a sample dataframe to the LLM with df.head().
If the column names are close together, the LLM treats two column names as
one, returning incorrect results.


![image](https://user-images.githubusercontent.com/4707543/234678692-97851fa0-9e12-44db-92ec-9ad9f3545ae2.png)

In the above case the LLM uses **Org Week** as the column name instead
of **Week** when asked about a specific week.

Returning head() as markdown separates the column names, and the LLM thus
uses the correct one.


![image](https://user-images.githubusercontent.com/4707543/234678945-c6d7b218-143e-4e70-9e17-77dc64841a49.png)
2023-04-26 16:05:53 -07:00
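A small illustration of the difference (toy dataframe; `to_markdown` requires the `tabulate` package):

```python
import pandas as pd

df = pd.DataFrame({"Org": ["A", "B"], "Week": [1, 2], "Sales": [100, 200]})

# Plain head() relies on whitespace alignment, so adjacent headers such as
# "Org" and "Week" can be misread by the LLM as a single "Org Week" column.
print(df.head())

# Markdown output delimits every column explicitly.
print(df.head().to_markdown())
```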
James O'Dwyer
860fa59cd3 add metal to ecosystem (#3613) 2023-04-26 15:57:48 -07:00
Zander Chase
ee670c448e Persistent Bash Shell (#3580)
Clean up linting and make the chain more idiomatic by using an output parser

---------

Co-authored-by: FergusFettes <fergusfettes@gmail.com>
2023-04-26 15:20:28 -07:00
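A minimal sketch of the persistent session, using the `BashProcess` API shown in the notebook diff below:

```python
from langchain.utilities.bash import BashProcess

# With persistent=True, state such as the working directory carries over
# between run() calls instead of resetting in a fresh subprocess each time.
bash = BashProcess(persistent=True)
bash.run("cd /tmp")
print(bash.run("pwd"))  # prints /tmp because the session persisted
```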
Ilyes Bouchada
c5451f4298 Update docker-compose.yaml (#3582)
The following error gets returned when trying to launch
langchain-server:

ERROR: The Compose file
'/opt/homebrew/lib/python3.11/site-packages/langchain/docker-compose.yaml'
is invalid because:
services.langchain-db.expose is invalid: should be of the format
'PORT[/PROTOCOL]'

Solution:
Change line 28 from `- 5432:5432` to `- 5432`
2023-04-26 15:11:59 -07:00
Kátia Nakamura
e1a4fc55e6 Add docs for Fly.io deployment (#3584)
A minimal example of how to deploy LangChain to Fly.io using Flask.
2023-04-26 14:41:08 -07:00
Chirag Bhatia
08478deec5 Fixed typo for HuggingFaceHub (#3612)
The current text has a typo. This PR contains the corrected spelling of
HuggingFaceHub.
2023-04-26 14:33:31 -07:00
Charlie Holtz
246710def9 Fix Replicate llm response to handle iterator / multiple outputs (#3614)
One of our users noticed a bug when calling streaming models. This is
because those models return an iterator. So, I've updated the Replicate
`_call` code to join together the output. The other advantage of this
fix is that if you requested multiple outputs you would get them all –
previously I was just returning `output[0]`.

I also adjusted the demo docs to use dolly, because we're featuring that
model right now and it's always hot, so people won't have to wait for
the model to boot up.

The error that this fixes:
```
> llm = Replicate(model="replicate/flan-t5-xl:eec2f71c986dfa3b7a5d842d22e1130550f015720966bec48beaae059b19ef4c")
> llm("hello")
> Traceback (most recent call last):
  File "/Users/charlieholtz/workspace/dev/python/main.py", line 15, in <module>
    print(llm(prompt))
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/base.py", line 246, in __call__
    return self.generate([prompt], stop=stop).generations[0][0].text
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/base.py", line 140, in generate
    raise e
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/base.py", line 137, in generate
    output = self._generate(prompts, stop=stop)
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/base.py", line 324, in _generate
    text = self._call(prompt, stop=stop)
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/replicate.py", line 108, in _call
    return outputs[0]
TypeError: 'generator' object is not subscriptable
```
2023-04-26 14:26:33 -07:00
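A hedged sketch of the idea behind the fix (using the `replicate` client directly, not the actual library code): streaming models yield a generator of string chunks, so the pieces are joined instead of indexed.

```python
import replicate

output = replicate.run(
    "replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5",
    input={"prompt": "hello"},
)
# Joining works whether `output` is a generator of chunks or a list of
# multiple outputs, and avoids the TypeError above.
text = "".join(str(chunk) for chunk in output)
print(text)
```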
Harrison Chase
7536912125 bump ver 150 (#3599) 2023-04-26 08:29:09 -07:00
Chirag Bhatia
f174aa7712 Fix broken Cerebrium link in documentation (#3554)
The current hyperlink has a typo. This PR contains the corrected
hyperlink to the Cerebrium docs.
2023-04-26 08:11:58 -07:00
Harrison Chase
d880775e5d Harrison/plugnplai (#3573)
Co-authored-by: Eduardo Reis <edu.pontes@gmail.com>
2023-04-26 08:09:34 -07:00
Zander Chase
85dae78548 Confluence beautifulsoup (#3576)
Co-authored-by: Theau Heral <theau.heral@ln.email.gs.com>
2023-04-25 23:40:06 -07:00
Mike Wang
64501329ab [simple] updated annotation in load_tools.py (#3544)
- added a few missing annotation for complex local variables.
- auto formatted.
- I also went through all other files in agent directory. no seeing any
other missing piece. (there are several prompt strings not annotated,
but I think it’s trivial. Also adding annotation will make it harder to
read in terms of indents.) Anyway, I think this is the last PR in
agent/annotation.
2023-04-25 23:30:49 -07:00
Zander Chase
d6d697a41b Sentence Transformers Aliasing (#3541)
The sentence transformers integration was a duplicate of the HuggingFace one.

This is a breaking change (`model_name` vs. `model`) for anyone using
`SentenceTransformerEmbeddings(model="some/nondefault/model")`, but
since it landed only this week it seems better to do this now rather
than adding a wrapper.
2023-04-25 23:29:20 -07:00
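The renamed keyword in use, matching the notebook diff further down:

```python
from langchain.embeddings import HuggingFaceEmbeddings, SentenceTransformerEmbeddings

# The alias now takes model_name (not model), in line with HuggingFaceEmbeddings.
embeddings = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# ...which is equivalent to:
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
```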
Eric Peter
603ea75bcd Fix docs error for google drive loader (#3574) 2023-04-25 22:52:59 -07:00
CG80499
cfd34e268e Add ReAct eval chain (#3161)
- Adds a GPT-4 eval chain for arbitrary agents using any set of tools
- Adds a notebook

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-04-25 21:22:25 -07:00
mbchang
4bc209c6f7 example: multi player dnd (#3560)
This notebook shows how the DialogueAgent and DialogueSimulator class
make it easy to extend the [Two-Player Dungeons & Dragons
example](https://python.langchain.com/en/latest/use_cases/agent_simulations/two_player_dnd.html)
to multiple players.

The main difference between simulating two players and multiple players
is in revising the schedule for when each agent speaks.

To this end, we augment DialogueSimulator to take in a custom function
that determines the schedule of which agent speaks. In the example
below, each character speaks in round-robin fashion, with the
storyteller interleaved between each player.
2023-04-25 21:20:39 -07:00
James Brotchie
5fdaa95e06 Strip surrounding quotes from requests tool URLs. (#3563)
Often an LLM will output a requests tool input argument surrounded by
single quotes. This triggers an exception in the requests library. Here,
we add a simple URL-cleaning function that strips any leading and trailing
single and double quotes before passing the URL to the underlying
requests library.

Co-authored-by: James Brotchie <brotchie@google.com>
2023-04-25 21:20:26 -07:00
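A sketch of the described helper (the actual function name in the PR may differ):

```python
def clean_url(url: str) -> str:
    """Strip surrounding single/double quotes and stray whitespace from a URL."""
    return url.strip().strip("\"'")


assert clean_url("'https://example.com'") == "https://example.com"
assert clean_url('"https://example.com"') == "https://example.com"
```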
Harrison Chase
f4829025fe add feast nb (#3565) 2023-04-25 17:46:06 -07:00
Harrison Chase
47da5f0e58 Harrison/streamlit handler (#3564)
Co-authored-by: kurupapi <37198601+kurupapi@users.noreply.github.com>
2023-04-25 17:26:30 -07:00
59 changed files with 2853 additions and 935 deletions

BIN docs/_static/MetalDash.png (new binary file, 3.5 MiB)

View File

@@ -33,6 +33,10 @@ It implements a Question Answering app and contains instructions for deploying t
A minimal example of how to run LangChain on Vercel using Flask.
## [Fly.io](https://github.com/fly-apps/hello-fly-langchain)
A minimal example of how to deploy LangChain to [Fly.io](https://fly.io/) using Flask.
## [Digitalocean App Platform](https://github.com/homanp/digitalocean-langchain)
A minimal example of how to deploy LangChain to DigitalOcean App Platform.

docs/ecosystem/metal.md (new file, 26 lines)
View File

@@ -0,0 +1,26 @@
# Metal
This page covers how to use [Metal](https://getmetal.io) within LangChain.
## What is Metal?
Metal is a managed retrieval & memory platform built for production. Easily index your data into `Metal` and run semantic search and retrieval on it.
![Metal](../_static/MetalDash.png)
## Quick start
Get started by [creating a Metal account](https://app.getmetal.io/signup).
Then, you can easily take advantage of the `MetalRetriever` class to start retrieving your data for semantic search, prompting context, etc. This class takes a `Metal` instance and a dictionary of parameters to pass to the Metal API.
```python
from langchain.retrievers import MetalRetriever
from metal_sdk.metal import Metal
metal = Metal("API_KEY", "CLIENT_ID", "INDEX_ID")
retriever = MetalRetriever(metal, params={"limit": 2})
docs = retriever.get_relevant_documents("search term")
```

View File

@@ -9,7 +9,7 @@ This page covers how to run models on Replicate within LangChain.
Find a model on the [Replicate explore page](https://replicate.com/explore), and then paste in the model name and version in this format: `owner-name/model-name:version`
For example, for this [flan-t5 model](https://replicate.com/daanelson/flan-t5), click on the API tab. The model name/version would be: `daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8`
For example, for this [dolly model](https://replicate.com/replicate/dolly-v2-12b), click on the API tab. The model name/version would be: `"replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5"`
Only the `model` param is required, but any other model parameters can also be passed in with the format `input={model_param: value, ...}`
@@ -24,7 +24,7 @@ Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6
From here, we can initialize our model:
```python
llm = Replicate(model="daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8")
llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
```
And run it:
@@ -40,8 +40,7 @@ llm(prompt)
We can call any Replicate model (not just LLMs) using this syntax. For example, we can call [Stable Diffusion](https://replicate.com/stability-ai/stable-diffusion):
```python
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
input={'image_dimensions'='512x512'}
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions':'512x512'})
image_output = text2image("A cat riding a motorcycle by Picasso")
```

View File

@@ -39,11 +39,27 @@
"name": "stdout",
"output_type": "stream",
"text": [
"apify.ipynb\n",
"arxiv.ipynb\n",
"bash.ipynb\n",
"bing_search.ipynb\n",
"chatgpt_plugins.ipynb\n",
"ddg.ipynb\n",
"google_places.ipynb\n",
"google_search.ipynb\n",
"google_serper.ipynb\n",
"gradio_tools.ipynb\n",
"human_tools.ipynb\n",
"ifttt.ipynb\n",
"openweathermap.ipynb\n",
"python.ipynb\n",
"requests.ipynb\n",
"search_tools.ipynb\n",
"searx_search.ipynb\n",
"serpapi.ipynb\n",
"wikipedia.ipynb\n",
"wolfram_alpha.ipynb\n",
"zapier.ipynb\n",
"\n"
]
}
@@ -54,9 +70,94 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"id": "e7896f8e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"apify.ipynb\n",
"arxiv.ipynb\n",
"bash.ipynb\n",
"bing_search.ipynb\n",
"chatgpt_plugins.ipynb\n",
"ddg.ipynb\n",
"google_places.ipynb\n",
"google_search.ipynb\n",
"google_serper.ipynb\n",
"gradio_tools.ipynb\n",
"human_tools.ipynb\n",
"ifttt.ipynb\n",
"openweathermap.ipynb\n",
"python.ipynb\n",
"requests.ipynb\n",
"search_tools.ipynb\n",
"searx_search.ipynb\n",
"serpapi.ipynb\n",
"wikipedia.ipynb\n",
"wolfram_alpha.ipynb\n",
"zapier.ipynb\n",
"\n"
]
}
],
"source": [
"bash.run(\"cd ..\")\n",
"# The commands are executed in a new subprocess each time, meaning that\n",
"# this call will return the same results as the last.\n",
"print(bash.run(\"ls\"))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "851fee9f",
"metadata": {},
"source": [
"## Terminal Persistance\n",
"\n",
"By default, the bash command will be executed in a new subprocess each time. To retain a persistent bash session, we can use the `persistent=True` arg."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "4a93ea2c",
"metadata": {},
"outputs": [],
"source": [
"bash = BashProcess(persistent=True)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a1e98b78",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"custom_tools.ipynb\t\tmulti_input_tool.ipynb\n",
"examples\t\t\ttool_input_validation.ipynb\n",
"getting_started.md\n"
]
}
],
"source": [
"bash.run(\"cd ..\")\n",
"# Note the list of files is different\n",
"print(bash.run(\"ls\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e13c1c9c",
"metadata": {},
"outputs": [],
"source": []
}
@@ -77,7 +178,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.8.16"
}
},
"nbformat": 4,

View File

@@ -27,7 +27,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.tools import DuckDuckGoSearchTool"
"from langchain.tools import DuckDuckGoSearchRun"
]
},
{
@@ -37,7 +37,7 @@
"metadata": {},
"outputs": [],
"source": [
"search = DuckDuckGoSearchTool()"
"search = DuckDuckGoSearchRun()"
]
},
{

View File

@@ -24,8 +24,8 @@
"\n",
"```bash\n",
"echo \"Hello World\"\n",
"```\u001b[0m['```bash', 'echo \"Hello World\"', '```']\n",
"\n",
"```\u001b[0m\n",
"Code: \u001b[33;1m\u001b[1;3m['echo \"Hello World\"']\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3mHello World\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
@@ -65,7 +65,7 @@
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -93,7 +93,7 @@
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 3,
"metadata": {},
"outputs": [
{
@@ -107,8 +107,8 @@
"\n",
"```bash\n",
"printf \"Hello World\\n\"\n",
"```\u001b[0m['```bash', 'printf \"Hello World\\\\n\"', '```']\n",
"\n",
"```\u001b[0m\n",
"Code: \u001b[33;1m\u001b[1;3m['printf \"Hello World\\\\n\"']\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3mHello World\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
@@ -120,7 +120,7 @@
"'Hello World\\n'"
]
},
"execution_count": 29,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -132,6 +132,114 @@
"\n",
"bash_chain.run(text)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Persistent Terminal\n",
"\n",
"By default, the chain will run in a separate subprocess each time it is called. This behavior can be changed by instantiating with a persistent bash process."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMBashChain chain...\u001b[0m\n",
"List the current directory then move up a level.\u001b[32;1m\u001b[1;3m\n",
"\n",
"```bash\n",
"ls\n",
"cd ..\n",
"```\u001b[0m\n",
"Code: \u001b[33;1m\u001b[1;3m['ls', 'cd ..']\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3mapi.ipynb\t\t\tllm_summarization_checker.ipynb\n",
"constitutional_chain.ipynb\tmoderation.ipynb\n",
"llm_bash.ipynb\t\t\topenai_openapi.yaml\n",
"llm_checker.ipynb\t\topenapi.ipynb\n",
"llm_math.ipynb\t\t\tpal.ipynb\n",
"llm_requests.ipynb\t\tsqlite.ipynb\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'api.ipynb\\t\\t\\tllm_summarization_checker.ipynb\\r\\nconstitutional_chain.ipynb\\tmoderation.ipynb\\r\\nllm_bash.ipynb\\t\\t\\topenai_openapi.yaml\\r\\nllm_checker.ipynb\\t\\topenapi.ipynb\\r\\nllm_math.ipynb\\t\\t\\tpal.ipynb\\r\\nllm_requests.ipynb\\t\\tsqlite.ipynb'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.utilities.bash import BashProcess\n",
"\n",
"\n",
"persistent_process = BashProcess(persistent=True)\n",
"bash_chain = LLMBashChain.from_bash_process(llm=llm, bash_process=persistent_process, verbose=True)\n",
"\n",
"text = \"List the current directory then move up a level.\"\n",
"\n",
"bash_chain.run(text)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMBashChain chain...\u001b[0m\n",
"List the current directory then move up a level.\u001b[32;1m\u001b[1;3m\n",
"\n",
"```bash\n",
"ls\n",
"cd ..\n",
"```\u001b[0m\n",
"Code: \u001b[33;1m\u001b[1;3m['ls', 'cd ..']\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3mexamples\t\tgetting_started.ipynb\tindex_examples\n",
"generic\t\t\thow_to_guides.rst\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'examples\\t\\tgetting_started.ipynb\\tindex_examples\\r\\ngeneric\\t\\t\\thow_to_guides.rst'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Run the same command again and see that the state is maintained between calls\n",
"bash_chain.run(text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -150,7 +258,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.8.16"
}
},
"nbformat": 4,

View File

@@ -16,7 +16,7 @@
"1. `pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib`\n",
"\n",
"## 🧑 Instructions for ingesting your Google Docs data\n",
"By default, the `GoogleDriveLoader` expects the `credentials.json` file to be `~/.credentials/credentials.json`, but this is configurable using the `credentials_file` keyword argument. Same thing with `token.json`. Note that `token.json` will be created automatically the first time you use the loader.\n",
"By default, the `GoogleDriveLoader` expects the `credentials.json` file to be `~/.credentials/credentials.json`, but this is configurable using the `credentials_path` keyword argument. Same thing with `token.json` - `token_path`. Note that `token.json` will be created automatically the first time you use the loader.\n",
"\n",
"`GoogleDriveLoader` can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL:\n",
"* Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is `\"1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5\"`\n",

View File

@@ -6,7 +6,7 @@
"source": [
"# CerebriumAI\n",
"\n",
"`Cerebrium` is an AWS Sagemaker alternative. It also provides API access to [several LLM models](https://docs.cerebrium.ai/cerebrium/prebuilt-models/deploymen).\n",
"`Cerebrium` is an AWS Sagemaker alternative. It also provides API access to [several LLM models](https://docs.cerebrium.ai/cerebrium/prebuilt-models/deployment).\n",
"\n",
"This notebook goes over how to use Langchain with [CerebriumAI](https://docs.cerebrium.ai/introduction)."
]

View File

@@ -11,7 +11,7 @@
"\n",
"The [Hugging Face Model Hub](https://huggingface.co/models) hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.\n",
"\n",
"These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the [HugigngFaceHub](huggingface_hub.ipynb) notebook."
"These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the [HuggingFaceHub](huggingface_hub.ipynb) notebook."
]
},
{

View File

@@ -44,7 +44,7 @@
},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
@@ -85,6 +85,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -92,7 +93,7 @@
"\n",
"Find a model on the [replicate explore page](https://replicate.com/explore), and then paste in the model name and version in this format: model_name/version\n",
"\n",
"For example, for this [flan-t5 model]( https://replicate.com/daanelson/flan-t5), click on the API tab. The model name/version would be: `daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8`\n",
"For example, for this [dolly model](https://replicate.com/replicate/dolly-v2-12b), click on the API tab. The model name/version would be: `replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5`\n",
"\n",
"Only the `model` param is required, but we can add other model params when initializing.\n",
"\n",
@@ -113,7 +114,7 @@
},
"outputs": [],
"source": [
"llm = Replicate(model=\"daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8\")"
"llm = Replicate(model=\"replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5\")"
]
},
{
@@ -243,7 +244,7 @@
"metadata": {},
"outputs": [],
"source": [
"llm = Replicate(model=\"daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8\")\n",
"dolly_llm = Replicate(model=\"replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5\")\n",
"text2image = Replicate(model=\"stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf\")"
]
},
@@ -265,7 +266,7 @@
" template=\"What is a good name for a company that makes {product}?\",\n",
")\n",
"\n",
"chain = LLMChain(llm=llm, prompt=prompt)"
"chain = LLMChain(llm=dolly_llm, prompt=prompt)"
]
},
{
@@ -285,7 +286,7 @@
" input_variables=[\"company_name\"],\n",
" template=\"Write a description of a logo for this company: {company_name}\",\n",
")\n",
"chain_two = LLMChain(llm=llm, prompt=second_prompt)"
"chain_two = LLMChain(llm=dolly_llm, prompt=second_prompt)"
]
},
{

View File

@@ -8,12 +8,14 @@
"source": [
"# Sentence Transformers Embeddings\n",
"\n",
"Let's generate embeddings using the [SentenceTransformers](https://www.sbert.net/) integration. SentenceTransformers is a python package that can generate text and image embeddings, originating from [Sentence-BERT](https://arxiv.org/abs/1908.10084)"
"[SentenceTransformers](https://www.sbert.net/) embeddings are called using the `HuggingFaceEmbeddings` integration. We have also added an alias for `SentenceTransformerEmbeddings` for users who are more familiar with directly using that package.\n",
"\n",
"SentenceTransformers is a python package that can generate text and image embeddings, originating from [Sentence-BERT](https://arxiv.org/abs/1908.10084)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 1,
"id": "06c9f47d",
"metadata": {},
"outputs": [
@@ -21,10 +23,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
"To disable this warning, you can either:\n",
"\t- Avoid using `tokenizers` before the fork if possible\n",
"\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n"
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.0.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.1.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n"
]
}
],
@@ -34,27 +35,28 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 2,
"id": "861521a9",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings import SentenceTransformerEmbeddings "
"from langchain.embeddings import HuggingFaceEmbeddings, SentenceTransformerEmbeddings "
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"id": "ff9be586",
"metadata": {},
"outputs": [],
"source": [
"embeddings = SentenceTransformerEmbeddings(model=\"all-MiniLM-L6-v2\")"
"embeddings = HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
"# Equivalent to SentenceTransformerEmbeddings(model_name=\"all-MiniLM-L6-v2\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 4,
"id": "d0a98ae9",
"metadata": {},
"outputs": [],
@@ -64,7 +66,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 5,
"id": "5d6c682b",
"metadata": {},
"outputs": [],
@@ -74,7 +76,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 6,
"id": "bb5e74c0",
"metadata": {},
"outputs": [],
@@ -107,7 +109,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
"version": "3.8.16"
},
"vscode": {
"interpreter": {

View File

@@ -0,0 +1,237 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a792b119",
"metadata": {},
"source": [
"# Connecting to a Feature Store\n",
"\n",
"Feature stores are a concept from traditional machine learning that make sure data fed into models is up-to-date and relevant. For more on this, see [here](https://www.tecton.ai/blog/what-is-a-feature-store/).\n",
"\n",
"This concept is extremely relevant when considering putting LLM applications in production. In order to personalize LLM applications, you may want to combine LLMs with up-to-date information about particular users. Feature stores can be a great way to keep that data fresh, and LangChain provides an easy way to combine that data with LLMs.\n",
"\n",
"In this notebook we will show how to connect prompt templates to feature stores. The basic idea is to call a feature store from inside a prompt template to retrieve values that are then formatted into the prompt."
]
},
{
"cell_type": "markdown",
"id": "ad0b5edf",
"metadata": {},
"source": [
"## Feast\n",
"\n",
"To start, we will use the popular open source feature store framework [Feast](https://github.com/feast-dev/feast).\n",
"\n",
"This assumes you have already run the steps in the README around getting started. We will build of off that example in getting started, and create and LLMChain to write a note to a specific driver regarding their up-to-date statistics."
]
},
{
"cell_type": "markdown",
"id": "7f02f6f3",
"metadata": {},
"source": [
"### Load Feast Store\n",
"\n",
"Again, this should be set up according to the instructions in the Feast README"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "fd1a452a",
"metadata": {},
"outputs": [],
"source": [
"from feast import FeatureStore\n",
"\n",
"# You may need to update the path depending on where you stored it\n",
"feast_repo_path = \"../../../../../my_feature_repo/feature_repo/\"\n",
"store = FeatureStore(repo_path=feast_repo_path)"
]
},
{
"cell_type": "markdown",
"id": "cfe8aae5",
"metadata": {},
"source": [
"### Prompts\n",
"\n",
"Here we will set up a custom FeastPromptTemplate. This prompt template will take in a driver id, look up their stats, and format those stats into a prompt.\n",
"\n",
"Note that the input to this prompt template is just `driver_id`, since that is the only user defined piece (all other variables are looked up inside the prompt template)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5e9cee04",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate, StringPromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "594a3cf3",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Given the driver's up to date stats, write them note relaying those stats to them.\n",
"If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better\n",
"\n",
"Here are the drivers stats:\n",
"Conversation rate: {conv_rate}\n",
"Acceptance rate: {acc_rate}\n",
"Average Daily Trips: {avg_daily_trips}\n",
"\n",
"Your response:\"\"\"\n",
"prompt = PromptTemplate.from_template(template)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "8464c731",
"metadata": {},
"outputs": [],
"source": [
"class FeastPromptTemplate(StringPromptTemplate):\n",
" \n",
" def format(self, **kwargs) -> str:\n",
" driver_id = kwargs.pop(\"driver_id\")\n",
" feature_vector = store.get_online_features(\n",
" features=[\n",
" 'driver_hourly_stats:conv_rate',\n",
" 'driver_hourly_stats:acc_rate',\n",
" 'driver_hourly_stats:avg_daily_trips'\n",
" ],\n",
" entity_rows=[{\"driver_id\": 1001}]\n",
" ).to_dict()\n",
" kwargs[\"conv_rate\"] = feature_vector[\"conv_rate\"][0]\n",
" kwargs[\"acc_rate\"] = feature_vector[\"acc_rate\"][0]\n",
" kwargs[\"avg_daily_trips\"] = feature_vector[\"avg_daily_trips\"][0]\n",
" return prompt.format(**kwargs)"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "c0c7bae2",
"metadata": {},
"outputs": [],
"source": [
"prompt_template = FeastPromptTemplate(input_variables=[\"driver_id\"])"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "d8d70bb7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Given the driver's up to date stats, write them note relaying those stats to them.\n",
"If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better\n",
"\n",
"Here are the drivers stats:\n",
"Conversation rate: 0.4745151400566101\n",
"Acceptance rate: 0.055561766028404236\n",
"Average Daily Trips: 936\n",
"\n",
"Your response:\n"
]
}
],
"source": [
"print(prompt_template.format(driver_id=1001))"
]
},
{
"cell_type": "markdown",
"id": "2870d070",
"metadata": {},
"source": [
"### Use in a chain\n",
"\n",
"We can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature store"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "7106255c",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "79543326",
"metadata": {},
"outputs": [],
"source": [
"chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "97a741a0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Hi there! I wanted to update you on your current stats. Your acceptance rate is 0.055561766028404236 and your average daily trips are 936. While your conversation rate is currently 0.4745151400566101, I have no doubt that with a little extra effort, you'll be able to exceed that .5 mark! Keep up the great work! And remember, even chickens can't always cross the road, but they still give it their best shot.\""
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(1001)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "12e59aaf",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -12,5 +12,6 @@ Specific implementations of agent simulations (or parts of agent simulations) in
- [CAMEL](agent_simulations/camel_role_playing.ipynb): an implementation of the CAMEL (Communicative Agents for “Mind” Exploration of Large Scale Language Model Society) paper, where two agents communicate with each other.
- [Two Player D&D](agent_simulations/two_player_dnd.ipynb): an example of how to use a generic simulator for two agents to implement a variant of the popular Dungeons & Dragons role playing game.
## Generative Agents
## Simulations with Multiple Agents
- [Multi-Player D&D](agent_simulations/multi_player_dnd.ipynb): an example of how to use a generic dialogue simulator for multiple dialogue agents with a custom speaker-ordering, illustrated with a variant of the popular Dungeons & Dragons role playing game.
- [Generative Agents](agent_simulations/characters.ipynb): This notebook implements a generative agent based on the paper [Generative Agents: Interactive Simulacra of Human Behavior](https://arxiv.org/abs/2304.03442) by Park, et. al.

View File

@@ -0,0 +1,493 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Multi-Player Dungeons & Dragons\n",
"\n",
"This notebook shows how the `DialogueAgent` and `DialogueSimulator` class make it easy to extend the [Two-Player Dungeons & Dragons example](https://python.langchain.com/en/latest/use_cases/agent_simulations/two_player_dnd.html) to multiple players.\n",
"\n",
"The main difference between simulating two players and multiple players is in revising the schedule for when each agent speaks\n",
"\n",
"To this end, we augment `DialogueSimulator` to take in a custom function that determines the schedule of which agent speaks. In the example below, each character speaks in round-robin fashion, with the storyteller interleaved between each player."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import LangChain related modules "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Dict, Callable\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage,\n",
" BaseMessage,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## `DialogueAgent` class\n",
"The `DialogueAgent` class is a simple wrapper around the `ChatOpenAI` model that stores the message history from the `dialogue_agent`'s point of view by simply concatenating the messages as strings.\n",
"\n",
"It exposes two methods: \n",
"- `send()`: applies the chatmodel to the message history and returns the message string\n",
"- `receive(name, message)`: adds the `message` spoken by `name` to message history"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"class DialogueAgent():\n",
"\n",
" def __init__(\n",
" self,\n",
" name,\n",
" system_message: SystemMessage,\n",
" model: ChatOpenAI,\n",
" ) -> None:\n",
" self.name = name\n",
" self.system_message = system_message\n",
" self.model = model\n",
" self.message_history = f\"\"\"Here is the conversation so far.\n",
" \"\"\"\n",
" self.prefix = f'\\n{self.name}:'\n",
" \n",
" def send(self) -> str:\n",
" \"\"\"\n",
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" [self.system_message, \n",
" HumanMessage(content=self.message_history+self.prefix)])\n",
" return message.content\n",
" \n",
" def receive(self, name: str, message: str) -> None:\n",
" \"\"\"\n",
" Concatenates {message} spoken by {name} into message history\n",
" \"\"\"\n",
" self.message_history += f'\\n{name}: {message}'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## `DialogueSimulator` class\n",
"The `DialogueSimulator` class takes a list of agents. At each step, it performs the following:\n",
"1. Select the next speaker\n",
"2. Calls the next speaker to send a message \n",
"3. Broadcasts the message to all other agents\n",
"4. Update the step counter.\n",
"The selection of the next speaker can be implemented as any function, but in this case we simply loop through the agents."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"class DialogueSimulator():\n",
" \n",
" def __init__(\n",
" self, \n",
" agents: List[DialogueAgent], \n",
" selection_function: Callable[[int, List[DialogueAgent]], int]\n",
" ) -> None:\n",
" self.agents = agents\n",
" self._step = 0\n",
" self.select_next_speaker = selection_function\n",
" \n",
" def reset(self, name: str, message: str):\n",
" \"\"\"\n",
" Initiates the conversation with a {message} from {name}\n",
" \"\"\"\n",
" for agent in self.agents:\n",
" agent.receive(name, message)\n",
" \n",
" # increment time\n",
" self._step += 1\n",
" \n",
" def step(self) -> tuple[str, str]:\n",
" # 1. choose the next speaker\n",
" speaker_idx = self.select_next_speaker(self._step, self.agents)\n",
" speaker = self.agents[speaker_idx]\n",
" \n",
" # 2. next speaker sends message\n",
" message = speaker.send()\n",
" \n",
" # 3. everyone receives message\n",
" for receiver in self.agents:\n",
" receiver.receive(speaker.name, message)\n",
" \n",
" # 4. increment time\n",
" self._step += 1\n",
" \n",
" return speaker.name, message"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define roles and quest"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"character_names = [\"Harry Potter\", \"Ron Weasley\", \"Hermione Granger\", \"Argus Filch\"]\n",
"storyteller_name = \"Dungeon Master\"\n",
"quest = \"Find all of Lord Voldemort's seven horcruxes.\"\n",
"word_limit = 50 # word limit for task brainstorming"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Ask an LLM to add detail to the game description"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"game_description = f\"\"\"Here is the topic for a Dungeons & Dragons game: {quest}.\n",
" The characters are: {*character_names,}.\n",
" The story is narrated by the storyteller, {storyteller_name}.\"\"\"\n",
"\n",
"player_descriptor_system_message = SystemMessage(\n",
" content=\"You can add detail to the description of a Dungeons & Dragons player.\")\n",
"\n",
"def generate_character_description(character_name):\n",
" character_specifier_prompt = [\n",
" player_descriptor_system_message,\n",
" HumanMessage(content=\n",
" f\"\"\"{game_description}\n",
" Please reply with a creative description of the character, {character_name}, in {word_limit} words or less. \n",
" Speak directly to {character_name}.\n",
" Do not add anything else.\"\"\"\n",
" )\n",
" ]\n",
" character_description = ChatOpenAI(temperature=1.0)(character_specifier_prompt).content\n",
" return character_description\n",
"\n",
"def generate_character_system_message(character_name, character_description):\n",
" return SystemMessage(content=(\n",
" f\"\"\"{game_description}\n",
" Your name is {character_name}. \n",
" Your character description is as follows: {character_description}.\n",
" You will propose actions you plan to take and {storyteller_name} will explain what happens when you take those actions.\n",
" Speak in the first person from the perspective of {character_name}.\n",
" For describing your own body movements, wrap your description in '*'.\n",
" Do not change roles!\n",
" Do not speak from the perspective of anyone else.\n",
" Remember you are {character_name}.\n",
" Stop speaking the moment you finish speaking from your perspective.\n",
" Never forget to keep your response to {word_limit} words!\n",
" Do not add anything else.\n",
" \"\"\"\n",
" ))\n",
"\n",
"character_descriptions = [generate_character_description(character_name) for character_name in character_names]\n",
"character_system_messages = [generate_character_system_message(character_name, character_description) for character_name, character_description in zip(character_names, character_descriptions)]\n",
"\n",
"storyteller_specifier_prompt = [\n",
" player_descriptor_system_message,\n",
" HumanMessage(content=\n",
" f\"\"\"{game_description}\n",
" Please reply with a creative description of the storyteller, {storyteller_name}, in {word_limit} words or less. \n",
" Speak directly to {storyteller_name}.\n",
" Do not add anything else.\"\"\"\n",
" )\n",
"]\n",
"storyteller_description = ChatOpenAI(temperature=1.0)(storyteller_specifier_prompt).content\n",
"\n",
"storyteller_system_message = SystemMessage(content=(\n",
"f\"\"\"{game_description}\n",
"You are the storyteller, {storyteller_name}. \n",
"Your description is as follows: {storyteller_description}.\n",
"The other players will propose actions to take and you will explain what happens when they take those actions.\n",
"Speak in the first person from the perspective of {storyteller_name}.\n",
"Do not change roles!\n",
"Do not speak from the perspective of anyone else.\n",
"Remember you are the storyteller, {storyteller_name}.\n",
"Stop speaking the moment you finish speaking from your perspective.\n",
"Never forget to keep your response to {word_limit} words!\n",
"Do not add anything else.\n",
"\"\"\"\n",
"))"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Storyteller Description:\n",
"Dungeon Master, your vivid imagination conjures a world of wonder and danger. Will you lead our triumphant trio or be the ultimate foil to their quest to rid the world of Voldemort's horcruxes? The fate of both the muggle and wizarding worlds rests in your hands.\n",
"Harry Potter Description:\n",
"Harry Potter, the boy who lived, you hold the fate of the wizarding world in your hands. Your bravery and loyalty to your friends are unmatched. The burden you carry is heavy, but with the power of love by your side, you can overcome any obstacle. The hunt for the horcruxes begins now.\n",
"Ron Weasley Description:\n",
"Ron Weasley, you are Harry Potter's loyal and brave best friend. You have a great sense of humor and always bring joy to the team. Your skills with magic and strategy make you a valuable asset in the fight against Voldemort. Your love for food and your family keeps you grounded and motivated.\n",
"Hermione Granger Description:\n",
"Hermione Granger, you are the brightest witch of your age. Your quick wit and vast knowledge are essential in our quest to find the horcruxes. Trust in your abilities and remember, knowledge is power.\n",
"Argus Filch Description:\n",
"Argus Filch, you are a bitter and cruel caretaker of the Hogwarts School of Witchcraft and Wizardry. Your harsh mannerisms and love for punishing the students know no bounds. Your loyalty to the Wizarding World and disdain for magic-wielders makes it surprising that you would join Harry, Ron, and Hermione in their quest to defeat Voldemort.\n"
]
}
],
"source": [
"print('Storyteller Description:')\n",
"print(storyteller_description)\n",
"for character_name, character_description in zip(character_names, character_descriptions):\n",
" print(f'{character_name} Description:')\n",
" print(character_description)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use an LLM to create an elaborate quest description"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Original quest:\n",
"Find all of Lord Voldemort's seven horcruxes.\n",
"\n",
"Detailed quest:\n",
"You have discovered that one of Voldemort's horcruxes is hidden deep in the Forbidden Forest. You must navigate the dangerous terrain, avoid the creatures lurking within, and find the horcrux before the full moon rises, unleashing a pack of hungry werewolves. Remember, time is of the essence!\n",
"\n"
]
}
],
"source": [
"quest_specifier_prompt = [\n",
" SystemMessage(content=\"You can make a task more specific.\"),\n",
" HumanMessage(content=\n",
" f\"\"\"{game_description}\n",
" \n",
" You are the storyteller, {storyteller_name}.\n",
" Please make the quest more specific. Be creative and imaginative.\n",
" Please reply with the specified quest in {word_limit} words or less. \n",
" Speak directly to the characters: {*character_names,}.\n",
" Do not add anything else.\"\"\"\n",
" )\n",
"]\n",
"specified_quest = ChatOpenAI(temperature=1.0)(quest_specifier_prompt).content\n",
"\n",
"print(f\"Original quest:\\n{quest}\\n\")\n",
"print(f\"Detailed quest:\\n{specified_quest}\\n\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Main Loop"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"characters = []\n",
"for character_name, character_system_message in zip(character_names, character_system_messages):\n",
" characters.append(DialogueAgent(\n",
" name=character_name,\n",
" system_message=character_system_message, \n",
" model=ChatOpenAI(temperature=0.2)))\n",
"storyteller = DialogueAgent(name=storyteller_name,\n",
" system_message=storyteller_system_message, \n",
" model=ChatOpenAI(temperature=0.2))"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"def select_next_speaker(step: int, agents: List[DialogueAgent]) -> int:\n",
" \"\"\"\n",
" If the step is even, then select the storyteller\n",
" Otherwise, select the other characters in a round-robin fashion.\n",
" \n",
" For example, with three characters with indices: 1 2 3\n",
" The storyteller is index 0.\n",
" Then the selected index will be as follows:\n",
"\n",
" step: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16\n",
"\n",
" idx: 0 1 0 2 0 3 0 1 0 2 0 3 0 1 0 2 0\n",
" \"\"\"\n",
" if step % 2 == 0:\n",
" idx = 0\n",
" else:\n",
" idx = (step//2) % (len(agents)-1) + 1\n",
" return idx"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(Dungeon Master): You have discovered that one of Voldemort's horcruxes is hidden deep in the Forbidden Forest. You must navigate the dangerous terrain, avoid the creatures lurking within, and find the horcrux before the full moon rises, unleashing a pack of hungry werewolves. Remember, time is of the essence!\n",
"\n",
"\n",
"(Harry Potter): I take out my wand and cast a Lumos spell to light our way through the dark forest. We need to move quickly and quietly to avoid any unwanted attention from the creatures. Ron, Hermione, and I will lead the way while Argus Filch keeps watch behind us. Let's go!\n",
"\n",
"\n",
"(Dungeon Master): As you make your way through the forest, you hear the rustling of leaves and the snapping of twigs. Suddenly, a group of acromantulas, giant spiders, appear in front of you, blocking your path. What do you do?\n",
"\n",
"\n",
"(Ron Weasley): I quickly cast a spell to create a wall of fire between us and the acromantulas. Hopefully, the flames will deter them from attacking us. We need to keep moving forward and find that horcrux before it's too late.\n",
"\n",
"\n",
"(Dungeon Master): The acromantulas hiss and retreat from the wall of fire, allowing you to pass. As you continue deeper into the forest, you come across a clearing with a small pond. In the center of the pond, you see a glowing object. It must be the horcrux! But how do you get to it? What do you do?\n",
"\n",
"\n",
"(Hermione Granger): I take out my wand and cast a spell to conjure a small boat. We can use it to reach the center of the pond and retrieve the horcrux. But we need to be careful, there could be traps or other obstacles in our way. Ron, Harry, let's row the boat while Argus Filch keeps watch from the shore.\n",
"\n",
"\n",
"(Dungeon Master): As you row towards the center of the pond, you hear a loud hissing sound. Suddenly, a giant serpent emerges from the water, blocking your path. It looks angry and ready to attack. What do you do?\n",
"\n",
"\n",
"(Argus Filch): I take out my crossbow and aim it at the serpent. I may not be a wizard, but I know how to handle a weapon. I'll shoot it if it comes any closer. We can't let this serpent stop us from getting that horcrux.\n",
"\n",
"\n",
"(Dungeon Master): The serpent lunges towards the boat, but Argus Filch's crossbow bolt hits it in the head, causing it to retreat back into the water. You reach the center of the pond and retrieve the glowing object, which turns out to be a locket. Congratulations, you have found one of Voldemort's horcruxes! But there are still six more to find. What challenges will you face next?\n",
"\n",
"\n",
"(Harry Potter): We need to regroup and figure out our next move. We should head back to Hogwarts and consult with Professor Dumbledore's portrait. He may have some insight on where the other horcruxes could be hidden. We can't waste any time, Voldemort is getting stronger every day. Let's go!\n",
"\n",
"\n",
"(Dungeon Master): As you make your way back to Hogwarts, you hear a loud roar coming from the Forbidden Forest. It sounds like a werewolf. You must hurry before it catches up to you. You arrive at Dumbledore's office and he tells you that the next horcrux is hidden in a dangerous location. Are you ready for the next challenge?\n",
"\n",
"\n",
"(Ron Weasley): I'm always ready for a challenge! What's the location and what do we need to do to get there? We can't let Voldemort win, we have to find all of the horcruxes and destroy them. Let's do this!\n",
"\n",
"\n",
"(Dungeon Master): Dumbledore tells you that the next horcrux is hidden in the depths of Gringotts Bank. You must break into the bank, navigate its treacherous security measures, and find the horcrux before the goblins catch you. Are you ready to face the challenge of a lifetime? The fate of the wizarding world rests in your hands.\n",
"\n",
"\n",
"(Hermione Granger): I suggest we do some research on Gringotts Bank and its security measures before we attempt to break in. We need to be prepared and have a solid plan in place. We can also gather any necessary tools or potions that may help us along the way. Let's not rush into this blindly.\n",
"\n",
"\n",
"(Dungeon Master): As you research and plan your break-in to Gringotts Bank, you discover that the bank is heavily guarded by goblins, dragons, and other dangerous creatures. You'll need to be stealthy and quick to avoid detection. Are you ready to put your plan into action and face the dangers that await you? The clock is ticking, Voldemort's power grows stronger with each passing day.\n",
"\n",
"\n",
"(Argus Filch): I'll make sure to keep watch outside the bank while you all go in. I may not be able to help with the magic, but I can make sure no one interferes with our mission. We can't let anyone stop us from finding that horcrux and defeating Voldemort. Let's go!\n",
"\n",
"\n",
"(Dungeon Master): As you approach Gringotts Bank, you see the imposing structure looming before you. You sneak past the guards and make your way inside, navigating the twisting corridors and avoiding the traps set to catch intruders. Finally, you reach the vault where the horcrux is hidden. But it's guarded by a fierce dragon. What do you do?\n",
"\n",
"\n",
"(Harry Potter): I remember the time when I faced a dragon during the Triwizard Tournament. I take out my wand and cast a spell to distract the dragon while Ron and Hermione retrieve the horcrux. We need to work together and be quick. Time is running out and we can't afford to fail.\n",
"\n",
"\n",
"(Dungeon Master): The dragon roars and breathes fire, but Harry's spell distracts it long enough for Ron and Hermione to retrieve the horcrux. You make your way out of Gringotts Bank, but the goblins are hot on your trail. You must escape before they catch you. Congratulations, you have found another horcrux. But there are still five more to go. What challenges will you face next?\n",
"\n",
"\n",
"(Ron Weasley): We need to regroup and figure out our next move. We should consult with Professor Dumbledore's portrait again and see if he has any information on the next horcrux. We also need to be prepared for whatever challenges come our way. Voldemort won't make it easy for us, but we can't give up. Let's go!\n",
"\n",
"\n",
"(Dungeon Master): As you make your way back to Hogwarts, you hear a loud explosion coming from the direction of Hogsmeade. You arrive to find that Death Eaters have attacked the village and are wreaking havoc. You must fight off the Death Eaters and protect the innocent villagers. Are you ready to face this unexpected challenge and defend the wizarding world? The fate of both muggles and wizards rests in your hands.\n",
"\n",
"\n"
]
}
],
"source": [
"max_iters = 20\n",
"n = 0\n",
"\n",
"simulator = DialogueSimulator(\n",
" agents=[storyteller] + characters,\n",
" selection_function=select_next_speaker\n",
")\n",
"simulator.reset(storyteller_name, specified_quest)\n",
"print(f\"({storyteller_name}): {specified_quest}\")\n",
"print('\\n')\n",
"\n",
"while n < max_iters:\n",
" name, message = simulator.step()\n",
" print(f\"({name}): {message}\")\n",
" print('\\n')\n",
" n += 1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,562 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ba5f8741",
"metadata": {},
"source": [
"# Plug-and-Plai\n",
"\n",
"This notebook builds upon the idea of [tool retrieval](custom_agent_with_plugin_retrieval.html), but pulls all tools from `plugnplai` - a directory of AI Plugins."
]
},
{
"cell_type": "markdown",
"id": "fea4812c",
"metadata": {},
"source": [
"## Set up environment\n",
"\n",
"Do necessary imports, etc."
]
},
{
"cell_type": "markdown",
"id": "aca08be8",
"metadata": {},
"source": [
"Install plugnplai lib to get a list of active plugins from https://plugplai.com directory"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "52e248c9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip available: \u001b[0m\u001b[31;49m22.3.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.1.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"pip install plugnplai -q"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9af9734e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain import OpenAI, SerpAPIWrapper, LLMChain\n",
"from typing import List, Union\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.agents.agent_toolkits import NLAToolkit\n",
"from langchain.tools.plugin import AIPlugin\n",
"import re\n",
"import plugnplai"
]
},
{
"cell_type": "markdown",
"id": "2f91d8b4",
"metadata": {},
"source": [
"## Setup LLM"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a1a3b59c",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "6df0253f",
"metadata": {},
"source": [
"## Set up plugins\n",
"\n",
"Load and index plugins"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9e0f7882",
"metadata": {},
"outputs": [],
"source": [
"# Get all plugins from plugnplai.com\n",
"urls = plugnplai.get_plugins()\n",
"\n",
"# Get ChatGPT plugins - only ChatGPT verified plugins\n",
"urls = plugnplai.get_plugins(filter = 'ChatGPT')\n",
"\n",
"# Get working plugins - only tested plugins (in progress)\n",
"urls = plugnplai.get_plugins(filter = 'working')\n",
"\n",
"\n",
"AI_PLUGINS = [AIPlugin.from_url(url + \"/.well-known/ai-plugin.json\") for url in urls]"
]
},
{
"cell_type": "markdown",
"id": "17362717",
"metadata": {},
"source": [
"## Tool Retriever\n",
"\n",
"We will use a vectorstore to create embeddings for each tool description. Then, for an incoming query we can create embeddings for that query and do a similarity search for relevant tools."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "77c4be4b",
"metadata": {},
"outputs": [],
"source": [
"from langchain.vectorstores import FAISS\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema import Document"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9092a158",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.2 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load a Swagger 2.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n"
]
}
],
"source": [
"embeddings = OpenAIEmbeddings()\n",
"docs = [\n",
" Document(page_content=plugin.description_for_model, \n",
" metadata={\"plugin_name\": plugin.name_for_model}\n",
" )\n",
" for plugin in AI_PLUGINS\n",
"]\n",
"vector_store = FAISS.from_documents(docs, embeddings)\n",
"toolkits_dict = {plugin.name_for_model: \n",
" NLAToolkit.from_llm_and_ai_plugin(llm, plugin) \n",
" for plugin in AI_PLUGINS}"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "735a7566",
"metadata": {},
"outputs": [],
"source": [
"retriever = vector_store.as_retriever()\n",
"\n",
"def get_tools(query):\n",
" # Get documents, which contain the Plugins to use\n",
" docs = retriever.get_relevant_documents(query)\n",
" # Get the toolkits, one for each plugin\n",
" tool_kits = [toolkits_dict[d.metadata[\"plugin_name\"]] for d in docs]\n",
" # Get the tools: a separate NLAChain for each endpoint\n",
" tools = []\n",
" for tk in tool_kits:\n",
" tools.extend(tk.nla_tools)\n",
" return tools"
]
},
{
"cell_type": "markdown",
"id": "7699afd7",
"metadata": {},
"source": [
"We can now test this retriever to see if it seems to work."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "425f2886",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['Milo.askMilo',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.search_all_actions',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.preview_a_zap',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.get_configuration_link',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.list_exposed_actions',\n",
" 'SchoolDigger_API_V2.0.Autocomplete_GetSchools',\n",
" 'SchoolDigger_API_V2.0.Districts_GetAllDistricts2',\n",
" 'SchoolDigger_API_V2.0.Districts_GetDistrict2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetSchoolRank2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetRank_District',\n",
" 'SchoolDigger_API_V2.0.Schools_GetAllSchools20',\n",
" 'SchoolDigger_API_V2.0.Schools_GetSchool20',\n",
" 'Speak.translate',\n",
" 'Speak.explainPhrase',\n",
" 'Speak.explainTask']"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tools = get_tools(\"What could I do today with my kiddo\")\n",
"[t.name for t in tools]"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "3aa88768",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['Open_AI_Klarna_product_Api.productsUsingGET',\n",
" 'Milo.askMilo',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.search_all_actions',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.preview_a_zap',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.get_configuration_link',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.list_exposed_actions',\n",
" 'SchoolDigger_API_V2.0.Autocomplete_GetSchools',\n",
" 'SchoolDigger_API_V2.0.Districts_GetAllDistricts2',\n",
" 'SchoolDigger_API_V2.0.Districts_GetDistrict2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetSchoolRank2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetRank_District',\n",
" 'SchoolDigger_API_V2.0.Schools_GetAllSchools20',\n",
" 'SchoolDigger_API_V2.0.Schools_GetSchool20']"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tools = get_tools(\"what shirts can i buy?\")\n",
"[t.name for t in tools]"
]
},
{
"cell_type": "markdown",
"id": "2e7a075c",
"metadata": {},
"source": [
"## Prompt Template\n",
"\n",
"The prompt template is pretty standard, because we're not actually changing that much logic in the actual prompt template, but rather we are just changing how retrieval is done."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "339b1bb8",
"metadata": {},
"outputs": [],
"source": [
"# Set up the base template\n",
"template = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\n",
"\n",
"{tools}\n",
"\n",
"Use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about what to do\n",
"Action: the action to take, should be one of [{tool_names}]\n",
"Action Input: the input to the action\n",
"Observation: the result of the action\n",
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Arg\"s\n",
"\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\""
]
},
{
"cell_type": "markdown",
"id": "1583acdc",
"metadata": {},
"source": [
"The custom prompt template now has the concept of a tools_getter, which we call on the input to select the tools to use"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "fd969d31",
"metadata": {},
"outputs": [],
"source": [
"from typing import Callable\n",
"# Set up a prompt template\n",
"class CustomPromptTemplate(StringPromptTemplate):\n",
" # The template to use\n",
" template: str\n",
" ############## NEW ######################\n",
" # The list of tools available\n",
" tools_getter: Callable\n",
" \n",
" def format(self, **kwargs) -> str:\n",
" # Get the intermediate steps (AgentAction, Observation tuples)\n",
" # Format them in a particular way\n",
" intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
" thoughts = \"\"\n",
" for action, observation in intermediate_steps:\n",
" thoughts += action.log\n",
" thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
" # Set the agent_scratchpad variable to that value\n",
" kwargs[\"agent_scratchpad\"] = thoughts\n",
" ############## NEW ######################\n",
" tools = self.tools_getter(kwargs[\"input\"])\n",
" # Create a tools variable from the list of tools provided\n",
" kwargs[\"tools\"] = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])\n",
" # Create a list of tool names for the tools provided\n",
" kwargs[\"tool_names\"] = \", \".join([tool.name for tool in tools])\n",
" return self.template.format(**kwargs)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "798ef9fb",
"metadata": {},
"outputs": [],
"source": [
"prompt = CustomPromptTemplate(\n",
" template=template,\n",
" tools_getter=get_tools,\n",
" # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically\n",
" # This includes the `intermediate_steps` variable because that is needed\n",
" input_variables=[\"input\", \"intermediate_steps\"]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ef3a1af3",
"metadata": {},
"source": [
"## Output Parser\n",
"\n",
"The output parser is unchanged from the previous notebook, since we are not changing anything about the output format."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "7c6fe0d3",
"metadata": {},
"outputs": [],
"source": [
"class CustomOutputParser(AgentOutputParser):\n",
" \n",
" def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:\n",
" # Check if agent should finish\n",
" if \"Final Answer:\" in llm_output:\n",
" return AgentFinish(\n",
" # Return values is generally always a dictionary with a single `output` key\n",
" # It is not recommended to try anything else at the moment :)\n",
" return_values={\"output\": llm_output.split(\"Final Answer:\")[-1].strip()},\n",
" log=llm_output,\n",
" )\n",
" # Parse out the action and action input\n",
" regex = r\"Action\\s*\\d*\\s*:(.*?)\\nAction\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n",
" match = re.search(regex, llm_output, re.DOTALL)\n",
" if not match:\n",
" raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
" action = match.group(1).strip()\n",
" action_input = match.group(2)\n",
" # Return the action and action input\n",
" return AgentAction(tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "d278706a",
"metadata": {},
"outputs": [],
"source": [
"output_parser = CustomOutputParser()"
]
},
{
"cell_type": "markdown",
"id": "170587b1",
"metadata": {},
"source": [
"## Set up LLM, stop sequence, and the agent\n",
"\n",
"Also the same as the previous notebook"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "f9d4c374",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "9b1cc2a2",
"metadata": {},
"outputs": [],
"source": [
"# LLM chain consisting of the LLM and a prompt\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "e4f5092f",
"metadata": {},
"outputs": [],
"source": [
"tool_names = [tool.name for tool in tools]\n",
"agent = LLMSingleActionAgent(\n",
" llm_chain=llm_chain, \n",
" output_parser=output_parser,\n",
" stop=[\"\\nObservation:\"], \n",
" allowed_tools=tool_names\n",
")"
]
},
{
"cell_type": "markdown",
"id": "aa8a5326",
"metadata": {},
"source": [
"## Use the Agent\n",
"\n",
"Now we can use it!"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "490604e9",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "653b1617",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find a product API\n",
"Action: Open_AI_Klarna_product_Api.productsUsingGET\n",
"Action Input: shirts\u001b[0m\n",
"\n",
"Observation:\u001b[36;1m\u001b[1;3mI found 10 shirts from the API response. They range in price from $9.99 to $450.00 and come in a variety of materials, colors, and patterns.\u001b[0m\u001b[32;1m\u001b[1;3m I now know what shirts I can buy\n",
"Final Answer: Arg, I found 10 shirts from the API response. They range in price from $9.99 to $450.00 and come in a variety of materials, colors, and patterns.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Arg, I found 10 shirts from the API response. They range in price from $9.99 to $450.00 and come in a variety of materials, colors, and patterns.'"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"what shirts can i buy?\")"
]
},
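{
"cell_type": "markdown",
"id": "a3c1f0d2",
"metadata": {},
"source": [
"As a final (unexecuted) sketch, the family-activity query we used to test the retriever earlier should pull in a different set of plugins, such as Milo:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7e5d9a1",
"metadata": {},
"outputs": [],
"source": [
"# Not executed here: the retrieved tools (and therefore the answer) depend on the live plugin directory\n",
"agent_executor.run(\"What could I do today with my kiddo\")"
]
},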
{
"cell_type": "code",
"execution_count": null,
"id": "2481ee76",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "3ccef4e08d87aa1eeb90f63e0f071292ccb2e9c42e70f74ab2bf6f5493ca7bbc"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -219,7 +219,7 @@
},
"outputs": [],
"source": [
"from langchain.tools import BaseTool, DuckDuckGoSearchTool\n",
"from langchain.tools import BaseTool, DuckDuckGoSearchRun\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"from pydantic import Field\n",
@@ -321,7 +321,7 @@
"outputs": [],
"source": [
"# !pip install duckduckgo_search\n",
"web_search = DuckDuckGoSearchTool()"
"web_search = DuckDuckGoSearchRun()"
]
},
{
@@ -618,7 +618,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
"version": "3.8.16"
}
},
"nbformat": 4,

View File

@@ -283,7 +283,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,342 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Generic Agent Evaluation\n",
"\n",
"Good evaluation is key for quickly iterating on your agent's prompts and tools. Here we provide an example of how to use the TrajectoryEvalChain to evaluate your agent."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"Let's start by defining our agent."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain import Wikipedia\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents import AgentType\n",
"from langchain.agents.react.base import DocstoreExplorer\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain import LLMMathChain\n",
"from langchain.llms import OpenAI\n",
"\n",
"from langchain import SerpAPIWrapper\n",
"\n",
"docstore = DocstoreExplorer(Wikipedia())\n",
"\n",
"math_llm = OpenAI(temperature=0)\n",
"\n",
"llm_math_chain = LLMMathChain(llm=math_llm, verbose=True)\n",
"\n",
"search = SerpAPIWrapper()\n",
"\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=docstore.search,\n",
" description=\"useful for when you need to ask with search\",\n",
" ),\n",
" Tool(\n",
" name=\"Lookup\",\n",
" func=docstore.lookup,\n",
" description=\"useful for when you need to ask with lookup\",\n",
" ),\n",
" Tool(\n",
" name=\"Calculator\",\n",
" func=llm_math_chain.run,\n",
" description=\"useful for doing calculations\",\n",
" ),\n",
" Tool(\n",
" name=\"Search the Web (SerpAPI)\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\",\n",
" ),\n",
"]\n",
"\n",
"memory = ConversationBufferMemory(\n",
" memory_key=\"chat_history\", return_messages=True, output_key=\"output\"\n",
")\n",
"\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-3.5-turbo\")\n",
"\n",
"agent = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,\n",
" verbose=True,\n",
" memory=memory,\n",
" return_intermediate_steps=True, # This is needed for the evaluation later\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Testing the Agent\n",
"\n",
"Now let's try our agent out on some example queries."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Search the Web (SerpAPI)\",\n",
" \"action_input\": \"How many ping pong balls would it take to fill the entire Empire State Building?\"\n",
"}\u001b[0m\n",
"Observation: \u001b[31;1m\u001b[1;3m12.8 billion. The volume of the Empire State Building Googles in at around 37 million ft³. A golf ball comes in at about 2.5 in³.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"It would take approximately 12.8 billion ping pong balls to fill the entire Empire State Building.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"query_one = \"How many ping pong balls would it take to fill the entire Empire State Building?\"\n",
"\n",
"test_outputs_one = agent({\"input\": query_one}, return_only_outputs=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This looks good! Let's try it out on another query."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Calculator\",\n",
" \"action_input\": \"The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,876 Eiffel Towers.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,876 Eiffel Towers.\u001b[32;1m\u001b[1;3m\n",
"```text\n",
"4828000 / 324\n",
"```\n",
"...numexpr.evaluate(\"4828000 / 324\")...\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m14901.234567901234\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[38;5;200m\u001b[1;3mAnswer: 14901.234567901234\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Calculator\",\n",
" \"action_input\": \"The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,901 Eiffel Towers.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,901 Eiffel Towers.\u001b[32;1m\u001b[1;3m\n",
"```text\n",
"4828000 / 324\n",
"```\n",
"...numexpr.evaluate(\"4828000 / 324\")...\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m14901.234567901234\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[38;5;200m\u001b[1;3mAnswer: 14901.234567901234\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"If you laid the Eiffel Tower end to end, you would need approximately 14,901 Eiffel Towers to cover the US from coast to coast.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"query_two = \"If you laid the Eiffel Tower end to end, how many would you need cover the US from coast to coast?\"\n",
"\n",
"test_outputs_two = agent({\"input\": query_two}, return_only_outputs=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This doesn't look so good. Let's try running some evaluation.\n",
"\n",
"## Evaluating the Agent\n",
"\n",
"Let's start by defining the TrajectoryEvalChain."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain.evaluation.agents import TrajectoryEvalChain\n",
"\n",
"# Define chain\n",
"eval_chain = TrajectoryEvalChain.from_llm(\n",
" llm=ChatOpenAI(temperature=0, model_name=\"gpt-4\"), # Note: This must be a ChatOpenAI model\n",
" agent_tools=agent.tools,\n",
" return_reasoning=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's try evaluating the first query."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Score from 1 to 5: 1\n",
"Reasoning: First, let's evaluate the final answer. The final answer is incorrect because it uses the volume of golf balls instead of ping pong balls. The answer is not helpful.\n",
"\n",
"Second, does the model use a logical sequence of tools to answer the question? The model only used one tool, which was the Search the Web (SerpAPI). It did not use the Calculator tool to calculate the correct volume of ping pong balls.\n",
"\n",
"Third, does the AI language model use the tools in a helpful way? The model used the Search the Web (SerpAPI) tool, but the output was not helpful because it provided information about golf balls instead of ping pong balls.\n",
"\n",
"Fourth, does the AI language model use too many steps to answer the question? The model used only one step, which is not too many. However, it should have used more steps to provide a correct answer.\n",
"\n",
"Fifth, are the appropriate tools used to answer the question? The model should have used the Search tool to find the volume of the Empire State Building and the volume of a ping pong ball. Then, it should have used the Calculator tool to calculate the number of ping pong balls needed to fill the building.\n",
"\n",
"Judgment: Given the incorrect final answer and the inappropriate use of tools, we give the model a score of 1.\n"
]
}
],
"source": [
"question, steps, answer = test_outputs_one[\"input\"], test_outputs_one[\"intermediate_steps\"], test_outputs_one[\"output\"]\n",
"\n",
"evaluation = eval_chain(\n",
" inputs={\"question\": question, \"answer\": answer, \"agent_trajectory\": eval_chain.get_agent_trajectory(steps)},\n",
")\n",
"\n",
"print(\"Score from 1 to 5: \", evaluation[\"score\"])\n",
"print(\"Reasoning: \", evaluation[\"reasoning\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That seems about right. Let's try the second query."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Score from 1 to 5: 3\n",
"Reasoning: i. Is the final answer helpful?\n",
"Yes, the final answer is helpful as it provides an approximate number of Eiffel Towers needed to cover the US from coast to coast.\n",
"\n",
"ii. Does the AI language use a logical sequence of tools to answer the question?\n",
"No, the AI language model does not use a logical sequence of tools. It directly uses the Calculator tool without first using the Search or Lookup tools to find the necessary information (length of the Eiffel Tower and distance from coast to coast in the US).\n",
"\n",
"iii. Does the AI language model use the tools in a helpful way?\n",
"The AI language model uses the Calculator tool in a helpful way to perform the calculation, but it should have used the Search or Lookup tools first to find the required information.\n",
"\n",
"iv. Does the AI language model use too many steps to answer the question?\n",
"No, the AI language model does not use too many steps. However, it repeats the same step twice, which is unnecessary.\n",
"\n",
"v. Are the appropriate tools used to answer the question?\n",
"Not entirely. The AI language model should have used the Search or Lookup tools to find the required information before using the Calculator tool.\n",
"\n",
"Given the above evaluation, the AI language model's performance can be scored as follows:\n"
]
}
],
"source": [
"question, steps, answer = test_outputs_two[\"input\"], test_outputs_two[\"intermediate_steps\"], test_outputs_two[\"output\"]\n",
"\n",
"evaluation = eval_chain(\n",
" inputs={\"question\": question, \"answer\": answer, \"agent_trajectory\": eval_chain.get_agent_trajectory(steps)},\n",
")\n",
"\n",
"print(\"Score from 1 to 5: \", evaluation[\"score\"])\n",
"print(\"Reasoning: \", evaluation[\"reasoning\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That also sounds about right. In conclusion, the TrajectoryEvalChain allows us to use GPT-4 to score both our agent's outputs and tool use in addition to giving us the reasoning behind the evaluation."
]
}
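,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a final sketch (reusing the agent and eval_chain defined above, and not executed here), the same pattern extends to a whole test set: loop over the queries and aggregate the scores."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch (not executed): evaluate a batch of queries and average the scores\n",
"queries = [query_one, query_two] # extend with your own test questions\n",
"scores = []\n",
"for query in queries:\n",
" outputs = agent({\"input\": query}, return_only_outputs=False)\n",
" evaluation = eval_chain(\n",
" inputs={\n",
" \"question\": outputs[\"input\"],\n",
" \"answer\": outputs[\"output\"],\n",
" \"agent_trajectory\": eval_chain.get_agent_trajectory(outputs[\"intermediate_steps\"]),\n",
" },\n",
" )\n",
" scores.append(evaluation[\"score\"])\n",
"print(\"Average score:\", sum(scores) / len(scores))"
]
}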
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "06ba49dd587e86cdcfee66b9ffe769e1e94f0e368e54c2d6c866e38e33c0d9b1"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -20,4 +20,6 @@ Highlighting specific parts:
Specific examples of this include:
- [AI Plugins](agents/custom_agent_with_plugin_retrieval.ipynb): an implementation of an agent that is designed to be able to use all AI Plugins.
- [Plug-and-PlAI (Plugins Database)](agents/custom_agent_with_plugin_retrieval_using_plugnplai.ipynb): an implementation of an agent that is designed to be able to use all AI Plugins retrieved from PlugNPlAI.
- [Wikibase Agent](agents/wikibase_agent.ipynb): an implementation of an agent that is designed to interact with Wikibase.
- [Sales GPT](agents/sales_agent_with_context.ipynb): This notebook demonstrates an implementation of a Context-Aware AI Sales agent.

View File

@@ -35,7 +35,7 @@ def create_pandas_dataframe_agent(
prompt = ZeroShotAgent.create_prompt(
tools, prefix=prefix, suffix=suffix, input_variables=input_variables
)
partial_prompt = prompt.partial(df=str(df.head()))
partial_prompt = prompt.partial(df=str(df.head().to_markdown()))
llm_chain = LLMChain(
llm=llm,
prompt=partial_prompt,

View File

@@ -2,7 +2,7 @@
"""Load tools."""
import warnings
from typing import Any, Dict, List, Optional, Callable, Tuple
from mypy_extensions import KwArg
from mypy_extensions import Arg, KwArg
from langchain.agents.tools import Tool
from langchain.callbacks.base import BaseCallbackManager
@@ -15,7 +15,7 @@ from langchain.requests import TextRequestsWrapper
from langchain.tools.arxiv.tool import ArxivQueryRun
from langchain.tools.base import BaseTool
from langchain.tools.bing_search.tool import BingSearchRun
from langchain.tools.ddg_search.tool import DuckDuckGoSearchTool
from langchain.tools.ddg_search.tool import DuckDuckGoSearchRun
from langchain.tools.google_search.tool import GoogleSearchResults, GoogleSearchRun
from langchain.tools.human.tool import HumanInputRun
from langchain.tools.python.tool import PythonREPLTool
@@ -74,7 +74,7 @@ def _get_terminal() -> BaseTool:
)
_BASE_TOOLS = {
_BASE_TOOLS: Dict[str, Callable[[], BaseTool]] = {
"python_repl": _get_python_repl,
"requests": _get_tools_requests_get, # preserved for backwards compatability
"requests_get": _get_tools_requests_get,
@@ -120,7 +120,7 @@ def _get_open_meteo_api(llm: BaseLLM) -> BaseTool:
)
_LLM_TOOLS = {
_LLM_TOOLS: Dict[str, Callable[[BaseLLM], BaseTool]] = {
"pal-math": _get_pal_math,
"pal-colored-objects": _get_pal_colored_objects,
"llm-math": _get_llm_math,
@@ -219,14 +219,16 @@ def _get_bing_search(**kwargs: Any) -> BaseTool:
def _get_ddg_search(**kwargs: Any) -> BaseTool:
return DuckDuckGoSearchTool(api_wrapper=DuckDuckGoSearchAPIWrapper(**kwargs))
return DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper(**kwargs))
def _get_human_tool(**kwargs: Any) -> BaseTool:
return HumanInputRun(**kwargs)
_EXTRA_LLM_TOOLS = {
_EXTRA_LLM_TOOLS: Dict[
str, Tuple[Callable[[Arg(BaseLLM, "llm"), KwArg(Any)], BaseTool], List[str]]
] = {
"news-api": (_get_news_api, ["news_api_key"]),
"tmdb-api": (_get_tmdb_api, ["tmdb_bearer_token"]),
"podcast-api": (_get_podcast_api, ["listen_api_key"]),

View File

@@ -1,145 +0,0 @@
"""Chain that takes in an input and produces an action and action input."""
from __future__ import annotations
import json
import logging
from abc import abstractmethod
from pathlib import Path
from typing import Any, Dict, List, Optional, Sequence, Tuple, Union
import yaml
from pydantic import BaseModel
from langchain.callbacks.base import BaseCallbackManager
from langchain.schema import (
StructuredAgentAction,
AgentFinish,
BaseLanguageModel,
)
from langchain.tools.base import BaseTool
logger = logging.getLogger(__name__)
class BaseSingleActionAgent(BaseModel):
"""Base Agent class."""
@property
def return_values(self) -> List[str]:
"""Return values of the agent."""
return ["output"]
def get_allowed_tools(self) -> Optional[List[str]]:
return None
@abstractmethod
def plan(
self, intermediate_steps: List[Tuple[StructuredAgentAction, str]], **kwargs: Any
) -> Union[StructuredAgentAction, AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
@abstractmethod
async def aplan(
self, intermediate_steps: List[Tuple[StructuredAgentAction, str]], **kwargs: Any
) -> Union[StructuredAgentAction, AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
@property
@abstractmethod
def input_keys(self) -> List[str]:
"""Return the input keys.
:meta private:
"""
def return_stopped_response(
self,
early_stopping_method: str,
intermediate_steps: List[Tuple[StructuredAgentAction, str]],
**kwargs: Any,
) -> AgentFinish:
"""Return response when agent has been stopped due to max iterations."""
if early_stopping_method == "force":
# `force` just returns a constant string
return AgentFinish(
{"output": "Agent stopped due to iteration limit or time limit."}, ""
)
else:
raise ValueError(
f"Got unsupported early_stopping_method `{early_stopping_method}`"
)
@classmethod
def from_llm_and_tools(
cls,
llm: BaseLanguageModel,
tools: Sequence[BaseTool],
callback_manager: Optional[BaseCallbackManager] = None,
**kwargs: Any,
) -> BaseSingleActionAgent:
raise NotImplementedError
@property
def _agent_type(self) -> str:
"""Return Identifier of agent type."""
raise NotImplementedError
def dict(self, **kwargs: Any) -> Dict:
"""Return dictionary representation of agent."""
_dict = super().dict()
_dict["_type"] = str(self._agent_type)
return _dict
def save(self, file_path: Union[Path, str]) -> None:
"""Save the agent.
Args:
file_path: Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
"""
# Convert file to Path object.
if isinstance(file_path, str):
save_path = Path(file_path)
else:
save_path = file_path
directory_path = save_path.parent
directory_path.mkdir(parents=True, exist_ok=True)
# Fetch dictionary to save
agent_dict = self.dict()
if save_path.suffix == ".json":
with open(file_path, "w") as f:
json.dump(agent_dict, f, indent=4)
elif save_path.suffix == ".yaml":
with open(file_path, "w") as f:
yaml.dump(agent_dict, f, default_flow_style=False)
else:
raise ValueError(f"{save_path} must be json or yaml")
def tool_run_logging_kwargs(self) -> Dict:
return {}

View File

@@ -1,12 +1,14 @@
"""Interface for tools."""
from functools import partial
from inspect import signature
from typing import Any, Awaitable, Callable, Optional, Union
from typing import Any, Awaitable, Callable, Optional, Type, Union
from pydantic import validator
from pydantic import BaseModel, validate_arguments, validator
from langchain.tools.base import (
BaseTool,
create_schema_from_function,
get_filtered_args,
)
@@ -26,14 +28,22 @@ class Tool(BaseTool):
raise ValueError("Partial functions not yet supported in tools.")
return func
def _run(self, tool_input: str) -> str:
"""Use the tool."""
return self.func(tool_input)
@property
def args(self) -> dict:
if self.args_schema is not None:
return self.args_schema.schema()["properties"]
else:
inferred_model = validate_arguments(self.func).model # type: ignore
return get_filtered_args(inferred_model, self.func)
async def _arun(self, tool_input: str) -> str:
def _run(self, *args: Any, **kwargs: Any) -> str:
"""Use the tool."""
return self.func(*args, **kwargs)
async def _arun(self, *args: Any, **kwargs: Any) -> str:
"""Use the tool asynchronously."""
if self.coroutine:
return await self.coroutine(tool_input)
return await self.coroutine(*args, **kwargs)
raise NotImplementedError("Tool does not support async")
# TODO: this is for backwards compatibility, remove in future
@@ -64,6 +74,8 @@ class InvalidTool(BaseTool):
def tool(
*args: Union[str, Callable],
return_direct: bool = False,
args_schema: Optional[Type[BaseModel]] = None,
infer_schema: bool = True,
) -> Callable:
"""Make tools out of functions, can be used with or without arguments.
@@ -71,6 +83,10 @@ def tool(
*args: The arguments to the tool.
return_direct: Whether to return directly from the tool rather
than continuing the agent loop.
args_schema: optional argument schema for user to specify
infer_schema: Whether to infer the schema of the arguments from
the function's signature. This also makes the resultant tool
accept a dictionary input to its `run()` function.
Requires:
- Function must be of type (str) -> str
@@ -96,9 +112,13 @@ def tool(
# Description example:
# search_api(query: str) - Searches the API for the query.
description = f"{tool_name}{signature(func)} - {func.__doc__.strip()}"
_args_schema = args_schema
if _args_schema is None and infer_schema:
_args_schema = create_schema_from_function(f"{tool_name}Schema", func)
tool_ = Tool(
name=tool_name,
func=func,
args_schema=_args_schema,
description=description,
return_direct=return_direct,
)

View File

@@ -10,6 +10,10 @@ from langchain.schema import AgentAction, AgentFinish, LLMResult
class StreamlitCallbackHandler(BaseCallbackHandler):
"""Callback Handler that logs to streamlit."""
def __init__(self) -> None:
self.tokens_area = st.empty()
self.tokens_stream = ""
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
@@ -19,8 +23,9 @@ class StreamlitCallbackHandler(BaseCallbackHandler):
st.write(prompt)
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
"""Do nothing."""
pass
"""Run on new LLM token. Only available when streaming is enabled."""
self.tokens_stream += token
self.tokens_area.write(self.tokens_stream)
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Do nothing."""

View File

@@ -1,15 +1,46 @@
"""Chain that interprets a prompt and executes bash code to perform bash operations."""
from typing import Dict, List
import logging
import re
from typing import Any, Dict, List
from pydantic import Extra
from pydantic import Extra, Field
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.llm_bash.prompt import PROMPT
from langchain.prompts.base import BasePromptTemplate
from langchain.schema import BaseLanguageModel
from langchain.schema import BaseLanguageModel, BaseOutputParser, OutputParserException
from langchain.utilities.bash import BashProcess
logger = logging.getLogger(__name__)
class BashOutputParser(BaseOutputParser):
"""Parser for bash output."""
def parse(self, text: str) -> List[str]:
if "```bash" in text:
return self.get_code_blocks(text)
else:
raise OutputParserException(
f"Failed to parse bash output. Got: {text}",
)
@staticmethod
def get_code_blocks(t: str) -> List[str]:
"""Get multiple code blocks from the LLM result."""
code_blocks: List[str] = []
# Bash markdown code blocks
pattern = re.compile(r"```bash(.*?)(?:\n\s*)```", re.DOTALL)
for match in pattern.finditer(t):
matched = match.group(1).strip()
if matched:
code_blocks.extend(
[line for line in matched.split("\n") if line.strip()]
)
return code_blocks
class LLMBashChain(Chain):
"""Chain that interprets a prompt and executes bash code to perform bash operations.
@@ -26,6 +57,8 @@ class LLMBashChain(Chain):
input_key: str = "question" #: :meta private:
output_key: str = "answer" #: :meta private:
prompt: BasePromptTemplate = PROMPT
output_parser: BaseOutputParser = Field(default_factory=BashOutputParser)
bash_process: BashProcess = Field(default_factory=BashProcess) #: :meta private:
class Config:
"""Configuration for this pydantic object."""
@@ -51,29 +84,40 @@ class LLMBashChain(Chain):
def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
llm_executor = LLMChain(prompt=self.prompt, llm=self.llm)
bash_executor = BashProcess()
self.callback_manager.on_text(inputs[self.input_key], verbose=self.verbose)
t = llm_executor.predict(question=inputs[self.input_key])
self.callback_manager.on_text(t, color="green", verbose=self.verbose)
t = t.strip()
if t.startswith("```bash"):
# Split the string into a list of substrings
command_list = t.split("\n")
print(command_list)
try:
command_list = self.output_parser.parse(t)
except OutputParserException as e:
self.callback_manager.on_chain_error(e, verbose=self.verbose)
raise e
# Remove the first and last substrings
command_list = [s for s in command_list[1:-1]]
output = bash_executor.run(command_list)
if self.verbose:
self.callback_manager.on_text("\nCode: ", verbose=self.verbose)
self.callback_manager.on_text(
str(command_list), color="yellow", verbose=self.verbose
)
self.callback_manager.on_text("\nAnswer: ", verbose=self.verbose)
self.callback_manager.on_text(output, color="yellow", verbose=self.verbose)
output = self.bash_process.run(command_list)
else:
raise ValueError(f"unknown format from LLM: {t}")
self.callback_manager.on_text("\nAnswer: ", verbose=self.verbose)
self.callback_manager.on_text(output, color="yellow", verbose=self.verbose)
return {self.output_key: output}
@property
def _chain_type(self) -> str:
return "llm_bash_chain"
@classmethod
def from_bash_process(
cls,
bash_process: BashProcess,
llm: BaseLanguageModel,
**kwargs: Any,
) -> "LLMBashChain":
"""Create a LLMBashChain from a BashProcess."""
return cls(llm=llm, bash_process=bash_process, **kwargs)

View File

@@ -26,4 +26,4 @@ services:
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
expose:
- 5432:5432
- 5432

View File

@@ -189,19 +189,8 @@ class ConfluenceLoader(BaseLoader):
"`label`, `cql` parameters."
)
try:
import html2text # type: ignore
except ImportError:
raise ImportError(
"`html2text` package not found, please run `pip install html2text`"
)
docs = []
text_maker = html2text.HTML2Text()
text_maker.ignore_links = True
text_maker.ignore_images = True
if space_key:
pages = self.paginate_request(
self.confluence.get_all_pages_from_space,
@@ -211,9 +200,7 @@ class ConfluenceLoader(BaseLoader):
expand="body.storage.value",
)
for page in pages:
doc = self.process_page(
page, include_attachments, include_comments, text_maker
)
doc = self.process_page(page, include_attachments, include_comments)
docs.append(doc)
if label:
@@ -225,9 +212,7 @@ class ConfluenceLoader(BaseLoader):
expand="body.storage.value",
)
for page in pages:
doc = self.process_page(
page, include_attachments, include_comments, text_maker
)
doc = self.process_page(page, include_attachments, include_comments)
docs.append(doc)
if cql:
@@ -239,9 +224,7 @@ class ConfluenceLoader(BaseLoader):
expand="body.storage.value",
)
for page in pages:
doc = self.process_page(
page, include_attachments, include_comments, text_maker
)
doc = self.process_page(page, include_attachments, include_comments)
docs.append(doc)
if page_ids:
@@ -259,9 +242,7 @@ class ConfluenceLoader(BaseLoader):
before_sleep=before_sleep_log(logger, logging.WARNING),
)(self.confluence.get_page_by_id)
page = get_page(page_id=page_id, expand="body.storage.value")
doc = self.process_page(
page, include_attachments, include_comments, text_maker
)
doc = self.process_page(page, include_attachments, include_comments)
docs.append(doc)
return docs
@@ -313,21 +294,28 @@ class ConfluenceLoader(BaseLoader):
page: dict,
include_attachments: bool,
include_comments: bool,
text_maker: Any,
) -> Document:
try:
from bs4 import BeautifulSoup # type: ignore
except ImportError:
raise ImportError(
"`beautifulsoup4` package not found, please run"
" `pip install beautifulsoup4`"
)
if include_attachments:
attachment_texts = self.process_attachment(page["id"])
else:
attachment_texts = []
text = text_maker.handle(page["body"]["storage"]["value"]) + "".join(
attachment_texts
)
text = BeautifulSoup(
page["body"]["storage"]["value"], "lxml"
).get_text() + "".join(attachment_texts)
if include_comments:
comments = self.confluence.get_page_comments(
page["id"], expand="body.view.value", depth="all"
)["results"]
comment_texts = [
text_maker.handle(comment["body"]["view"]["value"])
BeautifulSoup(comment["body"]["view"]["value"], "lxml").get_text()
for comment in comments
]
text = text + "".join(comment_texts)

View File

@@ -17,6 +17,7 @@ class BSHTMLLoader(BaseLoader):
file_path: str,
open_encoding: Union[str, None] = None,
bs_kwargs: Union[dict, None] = None,
get_text_separator: str = "",
) -> None:
"""Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object."""
@@ -33,6 +34,7 @@ class BSHTMLLoader(BaseLoader):
if bs_kwargs is None:
bs_kwargs = {"features": "lxml"}
self.bs_kwargs = bs_kwargs
self.get_text_separator = get_text_separator
def load(self) -> List[Document]:
from bs4 import BeautifulSoup
@@ -41,7 +43,7 @@ class BSHTMLLoader(BaseLoader):
with open(self.file_path, "r", encoding=self.open_encoding) as f:
soup = BeautifulSoup(f, **self.bs_kwargs)
text = soup.get_text()
text = soup.get_text(self.get_text_separator)
if soup.title:
title = str(soup.title.string)

View File

@@ -1,63 +1,4 @@
"""Wrapper around sentence transformer embedding models."""
from typing import Any, Dict, List, Optional
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from pydantic import BaseModel, Extra, Field, root_validator
from langchain.embeddings.base import Embeddings
class SentenceTransformerEmbeddings(BaseModel, Embeddings):
embedding_function: Any #: :meta private:
model: Optional[str] = Field("all-MiniLM-L6-v2", alias="model")
"""Transformer model to use."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that sentence_transformers library is installed."""
model = values["model"]
try:
from sentence_transformers import SentenceTransformer
values["embedding_function"] = SentenceTransformer(model)
except ImportError:
raise ModuleNotFoundError(
"Could not import sentence_transformers library. "
"Please install the sentence_transformers library to "
"use this embedding model: pip install sentence_transformers"
)
except Exception:
raise NameError(f"Could not load SentenceTransformer model {model}.")
return values
def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Embed a list of documents using the SentenceTransformer model.
Args:
texts: The list of texts to embed.
Returns:
List of embeddings, one for each text.
"""
embeddings = self.embedding_function.encode(
texts, convert_to_numpy=True
).tolist()
return [list(map(float, e)) for e in embeddings]
def embed_query(self, text: str) -> List[float]:
"""Embed a query using the SentenceTransformer model.
Args:
text: The text to embed.
Returns:
Embedding for the text.
"""
return self.embed_documents([text])[0]
SentenceTransformerEmbeddings = HuggingFaceEmbeddings

View File

@@ -0,0 +1,4 @@
"""Chains for evaluating ReAct style agents."""
from langchain.evaluation.agents.trajectory_eval_chain import TrajectoryEvalChain
__all__ = ["TrajectoryEvalChain"]

View File

@@ -0,0 +1,106 @@
"""A chain for evaluating ReAct style agents."""
from typing import Any, Dict, List, NamedTuple, Optional, Sequence, Tuple, Union
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.evaluation.agents.trajectory_eval_prompt import EVAL_CHAT_PROMPT
from langchain.schema import AgentAction, BaseOutputParser, OutputParserException
from langchain.tools.base import BaseTool
class TrajectoryEval(NamedTuple):
score: int
reasoning: str
class TrajectoryOutputParser(BaseOutputParser):
def parse(self, text: str) -> TrajectoryEval:
if "Score:" not in text:
raise OutputParserException(
f"Could not find score in model eval output: {text}"
)
reasoning, score_str = text.split("Score: ")
reasoning, score_str = reasoning.strip(), score_str.strip()
score_str = next(
(char for char in score_str if char.isdigit()), "0"
) # Scan for first digit
if not 1 <= int(score_str) <= 5:
raise OutputParserException(
f"Score is not a digit in the range 1-5: {text}"
)
return TrajectoryEval(score=int(score_str), reasoning=reasoning)
class TrajectoryEvalChain(Chain):
agent_tools: List[BaseTool]
eval_chain: LLMChain
output_parser: TrajectoryOutputParser
return_reasoning: bool = False
@property
def _tools_description(self) -> str:
return "\n\n".join(
[
f"""Tool {i}: {tool.name}
Description: {tool.description}"""
for i, tool in enumerate(self.agent_tools, 1)
]
)
@staticmethod
def get_agent_trajectory(steps: Union[str, List[Tuple[AgentAction, str]]]) -> str:
if isinstance(steps, str):
return steps
return "\n\n".join(
[
f"""Step {i}:
Tool used: {action.tool}
Tool input: {action.tool_input}
Tool output: {output}"""
for i, (action, output) in enumerate(steps, 1)
]
)
@classmethod
def from_llm(
cls,
llm: ChatOpenAI,
agent_tools: Sequence[BaseTool],
output_parser: Optional[TrajectoryOutputParser] = None,
return_reasoning: bool = False,
) -> "TrajectoryEvalChain":
eval_chain = LLMChain(llm=llm, prompt=EVAL_CHAT_PROMPT)
return cls(
agent_tools=agent_tools,
return_reasoning=return_reasoning,
eval_chain=eval_chain,
output_parser=output_parser or TrajectoryOutputParser(),
)
@property
def input_keys(self) -> List[str]:
return ["question", "agent_trajectory", "answer"]
@property
def output_keys(self) -> List[str]:
if self.return_reasoning:
return ["score", "reasoning"]
return ["score"]
def _call(self, inputs: Dict[str, str]) -> Dict[str, Any]:
raw_output = self.eval_chain.run(
{"tool_descriptions": self._tools_description, **inputs}
)
parsed_output = self.output_parser.parse(raw_output)
if self.return_reasoning:
return {"score": parsed_output.score, "reasoning": parsed_output.reasoning}
return {"score": parsed_output.score}

View File

@@ -0,0 +1,98 @@
"""Prompt for trajectory evaluation chain."""
# flake8: noqa
from langchain.schema import AIMessage
from langchain.schema import HumanMessage
from langchain.schema import SystemMessage
from langchain.prompts.chat import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
)
EVAL_TEMPLATE = """An AI language model has been given access to the following set of tools to help answer a user's question.
The tools given to the AI model are:
{tool_descriptions}
The question the human asked the AI model was: {question}
The AI language model decided to use the following set of tools to answer the question:
{agent_trajectory}
The AI language model's final answer to the question was: {answer}
Let's do a detailed evaluation of the AI language model's answer step by step.
We consider the following criteria before giving a score from 1 to 5:
i. Is the final answer helpful?
ii. Does the AI language model use a logical sequence of tools to answer the question?
iii. Does the AI language model use the tools in a helpful way?
iv. Does the AI language model use too many steps to answer the question?
v. Are the appropriate tools used to answer the question?"""
EXAMPLE_INPUT = """An AI language model has been given acces to the following set of tools to help answer a user's question.
The tools given to the AI model are:
Tool 1:
Name: Search
Description: useful for when you need to ask with search
Tool 2:
Name: Lookup
Description: useful for when you need to ask with lookup
Tool 3:
Name: Calculator
Description: useful for doing calculations
Tool 4:
Name: Search the Web (SerpAPI)
Description: useful for when you need to answer questions about current events
The question the human asked the AI model was: If you laid the Statue of Liberty end to end, how many times would it stretch across the United States?
The AI language model decided to use the following set of tools to answer the question:
Step 1:
Tool used: Search the Web (SerpAPI)
Tool input: If you laid the Statue of Liberty end to end, how many times would it stretch across the United States?
Tool output: The Statue of Liberty was given to the United States by France, as a symbol of the two countries' friendship. It was erected atop an American-designed ...
The AI language model's final answer to the question was: There are different ways to measure the length of the United States, but if we use the distance between the Statue of Liberty and the westernmost point of the contiguous United States (Cape Alava, Washington), which is approximately 2,857 miles (4,596 km), and assume that the Statue of Liberty is 305 feet (93 meters) tall, then the statue would stretch across the United States approximately 17.5 times if laid end to end.
Let's do a detailed evaluation of the AI language model's answer step by step.
We consider the following criteria before giving a score from 1 to 5:
i. Is the final answer helpful?
ii. Does the AI language model use a logical sequence of tools to answer the question?
iii. Does the AI language model use the tools in a helpful way?
iv. Does the AI language model use too many steps to answer the question?
v. Are the appropriate tools used to answer the question?"""
EXAMPLE_OUTPUT = """First, let's evaluate the final answer. The final uses good reasoning but is wrong. 2,857 divided by 305 is not 17.5.\
The model should have used the calculator to figure this out. Second does the model use a logical sequence of tools to answer the question?\
The way model uses the search is not helpful. The model should have used the search tool to figure the width of the US or the height of the statue.\
The model didn't use the calculator tool and gave an incorrect answer. The search API should be used for current events or specific questions.\
The tools were not used in a helpful way. The model did not use too many steps to answer the question.\
The model did not use the appropriate tools to answer the question.\
Judgment: Given the good reasoning in the final answer but otherwise poor performance, we give the model a score of 2.
Score: 2"""
EVAL_CHAT_PROMPT = ChatPromptTemplate.from_messages(
messages=[
SystemMessage(
content="You are a helpful assistant that evaluates language models."
),
HumanMessage(content=EXAMPLE_INPUT),
AIMessage(content=EXAMPLE_OUTPUT),
HumanMessagePromptTemplate.from_template(EVAL_TEMPLATE),
]
)

View File

@@ -103,6 +103,6 @@ class Replicate(LLM):
first_input_name = input_properties[0][0]
inputs = {first_input_name: prompt, **self.input}
iterator = replicate_python.run(self.model, input={**inputs})
outputs = replicate_python.run(self.model, input={**inputs})
return outputs[0]
return "".join([output for output in iterator])

View File

@@ -41,21 +41,10 @@ class AgentAction(NamedTuple):
"""Agent's action to take."""
tool: str
tool_input: str
tool_input: Union[str, dict]
log: str
class StructuredAgentAction(NamedTuple):
"""Agent's action to take."""
tool: str
tool_input: dict
log: str
def to_agent_action(self) -> AgentAction:
return AgentAction(self.tool, str(self.tool_input), self.log)
class AgentFinish(NamedTuple):
"""Agent's return value."""

View File

@@ -1,19 +1,27 @@
"""Core toolkit implementations."""
from langchain.tools.base import BaseTool
from langchain.tools.ddg_search.tool import DuckDuckGoSearchTool
from langchain.tools.bing_search.tool import BingSearchResults, BingSearchRun
from langchain.tools.ddg_search.tool import DuckDuckGoSearchResults, DuckDuckGoSearchRun
from langchain.tools.google_places.tool import GooglePlacesTool
from langchain.tools.google_search.tool import GoogleSearchResults, GoogleSearchRun
from langchain.tools.ifttt import IFTTTWebhook
from langchain.tools.openapi.utils.api_models import APIOperation
from langchain.tools.openapi.utils.openapi_utils import OpenAPISpec
from langchain.tools.plugin import AIPluginTool
__all__ = [
"BaseTool",
"IFTTTWebhook",
"AIPluginTool",
"OpenAPISpec",
"APIOperation",
"BingSearchResults",
"BingSearchRun",
"DuckDuckGoSearchResults",
"DuckDuckGoSearchRun",
"DuckDuckGoSearchRun",
"GooglePlacesTool",
"DuckDuckGoSearchTool",
"GoogleSearchResults",
"GoogleSearchRun",
"IFTTTWebhook",
"OpenAPISpec",
"BaseTool",
]

View File

@@ -1,57 +1,242 @@
"""Base implementation for tools or skills."""
from __future__ import annotations
from abc import ABC
from typing import Any, Dict, Type, Union
from abc import ABC, abstractmethod
from inspect import signature
from typing import Any, Callable, Dict, Optional, Sequence, Tuple, Type, Union
from pydantic import (
BaseModel,
Extra,
Field,
create_model,
validate_arguments,
validator,
)
from pydantic.main import ModelMetaclass
from langchain.callbacks import get_callback_manager
from langchain.callbacks.base import BaseCallbackManager
from langchain.tools.structured import BaseStructuredTool
def _to_args_and_kwargs(run_input: Union[str, Dict]) -> Tuple[Sequence, dict]:
# For backwards compatibility, if run_input is a string,
# pass as a positional argument.
if isinstance(run_input, str):
return (run_input,), {}
else:
return [], run_input
class BaseTool(ABC, BaseStructuredTool[str, str]):
class SchemaAnnotationError(TypeError):
"""Raised when 'args_schema' is missing or has an incorrect type annotation."""
class ToolMetaclass(ModelMetaclass):
"""Metaclass for BaseTool to ensure the provided args_schema
isn't silently ignored."""
def __new__(
cls: Type[ToolMetaclass], name: str, bases: Tuple[Type, ...], dct: dict
) -> ToolMetaclass:
"""Create the definition of the new tool class."""
schema_type: Optional[Type[BaseModel]] = dct.get("args_schema")
if schema_type is not None:
schema_annotations = dct.get("__annotations__", {})
args_schema_type = schema_annotations.get("args_schema", None)
if args_schema_type is None or args_schema_type == BaseModel:
# Throw errors for common mis-annotations.
# TODO: Use get_args / get_origin and fully
# specify valid annotations.
typehint_mandate = """
class ChildTool(BaseTool):
...
args_schema: Type[BaseModel] = SchemaClass
..."""
raise SchemaAnnotationError(
f"Tool definition for {name} must include valid type annotations"
f" for argument 'args_schema' to behave as expected.\n"
f"Expected annotation of 'Type[BaseModel]'"
f" but got '{args_schema_type}'.\n"
f"Expected class looks like:\n"
f"{typehint_mandate}"
)
# Pass through to Pydantic's metaclass
return super().__new__(cls, name, bases, dct)
def _create_subset_model(
name: str, model: BaseModel, field_names: list
) -> Type[BaseModel]:
"""Create a pydantic model with only a subset of model's fields."""
fields = {
field_name: (
model.__fields__[field_name].type_,
model.__fields__[field_name].default,
)
for field_name in field_names
if field_name in model.__fields__
}
return create_model(name, **fields) # type: ignore
def get_filtered_args(inferred_model: Type[BaseModel], func: Callable) -> dict:
"""Get the arguments from a function's signature."""
schema = inferred_model.schema()["properties"]
valid_keys = signature(func).parameters
return {k: schema[k] for k in valid_keys}
def create_schema_from_function(model_name: str, func: Callable) -> Type[BaseModel]:
"""Create a pydantic schema from a function's signature."""
inferred_model = validate_arguments(func).model # type: ignore
# Pydantic adds placeholder virtual fields we need to strip
filtered_args = get_filtered_args(inferred_model, func)
return _create_subset_model(
f"{model_name}Schema", inferred_model, list(filtered_args)
)
class BaseTool(ABC, BaseModel, metaclass=ToolMetaclass):
"""Interface LangChain tools must implement."""
args_schema: Type[str] = str # :meta private:
name: str
description: str
args_schema: Optional[Type[BaseModel]] = None
"""Pydantic model class to validate and parse the tool's input arguments."""
return_direct: bool = False
verbose: bool = False
callback_manager: BaseCallbackManager = Field(default_factory=get_callback_manager)
def _parse_input(self, tool_input: Dict) -> str:
"""Load the tool's input into a pydantic model."""
if len(tool_input) == 1:
# Make base tools more forwards compatible
result = next(iter(tool_input.values()))
if not isinstance(result, str):
raise ValueError(
f"Tool input {tool_input} must be a single string or dict."
)
return result
raise ValueError(f"Tool input {tool_input} must be a single string or dict.")
class Config:
"""Configuration for this pydantic object."""
def _wrap_input(self, tool_input: Union[str, Dict]) -> Dict:
"""Wrap the tool's input into a pydantic model."""
if isinstance(tool_input, str):
return {"tool_input": tool_input}
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def args(self) -> dict:
if self.args_schema is not None:
return self.args_schema.schema()["properties"]
else:
return tool_input
inferred_model = validate_arguments(self._run).model # type: ignore
return get_filtered_args(inferred_model, self._run)
def _parse_input(
self,
tool_input: Union[str, Dict],
) -> None:
"""Convert tool input to pydantic model."""
input_args = self.args_schema
if isinstance(tool_input, str):
if input_args is not None:
key_ = next(iter(input_args.__fields__.keys()))
input_args.validate({key_: tool_input})
else:
if input_args is not None:
input_args.validate(tool_input)
@validator("callback_manager", pre=True, always=True)
def set_callback_manager(
cls, callback_manager: Optional[BaseCallbackManager]
) -> BaseCallbackManager:
"""If callback manager is None, set it.
This allows users to pass in None as callback manager, which is a nice UX.
"""
return callback_manager or get_callback_manager()
@abstractmethod
def _run(self, *args: Any, **kwargs: Any) -> str:
"""Use the tool."""
@abstractmethod
async def _arun(self, *args: Any, **kwargs: Any) -> str:
"""Use the tool asynchronously."""
def run(
self,
tool_input: Union[str, Dict],
verbose: bool | None = None,
start_color: str | None = "green",
color: str | None = "green",
verbose: Optional[bool] = None,
start_color: Optional[str] = "green",
color: Optional[str] = "green",
**kwargs: Any,
) -> str:
"""Use the tool."""
wrapped_input = self._wrap_input(tool_input)
return super().run(wrapped_input, verbose, start_color, color, **kwargs)
"""Run the tool."""
self._parse_input(tool_input)
if not self.verbose and verbose is not None:
verbose_ = verbose
else:
verbose_ = self.verbose
self.callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
tool_input if isinstance(tool_input, str) else str(tool_input),
verbose=verbose_,
color=start_color,
**kwargs,
)
try:
tool_args, tool_kwargs = _to_args_and_kwargs(tool_input)
observation = self._run(*tool_args, **tool_kwargs)
except (Exception, KeyboardInterrupt) as e:
self.callback_manager.on_tool_error(e, verbose=verbose_)
raise e
self.callback_manager.on_tool_end(
observation, verbose=verbose_, color=color, name=self.name, **kwargs
)
return observation
async def arun(
self,
tool_input: Union[str, Dict],
verbose: bool | None = None,
start_color: str | None = "green",
color: str | None = "green",
verbose: Optional[bool] = None,
start_color: Optional[str] = "green",
color: Optional[str] = "green",
**kwargs: Any,
) -> str:
"""Use the tool asynchronously."""
wrapped_input = self._wrap_input(tool_input)
return await super().arun(wrapped_input, verbose, start_color, color, **kwargs)
"""Run the tool asynchronously."""
self._parse_input(tool_input)
if not self.verbose and verbose is not None:
verbose_ = verbose
else:
verbose_ = self.verbose
if self.callback_manager.is_async:
await self.callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
tool_input if isinstance(tool_input, str) else str(tool_input),
verbose=verbose_,
color=start_color,
**kwargs,
)
else:
self.callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
tool_input if isinstance(tool_input, str) else str(tool_input),
verbose=verbose_,
color=start_color,
**kwargs,
)
try:
# We then call the tool on the tool input to get an observation
args, kwargs = _to_args_and_kwargs(tool_input)
observation = await self._arun(*args, **kwargs)
except (Exception, KeyboardInterrupt) as e:
if self.callback_manager.is_async:
await self.callback_manager.on_tool_error(e, verbose=verbose_)
else:
self.callback_manager.on_tool_error(e, verbose=verbose_)
raise e
if self.callback_manager.is_async:
await self.callback_manager.on_tool_end(
observation, verbose=verbose_, color=color, name=self.name, **kwargs
)
else:
self.callback_manager.on_tool_end(
observation, verbose=verbose_, color=color, name=self.name, **kwargs
)
return observation
def __call__(self, tool_input: str) -> str:
"""Make tool callable."""
return self.run(tool_input)
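For orientation, a minimal sketch (not part of the diff) of a subclass that satisfies the new contract: args_schema carries an explicit Type[BaseModel] annotation, so ToolMetaclass raises no SchemaAnnotationError, and string inputs reach _run positionally while dict inputs arrive as keyword arguments. EchoInput and EchoTool are hypothetical names.
from typing import Type
from pydantic import BaseModel

class EchoInput(BaseModel):
    text: str

class EchoTool(BaseTool):
    name = "echo"
    description = "Repeat the input text back to the caller."
    args_schema: Type[BaseModel] = EchoInput  # annotated, so the metaclass check passes

    def _run(self, text: str) -> str:
        return text

    async def _arun(self, text: str) -> str:
        return text

tool = EchoTool()
tool.run("hello")            # str -> passed as a positional argument
tool.run({"text": "hello"})  # dict -> splatted into keyword arguments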

View File

@@ -22,3 +22,24 @@ class BingSearchRun(BaseTool):
async def _arun(self, query: str) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("BingSearchRun does not support async")
class BingSearchResults(BaseTool):
"""Tool that has capability to query the Bing Search API and get back json."""
name = "Bing Search Results JSON"
description = (
"A wrapper around Bing Search. "
"Useful for when you need to answer questions about current events. "
"Input should be a search query. Output is a JSON array of the query results"
)
num_results: int = 4
api_wrapper: BingSearchAPIWrapper
def _run(self, query: str) -> str:
"""Use the tool."""
return str(self.api_wrapper.results(query, self.num_results))
async def _arun(self, query: str) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("BingSearchResults does not support async")

View File

@@ -1,5 +1,5 @@
"""DuckDuckGo Search API toolkit."""
from langchain.tools.ddg_search.tool import DuckDuckGoSearchTool
from langchain.tools.ddg_search.tool import DuckDuckGoSearchRun
__all__ = ["DuckDuckGoSearchTool"]
__all__ = ["DuckDuckGoSearchRun"]

View File

@@ -1,12 +1,15 @@
"""Tool for the DuckDuckGo search API."""
import warnings
from typing import Any
from pydantic import Field
from langchain.tools.base import BaseTool
from langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper
class DuckDuckGoSearchTool(BaseTool):
class DuckDuckGoSearchRun(BaseTool):
"""Tool that adds the capability to query the DuckDuckGo search API."""
name = "DuckDuckGo Search"
@@ -26,3 +29,35 @@ class DuckDuckGoSearchTool(BaseTool):
async def _arun(self, query: str) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("DuckDuckGoSearch does not support async")
class DuckDuckGoSearchResults(BaseTool):
"""Tool that queries the Duck Duck Go Search API and get back json."""
name = "DuckDuckGo Results JSON"
description = (
"A wrapper around Duck Duck Go Search. "
"Useful for when you need to answer questions about current events. "
"Input should be a search query. Output is a JSON array of the query results"
)
num_results: int = 4
api_wrapper: DuckDuckGoSearchAPIWrapper = Field(
default_factory=DuckDuckGoSearchAPIWrapper
)
def _run(self, query: str) -> str:
"""Use the tool."""
return str(self.api_wrapper.results(query, self.num_results))
async def _arun(self, query: str) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("DuckDuckGoSearchResults does not support async")
def DuckDuckGoSearchTool(*args: Any, **kwargs: Any) -> DuckDuckGoSearchRun:
warnings.warn(
"DuckDuckGoSearchTool will be deprecated in the future. "
"Please use DuckDuckGoSearchRun instead.",
DeprecationWarning,
)
return DuckDuckGoSearchRun(*args, **kwargs)
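The migration path in practice: the old constructor still works, but it now returns the renamed tool and emits a DeprecationWarning. A small sketch:
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    tool = DuckDuckGoSearchTool()  # legacy alias
assert isinstance(tool, DuckDuckGoSearchRun)
assert any(issubclass(w.category, DeprecationWarning) for w in caught)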

View File

@@ -3,30 +3,30 @@ from typing import Optional, Type
from pydantic import BaseModel, Field
from langchain.tools.base import BaseTool
from langchain.tools.file_management.utils import get_validated_relative_path
from langchain.tools.structured import BaseStructuredTool
class ReadFileInput(BaseModel):
"""Input for ReadFileTool."""
file_path: Path = Field(..., description="name of file")
file_path: str = Field(..., description="name of file")
class ReadFileTool(BaseStructuredTool[ReadFileInput, str]):
class ReadFileTool(BaseTool):
name: str = "read_file"
args_schema: Type[ReadFileInput] = ReadFileInput
args_schema: Type[BaseModel] = ReadFileInput
description: str = "Read file from disk"
root_dir: Optional[str] = None
"""Directory to read file from.
If specified, raises an error for file_paths outside root_dir."""
def _run(self, tool_input: ReadFileInput) -> str:
def _run(self, file_path: str) -> str:
read_path = (
get_validated_relative_path(Path(self.root_dir), tool_input.file_path)
get_validated_relative_path(Path(self.root_dir), file_path)
if self.root_dir
else tool_input.file_path
else Path(file_path)
)
try:
with read_path.open("r", encoding="utf-8") as f:
@@ -35,6 +35,6 @@ class ReadFileTool(BaseStructuredTool[ReadFileInput, str]):
except Exception as e:
return "Error: " + str(e)
async def _arun(self, tool_input: ReadFileInput) -> str:
async def _arun(self, tool_input: str) -> str:
# TODO: Add aiofiles method
raise NotImplementedError
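A hypothetical round trip with the flattened signature: run() now takes a plain dict whose keys match _run's parameters, instead of a pre-built ReadFileInput instance. The paths below are illustrative only.
tool = ReadFileTool(root_dir="/tmp/sandbox")
contents = tool.run({"file_path": "notes.txt"})  # reads /tmp/sandbox/notes.txt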

View File

@@ -1,6 +1,5 @@
import sys
from pathlib import Path
from typing import Union
def is_relative_to(path: Path, root: Path) -> bool:
@@ -15,7 +14,7 @@ def is_relative_to(path: Path, root: Path) -> bool:
return False
def get_validated_relative_path(root: Path, user_path: Union[str, Path]) -> Path:
def get_validated_relative_path(root: Path, user_path: str) -> Path:
"""Resolve a relative path, raising an error if not within the root directory."""
# Note, this still permits symlinks from outside that point within the root.
# Further validation would be needed if those are to be disallowed.
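A behavioural sketch of the validator, assuming (per its docstring) that it raises when the resolved path escapes the root; the paths are illustrative.
from pathlib import Path

root = Path("/tmp/sandbox")
get_validated_relative_path(root, "docs/readme.md")  # ok: resolves under root
get_validated_relative_path(root, "../etc/passwd")   # raises: escapes root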

View File

@@ -3,40 +3,40 @@ from typing import Optional, Type
from pydantic import BaseModel, Field
from langchain.tools.base import BaseTool
from langchain.tools.file_management.utils import get_validated_relative_path
from langchain.tools.structured import BaseStructuredTool
class WriteFileInput(BaseModel):
"""Input for WriteFileTool."""
file_path: Path = Field(..., description="name of file")
file_path: str = Field(..., description="name of file")
text: str = Field(..., description="text to write to file")
class WriteFileTool(BaseStructuredTool[WriteFileInput, str]):
class WriteFileTool(BaseTool):
name: str = "write_file"
args_schema: Type[WriteFileInput] = WriteFileInput
args_schema: Type[BaseModel] = WriteFileInput
description: str = "Write file to disk"
root_dir: Optional[str] = None
"""Directory to write file to.
If specified, raises an error for file_paths outside root_dir."""
def _run(self, tool_input: WriteFileInput) -> str:
def _run(self, file_path: str, text: str) -> str:
write_path = (
get_validated_relative_path(Path(self.root_dir), tool_input.file_path)
get_validated_relative_path(Path(self.root_dir), file_path)
if self.root_dir
else tool_input.file_path
else Path(file_path)
)
try:
write_path.parent.mkdir(exist_ok=True, parents=False)
with write_path.open("w", encoding="utf-8") as f:
f.write(tool_input.text)
return f"File written successfully to {tool_input.file_path}."
f.write(text)
return f"File written successfully to {file_path}."
except Exception as e:
return "Error: " + str(e)
async def _arun(self, tool_input: WriteFileInput) -> str:
async def _arun(self, file_path: str, text: str) -> str:
# TODO: Add aiofiles method
raise NotImplementedError
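A companion sketch to the ReadFileTool example above, using the new (file_path, text) keyword form; paths are illustrative.
tool = WriteFileTool(root_dir="/tmp/sandbox")
tool.run({"file_path": "notes.txt", "text": "hello"})
# -> "File written successfully to notes.txt."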

View File

@@ -127,11 +127,11 @@ class ListPowerBITool(BaseTool):
arbitrary_types_allowed = True
def _run(self, tool_input: str = "") -> str:
def _run(self, *args: Any, **kwargs: Any) -> str:
"""Get the names of the tables."""
return ", ".join(self.powerbi.get_table_names())
async def _arun(self, tool_input: str = "") -> str:
async def _arun(self, *args: Any, **kwargs: Any) -> str:
"""Get the names of the tables."""
return ", ".join(self.powerbi.get_table_names())

View File

@@ -14,6 +14,11 @@ def _parse_input(text: str) -> Dict[str, Any]:
return json.loads(text)
def _clean_url(url: str) -> str:
"""Strips quotes from the url."""
return url.strip("\"'")
class BaseRequestsTool(BaseModel):
"""Base class for requests tools."""
@@ -28,11 +33,11 @@ class RequestsGetTool(BaseRequestsTool, BaseTool):
def _run(self, url: str) -> str:
"""Run the tool."""
return self.requests_wrapper.get(url)
return self.requests_wrapper.get(_clean_url(url))
async def _arun(self, url: str) -> str:
"""Run the tool asynchronously."""
return await self.requests_wrapper.aget(url)
return await self.requests_wrapper.aget(_clean_url(url))
class RequestsPostTool(BaseRequestsTool, BaseTool):
@@ -51,7 +56,7 @@ class RequestsPostTool(BaseRequestsTool, BaseTool):
"""Run the tool."""
try:
data = _parse_input(text)
return self.requests_wrapper.post(data["url"], data["data"])
return self.requests_wrapper.post(_clean_url(data["url"]), data["data"])
except Exception as e:
return repr(e)
@@ -59,7 +64,9 @@ class RequestsPostTool(BaseRequestsTool, BaseTool):
"""Run the tool asynchronously."""
try:
data = _parse_input(text)
return await self.requests_wrapper.apost(data["url"], data["data"])
return await self.requests_wrapper.apost(
_clean_url(data["url"]), data["data"]
)
except Exception as e:
return repr(e)
@@ -80,7 +87,7 @@ class RequestsPatchTool(BaseRequestsTool, BaseTool):
"""Run the tool."""
try:
data = _parse_input(text)
return self.requests_wrapper.patch(data["url"], data["data"])
return self.requests_wrapper.patch(_clean_url(data["url"]), data["data"])
except Exception as e:
return repr(e)
@@ -88,7 +95,9 @@ class RequestsPatchTool(BaseRequestsTool, BaseTool):
"""Run the tool asynchronously."""
try:
data = _parse_input(text)
return await self.requests_wrapper.apatch(data["url"], data["data"])
return await self.requests_wrapper.apatch(
_clean_url(data["url"]), data["data"]
)
except Exception as e:
return repr(e)
@@ -109,7 +118,7 @@ class RequestsPutTool(BaseRequestsTool, BaseTool):
"""Run the tool."""
try:
data = _parse_input(text)
return self.requests_wrapper.put(data["url"], data["data"])
return self.requests_wrapper.put(_clean_url(data["url"]), data["data"])
except Exception as e:
return repr(e)
@@ -117,7 +126,9 @@ class RequestsPutTool(BaseRequestsTool, BaseTool):
"""Run the tool asynchronously."""
try:
data = _parse_input(text)
return await self.requests_wrapper.aput(data["url"], data["data"])
return await self.requests_wrapper.aput(
_clean_url(data["url"]), data["data"]
)
except Exception as e:
return repr(e)
@@ -130,8 +141,8 @@ class RequestsDeleteTool(BaseRequestsTool, BaseTool):
def _run(self, url: str) -> str:
"""Run the tool."""
return self.requests_wrapper.delete(url)
return self.requests_wrapper.delete(_clean_url(url))
async def _arun(self, url: str) -> str:
"""Run the tool asynchronously."""
return await self.requests_wrapper.adelete(url)
return await self.requests_wrapper.adelete(_clean_url(url))
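Why _clean_url matters: LLMs frequently wrap URLs in quotes in their tool input, which the underlying HTTP client then rejects. The behaviour follows directly from str.strip('"\''):
assert _clean_url('"https://example.com"') == "https://example.com"
assert _clean_url("'https://example.com'") == "https://example.com"
assert _clean_url("https://example.com") == "https://example.com"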

View File

@@ -1,344 +0,0 @@
from __future__ import annotations
from abc import abstractmethod
from functools import partial
from inspect import signature
from typing import (
Any,
Awaitable,
Callable,
Dict,
Generic,
Optional,
Type,
TypeVar,
Union,
)
from pydantic import (
BaseModel,
Extra,
Field,
create_model,
validate_arguments,
validator,
)
from pydantic.generics import GenericModel
from langchain.callbacks import get_callback_manager
from langchain.callbacks.base import BaseCallbackManager
from langchain.utilities.async_utils import async_or_sync_call
class SchemaAnnotationError(TypeError):
"""Raised when 'args_schema' is missing or has an incorrect type annotation."""
SCHEMA_T = TypeVar("SCHEMA_T", bound=Union[str, BaseModel])
OUTPUT_T = TypeVar("OUTPUT_T")
class BaseStructuredTool(
GenericModel,
Generic[SCHEMA_T, OUTPUT_T],
BaseModel,
):
"""Parent class for all structured tools."""
name: str
description: str
return_direct: bool = False
verbose: bool = False
callback_manager: BaseCallbackManager = Field(default_factory=get_callback_manager)
args_schema: Type[SCHEMA_T] # :meta private:
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def args(self) -> Dict:
if isinstance(self.args_schema, BaseModel):
return self.args_schema.schema()["properties"]
else:
return {"tool_input": "str"}
def _parse_input(self, tool_input: Dict) -> SCHEMA_T:
"""Load the tool's input into a pydantic model."""
if not issubclass(self.args_schema, BaseModel):
raise ValueError(
f"Tool with args_schema of type {self.args_schema} must overwrite _parse_input."
)
# Ignore type because mypy doesn't connect the subclass to the generic SCHEMA_T
return self.args_schema.parse_obj(tool_input) # type: ignore
def _get_verbosity(
self,
verbose: Optional[bool] = None,
) -> bool:
if not self.verbose and verbose is not None:
verbose_ = verbose
else:
verbose_ = self.verbose
return verbose_
@abstractmethod
def _run(self, input_: SCHEMA_T) -> OUTPUT_T:
"""Use the tool."""
@abstractmethod
async def _arun(self, input_: SCHEMA_T) -> OUTPUT_T:
"""Use the tool asynchronously."""
def run(
self,
tool_input: dict,
verbose: Optional[bool] = None,
start_color: Optional[str] = "green",
color: Optional[str] = "green",
**kwargs: Any,
) -> OUTPUT_T:
"""Run the tool."""
parsed_input = self._parse_input(tool_input)
verbose_ = self._get_verbosity(verbose)
self.callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
str(tool_input),
verbose=verbose_,
color=start_color,
**kwargs,
)
try:
observation = self._run(parsed_input)
except (Exception, KeyboardInterrupt) as e:
self.callback_manager.on_tool_error(e, verbose=verbose_)
raise e
self.callback_manager.on_tool_end(
str(observation), verbose=verbose_, color=color, name=self.name, **kwargs
)
return observation
async def arun(
self,
tool_input: dict,
verbose: Optional[bool] = None,
start_color: Optional[str] = "green",
color: Optional[str] = "green",
**kwargs: Any,
) -> OUTPUT_T:
"""Run the tool asynchronously."""
parsed_input = self._parse_input(tool_input)
verbose_ = self._get_verbosity(verbose)
await async_or_sync_call(
self.callback_manager.on_tool_start,
{"name": self.name, "description": self.description},
str(parsed_input),
verbose=verbose_,
color=start_color,
is_async=self.callback_manager.is_async,
**kwargs,
)
try:
# We then call the tool on the tool input to get an observation
observation = await self._arun(parsed_input)
except (Exception, KeyboardInterrupt) as e:
await async_or_sync_call(
self.callback_manager.on_tool_error,
e,
verbose=verbose_,
is_async=self.callback_manager.is_async,
)
raise e
await async_or_sync_call(
self.callback_manager.on_tool_end,
str(observation),
verbose=verbose_,
color=color,
is_async=self.callback_manager.is_async,
**kwargs,
)
return observation
def __call__(self, tool_input: dict) -> OUTPUT_T:
"""Make tool callable."""
return self.run(tool_input)
def _create_subset_model(
name: str, model: BaseModel, field_names: list
) -> Type[BaseModel]:
"""Create a pydantic model with only a subset of model's fields."""
fields = {
field_name: (
model.__fields__[field_name].type_,
model.__fields__[field_name].default,
)
for field_name in field_names
if field_name in model.__fields__
}
return create_model(name, **fields) # type: ignore
def get_filtered_args(inferred_model: Type[BaseModel], func: Callable) -> dict:
"""Get the arguments from a function's signature."""
schema = inferred_model.schema()["properties"]
valid_keys = signature(func).parameters
return {k: schema[k] for k in valid_keys}
def create_schema_from_function(model_name: str, func: Callable) -> Type[BaseModel]:
"""Create a pydantic schema from a function's signature."""
inferred_model = validate_arguments(func).model # type: ignore
# Pydantic adds placeholder virtual fields we need to strip
filtered_args = get_filtered_args(inferred_model, func)
return _create_subset_model(
f"{model_name}Schema", inferred_model, list(filtered_args)
)
class StructuredTool(BaseStructuredTool[BaseModel, Any]):
"""StructuredTool that takes in function or coroutine directly."""
func: Callable[..., Any]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
args_schema: Type[BaseModel] # :meta private:
@validator("func", pre=True, always=True)
def validate_func_not_partial(cls, func: Callable) -> Callable:
"""Check that the function is not a partial."""
if isinstance(func, partial):
raise ValueError("Partial functions not yet supported in structured tools.")
return func
@property
def args(self) -> dict:
if self.args_schema is not None:
return self.args_schema.schema()["properties"]
else:
inferred_model = validate_arguments(self.func).model # type: ignore
return get_filtered_args(inferred_model, self.func)
def _run(self, tool_input: BaseModel) -> Any:
"""Use the tool."""
return self.func(**tool_input.dict())
async def _arun(self, tool_input: BaseModel) -> Any:
"""Use the tool asynchronously."""
if self.coroutine:
return await self.coroutine(**tool_input.dict())
raise NotImplementedError(f"StructuredTool {self.name} does not support async")
@classmethod
def from_function(
cls,
func: Callable[..., Any],
coroutine: Optional[Callable[..., Awaitable[Any]]] = None,
return_direct: bool = False,
args_schema: Optional[Type[BaseModel]] = None,
infer_schema: bool = True,
name: Optional[str] = None,
description: Optional[str] = None,
) -> "StructuredTool":
"""Make tools out of functions, can be used with or without arguments.
Args:
func: The function to run when the tool is called.
coroutine: The asynchronous version of the function.
return_direct: Whether to return directly from the tool rather
than continuing the agent loop.
args_schema: optional argument schema for user to specify
infer_schema: Whether to infer the schema of the arguments from
the function's signature. This also makes the resultant tool
accept a dictionary input to its `run()` function.
name: The name of the tool. Defaults to the function name.
description: The description of the tool. Defaults to the function
docstring.
"""
description = func.__doc__ or description
if description is None or not description.strip():
raise ValueError(
f"Function {func.__name__} must have a docstring, or set description."
)
name = name or func.__name__
_args_schema = args_schema
if _args_schema is None and infer_schema:
_args_schema = create_schema_from_function(f"{name}Schema", func)
description = f"{name}{signature(func)} - {description}"
return cls(
name=name,
func=func,
coroutine=coroutine,
return_direct=return_direct,
args_schema=_args_schema,
description=description,
)
def structured_tool(
*args: Union[str, Callable],
return_direct: bool = False,
args_schema: Optional[Type[BaseModel]] = None,
infer_schema: bool = True,
) -> Callable:
"""Make tools out of functions, can be used with or without arguments.
Args:
*args: The arguments to the tool.
return_direct: Whether to return directly from the tool rather
than continuing the agent loop.
args_schema: optional argument schema for user to specify
infer_schema: Whether to infer the schema of the arguments from
the function's signature. This also makes the resultant tool
accept a dictionary input to its `run()` function.
Requires:
- Function must be of type (str) -> str
- Function must have a docstring
Examples:
.. code-block:: python
@tool
def search_api(query: str) -> str:
# Searches the API for the query.
return
@tool("search", return_direct=True)
def search_api(query: str) -> str:
# Searches the API for the query.
return
"""
def _make_with_name(tool_name: str) -> Callable:
def _make_tool(func: Callable) -> StructuredTool:
return StructuredTool.from_function(
name=tool_name,
func=func,
args_schema=args_schema,
return_direct=return_direct,
infer_schema=infer_schema,
)
return _make_tool
if len(args) == 1 and isinstance(args[0], str):
# if the argument is a string, then we use the string as the tool name
# Example usage: @tool("search", return_direct=True)
return _make_with_name(args[0])
elif len(args) == 1 and callable(args[0]):
# if the argument is a function, then we use the function name as the tool name
# Example usage: @tool
return _make_with_name(args[0].__name__)(args[0])
elif len(args) == 0:
# if there are no arguments, then we use the function name as the tool name
# Example usage: @tool(return_direct=True)
def _partial(func: Callable[[str], str]) -> BaseStructuredTool:
return _make_with_name(func.__name__)(func)
return _partial
else:
raise ValueError("Too many arguments for tool decorator")

View File

@@ -1,12 +0,0 @@
"""Async utilities."""
from typing import Any, Callable
async def async_or_sync_call(
method: Callable, *args: Any, is_async: bool, **kwargs: Any
) -> Any:
"""Run the callback manager method asynchronously or synchronously."""
if is_async:
return await method(*args, **kwargs)
else:
return method(*args, **kwargs)

View File

@@ -1,24 +1,59 @@
"""Wrapper around subprocess to run commands."""
import re
import subprocess
from typing import List, Union
from uuid import uuid4
import pexpect
class BashProcess:
"""Executes bash commands and returns the output."""
def __init__(self, strip_newlines: bool = False, return_err_output: bool = False):
def __init__(
self,
strip_newlines: bool = False,
return_err_output: bool = False,
persistent: bool = False,
):
"""Initialize with stripping newlines."""
self.strip_newlines = strip_newlines
self.return_err_output = return_err_output
self.prompt = ""
self.process = None
if persistent:
self.prompt = str(uuid4())
self.process = self._initialize_persistent_process(self.prompt)
@staticmethod
def _initialize_persistent_process(prompt: str) -> pexpect.spawn:
# Start bash in a clean environment
process = pexpect.spawn(
"env", ["-i", "bash", "--norc", "--noprofile"], encoding="utf-8"
)
# Set the custom prompt
process.sendline("PS1=" + prompt)
process.expect_exact(prompt, timeout=10)
return process
def run(self, commands: Union[str, List[str]]) -> str:
"""Run commands and return final output."""
if isinstance(commands, str):
commands = [commands]
commands = ";".join(commands)
if self.process is not None:
return self._run_persistent(
commands,
)
else:
return self._run(commands)
def _run(self, command: str) -> str:
"""Run commands and return final output."""
try:
output = subprocess.run(
commands,
command,
shell=True,
check=True,
stdout=subprocess.PIPE,
@@ -31,3 +66,31 @@ class BashProcess:
if self.strip_newlines:
output = output.strip()
return output
def process_output(self, output: str, command: str) -> str:
# Remove the command from the output using a regular expression
pattern = re.escape(command) + r"\s*\n"
output = re.sub(pattern, "", output, count=1)
return output.strip()
def _run_persistent(self, command: str) -> str:
"""Run commands and return final output."""
if self.process is None:
raise ValueError("Process not initialized")
self.process.sendline(command)
# Clear the output with an empty string
self.process.expect(self.prompt, timeout=10)
self.process.sendline("")
try:
self.process.expect([self.prompt, pexpect.EOF], timeout=10)
except pexpect.TIMEOUT:
return f"Timeout error while executing command {command}"
if self.process.after == pexpect.EOF:
return f"Exited with error status: {self.process.exitstatus}"
output = self.process.before
output = self.process_output(output, command)
if self.strip_newlines:
return output.strip()
return output
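A usage sketch for the new persistent mode (POSIX only, since it depends on pexpect): shell state such as the working directory now survives between run() calls, unlike the one-shot subprocess path.
session = BashProcess(persistent=True, strip_newlines=True)
session.run("cd /tmp")
print(session.run("pwd"))  # /tmp -- the cd carried over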

View File

@@ -41,7 +41,7 @@ class DuckDuckGoSearchAPIWrapper(BaseModel):
def run(self, query: str) -> str:
from duckduckgo_search import ddg
"""Run query through DuckDuckGo and return results."""
"""Run query through DuckDuckGo and return concatenated results."""
results = ddg(
query,
region=self.region,
@@ -54,7 +54,7 @@ class DuckDuckGoSearchAPIWrapper(BaseModel):
snippets = [result["body"] for result in results]
return " ".join(snippets)
def results(self, query: str, num_results: int) -> List[Dict]:
def results(self, query: str, num_results: int) -> List[Dict[str, str]]:
"""Run query through DuckDuckGo and return metadata.
Args:
@@ -80,7 +80,7 @@ class DuckDuckGoSearchAPIWrapper(BaseModel):
if results is None or len(results) == 0:
return [{"Result": "No good DuckDuckGo Search Result was found"}]
def to_metadata(result: Dict) -> Dict:
def to_metadata(result: Dict) -> Dict[str, str]:
return {
"snippet": result["body"],
"title": result["title"],

View File

@@ -1,8 +1,9 @@
"""Util that calls OpenWeatherMap using PyOWM."""
from typing import Any, Dict, Optional
from pydantic import BaseModel, Extra, root_validator
from pydantic import Extra, root_validator
from langchain.tools.base import BaseModel
from langchain.utils import get_from_dict_or_env

View File

@@ -77,7 +77,23 @@ class SerpAPIWrapper(BaseModel):
return values
async def arun(self, query: str) -> str:
"""Use aiohttp to run query through SerpAPI and parse result."""
"""Run query through SerpAPI and parse result async."""
return self._process_response(await self.aresults(query))
def run(self, query: str) -> str:
"""Run query through SerpAPI and parse result."""
return self._process_response(self.results(query))
def results(self, query: str) -> dict:
"""Run query through SerpAPI and return the raw result."""
params = self.get_params(query)
with HiddenPrints():
search = self.search_engine(params)
res = search.get_dict()
return res
async def aresults(self, query: str) -> dict:
"""Use aiohttp to run query through SerpAPI and return the results async."""
def construct_url_and_params() -> Tuple[str, Dict[str, str]]:
params = self.get_params(query)
@@ -97,18 +113,6 @@ class SerpAPIWrapper(BaseModel):
async with self.aiosession.get(url, params=params) as response:
res = await response.json()
return self._process_response(res)
def run(self, query: str) -> str:
"""Run query through SerpAPI and parse result."""
return self._process_response(self.results(query))
def results(self, query: str) -> dict:
"""Run query through SerpAPI and return the raw result."""
params = self.get_params(query)
with HiddenPrints():
search = self.search_engine(params)
res = search.get_dict()
return res
def get_params(self, query: str) -> Dict[str, str]:
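A sketch of the new raw-result entry points, assuming SERPAPI_API_KEY is set and the serpapi client is installed:
import asyncio

wrapper = SerpAPIWrapper()
raw = wrapper.results("python")                      # full SerpAPI response dict
raw_async = asyncio.run(wrapper.aresults("python"))  # same, fetched with aiohttp
answer = wrapper.run("python")                       # unchanged: parsed string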

View File

@@ -1,6 +1,6 @@
[tool.poetry]
name = "langchain"
version = "0.0.149"
version = "0.0.150"
description = "Building applications with LLMs through composability"
authors = []
license = "MIT"

View File

@@ -9,15 +9,17 @@ from langchain.document_loaders.html_bs import BSHTMLLoader
def test_bs_html_loader() -> None:
"""Test unstructured loader."""
file_path = Path(__file__).parent.parent / "examples/example.html"
loader = BSHTMLLoader(str(file_path))
loader = BSHTMLLoader(str(file_path), get_text_separator="|")
docs = loader.load()
assert len(docs) == 1
metadata = docs[0].metadata
content = docs[0].page_content
assert metadata["title"] == "Chew dad's slippers"
assert metadata["source"] == str(file_path)
assert content[:2] == "\n|"
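A hedged sketch of the new parameter: the separator is forwarded to BeautifulSoup's get_text(), so text from adjacent tags no longer runs together and downstream splitters can see the boundaries.
loader = BSHTMLLoader("examples/example.html", get_text_separator="|")
doc = loader.load()[0]
assert "|" in doc.page_content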
@pytest.mark.skipif(

View File

@@ -1,6 +1,6 @@
import pytest
from langchain.tools.ddg_search.tool import DuckDuckGoSearchTool
from langchain.tools.ddg_search.tool import DuckDuckGoSearchRun
def ddg_installed() -> bool:
@@ -16,7 +16,7 @@ def ddg_installed() -> bool:
@pytest.mark.skipif(not ddg_installed(), reason="requires duckduckgo-search package")
def test_ddg_search_tool() -> None:
keywords = "Bella Ciao"
tool = DuckDuckGoSearchTool()
tool = DuckDuckGoSearchRun()
result = tool(keywords)
print(result)
assert len(result.split()) > 20

View File

@@ -1,148 +0,0 @@
"""Test the BaseOutputParser class and its sub-classes."""
from collections import defaultdict
import json
from copy import deepcopy
from pathlib import Path
from typing import List, Tuple
from pydantic import ValidationError
import pytest
from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits.json.toolkit import JsonToolkit
from langchain.agents.agent_toolkits.nla.toolkit import NLAToolkit
from langchain.agents.agent_toolkits.openapi.toolkit import RequestsToolkit
from langchain.agents.agent_types import AgentType
from langchain.llms.openai import OpenAI
from langchain.memory.buffer import ConversationBufferMemory
from langchain.requests import TextRequestsWrapper
from langchain.schema import BaseLanguageModel
from langchain.tools.base import BaseTool
from langchain.tools.json.tool import JsonSpec
def _get_requests_tools_and_questions(**kwargs) -> List[Tuple[BaseTool, List[str]]]:
requests_wrapper = TextRequestsWrapper()
requests_toolkit = RequestsToolkit(requests_wrapper=requests_wrapper)
tools = requests_toolkit.get_tools()
tools_dict = {tool.name: tool for tool in tools}
method_to_questions = {
# "get": ["Get the header of google.com"],
"post": ["Post data {'key': 'value'} to google.com"],
"patch": ["Patch data {'key': 'value'} to google.com"],
"put": ["Put data {'key': 'value'} to google.com"],
"delete": ["Delete data with ID 1234abc from google.com"],
}
results = []
for method, qs in method_to_questions.items():
results.append((tools_dict[f"requests_{method}"], qs))
return results
def _get_json_tools_and_questions(**kwargs) -> List[Tuple[BaseTool, List[str]]]:
spec = JsonSpec.from_file(
Path("tests/unit_tests/tools/openapi/test_specs/apis-guru/apispec.json")
)
json_toolkit = JsonToolkit(spec=spec)
list_keys, get_value = json_toolkit.get_tools()
return [
(list_keys, "What keys are in the JSON spec?"),
(get_value, "What's in the info.description?"),
]
def _get_nla_tools_and_questions(
*,
llm: BaseLanguageModel,
) -> List[Tuple[BaseTool, List[str]]]:
speak_toolkit = NLAToolkit.from_llm_and_url(
llm, "https://api.speak.com/openapi.yaml"
)
# TODO: make more pointed questions
speak_tools_and_questions = [
(tool, ["Could you help me learn something new in Spanish?"])
for tool in speak_toolkit.get_tools()
]
klarna_toolkit = NLAToolkit.from_llm_and_url(
llm, "https://www.klarna.com/us/shopping/public/openai/v0/api-docs/"
)
klarna_tools_and_questions = [
(tool, ["I want to buy some cheap shoes"])
for tool in klarna_toolkit.get_tools()
]
return speak_tools_and_questions + klarna_tools_and_questions
def generate_tuples() -> (
List[Tuple[BaseTool, List[str], BaseLanguageModel, AgentType, bool]]
):
"""Grid test."""
llms = [
# ChatOpenAI(),
OpenAI(),
]
generators = [
# _get_nla_tools_and_questions,
# _get_json_tools_and_questions,
_get_requests_tools_and_questions,
]
# These types don't really support arbitrary single tools...
# excluded_types = (AgentType.SELF_ASK_WITH_SEARCH, AgentType.REACT_DOCSTORE)
# agent_types = [
# agent_type for agent_type in AgentType if agent_type not in excluded_types
# ]
agent_types = [
# AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
]
results = []
for llm in llms:
for agent_type in agent_types:
for generator in generators:
tools_and_queries = generator(llm=llm)
for tool, queries in tools_and_queries:
results.append((tool, queries, llm, agent_type))
return results
_AGGREGATE_AXES = ["tool", "llm", "agent_type"]
_FAILURE_COUNT = {k: defaultdict(int) for k in _AGGREGATE_AXES}
@pytest.mark.parametrize("tool, queries, llm, agent_type", generate_tuples())
def test_run_tool(
tool: BaseTool,
queries: List[str],
llm: BaseLanguageModel,
agent_type: AgentType,
) -> None:
global _FAILURE_COUNT
tool = deepcopy(tool)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent = initialize_agent(
llm=llm,
tools=[tool],
agent=agent_type,
memory=memory,
verbose=True,
)
results = []
for query in queries:
try:
result = agent(query)
results.append(result)
except Exception as e:
results.append(e)
type_errors = [r for r in results if isinstance(r, TypeError)]
if type_errors:
print(f"{str(llm)}: {tool.name} failed with: {type_errors}")
_FAILURE_COUNT["tool"][tool.name] += 1
_FAILURE_COUNT["llm"][str(llm)] += 1
_FAILURE_COUNT["agent_type"][str(agent_type)] += 1
assert not type_errors, type_errors
validation_errors = [r for r in results if isinstance(r, ValidationError)]
assert not validation_errors, validation_errors

View File

@@ -0,0 +1,50 @@
"""Test chat agents in various scenarios."""
from typing import Set
import pytest
from langchain.agents.agent_types import AgentType
from langchain.agents.initialize import initialize_agent
from langchain.agents.tools import Tool
from langchain.chains.llm_math.base import LLMMathChain
from langchain.chat_models.openai import ChatOpenAI
from langchain.tools.ddg_search.tool import DuckDuckGoSearchRun
from langchain.tools.plugin import AIPluginTool
TEST_CASES = [
(
"What's the current time in NYC?",
{"DuckDuckGo Search"},
),
("What is a shoe that's available on Klarna?", {"KlarnaProducts"}),
("What's 3*4.2*1.7", {"Calculator"}),
]
@pytest.mark.parametrize("query, used_tools", TEST_CASES)
def test_chat_agent(query: str, used_tools: Set[str]) -> None:
"""Test chat agent."""
llm = ChatOpenAI(temperature=0)
llm_math_chain = LLMMathChain(llm=llm)
tools = [
DuckDuckGoSearchRun(),
AIPluginTool.from_plugin_url(
"https://www.klarna.com/.well-known/ai-plugin.json"
),
Tool(
name="Calculator",
func=llm_math_chain.run,
description="useful for doing calculations",
),
]
agent_executor = initialize_agent(
tools,
llm,
AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
return_intermediate_steps=True,
)
result = agent_executor({"input": query})
intermediate_steps = result["intermediate_steps"]
tool_sequences = [act.tool for act, _ in intermediate_steps]
assert set(tool_sequences) == used_tools

View File

@@ -3,26 +3,107 @@ import sys
import pytest
from langchain.chains.llm_bash.base import LLMBashChain
from langchain.chains.llm_bash.base import BashOutputParser, LLMBashChain
from langchain.chains.llm_bash.prompt import _PROMPT_TEMPLATE
from langchain.schema import OutputParserException
from tests.unit_tests.llms.fake_llm import FakeLLM
_SAMPLE_CODE = """
Unrelated text
```bash
echo hello
```
Unrelated text
"""
_SAMPLE_CODE_2_LINES = """
Unrelated text
```bash
echo hello
echo world
```
Unrelated text
"""
@pytest.fixture
def fake_llm_bash_chain() -> LLMBashChain:
"""Fake LLM Bash chain for testing."""
question = "Please write a bash script that prints 'Hello World' to the console."
prompt = _PROMPT_TEMPLATE.format(question=question)
queries = {prompt: "```bash\nexpr 1 + 1\n```"}
fake_llm = FakeLLM(queries=queries)
return LLMBashChain(llm=fake_llm, input_key="q", output_key="a")
def output_parser() -> BashOutputParser:
"""Output parser for testing."""
return BashOutputParser()
@pytest.mark.skipif(
sys.platform.startswith("win"), reason="Test not supported on Windows"
)
def test_simple_question(fake_llm_bash_chain: LLMBashChain) -> None:
def test_simple_question() -> None:
"""Test simple question that should not need python."""
question = "Please write a bash script that prints 'Hello World' to the console."
prompt = _PROMPT_TEMPLATE.format(question=question)
queries = {prompt: "```bash\nexpr 1 + 1\n```"}
fake_llm = FakeLLM(queries=queries)
fake_llm_bash_chain = LLMBashChain(llm=fake_llm, input_key="q", output_key="a")
output = fake_llm_bash_chain.run(question)
assert output == "2\n"
def test_get_code(output_parser: BashOutputParser) -> None:
"""Test the parser."""
code_lines = output_parser.parse(_SAMPLE_CODE)
code = [c for c in code_lines if c.strip()]
assert code == code_lines
assert code == ["echo hello"]
code_lines = output_parser.parse(_SAMPLE_CODE + _SAMPLE_CODE_2_LINES)
assert code_lines == ["echo hello", "echo hello", "echo world"]
def test_parsing_error() -> None:
"""Test that LLM Output without a bash block raises an exce"""
question = "Please echo 'hello world' to the terminal."
prompt = _PROMPT_TEMPLATE.format(question=question)
queries = {
prompt: """
```text
echo 'hello world'
```
"""
}
fake_llm = FakeLLM(queries=queries)
fake_llm_bash_chain = LLMBashChain(llm=fake_llm, input_key="q", output_key="a")
with pytest.raises(OutputParserException):
fake_llm_bash_chain.run(question)
def test_get_code_lines_mixed_blocks(output_parser: BashOutputParser) -> None:
text = """
Unrelated text
```bash
echo hello
ls && pwd && ls
```
```python
print("hello")
```
```bash
echo goodbye
```
"""
code_lines = output_parser.parse(text)
assert code_lines == ["echo hello", "ls && pwd && ls", "echo goodbye"]
def test_get_code_lines_simple_nested_ticks(output_parser: BashOutputParser) -> None:
"""Test that backticks w/o a newline are ignored."""
text = """
Unrelated text
```bash
echo hello
echo "```bash is in this string```"
```
"""
code_lines = output_parser.parse(text)
assert code_lines == ["echo hello", 'echo "```bash is in this string```"']

View File

@@ -21,6 +21,23 @@ def test_pwd_command() -> None:
assert output == subprocess.check_output("pwd", shell=True).decode()
@pytest.mark.skipif(
sys.platform.startswith("win"), reason="Test not supported on Windows"
)
def test_pwd_command_persistent() -> None:
"""Test correct functionality when the bash process is persistent."""
session = BashProcess(persistent=True, strip_newlines=True)
commands = ["pwd"]
output = session.run(commands)
assert subprocess.check_output("pwd", shell=True).decode().strip() in output
session.run(["cd .."])
new_output = session.run(["pwd"])
# Assert that the new_output is a parent of the old output
assert Path(output).parent == Path(new_output)
@pytest.mark.skipif(
sys.platform.startswith("win"), reason="Test not supported on Windows"
)
@@ -66,3 +83,16 @@ def test_create_directory_and_files(tmp_path: Path) -> None:
# check that the files were created in the temporary directory
output = session.run([f"ls {temp_dir}"])
assert output == "file1.txt\nfile2.txt"
@pytest.mark.skipif(
sys.platform.startswith("win"), reason="Test not supported on Windows"
)
def test_create_bash_persistent() -> None:
"""Test the pexpect persistent bash terminal"""
session = BashProcess(persistent=True)
response = session.run("echo hello")
response += session.run("echo world")
assert "hello" in response
assert "world" in response