Compare commits

...

7 Commits

Author           SHA1        Message                                          Date
Bagatur          039bf5221f  cr                                               2023-10-21 11:16:21 -04:00
Bagatur          99057725b2  cr                                               2023-10-21 11:14:10 -04:00
Vasek Mlejnsky   4a0c9fee9b  Rename variable                                  2023-10-20 17:33:13 -07:00
Vasek Mlejnsky   48fcf3ef1c  Fix path                                         2023-10-20 17:32:35 -07:00
Vasek Mlejnsky   4d7705aba6  Stream artifacts and output                      2023-10-20 17:31:08 -07:00
Vasek Mlejnsky   4d1a91746d  Add print statement to the LLM generated code    2023-10-19 19:56:36 -07:00
Vasek Mlejnsky   6f8b32ffc2  Expose more sandbox methods                      2023-10-19 19:21:26 -07:00
6 changed files with 855 additions and 265 deletions

View File

@@ -0,0 +1,367 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# E2B Data Analysis\n",
"\n",
"[E2B's cloud environments](https://e2b.dev) are great runtime sandboxes for LLMs.\n",
"\n",
"E2B's Data Analysis sandbox allows for safe code execution in a sandboxed environment. This is ideal for building tools such as code interpreters, or Advanced Data Analysis like in ChatGPT.\n",
"\n",
"E2B Data Analysis sandbox allows you to:\n",
"- Run Python code\n",
"- Generate charts via matplotlib\n",
"- Install Python packages dynamically durint runtime\n",
"- Install system packages dynamically during runtime\n",
"- Run shell commands\n",
"- Upload and download files\n",
"\n",
"We'll create a simple OpenAI agent that will use E2B's Data Analysis sandbox to perform analysis on a uploaded files using Python."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Get your OpenAI API key and [E2B API key here](https://e2b.dev/docs/getting-started/api-key) and set them as environment variables.\n",
"\n",
"You can find the full API documentation [here](https://e2b.dev/docs).\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You'll need to install `e2b` to get started:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain e2b"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.tools import E2BDataAnalysisTool\n",
"from langchain.agents import initialize_agent, AgentType\n",
"\n",
"os.environ[\"E2B_API_KEY\"] = \"<E2B_API_KEY>\"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"<OPENAI_API_KEY>\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"When creating an instance of the `E2BDataAnalysisTool`, you can pass callbacks to listen to the output of the sandbox. This is useful, for example, when creating more responsive UI. Especially with the combination of streaming output from LLMs."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# Artifacts are charts created by matplotlib when `plt.show()` is called\n",
"def save_artifact(artifact):\n",
" print(\"New matplotlib chart generated:\", artifact.name)\n",
" # Download the artifact as `bytes` and leave it up to the user to display them (on frontend, for example)\n",
" file = artifact.download()\n",
" basename = os.path.basename(artifact.name)\n",
"\n",
" # Save the chart to the `charts` directory\n",
" with open(f\"./charts/{basename}\", \"wb\") as f:\n",
" f.write(file)\n",
"\n",
"e2b_data_analysis_tool = E2BDataAnalysisTool(\n",
" # Pass environment variables to the sandbox\n",
" env_vars={\"MY_SECRET\": \"secret_value\"},\n",
" on_stdout=lambda stdout: print(\"stdout:\", stdout),\n",
" on_stderr=lambda stderr: print(\"stderr:\", stderr),\n",
" on_artifact=save_artifact,\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Upload an example CSV data file to the sandbox so we can analyze it with our agent. You can use for example [this file](https://storage.googleapis.com/e2b-examples/netflix.csv) about Netflix tv shows."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"name='netflix.csv' remote_path='/home/user/netflix.csv' description='Data about Netflix tv shows including their title, category, director, release date, casting, age rating, etc.'\n"
]
}
],
"source": [
"with open(\"./netflix.csv\") as f:\n",
" remote_path = e2b_data_analysis_tool.upload_file(\n",
" file=f,\n",
" description=\"Data about Netflix tv shows including their title, category, director, release date, casting, age rating, etc.\",\n",
" )\n",
" print(remote_path)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a `Tool` object and initialize the Langchain agent."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"\n",
"\n",
"tools = [e2b_data_analysis_tool.as_tool()]\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4\", temperature=0)\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True, handle_parsing_errors=True\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can ask the agent questions about the CSV file we uploaded earlier."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `e2b_data_analysis` with `{'python_code': \"import pandas as pd\\n\\n# Load the data\\nnetflix_data = pd.read_csv('/home/user/netflix.csv')\\n\\n# Convert the 'release_year' column to integer\\nnetflix_data['release_year'] = netflix_data['release_year'].astype(int)\\n\\n# Filter the data for movies released between 2000 and 2010\\nfiltered_data = netflix_data[(netflix_data['release_year'] >= 2000) & (netflix_data['release_year'] <= 2010) & (netflix_data['type'] == 'Movie')]\\n\\n# Remove rows where 'duration' is not available\\nfiltered_data = filtered_data[filtered_data['duration'].notna()]\\n\\n# Convert the 'duration' column to integer\\nfiltered_data['duration'] = filtered_data['duration'].str.replace(' min','').astype(int)\\n\\n# Get the top 5 longest movies\\nlongest_movies = filtered_data.nlargest(5, 'duration')\\n\\n# Create a bar chart\\nimport matplotlib.pyplot as plt\\n\\nplt.figure(figsize=(10,5))\\nplt.barh(longest_movies['title'], longest_movies['duration'], color='skyblue')\\nplt.xlabel('Duration (minutes)')\\nplt.title('Top 5 Longest Movies on Netflix (2000-2010)')\\nplt.gca().invert_yaxis()\\nplt.savefig('/home/user/longest_movies.png')\\n\\nlongest_movies[['title', 'duration']]\"}`\n",
"\n",
"\n",
"\u001b[0mstdout: title duration\n",
"stdout: 1019 Lagaan 224\n",
"stdout: 4573 Jodhaa Akbar 214\n",
"stdout: 2731 Kabhi Khushi Kabhie Gham 209\n",
"stdout: 2632 No Direction Home: Bob Dylan 208\n",
"stdout: 2126 What's Your Raashee? 203\n",
"\u001b[36;1m\u001b[1;3m{'stdout': \" title duration\\n1019 Lagaan 224\\n4573 Jodhaa Akbar 214\\n2731 Kabhi Khushi Kabhie Gham 209\\n2632 No Direction Home: Bob Dylan 208\\n2126 What's Your Raashee? 203\", 'stderr': ''}\u001b[0m\u001b[32;1m\u001b[1;3mThe 5 longest movies on Netflix released between 2000 and 2010 are:\n",
"\n",
"1. Lagaan - 224 minutes\n",
"2. Jodhaa Akbar - 214 minutes\n",
"3. Kabhi Khushi Kabhie Gham - 209 minutes\n",
"4. No Direction Home: Bob Dylan - 208 minutes\n",
"5. What's Your Raashee? - 203 minutes\n",
"\n",
"Here is the chart showing their lengths:\n",
"\n",
"![Longest Movies](sandbox:/home/user/longest_movies.png)\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"The 5 longest movies on Netflix released between 2000 and 2010 are:\\n\\n1. Lagaan - 224 minutes\\n2. Jodhaa Akbar - 214 minutes\\n3. Kabhi Khushi Kabhie Gham - 209 minutes\\n4. No Direction Home: Bob Dylan - 208 minutes\\n5. What's Your Raashee? - 203 minutes\\n\\nHere is the chart showing their lengths:\\n\\n![Longest Movies](sandbox:/home/user/longest_movies.png)\""
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What are the 5 longest movies on netflix released between 2000 and 2010? Create a chart with their lengths.\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"E2B also allows you to install both Python and system (via `apt`) packages dynamically during runtime like this:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"stdout: Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (2.1.1)\n",
"stdout: Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.10/dist-packages (from pandas) (2.8.2)\n",
"stdout: Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas) (2023.3.post1)\n",
"stdout: Requirement already satisfied: numpy>=1.22.4 in /usr/local/lib/python3.10/dist-packages (from pandas) (1.26.1)\n",
"stdout: Requirement already satisfied: tzdata>=2022.1 in /usr/local/lib/python3.10/dist-packages (from pandas) (2023.3)\n",
"stdout: Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.8.2->pandas) (1.16.0)\n"
]
}
],
"source": [
"# Install Python package\n",
"e2b_data_analysis_tool.install_python_packages('pandas')"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Additionally, you can download any file from the sandbox like this:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"# The path is a remote path in the sandbox\n",
"files_in_bytes = e2b_data_analysis_tool.download_file('/home/user/netflix.csv')"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Lastly, you can run any shell command inside the sandbox via `run_command`."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"stderr: \n",
"stderr: WARNING: apt does not have a stable CLI interface. Use with caution in scripts.\n",
"stderr: \n",
"stdout: Hit:1 http://security.ubuntu.com/ubuntu jammy-security InRelease\n",
"stdout: Hit:2 http://archive.ubuntu.com/ubuntu jammy InRelease\n",
"stdout: Hit:3 http://archive.ubuntu.com/ubuntu jammy-updates InRelease\n",
"stdout: Hit:4 http://archive.ubuntu.com/ubuntu jammy-backports InRelease\n",
"stdout: Reading package lists...\n",
"stdout: Building dependency tree...\n",
"stdout: Reading state information...\n",
"stdout: All packages are up to date.\n",
"stdout: Reading package lists...\n",
"stdout: Building dependency tree...\n",
"stdout: Reading state information...\n",
"stdout: Suggested packages:\n",
"stdout: sqlite3-doc\n",
"stdout: The following NEW packages will be installed:\n",
"stdout: sqlite3\n",
"stdout: 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.\n",
"stdout: Need to get 768 kB of archives.\n",
"stdout: After this operation, 1873 kB of additional disk space will be used.\n",
"stdout: Get:1 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 sqlite3 amd64 3.37.2-2ubuntu0.1 [768 kB]\n",
"stderr: debconf: delaying package configuration, since apt-utils is not installed\n",
"stdout: Fetched 768 kB in 0s (2258 kB/s)\n",
"stdout: Selecting previously unselected package sqlite3.\n",
"(Reading database ... 23999 files and directories currently installed.)\n",
"stdout: Preparing to unpack .../sqlite3_3.37.2-2ubuntu0.1_amd64.deb ...\n",
"stdout: Unpacking sqlite3 (3.37.2-2ubuntu0.1) ...\n",
"stdout: Setting up sqlite3 (3.37.2-2ubuntu0.1) ...\n",
"stdout: 3.37.2 2022-01-06 13:25:41 872ba256cbf61d9290b571c0e6d82a20c224ca3ad82971edc46b29818d5dalt1\n",
"version: 3.37.2 2022-01-06 13:25:41 872ba256cbf61d9290b571c0e6d82a20c224ca3ad82971edc46b29818d5dalt1\n",
"error: \n",
"exit code: 0\n"
]
}
],
"source": [
"# Install SQLite\n",
"e2b_data_analysis_tool.run_command(\"sudo apt update\")\n",
"e2b_data_analysis_tool.install_system_packages(\"sqlite3\")\n",
"\n",
"# Check the SQLite version\n",
"output = e2b_data_analysis_tool.run_command(\"sqlite3 --version\")\n",
"print(\"version: \", output[\"stdout\"])\n",
"print(\"error: \", output[\"stderr\"])\n",
"print(\"exit code: \", output[\"exit_code\"])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"When your agent is finished, don't forget to close the sandbox"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"e2b_data_analysis_tool.close()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -639,6 +639,12 @@ def _import_bearly_tool() -> Any:
    return BearlyInterpreterTool
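

# Lazy import: the tool module is only imported when the attribute is first
# accessed via `__getattr__` below.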
def _import_e2b_data_analysis() -> Any:
    from langchain.tools.e2b_data_analysis.tool import E2BDataAnalysisTool

    return E2BDataAnalysisTool


def __getattr__(name: str) -> Any:
    if name == "AINAppOps":
        return _import_ainetwork_app()
@@ -846,6 +852,8 @@ def __getattr__(name: str) -> Any:
        return _import_zapier_tool_ZapierNLARunAction()
    elif name == "BearlyInterpreterTool":
        return _import_bearly_tool()
    elif name == "E2BDataAnalysisTool":
        return _import_e2b_data_analysis()
    else:
        raise AttributeError(f"Could not find: {name}")
@@ -958,4 +966,5 @@ __all__ = [
"tool",
"format_tool_to_openai_function",
"BearlyInterpreterTool",
"E2BDataAnalysisTool",
]

View File

@@ -0,0 +1,209 @@
from __future__ import annotations

import ast
import os
from typing import IO, TYPE_CHECKING, Any, Callable, List, Optional, Type

from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool, Tool

if TYPE_CHECKING:
    from e2b import EnvVars
    from e2b.templates.data_analysis import Artifact
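

# Base prompt shown to the LLM; descriptions of any uploaded files are
# appended to it (see `uploaded_files_description`).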
base_description = """Evaluates python code in a sandbox environment. \
The environment is long running and exists accross multiple executions. \
You must send the whole script every time and print your outputs. \
Script should be pure python code that can be evaluated. \
It should be in python format NOT markdown. \
The code should NOT be wrapped in backticks. \
All python packages including requests, matplotlib, scipy, numpy, pandas, \
etc are available. \
If you have any files outputted write them to "/home/user" directory \
path."""


def add_last_line_print(code: str) -> str:
    """Add a print statement to the last line if it's missing.

    Sometimes the LLM-generated code doesn't end with `print(variable_name)`;
    instead, the LLM tries to print the variable by writing just
    `variable_name` (as you would in a REPL, for example).

    This function checks the AST of the generated Python code and wraps the
    last line in a print statement if it's missing.
    """
    tree = ast.parse(code)
    node = tree.body[-1]
    if isinstance(node, ast.Expr):
        if (
            isinstance(node.value, ast.Call)
            and isinstance(node.value.func, ast.Name)
            and node.value.func.id == "print"
        ):
            # The last line already prints its output; leave the code unchanged.
            return code
        # Wrap the trailing bare expression in a `print(...)` call.
        tree.body[-1] = ast.Expr(
            value=ast.Call(
                func=ast.Name(id="print", ctx=ast.Load()),
                args=[node.value],
                keywords=[],
            )
        )
    return ast.unparse(tree)
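

# A quick sketch of the transformation (hypothetical snippets):
#   add_last_line_print("df.head()")  -> "print(df.head())"
#   add_last_line_print("print(df)")  -> "print(df)"  (unchanged)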


class UploadedFile(BaseModel):
    """Description of an uploaded file, including its remote path in the sandbox."""

    name: str
    remote_path: str
    description: str


class E2BDataAnalysisToolArguments(BaseModel):
    """Arguments for the E2BDataAnalysisTool."""

    python_code: str = Field(
        ...,
        example="print('Hello World')",
        description=(
            "The python script to be evaluated. "
            "The contents will be in main.py. "
            "It should not be in markdown format."
        ),
    )


class E2BDataAnalysisTool(BaseTool):
    """Tool for running python code in a sandboxed environment for data analysis."""

    name = "e2b_data_analysis"
    args_schema: Type[BaseModel] = E2BDataAnalysisToolArguments
    session: Any
    _uploaded_files: List[UploadedFile] = Field(default_factory=list)

    def __init__(
        self,
        api_key: Optional[str] = None,
        cwd: Optional[str] = None,
        env_vars: Optional[EnvVars] = None,
        on_stdout: Optional[Callable[[str], Any]] = None,
        on_stderr: Optional[Callable[[str], Any]] = None,
        on_artifact: Optional[Callable[[Artifact], Any]] = None,
        on_exit: Optional[Callable[[int], Any]] = None,
        **kwargs: Any,
    ):
        try:
            from e2b import DataAnalysis
        except ImportError as e:
            raise ImportError(
                "Unable to import e2b, please install with `pip install e2b`."
            ) from e

        # If no API key is provided, E2B will try to read it from the environment
        # variable E2B_API_KEY
        session = DataAnalysis(
            api_key=api_key,
            cwd=cwd,
            env_vars=env_vars,
            on_stdout=on_stdout,
            on_stderr=on_stderr,
            on_exit=on_exit,
            on_artifact=on_artifact,
        )
        super().__init__(session=session, **kwargs)
        self.description = (
            base_description + "\n\n" + self.uploaded_files_description
        ).strip()

    def close(self) -> None:
        """Close the cloud sandbox."""
        self._uploaded_files = []
        self.session.close()

    @property
    def uploaded_files_description(self) -> str:
        if len(self._uploaded_files) == 0:
            return ""
        lines = ["The following files are available in the sandbox:"]
        for f in self._uploaded_files:
            if f.description == "":
                lines.append(f"- path: `{f.remote_path}`")
            else:
                lines.append(
                    f"- path: `{f.remote_path}` \n description: `{f.description}`"
                )
        return "\n".join(lines)

    def _run(
        self, python_code: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> dict:
        python_code = add_last_line_print(python_code)
        stdout, stderr, _ = self.session.run_python(python_code)
        return {
            "stdout": stdout,
            "stderr": stderr,
        }

    async def _arun(
        self,
        python_code: str,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        raise NotImplementedError("e2b_data_analysis does not support async")

    def run_command(
        self,
        cmd: str,
    ) -> dict:
        """Run shell command in the sandbox."""
        proc = self.session.process.start(cmd)
        output = proc.wait()
        return {
            "stdout": output.stdout,
            "stderr": output.stderr,
            "exit_code": output.exit_code,
        }

    def install_python_packages(self, package_names: str | List[str]) -> None:
        """Install python packages in the sandbox."""
        self.session.install_python_packages(package_names)

    def install_system_packages(self, package_names: str | List[str]) -> None:
        """Install system packages (via apt) in the sandbox."""
        self.session.install_system_packages(package_names)

    def download_file(self, remote_path: str) -> bytes:
        """Download file from the sandbox."""
        return self.session.download_file(remote_path)

    def upload_file(self, file: IO, description: str) -> UploadedFile:
        """Upload file to the sandbox.

        The file is uploaded to the '/home/user/<filename>' path."""
        remote_path = self.session.upload_file(file)
        f = UploadedFile(
            name=os.path.basename(file.name),
            remote_path=remote_path,
            description=description,
        )
        self._uploaded_files.append(f)
        # Refresh the tool description so the LLM learns about the new file.
        self.description = (
            base_description + "\n\n" + self.uploaded_files_description
        ).strip()
        return f

    def remove_uploaded_file(self, uploaded_file: UploadedFile) -> None:
        """Remove uploaded file from the sandbox."""
        self.session.filesystem.remove(uploaded_file.remote_path)
        # Rebuild the bookkeeping list without the removed file.
        self._uploaded_files = [
            f for f in self._uploaded_files
            if f.remote_path != uploaded_file.remote_path
        ]
        self.description = (
            base_description + "\n\n" + self.uploaded_files_description
        ).strip()

    def as_tool(self) -> Tool:
        return Tool.from_function(
            func=self._run,
            name=self.name,
            description=self.description,
            args_schema=self.args_schema,
        )
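

# A minimal usage sketch (assumes E2B_API_KEY is set in the environment):
#   tool = E2BDataAnalysisTool(on_stdout=print, on_stderr=print)
#   out = tool._run("x = 40 + 2\nx")  # last line becomes `print(x)`
#   print(out["stdout"])  # -> 42
#   tool.close()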

View File

@@ -18,128 +18,128 @@ SQLAlchemy = ">=1.4,<3"
requests = "^2"
PyYAML = ">=5.3"
numpy = "^1"
azure-core = {version = "^1.26.4", optional=true}
tqdm = {version = ">=4.48.0", optional = true}
openapi-pydantic = {version = "^0.3.2", optional = true}
faiss-cpu = {version = "^1", optional = true}
wikipedia = {version = "^1", optional = true}
elasticsearch = {version = "^8", optional = true}
opensearch-py = {version = "^2.0.0", optional = true}
redis = {version = "^4", optional = true}
manifest-ml = {version = "^0.0.1", optional = true}
nltk = {version = "^3", optional = true}
transformers = {version = "^4", optional = true}
beautifulsoup4 = {version = "^4", optional = true}
torch = {version = ">=1,<3", optional = true}
jinja2 = {version = "^3", optional = true}
tiktoken = {version = ">=0.3.2,<0.6.0", optional = true, python=">=3.9"}
pinecone-client = {version = "^2", optional = true}
pinecone-text = {version = "^0.4.2", optional = true}
pymongo = {version = "^4.3.3", optional = true}
clickhouse-connect = {version="^0.5.14", optional=true}
weaviate-client = {version = "^3", optional = true}
marqo = {version = "^1.2.4", optional=true}
google-api-python-client = {version = "2.70.0", optional = true}
google-auth = {version = "^2.18.1", optional = true}
wolframalpha = {version = "5.0.0", optional = true}
qdrant-client = {version = "^1.3.1", optional = true, python = ">=3.8.1,<3.12"}
azure-core = { version = "^1.26.4", optional = true }
tqdm = { version = ">=4.48.0", optional = true }
openapi-pydantic = { version = "^0.3.2", optional = true }
faiss-cpu = { version = "^1", optional = true }
wikipedia = { version = "^1", optional = true }
elasticsearch = { version = "^8", optional = true }
opensearch-py = { version = "^2.0.0", optional = true }
redis = { version = "^4", optional = true }
manifest-ml = { version = "^0.0.1", optional = true }
nltk = { version = "^3", optional = true }
transformers = { version = "^4", optional = true }
beautifulsoup4 = { version = "^4", optional = true }
torch = { version = ">=1,<3", optional = true }
jinja2 = { version = "^3", optional = true }
tiktoken = { version = ">=0.3.2,<0.6.0", optional = true, python = ">=3.9" }
pinecone-client = { version = "^2", optional = true }
pinecone-text = { version = "^0.4.2", optional = true }
pymongo = { version = "^4.3.3", optional = true }
clickhouse-connect = { version = "^0.5.14", optional = true }
weaviate-client = { version = "^3", optional = true }
marqo = { version = "^1.2.4", optional = true }
google-api-python-client = { version = "2.70.0", optional = true }
google-auth = { version = "^2.18.1", optional = true }
wolframalpha = { version = "5.0.0", optional = true }
qdrant-client = { version = "^1.3.1", optional = true, python = ">=3.8.1,<3.12" }
dataclasses-json = ">= 0.5.7, < 0.7"
tensorflow-text = {version = "^2.11.0", optional = true, python = "^3.10, <3.12"}
tensorflow-text = { version = "^2.11.0", optional = true, python = "^3.10, <3.12" }
tenacity = "^8.1.0"
cohere = {version = "^4", optional = true}
openai = {version = "^0", optional = true}
nlpcloud = {version = "^1", optional = true}
nomic = {version = "^1.0.43", optional = true}
huggingface_hub = {version = "^0", optional = true}
google-search-results = {version = "^2", optional = true}
sentence-transformers = {version = "^2", optional = true}
cohere = { version = "^4", optional = true }
openai = { version = "^0", optional = true }
nlpcloud = { version = "^1", optional = true }
nomic = { version = "^1.0.43", optional = true }
huggingface_hub = { version = "^0", optional = true }
google-search-results = { version = "^2", optional = true }
sentence-transformers = { version = "^2", optional = true }
aiohttp = "^3.8.3"
arxiv = {version = "^1.4", optional = true}
pypdf = {version = "^3.4.0", optional = true}
networkx = {version=">=2.6.3, <4", optional = true}
aleph-alpha-client = {version="^2.15.0", optional = true}
deeplake = {version = "^3.6.8", optional = true}
libdeeplake = {version = "^0.0.60", optional = true}
pgvector = {version = "^0.1.6", optional = true}
psycopg2-binary = {version = "^2.9.5", optional = true}
pyowm = {version = "^3.3.0", optional = true}
async-timeout = {version = "^4.0.0", python = "<3.11"}
azure-identity = {version = "^1.12.0", optional=true}
gptcache = {version = ">=0.1.7", optional = true}
atlassian-python-api = {version = "^3.36.0", optional=true}
pytesseract = {version = "^0.3.10", optional=true}
html2text = {version="^2020.1.16", optional=true}
numexpr = {version="^2.8.6", optional=true}
duckduckgo-search = {version="^3.8.3", optional=true}
azure-cosmos = {version="^4.4.0b1", optional=true}
lark = {version="^1.1.5", optional=true}
lancedb = {version = "^0.1", optional = true}
pexpect = {version = "^4.8.0", optional = true}
pyvespa = {version = "^0.33.0", optional = true}
O365 = {version = "^2.0.26", optional = true}
jq = {version = "^1.4.1", optional = true}
pdfminer-six = {version = "^20221105", optional = true}
docarray = {version="^0.32.0", extras=["hnswlib"], optional=true}
lxml = {version = "^4.9.2", optional = true}
pymupdf = {version = "^1.22.3", optional = true}
rapidocr-onnxruntime = {version = "^1.3.2", optional = true, python = ">=3.8.1,<3.12"}
pypdfium2 = {version = "^4.10.0", optional = true}
gql = {version = "^3.4.1", optional = true}
pandas = {version = "^2.0.1", optional = true}
telethon = {version = "^1.28.5", optional = true}
neo4j = {version = "^5.8.1", optional = true}
langkit = {version = ">=0.0.6, <0.1.0", optional = true}
chardet = {version="^5.1.0", optional=true}
requests-toolbelt = {version = "^1.0.0", optional = true}
openlm = {version = "^0.0.5", optional = true}
scikit-learn = {version = "^1.2.2", optional = true}
azure-ai-formrecognizer = {version = "^3.2.1", optional = true}
azure-ai-vision = {version = "^0.11.1b1", optional = true}
azure-cognitiveservices-speech = {version = "^1.28.0", optional = true}
py-trello = {version = "^0.19.0", optional = true}
momento = {version = "^1.10.1", optional = true}
bibtexparser = {version = "^1.4.0", optional = true}
singlestoredb = {version = "^0.7.1", optional = true}
pyspark = {version = "^3.4.0", optional = true}
clarifai = {version = ">=9.1.0", optional = true}
tigrisdb = {version = "^1.0.0b6", optional = true}
nebula3-python = {version = "^3.4.0", optional = true}
mwparserfromhell = {version = "^0.6.4", optional = true}
mwxml = {version = "^0.3.3", optional = true}
awadb = {version = "^0.3.9", optional = true}
azure-search-documents = {version = "11.4.0b8", optional = true}
esprima = {version = "^4.0.1", optional = true}
streamlit = {version = "^1.18.0", optional = true, python = ">=3.8.1,<3.9.7 || >3.9.7,<4.0"}
psychicapi = {version = "^0.8.0", optional = true}
cassio = {version = "^0.1.0", optional = true}
rdflib = {version = "^6.3.2", optional = true}
sympy = {version = "^1.12", optional = true}
rapidfuzz = {version = "^3.1.1", optional = true}
arxiv = { version = "^1.4", optional = true }
pypdf = { version = "^3.4.0", optional = true }
networkx = { version = ">=2.6.3, <4", optional = true }
aleph-alpha-client = { version = "^2.15.0", optional = true }
deeplake = { version = "^3.6.8", optional = true }
libdeeplake = { version = "^0.0.60", optional = true }
pgvector = { version = "^0.1.6", optional = true }
psycopg2-binary = { version = "^2.9.5", optional = true }
pyowm = { version = "^3.3.0", optional = true }
async-timeout = { version = "^4.0.0", python = "<3.11" }
azure-identity = { version = "^1.12.0", optional = true }
gptcache = { version = ">=0.1.7", optional = true }
atlassian-python-api = { version = "^3.36.0", optional = true }
pytesseract = { version = "^0.3.10", optional = true }
html2text = { version = "^2020.1.16", optional = true }
numexpr = { version = "^2.8.6", optional = true }
duckduckgo-search = { version = "^3.8.3", optional = true }
azure-cosmos = { version = "^4.4.0b1", optional = true }
lark = { version = "^1.1.5", optional = true }
lancedb = { version = "^0.1", optional = true }
pexpect = { version = "^4.8.0", optional = true }
pyvespa = { version = "^0.33.0", optional = true }
O365 = { version = "^2.0.26", optional = true }
jq = { version = "^1.4.1", optional = true }
pdfminer-six = { version = "^20221105", optional = true }
docarray = { version = "^0.32.0", extras = ["hnswlib"], optional = true }
lxml = { version = "^4.9.2", optional = true }
pymupdf = { version = "^1.22.3", optional = true }
rapidocr-onnxruntime = { version = "^1.3.2", optional = true, python = ">=3.8.1,<3.12" }
pypdfium2 = { version = "^4.10.0", optional = true }
gql = { version = "^3.4.1", optional = true }
pandas = { version = "^2.0.1", optional = true }
telethon = { version = "^1.28.5", optional = true }
neo4j = { version = "^5.8.1", optional = true }
langkit = { version = ">=0.0.6, <0.1.0", optional = true }
chardet = { version = "^5.1.0", optional = true }
requests-toolbelt = { version = "^1.0.0", optional = true }
openlm = { version = "^0.0.5", optional = true }
scikit-learn = { version = "^1.2.2", optional = true }
azure-ai-formrecognizer = { version = "^3.2.1", optional = true }
azure-ai-vision = { version = "^0.11.1b1", optional = true }
azure-cognitiveservices-speech = { version = "^1.28.0", optional = true }
py-trello = { version = "^0.19.0", optional = true }
momento = { version = "^1.10.1", optional = true }
bibtexparser = { version = "^1.4.0", optional = true }
singlestoredb = { version = "^0.7.1", optional = true }
pyspark = { version = "^3.4.0", optional = true }
clarifai = { version = ">=9.1.0", optional = true }
tigrisdb = { version = "^1.0.0b6", optional = true }
nebula3-python = { version = "^3.4.0", optional = true }
mwparserfromhell = { version = "^0.6.4", optional = true }
mwxml = { version = "^0.3.3", optional = true }
awadb = { version = "^0.3.9", optional = true }
azure-search-documents = { version = "11.4.0b8", optional = true }
esprima = { version = "^4.0.1", optional = true }
streamlit = { version = "^1.18.0", optional = true, python = ">=3.8.1,<3.9.7 || >3.9.7,<4.0" }
psychicapi = { version = "^0.8.0", optional = true }
cassio = { version = "^0.1.0", optional = true }
rdflib = { version = "^6.3.2", optional = true }
sympy = { version = "^1.12", optional = true }
rapidfuzz = { version = "^3.1.1", optional = true }
langsmith = "~0.0.43"
rank-bm25 = {version = "^0.2.2", optional = true}
amadeus = {version = ">=8.1.0", optional = true}
geopandas = {version = "^0.13.1", optional = true}
python-arango = {version = "^7.5.9", optional = true}
gitpython = {version = "^3.1.32", optional = true}
librosa = {version="^0.10.0.post2", optional = true }
feedparser = {version = "^6.0.10", optional = true}
newspaper3k = {version = "^0.2.8", optional = true}
amazon-textract-caller = {version = "<2", optional = true}
xata = {version = "^1.0.0a7", optional = true}
xmltodict = {version = "^0.13.0", optional = true}
markdownify = {version = "^0.11.6", optional = true}
assemblyai = {version = "^0.17.0", optional = true}
dashvector = {version = "^1.0.1", optional = true}
sqlite-vss = {version = "^0.1.2", optional = true}
motor = {version = "^3.3.1", optional = true}
rank-bm25 = { version = "^0.2.2", optional = true }
amadeus = { version = ">=8.1.0", optional = true }
geopandas = { version = "^0.13.1", optional = true }
python-arango = { version = "^7.5.9", optional = true }
gitpython = { version = "^3.1.32", optional = true }
librosa = { version = "^0.10.0.post2", optional = true }
feedparser = { version = "^6.0.10", optional = true }
newspaper3k = { version = "^0.2.8", optional = true }
amazon-textract-caller = { version = "<2", optional = true }
xata = { version = "^1.0.0a7", optional = true }
xmltodict = { version = "^0.13.0", optional = true }
markdownify = { version = "^0.11.6", optional = true }
assemblyai = { version = "^0.17.0", optional = true }
dashvector = { version = "^1.0.1", optional = true }
sqlite-vss = { version = "^0.1.2", optional = true }
motor = { version = "^3.3.1", optional = true }
anyio = "<4.0"
jsonpatch = "^1.33"
timescale-vector = {version = "^0.0.1", optional = true}
typer = {version= "^0.9.0", optional = true}
anthropic = {version = "^0.3.11", optional = true}
aiosqlite = {version = "^0.19.0", optional = true}
rspace_client = {version = "^2.5.0", optional = true}
upstash-redis = {version = "^0.15.0", optional = true}
timescale-vector = { version = "^0.0.1", optional = true }
typer = { version = "^0.9.0", optional = true }
anthropic = { version = "^0.3.11", optional = true }
aiosqlite = { version = "^0.19.0", optional = true }
rspace_client = { version = "^2.5.0", optional = true }
upstash-redis = { version = "^0.15.0", optional = true }
[tool.poetry.group.test.dependencies]
@@ -156,7 +156,7 @@ responses = "^0.22.0"
pytest-asyncio = "^0.20.3"
lark = "^1.1.5"
pandas = "^2.0.0"
pytest-mock = "^3.10.0"
pytest-mock = "^3.10.0"
pytest-socket = "^0.6.0"
syrupy = "^4.0.2"
@@ -213,7 +213,17 @@ playwright = "^1.28.0"
setuptools = "^67.6.1"
[tool.poetry.extras]
llms = ["clarifai", "cohere", "openai", "openlm", "nlpcloud", "huggingface_hub", "manifest-ml", "torch", "transformers"]
llms = [
"clarifai",
"cohere",
"openai",
"openlm",
"nlpcloud",
"huggingface_hub",
"manifest-ml",
"torch",
"transformers",
]
qdrant = ["qdrant-client"]
openai = ["openai", "tiktoken"]
text_helpers = ["chardet"]
@@ -223,164 +233,160 @@ docarray = ["docarray"]
embeddings = ["sentence-transformers"]
javascript = ["esprima"]
azure = [
"azure-identity",
"azure-cosmos",
"openai",
"azure-core",
"azure-ai-formrecognizer",
"azure-ai-vision",
"azure-cognitiveservices-speech",
"azure-search-documents",
"azure-identity",
"azure-cosmos",
"openai",
"azure-core",
"azure-ai-formrecognizer",
"azure-ai-vision",
"azure-cognitiveservices-speech",
"azure-search-documents",
]
all = [
"clarifai",
"cohere",
"openai",
"nlpcloud",
"huggingface_hub",
"manifest-ml",
"elasticsearch",
"opensearch-py",
"google-search-results",
"faiss-cpu",
"sentence-transformers",
"transformers",
"nltk",
"wikipedia",
"beautifulsoup4",
"tiktoken",
"torch",
"jinja2",
"pinecone-client",
"pinecone-text",
"marqo",
"pymongo",
"weaviate-client",
"redis",
"google-api-python-client",
"google-auth",
"wolframalpha",
"qdrant-client",
"tensorflow-text",
"pypdf",
"networkx",
"nomic",
"aleph-alpha-client",
"deeplake",
"libdeeplake",
"pgvector",
"psycopg2-binary",
"pyowm",
"pytesseract",
"html2text",
"atlassian-python-api",
"gptcache",
"duckduckgo-search",
"arxiv",
"azure-identity",
"clickhouse-connect",
"azure-cosmos",
"lancedb",
"langkit",
"lark",
"pexpect",
"pyvespa",
"O365",
"jq",
"docarray",
"pdfminer-six",
"lxml",
"requests-toolbelt",
"neo4j",
"openlm",
"azure-ai-formrecognizer",
"azure-ai-vision",
"azure-cognitiveservices-speech",
"momento",
"singlestoredb",
"tigrisdb",
"nebula3-python",
"awadb",
"esprima",
"rdflib",
"amadeus",
"librosa",
"python-arango",
"clarifai",
"cohere",
"openai",
"nlpcloud",
"huggingface_hub",
"manifest-ml",
"elasticsearch",
"opensearch-py",
"google-search-results",
"faiss-cpu",
"sentence-transformers",
"transformers",
"nltk",
"wikipedia",
"beautifulsoup4",
"tiktoken",
"torch",
"jinja2",
"pinecone-client",
"pinecone-text",
"marqo",
"pymongo",
"weaviate-client",
"redis",
"google-api-python-client",
"google-auth",
"wolframalpha",
"qdrant-client",
"tensorflow-text",
"pypdf",
"networkx",
"nomic",
"aleph-alpha-client",
"deeplake",
"libdeeplake",
"pgvector",
"psycopg2-binary",
"pyowm",
"pytesseract",
"html2text",
"atlassian-python-api",
"gptcache",
"duckduckgo-search",
"arxiv",
"azure-identity",
"clickhouse-connect",
"azure-cosmos",
"lancedb",
"langkit",
"lark",
"pexpect",
"pyvespa",
"O365",
"jq",
"docarray",
"pdfminer-six",
"lxml",
"requests-toolbelt",
"neo4j",
"openlm",
"azure-ai-formrecognizer",
"azure-ai-vision",
"azure-cognitiveservices-speech",
"momento",
"singlestoredb",
"tigrisdb",
"nebula3-python",
"awadb",
"esprima",
"rdflib",
"amadeus",
"librosa",
"python-arango",
]
cli = [
"typer"
]
cli = ["typer"]
# An extra used to be able to add extended testing.
# Please use new-line on formatting to make it easier to add new packages without
# merge-conflicts
extended_testing = [
"amazon-textract-caller",
"aiosqlite",
"assemblyai",
"beautifulsoup4",
"bibtexparser",
"cassio",
"chardet",
"esprima",
"jq",
"pdfminer-six",
"pgvector",
"pypdf",
"pymupdf",
"pypdfium2",
"tqdm",
"lxml",
"atlassian-python-api",
"mwparserfromhell",
"mwxml",
"pandas",
"telethon",
"psychicapi",
"gql",
"requests-toolbelt",
"html2text",
"numexpr",
"py-trello",
"scikit-learn",
"streamlit",
"pyspark",
"openai",
"sympy",
"rapidfuzz",
"openai",
"rank-bm25",
"geopandas",
"jinja2",
"gitpython",
"newspaper3k",
"feedparser",
"xata",
"xmltodict",
"faiss-cpu",
"openapi-pydantic",
"markdownify",
"arxiv",
"dashvector",
"sqlite-vss",
"rapidocr-onnxruntime",
"motor",
"timescale-vector",
"anthropic",
"upstash-redis",
"rspace_client",
"amazon-textract-caller",
"aiosqlite",
"assemblyai",
"beautifulsoup4",
"bibtexparser",
"cassio",
"chardet",
"esprima",
"jq",
"pdfminer-six",
"pgvector",
"pypdf",
"pymupdf",
"pypdfium2",
"tqdm",
"lxml",
"atlassian-python-api",
"mwparserfromhell",
"mwxml",
"pandas",
"telethon",
"psychicapi",
"gql",
"requests-toolbelt",
"html2text",
"numexpr",
"py-trello",
"scikit-learn",
"streamlit",
"pyspark",
"openai",
"sympy",
"rapidfuzz",
"openai",
"rank-bm25",
"geopandas",
"jinja2",
"gitpython",
"newspaper3k",
"feedparser",
"xata",
"xmltodict",
"faiss-cpu",
"openapi-pydantic",
"markdownify",
"arxiv",
"dashvector",
"sqlite-vss",
"rapidocr-onnxruntime",
"motor",
"timescale-vector",
"anthropic",
"upstash-redis",
"rspace_client",
]
[tool.ruff]
select = [
"E", # pycodestyle
"F", # pyflakes
"I", # isort
]
exclude = [
"tests/integration_tests/examples/non-utf8-encoding.py",
"E", # pycodestyle
"F", # pyflakes
"I", # isort
]
exclude = ["tests/integration_tests/examples/non-utf8-encoding.py"]
[tool.mypy]
ignore_missing_imports = "True"
@@ -388,9 +394,7 @@ disallow_untyped_defs = "True"
exclude = ["notebooks", "examples", "example_data"]
[tool.coverage.run]
omit = [
"tests/*",
]
omit = ["tests/*"]
[build-system]
requires = ["poetry-core>=1.0.0"]

View File

@@ -109,6 +109,7 @@ _EXPECTED = [
"format_tool_to_openai_function",
"tool",
"BearlyInterpreterTool",
"E2BDataAnalysisTool",
]