add use cases: tool_use_with_plugin, and how to write a plugin.

This commit is contained in:
xuyuan23 2023-06-14 17:24:31 +08:00
parent 44df4f2509
commit 621a859be0
6 changed files with 182 additions and 4 deletions

(Binary image file added, 745 KiB, not shown in diff.)


@ -57,6 +57,11 @@ If you have difficulty with this step, you can also directly use the model from
$ python pilot/server/llmserver.py
```
Starting `llmserver.py` with the following command will run a relatively stable, multi-process Python service:
```bash
$ gunicorn llmserver:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 &
```
Run gradio webui
```bash


@ -86,4 +86,25 @@ class ChatGLMChatAdapter(BaseChatAdpter):
        return chatglm_generate_stream
```
If you want to integrate your own model, you just need to inherit `BaseLLMAdaper` and `BaseChatAdpter` and implement their methods.
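A hypothetical sketch of that integration, with stand-in base classes (the real `BaseLLMAdaper` / `BaseChatAdpter` live in the pilot package and may have different signatures); `MyModelAdapter`, `my_model_generate_stream`, and the `"my-model"` path check are all illustrative:

```python
# Stand-ins for the project's adapter base classes (signatures assumed).
class BaseLLMAdaper:
    def match(self, model_path: str) -> bool:
        return True

    def loader(self, model_path: str, from_pretrained_kwargs: dict):
        raise NotImplementedError


class BaseChatAdpter:
    def match(self, model_path: str) -> bool:
        return True

    def get_generate_stream_func(self):
        raise NotImplementedError


class MyModelAdapter(BaseLLMAdaper):
    """Model-loading side: recognize the model path and load the weights."""

    def match(self, model_path: str) -> bool:
        return "my-model" in model_path

    def loader(self, model_path: str, from_pretrained_kwargs: dict):
        # Load your tokenizer/model here (e.g. with transformers).
        return None, None


def my_model_generate_stream(model, tokenizer, params, device, max_position_embeddings):
    # Toy stream function: just echo the prompt back.
    yield params.get("prompt", "")


class MyModelChatAdapter(BaseChatAdpter):
    """Chat side: hand back the streaming generate function."""

    def match(self, model_path: str) -> bool:
        return "my-model" in model_path

    def get_generate_stream_func(self):
        return my_model_generate_stream
```

Both adapters are matched against the model path, so the `match` checks should agree with each other.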
## Multi Proxy LLMs
### 1. Openai proxy
If you haven't deployed a private infrastructure for a large model, or if you want to use DB-GPT in a low-cost and high-efficiency way, you can also use OpenAI's large model as your underlying model.
- If the environment where you deploy DB-GPT can access OpenAI, then modifying the `.env` configuration file as below will work.
```
LLM_MODEL=proxy_llm
MODEL_SERVER=127.0.0.1:8000
PROXY_API_KEY=sk-xxx
PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions
```
- If you can't access OpenAI locally but have an OpenAI proxy service, you can configure as follows.
```
LLM_MODEL=proxy_llm
MODEL_SERVER=127.0.0.1:8000
PROXY_API_KEY=sk-xxx
PROXY_SERVER_URL={your-openai-proxy-server/v1/chat/completions}
```
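Under the hood, the proxy backend forwards requests to an OpenAI-compatible chat completions endpoint. A minimal sketch, not the project's actual client code, of how these settings translate into a request (the model name and payload fields are assumptions):

```python
import os

# Illustrative defaults mirroring the .env entries above (assumed values).
os.environ.setdefault("PROXY_API_KEY", "sk-xxx")
os.environ.setdefault("PROXY_SERVER_URL", "https://api.openai.com/v1/chat/completions")


def build_proxy_request(prompt):
    """Build an OpenAI-style chat completion request from the proxy settings."""
    url = os.environ["PROXY_SERVER_URL"]
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['PROXY_API_KEY']}",
    }
    payload = {
        "model": "gpt-3.5-turbo",  # assumed default model
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }
    return url, headers, payload
```

The same shape works against either api.openai.com or a private proxy, which is why only the URL and key need to change.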


@ -1,3 +1,98 @@
# Plugins
The Agent and Plugin abilities are the core of whether large models can be automated. In this project, we natively support the plugin mode, and large models can automatically achieve their goals. At the same time, to give full play to the advantages of the community, the plugins used in this project natively support the Auto-GPT plugin ecology; that is, Auto-GPT plugins can run directly in our project.
## Local Plugins
### 1.1 How to write local plugins.
- Local plugins use the Auto-GPT plugin template. A simple example is as follows: first write a plugin file called "sql_executor.py".
```python
import pymysql
import pymysql.cursors


def get_conn():
    """Open a connection to the (mock) OceanBase-compatible database."""
    return pymysql.connect(
        host="127.0.0.1",
        port=int("2883"),
        user="mock",
        password="mock",
        database="mock",
        charset="utf8mb4",
        ssl_ca=None,
    )


def ob_sql_executor(sql: str):
    """Execute a SQL statement and return rows, with field names as the first row."""
    try:
        conn = get_conn()
        with conn.cursor() as cursor:
            cursor.execute(sql)
            result = cursor.fetchall()
            field_names = tuple(i[0] for i in cursor.description)
            result = list(result)
            result.insert(0, field_names)
            return result
    except pymysql.err.ProgrammingError as e:
        return str(e)
```
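The `try` block returns the column names as the first row, followed by the data rows. A quick illustration of that result shape with mock data (no live database; the column names and rows below are made up):

```python
# Mimic cursor.description: the first element of each entry is the column name.
description = (("id", None), ("city", None))
rows = [(1, "Hangzhou"), (2, "Beijing")]

field_names = tuple(col[0] for col in description)
result = [field_names] + rows
# result now starts with the header row, like ob_sql_executor's output.
```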
Then make the `can_handle_post_prompt` method of the plugin template return `True`. In the `post_prompt` method, write the prompt information and map it to the plugin function.
```python
"""This is a template for DB-GPT plugins."""
from typing import Any, Dict, List, Optional, Tuple, TypeVar, TypedDict
from auto_gpt_plugin_template import AutoGPTPluginTemplate
PromptGenerator = TypeVar("PromptGenerator")
class Message(TypedDict):
role: str
content: str
class DBGPTOceanBase(AutoGPTPluginTemplate):
"""
This is an DB-GPT plugin to connect OceanBase.
"""
def __init__(self):
super().__init__()
self._name = "DB-GPT-OB-Serverless-Plugin"
self._version = "0.1.0"
self._description = "This is an DB-GPT plugin to connect OceanBase."
def can_handle_post_prompt(self) -> bool:
return True
def post_prompt(self, prompt: PromptGenerator) -> PromptGenerator:
from .sql_executor import ob_sql_executor
prompt.add_command(
"ob_sql_executor",
"Execute SQL in OceanBase Database.",
{"sql": "<sql>"},
ob_sql_executor,
)
return prompt
...
```
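To see what the `add_command` registration amounts to, here is a minimal stand-in for the `PromptGenerator` that `post_prompt` receives (the real class comes from Auto-GPT and has a richer API; the parameter names below are assumptions inferred from the call above):

```python
class PromptGenerator:
    """Minimal stand-in: records commands so the agent can dispatch them later."""

    def __init__(self):
        self.commands = []

    def add_command(self, name, description, args, function):
        self.commands.append(
            {"name": name, "description": description, "args": args, "function": function}
        )


def fake_sql_executor(sql: str):
    # Hypothetical executor used in place of a live database connection.
    return [("ok",), (sql,)]


prompt = PromptGenerator()
prompt.add_command(
    "ob_sql_executor",
    "Execute SQL in OceanBase Database.",
    {"sql": "<sql>"},
    fake_sql_executor,
)

# The agent can later look the command up by name and invoke it.
output = prompt.commands[0]["function"]("SELECT 1")
```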
### 1.2 How to use local plugins
- Pack your plugin project into `your-plugin.zip` and place it in the `/plugins/` directory of the DB-GPT project. After starting the webserver, you can select and use it in the `Plugin Model` section.
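As a sketch, the packaging step can be scripted with the standard `zipfile` module; the plugin and file names below are illustrative, and the real layout should follow the Auto-GPT plugin template:

```python
import os
import tempfile
import zipfile

# Build an illustrative plugin layout and zip it (names are hypothetical).
with tempfile.TemporaryDirectory() as tmp:
    plugin_dir = os.path.join(tmp, "my_plugin")
    os.makedirs(plugin_dir)
    for fname in ("__init__.py", "sql_executor.py"):
        open(os.path.join(plugin_dir, fname), "w").close()

    zip_path = os.path.join(tmp, "my-plugin.zip")
    with zipfile.ZipFile(zip_path, "w") as zf:
        for fname in sorted(os.listdir(plugin_dir)):
            # Keep the package directory as the top-level entry in the archive.
            zf.write(os.path.join(plugin_dir, fname), arcname=f"my_plugin/{fname}")

    with zipfile.ZipFile(zip_path) as zf:
        names = zf.namelist()
```

The resulting `my-plugin.zip` would then be copied into the project's `/plugins/` directory.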
## Public Plugins
### 1.1 How to use public plugins
- By default, after launching the webserver, plugins from the public plugin library `DB-GPT-Plugins` will be automatically loaded. For more details, please refer to [DB-GPT-Plugins](https://github.com/csunny/DB-GPT-Plugins)
### 1.2 Contribute to the DB-GPT-Plugins repository
- Please refer to the plugin development process in the public plugin library, and put the configuration parameters in `.plugin_env`
- We warmly welcome everyone to contribute plugins to the public plugin library!


@ -1 +1,57 @@
# Tool use with plugin
- DB-GPT supports a variety of plugins, such as database tool plugins for MySQL, MongoDB, ClickHouse, and others. In addition, database management platforms can package their interfaces into plugins, using the model to realize "single-sentence requirement" capabilities.
## DB-GPT-DASHBOARD-PLUGIN
[DB-GPT-Dashboard-Plugin](https://github.com/csunny/DB-GPT-Plugins/blob/main/src/dbgpt_plugins/Readme.md)
- This is a DB-GPT plugin to generate data analysis charts, if you want to use the test sample data, please first pull the code of [DB-GPT-Plugins](https://github.com/csunny/DB-GPT-Plugins), run the command to generate test DuckDB data, and then copy the generated data file to the `/pilot/mock_datas` directory of the DB-GPT project.
```bash
git clone https://github.com/csunny/DB-GPT-Plugins.git
pip install -r requirements.txt
python /DB-GPT-Plugins/src/dbgpt_plugins/db_dashboard/mock_datas.py
cp /DB-GPT-Plugins/src/dbgpt_plugins/db_dashboard/mock_datas/db-gpt-test.db /DB-GPT/pilot/mock_datas/
python /DB-GPT/pilot/llmserver.py
python /DB-GPT/pilot/webserver.py
```
- Test Case: Use a histogram to analyze the total order amount of users in different cities.
<p align="center">
<img src="../../assets/chart_db_city_users.png" width="680px" />
</p>
- For more detail, see [DB-DASHBOARD](https://github.com/csunny/DB-GPT-Plugins/blob/main/src/dbgpt_plugins/Readme.md)
## DB-GPT-SQL-Execution-Plugin
- This is a DB-GPT plugin to connect to a generic database and execute SQL.
## DB-GPT-Bytebase-Plugin
- To use a tool or platform plugin, you should first deploy the platform's service itself. Taking the open-source database management platform Bytebase as an example, you can deploy your Bytebase service with one click using Docker and access it at http://127.0.0.1:5678. More details can be found at https://github.com/bytebase/bytebase.
```bash
docker run --init \
--name bytebase \
--platform linux/amd64 \
--restart always \
--publish 5678:8080 \
--health-cmd "curl --fail http://localhost:5678/healthz || exit 1" \
--health-interval 5m \
--health-timeout 60s \
--volume ~/.bytebase/data:/var/opt/bytebase \
bytebase/bytebase:2.2.0 \
--data /var/opt/bytebase \
--port 8080
```
Note: If your machine's CPU architecture is `ARM`, please use `--platform linux/arm64` instead.
- Select the plugin on DB-GPT (all built-in plugins are from our repository: https://github.com/csunny/DB-GPT-Plugins) and choose DB-GPT-Bytebase-Plugin.
Supporting functions include creating projects, creating environments, creating database instances, creating databases, database DDL/DML operations, and ticket approval process, etc.


@ -10,6 +10,7 @@ if "pytest" in sys.argv or "pytest" in sys.modules or os.getenv("CI"):
# Load the users .env file into environment variables
load_dotenv(verbose=True, override=True)
ROOT_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
load_dotenv(os.path.join(ROOT_PATH, ".plugin_env"))
del load_dotenv
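The new `ROOT_PATH` computation climbs three directory levels up from the config module, so `.plugin_env` is resolved relative to the project root instead of the current working directory. A sketch with an illustrative path (the actual module location is an assumption):

```python
import os

# Suppose the module lives at <root>/pilot/configs/<module>.py (assumed layout).
module_file = "/home/user/DB-GPT/pilot/configs/config.py"

# Three dirname() calls: configs -> pilot -> project root.
root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(module_file))))
plugin_env = os.path.join(root, ".plugin_env")
```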