docs: Add documents for dbgpt core lib (#1129)

Co-authored-by: Fangyin Cheng <staneyffer@gmail.com>
magic.chen 2024-01-29 18:55:28 +08:00 committed by GitHub
parent be6718849f
commit a75f42c35e
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
37 changed files with 651 additions and 42 deletions


@ -16,17 +16,14 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.8", "3.9", "3.10"]
python-version: ["3.10", "3.11"]
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -U black isort
- name: check the code lint
run: |
black . --check
- name: Install dependencies and setup environment
run: make setup
- name: Check Python code style
run: make fmt-check


@ -51,9 +51,18 @@ fmt: setup ## Format Python code
$(VENV_BIN)/flake8 dbgpt/core/
# TODO: More package checks with flake8.
.PHONY: fmt-check
fmt-check: setup ## Check Python code formatting and style without making changes
$(VENV_BIN)/isort --check-only dbgpt/
$(VENV_BIN)/isort --check-only --extend-skip="examples/notebook" examples
$(VENV_BIN)/black --check --extend-exclude="examples/notebook" .
$(VENV_BIN)/blackdoc --check dbgpt examples
$(VENV_BIN)/flake8 dbgpt/core/
# $(VENV_BIN)/blackdoc --check dbgpt examples
# $(VENV_BIN)/flake8 dbgpt/core/
.PHONY: pre-commit
pre-commit: fmt test test-doc mypy ## Run formatting and unit tests before committing
pre-commit: fmt-check test test-doc mypy ## Run formatting and unit tests before committing
test: $(VENV)/.testenv ## Run unit tests
$(VENV_BIN)/pytest dbgpt


@ -184,6 +184,8 @@ DB-GPT是一个开源的数据域大模型框架。目的是构建大模型领
<a href="https://github.com/eosphoros-ai/DB-GPT/graphs/contributors">
<img src="https://contrib.rocks/image?repo=eosphoros-ai/DB-GPT&max=200" />
</a>
## Licence
The MIT License (MIT)

BIN: modified binary image file (name not shown), 219 KiB → 199 KiB


@ -0,0 +1,71 @@
# Crawl data analysis agents
This case demonstrates an agent that automatically writes programs to scrape internet data and analyze it. Through natural language interaction, you can observe how the agent writes code step by step and completes the task. Unlike the local data analysis agent, this agent handles everything from code writing to data scraping and analysis autonomously, supporting direct data crawling from the internet for analysis.
## How to use?
Below are the steps for using the data scraping and analysis agent:
- **Write the agent**: In this case, we have already prepared the code-writing assistant `CodeAssistantAgent`, with the source code located at `dbgpt/agent/agents/expand/code_assistant_agent.py`
- **Insert Metadata**
- **Select Dialogue Scenario**
- **Start Dialogue**
### Write the agent
In this case, the agent has already been implemented; the detailed code path is `dbgpt/agent/agents/expand/code_assistant_agent.py`. The code is shown below.
:::info note
Several other agents have also been implemented under the `dbgpt/agent/agents/expand` path. Interested readers can extend them on their own.
:::
<p align="left">
<img src={'/img/agents/code_agent.png'} width="720px" />
</p>
### Insert Metadata
The purpose of inserting metadata is to enable us to interact with the agent through the interactive interface.
```sql
INSERT INTO dbgpt.gpts_instance
(gpts_name, gpts_describe, resource_db, resource_internet, resource_knowledge, gpts_agents, gpts_models, `language`, user_code, sys_code, created_at, updated_at, team_mode, is_sustainable)
VALUES (
'互联网数据分析助手',
'互联网数据分析助手',
'',
'{"type": "\\u4e92\\u8054\\u7f51\\u6570\\u636e", "name": "\\u6240\\u6709\\u6765\\u6e90\\u4e92\\u8054\\u7f51\\u7684\\u6570\\u636e", "introduce": "string"}',
'{"type": "\\u6587\\u6863\\u7a7a\\u95f4", "name": "TY", "introduce": " MYSQL\\u6570\\u636e\\u5e93\\u7684\\u5b98\\u65b9\\u64cd\\u4f5c\\u624b\\u518c"}',
'[ "CodeEngineer"]',
'{"DataScientist": ["vicuna-13b-v1.5", "tongyi_proxyllm", "chatgpt_proxyllm"], "CodeEngineer": ["chatgpt_proxyllm", "tongyi_proxyllm", "vicuna-13b-v1.5"], "default": ["chatgpt_proxyllm", "tongyi_proxyllm", "vicuna-13b-v1.5"]}',
'en',
'',
'',
'2023-12-19 01:52:30',
'2023-12-19 01:52:30',
'auto_plan',
0
);
```
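The `resource_internet` and `resource_knowledge` values above are JSON strings with `\uXXXX`-escaped Chinese text. If you want to check what they contain, you can decode them with Python's standard library (an illustrative snippet, not part of DB-GPT):

```python
import json

# One of the escaped values from the INSERT statement above.
resource_internet = (
    '{"type": "\\u4e92\\u8054\\u7f51\\u6570\\u636e",'
    ' "name": "\\u6240\\u6709\\u6765\\u6e90\\u4e92\\u8054\\u7f51\\u7684\\u6570\\u636e",'
    ' "introduce": "string"}'
)

decoded = json.loads(resource_internet)
print(decoded["type"])  # 互联网数据 ("internet data")
print(decoded["name"])  # 所有来源互联网的数据 ("all data sourced from the internet")
```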
### Select Dialogue Scenario
We choose the `Agent Chat` scene.
<p align="left">
<img src={'/img/agents/agent_scene.png'} width="720px" />
</p>
After entering the scene, select the `Internet Data Analysis Assistant Agent` that we have just prepared, and then you can fulfill the requirements through a dialogue.
<p align="left">
<img src={'/img/agents/crawl_agent.png'} width="720px" />
</p>
### Start Dialogue
> Obtain and analyze the issue data for the `eosphoros-ai/DB-GPT` repository over the past week, and create a Markdown table grouped by day and status.
<p align="left">
<img src={'/img/agents/crawl_agent_issue.png'} width="720px" />
</p>


@ -0,0 +1,50 @@
# Local Data Analysis Agents
In this case, we will show you how to use a data analysis agent, a typical `GBI (Generative Business Intelligence)` application scenario. Through natural language interaction, you can observe how agents analyze and solve problems step by step.
## How to use?
- **Data Preparation**
- **Add Data Source**
- **Insert Metadata**
- **Select Dialogue Scenario**
- **Select Agent**
- **Start Dialogue**
### Data Preparation
For data preparation, we can reuse the test data from the introductory tutorial; for detailed preparation steps, please refer to: [Data Preparation](/docs/application/started_tutorial/chat_dashboard#data-preparation).
### Add Data Source
Similarly, you may refer to the introductory tutorial on [how to add a data source](/docs/application/started_tutorial/chat_dashboard#add-data-source).
### Insert Metadata
Execute the following SQL statement to insert metadata.
```SQL
INSERT INTO dbgpt.gpts_instance
( gpts_name, gpts_describe, resource_db, resource_internet, resource_knowledge, gpts_agents, gpts_models, `language`, user_code, sys_code, created_at, updated_at, team_mode, is_sustainable)
VALUES('数据分析AI助手', '数据分析AI助手', '{"type": "\\u672c\\u5730\\u6570\\u636e\\u5e93", "name": "dbgpt_test", "introduce": ""}', '{"type": "\\u672c\\u5730\\u6570\\u636e\\u5e93", "name": "dbgpt_test", "introduce": ""}', '{"type": "\\u6587\\u6863\\u7a7a\\u95f4", "name": "TY", "introduce": " MYSQL\\u6570\\u636e\\u5e93\\u7684\\u5b98\\u65b9\\u64cd\\u4f5c\\u624b\\u518c"}', '["DataScientist", "Reporter"]', '{"DataScientist": ["vicuna-13b-v1.5", "tongyi_proxyllm", "chatgpt_proxyllm"], "Reporter": ["chatgpt_proxyllm", "tongyi_proxyllm","vicuna-13b-v1.5"], "default": ["chatgpt_proxyllm", "tongyi_proxyllm", "vicuna-13b-v1.5"]}', 'en', '', '', '2023-12-15 06:58:29', '2023-12-15 06:58:29', 'auto_plan', 0);
```
### Select Dialogue Scenario
<p align="left">
<img src={'/img/agents/agent_scene.png'} width="720px" />
</p>
### Select Agent
<p align="left">
<img src={'/img/agents/data_analysis_agent.png'} width="720px" />
</p>
### Start Dialogue
> Build a sales report and analyze users' orders from at least three dimensions.
<p align="left">
<img src={'/img/agents/data_agents_charts.png'} width="720px" />
</p>
<p align="left">
<img src={'/img/agents/data_agents_gif.gif'} width="720px" />
</p>


@ -1,4 +1,4 @@
# AWEL(Agentic Workflow Expression Language)
# What is AWEL?
Agentic Workflow Expression Language (AWEL) is an intelligent agent workflow expression language designed specifically for large-model application
development. It provides great functionality and flexibility. Through the AWEL API, you can focus on developing the business logic of LLM applications


@ -0,0 +1 @@
# Build Data analysis Copilot use AWEL


@ -0,0 +1,3 @@
# Multi-Round Chat with LLMs


@ -0,0 +1,47 @@
# QuickStart Basic AWEL Workflow
## Install
First, install dbgpt and the necessary dependencies:
```bash
pip install dbgpt --upgrade
pip install openai
```
Create a Python file named `simple_sdk_llm_example_dag.py` and write the following content:
```python
from dbgpt.core import BaseOutputParser
from dbgpt.core.awel import DAG
from dbgpt.core.operator import (
PromptBuilderOperator,
RequestBuilderOperator,
)
from dbgpt.model.proxy import OpenAILLMClient
from dbgpt.model.operators import LLMOperator
with DAG("simple_sdk_llm_example_dag") as dag:
prompt_task = PromptBuilderOperator(
"Write a SQL of {dialect} to query all data of {table_name}."
)
model_pre_handle_task = RequestBuilderOperator(model="gpt-3.5-turbo")
llm_task = LLMOperator(OpenAILLMClient())
out_parse_task = BaseOutputParser()
prompt_task >> model_pre_handle_task >> llm_task >> out_parse_task
```
Set your OpenAI API key and API base address:
```bash
export OPENAI_API_KEY=sk-xx
export OPENAI_API_BASE=https://xx:80/v1
```
Run the script to test it:
```bash
python simple_sdk_llm_example_dag.py
```
You have now mastered the basic usage of AWEL. For more examples, please refer to the **[cookbook](/docs/awel/cookbook/)**
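The `>>` chaining above is plain Python operator overloading (`__rshift__`). The sketch below is a minimal, self-contained illustration of the idea, not dbgpt's actual implementation (dbgpt's operators build a real DAG and support triggers, joins, and streaming):

```python
import asyncio

# Minimal sketch of AWEL-style ">>" chaining: each operator maps its input
# asynchronously and forwards the result to its downstream operator.
class SketchOperator:
    def __init__(self, fn):
        self.fn = fn
        self.downstream = None

    def __rshift__(self, other):
        # "a >> b" wires b after a and returns b, so chains compose left to right.
        self.downstream = other
        return other

    async def run(self, value):
        result = await self.fn(value)
        if self.downstream is not None:
            return await self.downstream.run(result)
        return result


async def main():
    # asyncio.sleep(0, result) awaits once and returns `result`.
    prompt = SketchOperator(
        lambda v: asyncio.sleep(
            0, f"Write a SQL of {v['dialect']} to query all data of {v['table_name']}."
        )
    )
    upper = SketchOperator(lambda s: asyncio.sleep(0, s.upper()))
    prompt >> upper
    return await prompt.run({"dialect": "mysql", "table_name": "user"})


print(asyncio.run(main()))  # WRITE A SQL OF MYSQL TO QUERY ALL DATA OF USER.
```

In the real DAG above, the `LLMOperator` step replaces the toy uppercase step and calls the model service.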


@ -0,0 +1 @@
# Use RAG and SchemaLinking Generate SQL


@ -0,0 +1,218 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"title: Get started\n",
"keywords: [awel.dag]\n",
"---"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
    "AWEL (Agentic Workflow Expression Language) makes it easy to build complex LLM apps, and it provides great functionality and flexibility."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
    "## A basic AWEL example: HTTP request + output rewrite\n",
    "\n",
    "The most basic use of AWEL is to build an HTTP request and rewrite part of its output. To see how this works, let's look at an example.\n",
"\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### DAG Planning\n",
"First, let's look at an introductory example of basic AWEL orchestration. The core function of the example is the handling of input and output for an HTTP request. Thus, the entire orchestration consists of only two steps:\n",
"- HTTP Request\n",
"- Processing HTTP Response Result\n",
"\n",
"In DB-GPT, some basic dependent operators have already been encapsulated and can be referenced directly."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dbgpt._private.pydantic import BaseModel, Field\n",
"from dbgpt.core.awel import DAG, HttpTrigger, MapOperator"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom Operator\n",
"\n",
"Define an HTTP request body that accepts two parameters: name and age.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class TriggerReqBody(BaseModel):\n",
" name: str = Field(..., description=\"User name\")\n",
" age: int = Field(18, description=\"User age\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Define a Request handler operator called `RequestHandleOperator`, which is an operator that extends the basic `MapOperator`. The actions of the `RequestHandleOperator` are straightforward: parse the request body and extract the name and age fields, then concatenate them into a sentence. For example:\n",
"> \"Hello, zhangsan, your age is 18.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class RequestHandleOperator(MapOperator[TriggerReqBody, str]):\n",
" def __init__(self, **kwargs):\n",
" super().__init__(**kwargs)\n",
"\n",
" async def map(self, input_value: TriggerReqBody) -> str:\n",
" print(f\"Receive input value: {input_value}\")\n",
" return f\"Hello, {input_value.name}, your age is {input_value.age}\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### DAG Pipeline\n",
"\n",
"After writing the above operators, they can be assembled into a DAG orchestration. This DAG has a total of two nodes: the first node is an `HttpTrigger`, which primarily processes HTTP requests (this operator is built into DB-GPT), and the second node is the newly defined `RequestHandleOperator` that processes the request body. The DAG code below can be used to link the two nodes together.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with DAG(\"simple_dag_example\") as dag:\n",
" trigger = HttpTrigger(\"/examples/hello\", request_body=TriggerReqBody)\n",
" map_node = RequestHandleOperator()\n",
" trigger >> map_node"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Access Verification\n",
"\n",
"Before performing access verification, the project needs to be started first: `python dbgpt/app/dbgpt_server.py`\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "powershell"
}
},
"outputs": [],
"source": [
"\n",
"% curl -X GET http://127.0.0.1:5000/api/v1/awel/trigger/examples/hello\\?name\\=zhangsan\n",
"\"Hello, zhangsan, your age is 18\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
    "To make testing more convenient, we also provide a development environment that works without starting the dbgpt_server. Add the following code below simple_dag_example, then run the simple_dag_example.py script directly to test the DAG without starting the project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "powershell"
}
},
"outputs": [],
"source": [
"if __name__ == \"__main__\":\n",
" if dag.leaf_nodes[0].dev_mode:\n",
" # Development mode, you can run the dag locally for debugging.\n",
" from dbgpt.core.awel import setup_dev_environment\n",
" setup_dev_environment([dag], port=5555)\n",
" else:\n",
" # Production mode, DB-GPT will automatically load and execute the current file after startup.\n",
" pass"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "powershell"
}
},
"outputs": [],
"source": [
"curl -X GET http://127.0.0.1:5555/api/v1/awel/trigger/examples/hello\\?name\\=zhangsan\n",
"\"Hello, zhangsan, your age is 18\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"[simple_dag_example](/examples/awel/simple_dag_example.py)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "dbgpt_env",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.10.13 (main, Sep 11 2023, 08:16:02) [Clang 14.0.6 ]"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "f8b6b0e04f284afd2fbb5e4163e7d03bbdc845eaeb6e8c78fae04fce6b51dae6"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@ -0,0 +1,83 @@
# Get Started
AWEL (Agentic Workflow Expression Language) makes it easy to build complex LLM apps, and it provides great functionality and flexibility.
## A basic AWEL example: HTTP request + output rewrite
The most basic use of AWEL is to build an HTTP request and rewrite part of its output. To see how this works, let's look at an example.
### DAG Planning
First, let's look at an introductory example of basic AWEL orchestration. The core function of the example is the handling of input and output for an HTTP request. Thus, the entire orchestration consists of only two steps:
- HTTP Request
- Processing HTTP Response Result
In DB-GPT, some basic dependent operators have already been encapsulated and can be referenced directly.
```python
from dbgpt._private.pydantic import BaseModel, Field
from dbgpt.core.awel import DAG, HttpTrigger, MapOperator
```
### Custom Operator
Define an HTTP request body that accepts two parameters: name and age.
```python
class TriggerReqBody(BaseModel):
name: str = Field(..., description="User name")
age: int = Field(18, description="User age")
```
Define a Request handler operator called `RequestHandleOperator`, which is an operator that extends the basic `MapOperator`. The actions of the `RequestHandleOperator` are straightforward: parse the request body and extract the name and age fields, then concatenate them into a sentence. For example:
> "Hello, zhangsan, your age is 18."
```python
class RequestHandleOperator(MapOperator[TriggerReqBody, str]):
def __init__(self, **kwargs):
super().__init__(**kwargs)
async def map(self, input_value: TriggerReqBody) -> str:
print(f"Receive input value: {input_value}")
return f"Hello, {input_value.name}, your age is {input_value.age}"
```
### DAG Pipeline
After writing the above operators, they can be assembled into a DAG orchestration. This DAG has a total of two nodes: the first node is an `HttpTrigger`, which primarily processes HTTP requests (this operator is built into DB-GPT), and the second node is the newly defined `RequestHandleOperator` that processes the request body. The DAG code below can be used to link the two nodes together.
```python
with DAG("simple_dag_example") as dag:
trigger = HttpTrigger("/examples/hello", request_body=TriggerReqBody)
map_node = RequestHandleOperator()
trigger >> map_node
```
### Access Verification
Before performing access verification, the project needs to be started first: `python dbgpt/app/dbgpt_server.py`
```bash
% curl -X GET http://127.0.0.1:5000/api/v1/awel/trigger/examples/hello\?name\=zhangsan
"Hello, zhangsan, your age is 18"
```
To make testing more convenient, we also provide a development environment that works without starting dbgpt_server. Add the following code below `simple_dag_example`, then run the `simple_dag_example.py` script directly to test the DAG without starting the project.
```python
if __name__ == "__main__":
if dag.leaf_nodes[0].dev_mode:
# Development mode, you can run the dag locally for debugging.
from dbgpt.core.awel import setup_dev_environment
setup_dev_environment([dag], port=5555)
else:
# Production mode, DB-GPT will automatically load and execute the current file after startup.
pass
```
```bash
curl -X GET http://127.0.0.1:5555/api/v1/awel/trigger/examples/hello\?name\=zhangsan
"Hello, zhangsan, your age is 18"
```
[simple_dag_example](/examples/awel/simple_dag_example.py)
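Conceptually, `HttpTrigger` binds a URL route to the head of the DAG. The stdlib-only sketch below illustrates that idea without dbgpt (the route and greeting mirror the example above, but the server code is illustrative, not dbgpt internals):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse


def handle_request(params):
    # Mirrors RequestHandleOperator: read name/age and build the greeting.
    name = params.get("name", ["world"])[0]
    age = int(params.get("age", ["18"])[0])
    return f"Hello, {name}, your age is {age}"


class TriggerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse(self.path)
        if parsed.path == "/examples/hello":
            body = json.dumps(handle_request(parse_qs(parsed.query))).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo quiet


def serve_once():
    # Port 0 lets the OS pick a free port.
    server = HTTPServer(("127.0.0.1", 0), TriggerHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    url = f"http://127.0.0.1:{port}/examples/hello?name=zhangsan"
    with urllib.request.urlopen(url) as resp:
        result = resp.read().decode()
    server.shutdown()
    server.server_close()
    return result


print(serve_once())  # "Hello, zhangsan, your age is 18"
```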


@ -0,0 +1 @@
# Join Operator


@ -0,0 +1 @@
# Map Operator


@ -0,0 +1 @@
# Reduce Operator


@ -0,0 +1 @@
# Streamify Abstract Operator


@ -0,0 +1 @@
# Trigger Operator


@ -0,0 +1 @@
# AWEL DAG Workflow


@ -0,0 +1,10 @@
# Why use AWEL?
AWEL (Agentic Workflow Expression Language) is an intelligent agent workflow expression language specifically designed for the development of LLM applications. In the design of DB-GPT, agents are considered first-class citizens. RAGs, datasources (DS), the SMMF (Service-oriented Multi-model Management Framework), and plugins are all resources that agents depend on.
We also see that the auto-orchestration capabilities of multi-agent systems are still greatly limited by model capabilities, and some scenarios require determinism: pipeline-style tasks, for instance, do not need the auto-orchestration of large models at all. Therefore, in DB-GPT, combining AWEL with agents supports both production-level pipelines and the auto-orchestration of agent systems for open-ended problems.
Through the orchestration capabilities of AWEL, it is possible to develop large language model applications with a minimal amount of code.
**AWEL and agents are all you need**.


@ -1,16 +1,14 @@
Installation FAQ
==================================
# Installation FAQ
##### Q1: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file
### Q1: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file
Make sure you have pulled the latest code, or create the data directory with `mkdir pilot/data`.
##### Q2: The model keeps getting killed.
### Q2: The model keeps getting killed.
Your GPU VRAM is not large enough. Try upgrading your hardware or switching to a smaller LLM.
##### Q3: How to access website on the public network
### Q3: How to access website on the public network
You can try using gradio's [network](https://github.com/gradio-app/gradio/blob/main/gradio/networking.py) module to expose the service.
```python
@ -25,7 +23,7 @@ time.sleep(60 * 60 * 24)
Open `url` with your browser to see the website.
##### Q4: (Windows) execute `pip install -e .` error
### Q4: (Windows) execute `pip install -e .` error
The error log looks like the following:
```
@ -50,7 +48,7 @@ Download and install `Microsoft C++ Build Tools` from [visual-cpp-build-tools](h
##### Q5: `Torch not compiled with CUDA enabled`
### Q5: `Torch not compiled with CUDA enabled`
```
2023-08-19 16:24:30 | ERROR | stderr | raise AssertionError("Torch not compiled with CUDA enabled")
@ -61,18 +59,18 @@ Download and install `Microsoft C++ Build Tools` from [visual-cpp-build-tools](h
2. Reinstall PyTorch [start-locally](https://pytorch.org/get-started/locally/#start-locally) with CUDA support.
##### Q6: `How to migrate meta table chat_history and connect_config from duckdb to sqlite`
### Q6: `How to migrate meta table chat_history and connect_config from duckdb to sqlite`
```commandline
python docker/examples/metadata/duckdb2sqlite.py
```
##### Q7: `How to migrate meta table chat_history and connect_config from duckdb to mysql`
### Q7: `How to migrate meta table chat_history and connect_config from duckdb to mysql`
```commandline
1. update your mysql username and password in docker/examples/metadata/duckdb2mysql.py
2. python docker/examples/metadata/duckdb2mysql.py
```
##### Q8: `How to manage and migrate my database`
### Q8: `How to manage and migrate my database`
You can use the `dbgpt db migration` command to manage and migrate your database.
@ -104,7 +102,7 @@ dbgpt db migration upgrade
```
##### Q9: `alembic.util.exc.CommandError: Target database is not up to date.`
### Q9: `alembic.util.exc.CommandError: Target database is not up to date.`
**Solution 1:**


@ -1,7 +1,6 @@
KBQA FAQ
==================================
# KBQA FAQ
##### Q1: text2vec-large-chinese not found
### Q1: text2vec-large-chinese not found
Make sure you have downloaded the text2vec-large-chinese embedding model correctly.
@ -15,7 +14,7 @@ cd models
git lfs clone https://huggingface.co/GanymedeNil/text2vec-large-chinese
```
##### Q2:How to change Vector DB Type in DB-GPT.
### Q2:How to change Vector DB Type in DB-GPT.
Update the `.env` file and set `VECTOR_STORE_TYPE`.
@ -35,14 +34,14 @@ VECTOR_STORE_TYPE=Chroma
#WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.network
```
##### Q3:When I use vicuna-13b, found some illegal character like this.
### Q3:When I use vicuna-13b, found some illegal character like this.
<p align="left">
<img src="https://github.com/eosphoros-ai/DB-GPT/assets/13723926/088d1967-88e3-4f72-9ad7-6c4307baa2f8" width="800px" />
</p>
Set `KNOWLEDGE_SEARCH_TOP_SIZE` or `KNOWLEDGE_CHUNK_SIZE` to a smaller value, then restart the server.
##### Q4:space add error (pymysql.err.OperationalError) (1054, "Unknown column 'knowledge_space.context' in 'field list'")
### Q4:space add error (pymysql.err.OperationalError) (1054, "Unknown column 'knowledge_space.context' in 'field list'")
1. Shut down dbgpt_server (Ctrl-C)
@ -61,7 +60,7 @@ mysql> ALTER TABLE knowledge_space ADD COLUMN context TEXT COMMENT "arguments co
4. Restart dbgpt_server
##### Q5:Use Mysql, how to use DB-GPT KBQA
### Q5:Use Mysql, how to use DB-GPT KBQA
Build the MySQL KBQA system database schema.


@ -1,6 +1,6 @@
LLM USE FAQ
==================================
##### Q1:how to use openai chatgpt service
# LLM USE FAQ
### Q1:how to use openai chatgpt service
Change your `LLM_MODEL`:
````shell
LLM_MODEL=proxyllm
@ -15,7 +15,7 @@ PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions
Make sure your OpenAI `API_KEY` is available.
##### Q2 What difference between `python dbgpt_server --light` and `python dbgpt_server`
### Q2 What difference between `python dbgpt_server --light` and `python dbgpt_server`
:::tip
`python dbgpt_server --light` does not start the LLM service. Users can deploy the LLM service separately with `python llmserver`, and dbgpt_server accesses it through the `LLM_SERVER` environment variable in `.env`. This allows DB-GPT's backend service and LLM service to be deployed separately.
@ -23,7 +23,7 @@ python dbgpt_server --light` dbgpt_server does not start the llm service. Users
With `python dbgpt_server`, the dbgpt_server service and the LLM service are deployed on the same instance: when dbgpt_server starts, it also starts the LLM service.
:::
##### Q3 how to use MultiGPUs
### Q3 how to use MultiGPUs
DB-GPT uses all available GPUs by default. You can set `CUDA_VISIBLE_DEVICES=0,1` in the `.env` file
to use specific GPU IDs.
@ -40,7 +40,7 @@ CUDA_VISIBLE_DEVICES=3,4,5,6 python3 dbgpt/app/dbgpt_server.py
You can modify the setting `MAX_GPU_MEMORY=xxGib` in the `.env` file to configure the maximum memory used by each GPU.
##### Q4 Not Enough Memory
### Q4 Not Enough Memory
DB-GPT supports 8-bit and 4-bit quantization.
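Quantization is typically enabled through the `.env` file. A hedged example (variable names as found in DB-GPT's `.env.template`; verify against your version):

```shell
# .env -- enable 8-bit quantization (lower VRAM usage)
QUANTIZE_8bit=True
# Or use 4-bit quantization for even lower VRAM usage:
# QUANTIZE_4bit=True
```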


@ -29,9 +29,103 @@ const sidebars = {
label: "Quickstart",
},
{
type: "doc",
id: "awel",
type: "category",
label: "AWEL(Agentic Workflow Expression Language)",
collapsed: false,
collapsible: false,
items: [
{
type: 'doc',
id: "awel/awel"
},
{
type: 'doc',
id: "awel/get_started"
},
{
type: "doc",
id: "awel/why_use_awel"
},
{
type: "category",
label: "How to",
items: [
{
type: "category",
label: "Operator",
collapsed: false,
collapsible: false,
items: [
{
type: "doc",
id: "awel/how_to/operator/map"
},
{
type: "doc",
id: "awel/how_to/operator/join"
},
{
type: "doc",
id: "awel/how_to/operator/reduce"
},
{
type: "doc",
id: "awel/how_to/operator/trigger"
},
{
type: "doc",
id: "awel/how_to/operator/streamify_abs_operator"
}
]
},
{
type: "category",
label: "Workflow",
collapsed: false,
collapsible: false,
items: [
{
type: "doc",
id: "awel/how_to/workflow/dag_pipeline"
}
]
}
]
},
{
type: "category",
label: "Cookbook",
items: [
{
type: "doc",
id: "awel/cookbook/quickstart_basic_awel_workflow"
},
{
type: "doc",
id: "awel/cookbook/sql_awel_use_rag_and_schema_linking"
},
{
type: "doc",
id: "awel/cookbook/data_analysis_use_awel"
},
{
type: "doc",
id: "awel/cookbook/multi_round_chat_withllm"
},
],
link: {
type: 'generated-index',
description: 'Example code for accomplishing common workflows with the Agentic Workflow Expression Language (AWEL). These examples show how to build different apps using LLMs (the core AWEL interface) and dbgpt modules.',
slug: "cookbook"
},
}
],
link: {
type: 'generated-index',
description: "AWEL (Agentic Workflow Expression Language) is an intelligent agent workflow expression language specifically designed for the development of large-model applications",
},
},
{
@ -122,9 +216,27 @@ const sidebars = {
id: 'application/started_tutorial/chat_dashboard',
},
{
type: 'doc',
id: 'application/started_tutorial/agent',
},
type: "category",
label: "Agents",
items: [
{
type: 'doc',
id: 'application/started_tutorial/agents/plugin',
},
{
type: "doc",
id: "application/started_tutorial/agents/db_data_analysis_agents",
},
{
type: "doc",
id: "application/started_tutorial/agents/crawl_data_analysis_agents",
}
],
link: {
type: 'generated-index',
slug: "agents",
},
}
],
},
{
@ -209,6 +321,10 @@ const sidebars = {
},
],
link: {
type: 'generated-index',
slug: "modules",
},
},
{

BIN: new binary files (content not shown):
- docs/static/img/agents/agent_scene.png (197 KiB)
- docs/static/img/agents/code_agent.png (453 KiB)
- docs/static/img/agents/crawl_agent.png (147 KiB)
- four more image files, names not shown (475 KiB, 268 KiB, 2.0 MiB, 101 KiB)
- docs/static/img/cli/cli_m.gif (2.8 MiB)
- docs/static/img/cli/kbqa.gif (2.8 MiB)
- docs/static/img/cli/kd_new.gif (2.4 MiB)
- docs/static/img/cli/start_cli_new.gif (2.6 MiB)


@ -19,9 +19,7 @@ import asyncio
import os
from dbgpt.agent.agents.agent import AgentContext
from dbgpt.agent.agents.agents_mange import agent_mange
from dbgpt.agent.agents.expand.code_assistant_agent import CodeAssistantAgent
from dbgpt.agent.agents.planner_agent import PlannerAgent
from dbgpt.agent.agents.user_proxy_agent import UserProxyAgent
from dbgpt.agent.memory.gpts_memory import GptsMemory
from dbgpt.core.interface.llm import ModelMetadata