Mirror of https://github.com/csunny/DB-GPT.git (synced 2025-08-15 23:13:15 +00:00)
docs: init v0.6.0 docs
commit 74a1a59a79 (parent b951b50689)
@@ -0,0 +1 @@
# DB-GPT V0.6.0, Defining new standards for AI-native data applications.
docs/docs/application/apps/app_chat.md (new file)
@@ -0,0 +1 @@
# App Chat

docs/docs/application/apps/app_explore.md (new file)
@@ -0,0 +1 @@
# App Explore

docs/docs/application/apps/app_manage.md (new file)
@@ -0,0 +1 @@
# App Manage

docs/docs/application/apps/chat_dashboard.md (new file)
@@ -0,0 +1 @@
# Chat Dashboard

docs/docs/application/apps/chat_data.md (new file)
@@ -0,0 +1 @@
# Chat Data

docs/docs/application/apps/chat_excel.md (new file)
@@ -0,0 +1 @@
# Chat Excel

docs/docs/application/apps/chat_knowledge.md (new file)
@@ -0,0 +1 @@
# Chat Knowledge

docs/docs/application/awel.md (new file)
@@ -0,0 +1 @@
# Use Data App With AWEL
@@ -1,75 +0,0 @@
# AWEL Flow Usage

:::info NOTE

⚠️ Please note that this tutorial mainly covers the installation and use of agent workflows. For workflow development, please refer to the `Development Guide`.

This capability is supported from version V0.5.0 onward.
:::

<p align="left">
  <img src={'/img/app/dbgpts_flow_black.png'} width="720px" />
</p>

As shown in the picture, this is the management and editing interface for DB-GPT workflows. Intelligent agents can be orchestrated into deterministic workflows using the Agentic Workflow Expression Language (AWEL). These workflows can then be used for application creation.
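The composition idea behind AWEL can be sketched in plain Python. This is a hypothetical illustration of expression-style chaining with `>>`, not the real AWEL operator API:

```python
class Op:
    """Toy operator: wraps a function and supports `>>` chaining."""

    def __init__(self, fn):
        self.fn = fn

    def __rshift__(self, other):
        # a >> b builds a new operator that feeds a's output into b
        return Op(lambda x: other.fn(self.fn(x)))

    def __call__(self, x):
        return self.fn(x)


# Hypothetical stages of a web-info-search flow
search = Op(lambda query: f"top results for: {query}")
summarize = Op(lambda results: results.upper())

flow = search >> summarize
print(flow("DB-GPT"))  # TOP RESULTS FOR: DB-GPT
```

Real AWEL operators are richer (DAG-aware, async, streaming), but the expression style is the same: small operators chained into one definitive flow.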
<p align="left">
  <img src={'/img/app/awel_flow_node.png'} width="720px" />
</p>

## Workflow Installation

In this introductory tutorial, we will cover the installation and use of workflows.

Before you can start using workflows, you need to complete the installation and deployment of DB-GPT. For detailed deployment instructions, refer to the quick start guide. Once the project is deployed, you can begin installing and using AWEL workflows. The DB-GPT team provides an official application repository that can be used for installation. Here, we will operate from the command line. Execute `dbgpt --help` in the terminal to check whether the CLI is installed correctly.
<p align="left">
  <img src={'/img/app/dbgpts_cli.png'} width="720px" />
</p>

As illustrated, the `dbgpt` command supports various operations, including model-related tasks, knowledge base interactions, trace logs, and more. Here, we will focus on the operations related to apps.

<p align="left">
  <img src={'/img/app/dbgpts_apps.png'} width="720px" />
</p>

By running `dbgpt app list-remote`, we can see that there are three AWEL workflows available in the current repository. Here, we will install the `awel-flow-web-info-search` workflow by executing `dbgpt app install awel-flow-web-info-search`.

Let's also install the other official workflows provided:
```bash
dbgpt app install awel-flow-web-info-search
dbgpt app install awel-flow-example-chat
dbgpt app install awel-flow-simple-streaming-chat
```

Executing these commands installs the respective workflows onto your system.
<p align="left">
  <img src={'/img/app/dbgpts_app_install.png'} width="720px" />
</p>

After a successful installation, restart the DB-GPT service (dynamic hot loading is on the way 😊). Refresh the page, and you will see the corresponding workflows on the AWEL workflow page.

## Creating Applications Based on Workflows
Earlier, we introduced the construction and installation of AWEL workflows. Next, let's discuss how to create data applications based on large models.

Here, we will create a search dialogue application based on the `awel-flow-web-info-search` workflow.

The core capability of the search dialogue application is to search for relevant knowledge using a search engine (such as Baidu or Google) and then provide a summarized answer. The effect is as follows:

<p align="left">
  <img src={'/img/app/app_search.png'} width="720px" />
</p>

Creating this application is very simple. In the application creation panel, click `Create` and enter the required parameters to complete the creation. A few parameters deserve attention:

- Work Mode
- Flows

The work mode used here is `awel_layout`, and the selected AWEL workflow is `awel-flow-web-info-search`, which we installed previously.
<p align="left">
  <img src={'/img/app/app_awel.png'} width="720px" />
</p>

The above is a basic introduction to using the intelligent agent workflow. We look forward to your suggestions on how to make the most of it. For instructions on how to develop workflows, refer to the development tutorial that follows.
docs/docs/application/datasources.md (new file, empty)

docs/docs/application/graph_rag.md (new file)
@@ -0,0 +1 @@
# GraphRAG

docs/docs/application/llms.md (new file, empty)

docs/docs/application/prompts.md (new file, empty)
@@ -1,71 +0,0 @@
# Crawl data analysis agents

This case demonstrates an agent that automatically writes programs to scrape internet data and perform analysis. Through natural language interaction, one can observe how the agent completes the code writing step by step and accomplishes the task. Unlike local data analysis agents, this agent handles everything from code writing to data scraping and analysis autonomously, supporting direct data crawling from the internet for analysis.
## How to use?
Below are the steps for using the data scraping and analysis agent:

- **Write the agent**: in this case, we have already prepared the code writing assistant `CodeAssistantAgent`, with the source code located at `dbgpt/agent/agents/expand/code_assistant_agent.py`
- **Insert Metadata**
- **Select Dialogue Scenario**
- **Start Dialogue**

### Write the agent
In this case, the agent has already been implemented in code; the detailed code path is `dbgpt/agent/agents/expand/code_assistant_agent.py`. The specifics of the code are as follows.
:::info note

Several other agents are also implemented under the `dbgpt/agent/agents/expand` path. Interested readers can extend them on their own.
:::

<p align="left">
  <img src={'/img/agents/code_agent.png'} width="720px" />
</p>

### Insert Metadata

The purpose of inserting metadata is to enable interaction with the agent through the web interface.
```sql
INSERT INTO dbgpt.gpts_instance
(gpts_name, gpts_describe, resource_db, resource_internet, resource_knowledge, gpts_agents, gpts_models, `language`, user_code, sys_code, created_at, updated_at, team_mode, is_sustainable)
VALUES (
  '互联网数据分析助手',
  '互联网数据分析助手',
  '',
  '{"type": "\\u4e92\\u8054\\u7f51\\u6570\\u636e", "name": "\\u6240\\u6709\\u6765\\u6e90\\u4e92\\u8054\\u7f51\\u7684\\u6570\\u636e", "introduce": "string"}',
  '{"type": "\\u6587\\u6863\\u7a7a\\u95f4", "name": "TY", "introduce": " MYSQL\\u6570\\u636e\\u5e93\\u7684\\u5b98\\u65b9\\u64cd\\u4f5c\\u624b\\u518c"}',
  '[ "CodeEngineer"]',
  '{"DataScientist": ["vicuna-13b-v1.5", "tongyi_proxyllm", "chatgpt_proxyllm"], "CodeEngineer": ["chatgpt_proxyllm", "tongyi_proxyllm", "vicuna-13b-v1.5"], "default": ["chatgpt_proxyllm", "tongyi_proxyllm", "vicuna-13b-v1.5"]}',
  'en',
  '',
  '',
  '2023-12-19 01:52:30',
  '2023-12-19 01:52:30',
  'auto_plan',
  0
);
```
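The resource columns in this row hold unicode-escaped JSON. If you want to check what a value decodes to, a quick sketch (the literal below reduces the SQL's doubled backslashes to the single ones MySQL actually stores):

```python
import json

# The resource_internet value from the INSERT statement above
resource_internet = (
    '{"type": "\\u4e92\\u8054\\u7f51\\u6570\\u636e",'
    ' "name": "\\u6240\\u6709\\u6765\\u6e90\\u4e92\\u8054\\u7f51\\u7684\\u6570\\u636e",'
    ' "introduce": "string"}'
)
parsed = json.loads(resource_internet)
print(parsed["type"])  # 互联网数据 ("internet data")
```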
### Select Dialogue Scenario

We choose the `Agent Chat` scene.

<p align="left">
  <img src={'/img/agents/agent_scene.png'} width="720px" />
</p>

After entering the scene, select the `Internet Data Analysis Assistant Agent` that we have just prepared; you can then fulfill requirements through dialogue.

<p align="left">
  <img src={'/img/agents/crawl_agent.png'} width="720px" />
</p>

### Start Dialogue

> Obtain and analyze the issue data for the 'eosphoros-ai/DB-GPT' repository over the past week and create a Markdown table grouped by day and status.

<p align="left">
  <img src={'/img/agents/crawl_agent_issue.png'} width="720px" />
</p>
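For reference, the grouping step the prompt asks for can be sketched with made-up issue data (the real agent writes similar code itself and fetches live data from the GitHub API):

```python
from collections import Counter

# Hypothetical (day, state) pairs standing in for fetched issue data
issues = [
    ("2023-12-18", "open"), ("2023-12-18", "closed"),
    ("2023-12-19", "open"), ("2023-12-19", "open"),
]
counts = Counter(issues)

# Render a Markdown table grouped by day and status
lines = ["| day | state | count |", "| --- | --- | --- |"]
for (day, state), n in sorted(counts.items()):
    lines.append(f"| {day} | {state} | {n} |")
table = "\n".join(lines)
print(table)
```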
@@ -1,50 +0,0 @@
# Local Data Analysis Agents

In this case, we will show how to use data analysis agents in a typical `GBI (Generative Business Intelligence)` application scenario. Through natural language interaction, one can observe how the agents analyze and solve problems step by step.
## How to use?
- **Data Preparation**
- **Add Data Source**
- **Insert Metadata**
- **Select Dialogue Scenario**
- **Select Agent**
- **Start Dialogue**

### Data Preparation
For data preparation, we can reuse the test data from the introductory tutorial; for detailed preparation steps, please refer to: [Data Preparation](/docs/application/started_tutorial/chat_dashboard#data-preparation).

### Add Data Source
Similarly, you may refer to the introductory tutorial on [how to add a data source](/docs/application/started_tutorial/chat_dashboard#add-data-source).

### Insert Metadata
Execute the following SQL statement to insert metadata.
```SQL
INSERT INTO dbgpt.gpts_instance
(gpts_name, gpts_describe, resource_db, resource_internet, resource_knowledge, gpts_agents, gpts_models, `language`, user_code, sys_code, created_at, updated_at, team_mode, is_sustainable)
VALUES (
  '数据分析AI助手',
  '数据分析AI助手',
  '{"type": "\\u672c\\u5730\\u6570\\u636e\\u5e93", "name": "dbgpt_test", "introduce": ""}',
  '{"type": "\\u672c\\u5730\\u6570\\u636e\\u5e93", "name": "dbgpt_test", "introduce": ""}',
  '{"type": "\\u6587\\u6863\\u7a7a\\u95f4", "name": "TY", "introduce": " MYSQL\\u6570\\u636e\\u5e93\\u7684\\u5b98\\u65b9\\u64cd\\u4f5c\\u624b\\u518c"}',
  '["DataScientist", "Reporter"]',
  '{"DataScientist": ["vicuna-13b-v1.5", "tongyi_proxyllm", "chatgpt_proxyllm"], "Reporter": ["chatgpt_proxyllm", "tongyi_proxyllm", "vicuna-13b-v1.5"], "default": ["chatgpt_proxyllm", "tongyi_proxyllm", "vicuna-13b-v1.5"]}',
  'en', '', '',
  '2023-12-15 06:58:29', '2023-12-15 06:58:29',
  'auto_plan', 0
);
```
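The `gpts_models` column maps each agent to an ordered model preference list. A sketch of how such a list could be consumed — an illustration only, not DB-GPT's actual routing code:

```python
import json

# The gpts_models value from the INSERT statement above
gpts_models = json.loads(
    '{"DataScientist": ["vicuna-13b-v1.5", "tongyi_proxyllm", "chatgpt_proxyllm"],'
    ' "Reporter": ["chatgpt_proxyllm", "tongyi_proxyllm", "vicuna-13b-v1.5"],'
    ' "default": ["chatgpt_proxyllm", "tongyi_proxyllm", "vicuna-13b-v1.5"]}'
)

def pick_model(agent: str, deployed: set) -> str:
    """Return the first preferred model for `agent` that is actually deployed."""
    for model in gpts_models.get(agent, gpts_models["default"]):
        if model in deployed:
            return model
    raise LookupError(f"no deployed model for {agent}")

print(pick_model("DataScientist", {"vicuna-13b-v1.5"}))  # vicuna-13b-v1.5
```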
### Select Dialogue Scenario

<p align="left">
  <img src={'/img/agents/agent_scene.png'} width="720px" />
</p>

### Select Agent

<p align="left">
  <img src={'/img/agents/data_analysis_agent.png'} width="720px" />
</p>

### Start conversation
> Build a sales report and analyze users' orders from at least three dimensions.

<p align="left">
  <img src={'/img/agents/data_agents_charts.png'} width="720px" />
</p>

<p align="left">
  <img src={'/img/agents/data_agents_gif.gif'} width="720px" />
</p>
@@ -1,62 +0,0 @@
# Data Agent

Regarding the use of plugins (data agents), the project currently supports basic plugin repositories and plugin extension capabilities, and ships with a built-in search plugin. Let's walk through the basic usage of the plugin.

## Steps
Using the default plugin mainly involves the following steps. For more advanced features, you can follow the subsequent advanced tutorials.
1. Enter the plugin market
2. View the list of plugins in the GitHub repository
3. Download the plugin
4. Select Data Agent
5. Start chat

### View plugin list
First, click the `Update GitHub plugin` button; the plugin list from the [GitHub plugin repository](https://github.com/eosphoros-ai/DB-GPT-Plugins) will automatically be displayed here.
<p align="left">
  <img src={'/img/plugin/show_plugin.png'} width="720px" />
</p>

### Download plugin

Click the `download` button to download the plugin locally.

<p align="left">
  <img src={'/img/plugin/download.png'} width="720px" />
</p>

After the download succeeds, you can see the plugin in the `My Plugins` interface. Uploading plugins from local files is also supported.

<p align="left">
  <img src={'/img/plugin/show_plugin_more.png'} width="720px" />
</p>
### Select `Data Agent`
Select the plugin dialogue to enable plugin use.

<p align="left">
  <img src={'/img/plugin/choose_plugin.png'} width="720px" />
</p>

### Configure cookies

Before using the default search plugin, you need to configure cookies. For a detailed configuration tutorial, see the [plugin description](https://github.com/eosphoros-ai/DB-GPT-Plugins/tree/main/src/dbgpt_plugins/search_engine).

Specify the corresponding cookie configuration items in the `.env` file to complete the configuration.
### Start chat
After configuring cookies, we can start using the plugin.

<p align="left">
  <img src={'/img/plugin/chat.gif'} width="720px" />
</p>

:::info note

For more plugin extensions and advanced usage, feel free to [reach out](https://github.com/eosphoros-ai/DB-GPT/issues) to us.
:::
@@ -1,64 +0,0 @@
# Chat Dashboard

Report analysis corresponds to the `Chat Dashboard` scenario in DB-GPT: intelligent reports can be generated and analyzed through natural language. It is one of the basic capabilities of generative BI (GBI). Let's take a look at how to use the report analysis capabilities.

## Steps
The following are the steps for using report analysis:
1. Data preparation
2. Add data source
3. Select Chat Dashboard
4. Start chat
### Data preparation

To better experience the report analysis capabilities, some test data is built into the code. To use it, we first need to create a test database.
```SQL
CREATE DATABASE IF NOT EXISTS dbgpt_test CHARACTER SET utf8;
```

After the test database is created, you can initialize the test data with one command via the script.

```shell
python docker/examples/dashboard/test_case_mysql_data.py
```
### Add data source

The steps to add a data source are the same as in [Chat Data](./chat_data.md). Select the corresponding database type in the data source management tab, then create it, filling in the necessary information to complete the creation.

<p align="left">
  <img src={'/img/chat_dashboard/add_data.png'} width="720px" />
</p>

### Select Chat Dashboard

After the data source is added, select `Chat Dashboard` on the home scene page to perform report analysis.

<p align="left">
  <img src={'/img/chat_dashboard/choose_chat_dashboard.png'} width="720px" />
</p>
|
||||
|
||||
|
||||
|
||||
### Start chat
|
||||
Enter specific questions in the dialog box on the right to start a data conversation.
|
||||
|
||||
<p align="left">
|
||||
<img src={'/img/chat_dashboard/preview.png'} width="720px" />
|
||||
</p>
|
||||
|
||||
|
||||
|
||||
:::info note
|
||||
|
||||
⚠️ Data dialogue has relatively high requirements on model capabilities, and `ChatGPT/GPT-4` has a high success rate. Other open source models you can try `Vicuna-13B`
|
||||
:::
|
||||
|
||||
Of course, in addition to `preview mode`, `editor mode` is also provided. In editor mode, SQL can be edited and modified. You can see the changes in the chart synchronously.
|
||||
|
||||
<p align="left">
|
||||
<img src={'/img/chat_dashboard/edit.png'} width="720px" />
|
||||
</p>
|
@@ -1,64 +0,0 @@
# Chat Data

The chat data capability lets you converse with data through natural language. Currently it mainly targets structured and semi-structured data, and can assist with data analysis and insight.

:::info note

Before starting a data conversation, we first need to add a data source.
:::

## Steps

To start a data conversation, you need to go through the following steps:
1. Add data source
2. Select Chat Data
3. Select the corresponding database
4. Start a conversation
### Add data source

First, select `data source` on the left to add a database. Currently, DB-GPT supports multiple database types; just select the corresponding type to add it. Here we choose MySQL as a demonstration. For the demonstration's test data, see the [test samples](https://github.com/eosphoros-ai/DB-GPT/tree/main/docker/examples/sqls).

<p align="left">
  <img src={'/img/chat_data/add_data.png'} width="720px" />
</p>

### Select Chat Data

<p align="left">
  <img src={'/img/chat_data/choose_type.png'} width="720px" />
</p>
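The fields the data source form asks for (host, port, user, password, database) are exactly what a connection URL needs; a hedged sketch with a hypothetical helper, not DB-GPT's code:

```python
def mysql_url(host: str, port: int, user: str, password: str, db: str) -> str:
    """Build a SQLAlchemy-style MySQL URL from data source form fields."""
    return f"mysql+pymysql://{user}:{password}@{host}:{port}/{db}"

print(mysql_url("127.0.0.1", 3306, "root", "secret", "dbgpt_test"))
# mysql+pymysql://root:secret@127.0.0.1:3306/dbgpt_test
```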
### Start a conversation

:::info note

⚠️ Pay attention to selecting the corresponding model and database during the dialogue. DB-GPT also provides a preview mode and an editing mode.
:::

:::tip

Preview mode
:::

<p align="left">
  <img src={'/img/chat_data/start_chat.gif'} width="720px" />
</p>

:::tip

Editing mode
:::

<p align="left">
  <img src={'/img/chat_data/edit.png'} width="720px" />
</p>
@@ -1,56 +0,0 @@
# Chat Excel

Chat Excel lets you interpret and analyze Excel data through natural language dialogue.

<p align="left">
  <img src={'/img/chat_excel/excel.png'} width="720px" />
</p>

## Steps

Using Chat Excel is relatively simple and mainly involves the following steps:
1. Select the Chat Excel dialogue scene
2. Upload an Excel document
3. Start chat

### Select `Chat Excel`
<p align="left">
  <img src={'/img/chat_excel/choose_excel.png'} width="720px" />
</p>

### Upload Excel document

:::info note

⚠️ The Excel file is converted to `.csv` format for processing.
:::

<p align="left">
  <img src={'/img/chat_excel/upload_excel.png'} width="720px" />
</p>

After the upload succeeds, the content is summarized by default and some question suggestions are recommended.

<p align="left">
  <img src={'/img/chat_excel/upload_finish.png'} width="720px" />
</p>
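Because the uploaded sheet is handled as `.csv`, its rows are easy to process programmatically; a stdlib sketch over made-up data:

```python
import csv
import io

# Made-up CSV content standing in for a converted Excel sheet
data = "product,sales\nwidget,10\ngadget,7\n"
rows = list(csv.DictReader(io.StringIO(data)))
total_sales = sum(int(r["sales"]) for r in rows)
print(total_sales)  # 17
```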
### Start chat

You can then start a conversation based on the uploaded file.

<p align="left">
  <img src={'/img/chat_excel/chat.gif'} width="720px" />
</p>

Using open source models:

<p align="left">
  <img src={'/img/chat_excel/use_vicuna.gif'} width="720px" />
</p>
@@ -1,102 +0,0 @@
# Chat Knowledge

`Chat Knowledge` provides question answering over private domain knowledge; based on the `knowledge base`, you can build intelligent Q&A systems, reading assistants, and other products. DB-GPT uses `RAG` technology to enhance knowledge retrieval.

## Terminology

:::info note

`Knowledge Space`: a document space that manages one type of knowledge. Documents of the same type can be uploaded into a knowledge space.
:::
## Steps
The knowledge base workflow is relatively simple and mainly involves the following steps.
1. Create a knowledge space
2. Upload documents
3. Wait for document vectorization
4. Knowledge base chat

### Create knowledge space

Select the knowledge base, click the `Create` button, and fill in the necessary information to complete the creation of the knowledge space.

<p align="left">
  <img src={'/img/chat_knowledge/create_knowledge_base.png'} width="720px"/>
</p>
### Upload documents

Multiple document types are currently supported, such as plain text, URL crawling, and document formats like PDF, Word, and Markdown. Select a specific document to `upload`.

<p align="left">
  <img src={'/img/chat_knowledge/upload_doc.png'} width="720px" />
</p>

Select one or more documents and click `next`.

<p align="left">
  <img src={'/img/chat_knowledge/upload_doc_finish.png'} width="720px" />
</p>
### Document Segmentation

Choose a document segmentation strategy: you can segment the document by chunk size, separator, page, paragraph, or Markdown header. The default is to segment by chunk size.

Click `Process`; it will take a few minutes to complete the document segmentation.

<p align="left">
  <img src={'/img/chat_knowledge/doc_segmentation.png'} width="720px" />
</p>

:::tip
**Automatic: the document is automatically segmented according to the document type.**

**Chunk size: segmentation by a fixed number of words per chunk.**
- chunk size: The number of words in each segment of the document. The default is 512 words.
- chunk overlap: The number of words overlapped between segments. The default is 50 words.

**Separator: segmentation by separator.**
- separator: The separator of the document. The default is `\n`.
- enable_merge: Whether to merge the separator chunks according to chunk_size after splitting. The default is `False`.

**Page: page segmentation; only supports .pdf and .pptx documents.**

**Paragraph: paragraph segmentation; only supports .docx documents.**
- separator: The paragraph separator of the document. The default is `\n`.

**Markdown header: Markdown header segmentation; only supports .md documents.**
:::
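The `chunk size` / `chunk overlap` parameters can be illustrated with a minimal splitter (a sketch of the idea, not DB-GPT's actual implementation):

```python
def split_with_overlap(words, chunk_size=512, chunk_overlap=50):
    """Split a word list into chunks of `chunk_size`, each overlapping the
    previous chunk by `chunk_overlap` words (the last chunk may be shorter)."""
    step = chunk_size - chunk_overlap
    return [words[i:i + chunk_size] for i in range(0, len(words), step)]

chunks = split_with_overlap(list(range(1000)), chunk_size=512, chunk_overlap=50)
print(len(chunks), len(chunks[0]))  # 3 512
```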
### Wait for document vectorization

Click on the `knowledge space` and observe the document `slicing` + `vectorization` status in the lower left corner. When the status reaches `FINISHED`, you can start a knowledge base conversation.

<p align="left">
  <img src={'/img/chat_knowledge/waiting_doc_vector.png'} width="720px" />
</p>

### Knowledge base chat

Click the `Chat` button to start a conversation with the knowledge base.

<p align="left">
  <img src={'/img/chat_knowledge/chat.png'} width="720px" />
</p>

### Reading assistant
In addition to the above capabilities, you can also upload documents directly in the knowledge base dialogue window; the document will be summarized by default. This capability can serve as a `reading assistant` to aid document reading.

<p align="left">
  <img src={'/img/chat_knowledge/read_helper.gif'} width="720px" />
</p>
docs/docs/changelog/Released_V0.6.0.md (new file)
@@ -0,0 +1 @@
# DB-GPT V0.6.0, Defining new standards for AI-native data applications.

docs/docs/upgrade/v0.6.0.md (new file)
@@ -0,0 +1 @@
# Upgrade To v0.6.0
@@ -43,7 +43,7 @@ const config = {
   favicon: 'img/eosphoros.jpeg',

   // Set the production url of your site here
-  url: 'http://docs.dbgpt.site',
+  url: 'http://docs.dbgpt.cn',
   // Set the /<baseUrl>/ pathname under which your site is served
   // For GitHub pages deployment, it is often '/<projectName>/'
   baseUrl: '/',
docs/sidebars.js
@@ -269,65 +269,79 @@ const sidebars = {
      collapsed: false,
      collapsible: false,
      items: [
        {
          type: 'doc',
          id: 'application/app_usage'
        },
        {
          type: 'doc',
          id: 'application/awel_flow_usage'
        },
        {
          type: 'category',
          label: 'Getting Started Tutorial',
          items: [
            {
              type: 'doc',
              id: 'application/started_tutorial/chat_knowledge',
              id: 'application/apps/app_explore',
              label: "App Explore"
            },
            {
              type: 'doc',
              id: 'application/started_tutorial/chat_data',
              id: 'application/apps/app_chat',
              label: "App Chat"
            },
            {
              type: 'doc',
              id: 'application/apps/app_manage',
              label: "App Manage"
            },
            {
              type: 'doc',
              id: 'application/apps/chat_knowledge',
              label: "Chat Knowledge Base"
            },
            {
              type: 'doc',
              id: 'application/started_tutorial/chat_excel',
              id: 'application/apps/chat_data',
              label: "Chat Data"
            },
            {
              type: 'doc',
              id: 'application/started_tutorial/chat_db',
              id: 'application/apps/chat_excel',
              label: "Chat Excel"
            },
            {
              type: 'doc',
              id: 'application/started_tutorial/chat_dashboard',
              id: 'application/apps/chat_db',
              label: "chat DB"
            },
            {
              type: 'doc',
              id: 'application/apps/chat_dashboard',
              label: "Chat Dashboard"
            },{
              type: 'doc',
              id: 'application/started_tutorial/chat_financial_report',
              id: 'application/apps/chat_financial_report',
            },
            {
              type: "category",
              label: "Agents",
              items: [
                {
                  type: 'doc',
                  id: 'application/started_tutorial/agents/plugin',
                },
                {
                  type: "doc",
                  id: "application/started_tutorial/agents/db_data_analysis_agents",
                },
                {
                  type: "doc",
                  id: "application/started_tutorial/agents/crawl_data_analysis_agents",
                }
              ],
              link: {
                type: 'generated-index',
                slug: "agents",
              },
            }
          ],
        },
        {
          type: 'doc',
          id: 'application/prompts',
          label: "Prompt"
        },
        {
          type: 'doc',
          id: 'application/llms',
          label: "LLMs"
        },
        {
          type: 'doc',
          id: 'application/datasources',
          label: "Datasources"
        },
        {
          type: 'doc',
          id: 'application/graph_rag',
          label: "GraphRAG"
        },
        {
          type: 'doc',
          id: 'application/awel',
        },
        {
          type: 'category',
          label: 'Advanced Tutorial',
@@ -489,16 +503,6 @@ const sidebars = {
         type: 'doc',
         id: "agents/introduction/custom_agents"
       },
-      // {
-      //   type: "category",
-      //   label: "Cookbook",
-      //   items: [
-      //     {
-      //       type: "doc",
-      //       id: "agents/cookbook/calculator_with_agents"
-      //     },
-      //   ],
-      // },
       {
         type: "category",
         label: "Modules",
@@ -667,6 +671,10 @@ const sidebars = {
       {
         type: 'doc',
         id: 'changelog/doc',
       },
+      {
+        type: 'doc',
+        id: 'changelog/Released_V0.6.0',
+      },
       {
         type: 'doc',
         id: 'changelog/Released_V0.5.0',
@@ -682,10 +690,15 @@ const sidebars = {
       {
         type: 'doc',
         id: 'upgrade/v0.5.0',
-      }, {
+      },
+      {
         type: 'doc',
         id: 'upgrade/v0.5.1',
       },
+      {
+        type: 'doc',
+        id: 'upgrade/v0.6.0',
+      }
     ],
   },
@@ -1832,15 +1832,10 @@
   dependencies:
     "@types/mdx" "^2.0.0"

-"@node-rs/jieba-linux-x64-gnu@1.10.3":
+"@node-rs/jieba-darwin-arm64@1.10.3":
   version "1.10.3"
-  resolved "https://registry.npmmirror.com/@node-rs/jieba-linux-x64-gnu/-/jieba-linux-x64-gnu-1.10.3.tgz"
-  integrity sha512-GF5cfvu/0wXO2fVX/XV3WYH/xEGWzMBvfqLhGiA1OA1xHIufnA1T7uU3ZXkyoNi5Bzf6dmxnwtE4CJL0nvhwjQ==
-
-"@node-rs/jieba-linux-x64-musl@1.10.3":
-  version "1.10.3"
-  resolved "https://registry.npmmirror.com/@node-rs/jieba-linux-x64-musl/-/jieba-linux-x64-musl-1.10.3.tgz"
-  integrity sha512-h45HMVU/hgzQ0saXNsK9fKlGdah1i1cXZULpB5vQRlRL2ZIaGp+ULtWTogS7vkoo2K8s2l4tqakWMg9eUjIJ2A==
+  resolved "https://registry.npmmirror.com/@node-rs/jieba-darwin-arm64/-/jieba-darwin-arm64-1.10.3.tgz"
+  integrity sha512-dwPhkav1tEARskwPz91UUXL2NXy4h0lJYTuJzpGgwXxm552zBM2JJ41kjah1364j+EOq5At3NQvf5r5rH89phQ==

 "@node-rs/jieba@^1.6.0":
   version "1.10.3"

@@ -4692,6 +4687,11 @@ fs.realpath@^1.0.0:
   resolved "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz"
   integrity sha512-OO0pH2lK6a0hZnAdau5ItzHPI6pUlvI7jMVnxUQRtw4owF2wk8lOSabtGDCTP4Ggrg2MbGnWO9X8K1t4+fGMDw==

+fsevents@~2.3.2:
+  version "2.3.3"
+  resolved "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz"
+  integrity sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==
+
 function-bind@^1.1.1:
   version "1.1.1"
   resolved "https://registry.npmjs.org/function-bind/-/function-bind-1.1.1.tgz"