Mirror of https://github.com/csunny/DB-GPT.git, synced 2025-07-31 07:34:07 +00:00

Merge remote-tracking branch 'origin/dbgpt_doc' into dbgpt_doc

This commit is contained in: commit ee94242d3c

README.en.md (223 lines)

@@ -1,106 +1,223 @@
# DB-GPT

---

[Chinese Edition](README.md)

An open Database-GPT experiment: interact with your data and environment using a local GPT, with no data leaks, 100% private, 100% secure.

[](https://star-history.com/#csunny/DB-GPT)

## What is DB-GPT?

As large models are released and iterated upon, they are becoming increasingly intelligent. However, when using large models, we face significant challenges in data security and privacy: we need to ensure that our sensitive data and environments remain completely under our control, avoiding any data privacy leaks or security risks. For this reason, we launched the DB-GPT project to build a complete private large model solution for all database-based scenarios. Because it supports local deployment, this solution can be applied not only in independent private environments but can also be deployed and isolated per business module, making the capability of large models absolutely private, secure, and controllable.

DB-GPT is an experimental open-source project that uses a localized GPT large model to interact with your data and environment. With this solution, you can be assured that there is no risk of data leakage and that your data is 100% private and secure.

## Features

Currently, we have released multiple key features, listed below to demonstrate our current capabilities:

- SQL language capabilities
  - SQL generation
  - SQL diagnosis
- Private domain Q&A and data processing
  - Database knowledge Q&A
  - Data processing
- Plugins
  - Custom plugin execution tasks, with native support for Auto-GPT plugins, for example:
    - Automatic execution of SQL and retrieval of query results
    - Automatic crawling and learning of knowledge
- Unified vector storage/indexing of the knowledge base
  - Support for unstructured data such as PDF, Markdown, CSV, and web URLs
## Demo

Run on an RTX 4090 GPU. The original video is not sped up. [YouTube](https://www.youtube.com/watch?v=1PWI6F89LPo)

### Run

<p align="center">
<img src="./assets/demo_en.gif" width="600px" />
</p>

### SQL Generation

1. Generate `CREATE TABLE` SQL:

<p align="center">
<img src="./assets/SQL_Gen_CreateTable_en.png" width="600px" />
</p>

2. Generate executable SQL: first select the target database, and the model can then generate SQL based on that database's schema information. A successful run looks like this:

<p align="center">
<img src="./assets/exeable_en.png" width="600px" />
</p>
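The schema-aware step above can be pictured with a small sketch. This is an illustrative assumption about how a schema can be rendered into a generation prompt, not DB-GPT's actual code; the function and table names are hypothetical.

```python
def build_sql_prompt(question: str, schema: dict[str, list[str]]) -> str:
    """Render the selected database's schema into the prompt so the model
    can generate SQL that references real tables and columns."""
    schema_lines = [
        f"- {table}({', '.join(columns)})" for table, columns in schema.items()
    ]
    return (
        "You are a SQL assistant. Only use the tables below.\n"
        "Schema:\n" + "\n".join(schema_lines) +
        f"\n\nQuestion: {question}\nSQL:"
    )

# Hypothetical schema for illustration only.
prompt = build_sql_prompt(
    "How many users signed up this month?",
    {"users": ["id", "name", "created_at"]},
)
```

The model's answer can then be executed directly against the selected database, which is what the screenshot above demonstrates.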
### Q&A

<p align="center">
<img src="./assets/DB_QA_en.png" width="600px" />
</p>

1. Q&A based on the default built-in knowledge base:

<p align="center">
<img src="./assets/Knownledge_based_QA_en.png" width="600px" />
</p>

2. Add your own knowledge base:

<p align="center">
<img src="./assets/new_knownledge_en.gif" width="600px" />
</p>

3. Learning from data crawled from the Internet

- TODO
## Introduction

DB-GPT creates a large model operating environment using [FastChat](https://github.com/lm-sys/FastChat) and offers a large language model powered by [Vicuna](https://huggingface.co/Tribbiani/vicuna-7b). In addition, we provide private domain knowledge base question answering through LangChain, incorporating llama-index for knowledge embedding to improve Database-QA capabilities. We also support additional plugins, and the design natively supports the Auto-GPT plugin ecosystem.

The architecture of DB-GPT is shown in the following figure:

<p align="center">
<img src="./assets/DB-GPT.png" width="600px" />
</p>

The core capabilities mainly consist of the following parts:

1. Knowledge base: supports private domain knowledge base question answering.
2. Large model management: provides a large model serving environment based on FastChat.
3. Unified data vector storage and indexing: provides a uniform way to store and index various data types.
4. Connections: connects different modules and data sources to achieve data flow and interaction.
5. Agent and plugins: provides an agent and plugin mechanism, allowing users to customize and enhance the system's behavior.
6. Automatic prompt generation and optimization: automatically generates high-quality prompts and optimizes them to improve response efficiency.
7. Multi-platform product interface: supports multiple clients, such as web, mobile, and desktop applications.

Below is a brief introduction to each module:
### Knowledge base capability

As the knowledge base is currently the most significant user demand scenario, we natively support the construction and processing of knowledge bases. We also provide multiple knowledge base management strategies in this project, such as:

1. Default built-in knowledge base
2. Custom addition of knowledge bases
3. Various usage scenarios, such as constructing knowledge bases through plugin capabilities or web crawling

Users only need to organize their knowledge documents, and they can use our existing capabilities to build the knowledge base required for the large model.
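Building a knowledge base typically starts by splitting documents into overlapping chunks before embedding them. The sketch below is illustrative only (the function name and parameters are assumptions, not DB-GPT's actual ingestion code):

```python
def split_into_chunks(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split a document into overlapping chunks so each piece fits an
    embedding model's input window while keeping context at boundaries."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "DB-GPT supports private knowledge bases. " * 20
chunks = split_into_chunks(doc)
```

Each chunk would then be embedded and written to the vector store described later in this document.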
### LLMs Management

In the underlying large model integration, we have designed an open interface that supports integration with various large models. At the same time, we apply a very strict control and evaluation mechanism to the effectiveness of integrated models: in terms of accuracy, an integrated model needs to align with ChatGPT's capability at a level of 85% or higher. We use these higher standards to select models, hoping to save users a cumbersome testing and evaluation process.
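An "open interface" for model integration can be pictured as a small abstraction layer. The following is a sketch of the idea only; the class and method names are assumptions and not DB-GPT's actual API:

```python
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Common interface an integrated large model could implement,
    so that backends (Vicuna, ChatGLM, ...) are interchangeable."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class EchoBackend(LLMBackend):
    """Stand-in backend used here only to show the contract."""
    def generate(self, prompt: str) -> str:
        return f"[echo] {prompt}"

def answer(backend: LLMBackend, question: str) -> str:
    # Any backend satisfying the interface can serve the request.
    return backend.generate(question)

reply = answer(EchoBackend(), "hello")
```

The point of such an interface is that evaluation and swapping of models can happen behind a single contract, which is what makes the strict model vetting described above practical.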
### Vector storage and indexing

To facilitate the management of knowledge after vectorization, we have built in multiple vector storage engines, from the memory-based Chroma to the distributed Milvus, so users can choose the storage engine that fits their scenario. The storage of knowledge vectors is the cornerstone of AI capability enhancement: as the intermediate language for interaction between humans and large language models, vectors play a very important role in this project.
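The role of a vector store can be illustrated with a tiny in-memory index using cosine similarity. Chroma and Milvus provide production versions of this idea; the bag-of-words "embedding" below is a toy stand-in for a real neural embedding model, and all names here are illustrative:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts, standing in for a neural model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryVectorStore:
    """Minimal in-memory vector index: add embedded docs, query by similarity."""
    def __init__(self):
        self.docs: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def query(self, text: str, k: int = 1) -> list[str]:
        q = embed(text)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [t for t, _ in ranked[:k]]

store = MemoryVectorStore()
store.add("MySQL is a relational database")
store.add("Vicuna is a large language model")
best = store.query("relational database")[0]
```

Swapping this toy class for Chroma or Milvus changes scale and persistence, but not the add/query shape of the interaction.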
### Connections

To interact more conveniently with users' private environments, the project provides a connections module that supports connecting to databases, Excel files, knowledge bases, and other environments, enabling information and data exchange.
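A connector in the spirit of this module exposes a uniform surface (inspect schema, run a query) over a data source. The sketch below uses stdlib SQLite purely for illustration; DB-GPT's real connectors target MySQL and other sources, and the class and method names here are assumptions:

```python
import sqlite3

class SQLiteConnector:
    """Illustrative connector: a uniform get_schema/run_sql surface
    over a data source."""

    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)

    def run_sql(self, sql: str) -> list[tuple]:
        return self.conn.execute(sql).fetchall()

    def get_schema(self) -> list[str]:
        rows = self.run_sql("SELECT name FROM sqlite_master WHERE type='table'")
        return [r[0] for r in rows]

db = SQLiteConnector()
db.run_sql("CREATE TABLE users (id INTEGER, name TEXT)")
db.run_sql("INSERT INTO users VALUES (1, 'alice')")
tables = db.get_schema()
result = db.run_sql("SELECT name FROM users")
```

With such a surface, the SQL-generation pipeline can fetch the schema for prompting and then execute the generated SQL through the same connector.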
### Agent and Plugin

Agent and plugin capability is the core of whether large models can be automated. In this project, we natively support a plugin mode in which large models can automatically achieve their goals. To take full advantage of the community's strengths, the plugins used in this project natively support the Auto-GPT plugin ecosystem, meaning Auto-GPT plugins can run directly in our project.
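A plugin mechanism can be reduced to a registry of named commands the agent invokes by name. This is a minimal sketch of the concept only; Auto-GPT-compatible plugins expose a richer interface, and everything named here is hypothetical:

```python
class PluginRegistry:
    """Minimal plugin mechanism: plugins register a named command
    that an agent can invoke by name."""

    def __init__(self):
        self.commands = {}

    def register(self, name: str):
        def wrap(fn):
            self.commands[name] = fn
            return fn
        return wrap

    def run(self, name: str, *args):
        return self.commands[name](*args)

registry = PluginRegistry()

@registry.register("add")
def add(a: int, b: int) -> int:
    return a + b

# The agent resolves the command by name at run time.
result = registry.run("add", 2, 3)
```

The indirection through names is what lets a model-driven agent choose and execute tools it was never hard-wired to call.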
### Prompt Automatic Generation and Optimization

Prompts are a very important part of the interaction between a large model and a user, and to a certain extent they determine the quality and accuracy of the answers the model generates. In this project, we automatically optimize the prompt according to user input and usage scenario, making it easier and more efficient for users to use large language models.
### Multi-Platform Product Interface

TODO: On the client side, we will provide a multi-platform product interface, including PC, mobile, command line, Slack, and other platforms.

## Deployment
### 1. Hardware Requirements

Because our project achieves 85% or more of ChatGPT's capability, there are certain hardware requirements. Overall, though, the project can be deployed and used on consumer-grade graphics cards. The specific hardware requirements for deployment are as follows:

| GPU      | VRAM Size | Performance                                         |
| -------- | --------- | --------------------------------------------------- |
| RTX 4090 | 24 GB     | Smooth conversation inference                       |
| RTX 3090 | 24 GB     | Smooth conversation inference, better than V100     |
| V100     | 16 GB     | Conversation inference possible, noticeable stutter |
### 2. Install

Python >= 3.10 is required.

This project relies on a local MySQL database service, which you need to install locally. We recommend using Docker:

```bash
$ docker run --name=mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=aa12345678 -dit mysql:latest
```

We use the [Chroma embedding database](https://github.com/chroma-core/chroma) as the default vector database, so no special installation is needed. If you choose to connect to another database, you can follow our tutorial to install and configure it.

For the entire installation process of DB-GPT, we use a Miniconda3 virtual environment. Create the virtual environment and install the Python dependencies:

```bash
conda create -n dbgpt_env python=3.10
conda activate dbgpt_env
pip install -r requirements.txt
```

Alternatively, you can use the following command:

```bash
cd DB-GPT
conda env create -f environment.yml
```

It is recommended to set the Python package path to avoid runtime errors due to packages not being found:

```bash
echo "/root/workspace/DB-GPT" > /root/miniconda3/env/dbgpt_env/lib/python3.10/site-packages/dbgpt.pth
```

Notice: you need to replace the path with your own.
### 3. Run

You can refer to this document to obtain the Vicuna weights: [Vicuna](https://github.com/lm-sys/FastChat/blob/main/README.md#model-weights). If you have difficulty with this step, you can also directly use the model from [this link](https://huggingface.co/Tribbiani/vicuna-7b) as a replacement.

1. Run the model server:

```bash
$ python pilot/server/llmserver.py
```

2. Run the Gradio web UI:

```bash
$ python pilot/server/webserver.py
```

Notice: the web server needs to connect to the LLM server, so you must edit the pilot/configs/model_config.py file and change VICUNA_MODEL_SERVER = "http://127.0.0.1:8000" to your own address. This is very important.
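For example, if the LLM server runs on another host, the relevant line in pilot/configs/model_config.py would look like this (the host and port below are illustrative placeholders):

```python
# pilot/configs/model_config.py (excerpt; host/port are illustrative)
VICUNA_MODEL_SERVER = "http://192.168.1.10:8000"  # set to your llmserver address
```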
## Usage Instructions

We provide a Gradio user interface through which you can use DB-GPT. We have also prepared several reference articles (written in Chinese) that introduce the code and principles of the project:

- [LLM Practical In Action Series (1) — Combined Langchain-Vicuna Application Practical](https://medium.com/@cfqcsunny/llm-practical-in-action-series-1-combined-langchain-vicuna-application-practical-701cd0413c9f)

## Acknowledgement

The achievements of this project are thanks to the technical community, especially the following projects:

- [FastChat](https://github.com/lm-sys/FastChat) for providing chat services
- [vicuna-13b](https://lmsys.org/blog/2023-03-30-vicuna/) as the base model
- [langchain](https://langchain.readthedocs.io/) tool chain
- [Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT) universal plugin template
- [Hugging Face](https://huggingface.co/) for big model management
- [Chroma](https://github.com/chroma-core/chroma) for vector storage
- [Milvus](https://milvus.io/) for distributed vector storage
- [ChatGLM](https://github.com/THUDM/ChatGLM-6B) as a base model
- [llama_index](https://github.com/jerryjliu/llama_index) for enhancing database-related knowledge using [in-context learning](https://arxiv.org/abs/2301.00234) based on existing knowledge bases
<!-- GITCONTRIBUTOR_START -->

## Contributors

|[<img src="https://avatars.githubusercontent.com/u/17919400?v=4" width="100px;"/><br/><sub><b>csunny</b></sub>](https://github.com/csunny)<br/>|[<img src="https://avatars.githubusercontent.com/u/1011681?v=4" width="100px;"/><br/><sub><b>xudafeng</b></sub>](https://github.com/xudafeng)<br/>|[<img src="https://avatars.githubusercontent.com/u/7636723?s=96&v=4" width="100px;"/><br/><sub><b>明天</b></sub>](https://github.com/yhjun1026)<br/>|[<img src="https://avatars.githubusercontent.com/u/13723926?v=4" width="100px;"/><br/><sub><b>Aries-ckt</b></sub>](https://github.com/Aries-ckt)<br/>|[<img src="https://avatars.githubusercontent.com/u/95130644?v=4" width="100px;"/><br/><sub><b>thebigbone</b></sub>](https://github.com/thebigbone)<br/>|
| :---: | :---: | :---: | :---: | :---: |

This project follows the git-contributor [spec](https://github.com/xudafeng/git-contributor), auto updated at `Sun May 14 2023 23:02:43 GMT+0800`.

<!-- GITCONTRIBUTOR_END -->

## Licence

The MIT License (MIT)

## Contact Information

We are working on building a community. If you have any ideas about building the community, feel free to contact us.

| Name     | Email                  |
| -------- | ---------------------- |
| yushun06 | my_prophet@hotmail.com |
| csunny   | cfqcsunny@gmail.com    |
README.md (73 lines changed)

@@ -1,14 +1,15 @@
# DB-GPT

---

[English Edition](README.en.md)

[](https://star-history.com/#csunny/DB-GPT)

## What is DB-GPT?

As large models are released and iterated upon, they are becoming increasingly intelligent. However, when using large models, we face significant challenges in data security and privacy: our private data and environment must remain fully in our own hands, completely controllable, avoiding any data privacy leaks or security risks. For this reason, we launched the DB-GPT project to build a complete private large model solution for all database-based scenarios. Because it supports local deployment, this solution can be applied not only in independent private environments but can also be deployed and isolated per business module, making the capability of large models absolutely private, secure, and controllable.

DB-GPT is an open-source, database-oriented GPT experiment project that uses a localized GPT large model to interact with your data and environment, with no risk of data leakage: 100% private, 100% secure.

## Features

Currently, we have released multiple key features, listed here to demonstrate our current capabilities.
@@ -23,8 +24,7 @@

- Automatic SQL execution and retrieval of query results
- Automatic crawling and learning of knowledge
- Unified vector storage/indexing of the knowledge base
  - Support for unstructured data, including PDF, Markdown, CSV, and web URLs

## Demo
@@ -54,12 +54,19 @@

<p align="center">
<img src="./assets/exeable.png" width="600px" />
</p>

3. Automatically analyze and execute SQL, and output the run results:

<p align="center">
<img src="./assets/Auto-DB-GPT.png" width="600px" />
</p>

### Database Q&A

<p align="center">
<img src="./assets/DB_QA.png" width="600px" />
</p>

1. Q&A based on the default built-in knowledge base:
@@ -76,7 +83,7 @@

- TODO

## Architecture

DB-GPT builds the large model runtime environment on [FastChat](https://github.com/lm-sys/FastChat) and provides vicuna as the base large language model. In addition, we provide private domain knowledge base question answering through LangChain. We also support a plugin mode, with native support for Auto-GPT plugins in the design.

The overall architecture of DB-GPT is shown in the figure below:
@@ -85,18 +92,23 @@

</p>

The core capabilities mainly consist of the following parts:

1. Knowledge base: supports private domain knowledge base question answering.
2. Large model management: provides a large model serving environment based on FastChat.
3. Unified data vector storage and indexing: provides a uniform way to store and index various data types.
4. Connections: connects different modules and data sources to achieve data flow and interaction.
5. Agent and plugins: provides an agent and plugin mechanism, allowing users to customize and enhance the system's behavior.
6. Automatic prompt generation and optimization: automatically generates high-quality prompts and optimizes them to improve response efficiency.
7. Multi-platform product interface: supports multiple clients, such as web, mobile, and desktop applications.

Below is a brief introduction to each module:

### Knowledge base capability

As the knowledge base is currently the most significant user demand scenario, we natively support the construction and processing of knowledge bases. We also provide multiple knowledge base management strategies in this project, such as:

1. Default built-in knowledge base
2. Custom addition of knowledge bases
3. Various usage scenarios, such as constructing knowledge bases through plugin capabilities or web crawling

Users only need to organize their knowledge documents, and they can use our existing capabilities to build the knowledge base required for the large model.

### Large model management capability

In the underlying large model integration, we have designed an open interface that supports integration with various large models. At the same time, we apply a very strict control and evaluation mechanism to the effectiveness of integrated models: compared with ChatGPT, an integrated model needs to reach at least 85% capability alignment in accuracy. We use these higher standards to select models, hoping to save users a cumbersome testing and evaluation process.
@@ -114,20 +126,18 @@

Prompts are a very important part of the interaction between a large model and a user, and to a certain extent they determine the quality and accuracy of the answers the model generates. In this project, we automatically optimize the prompt according to user input and usage scenario, making it easier and more efficient for users to use large language models.

### Multi-platform product interface

TODO: On the client side, we will provide a multi-platform product interface, including PC, mobile, command line, Slack, and other platforms.

## Installation Tutorial

### 1. Hardware requirements

Because our project achieves 85% or more of ChatGPT's capability, there are certain hardware requirements. Overall, though, the project can be deployed and used on consumer-grade graphics cards. The specific hardware requirements for deployment are as follows:

| GPU     | VRAM | Performance                                                     |
| ------- | ---- | --------------------------------------------------------------- |
| RTX4090 | 24G  | Smooth conversation inference, no stutter                       |
| RTX3090 | 24G  | Smooth conversation inference, slight stutter, better than V100 |
| V100    | 16G  | Conversation inference possible, noticeable stutter             |

### 2. Installing DB-GPT

This project relies on a local MySQL database service, which you need to install locally. We recommend installing it with Docker.
@@ -154,10 +164,10 @@

### 3. Run the large model

For the base model, you can follow the [Vicuna](https://github.com/lm-sys/FastChat/blob/main/README.md#model-weights) tutorial to synthesize the weights. If this step is difficult, you can also directly use the model from [this link](https://huggingface.co/Tribbiani/vicuna-7b) as a replacement.

Run the model service:

```
cd pilot/server
python llmserver.py
```
@@ -168,10 +178,11 @@

```bash
$ python webserver.py
```

Note: before starting the web server, you need to edit pilot/configs/model_config.py and change VICUNA_MODEL_SERVER = "http://127.0.0.1:8000" to your server address.

## Usage Instructions

We provide a Gradio user interface through which you can use DB-GPT. We have also prepared the following reference articles introducing the project's code and principles:

1. [LLM in Action Series (1): Langchain-Vicuna Combined Application in Practice](https://zhuanlan.zhihu.com/p/628750042)
2. [LLM in Action Series (2): DB-GPT Alibaba Cloud Deployment Guide](https://zhuanlan.zhihu.com/p/629467580)
3. [LLM in Action Series (3): DB-GPT Plugin Model Principles and Usage](https://zhuanlan.zhihu.com/p/629623125)
@@ -183,8 +194,8 @@

- [FastChat](https://github.com/lm-sys/FastChat) for providing chat services
- [vicuna-13b](https://huggingface.co/Tribbiani/vicuna-13b) as the base model
- [langchain](https://github.com/hwchase17/langchain) tool chain
- [Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT) universal plugin template
- [Hugging Face](https://huggingface.co/) for large model management
- [Chroma](https://github.com/chroma-core/chroma) for vector storage
- [Milvus](https://milvus.io/) for distributed vector storage
- [ChatGLM](https://github.com/THUDM/ChatGLM-6B) as a base model
@@ -205,7 +216,7 @@

This is a sophisticated and innovative tool for databases, and our project is under active development, with new features released continuously. If you have any specific questions while using it, please first open an issue in the project. If needed, you can contact us via the WeChat accounts below; we will do our best to help, and everyone is very welcome to participate in building the project.

<p align="center">
<img src="./assets/wechat.jpg" width="320px" />
<img src="./assets/DB_GPT_wechat.png" width="320px" />
</p>

## Licence
New binary files (not shown):

- assets/Auto-DB-GPT.png (88 KiB)
- assets/DB_GPT_wechat.png (257 KiB)
- assets/Knownledge_based_QA_en.png (310 KiB)
- assets/new_knownledge_en.gif (2.5 MiB)