Interact with your data and environment using a local GPT: no data leaks, 100% private, 100% secure.

DB-GPT: Revolutionizing Database Interactions with Private LLM Technology

What is DB-GPT?

DB-GPT is an experimental open-source project that uses locally deployed GPT-style large language models to interact with your data and environment. With this solution, you can be assured that there is no risk of data leakage, and your data is 100% private and secure.

Contents

DB-GPT YouTube Video

Demo

Run on an RTX 4090 GPU.

https://github.com/csunny/DB-GPT/assets/13723926/55f31781-1d49-4757-b96e-7ef6d3dbcf80

Chat with data and generate charts.

Text2SQL: generate SQL from chat.

Knowledge space to manage docs.

Chat with knowledge sources such as URLs, PDF, CSV, and Word documents.

Features

We have released multiple key features, listed below to demonstrate our current capabilities:

  • SQL language capabilities

    • SQL generation
    • SQL diagnosis
  • Private domain Q&A and data processing

    • Knowledge management (we currently support many document formats: txt, pdf, md, html, doc, ppt, and url)
    • Database knowledge Q&A
    • Knowledge embedding
  • ChatDB

  • ChatDashboard

  • Plugins

    • Support custom plugin execution tasks, with native support for Auto-GPT plugins, for example:
      • Automatic execution of SQL and retrieval of query results
      • Automatic crawling and learning of knowledge
  • Unified vector storage/indexing of knowledge base

    • Support for unstructured data such as PDF, TXT, Markdown, CSV, DOC, PPT, and WebURL
  • Multi-LLM support: currently supported models include (a sample model configuration follows this list):

    • 🔥 Vicuna-v1.5(7b,13b)
    • 🔥 llama-2(7b,13b,70b)
    • WizardLM-v1.2(13b)
    • Vicuna (7b,13b)
    • ChatGLM-6b (int4,int8)
    • ChatGLM2-6b (int4,int8)
    • guanaco(7b,13b,33b)
    • Gorilla(7b,13b)
    • baichuan(7b,13b)
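
A minimal sketch of model selection: assuming the LLM_MODEL key from .env.template (check that file for the exact identifiers your build supports), switching to Vicuna v1.5 13B would look like this in your .env file:

# .env (hypothetical values; the exact model identifier may differ)
LLM_MODEL=vicuna-13b-v1.5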

Star History Chart

Introduction

DB-GPT builds a large model operating system on top of FastChat and provides a large language model powered by Vicuna. In addition, we provide private domain knowledge base question-answering capability, as well as support for additional plugins; our design natively supports the Auto-GPT plugin. Our vision is to make it easier and more convenient to build applications around databases and LLMs.

The architecture of DB-GPT is shown in the following figure:

The core capabilities mainly consist of the following parts:

  1. Knowledge base capability: Supports private domain knowledge base question-answering capability.
  2. Large-scale model management capability: Provides a large model operating environment based on FastChat.
  3. Unified data vector storage and indexing: Provides a uniform way to store and index various data types.
  4. Connection module: Used to connect different modules and data sources to achieve data flow and interaction.
  5. Agent and plugins: Provides Agent and plugin mechanisms, allowing users to customize and enhance the system's behavior.
  6. Prompt generation and optimization: Automatically generates high-quality prompts and optimizes them to improve system response efficiency.
  7. Multi-platform product interface: Supports various client products, such as web, mobile applications, and desktop applications.
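
To make items 1 and 3 above more concrete, here is a toy, self-contained Python sketch of the retrieve-then-prompt pattern behind knowledge base Q&A over a vector index. It is only an illustration: the bag-of-words "embedding" and in-memory index stand in for the real embedding models and vector stores that DB-GPT uses.

# Toy sketch of knowledge retrieval followed by prompt construction.
from collections import Counter
from math import sqrt

def embed(text):
    # Hypothetical stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "The orders table stores one row per customer order.",
    "Daily revenue is computed by summing order totals per day.",
]
index = [(doc, embed(doc)) for doc in documents]  # in-memory "vector store"

question = "How is daily revenue calculated?"
best_doc, _ = max(index, key=lambda item: cosine(item[1], embed(question)))

# The retrieved context is packed into the prompt sent to the local LLM.
prompt = f"Answer using this context:\n{best_doc}\n\nQuestion: {question}"
print(prompt)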

SubModule

Image

🌐 AutoDL Image

Install

Quickstart

Language Switching

In the .env configuration file, modify the LANGUAGE parameter to switch to a different language. The default is English (zh for Chinese, en for English; support for other languages will be added later).
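
For example, to switch the interface to Chinese, set the parameter in your .env file (en restores English):

LANGUAGE=zh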

Platform Deployment

  • AutoDL image. You can refer to the image instructions to build it from scratch, or use docker pull to obtain the shared image, then follow the instructions in the document to run it. If you have any questions, please leave a comment.
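
If you prefer a container-based setup, the repository also ships a docker-compose.yml; assuming it matches your environment (GPU drivers, model paths), a typical start would be:

docker compose up -d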

Usage Instructions

If NLTK-related errors occur while using the knowledge base, you need to install the NLTK toolkit. For more details, please refer to the NLTK documentation. Then run the Python interpreter and type the following commands:

>>> import nltk
>>> nltk.download()
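
If the error message names a specific data package (punkt and averaged_perceptron_tagger are common ones, but check the actual message), you can download just that package instead of opening the interactive downloader:

>>> nltk.download("punkt")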

Acknowledgement

This project stands on the shoulders of giants and would not work without the open-source community. Special thanks to the following projects for their excellent contributions to the AI industry:

Contribution

RoadMap

License

The MIT License (MIT)

Contact Information

We are working on building a community; if you have any ideas about how to build it, feel free to contact us.