dbgpt_doc->main (#217)

merge dbgpt_doc->main
This commit is contained in:
magic.chen 2023-06-14 18:03:50 +08:00 committed by GitHub
commit 3c9fffe815
12 changed files with 480 additions and 96 deletions

(Binary image file changed — not shown. After: 745 KiB.)

@@ -37,6 +37,7 @@ Once the environment is installed, we have to create a new folder "models" in th
```
git clone https://huggingface.co/Tribbiani/vicuna-13b
git clone https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
git clone https://huggingface.co/GanymedeNil/text2vec-large-chinese
```
The model files are large and will take a long time to download. While they download, let's configure the .env file, which is created by copying .env.template.
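As a concrete sketch of that copy-and-edit step (`LLM_MODEL` is one of the basic parameters the guide configures in `.env`; the scratch directory and the stand-in template contents below exist only so the sketch runs on its own):

```shell
# Self-contained sketch: in the real project you would run only the last two
# commands from the DB-GPT project root, where .env.template already exists.
cd "$(mktemp -d)"
printf 'LLM_MODEL=vicuna-13b\n' > .env.template        # stand-in for the real template

cp .env.template .env                                  # create your working config
sed -i 's/^LLM_MODEL=.*/LLM_MODEL=chatglm-6b/' .env    # e.g. switch the serving model
```

Note that `sed -i` with no suffix is GNU sed syntax; on macOS use `sed -i ''`.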
@@ -57,6 +58,11 @@ If you have difficulty with this step, you can also directly use the model from
$ python pilot/server/llmserver.py
```
Starting `llmserver.py` with the following command will result in a relatively stable Python service with multiple processes.
```bash
$ gunicorn llmserver:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 &
```
Run gradio webui
```bash


@@ -10,16 +10,15 @@ DB-GPT is divided into several functions, including chat with knowledge base, ex
### Knowledge
[How to Create your own knowledge repository](https://db-gpt.readthedocs.io/en/latest/modules/knowledge.html)
[How to Create your own knowledge repository](https://db-gpt.readthedocs.io/en/latest/modules/knownledge.html)
[Add new Knowledge demonstration](https://github.com/csunny/DB-GPT/blob/main/assets/new_knownledge_en.gif)
![Add new Knowledge demonstration](../../assets/new_knownledge.gif)
### SQL Generation
[sql generation demonstration](https://github.com/csunny/DB-GPT/blob/main/assets/demo_en.gif)
![sql generation demonstration](../../assets/demo_en.gif)
### SQL Execute
[sql execute demonstration](https://github.com/csunny/DB-GPT/blob/main/assets/auto_sql_en.gif)
![sql execute demonstration](../../assets/auto_sql_en.gif)
### Plugins
[db plugins demonstration](https://github.com/csunny/DB-GPT/blob/main/assets/auto_plugin.gif)
![db plugins demonstration](../../assets/dbgpt_bytebase_plugin.gif)


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.1.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-06-11 14:10+0800\n"
"POT-Creation-Date: 2023-06-14 17:26+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -17,95 +17,94 @@ msgstr ""
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.11.0\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/getting_started.md:1 cf1947dea9a843dd8b6fff68642f29b1
#: ../../getting_started/getting_started.md:1 a0477412435c4c569cf71d243d2884c7
msgid "Quickstart Guide"
msgstr "使用指南"
#: ../../getting_started/getting_started.md:3 4184879bf5b34521a95e497f4747241a
#: ../../getting_started/getting_started.md:3 331f8c3fbbac44c1b75d2ff595c0235f
msgid ""
"This tutorial gives you a quick walkthrough about use DB-GPT with you "
"environment and data."
msgstr "本教程为您提供了关于如何使用DB-GPT的使用指南。"
#: ../../getting_started/getting_started.md:5 7431b72cc1504b8bbcafb7512a6b6c92
#: ../../getting_started/getting_started.md:5 df41fe97067d4ba680e3231f05a843de
msgid "Installation"
msgstr "安装"
#: ../../getting_started/getting_started.md:7 b8faf2ec4e034855a2674ffcade8cee2
#: ../../getting_started/getting_started.md:7 73f72c06a89341f38f4bfbe70ed0d2ae
msgid "To get started, install DB-GPT with the following steps."
msgstr "请按照以下步骤安装DB-GPT"
#: ../../getting_started/getting_started.md:9 ae0f536a064647cda04ea3d253991d80
#: ../../getting_started/getting_started.md:9 59c159fd30104ba081e2ebbf1605fe11
msgid "1. Hardware Requirements"
msgstr "1. 硬件要求"
#: ../../getting_started/getting_started.md:10 8fa637100e644b478e0d6858f0a5b63d
#: ../../getting_started/getting_started.md:10 126558467caa4b05bea0bb051c864831
msgid ""
"As our project has the ability to achieve ChatGPT performance of over "
"85%, there are certain hardware requirements. However, overall, the "
"project can be deployed and used on consumer-grade graphics cards. The "
"specific hardware requirements for deployment are as follows:"
msgstr "由于我们的项目有能力达到85%以上的ChatGPT性能所以对硬件有一定的要求。"
"但总体来说,我们在消费级的显卡上即可完成项目的部署使用,具体部署的硬件说明如下:"
msgstr "由于我们的项目有能力达到85%以上的ChatGPT性能所以对硬件有一定的要求。但总体来说我们在消费级的显卡上即可完成项目的部署使用具体部署的硬件说明如下:"
#: ../../getting_started/getting_started.md c68539579083407882fb0d28943d40db
#: ../../getting_started/getting_started.md 967e5cde5c6241edae9e0e0e0b217221
msgid "GPU"
msgstr "GPU"
#: ../../getting_started/getting_started.md 613fbe77d41a4a20a30c3c9a0b6ec20c
#: ../../getting_started/getting_started.md 05be5479b15b403c8aa3374ea53feff8
msgid "VRAM Size"
msgstr "显存大小"
#: ../../getting_started/getting_started.md c0b7f8249d3d4c629ba5deb8188a49b4
#: ../../getting_started/getting_started.md ea646d956e834a75912780789946bb47
msgid "Performance"
msgstr "性能"
#: ../../getting_started/getting_started.md 5d103f7e4d1b4b6cb7358c0c717c9f73
#: ../../getting_started/getting_started.md aba96df6b91441b982128426eb3a2ebb
msgid "RTX 4090"
msgstr "RTX 4090"
#: ../../getting_started/getting_started.md 48338f6b18dc41efb3613d47b1a762a7
#: f14d278e083440b58fc7faeed30e2879
#: ../../getting_started/getting_started.md d2797ffe3d534460ae77a3397fc07c1c
#: e1b5f1a7502e475b872eebd75972a87f
msgid "24 GB"
msgstr "24 GB"
#: ../../getting_started/getting_started.md dc238037ff3449cdb95cbd882d8de170
#: ../../getting_started/getting_started.md 4ed99c06dc4643c290b5914142ea8371
msgid "Smooth conversation inference"
msgstr "可以流畅的进行对话推理,无卡顿"
#: ../../getting_started/getting_started.md d7f84ac79bf84cb6a453d3bfd26eb935
#: ../../getting_started/getting_started.md b3dc189389cd41f2883acfe3fad3d6a4
msgid "RTX 3090"
msgstr "RTX 3090"
#: ../../getting_started/getting_started.md 511ee322b777476b87a3aa5624609944
#: ../../getting_started/getting_started.md 0bea7ccef4154e2994144f81af877919
msgid "Smooth conversation inference, better than V100"
msgstr "可以流畅进行对话推理有卡顿感但好于V100"
#: ../../getting_started/getting_started.md 974b704e8cf84f6483774153df8a8c6c
#: ../../getting_started/getting_started.md 7d1c9e5c16184f2bacf8b12d6f38f629
msgid "V100"
msgstr "V100"
#: ../../getting_started/getting_started.md 72008961ce004a0fa24b74db55fcf96e
#: ../../getting_started/getting_started.md e9ddddf92e15420f85852cea1a1bbd8d
msgid "16 GB"
msgstr "16 GB"
#: ../../getting_started/getting_started.md 2a3b936fe04c4b7789680c26be7f4869
#: ../../getting_started/getting_started.md f5532da60b99495c8329d674749ae79f
msgid "Conversation inference possible, noticeable stutter"
msgstr "可以进行对话推理,有明显卡顿"
#: ../../getting_started/getting_started.md:18 fb1dbccb8f804384ade8e171aa40f99c
#: ../../getting_started/getting_started.md:18 3357d10704b94249b8ccdf7fd3645624
msgid "2. Install"
msgstr "2. 安装"
#: ../../getting_started/getting_started.md:20 695fdb8858c6488e9a0872d68fb387e5
#: ../../getting_started/getting_started.md:20 08e5054dd0fa4e07aa92236dca03a1d3
msgid ""
"This project relies on a local MySQL database service, which you need to "
"install locally. We recommend using Docker for installation."
msgstr "本项目依赖一个本地的 MySQL 数据库服务,你需要本地安装,推荐直接使用 Docker 安装。"
#: ../../getting_started/getting_started.md:25 954f3a282ec54b11a55ebfe1f680d1df
#: ../../getting_started/getting_started.md:25 f99a1af073e24b339390400251a50c9b
msgid ""
"We use [Chroma embedding database](https://github.com/chroma-core/chroma)"
" as the default for our vector database, so there is no need for special "
@@ -114,66 +113,74 @@ msgid ""
"installation process of DB-GPT, we use the miniconda3 virtual "
"environment. Create a virtual environment and install the Python "
"dependencies."
msgstr "向量数据库我们默认使用的是Chroma内存数据库所以无需特殊安装如果有"
"需要连接其他的同学可以按照我们的教程进行安装配置。整个DB-GPT的"
"安装过程我们使用的是miniconda3的虚拟环境。创建虚拟环境并安装python依赖包"
msgstr ""
"向量数据库我们默认使用的是Chroma内存数据库所以无需特殊安装如果有需要连接其他的同学可以按照我们的教程进行安装配置。整个DB-"
"GPT的安装过程我们使用的是miniconda3的虚拟环境。创建虚拟环境并安装python依赖包"
#: ../../getting_started/getting_started.md:35 0314bad0928940fc8e382d289d356c66
#: ../../getting_started/getting_started.md:35 d8ebdf7c4ac54113be1c94ed879dc93f
msgid ""
"Once the environment is installed, we have to create a new folder "
"\"models\" in the DB-GPT project, and then we can put all the models "
"downloaded from huggingface in this directory"
msgstr "环境安装完成后我们必须在DB-GPT项目中创建一个新文件夹\"models\""
"然后我们可以把从huggingface下载的所有模型放到这个目录下。"
msgstr ""
"环境安装完成后我们必须在DB-"
"GPT项目中创建一个新文件夹\"models\"然后我们可以把从huggingface下载的所有模型放到这个目录下。"
#: ../../getting_started/getting_started.md:42 afdf176f72224fd6b8b6e9e23c80c1ef
#: ../../getting_started/getting_started.md:42 cd3ce56a4d644574a0c30dd86148a58c
msgid ""
"The model files are large and will take a long time to download. During "
"the download, let's configure the .env file, which needs to be copied and"
" created from the .env.template"
msgstr "模型文件很大,需要很长时间才能下载。在下载过程中,让我们配置.env文件"
"它需要从。env.template中复制和创建。"
msgstr "模型文件很大,需要很长时间才能下载。在下载过程中,让我们配置.env文件,它需要从.env.template中复制和创建。"
#: ../../getting_started/getting_started.md:48 76c87610993f41059c3c0aade5117171
#: ../../getting_started/getting_started.md:48 96d634498ae04c84b4eae8502d5f65e8
msgid ""
"You can configure basic parameters in the .env file, for example setting "
"LLM_MODEL to the model to be used"
msgstr "您可以在.env文件中配置基本参数例如将LLM_MODEL设置为要使用的模型。"
#: ../../getting_started/getting_started.md:35 443f5f92e4cd4ce4887bae2556b605b0
#: ../../getting_started/getting_started.md:50 a78627885d2a41a481131308d2d061a6
msgid "3. Run"
msgstr "3. 运行"
#: ../../getting_started/getting_started.md:36 3dab200eceda460b81a096d44de43d21
#: ../../getting_started/getting_started.md:51 69c1d06867cb4a0ea36ad0f3686109db
msgid ""
"You can refer to this document to obtain the Vicuna weights: "
"[Vicuna](https://github.com/lm-sys/FastChat/blob/main/README.md#model-"
"weights) ."
msgstr "关于基础模型, 可以根据[Vicuna](https://github.com/lm-sys/FastChat/b"
"lob/main/README.md#model-weights) 合成教程进行合成。"
msgstr ""
"关于基础模型, 可以根据[Vicuna](https://github.com/lm-"
"sys/FastChat/blob/main/README.md#model-weights) 合成教程进行合成。"
#: ../../getting_started/getting_started.md:38 b036ca6294f04bceb686187d2d8b6646
#: ../../getting_started/getting_started.md:53 1dcccc1784e04fa3a3340aa446f1484f
msgid ""
"If you have difficulty with this step, you can also directly use the "
"model from [this link](https://huggingface.co/Tribbiani/vicuna-7b) as a "
"replacement."
msgstr "如果此步有困难的同学,也可以直接使用[此链接](https://huggingface.co/Tribbiani/vicuna-7b)上的模型进行替代。"
msgstr ""
"如果此步有困难的同学,也可以直接使用[此链接](https://huggingface.co/Tribbiani/vicuna-"
"7b)上的模型进行替代。"
#: ../../getting_started/getting_started.md:40 35537c13ff6f4bd69951c486274ca1f9
#: ../../getting_started/getting_started.md:55 45a4c9f50ee04533bd57652505ab4f62
msgid "Run server"
msgstr "运行模型服务"
#: ../../getting_started/getting_started.md:45 f7aa3668a6c94fb3a1b8346392d921f3
#: ../../getting_started/getting_started.md:60 f3482509ad2f4137becd4775768180bd
msgid ""
"Starting `llmserver.py` with the following command will result in a "
"relatively stable Python service with multiple processes."
msgstr "使用以下命令启动llmserver.py将会得到一个相对稳定的Python服务并且具有多个进程。"
#: ../../getting_started/getting_started.md:65 ddf718f6863f4e58b8de1232dc8189dd
msgid "Run gradio webui"
msgstr "运行gradio webui"
#: ../../getting_started/getting_started.md:51 d80c908f01144e2c8a15b7f6e8e7f88d
#: ../../getting_started/getting_started.md:71 d3df863c46ae49b1a63e5778d439d336
msgid ""
"Notice: the webserver need to connect llmserver, so you need change the"
" .env file. change the MODEL_SERVER = \"http://127.0.0.1:8000\" to your "
"address. It's very important."
msgstr "注意: 在启动Webserver之前, 需要修改.env 文件中的MODEL_SERVER"
" = "http://127.0.0.1:8000", 将地址设置为你的服务器地址。"
msgstr ""
"注意: 在启动Webserver之前, 需要修改.env 文件中的MODEL_SERVER = "
"\"http://127.0.0.1:8000\", 将地址设置为你的服务器地址。"


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.1.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-06-14 14:51+0800\n"
"POT-Creation-Date: 2023-06-14 17:19+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -50,42 +50,63 @@ msgid "Knowledge"
msgstr "知识库"
#: ../../getting_started/tutorials.md:13 ea00f3de8c754bf2950e735a2f14043a
#, fuzzy
msgid ""
"[How to Create your own knowledge repository](https://db-"
"gpt.readthedocs.io/en/latest/modules/knowledge.html)"
"gpt.readthedocs.io/en/latest/modules/knownledge.html)"
msgstr ""
"[怎么创建自己的知识库](https://db-"
"gpt.readthedocs.io/en/latest/modules/knowledge.html)"
#: ../../getting_started/tutorials.md:15 07195f11314945989eeeb9400c8a9b43
msgid "[Add new Knowledge demonstration](../../assets/new_knownledge_en.gif)"
#, fuzzy
msgid "![Add new Knowledge demonstration](../../assets/new_knownledge.gif)"
msgstr "[新增知识库演示](../../assets/new_knownledge_en.gif)"
#: ../../getting_started/tutorials.md:15 333cdda401df4509a11d14535391b8a8
#, fuzzy
msgid "Add new Knowledge demonstration"
msgstr "[新增知识库演示](../../assets/new_knownledge_en.gif)"
#: ../../getting_started/tutorials.md:17 5245cd247a184f63a10f735f414f303f
msgid "SQL Generation"
msgstr ""
#: ../../getting_started/tutorials.md:18 38077ab510264112b6156c27b8880967
#: ../../getting_started/tutorials.md:18 9a980e7625d34b98bf318851c43fb13d
#, fuzzy
msgid "[sql generation demonstration](../../assets/demo_en.gif)"
msgid "![sql generation demonstration](../../assets/demo_en.gif)"
msgstr "[sql生成演示](../../assets/demo_en.gif)"
#: ../../getting_started/tutorials.md:18 952c680cf62140978b4e94d36c49134a
#, fuzzy
msgid "sql generation demonstration"
msgstr "[sql生成演示](../../assets/demo_en.gif)"
#: ../../getting_started/tutorials.md:20 c0a6f9fefbb9404695fe3bffb6ecc577
msgid "SQL Execute"
msgstr "SQL执行"
#: ../../getting_started/tutorials.md:21 39fe94853f9c4165b40812c57171a6f4
#: ../../getting_started/tutorials.md:21 e959cc6ca356407d854ee5541233c19a
#, fuzzy
msgid "[sql execute demonstration](../../assets/auto_sql_en.gif)"
msgid "![sql execute demonstration](../../assets/auto_sql_en.gif)"
msgstr "[sql execute 演示](../../assets/auto_sql_en.gif)"
#: ../../getting_started/tutorials.md:24 0fd9770dbf3c49b0b644599dc70187a7
#: ../../getting_started/tutorials.md:21 69247d51ccd349b082ea452f6d74d2b3
#, fuzzy
msgid "sql execute demonstration"
msgstr "SQL执行"
#: ../../getting_started/tutorials.md:23 0fd9770dbf3c49b0b644599dc70187a7
#, fuzzy
msgid "Plugins"
msgstr "DB Plugins"
#: ../../getting_started/tutorials.md:25 fc9830406c39473ab32df00a33340385
#: ../../getting_started/tutorials.md:24 cf58eb1ee13f49f69e501c0e221b4bed
#, fuzzy
msgid "[db plugins demonstration](../../assets/dbgpt_bytebase_plugin.gif)"
msgid "![db plugins demonstration](../../assets/dbgpt_bytebase_plugin.gif)"
msgstr "[db plugins 演示](../../assets/dbgpt_bytebase_plugin.gif)"
#: ../../getting_started/tutorials.md:24 9e474caadb87481ba51f8595067f7edd
msgid "db plugins demonstration"
msgstr ""


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.1.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-06-13 11:38+0800\n"
"POT-Creation-Date: 2023-06-14 17:26+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -19,11 +19,11 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../modules/llms.md:1 34386f3fecba48fbbd86718283ba593c
#: ../../modules/llms.md:1 8eef439964b5442d91ad04ff72b3b45b
msgid "LLMs"
msgstr "大语言模型"
#: ../../modules/llms.md:3 241b39ad980f4cfd90a7f0fdae05a1d2
#: ../../modules/llms.md:3 a29256f1f39b4bcda29a6811ad1b10f6
#, python-format
msgid ""
"In the underlying large model integration, we have designed an open "
@@ -36,62 +36,95 @@
"of use."
msgstr "在底层大模型接入中我们设计了开放的接口支持对接多种大模型。同时对于接入模型的效果我们有非常严格的把控与评审机制。对大模型能力上与ChatGPT对比在准确率上需要满足85%以上的能力对齐。我们用更高的标准筛选模型,是期望在用户使用过程中,可以省去前面繁琐的测试评估环节。"
#: ../../modules/llms.md:5 25175e87a62e41bca86798eb783cefd6
#: ../../modules/llms.md:5 e899f11399bb45d9990c73a273ed7697
msgid "Multi LLMs Usage"
msgstr "多模型使用"
#: ../../modules/llms.md:6 8c35341e9ca94202ba779567813f9973
#: ../../modules/llms.md:6 a21d6a875b3949b9be512f8ea396f6b3
msgid ""
"To use multiple models, modify the LLM_MODEL parameter in the .env "
"configuration file to switch between the models."
msgstr "如果要使用不同的模型,请修改.env配置文件中的LLM_MODEL参数以在模型之间切换。"
#: ../../modules/llms.md:8 2edf3309a6554f39ad74e19faff09cee
#: ../../modules/llms.md:8 dd061d45bb4044dcbc7e5d4b0014ded8
msgid ""
"Notice: you can create .env file from .env.template, just use command "
"like this:"
msgstr "注意:你可以从 .env.template 创建 .env 文件。只需使用如下命令:"
#: ../../modules/llms.md:14 5fa7639ef294425e89e13b7c6617fb4b
#: ../../modules/llms.md:14 c1a789e2de2c4958987370521d46c7cc
msgid ""
"now we support models vicuna-13b, vicuna-7b, chatglm-6b, flan-t5-base, "
"guanaco-33b-merged, falcon-40b, gorilla-7b."
msgstr "现在我们支持的模型有vicuna-13b, vicuna-7b, chatglm-6b, flan-t5-base, "
"guanaco-33b-merged, falcon-40b, gorilla-7b."
msgstr ""
"现在我们支持的模型有vicuna-13b, vicuna-7b, chatglm-6b, flan-t5-base, guanaco-33b-"
"merged, falcon-40b, gorilla-7b."
#: ../../modules/llms.md:16 96c9a5ad00264bd2a07bdbdec87e471e
#: ../../modules/llms.md:16 ddb6f2f638e642a595365a91ffdba8f9
msgid ""
"DB-GPT provides a model load adapter and chat adapter. load adapter which"
" allows you to easily adapt load different LLM models by inheriting the "
"BaseLLMAdapter. You just implement match() and loader() method."
msgstr "DB-GPT提供了多模型适配器load adapter和chat adapter.load adapter通过继承BaseLLMAdapter类, 实现match和loader方法允许你适配不同的LLM."
msgstr ""
"DB-GPT提供了多模型适配器load adapter和chat adapter.load adapter通过继承BaseLLMAdapter类,"
" 实现match和loader方法允许你适配不同的LLM."
#: ../../modules/llms.md:18 1033714691464f50900c04c9e1bb5643
#: ../../modules/llms.md:18 5e32be54895243caa6d44d0b3421e4a0
msgid "vicuna llm load adapter"
msgstr "vicuna llm load adapter"
#: ../../modules/llms.md:35 faa6432575be45bcae5deb1cc7fee3fb
#: ../../modules/llms.md:35 0b1f2e7c65164c9584e0c544394e7d57
msgid "chatglm load adapter"
msgstr "chatglm load adapter"
#: ../../modules/llms.md:62 61c4189cabf04e628132c2bf5f02bb50
#: ../../modules/llms.md:62 885b35375c764e29a983b54514e378d2
msgid ""
"chat adapter which allows you to easily adapt chat different LLM models "
"by inheriting the BaseChatAdpter.you just implement match() and "
"get_generate_stream_func() method"
msgstr "chat adapter通过继承BaseChatAdpter允许你通过实现match和get_generate_stream_func方法允许你适配不同的LLM."
msgstr ""
"chat "
"adapter通过继承BaseChatAdpter允许你通过实现match和get_generate_stream_func方法允许你适配不同的LLM."
#: ../../modules/llms.md:64 407a67e4e2c6414b9cde346961d850c0
#: ../../modules/llms.md:64 e0538e7e0526440085b32add07f5ec7f
msgid "vicuna llm chat adapter"
msgstr "vicuna llm chat adapter"
#: ../../modules/llms.md:76 53a55238cd90406db58c50dc64465195
#: ../../modules/llms.md:76 2c97712441874e0d8deedc1d9a1ce5ed
msgid "chatglm llm chat adapter"
msgstr "chatglm llm chat adapter"
#: ../../modules/llms.md:89 b0c5ff72c05e40b3b301d6b81205fe63
#: ../../modules/llms.md:89 485e5aa261714146a03a30dbcd612653
msgid ""
"if you want to integrate your own model, just need to inheriting "
"BaseLLMAdaper and BaseChatAdpter and implement the methods"
msgstr "如果你想集成自己的模型只需要继承BaseLLMAdaper和BaseChatAdpter类然后实现里面的方法即可"
#: ../../modules/llms.md:92 a63b63022db74d76b743044be178e227
#, fuzzy
msgid "Multi Proxy LLMs"
msgstr "多模型使用"
#: ../../modules/llms.md:93 dab3041e90384049872a7f77933b1a1f
msgid "1. Openai proxy"
msgstr "Openai代理"
#: ../../modules/llms.md:94 e50eae200bf04e4788bbc394e0b3d6b9
msgid ""
"If you haven't deployed a private infrastructure for a large model, or if"
" you want to use DB-GPT in a low-cost and high-efficiency way, you can "
"also use OpenAI's large model as your underlying model."
msgstr "如果你没有部署私有大模型的资源或者你想使用低成本启动DB-GPT,你可以使用openai的大模型作为你的底层模型"
#: ../../modules/llms.md:96 53dd581608d74355ba0ce486a01ef261
msgid ""
"If your environment deploying DB-GPT has access to OpenAI, then modify "
"the .env configuration file as below will work."
msgstr "如果你的环境能够访问openai你只需要参考如下修改.env配置文件即可"
#: ../../modules/llms.md:104 8df2d75af41b4953a73b6b7eae9f0373
msgid ""
"If you can't access OpenAI locally but have an OpenAI proxy service, you "
"can configure as follows."
msgstr "如果你本地无法访问openai但是你有一个openai的代理服务你可以参考如下配置"


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.1.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-06-11 14:10+0800\n"
"POT-Creation-Date: 2023-06-14 17:26+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -17,13 +17,13 @@ msgstr ""
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.11.0\n"
"Generated-By: Babel 2.12.1\n"
#: ../../modules/plugins.md:1 48f1b7ff4099485ba3853c373e64273f
#: ../../modules/plugins.md:1 be587eb6ad384844b83ac740a2f3309e
msgid "Plugins"
msgstr "插件"
#: ../../modules/plugins.md:3 3d94b3250511468d80aa29359f01128d
#: ../../modules/plugins.md:3 291a6adc34684b28867b7d1adcceadbb
msgid ""
"The ability of Agent and Plugin is the core of whether large models can "
"be automated. In this project, we natively support the plugin mode, and "
@@ -31,7 +31,73 @@
"order to give full play to the advantages of the community, the plugins "
"used in this project natively support the Auto-GPT plugin ecology, that "
"is, Auto-GPT plugins can directly run in our project."
msgstr "Agent与插件能力是大模型能否自动化的核心在本的项目中原生支持插件模式"
"大模型可以自动化完成目标。 同时为了充分发挥社区的优势,本项目中所用的插件原生支持"
"Auto-GPT插件生态即Auto-GPT的插件可以直接在我们的项目中运行。"
msgstr ""
"Agent与插件能力是大模型能否自动化的核心在本的项目中原生支持插件模式大模型可以自动化完成目标。 同时为了充分发挥社区的优势"
"本项目中所用的插件原生支持Auto-GPT插件生态即Auto-GPT的插件可以直接在我们的项目中运行。"
#: ../../modules/plugins.md:5 72010cbac395488bb72c5ffa79806f61
#, fuzzy
msgid "Local Plugins"
msgstr "插件"
#: ../../modules/plugins.md:7 842e405d6748425dab6684c3377e580a
msgid "1.1 How to write local plugins."
msgstr "如何编写一个本地插件"
#: ../../modules/plugins.md:9 23fa64fa76954000a3f33f5b3205975e
msgid ""
"Local plugins use the Auto-GPT plugin template. A simple example is as "
"follows: first write a plugin file called \"sql_executor.py\"."
msgstr "本地插件使用Auto-GPT插件模板一个简单的示例如下首先编写一个插件文件`sql_executor.py`"
#: ../../modules/plugins.md:39 5044401271fb4cfba53299875886b5b9
msgid ""
"Then set the \"can_handle_post_prompt\" method of the plugin template to "
"True. In the \"post_prompt\" method, write the prompt information and the"
" mapped plugin function."
msgstr "然后设置can_handle_post_prompt函数为True, 在post_prompt函数中编写prompt信息和插件映射函数"
#: ../../modules/plugins.md:81 8f2001c2934d4655a1993a98d5e7dd63
msgid "1.2 How to use local plugins"
msgstr "1.2 如何使用本地插件"
#: ../../modules/plugins.md:83 aa6234c1531a470db97e076408c70ebc
msgid ""
"Pack your plugin project into `your-plugin.zip` and place it in the "
"`/plugins/` directory of the DB-GPT project. After starting the "
"webserver, you can select and use it in the `Plugin Model` section."
msgstr "将您的插件项目打包成your-plugin.zip并将其放置在DB-GPT项目的/plugins/目录中。启动Web服务器后您可以在插件模型部分中选择并使用它。"
#: ../../modules/plugins.md:86 6d1402451cb44ab2bb9ded2a303d8dd0
#, fuzzy
msgid "Public Plugins"
msgstr "插件"
#: ../../modules/plugins.md:88 2bb33cadc7604f529c762939a6225f17
msgid "1.1 How to use public plugins"
msgstr "1.1 如何编写公共插件"
#: ../../modules/plugins.md:90 d5f458dcba2e435987ffbc62a1d7a989
msgid ""
"By default, after launching the webserver, plugins from the public plugin"
" library `DB-GPT-Plugins` will be automatically loaded. For more details,"
" please refer to [DB-GPT-Plugins](https://github.com/csunny/DB-GPT-"
"Plugins)"
msgstr "默认情况下在启动Web服务器后将自动加载来自公共插件库DB-GPT-Plugins的插件。要了解更多详情请参阅[DB-GPT-Plugins](https://github.com/csunny/DB-GPT-Plugins)"
#: ../../modules/plugins.md:92 28fa983769ae44a0b790d977c60ce982
msgid "1.2 Contribute to the DB-GPT-Plugins repository"
msgstr "1.2 贡献到DB-GPT-Plugins仓库"
#: ../../modules/plugins.md:94 68573e7cc17f479fa676d0631e011baf
msgid ""
"Please refer to the plugin development process in the public plugin "
"library, and put the configuration parameters in `.plugin_env`"
msgstr "请参考公共插件库开发过程,将插件配置参数写入.plugin_env文件"
#: ../../modules/plugins.md:96 8d6860026b824b46b2899cbf3dc3b4a0
msgid ""
"We warmly welcome everyone to contribute plugins to the public plugin "
"library!"
msgstr "非常欢迎大家向我们公共插件库贡献插件!"


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.1.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-06-11 14:10+0800\n"
"POT-Creation-Date: 2023-06-14 17:26+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -17,9 +17,88 @@ msgstr ""
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.11.0\n"
"Generated-By: Babel 2.12.1\n"
#: ../../use_cases/tool_use_with_plugin.md:1 2bd7d79a16a548c4a3872a12c436aa4f
#: ../../use_cases/tool_use_with_plugin.md:1 defad8b142a2408e89b782803149be50
msgid "Tool use with plugin"
msgstr "插件工具"
#: ../../use_cases/tool_use_with_plugin.md:3 30ca5f2ce55a4369b48dd8a8b08c3273
msgid ""
"DB-GPT supports a variety of plug-ins, such as MySQL, MongoDB, ClickHouse"
" and other database tool plug-ins. In addition, some database management "
"platforms can also package their interfaces and package them into plug-"
"ins, and use the model to realize the ability of \"single-sentence "
"requirements\""
msgstr "DB-GPT支持各种插件,例如MySQL、MongoDB、ClickHouse等数据库工具插件。此外,一些数据库管理平台也可以将它们的接口打包成插件,使用该模型实现\"一句话需求\"的能力。"
#: ../../use_cases/tool_use_with_plugin.md:6 96c20363d25842cbbab21819c2567a52
msgid "DB-GPT-DASHBOARD-PLUGIN"
msgstr "DB-GPT-DASHBOARD-PLUGIN"
#: ../../use_cases/tool_use_with_plugin.md:8 1a78f9e15c6949019e213f3f88388e9f
msgid ""
"[Db-GPT Chart Plugin](https://github.com/csunny/DB-GPT-"
"Plugins/blob/main/src/dbgpt_plugins/Readme.md)"
msgstr ""
"[Db-GPT Chart Plugin](https://github.com/csunny/DB-GPT-"
"Plugins/blob/main/src/dbgpt_plugins/Readme.md)"
#: ../../use_cases/tool_use_with_plugin.md:10 5ac6d662797e4694926cb0b44e58ff40
msgid ""
"This is a DB-GPT plugin to generate data analysis charts, if you want to "
"use the test sample data, please first pull the code of [DB-GPT-"
"Plugins](https://github.com/csunny/DB-GPT-Plugins), run the command to "
"generate test DuckDB data, and then copy the generated data file to the "
"`/pilot/mock_datas` directory of the DB-GPT project."
msgstr "这是一个DB-GPT插件用于生成数据分析图表。如果您想使用测试样本数据请先拉取 DB-GPT-Plugins 的代码,运行命令以生成测试 DuckDB 数据,然后将生成的数据文件复制到 DB-GPT 项目的 /pilot/mock_datas 目录中。"
#: ../../use_cases/tool_use_with_plugin.md:21 1029537ed0ca44e499a8f2098cc72f1a
msgid ""
"Test Case: Use a histogram to analyze the total order amount of users in "
"different cities."
msgstr "测试用例:使用柱状图分析不同城市用户的总订单金额。"
#: ../../use_cases/tool_use_with_plugin.md:26 a8a2103279d641b98948bcb06971c160
msgid ""
"More detail see: [DB-DASHBOARD](https://github.com/csunny/DB-GPT-"
"Plugins/blob/main/src/dbgpt_plugins/Readme.md)"
msgstr "更多详情请看:[DB-DASHBOARD](https://github.com/csunny/DB-GPT-"
"Plugins/blob/main/src/dbgpt_plugins/Readme.md)"
#: ../../use_cases/tool_use_with_plugin.md:29 db2906b5f89e4a01a4a47a89ad750804
msgid "DB-GPT-SQL-Execution-Plugin"
msgstr "DB-GPT-SQL-Execution-Plugin"
#: ../../use_cases/tool_use_with_plugin.md:32 cd491706a64a4bf3af4459fa0699c82e
msgid "This is an DbGPT plugin to connect Generic Db And Execute SQL."
msgstr "这是一个 DbGPT 插件,用于连接通用数据库并执行 SQL。"
#: ../../use_cases/tool_use_with_plugin.md:35 73dbe0fb80b44b9d9c2928cbf161f291
msgid "DB-GPT-Bytebase-Plugin"
msgstr "DB-GPT-Bytebase-Plugin"
#: ../../use_cases/tool_use_with_plugin.md:37 72f675332ff04297861394e8eb2cf5c4
msgid ""
"To use a tool or platform plugin, you should first deploy a plugin. "
"Taking the open-source database management platform Bytebase as an "
"example, you can deploy your Bytebase service with one click using Docker"
" and access it at http://127.0.0.1:5678. More details can be found at "
"https://github.com/bytebase/bytebase."
msgstr "要使用一个工具或平台插件您应该首先部署一个插件。以开源数据库管理平台Bytebase为例您可以使用Docker一键部署Bytebase服务并通过http://127.0.0.1:5678进行访问。更多细节可以在 https://github.com/bytebase/bytebase 找到。"
#: ../../use_cases/tool_use_with_plugin.md:53 0d84f8fabc29479caa0bf630c00283d2
msgid ""
"Note: If your machine's CPU architecture is `ARM`, please use `--platform"
" linux/arm64` instead."
msgstr "备注如果你的机器CPU架构是ARM,请使用--platform linux/arm64 代替"
#: ../../use_cases/tool_use_with_plugin.md:55 d4117072f9504875a765c558f55f2d88
msgid ""
"Select the plugin on DB-GPT (all built-in plugins are from our repository:"
" https://github.com/csunny/DB-GPT-Plugins), choose DB-GPT-Bytebase-Plugin."
" Supporting functions include creating projects, creating environments, "
"creating database instances, creating databases, database DDL/DML "
"operations, and ticket approval process, etc."
msgstr "在DB-GPT上选择插件(所有内置插件均来自我们的仓库:https://github.com/csunny/DB-GPT-Plugins),选择DB-GPT-Bytebase-Plugin。支持的功能包括创建项目、创建环境、创建数据库实例、创建数据库、数据库DDL/DML操作和审批流程等。"


@@ -87,3 +87,24 @@ class ChatGLMChatAdapter(BaseChatAdpter):
return chatglm_generate_stream
```
If you want to integrate your own model, you just need to inherit BaseLLMAdaper and BaseChatAdpter and implement the methods.
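A minimal sketch of such an adapter pair, assuming the `match()`, `loader()`, and `get_generate_stream_func()` shapes described in the adapter sections above; the class names, model name, and generate-function signature below are illustrative, not DB-GPT's actual code:

```python
# Hypothetical adapters for a model called "my-model". In DB-GPT these would
# inherit BaseLLMAdaper and BaseChatAdpter; the base classes are omitted here
# so the sketch stands alone.
class MyLLMAdapter:
    def match(self, model_path: str) -> bool:
        # The framework selects the adapter whose match() accepts the model path.
        return "my-model" in model_path

    def loader(self, model_path: str, from_pretrained_kwargs: dict):
        # Load and return (model, tokenizer) here, e.g. via transformers.
        raise NotImplementedError


class MyChatAdapter:
    def match(self, model_path: str) -> bool:
        return "my-model" in model_path

    def get_generate_stream_func(self):
        # Return the streaming generation function for this model family;
        # the parameter list here is illustrative.
        def generate_stream(model, tokenizer, params, device, context_len):
            yield params.get("prompt", "")

        return generate_stream
```

The two `match()` methods deliberately share the same predicate so the load path and the chat path resolve to the same model family.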
## Multi Proxy LLMs
### 1. Openai proxy
If you haven't deployed a private infrastructure for a large model, or if you want to use DB-GPT in a low-cost and high-efficiency way, you can also use OpenAI's large model as your underlying model.
- If the environment where you deploy DB-GPT has access to OpenAI, then modifying the .env configuration file as below will work.
```
LLM_MODEL=proxy_llm
MODEL_SERVER=127.0.0.1:8000
PROXY_API_KEY=sk-xxx
PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions
```
- If you can't access OpenAI locally but have an OpenAI proxy service, you can configure it as follows.
```
LLM_MODEL=proxy_llm
MODEL_SERVER=127.0.0.1:8000
PROXY_API_KEY=sk-xxx
PROXY_SERVER_URL={your-openai-proxy-server/v1/chat/completions}
```
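To illustrate how the `PROXY_*` settings above fit together, here is a hedged sketch of assembling the request a proxy backend would send. The payload follows the OpenAI chat-completions wire format; `build_proxy_request` and the model name are illustrative, not DB-GPT's actual implementation:

```python
import json
import os


def build_proxy_request(prompt: str) -> dict:
    """Assemble URL, headers, and body for an OpenAI-style chat completion
    from the PROXY_* settings configured in .env (read here via os.environ)."""
    return {
        "url": os.environ["PROXY_SERVER_URL"],
        "headers": {
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["PROXY_API_KEY"],
        },
        "body": json.dumps(
            {
                # Illustrative model name; the proxy decides what it accepts.
                "model": "gpt-3.5-turbo",
                "messages": [{"role": "user", "content": prompt}],
            }
        ),
    }
```

Sending the resulting request (with any HTTP client) and reading `choices[0].message.content` from the response is all the proxy path needs.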


@@ -1,3 +1,98 @@
# Plugins
The ability of agents and plugins is at the core of whether large models can be automated. This project natively supports the plugin mode, so large models can automatically achieve their goals. At the same time, to take full advantage of the community, the plugins used in this project natively support the Auto-GPT plugin ecosystem; that is, Auto-GPT plugins can run directly in our project.
## Local Plugins
### 1.1 How to write local plugins.
- Local plugins use the Auto-GPT plugin template. A simple example is as follows: first write a plugin file called "sql_executor.py".
```python
import pymysql
import pymysql.cursors


def get_conn():
    return pymysql.connect(
        host="127.0.0.1",
        port=int("2883"),
        user="mock",
        password="mock",
        database="mock",
        charset="utf8mb4",
        ssl_ca=None,
    )


def ob_sql_executor(sql: str):
    try:
        conn = get_conn()
        with conn.cursor() as cursor:
            cursor.execute(sql)
            result = cursor.fetchall()
            field_names = tuple(i[0] for i in cursor.description)
            result = list(result)
            result.insert(0, field_names)
            return result
    except pymysql.err.ProgrammingError as e:
        return str(e)
```
Then make the `can_handle_post_prompt` method of the plugin template return True. In the `post_prompt` method, write the prompt information and the mapped plugin function.
```python
"""This is a template for DB-GPT plugins."""
from typing import Any, Dict, List, Optional, Tuple, TypeVar, TypedDict

from auto_gpt_plugin_template import AutoGPTPluginTemplate

PromptGenerator = TypeVar("PromptGenerator")


class Message(TypedDict):
    role: str
    content: str


class DBGPTOceanBase(AutoGPTPluginTemplate):
    """
    This is an DB-GPT plugin to connect OceanBase.
    """

    def __init__(self):
        super().__init__()
        self._name = "DB-GPT-OB-Serverless-Plugin"
        self._version = "0.1.0"
        self._description = "This is an DB-GPT plugin to connect OceanBase."

    def can_handle_post_prompt(self) -> bool:
        return True

    def post_prompt(self, prompt: PromptGenerator) -> PromptGenerator:
        from .sql_executor import ob_sql_executor

        prompt.add_command(
            "ob_sql_executor",
            "Execute SQL in OceanBase Database.",
            {"sql": "<sql>"},
            ob_sql_executor,
        )
        return prompt

    ...
```
### 1.2 How to use local plugins
- Pack your plugin project into `your-plugin.zip` and place it in the `/plugins/` directory of the DB-GPT project. After starting the webserver, you can select and use it in the `Plugin Model` section.
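The packing step above can be scripted with only the standard library; a small sketch, where `pack_plugin` and the directory names are placeholders of my own, not a DB-GPT API:

```python
import os
import zipfile


def pack_plugin(plugin_dir: str, zip_path: str) -> None:
    """Zip a plugin source tree so the archive can be dropped into DB-GPT's
    /plugins/ directory, keeping the top-level folder name inside the archive."""
    base = os.path.dirname(os.path.abspath(plugin_dir))
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(plugin_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to the plugin's parent, e.g.
                # "my_plugin/sql_executor.py", so the zip unpacks cleanly.
                zf.write(full, os.path.relpath(full, base))
```

For example, `pack_plugin("my_plugin", "your-plugin.zip")` produces an archive whose entries all live under `my_plugin/`.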
## Public Plugins
### 1.1 How to use public plugins
- By default, after launching the webserver, plugins from the public plugin library `DB-GPT-Plugins` will be automatically loaded. For more details, please refer to [DB-GPT-Plugins](https://github.com/csunny/DB-GPT-Plugins)
### 1.2 Contribute to the DB-GPT-Plugins repository
- Please refer to the plugin development process in the public plugin library, and put the configuration parameters in `.plugin_env`
- We warmly welcome everyone to contribute plugins to the public plugin library!


@@ -1 +1,57 @@
# Tool use with plugin
- DB-GPT supports a variety of plugins, such as database tool plugins for MySQL, MongoDB, ClickHouse, and other databases. In addition, some database management platforms can wrap their interfaces into plugins, using the model to realize "single-sentence requirement" capabilities.
## DB-GPT-DASHBOARD-PLUGIN
[Db-GPT Chart Plugin](https://github.com/csunny/DB-GPT-Plugins/blob/main/src/dbgpt_plugins/Readme.md)
- This is a DB-GPT plugin to generate data analysis charts. If you want to use the test sample data, first pull the code of [DB-GPT-Plugins](https://github.com/csunny/DB-GPT-Plugins), run the command to generate test DuckDB data, and then copy the generated data file to the `/pilot/mock_datas` directory of the DB-GPT project.
```bash
git clone https://github.com/csunny/DB-GPT-Plugins.git
pip install -r requirements.txt
python /DB-GPT-Plugins/src/dbgpt_plugins/db_dashboard/mock_datas.py
cp /DB-GPT-Plugins/src/dbgpt_plugins/db_dashboard/mock_datas/db-gpt-test.db /DB-GPT/pilot/mock_datas/
python /DB-GPT/pilot/llmserver.py
python /DB-GPT/pilot/webserver.py
```
- Test Case: Use a histogram to analyze the total order amount of users in different cities.
<p align="center">
<img src="../../assets/chart_db_city_users.png" width="680px" />
</p>
- For more detail, see [DB-DASHBOARD](https://github.com/csunny/DB-GPT-Plugins/blob/main/src/dbgpt_plugins/Readme.md)
## DB-GPT-SQL-Execution-Plugin
- This is a DB-GPT plugin to connect to a generic database and execute SQL.
## DB-GPT-Bytebase-Plugin
- To use a tool or platform plugin, you should first deploy the plugin's backing service. Taking the open-source database management platform Bytebase as an example, you can deploy a Bytebase service with one click using Docker and access it at http://127.0.0.1:5678. More details can be found at https://github.com/bytebase/bytebase.
```bash
docker run --init \
--name bytebase \
--platform linux/amd64 \
--restart always \
--publish 5678:8080 \
--health-cmd "curl --fail http://localhost:5678/healthz || exit 1" \
--health-interval 5m \
--health-timeout 60s \
--volume ~/.bytebase/data:/var/opt/bytebase \
bytebase/bytebase:2.2.0 \
--data /var/opt/bytebase \
--port 8080
```
Note: If your machine's CPU architecture is `ARM`, please use `--platform linux/arm64` instead.
- Select the plugin on DB-GPT (all built-in plugins are from our repository: https://github.com/csunny/DB-GPT-Plugins) and choose DB-GPT-Bytebase-Plugin. Supported functions include creating projects, creating environments, creating database instances, creating databases, database DDL/DML operations, the ticket approval process, and more.


@@ -10,6 +10,7 @@ if "pytest" in sys.argv or "pytest" in sys.modules or os.getenv("CI"):
# Load the users .env file into environment variables
load_dotenv(verbose=True, override=True)
load_dotenv(".plugin_env")
ROOT_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
load_dotenv(os.path.join(ROOT_PATH, ".plugin_env"))
del load_dotenv
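The three nested `dirname` calls above resolve the repository root from the config module's location, which is why `.plugin_env` is then looked up at the project root. An equivalent, testable form of that resolution (the example path is purely illustrative):

```python
import os


def root_path_of(config_file: str) -> str:
    # Same result as os.path.dirname(os.path.dirname(os.path.dirname(
    # os.path.abspath(config_file)))): climb three directory levels.
    return os.path.abspath(
        os.path.join(config_file, os.pardir, os.pardir, os.pardir)
    )
```

So a file at `<repo>/pilot/configs/config.py` yields `<repo>` as ROOT_PATH.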