doc:llm use faq

This commit is contained in:
aries_ckt
2023-08-28 15:08:10 +08:00
parent 4e3527e9d7
commit f88acaffea
3 changed files with 95 additions and 50 deletions

View File

@@ -7,15 +7,27 @@ LLM_MODEL=proxyllm
````
Set your OpenAI API key:
````shell
PROXY_API_KEY={your-openai-sk}
PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions
````
Make sure your OpenAI API key is valid and available.
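If you want to double-check the key before starting DB-GPT, a rough sketch (assuming the standard OpenAI endpoint; a custom proxy URL may differ) is:
````shell
# Quick sanity check that the key is accepted (endpoint shown is the standard OpenAI one)
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer ${PROXY_API_KEY}" | head -n 20
````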
##### Q2 What is the difference between `python dbgpt_server --light` and `python dbgpt_server`?
```{note}
* `python dbgpt_server --light`: dbgpt_server does not start the llm service. Users can deploy the llm service separately with `python llmserver`, and dbgpt_server reaches it by setting the LLM_SERVER environment variable in `.env`. This allows the dbgpt backend service and the llm service to be deployed separately (see the sketch below).
* `python dbgpt_server`: the dbgpt_server service and the llm service run on the same instance. When dbgpt_server starts, it starts the llm service at the same time.
```
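For the split deployment described in the note above, a minimal sketch looks like the following; the `LLM_SERVER` host and port are illustrative placeholders, not project defaults:
````shell
# 1. On the model host: start only the llm service
python llmserver

# 2. In .env on the web host: point dbgpt_server at the remote llm service
#    LLM_SERVER=http://<llm-host>:<llm-port>   # placeholder address

# 3. Start the backend without a local model
python dbgpt_server --light
````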
##### Q3 How to use multiple GPUs
DB-GPT uses all available GPUs by default. You can modify the setting `CUDA_VISIBLE_DEVICES=0,1` in the `.env` file
to use specific GPU IDs.
Optionally, you can also specify the GPU IDs to use before the startup command, as shown below:
@@ -29,7 +41,7 @@ CUDA_VISIBLE_DEVICES=3,4,5,6 python3 pilot/server/dbgpt_server.py
You can modify the setting `MAX_GPU_MEMORY=xxGib` in the `.env` file to configure the maximum memory used by each GPU.
##### Q4 Not Enough Memory
DB-GPT supports 8-bit quantization and 4-bit quantization.
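The memory-related settings from Q3 and Q4 can be combined in `.env`; the values below are only examples, not recommended settings:
````shell
# Cap the memory used by each GPU (example value)
MAX_GPU_MEMORY=16Gib
# 8-bit quantization is enabled by default; 4-bit lowers the VRAM footprint further
QUANTIZE_8bit=True
# QUANTIZE_4bit=True
````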

View File

@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-28 15:05+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -19,72 +19,98 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/faq/llm/llm_faq.md:1 35c867af1c9c4d52a26c35f6a77af4b8
msgid "LLM USE FAQ"
msgstr "LLM模型使用FAQ"
#: ../../getting_started/faq/llm/llm_faq.md:3 302d2be76da0463db7ef7b5002ca6bb1
msgid "Q1:how to use openai chatgpt service"
msgstr "我怎么使用OPENAI服务"
#: ../../getting_started/faq/llm/llm_faq.md:4 4d7790aa3cdb4cfcb2360bbb2b136d6e
msgid "change your LLM_MODEL"
msgstr "通过在.env文件设置LLM_MODEL"
#: ../../getting_started/faq/llm/llm_faq.md:9 90b7622f45ef4701aab2c0aaaf52f900
msgid "set your OPENAPI KEY"
msgstr "set your OPENAPI KEY"
#: ../../getting_started/faq/llm/llm_faq.md:15 67c1e41b57fe460c970bde38327db787
msgid "make sure your openapi API_KEY is available"
msgstr "确认openapi API_KEY是否可用"
#: ../../getting_started/faq/llm/llm_faq.md:17 c7941e083ef246ae9dc70b6e49315184
msgid ""
"Q2 What difference between `python dbgpt_server --light` and `python "
"dbgpt_server`"
msgstr "Q2 `python dbgpt_server --light` 和 `python "
"dbgpt_server`的区别是什么?"
#: ../../getting_started/faq/llm/llm_faq.md:19 23e2459b5ac74830b22b1b86dfa85297
msgid ""
"`python dbgpt_server --light` dbgpt_server does not start the llm "
"service. Users can deploy the llm service separately by using `python "
"llmserver`, and dbgpt_server accesses the llm service through set the "
"LLM_SERVER environment variable in .env. The purpose is to allow for the "
"separate deployment of dbgpt's backend service and llm service."
msgstr "`python dbgpt_server --light` dbgpt_server在启动后台服务的时候不启动模型服务, 用户可以通过`python llmserver`单独部署模型服务dbgpt_server通过LLM_SERVER环境变量来访问模型服务。目的是为了可以将dbgpt后台服务和大模型服务分离部署。"
#: ../../getting_started/faq/llm/llm_faq.md:21 4169f2294921469383c25cc8ae4ea83d
msgid ""
"`python dbgpt_server` dbgpt_server service and the llm service are "
"deployed on the same instance. when dbgpt_server starts the service, it "
"also starts the llm service at the same time."
msgstr "`python dbgpt_server` 是将后台服务和模型服务部署在同一台实例上.dbgpt_server在启动服务的时候同时开启模型服务."
#: ../../getting_started/faq/llm/llm_faq.md:25 56b8dcd2d2d44403a0aca00f7452b92c
#, fuzzy
msgid "Q3 how to use MultiGPUs"
msgstr "Q2 怎么使用 MultiGPUs"
#: ../../getting_started/faq/llm/llm_faq.md:26 9d02946ea8d54f11a2600278afc07ab6
msgid ""
"DB-GPT will use all available gpu by default. And you can modify the "
"setting `CUDA_VISIBLE_DEVICES=0,1` in `.env` file to use the specific gpu"
" IDs."
msgstr "DB-GPT默认加载可利用的gpu，你也可以通过修改 在`.env`文件 `CUDA_VISIBLE_DEVICES=0,1`来指定gpu IDs"
#: ../../getting_started/faq/llm/llm_faq.md:28 cf26d432a34d4b2482bcc5255f01f9b1
msgid ""
"Optionally, you can also specify the gpu ID to use before the starting "
"command, as shown below:"
msgstr "你也可以指定gpu ID启动"
#: ../../getting_started/faq/llm/llm_faq.md:38 e105136f3fea42c78338f17d671a578c
msgid ""
"You can modify the setting `MAX_GPU_MEMORY=xxGib` in `.env` file to "
"configure the maximum memory used by each GPU."
msgstr "同时你可以通过在.env文件设置`MAX_GPU_MEMORY=xxGib`修改每个GPU的最大使用内存"
#: ../../getting_started/faq/llm/llm_faq.md:40 69d02c9004ec4b10a5fe0712a6dc6e9b
#, fuzzy
msgid "Q4 Not Enough Memory"
msgstr "Q3 机器显存不够 "
#: ../../getting_started/faq/llm/llm_faq.md:42 c3333f6cb608451182d4807fcb9c2d78
msgid "DB-GPT supported 8-bit quantization and 4-bit quantization."
msgstr "DB-GPT 支持 8-bit quantization 和 4-bit quantization."
#: ../../getting_started/faq/llm/llm_faq.md:44 01568d7dc96a42dd84d1c1e76ea44186
msgid ""
"You can modify the setting `QUANTIZE_8bit=True` or `QUANTIZE_4bit=True` "
"in `.env` file to use quantization(8-bit quantization is enabled by "
"default)."
msgstr "你可以通过在.env文件设置`QUANTIZE_8bit=True` or `QUANTIZE_4bit=True`"
#: ../../getting_started/faq/llm/llm_faq.md:46 7e4682b7992b40d7b18801c9ce4c17b6
msgid ""
"Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit"
" quantization can run with 48 GB of VRAM."
msgstr ""
"Llama-2-70b with 8-bit quantization 可以运行在 80 GB VRAM机器， 4-bit "
"quantization可以运行在 48 GB VRAM"
#: ../../getting_started/faq/llm/llm_faq.md:48 9be65b2cac2541f0aeac21cf36441350
msgid ""
"Note: you need to install the latest dependencies according to "
"[requirements.txt](https://github.com/eosphoros-ai/DB-"

View File

@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.3.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-28 15:05+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -19,15 +19,15 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/getting_started.md:1 da78e5535cec40a5af378654db5bdd95
msgid "Quickstart Guide"
msgstr "使用指南"
#: ../../getting_started/getting_started.md:3 7e54f808055c44578a2364e9a6c95f69
msgid "Welcome to DB-GPT!"
msgstr "欢迎来到DB-GPT!"
#: ../../getting_started/getting_started.md:5 3d46f319cc404984a9f3fbf36e3e14f7
msgid ""
"DB-GPT is an experimental open-source project that uses localized GPT "
"large models to interact with your data and environment. With this "
@@ -38,33 +38,33 @@ msgstr ""
"我们发起了DB-GPT项目，为所有以数据库为基础的场景，构建一套完整的私有大模型解决方案。 "
"此方案因为支持本地部署，所以不仅仅可以应用于独立私有环境，而且还可以根据业务模块独立部署隔离，让大模型的能力绝对私有、安全、可控。"
#: ../../getting_started/getting_started.md:9 777caeb99fb9458f89b7699f9b790dbf
msgid ""
"Our vision is to make it easier and more convenient to build applications"
" around databases and llm."
msgstr "我们的愿景是让围绕数据库构建大模型应用更简单，更方便。"
#: ../../getting_started/getting_started.md:11 02310b9fc7934df093a2b7f53d82b809
msgid "What can I do with DB-GPT?"
msgstr "通过DB-GPT我能做什么？"
#: ../../getting_started/getting_started.md:13 ec30099749e94de085ac9d45edc35019
msgid "Chat Data with your Datasource."
msgstr "和自己的数据聊天，进行数据分析"
#: ../../getting_started/getting_started.md:14 cf5e7961b73447abb788388d843889d7
msgid "Private domain knowledge question answering."
msgstr "私有领域的知识问答"
#: ../../getting_started/getting_started.md:15 60b5ab4b672e4d0589c765843243c66f
msgid "Quickly provide private LLM Model deployment."
msgstr "快速构建私有大模型部署"
#: ../../getting_started/getting_started.md:17 a5f7f27e3b904eb3b8ad3d457d683c10
msgid "Usage with DB-GPT."
msgstr "DB-GPT使用姿势."
#: ../../getting_started/getting_started.md:19 5d1914c6fd6d43a59b83412383706fb5
msgid ""
"Follow DB-GPT application [install tutorial](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/install/deploy/deploy.html)."
@@ -72,7 +72,7 @@ msgstr ""
"先安装部署应用[安装教程](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/install/deploy/deploy.html)."
#: ../../getting_started/getting_started.md:21 9d9ed9a6cdbc4dbdbb2d4009b446c392
#, fuzzy
msgid ""
"Follow DB-GPT [application usage](https://db-"
@@ -81,7 +81,7 @@ msgstr ""
"先安装部署应用[安装教程](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/install/deploy/deploy.html)."
#: ../../getting_started/getting_started.md:24 319b71993fcc499a89f477b499a21a84
msgid ""
"If you encounter any issues while using DB-GPT (whether it's during "
"installation or usage), you can refer to the [FAQ](https://db-"
@@ -91,44 +91,48 @@ msgstr ""
"如果你在使用DB-GPT过程中遇到什么问题（无论是安装还是使用），可以查看[FAQ](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/faq/deploy/deploy_faq.html)\""
#: ../../getting_started/getting_started.md:27 7cb0e2f7384c473e9626e59052c114e1
msgid "🗺️ Ecosystem"
msgstr ""
#: ../../getting_started/getting_started.md:29 6067623edfda45c5a7fdfa80ffd886f4
msgid "Github: https://github.com/eosphoros-ai/DB-GPT"
msgstr "Github: https://github.com/eosphoros-ai/DB-GPT"
#: ../../getting_started/getting_started.md:30 2eb9498deaac4a928e1df453d71fcddf
msgid "PyPi:"
msgstr "PyPi:"
#: ../../getting_started/getting_started.md:31 f321d5c1e4bd4556959cd947a23055b9
msgid "DB-GPT: https://pypi.org/simple/db-gpt."
msgstr "DB-GPT: https://pypi.org/simple/db-gpt."
#: ../../getting_started/getting_started.md:33 b31a2fa10f0d4c28b8068b44654dfbf9
msgid "Associated projects"
msgstr "关联的项目"
#: ../../getting_started/getting_started.md:35 c8b6d0da11fc471a9fefcef8f85500e1
msgid ""
"🧪 DB-GPT-Hub: https://github.com/eosphoros-ai/DB-GPT-Hub | an "
"experimental project to implement Text-to-SQL parsing using LLMs"
msgstr ""
"🧪 DB-GPT-Hub: https://github.com/eosphoros-ai/DB-GPT-Hub | 基于开源大模型的Text-"
"to-SQL实验性项目"
#: ../../getting_started/getting_started.md:37 a76faff98a9a49d893a3e15f920c054a
msgid ""
"🏡 DB-GPT-Web: https://github.com/eosphoros-ai/DB-GPT-Web | Web "
"application for DB-GPT."
msgstr ""
"🏡 DB-GPT-Web: https://github.com/eosphoros-ai/DB-GPT-Web | Web 应用 for DB-"
"GPT."
#: ../../getting_started/getting_started.md:38 be54eeb941564d799cd68a6cfe1bfffc
msgid ""
"🚀 DB-GPT-Plugins: https://github.com/eosphoros-ai/DB-GPT-Web | DB-GPT "
"Plugins Repo, Which support AutoGPT plugin."
msgstr ""
"🚀 DB-GPT-Plugins: https://github.com/eosphoros-ai/DB-GPT-Web | DB-GPT "
"Plugins Repo, Which support AutoGPT plugin."
#~ msgid "4.2. Run with docker compose" #~ msgid "4.2. Run with docker compose"
@@ -552,3 +556,6 @@ msgstr "🚀 DB-GPT-Plugins: https://github.com/eosphoros-ai/DB-GPT-Web | DB-GPT
#~ "gpt.readthedocs.io/en/latest/getting_started/application/chatdb/chatdb.html)."
#~ msgstr ""
#~ msgid "🗺️ 生态"
#~ msgstr "🗺️ 生态"