doc:dashboard document

This commit is contained in:
aries_ckt 2023-08-21 17:16:00 +08:00
parent 4803a913b5
commit bf3e52aa32
6 changed files with 433 additions and 203 deletions

View File

@@ -18,3 +18,4 @@ DB-GPT product is a Web application that you can chat database, chat knowledge,
./application/chatdb/chatdb.md ./application/chatdb/chatdb.md
./application/kbqa/kbqa.md ./application/kbqa/kbqa.md
./application/dashboard/dashboard.md

View File

@@ -0,0 +1,37 @@
Dashboard
==================================
The purpose of the DB-GPT Dashboard is to make data analysts more efficient. DB-GPT provides intelligent reporting
technology that lets business analysts perform self-service analysis directly in natural language and gain
insights into their respective areas of business.
```{note} The Dashboard currently supports the following datasource types
* MySQL
* SQLite
* DuckDB
```
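All three engines can be reached through SQLAlchemy-style connection URLs (DB-GPT itself uses SQLAlchemy, as the deployment FAQ's `sqlalchemy.exc.OperationalError` suggests). The sketch below is illustrative only: the hostnames, credentials, and file names are placeholders, the `pymysql` and `duckdb-engine` drivers are assumptions rather than DB-GPT requirements, and the datasource form in the UI may ask for individual fields instead of a raw URL. Only the SQLite case is executed, since it needs no running server.

```python
from sqlalchemy import create_engine, text

# Illustrative SQLAlchemy-style URLs for the three supported engines.
# Hosts, credentials, and file names below are made-up placeholders.
urls = {
    "MySQL": "mysql+pymysql://user:password@127.0.0.1:3306/testdb",
    "SQLite": "sqlite:///:memory:",
    "DuckDB": "duckdb:///analytics.duckdb",  # assumes the duckdb-engine package
}

# Only the SQLite URL is exercised here, since it needs no server or driver.
engine = create_engine(urls["SQLite"])
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())  # 1
```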
## Steps to Use the Dashboard in DB-GPT
#### 1.Add Datasource
If you are using the Dashboard for the first time, you need some mock data to test with. DB-GPT provides dashboard
test data in pilot/mock_datas/; follow the steps below to add it as a datasource.
![add_datasource](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/0afc72ea-83c8-45ff-8c36-213b1c6fb5dd)
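If you would rather build a small test database of your own instead of using the shipped mock data, a minimal sketch with Python's built-in `sqlite3` module looks like this (the table and column names are invented for illustration and do not mirror the actual files in pilot/mock_datas/):

```python
import sqlite3

# Create a tiny SQLite database the Dashboard could analyze.
# Write to a real file (e.g. under pilot/mock_datas/) instead of
# ":memory:" if you want to register it as a datasource.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (region, amount) VALUES (?, ?)",
    [("north", 120.5), ("south", 80.0), ("north", 45.25)],
)
conn.commit()

# A typical dashboard-style aggregation over the mock table.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 165.75), ('south', 80.0)]
```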
#### 2.Choose Dashboard Mode
![create_space](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/5e888880-0e97-4b60-8e5c-b7e7224197f0)
#### 3.Select Datasource
![document](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/da2ac8b5-eca4-48ef-938f-f9dc1ca711b3)
#### 4.Input your analysis goals
![document](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/3f427350-5bd5-4675-8f89-1bd5c63ff2c6)
![document](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/3d14a2da-165e-4b2f-a921-325c20fe5ae9)
#### 5.Adjust and modify your report
![upload](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/cb802b94-041b-44b4-b6ec-891bf8da52a4)

View File

@@ -8,7 +8,7 @@ msgid ""
msgstr "" msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n" "Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n" "Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n" "POT-Creation-Date: 2023-08-21 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n" "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n" "Language: zh_CN\n"
@@ -19,3 +19,136 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n" "Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n" "Generated-By: Babel 2.12.1\n"
#: ../../getting_started/application/dashboard/dashboard.md:1
#: 8017757596ff4c7faa06f7e7d18902ca
msgid "Dashboard"
msgstr "Dashboard"
#: ../../getting_started/application/dashboard/dashboard.md:3
#: 5b84a61923404d8c81d5a1430b3fa12c
msgid ""
"The purpose of the DB-GPT Dashboard is to empower data analysts with "
"efficiency. DB-GPT provides intelligent reporting technology, allowing "
"business analysts to perform self-service analysis directly using natural"
" language and gain insights into their respective areas of business."
msgstr "DB-GPT Dashboard目的是赋能数据分析人员。DB-GPT通过提供智能报表技术使得业务分析人员可以直接使用简单的自然语言进行自助分析。"
#: ../../getting_started/application/dashboard/dashboard.md:6
#: 48604cca2b3f482692bb65a01f0297a7
msgid "Dashboard now support Datasource Type"
msgstr "Dashboard目前支持的数据源类型"
#: ../../getting_started/application/dashboard/dashboard.md:7
#: e4371bc220be46f0833dc7d0c804f263
msgid "Mysql"
msgstr "Mysql"
#: ../../getting_started/application/dashboard/dashboard.md:8
#: 719c578796fa44a3ad062289aa4650d7
msgid "Sqlite"
msgstr "Sqlite"
#: ../../getting_started/application/dashboard/dashboard.md:9
#: c7817904bbf34dfca56a19a004937146
msgid "DuckDB"
msgstr "DuckDB"
#: ../../getting_started/application/dashboard/dashboard.md:11
#: 1cebeafe853d43809e6ced45d2b68812
msgid "Steps to Dashboard In DB-GPT"
msgstr "Dashboard使用步骤"
#: ../../getting_started/application/dashboard/dashboard.md:14
#: 977520bbea44423ea290617712482148
msgid "1 add datasource"
msgstr "1.添加数据源"
#: ../../getting_started/application/dashboard/dashboard.md:15
#: a8fcef153c68498fa9886051e8d7b072
msgid ""
"If you are using Dashboard for the first time, you need to mock some data"
" to test. DB-GPT provide some dashboard test data in pilot/mock_datas/, "
"you should follow the steps. ![add_datasource](https://github.com"
"/eosphoros-ai/DB-"
"GPT/assets/13723926/043da0c1-70a0-4a26-9aa0-5aa34fd07a5c)"
msgstr "如果你是第一次使用Dashboard需要构造测试数据DB-GPT在pilot/mock_datas/提供了测试数据,只需要将数据源进行添加即可"
#: ../../getting_started/application/dashboard/dashboard.md:15
#: 1abcaa9d7fad4b53a0622ab3e982e6d5
msgid "add_datasource"
msgstr "添加数据源"
#: ../../getting_started/application/dashboard/dashboard.md:19
#: 21ebb5bf568741a9b3d7a4275dde69fa
msgid "2.Choose Dashboard Mode"
msgstr "2.进入Dashboard"
#: ../../getting_started/application/dashboard/dashboard.md:20
#: 1b55d97634b44543acf8f367f77d8436
msgid ""
"![create_space](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/5e888880-0e97-4b60-8e5c-b7e7224197f0)"
msgstr "![create_space](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/5e888880-0e97-4b60-8e5c-b7e7224197f0)"
#: ../../getting_started/application/dashboard/dashboard.md:20
#: 6c97d2aa26fa401cb3c4172bfe4aea6a
msgid "create_space"
msgstr "create_space"
#: ../../getting_started/application/dashboard/dashboard.md:23
#: ff8e96f78698428a9a578b4f90e0feb4
msgid "3.Select Datasource"
msgstr "3.选择数据源"
#: ../../getting_started/application/dashboard/dashboard.md:24
#: 277c924a6f2b49f98414cde95310384f
msgid ""
"![document](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/da2ac8b5-eca4-48ef-938f-f9dc1ca711b3)"
msgstr "![document](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/da2ac8b5-eca4-48ef-938f-f9dc1ca711b3)"
#: ../../getting_started/application/dashboard/dashboard.md:24
#: ../../getting_started/application/dashboard/dashboard.md:27
#: 33164f10fb38452fbf98be5aabaeeb91 3a46cb4427cf4ba386230dff47cf7647
#: d0093988bb414c41a93e8ad6f88e8404
msgid "document"
msgstr "document"
#: ../../getting_started/application/dashboard/dashboard.md:26
#: 6a57e48482724d23adf51e888d126562
msgid "4.Input your analysis goals"
msgstr "4.输入分析目标"
#: ../../getting_started/application/dashboard/dashboard.md:27
#: cb96df3f9135450fbf71177978c50141
msgid ""
"![document](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/3f427350-5bd5-4675-8f89-1bd5c63ff2c6) "
"![document](https://github.com/eosphoros-ai/DB-GPT/assets/13723926"
"/3d14a2da-165e-4b2f-a921-325c20fe5ae9)"
msgstr "![document](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/3f427350-5bd5-4675-8f89-1bd5c63ff2c6) "
"![document](https://github.com/eosphoros-ai/DB-GPT/assets/13723926"
"/3d14a2da-165e-4b2f-a921-325c20fe5ae9)"
#: ../../getting_started/application/dashboard/dashboard.md:31
#: ed0f008525334a36a900b82339591095
msgid "5.Adjust and modify your report"
msgstr "5.调整"
#: ../../getting_started/application/dashboard/dashboard.md:34
#: 8fc26117a2e1484b9452cfaf8c7f208b
msgid ""
"![upload](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/cb802b94"
"-041b-44b4-b6ec-891bf8da52a4)"
msgstr "![upload](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/cb802b94"
"-041b-44b4-b6ec-891bf8da52a4)"
#: ../../getting_started/application/dashboard/dashboard.md:34
#: 6d12166c3c574651a854534cc8c7e997
msgid "upload"
msgstr "upload"

View File

@@ -8,7 +8,7 @@ msgid ""
msgstr "" msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n" "Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n" "Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 23:15+0800\n" "POT-Creation-Date: 2023-08-21 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n" "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n" "Language: zh_CN\n"
@@ -20,54 +20,74 @@ msgstr ""
"Generated-By: Babel 2.12.1\n" "Generated-By: Babel 2.12.1\n"
#: ../../getting_started/faq/deploy/deploy_faq.md:1 #: ../../getting_started/faq/deploy/deploy_faq.md:1
#: 6aa73265a43d4e6ea287e6265ef4efe5 #: 4466dae5cd1048cd9c22450be667b05a
msgid "Installation FAQ" msgid "Installation FAQ"
msgstr "Installation FAQ" msgstr "Installation FAQ"
#: ../../getting_started/faq/deploy/deploy_faq.md:5 #: ../../getting_started/faq/deploy/deploy_faq.md:5
#: 4efe241d5e724db5ab22548cfb88f8b6 #: dfa13f5fdf1e4fb9af92b58a5bae2ae9
#, fuzzy
msgid "" msgid ""
"Q1: execute `pip install -e .` error, found some package cannot find "
"correct version."
msgstr ""
"Q1: execute `pip install -r requirements.txt` error, found some package " "Q1: execute `pip install -r requirements.txt` error, found some package "
"cannot find correct version." "cannot find correct version."
msgstr "Q1: execute `pip install -r requirements.txt` error, found some package "
"cannot find correct version."
#: ../../getting_started/faq/deploy/deploy_faq.md:6 #: ../../getting_started/faq/deploy/deploy_faq.md:6
#: e837e10bdcfa49cebb71b32eece4831b #: c694387b681149d18707be047b46fa87
msgid "change the pip source." msgid "change the pip source."
msgstr "替换pip源." msgstr "替换pip源."
#: ../../getting_started/faq/deploy/deploy_faq.md:13 #: ../../getting_started/faq/deploy/deploy_faq.md:13
#: ../../getting_started/faq/deploy/deploy_faq.md:20 #: ../../getting_started/faq/deploy/deploy_faq.md:20
#: 84310ec0c54e4a02949da2e0b35c8c7d e8a7a8b38b7849b88c14fb6d647f9b63 #: 5423bc84710c42ee8ba07e95467ce3ac 99aa6bb16764443f801a342eb8f212ce
msgid "or" msgid "or"
msgstr "或者" msgstr "或者"
#: ../../getting_started/faq/deploy/deploy_faq.md:27 #: ../../getting_started/faq/deploy/deploy_faq.md:27
#: 87797a5dafef47c8884f6f1be9a1fbd2 #: 6cc878fe282f4a9ab024d0b884c57894
msgid "" msgid ""
"Q2: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to" "Q2: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to"
" open database file" " open database file"
msgstr "Q2: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to" msgstr ""
"Q2: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to"
" open database file" " open database file"
#: ../../getting_started/faq/deploy/deploy_faq.md:29 #: ../../getting_started/faq/deploy/deploy_faq.md:29
#: bc96b22e201c47ec999c8d98227a956d #: 18a71bc1062a4b1c8247068d4d49e25d
msgid "make sure you pull latest code or create directory with mkdir pilot/data" msgid "make sure you pull latest code or create directory with mkdir pilot/data"
msgstr "make sure you pull latest code or create directory with mkdir pilot/data" msgstr "make sure you pull latest code or create directory with mkdir pilot/data"
#: ../../getting_started/faq/deploy/deploy_faq.md:31 #: ../../getting_started/faq/deploy/deploy_faq.md:31
#: d7938f1c70a64efa9948080a6d416964 #: 0987d395af24440a95dd9367e3004a0b
msgid "Q3: The model keeps getting killed." msgid "Q3: The model keeps getting killed."
msgstr "Q3: The model keeps getting killed." msgstr "Q3: The model keeps getting killed."
#: ../../getting_started/faq/deploy/deploy_faq.md:32 #: ../../getting_started/faq/deploy/deploy_faq.md:33
#: b072386586a64b2289c0fcdf6857b2b7 #: bfd90cb8f2914bba84a44573a9acdd6d
msgid "" msgid ""
"your GPU VRAM size is not enough, try replace your hardware or replace " "your GPU VRAM size is not enough, try replace your hardware or replace "
"other llms." "other llms."
msgstr "GPU显存不够, 增加显存或者换一个显存小的模型" msgstr "GPU显存不够, 增加显存或者换一个显存小的模型"
#: ../../getting_started/faq/deploy/deploy_faq.md:35
#: 09a9baca454d4b868fedffa4febe7c5c
msgid "Q4: How to access website on the public network"
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:37
#: 3ad8d2cf2b4348a6baed5f3e302cd58c
msgid ""
"You can try to use gradio's [network](https://github.com/gradio-"
"app/gradio/blob/main/gradio/networking.py) to achieve."
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:48
#: 90b35959c5854b69acad9b701e21e65f
msgid "Open `url` with your browser to see the website."
msgstr ""
#~ msgid "" #~ msgid ""
#~ "Q2: When use Mysql, Access denied " #~ "Q2: When use Mysql, Access denied "
#~ "for user 'root@localhost'(using password :NO)" #~ "for user 'root@localhost'(using password :NO)"

View File

@@ -8,7 +8,7 @@ msgid ""
msgstr "" msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n" "Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n" "Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-17 21:23+0800\n" "POT-Creation-Date: 2023-08-21 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n" "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n" "Language: zh_CN\n"
@@ -20,34 +20,34 @@ msgstr ""
"Generated-By: Babel 2.12.1\n" "Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/deploy/deploy.md:1 #: ../../getting_started/install/deploy/deploy.md:1
#: de0b03c3b94a4e2aad7d380f532b85c0 #: 14020ee624b545a5a034b7e357f42545
msgid "Installation From Source" msgid "Installation From Source"
msgstr "源码安装" msgstr "源码安装"
#: ../../getting_started/install/deploy/deploy.md:3 #: ../../getting_started/install/deploy/deploy.md:3
#: 65a034e1a90f40bab24899be901cc97f #: eeafb53bf0e846518457084d84edece7
msgid "" msgid ""
"This tutorial gives you a quick walkthrough about use DB-GPT with you " "This tutorial gives you a quick walkthrough about use DB-GPT with you "
"environment and data." "environment and data."
msgstr "本教程为您提供了关于如何使用DB-GPT的使用指南。" msgstr "本教程为您提供了关于如何使用DB-GPT的使用指南。"
#: ../../getting_started/install/deploy/deploy.md:5 #: ../../getting_started/install/deploy/deploy.md:5
#: 33b15956f7ef446a9aa4cac014163884 #: 1d6ee2f0f1ae43e9904da4c710b13e28
msgid "Installation" msgid "Installation"
msgstr "安装" msgstr "安装"
#: ../../getting_started/install/deploy/deploy.md:7 #: ../../getting_started/install/deploy/deploy.md:7
#: ad64dc334e8e43bebc8873afb27f7b15 #: 6ebdb4ae390e4077af2388c48a73430d
msgid "To get started, install DB-GPT with the following steps." msgid "To get started, install DB-GPT with the following steps."
msgstr "请按照以下步骤安装DB-GPT" msgstr "请按照以下步骤安装DB-GPT"
#: ../../getting_started/install/deploy/deploy.md:9 #: ../../getting_started/install/deploy/deploy.md:9
#: 33e12a5bef6c45dbb30fbffae556b664 #: 910cfe79d1064bd191d56957b76d37fa
msgid "1. Hardware Requirements" msgid "1. Hardware Requirements"
msgstr "1. 硬件要求" msgstr "1. 硬件要求"
#: ../../getting_started/install/deploy/deploy.md:10 #: ../../getting_started/install/deploy/deploy.md:10
#: dfd3b7c074124de78169f34168b7c757 #: 6207b8e32b7c4b669c8874ff9267627e
msgid "" msgid ""
"As our project has the ability to achieve ChatGPT performance of over " "As our project has the ability to achieve ChatGPT performance of over "
"85%, there are certain hardware requirements. However, overall, the " "85%, there are certain hardware requirements. However, overall, the "
@@ -56,176 +56,176 @@ msgid ""
msgstr "由于我们的项目有能力达到85%以上的ChatGPT性能所以对硬件有一定的要求。但总体来说我们在消费级的显卡上即可完成项目的部署使用具体部署的硬件说明如下:" msgstr "由于我们的项目有能力达到85%以上的ChatGPT性能所以对硬件有一定的要求。但总体来说我们在消费级的显卡上即可完成项目的部署使用具体部署的硬件说明如下:"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 3d4530d981bf4dbab815a11c74bfd897 #: 45babe2e028746559e437880fdbcd5d3
msgid "GPU" msgid "GPU"
msgstr "GPU" msgstr "GPU"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 348c8f9b734244258416ea2e11b76caa f2f5e55b8c9b4c7da0ac090e763a9f47 #: 7adbc2bb5e384d419b53badfaf36b962 9307790f5c464f58a54c94659451a037
msgid "VRAM Size" msgid "VRAM Size"
msgstr "显存" msgstr "显存"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 9e099897409f42339bc284c378318a72 #: 305fdfdbd4674648a059b65736be191c
msgid "Performance" msgid "Performance"
msgstr "Performance" msgstr "Performance"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 65bd67a198a84a5399f4799b505e062c #: 0e719a22b08844d9be04b4bcaeb4ad87
msgid "RTX 4090" msgid "RTX 4090"
msgstr "RTX 4090" msgstr "RTX 4090"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 75e7f58f8f5d42f081a3e4d2e51ccc18 d897e949b37344d084f6917b977bcceb #: 482dd0da73f3495198ee1c9c8fb7e8ed ed30edd1a6944d6c8cb6a06c9c12d4db
msgid "24 GB" msgid "24 GB"
msgstr "24 GB" msgstr "24 GB"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 9cba72a2be3c41bda797ff447b63e448 #: b9a9b3179d844b97a578193eacfec8cc
msgid "Smooth conversation inference" msgid "Smooth conversation inference"
msgstr "Smooth conversation inference" msgstr "Smooth conversation inference"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 90ea71d2099c47acac027773e69d2b23 #: 171da7c9f0744b5aa335a5411f126eb7
msgid "RTX 3090" msgid "RTX 3090"
msgstr "RTX 3090" msgstr "RTX 3090"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 7dd339251a1a45be9a45e0bb30bd09f7 #: fbb497f41e61437ba089008c573b0cc7
msgid "Smooth conversation inference, better than V100" msgid "Smooth conversation inference, better than V100"
msgstr "Smooth conversation inference, better than V100" msgstr "Smooth conversation inference, better than V100"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 8abe188052464e6aa395392db834c842 #: 2cb4fba16b664e1e9c22a1076f837a80
msgid "V100" msgid "V100"
msgstr "V100" msgstr "V100"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 0b5263806a19446991cbd59c0fec6ba7 d645833d7e854eab81102374bf3fb7d8 #: 05cccda43ffb41d7b73c2d5dfbc7f1c5 8c471150ab0746d8998ddca30ad86404
msgid "16 GB" msgid "16 GB"
msgstr "16 GB" msgstr "16 GB"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 83bb5f40fa7f4e9389ac6abdb6bbb285 f9e58862a0cb488194d7ad536f359f0d #: 335fa32f77a349abb9a813a3a9dd6974 8546d09080b0421597540e92ab485254
msgid "Conversation inference possible, noticeable stutter" msgid "Conversation inference possible, noticeable stutter"
msgstr "Conversation inference possible, noticeable stutter" msgstr "Conversation inference possible, noticeable stutter"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 281f7b3c9a9f450798e5cc7612ea3890 #: dd7a18f4bc144413b90863598c5e9a83
msgid "T4" msgid "T4"
msgstr "T4" msgstr "T4"
#: ../../getting_started/install/deploy/deploy.md:19 #: ../../getting_started/install/deploy/deploy.md:19
#: 7276d432615040b3a9eea3f2c5764319 #: 932dc4db2fba4272b72c31eb7d319255
msgid "" msgid ""
"if your VRAM Size is not enough, DB-GPT supported 8-bit quantization and " "if your VRAM Size is not enough, DB-GPT supported 8-bit quantization and "
"4-bit quantization." "4-bit quantization."
msgstr "如果你的显存不够DB-GPT支持8-bit和4-bit量化版本" msgstr "如果你的显存不够DB-GPT支持8-bit和4-bit量化版本"
#: ../../getting_started/install/deploy/deploy.md:21 #: ../../getting_started/install/deploy/deploy.md:21
#: f68d085e03d244ed9b5ccec347466889 #: 7dd6fabaf1ea43718f26e8b83a7299e3
msgid "" msgid ""
"Here are some of the VRAM size usage of the models we tested in some " "Here are some of the VRAM size usage of the models we tested in some "
"common scenarios." "common scenarios."
msgstr "这里是量化版本的相关说明" msgstr "这里是量化版本的相关说明"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 8d397d7ee603448b97153192e6d3e372 #: b50ded065d4943e3a5bfdfdf3a723f82
msgid "Model" msgid "Model"
msgstr "Model" msgstr "Model"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 4c789597627447278e89462690563faa #: 30621cf2407f4beca262eb47023d0b84
msgid "Quantize" msgid "Quantize"
msgstr "Quantize" msgstr "Quantize"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: a8d7c76224544ce69f376ac7cb2f5a3b dbf2beb53d7e491393b00848d63d7ffa #: 492112b927ce46308c50917beaa9e23a 8450f0b95a05475d9136906ec64d43b2
msgid "vicuna-7b-v1.5" msgid "vicuna-7b-v1.5"
msgstr "vicuna-7b-v1.5" msgstr "vicuna-7b-v1.5"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 1bd5afee2af84597b5a423308e92362c 46d030b6ff1849ebb22b590e6978d914 #: 1379c29cb10340848ce3e9bf9ec67348 39221fd99f0a41d29141fb73e1c9217d
#: 4cf8f3ebb0b743ffb2f1c123a25b75d0 6d96be424e6043daa2c02649894aa796 #: 3fd499ced4884e2aa6633784432f085c 6e872d37f2ab4571961465972092f439
#: 83bb935f520c4c818bfe37e13034b2a7 92ba161085374917b7f82810b1a2bf00 #: 917c7d492a4943f5963a210f9c997cb7 fb204224fb484019b344315d03f50571
#: ca5ccc49ba1046d2b4b13aaa7ceb62f5 #: ff9f5aea13d04176912ebf141cc15d44
msgid "4-bit" msgid "4-bit"
msgstr "4-bit" msgstr "4-bit"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 1173b5ee04cb4686ba34a527bc618bdb 558136997f4d49998f2f4e6a9bb656b0 #: 13976c780bc3451fae4ad398b39f5245 32fe2b6c2f1e40c8928c8537f9239d07
#: 8d203a9a70684cbaa9d937af8450847f #: 74f4ff229a314abf97c8fa4d6d73c339
msgid "8 GB" msgid "8 GB"
msgstr "8 GB" msgstr "8 GB"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 033458b772e3493b80041122e067e194 0f3eda083eac4739b2cf7d21337b145e #: 14ca9edebe794c7692738e5d14a15c06 453394769a20437da7a7a758c35345af
#: 6404eaa486cf45a69a27b0f87a7f6302 8e850aa3acf14ab2b231be74ddb34e86 #: 860b51d1ab6742c5bdefc6e1ffc923a3 988110ac5e7f418d8a46ea9c42238ecc
#: ba258645f41f47a693aacbbc0f38e981 df67cac599234c4ba667a9b40eb6d9bc #: d446bc441f8946d9a95a2a965157478e d683ae90e3da473f876cb948dd5dce5e
#: fc07aeb321434722a320fed0afe3ffb8 #: f1e2f6fb36624185ab2876bdc87301ec
msgid "8-bit" msgid "8-bit"
msgstr "8-bit" msgstr "8-bit"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 3ed2cb5787c14f268c446b03d5531233 68e7f0b0e8ad44ee86a86189bb3b553d #: 1b6230690780434184bab3a68d501b60 734f124ce767437097d9cb3584796df5
#: 8b4ea703d1df45c5be90d83c4723f16f cb606e0a458746fd86307c1e8aea08f1 #: 7da1c33a6d4746eba7f976cc43e6ad59 c0e7bd4672014afe88be1e11b8e772da
#: d5da58dbde3c4bb4ac8b464a0a507c62 e8e140c610ec4971afe1b7ec2690382a #: d045ee257f8a40e8bdad0e2b91c64018 e231e0b6f7ee4f5b85d28a18bbc32175
msgid "12 GB" msgid "12 GB"
msgstr "12 GB" msgstr "12 GB"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 512cd29c308c4d3ab66dbe63e7ea8f48 78f8307ab96c4245a1f09abcd714034c #: 2ee16dd760e74439bbbec13186f9e44e 65628a09b7bf487eb4b4f0268eab2751
msgid "vicuna-13b-v1.5" msgid "vicuna-13b-v1.5"
msgstr "vicuna-13b-v1.5" msgstr "vicuna-13b-v1.5"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 48849903b3cb4888b6dd3d5efcbb24fb 83178c05f6cf431f82bb1d6d25b2645e #: 3d633ad2c90547d1b47a06aafe4aa177 4a9e81e6303748ada67fa8b6ec1a8f57
#: 979e3ab64df14753b0987bdd49bd5cc6 #: d33dc735503649348590e03efabef94d
msgid "20 GB" msgid "20 GB"
msgstr "20 GB" msgstr "20 GB"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 0496046d1fb644c28361959b395231d7 3871c7c4432b4a15a9888586cdc70eda #: 6c02020d52e94f27b2eb1d10b20e9cca d58f181d10c04980b6d0bdc2be51c01c
msgid "llama-2-7b" msgid "llama-2-7b"
msgstr "llama-2-7b" msgstr "llama-2-7b"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 0c7c632d7c4d44fabfeed58fcc13db8f 78ee1adc6a7e4fa188706a1d5356059f #: b9f25fa66dc04e539dfa30b2297ac8aa e1e80256f5444e9aacebb64e053a6b70
msgid "llama-2-13b" msgid "llama-2-13b"
msgstr "llama-2-13b" msgstr "llama-2-13b"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 15a9341d4a6649908ef88045edd0cb93 d385381b0b4d4eff96a93a9d299cf516 #: 077c1d5b6db248d4aa6a3b0c5b2cc237 987cc7f04eea4d94acd7ca3ee0fdfe20
msgid "llama-2-70b" msgid "llama-2-70b"
msgstr "llama-2-70b" msgstr "llama-2-70b"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: de37a5c2b02a498b9344b24f626db9dc #: 18a1070ab2e048c7b3ea90e25d58b38f
msgid "48 GB" msgid "48 GB"
msgstr "48 GB" msgstr "48 GB"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: e897098f81314ce4bf729aee1de7354c #: da0474d2c8214e678021181191a651e5
msgid "80 GB" msgid "80 GB"
msgstr "80 GB" msgstr "80 GB"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 0782883260f840db8e8bf7c10b5ddf62 b03b5c9343454119ae11fcb2dedf9f90 #: 49e5fedf491b4b569459c23e4f6ebd69 8b54730a306a46f7a531c13ef825b20a
msgid "baichuan-7b" msgid "baichuan-7b"
msgstr "baichuan-7b" msgstr "baichuan-7b"
#: ../../getting_started/install/deploy/deploy.md #: ../../getting_started/install/deploy/deploy.md
#: 008a7d56e8dc4242ae3503bbbf4db153 65ea9ba20adb45519d65da7b16069fa8 #: 2158255ea61c4a0c9e96bec0df21fa06 694d9b2fe90740c5b9704089d7ddac9d
msgid "baichuan-13b" msgid "baichuan-13b"
msgstr "baichuan-13b" msgstr "baichuan-13b"
#: ../../getting_started/install/deploy/deploy.md:40 #: ../../getting_started/install/deploy/deploy.md:40
#: 1e434048c4844cc1906d83dd68af6d8c #: a9f9e470d41f4122a95cbc8bd2bc26dc
msgid "2. Install" msgid "2. Install"
msgstr "2. Install" msgstr "2. Install"
#: ../../getting_started/install/deploy/deploy.md:45 #: ../../getting_started/install/deploy/deploy.md:45
#: c6190ea13c024ddcb8e45ea22a235c3b #: c78fd491a6374224ab95dc39849f871f
msgid "" msgid ""
"We use Sqlite as default database, so there is no need for database " "We use Sqlite as default database, so there is no need for database "
"installation. If you choose to connect to other databases, you can " "installation. If you choose to connect to other databases, you can "
@@ -240,12 +240,12 @@ msgstr ""
" Miniconda](https://docs.conda.io/en/latest/miniconda.html)" " Miniconda](https://docs.conda.io/en/latest/miniconda.html)"
#: ../../getting_started/install/deploy/deploy.md:54 #: ../../getting_started/install/deploy/deploy.md:54
#: ee1e44044b73460ea8cd2f6c2eb6100d #: 12180cd023a04152b1591a87d96d227a
msgid "Before use DB-GPT Knowledge" msgid "Before use DB-GPT Knowledge"
msgstr "在使用知识库之前" msgstr "在使用知识库之前"
#: ../../getting_started/install/deploy/deploy.md:60 #: ../../getting_started/install/deploy/deploy.md:60
#: 6a9ff4138c69429bb159bf452fa7ee55 #: 2d5c1e241a0b47de81c91eca2c4999c6
msgid "" msgid ""
"Once the environment is installed, we have to create a new folder " "Once the environment is installed, we have to create a new folder "
"\"models\" in the DB-GPT project, and then we can put all the models " "\"models\" in the DB-GPT project, and then we can put all the models "
@@ -253,27 +253,27 @@ msgid ""
msgstr "如果你已经安装好了环境需要创建models, 然后到huggingface官网下载模型" msgstr "如果你已经安装好了环境需要创建models, 然后到huggingface官网下载模型"
#: ../../getting_started/install/deploy/deploy.md:63 #: ../../getting_started/install/deploy/deploy.md:63
#: 1299b19bd0f24cc896c59e2c8e7e656c #: 8a79092303e74ab2974fb6edd3d14a1c
msgid "Notice make sure you have install git-lfs" msgid "Notice make sure you have install git-lfs"
msgstr "" msgstr ""
#: ../../getting_started/install/deploy/deploy.md:65 #: ../../getting_started/install/deploy/deploy.md:65
#: 69b3433c8e5c4cbb960e0178bdd6ac97 #: 68e9b02da9994192856fd4572732041d
msgid "centos:yum install git-lfs" msgid "centos:yum install git-lfs"
msgstr "" msgstr ""
#: ../../getting_started/install/deploy/deploy.md:67 #: ../../getting_started/install/deploy/deploy.md:67
#: 50e3cfee5fd5484bb063d41693ac75f0 #: 30cae887feee4ec897d787801d1900db
msgid "ubuntu:apt-get install git-lfs" msgid "ubuntu:apt-get install git-lfs"
msgstr "" msgstr ""
#: ../../getting_started/install/deploy/deploy.md:69 #: ../../getting_started/install/deploy/deploy.md:69
#: 81c85ca1188b4ef5b94e0431c6309f9b #: 292a52e7f50242c59e7e95bafc8102da
msgid "macos:brew install git-lfs" msgid "macos:brew install git-lfs"
msgstr "" msgstr ""
#: ../../getting_started/install/deploy/deploy.md:86 #: ../../getting_started/install/deploy/deploy.md:86
#: 9b503ea553a24d488e1c180bf30055ff #: a66af00ea6a0403dbc90df23b0e3c40e
msgid "" msgid ""
"The model files are large and will take a long time to download. During " "The model files are large and will take a long time to download. During "
"the download, let's configure the .env file, which needs to be copied and" "the download, let's configure the .env file, which needs to be copied and"
@@ -281,7 +281,7 @@ msgid ""
msgstr "模型文件很大,需要很长时间才能下载。在下载过程中,让我们配置.env文件它需要从。env.template中复制和创建。" msgstr "模型文件很大,需要很长时间才能下载。在下载过程中,让我们配置.env文件它需要从。env.template中复制和创建。"
#: ../../getting_started/install/deploy/deploy.md:88 #: ../../getting_started/install/deploy/deploy.md:88
#: 643b6a27bc0f43ee9451d18d52a9a2eb #: 98934f9d2dda41e9b9bc20078ed750eb
msgid "" msgid ""
"if you want to use openai llm service, see [LLM Use FAQ](https://db-" "if you want to use openai llm service, see [LLM Use FAQ](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)" "gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)"
@@ -290,19 +290,19 @@ msgstr ""
"gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)" "gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)"
#: ../../getting_started/install/deploy/deploy.md:91 #: ../../getting_started/install/deploy/deploy.md:91
#: cc869640e66949e99faa17b1098b1306 #: 727fc8be76dc40d59416b20273faee13
msgid "cp .env.template .env" msgid "cp .env.template .env"
msgstr "cp .env.template .env" msgstr "cp .env.template .env"
#: ../../getting_started/install/deploy/deploy.md:94 #: ../../getting_started/install/deploy/deploy.md:94
#: 1b94ed0e469f413b8e9d0ff3cdabca33 #: 65a5307640484800b2d667585e92b340
msgid "" msgid ""
"You can configure basic parameters in the .env file, for example setting " "You can configure basic parameters in the .env file, for example setting "
"LLM_MODEL to the model to be used" "LLM_MODEL to the model to be used"
msgstr "您可以在.env文件中配置基本参数例如将LLM_MODEL设置为要使用的模型。" msgstr "您可以在.env文件中配置基本参数例如将LLM_MODEL设置为要使用的模型。"
#: ../../getting_started/install/deploy/deploy.md:96 #: ../../getting_started/install/deploy/deploy.md:96
#: 52cfa3636f2b4f949035d2d54b39a123 #: ab83b0d6663441e588ceb52bb2e5934c
msgid "" msgid ""
"([Vicuna-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) based on " "([Vicuna-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) based on "
"llama-2 has been released, we recommend you set `LLM_MODEL=vicuna-" "llama-2 has been released, we recommend you set `LLM_MODEL=vicuna-"
@@ -313,22 +313,45 @@ msgstr ""
"目前Vicuna-v1.5模型(基于llama2)已经开源了我们推荐你使用这个模型通过设置LLM_MODEL=vicuna-13b-v1.5" "目前Vicuna-v1.5模型(基于llama2)已经开源了我们推荐你使用这个模型通过设置LLM_MODEL=vicuna-13b-v1.5"
#: ../../getting_started/install/deploy/deploy.md:98 #: ../../getting_started/install/deploy/deploy.md:98
#: 491fd44ede1645a3a2db10097c10dbe8 #: 4ed331ceacb84a339a7e0038029e356e
msgid "3. Run" msgid "3. Run"
msgstr "3. Run" msgstr "3. Run"
#: ../../getting_started/install/deploy/deploy.md:100 #: ../../getting_started/install/deploy/deploy.md:100
#: f66b8a2b18b34df5b3e74674b4a9d7a9 #: e674fe7a6a9542dfae6e76f8c586cb04
msgid "**(Optional) load examples into SQLite**"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:105
#: b2de940ecd9444d2ab9ebf8762565fb6
msgid "1.Run db-gpt server" msgid "1.Run db-gpt server"
msgstr "1.Run db-gpt server" msgstr "1.Run db-gpt server"
#: ../../getting_started/install/deploy/deploy.md:105 #: ../../getting_started/install/deploy/deploy.md:111
#: b72283f0ffdc4ecbb4da5239be5fd126 #: 929d863e28eb4c5b8bd4c53956d3bc76
msgid "Open http://localhost:5000 with your browser to see the product." msgid "Open http://localhost:5000 with your browser to see the product."
msgstr "打开浏览器访问http://localhost:5000" msgstr "打开浏览器访问http://localhost:5000"
#: ../../getting_started/install/deploy/deploy.md:114
#: 43dd8b4a017f448f9be4f5432a083c08
msgid "If you want to access an external LLM service, you need to"
msgstr ""

#: ../../getting_started/install/deploy/deploy.md:116
#: 452a8bbc3e7f43e9a89f244ab0910fd6
msgid ""
"1.set the variables LLM_MODEL=YOUR_MODEL_NAME, "
"MODEL_SERVER=YOUR_MODEL_SERVEReg:http://localhost:5000 in the .env "
"file."
msgstr ""

#: ../../getting_started/install/deploy/deploy.md:118
#: 216350f67d1a4056afb1c0277dd46a0c
msgid "2.execute dbgpt_server.py in light mode"
msgstr ""

#: ../../getting_started/install/deploy/deploy.md:121
#: fc9c90ca4d974ee1ade9e972320debba
msgid ""
"If you want to learn about dbgpt-webui, read https://github./csunny/DB-"
"GPT/tree/new-page-framework/datacenter"
msgstr ""
"如果你想了解web-ui, 请访问https://github./csunny/DB-GPT/tree/new-page-"
"framework/datacenter"
#: ../../getting_started/install/deploy/deploy.md:127
#: c0df0cdc5dea4ef3bef4d4c1f4cc52ad
#, fuzzy
msgid "Multiple GPUs"
msgstr "4. Multiple GPUs"

#: ../../getting_started/install/deploy/deploy.md:129
#: c3aed00fa8364e6eaab79a23cf649558
msgid ""
"DB-GPT will use all available gpu by default. And you can modify the "
"setting `CUDA_VISIBLE_DEVICES=0,1` in `.env` file to use the specific gpu"
" IDs."
msgstr "DB-GPT默认加载可利用的gpu你也可以通过在`.env`文件修改`CUDA_VISIBLE_DEVICES=0,1`来指定gpu IDs"

#: ../../getting_started/install/deploy/deploy.md:131
#: 88b7510fef5943c5b4807bc92398a604
msgid ""
"Optionally, you can also specify the gpu ID to use before the starting "
"command, as shown below:"
msgstr "你也可以指定gpu ID启动"
#: ../../getting_started/install/deploy/deploy.md:141
#: 2cf93d3291bd4f56a3677e607f6185e7
msgid ""
"You can modify the setting `MAX_GPU_MEMORY=xxGib` in `.env` file to "
"configure the maximum memory used by each GPU."
msgstr "同时你可以通过在.env文件设置`MAX_GPU_MEMORY=xxGib`修改每个GPU的最大使用内存"

#: ../../getting_started/install/deploy/deploy.md:143
#: 00aac35bec094f99bd1f2f5f344cd3f5
#, fuzzy
msgid "Not Enough Memory"
msgstr "5. Not Enough Memory"

#: ../../getting_started/install/deploy/deploy.md:145
#: d3d4b8cd24114a24929f62bbe7bae1a2
msgid "DB-GPT supported 8-bit quantization and 4-bit quantization."
msgstr "DB-GPT 支持 8-bit quantization 和 4-bit quantization."

#: ../../getting_started/install/deploy/deploy.md:147
#: eb9412b79f044aebafd59ecf3cc4f873
msgid ""
"You can modify the setting `QUANTIZE_8bit=True` or `QUANTIZE_4bit=True` "
"in `.env` file to use quantization(8-bit quantization is enabled by "
"default)."
msgstr "你可以通过在.env文件设置`QUANTIZE_8bit=True` or `QUANTIZE_4bit=True`"

#: ../../getting_started/install/deploy/deploy.md:149
#: b27c4982ec9b4ff0992d477be1100488
msgid ""
"Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit"
" quantization can run with 48 GB of VRAM."
msgstr ""
"Llama-2-70b with 8-bit quantization 可以运行在80GB VRAM机器 4-bit "
"quantization可以运行在 48 GB VRAM"
#~ msgid ""
#~ "Notice make sure you have install "
#~ "git-lfs centos:yum install git-lfs "

#~ msgstr ""
#~ "如果你想访问外部的大模型服务(是通过DB-"
#~ "GPT/pilot/server/llmserver.py启动的模型服务)1.需要在.env文件设置模型名和外部模型服务地址。2.使用light模式启动服务"
#~ msgid ""
#~ "Note: you need to install the "
#~ "latest dependencies according to "
#~ "[requirements.txt](https://github.com/eosphoros-ai/DB-"
#~ "GPT/blob/main/requirements.txt)."
#~ msgstr ""
#~ "注意,需要安装[requirements.txt](https://github.com/eosphoros-ai/DB-"
#~ "GPT/blob/main/requirements.txt)涉及的所有的依赖"

msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-21 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/llm/llama/llama_cpp.md:1
#: 24d5c21cd8b44f1d8585ba5c83e34acc
msgid "llama.cpp"
msgstr "llama.cpp"
#: ../../getting_started/install/llm/llama/llama_cpp.md:5
#: 56969ff863d949aa8df55d3bdb6957e7
msgid ""
"DB-GPT already supports "
"[llama.cpp](https://github.com/ggerganov/llama.cpp) via [llama-cpp-"
"python](https://github.com/abetlen/llama-cpp-python)."
msgstr ""

#: ../../getting_started/install/llm/llama/llama_cpp.md:7
#: afe223eafcc641779e1580cac574c34a
msgid "Running llama.cpp"
msgstr "运行 llama.cpp"

#: ../../getting_started/install/llm/llama/llama_cpp.md:9
#: 0eaf98a036434eecb2af1fa89f045620
msgid "Preparing Model Files"
msgstr "准备模型文件"

#: ../../getting_started/install/llm/llama/llama_cpp.md:11
#: 4f45be5d9658451fb95f1d5d31dc8778
msgid ""
"To use llama.cpp, you need to prepare a ggml format model file, and there"
" are two common ways to obtain it, you can choose either:"
msgstr "使用llama.cpp, 你需要准备ggml格式的文件你可以通过以下两种方法获取"
#: ../../getting_started/install/llm/llama/llama_cpp.md:13
#: 9934596e0f6e466aae63cefbb019e0ec
msgid "Download a pre-converted model file."
msgstr "Download a pre-converted model file."

#: ../../getting_started/install/llm/llama/llama_cpp.md:15
#: 33fef76961064a5ca4c86c57111c8bd3
msgid ""
"Suppose you want to use [Vicuna 7B v1.5](https://huggingface.co/lmsys"
"/vicuna-7b-v1.5), you can download the file already converted from "
"[TheBloke/vicuna-7B-v1.5-GGML](https://huggingface.co/TheBloke/vicuna-"
"7B-v1.5-GGML), only one file is needed. Download it to the `models` "
"directory and rename it to `ggml-model-q4_0.bin`."
msgstr ""
"假设您想使用[Vicuna 7B v1.5](https://huggingface.co/lmsys/vicuna-"
"7b-v1.5)您可以从[TheBloke/vicuna-"
"7B-v1.5-GGML](https://huggingface.co/TheBloke/vicuna-"
"7B-v1.5-GGML)下载已转换的文件只需要一个文件。将其下载到models目录并将其重命名为ggml-model-q4_0.bin。"
#: ../../getting_started/install/llm/llama/llama_cpp.md:21
#: 65fed5b7e95b4205b2b94596a21b6fe8
msgid "Convert It Yourself"
msgstr "Convert It Yourself"

#: ../../getting_started/install/llm/llama/llama_cpp.md:23
#: 1421761d320046f79f725e64bd7d854c
msgid ""
"You can convert the model file yourself according to the instructions in "
"[llama.cpp#prepare-data--run](https://github.com/ggerganov/llama.cpp"
"#prepare-data--run), and put the converted file in the models directory "
"and rename it to `ggml-model-q4_0.bin`."
msgstr ""
"您可以根据[llama.cpp#prepare-data--run](https://github.com/ggerganov/llama.cpp"
"#prepare-data--run)中的说明自己转换模型文件然后将转换后的文件放入models目录中并将其重命名为ggml-"
"model-q4_0.bin。"

#: ../../getting_started/install/llm/llama/llama_cpp.md:25
#: 850b1f8ef6be49b192e01c1b7d8f1f26
msgid "Installing Dependencies"
msgstr "安装依赖"

#: ../../getting_started/install/llm/llama/llama_cpp.md:27
#: b323ee4799d745cc9c0a449bd37c371a
msgid ""
"llama.cpp is an optional dependency in DB-GPT, and you can manually "
"install it using the following command:"
msgstr "llama.cpp在DB-GPT中是可选安装项, 你可以通过以下命令进行安装:"
#: ../../getting_started/install/llm/llama/llama_cpp.md:33
#: 75b75c84ffb7476d8501a28bb2719615
msgid "Modifying the Configuration File"
msgstr "修改配置文件"

#: ../../getting_started/install/llm/llama/llama_cpp.md:35
#: d1f8b3e1ad3441f2aafbfe2519113c2c
msgid "Next, you can directly modify your `.env` file to enable llama.cpp."
msgstr "接下来直接修改`.env`文件以启用llama.cpp"

#: ../../getting_started/install/llm/llama/llama_cpp.md:42
#: 2ddcab3834f646e58a8b3316abf6ce3a
msgid ""
"Then you can run it according to [Run](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/install/deploy/deploy.html#run)."
msgstr ""
"然后你可以根据[Run](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/install/deploy/deploy.html#run)来运行"

#: ../../getting_started/install/llm/llama/llama_cpp.md:45
#: bb9f222d22534827a9fa164b2126d192
msgid "More Configurations"
msgstr "更多配置"

#: ../../getting_started/install/llm/llama/llama_cpp.md:47
#: 14d016ad5bad451888d01e24f0ca86d9
msgid ""
"In DB-GPT, the model configuration can be done through `{model "
"name}_{config key}`."
msgstr "在DB-GPT中模型配置可以通过`{model name}_{config key}`来完成。"
#: ../../getting_started/install/llm/llama/llama_cpp.md
#: a1bf4c1f49bd4d97ac45d4f3aff442c6
msgid "Environment Variable Key"
msgstr "Environment Variable Key"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 92692a38219c432fadffb8b3825ce678
msgid "default"
msgstr "default"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 72b2d251aa2e4ca09c335b58e1a08de3
msgid "Prompt Template Name"
msgstr "Prompt Template Name"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 85a9f89eeb9a4b70b56913354e947329
msgid "llama_cpp_prompt_template"
msgstr "llama_cpp_prompt_template"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 17e9750fbb824dfdaaed5415f6406e35 602016763bb2470d8a8ef700e576407b
#: 790caafd5c4c4cecbb4c190745fb994c ceb6c41315ab4c5798ab3c64ee8693eb
#: cfafab69a2684e27bd55aadfdd4c1575
msgid "None"
msgstr "None"
#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 5d02f2d1d5834b1e9e5d6982247fd6c9
msgid ""
"Prompt template name, now support: `zero_shot, vicuna_v1.1, llama-2"
",baichuan-chat`, If None, the prompt template is automatically determined"
" from model path。"
msgstr ""
"Prompt template 现在可以支持`zero_shot, vicuna_v1.1, llama-2,baichuan-chat`, "
"如果是None则根据模型路径自动确定prompt template"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 2a95bc11386f45498b3585b194f24c17
msgid "llama_cpp_model_path"
msgstr "llama_cpp_model_path"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: c02db8a50e7a4df0acb6b75798a3ad4b
msgid "Model path"
msgstr "Model path"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 6c92b2ec52634728bcc421670cdda70b
msgid "llama_cpp_n_gpu_layers"
msgstr "llama_cpp_n_gpu_layers"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 9f1e1b763a0b40d28efd734fe20e1ba7
msgid "1000000000"
msgstr "1000000000"
#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 0f511b7907594c1f9c9818638764f209
msgid ""
"Number of layers to offload to the GPU, Set this to 1000000000 to offload"
" all layers to the GPU. If your GPU VRAM is not enough, you can set a low"
" number, eg: `10`"
msgstr "要将层数转移到GPU上将其设置为1000000000以将所有层转移到GPU上。如果您的GPU VRAM不足可以设置较低的数字例如10。"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 1ffdfa4eb78d4127b302b6d703852692
msgid "llama_cpp_n_threads"
msgstr "llama_cpp_n_threads"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: f14379e7ea16476da403d5085b67db1c
msgid ""
"Number of threads to use. If None, the number of threads is automatically"
" determined"
msgstr "要使用的线程数量。如果为None则线程数量将自动确定。"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 41cc1035f6e340e19848452d48a161db
msgid "llama_cpp_n_batch"
msgstr "llama_cpp_n_batch"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 993c3b9218ee4299beae53bd75a01001
msgid "512"
msgstr "512"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 0e11d38c9b58478cacdade34de146320
msgid "Maximum number of prompt tokens to batch together when calling llama_eval"
msgstr "在调用llama_eval时批处理在一起的prompt tokens的最大数量"
#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 24f5381956d34569aabee4a5d832388b
msgid "llama_cpp_n_gqa"
msgstr "llama_cpp_n_gqa"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 07d05844541c452caaa8d5bf56c3f8a1
msgid "Grouped-query attention. Must be 8 for llama-2 70b."
msgstr "对于llama-2 70b模型Grouped-query attention必须为8。"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 40a1b9750d854bb19dc18b7d530beccf
msgid "llama_cpp_rms_norm_eps"
msgstr "llama_cpp_rms_norm_eps"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 6018ee183b9548eabf91e9fc683e7c24
msgid "5e-06"
msgstr "5e-06"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: eb273c6bcf2c4c47808024008ce230dc
msgid "5e-6 is a good value for llama-2 models."
msgstr "对于llama-2模型来说5e-6是一个不错的值。"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: f70f3e935b764b6f9544d201ba2aaa05
msgid "llama_cpp_cache_capacity"
msgstr "llama_cpp_cache_capacity"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 70035ec5be244eda9fe93be3df2c66df
msgid "Maximum cache capacity. Examples: 2000MiB, 2GiB"
msgstr "cache capacity最大值. Examples: 2000MiB, 2GiB"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 164c31b005ae4979938d9bc67e7f2759
msgid "llama_cpp_prefer_cpu"
msgstr "llama_cpp_prefer_cpu"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 28f890f6bee3412e94aeb1326367326e
msgid "False"
msgstr "False"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: f8f27b6323384431ba064a720f39f997
msgid ""
"If a GPU is available, it will be preferred by default, unless "
"prefer_cpu=False is configured."
msgstr "如果有可用的GPU默认情况下会优先使用GPU除非配置了prefer_cpu=False。"
#: ../../getting_started/install/llm/llama/llama_cpp.md:61
#: 0471e56c790047bab422aa47edad0a15
msgid "GPU Acceleration"
msgstr "GPU 加速"

#: ../../getting_started/install/llm/llama/llama_cpp.md:63
#: e95ad40d29004455bebeec8a1a7248c8
msgid ""
"GPU acceleration is supported by default. If you encounter any issues, "
"you can uninstall the dependent packages with the following command:"
msgstr "默认情况下支持GPU加速。如果遇到任何问题您可以使用以下命令卸载相关的依赖包"

#: ../../getting_started/install/llm/llama/llama_cpp.md:68
#: c0caf1420e43437589693ddec96bd50f
msgid ""
"Then install `llama-cpp-python` according to the instructions in [llama-"
"cpp-python](https://github.com/abetlen/llama-cpp-"
"python/blob/main/README.md)."
msgstr ""
"然后根据[llama-cpp-python](https://github.com/abetlen/llama-cpp-"
"python/blob/main/README.md)中的说明安装`llama-cpp-python`"

#: ../../getting_started/install/llm/llama/llama_cpp.md:71
#: fe082f65b4e9416c97b18e5005bc0a59
msgid "Mac Usage"
msgstr "Mac Usage"

#: ../../getting_started/install/llm/llama/llama_cpp.md:73
#: 6f30d3fa399f434189fcb03d28a42d2d
msgid ""
"Special attention, if you are using Apple Silicon (M1) Mac, it is highly "
"recommended to install arm64 architecture python support, for example:"
msgstr "特别注意如果您正在使用苹果芯片M1的Mac电脑强烈建议安装arm64架构的Python支持例如"
#: ../../getting_started/install/llm/llama/llama_cpp.md:80
#: 74602bede3c5472fbabc7de47eb2ff7a
msgid "Windows Usage"
msgstr "Windows使用"

#: ../../getting_started/install/llm/llama/llama_cpp.md:82
#: ae78332a348b44cb847723a998b98048
msgid ""
"The use under the Windows platform has not been rigorously tested and "
"verified, and you are welcome to use it. If you have any problems, you "
"can create an [issue](https://github.com/eosphoros-ai/DB-GPT/issues) or "
"[contact us](https://github.com/eosphoros-ai/DB-GPT/tree/main#contact-"
"information) directly."
msgstr ""
"在Windows平台上的使用尚未经过严格的测试和验证欢迎您使用。如果您有任何问题可以直接创建一个[issue](https://github.com"
"/eosphoros-ai/DB-GPT/issues)或者[联系我们](https://github.com/eosphoros-"
"ai/DB-GPT/tree/main#contact-information)。"

#~ msgid ""
#~ "DB-GPT is now supported by "
#~ "[llama-cpp-python](https://github.com/abetlen/llama-"
#~ "cpp-python) through "
#~ "[llama.cpp](https://github.com/ggerganov/llama.cpp)."
#~ msgstr ""
#~ "DB-GPT is now supported by "
#~ "[llama-cpp-python](https://github.com/abetlen/llama-"
#~ "cpp-python) through "
#~ "[llama.cpp](https://github.com/ggerganov/llama.cpp)."