doc:chat excel

Commit 77a1079e78 by aries_ckt, 2023-08-29 21:18:23 +08:00
22 changed files with 569 additions and 311 deletions

Nine binary image files added (screenshots, not shown): 637 KiB, 425 KiB, 783 KiB, 53 KiB, 56 KiB, 366 KiB, 110 KiB, 124 KiB, 138 KiB.

View File

@ -18,4 +18,5 @@ DB-GPT product is a Web application that you can chat database, chat knowledge,
./application/chatdb/chatdb.md
./application/kbqa/kbqa.md
./application/dashboard/dashboard.md
./application/dashboard/dashboard.md
./application/chatexcel/chatexcel.md

View File

@ -0,0 +1,27 @@
ChatExcel
==================================
![db plugins demonstration](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/4113ac15-83c2-4350-86c0-5fc795677abd)
ChatExcel uses natural language to analyze and query Excel data.

![db plugins demonstration](../../../../assets/chat_excel/chat_excel_1.png)
### 1.Select And Upload Excel or CSV File
Select your Excel or CSV file, upload it, and start the conversation.
```{tip}
The ChatExcel feature supports Excel and CSV files; select a file in one of these formats to use it.
```
![db plugins demonstration](../../../../assets/chat_excel/chat_excel_2.png)
![db plugins demonstration](../../../../assets/chat_excel/chat_excel_3.png)
### 2.Wait for Data Processing
After the file is uploaded, DB-GPT first learns and processes the data structure and the meaning of each field.
![db plugins demonstration](../../../../assets/chat_excel/chat_excel_4.png)
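Under the hood, this commit's `ExcelReader` loads the uploaded file so it can be queried with SQL through DuckDB. The sketch below only illustrates that idea using pandas and DuckDB directly; the file name is made up and this is not DB-GPT's internal API:
```python
import duckdb
import pandas as pd

# Hypothetical uploaded file; any small Excel or CSV file works for the demo.
df = pd.read_excel("example.xlsx")

# Register the DataFrame as a DuckDB table so it can be queried with SQL.
con = duckdb.connect()
con.register("example", df)

# "Learning the data structure" amounts to inspecting column names and types.
print(con.execute("DESCRIBE example").fetchdf())
```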
### 3.Use Data Analysis Calculation
Now you can use natural language to analyze and query data in the dialog box.
![db plugins demonstration](../../../../assets/chat_excel/chat_excel_5.png)
![db plugins demonstration](../../../../assets/chat_excel/chat_excel_6.png)
![db plugins demonstration](../../../../assets/chat_excel/chat_excel_7.png)
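For intuition, a question such as "What are the total sales per year?" is turned into a SQL query over the uploaded table; the test script in this commit runs the same kind of query by hand. Below is a minimal sketch of that flow, skipping the model and writing the SQL directly (the file and column names are assumptions):
```python
import duckdb
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen, as the server does
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_excel("example.xlsx")  # hypothetical sample file with Year and Sales columns
con = duckdb.connect()
con.register("example", df)

# The SQL an LLM might produce for "total sales per year".
result = con.execute(
    "SELECT Year, SUM(Sales) AS Total_Sales FROM example GROUP BY Year ORDER BY Year"
).fetchdf()

# Render the answer as a simple bar chart, similar to what the UI displays.
sns.barplot(data=result, x="Year", y="Total_Sales")
plt.savefig("total_sales_by_year.png", bbox_inches="tight")
```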

View File

@ -29,9 +29,8 @@ data in pilot/mock_datas/, you should follow the steps.
#### 4.Input your analysis goals
![document](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/3f427350-5bd5-4675-8f89-1bd5c63ff2c6)
![document](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/3d14a2da-165e-4b2f-a921-325c20fe5ae9)
![db plugins demonstration](../../../../assets/chat_dashboard/chat_dashboard_1.png)
#### 5.Adjust and modify your report
![upload](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/cb802b94-041b-44b4-b6ec-891bf8da52a4)
![db plugins demonstration](../../../../assets/chat_dashboard/chat_dashboard_2.png)

View File

@ -1,5 +1,7 @@
KBQA
==================================
![chat_knowledge](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/bc343c94-df3e-41e5-90d5-23b68c768c59)
DB-GPT supports a knowledge question-answering module, which aims to create an intelligent expert in the field of databases and provide professional knowledge-based answers to database practitioners.
![chat_knowledge](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/6e55f2e5-94f7-4906-aed6-097db5c6c721)

View File

@ -0,0 +1,102 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.6\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-29 21:14+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/application/chatexcel/chatexcel.md:1
#: ../../getting_started/application/chatexcel/chatexcel.md:9
#: 6efcbf4652954b27beb55f600cfe75c7 eefb0c3bc131439fb2dd4045761f1ae9
msgid "ChatExcel"
msgstr ""
#: ../../getting_started/application/chatexcel/chatexcel.md:3
#: 5fc4ddd2690f46658df1e09c601d81ad
#, fuzzy
msgid ""
"![db plugins demonstration](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/4113ac15-83c2-4350-86c0-5fc795677abd) ChatExcel uses "
"natural language to analyze and query Excel data.![db plugins "
"demonstration](../../../../assets/chat_excel/chat_excel_1.png)"
msgstr "使用自然语言进行Excel数据的分析处理"
#: ../../getting_started/application/chatexcel/chatexcel.md:3
#: ../../getting_started/application/chatexcel/chatexcel.md:13
#: ../../getting_started/application/chatexcel/chatexcel.md:17
#: ../../getting_started/application/chatexcel/chatexcel.md:21
#: 4c91baf5f0b244abb021f461851674cc 4eead9a4f81e4855a5c362774696999c
#: 5f309a06170946108ae70806dff157ea 790016c9c68f4a29a84b7ef8e14d6dc2
#: 93db1eb6af69452982f6028eff626a57 e758c8b320894e2b93f8db78459b7a1f
#: ea3c99f7eafc4ae0a19706a47e4c7bf6 f18d2b88de244173ab2673f2a5e828c0
msgid "db plugins demonstration"
msgstr ""
#: ../../getting_started/application/chatexcel/chatexcel.md:6
#: 45f137031025409ba2ada9c8f7c5f1e4
msgid "1.Select And Upload Excel or CSV File"
msgstr ""
#: ../../getting_started/application/chatexcel/chatexcel.md:7
#: cd282be2b4ef49ea8b0eaa3d53042f22
msgid "Select your excel or csv file to upload and start the conversation."
msgstr "选择你的Excel或者CSV文件上传开始对话"
#: ../../getting_started/application/chatexcel/chatexcel.md:11
#: a5ebc8643eff4b44a951b28d85488143
msgid ""
"The ChatExcel function supports Excel and CSV format files, select the "
"corresponding file to use."
msgstr "ChatExcel功能支持Excel和CSV格式的文件选择对应格式的文件开始使用"
#: ../../getting_started/application/chatexcel/chatexcel.md:13
#: d52927be09654c8daf29e2ef0c60a671
msgid ""
"![db plugins "
"demonstration](../../../../assets/chat_excel/chat_excel_2.png) ![db "
"plugins demonstration](../../../../assets/chat_excel/chat_excel_3.png)"
msgstr ""
#: ../../getting_started/application/chatexcel/chatexcel.md:16
#: d86202165fdc4da6be06024b45f9af55
msgid "2.Wait for Data Processing"
msgstr "等待数据处理"
#: ../../getting_started/application/chatexcel/chatexcel.md:17
#: 3de7205fbdc741e2b79996d67264c058
msgid ""
"After the data is uploaded, it will first learn and process the data "
"structure and field meaning. ![db plugins "
"demonstration](../../../../assets/chat_excel/chat_excel_4.png)"
msgstr "等待数据上传完成,会自动进行数据结构的学习和处理"
#: ../../getting_started/application/chatexcel/chatexcel.md:20
#: fb0620dec5a24b469ceccf86e918fe54
msgid "3.Use Data Analysis Calculation"
msgstr "开始使用数据分析计算"
#: ../../getting_started/application/chatexcel/chatexcel.md:21
#: 221733f01fe04e38b19f191d4001c7a7
msgid ""
"Now you can use natural language to analyze and query data in the dialog "
"box. ![db plugins "
"demonstration](../../../../assets/chat_excel/chat_excel_5.png) ![db "
"plugins demonstration](../../../../assets/chat_excel/chat_excel_6.png) "
"![db plugins "
"demonstration](../../../../assets/chat_excel/chat_excel_7.png)"
msgstr "现在可以开始进行自然语言进行数据的分析查询对话了"

View File

@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-22 13:28+0800\n"
"POT-Creation-Date: 2023-08-29 20:50+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@ -20,12 +20,12 @@ msgstr ""
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/application/dashboard/dashboard.md:1
#: 8017757596ff4c7faa06f7e7d18902ca
#: 2a1224e675d144269e5cc3695d4d60b4
msgid "Dashboard"
msgstr "Dashboard"
#: ../../getting_started/application/dashboard/dashboard.md:3
#: 5b84a61923404d8c81d5a1430b3fa12c
#: 2b6d2f94f73d43e68806bf4c6d0d9269
msgid ""
"The purpose of the DB-GPT Dashboard is to empower data analysts with "
"efficiency. DB-GPT provides intelligent reporting technology, allowing "
@ -34,37 +34,37 @@ msgid ""
msgstr "DB-GPT Dashboard目的是赋能数据分析人员。DB-GPT通过提供智能报表技术使得业务分析人员可以直接使用简单的自然语言进行自助分析。"
#: ../../getting_started/application/dashboard/dashboard.md:8
#: 48604cca2b3f482692bb65a01f0297a7
#: 9612fa76c4264bab8e629ac50959faa9
msgid "Dashboard now support Datasource Type"
msgstr "Dashboard目前支持的数据源类型"
#: ../../getting_started/application/dashboard/dashboard.md:9
#: e4371bc220be46f0833dc7d0c804f263
#: bb0b15742ebe41628fb0d1fc38caabe2
msgid "Mysql"
msgstr "Mysql"
#: ../../getting_started/application/dashboard/dashboard.md:10
#: 719c578796fa44a3ad062289aa4650d7
#: 35491581125b4bdd8422f35b11c7bc2c
msgid "Sqlite"
msgstr "Sqlite"
#: ../../getting_started/application/dashboard/dashboard.md:11
#: c7817904bbf34dfca56a19a004937146
#: 8c4389354e0344aa9a781bdfc94c2cac
msgid "DuckDB"
msgstr "DuckDB"
#: ../../getting_started/application/dashboard/dashboard.md:13
#: 1cebeafe853d43809e6ced45d2b68812
#: 18e8c60f5c2f4aa698cec1e8e8b354c8
msgid "Steps to Dashboard In DB-GPT"
msgstr "Dashboard使用步骤"
#: ../../getting_started/application/dashboard/dashboard.md:15
#: 977520bbea44423ea290617712482148
#: 94f98e0f5c2e451ba29b9b77c4139ed9
msgid "1 add datasource"
msgstr "1.添加数据源"
#: ../../getting_started/application/dashboard/dashboard.md:17
#: a8fcef153c68498fa9886051e8d7b072
#: 34e1211e65b940c3beb6234bcfa423a1
#, fuzzy
msgid ""
"If you are using Dashboard for the first time, you need to mock some data"
@ -75,17 +75,17 @@ msgid ""
msgstr "如果你是第一次使用Dashboard需要构造测试数据DB-GPT在pilot/mock_datas/提供了测试数据,只需要将数据源进行添加即可"
#: ../../getting_started/application/dashboard/dashboard.md:17
#: 1abcaa9d7fad4b53a0622ab3e982e6d5
#: f29905929b32442ba05833b6c52a11be
msgid "add_datasource"
msgstr "添加数据源"
#: ../../getting_started/application/dashboard/dashboard.md:21
#: 21ebb5bf568741a9b3d7a4275dde69fa
#: 367a487dd1d54681a6e83d8fdda5b793
msgid "2.Choose Dashboard Mode"
msgstr "2.进入Dashboard"
#: ../../getting_started/application/dashboard/dashboard.md:23
#: 1b55d97634b44543acf8f367f77d8436
#: 1ee1e980934e4a618591b7c43921c304
msgid ""
"![create_space](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/5e888880-0e97-4b60-8e5c-b7e7224197f0)"
@ -94,17 +94,17 @@ msgstr ""
"GPT/assets/13723926/5e888880-0e97-4b60-8e5c-b7e7224197f0)"
#: ../../getting_started/application/dashboard/dashboard.md:23
#: 6c97d2aa26fa401cb3c4172bfe4aea6a
#: 12c756afdad740a9afc9cb46cc834af8
msgid "create_space"
msgstr "create_space"
#: ../../getting_started/application/dashboard/dashboard.md:25
#: ff8e96f78698428a9a578b4f90e0feb4
#: 5a575b17408c42fbacd32d8ff792d5a8
msgid "3.Select Datasource"
msgstr "3.选择数据源"
#: ../../getting_started/application/dashboard/dashboard.md:27
#: 277c924a6f2b49f98414cde95310384f
#: ae051f852a5a4044a147c853cc3fba60
msgid ""
"![document](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/da2ac8b5-eca4-48ef-938f-f9dc1ca711b3)"
@ -114,45 +114,53 @@ msgstr ""
#: ../../getting_started/application/dashboard/dashboard.md:27
#: ../../getting_started/application/dashboard/dashboard.md:31
#: 33164f10fb38452fbf98be5aabaeeb91 3a46cb4427cf4ba386230dff47cf7647
#: d0093988bb414c41a93e8ad6f88e8404
#: 94907bb0dc694bc3a4d2ee57a84b8242 ecc0666385904fce8bb1000735482f65
msgid "document"
msgstr "document"
#: ../../getting_started/application/dashboard/dashboard.md:29
#: 6a57e48482724d23adf51e888d126562
#: c8697e93661c48b19674e63094ba7486
msgid "4.Input your analysis goals"
msgstr "4.输入分析目标"
#: ../../getting_started/application/dashboard/dashboard.md:31
#: cb96df3f9135450fbf71177978c50141
#: 473fc0d00ab54ee6bc5c21e017591cc4
#, fuzzy
msgid ""
"![document](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/3f427350-5bd5-4675-8f89-1bd5c63ff2c6) "
"![document](https://github.com/eosphoros-ai/DB-GPT/assets/13723926"
"/3d14a2da-165e-4b2f-a921-325c20fe5ae9)"
"GPT/assets/13723926/3f427350-5bd5-4675-8f89-1bd5c63ff2c6) ![db plugins "
"demonstration](../../../../assets/chat_dashboard/chat_dashboard_1.png)"
msgstr ""
"![document](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/3f427350-5bd5-4675-8f89-1bd5c63ff2c6) "
"![document](https://github.com/eosphoros-ai/DB-GPT/assets/13723926"
"/3d14a2da-165e-4b2f-a921-325c20fe5ae9)"
#: ../../getting_started/application/dashboard/dashboard.md:31
#: ../../getting_started/application/dashboard/dashboard.md:35
#: 00597e1268544d97a3de368b04d5dcf8 350d04e4b7204823b7a03c0a7606c951
msgid "db plugins demonstration"
msgstr ""
#: ../../getting_started/application/dashboard/dashboard.md:34
#: ed0f008525334a36a900b82339591095
#: b48cc911c1614def9e4738d35e8b754c
msgid "5.Adjust and modify your report"
msgstr "5.调整"
#: ../../getting_started/application/dashboard/dashboard.md:36
#: 8fc26117a2e1484b9452cfaf8c7f208b
#: ../../getting_started/application/dashboard/dashboard.md:35
#: b0442bbc0f6c4c33914814ac92fc4b13
msgid ""
"![upload](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/cb802b94"
"-041b-44b4-b6ec-891bf8da52a4)"
"![db plugins "
"demonstration](../../../../assets/chat_dashboard/chat_dashboard_2.png)"
msgstr ""
"![upload](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/cb802b94"
"-041b-44b4-b6ec-891bf8da52a4)"
#: ../../getting_started/application/dashboard/dashboard.md:36
#: 6d12166c3c574651a854534cc8c7e997
msgid "upload"
msgstr "upload"
#~ msgid ""
#~ "![upload](https://github.com/eosphoros-ai/DB-"
#~ "GPT/assets/13723926/cb802b94-041b-44b4-b6ec-891bf8da52a4)"
#~ msgstr ""
#~ "![upload](https://github.com/eosphoros-ai/DB-"
#~ "GPT/assets/13723926/cb802b94-041b-44b4-b6ec-891bf8da52a4)"
#~ msgid "upload"
#~ msgstr "upload"

View File

@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-29 20:30+0800\n"
"POT-Creation-Date: 2023-08-29 21:14+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@ -20,109 +20,117 @@ msgstr ""
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/application/kbqa/kbqa.md:1
#: 1a06a61cf9d042d780ec19d9385594e9
#: 4b44ec7d6b3b492489b84bab4471fe46
msgid "KBQA"
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:3
#: 96d298acd0de48b3ae06134bc2bbc214
#: c4217f8786e24f5190354d129b21dff5
msgid ""
"![chat_knowledge](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/bc343c94-df3e-41e5-90d5-23b68c768c59)"
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:3
#: ../../getting_started/application/kbqa/kbqa.md:7
#: 2f02ab2494b54fff87e7d1e310f38119 dc0349e48d5e4d89b1f9f353813c9f06
msgid "chat_knowledge"
msgstr "chat_knowledge"
#: ../../getting_started/application/kbqa/kbqa.md:5
#: 149c4c9e15004aaf992c5896deb9e541
msgid ""
"DB-GPT supports a knowledge question-answering module, which aims to "
"create an intelligent expert in the field of databases and provide "
"professional knowledge-based answers to database practitioners."
msgstr " DB-GPT支持知识问答模块知识问答的初衷是打造DB领域的智能专家为数据库从业人员解决专业的知识问题回答"
#: ../../getting_started/application/kbqa/kbqa.md:5
#: d1ea2c61f0c34ba294af01c4e7f5a57e
#: ../../getting_started/application/kbqa/kbqa.md:7
#: c7e103a20a2e4989aab8a750cdc4dbf4
msgid ""
"![chat_knowledge](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/6e55f2e5-94f7-4906-aed6-097db5c6c721)"
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:5
#: ef5384441b5a40dc92613dc6e2574406
msgid "chat_knowledge"
msgstr "chat_knowledge"
#: ../../getting_started/application/kbqa/kbqa.md:7
#: ../../getting_started/application/kbqa/kbqa.md:10
#: 4959ed3310614273bf1f092119bd556d d6592a16245b49fb89daee4864c1c448
#: ../../getting_started/application/kbqa/kbqa.md:9
#: ../../getting_started/application/kbqa/kbqa.md:12
#: 3ddf012fb6d74eb1be1a0fd0ada9ddf6 e29bffef26fb44ac978d6cbe6584f48f
msgid "KBQA abilities"
msgstr "KBQA现有能力"
#: ../../getting_started/application/kbqa/kbqa.md:11
#: b476c161c24f472395ec4c97bc769f4d
#: ../../getting_started/application/kbqa/kbqa.md:13
#: 87cf02966b574d6db8baf0839b13e1e7
msgid "Knowledge Space."
msgstr "知识空间"
#: ../../getting_started/application/kbqa/kbqa.md:12
#: 7f85e2cda7a2424db23ba57c7dab9595
#: ../../getting_started/application/kbqa/kbqa.md:14
#: 2777488125234408aa7156a69fcfacef
msgid "Multi Source Knowledge Source Embedding."
msgstr "多数据源Embedding"
#: ../../getting_started/application/kbqa/kbqa.md:13
#: 33f69a1bf9fb463c99cb5d5d20ec73d1
#: ../../getting_started/application/kbqa/kbqa.md:15
#: c50f2de4ea6f4aedb7d4fd674ee3f6f7
msgid "Embedding Argument Adjust"
msgstr "Embedding参数自定义"
#: ../../getting_started/application/kbqa/kbqa.md:14
#: dc51e39242f342ddbe58ffa174beb6ab
#: ../../getting_started/application/kbqa/kbqa.md:16
#: f8a7af83e94e45239bd0efdb06eb320b
msgid "Chat Knowledge"
msgstr "知识问答"
#: ../../getting_started/application/kbqa/kbqa.md:15
#: a6dad0b121c54b3ea20c70a5db70a620
#: ../../getting_started/application/kbqa/kbqa.md:17
#: e6025e23178f4f58a6d4c75a6bc1d036
msgid "Multi Vector DB"
msgstr "多向量数据库管理"
#: ../../getting_started/application/kbqa/kbqa.md:19
#: 674b836c5b724239bbddd82bad862277
#: ../../getting_started/application/kbqa/kbqa.md:21
#: 0780e1d27af244429186caa866772c06
msgid ""
"If your DB type is Sqlite, there is nothing to do to build KBQA service "
"database schema."
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:21
#: ebeabfa4e0b4449eae8c3415a34c6c65
#: ../../getting_started/application/kbqa/kbqa.md:23
#: 38584aba054f4520a0b1d9b00d6abf06
msgid ""
"If your DB type is Mysql or other DBTYPE, you will build kbqa service "
"database schema."
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:23
#: 71163cedcdf64a5aac842bae688f4e23
#: ../../getting_started/application/kbqa/kbqa.md:25
#: 9ef800860afe4728b2103286864ed3fb
msgid "Mysql"
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:24
#: da70cf9689db4434b95285c2b3610181
#: ../../getting_started/application/kbqa/kbqa.md:26
#: 3b30239fd457457eb707821794e786db
msgid ""
"$ mysql -h127.0.0.1 -uroot -paa12345678 < "
"./assets/schema/knowledge_management.sql"
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:26
#: 3b4ddf0ec10b4851abb9dffc23d1a0ee
#: ../../getting_started/application/kbqa/kbqa.md:28
#: 1cd533c2f0254f8d8d10bafe5811a279
msgid "or"
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:28
#: d100bbdc602c4ac79ffe90c867727983
#: ../../getting_started/application/kbqa/kbqa.md:30
#: 2b57e7d9a70f427e81122fe8d7d3c50b
msgid "execute DBGPT/assets/schema/knowledge_management.sql"
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:31
#: d55e49f983e940108df5b91a9fb655b8
#: ../../getting_started/application/kbqa/kbqa.md:33
#: 45a0c8c8ff0a4b48ac2d66b4713c4108
msgid "Steps to KBQA In DB-GPT"
msgstr "怎样一步一步使用KBQA"
#: ../../getting_started/application/kbqa/kbqa.md:33
#: 3c7b3d6af58b4dca803a1bb31601067e
#: ../../getting_started/application/kbqa/kbqa.md:35
#: 2ff844b9a29f4717909d091a57d58fe8
msgid "1.Create Knowledge Space"
msgstr "1.首先创建知识空间"
#: ../../getting_started/application/kbqa/kbqa.md:34
#: baa2fb6ab666407cbfc1c89021259930
#: ../../getting_started/application/kbqa/kbqa.md:36
#: 081e1d1ef5bb42ddbd7330dd3ac1d38e
#, fuzzy
msgid ""
"If you are using Knowledge Space for the first time, you need to create a"
@ -131,18 +139,18 @@ msgid ""
"GPT/assets/13723926/a93e597b-c392-465f-89d5-b55621d068a8)"
msgstr "如果你是第一次使用,先创建知识空间,指定名字,拥有者和描述信息"
#: ../../getting_started/application/kbqa/kbqa.md:34
#: 48890c22982840eabe3e39b80d26fb90
#: ../../getting_started/application/kbqa/kbqa.md:36
#: 4a393bf0f50647d4b0cfc64db80847eb
msgid "create_space"
msgstr "create_space"
#: ../../getting_started/application/kbqa/kbqa.md:39
#: 2d271dabf8d0492e921795d68a764e03
#: ../../getting_started/application/kbqa/kbqa.md:41
#: 96303597bc364952b7249e805486e73f
msgid "2.Create Knowledge Document"
msgstr "2.上传知识"
#: ../../getting_started/application/kbqa/kbqa.md:40
#: 5ed95d36b0c147bebde38f4a6f55e5a3
#: ../../getting_started/application/kbqa/kbqa.md:42
#: 765f93f2b668491cb6824ea4706cb449
msgid ""
"DB-GPT now support Multi Knowledge Source, including Text, WebUrl, and "
"Document(PDF, Markdown, Word, PPT, HTML and CSV). After successfully "
@ -156,215 +164,215 @@ msgstr ""
"and "
"CSV)。上传文档成功后后台会自动将文档内容进行读取,切片,然后导入到向量数据库中,当然你也可以手动进行同步,你也可以点击详情查看具体的文档切片内容"
#: ../../getting_started/application/kbqa/kbqa.md:42
#: 36d3223b83614b49a92b0017d5c8ce82
#: ../../getting_started/application/kbqa/kbqa.md:44
#: a0eddeecc620479483bf50857da39ffd
msgid "2.1 Choose Knowledge Type:"
msgstr "2.1 选择知识类型"
#: ../../getting_started/application/kbqa/kbqa.md:43
#: 5a2b383ffec14ad2a1598b093426e9e7
#: ../../getting_started/application/kbqa/kbqa.md:45
#: 44de66f399324454b389d5c348af94e9
msgid ""
"![document](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/5b8173da-f444-4607-9d12-14bcab8179d0)"
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:43
#: b1c5c73905934de4ac8cea00e482840a
#: ../../getting_started/application/kbqa/kbqa.md:45
#: 1ae65290b99d490bb72b060084ecc726
msgid "document"
msgstr "document"
#: ../../getting_started/application/kbqa/kbqa.md:45
#: c214c2780b0545a3a3edbd97f69829ed
#: ../../getting_started/application/kbqa/kbqa.md:47
#: d8720434626444b593bb3b06b50dc70f
msgid "2.2 Upload Document:"
msgstr "2.2上传文档"
#: ../../getting_started/application/kbqa/kbqa.md:46
#: 732b8e32de8247249c9c908c258ba133
#: ../../getting_started/application/kbqa/kbqa.md:48
#: 2d4bcbeb391a47b89454b06cb041dff2
msgid ""
"![upload](https://github.com/eosphoros-ai/DB-GPT/assets/13723926"
"/91b338fc-d3b2-476e-9396-3f6b4f16a890)"
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:46
#: ../../getting_started/application/kbqa/kbqa.md:50
#: ../../getting_started/application/kbqa/kbqa.md:55
#: ../../getting_started/application/kbqa/kbqa.md:68
#: 4498b802e7724ad4b2b8edc044abd7fe 59633ce8e91c4e8b9f31cddee14a3f91
#: f5c26d53dc584543912b24b8b8bc31ea f905664e67224a778d7c3ef0de7c42a8
#: ../../getting_started/application/kbqa/kbqa.md:48
#: ../../getting_started/application/kbqa/kbqa.md:52
#: ../../getting_started/application/kbqa/kbqa.md:57
#: ../../getting_started/application/kbqa/kbqa.md:70
#: 58b32cb59a6242679f4a1e5fc7ca819f 81041a1f25b64b19a7f662ed55029224
#: 8891e61f67014355b74c17d013c09cca f0e9c548494f4045a3dc92e993f4cfe7
msgid "upload"
msgstr "upload"
#: ../../getting_started/application/kbqa/kbqa.md:49
#: aa694e12c6884d0fbb5679af0535b05f
#: ../../getting_started/application/kbqa/kbqa.md:51
#: 645bf91a4f6a428a9e99ca29599c0722
msgid "3.Chat With Knowledge"
msgstr "3.知识问答"
#: ../../getting_started/application/kbqa/kbqa.md:50
#: 0c0cdbabb3934655a65c1c4304837efb
#: ../../getting_started/application/kbqa/kbqa.md:52
#: 85077af67bd740c1b3b02996dc287a80
msgid ""
"![upload](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/a8281be7-1454-467d-81c9-15ef108aac10)"
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:52
#: ce53a33431714d31b56eb0450a1363f4
#: ../../getting_started/application/kbqa/kbqa.md:54
#: 42d5776030ad4109810a0cb18e19de37
msgid "4.Adjust Space arguments"
msgstr "4.调整知识参数"
#: ../../getting_started/application/kbqa/kbqa.md:53
#: b37255e6cd6b433e8eeb01d73c9ccf19
#: ../../getting_started/application/kbqa/kbqa.md:55
#: a8ea23895a4b4312b1c1e072865f8b90
msgid ""
"Each knowledge space supports argument customization, including the "
"relevant arguments for vector retrieval and the arguments for knowledge "
"question-answering prompts."
msgstr "每一个知识空间都支持参数自定义, 包括向量召回的相关参数以及知识问答Promp参数"
#: ../../getting_started/application/kbqa/kbqa.md:54
#: 3af4c2d8d2bf410698c73d339d0d1779
#: ../../getting_started/application/kbqa/kbqa.md:56
#: 2f5c087e4f7a49828aa797fafff237f0
msgid "4.1 Embedding"
msgstr "4.1 Embedding"
#: ../../getting_started/application/kbqa/kbqa.md:55
#: fdd136a1fe654751a22def62bdca853f
#: ../../getting_started/application/kbqa/kbqa.md:57
#: 47087bf391b642f3ba73e461d2d132a0
msgid ""
"Embedding Argument ![upload](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/f1221bd5-d049-4ceb-96e6-8709e76e502e)"
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:59
#: 8566534562574811bc08375b741dfcc9
#: ../../getting_started/application/kbqa/kbqa.md:61
#: 1ed384ba0501423184a4c977d86b8b3a
msgid "Embedding arguments"
msgstr "Embedding arguments"
#: ../../getting_started/application/kbqa/kbqa.md:60
#: 87fdcd70759b43cb9d9a7929247c34a6
#: ../../getting_started/application/kbqa/kbqa.md:62
#: 848962be69a348ffab3dd48839fb100a
msgid "topk:the top k vectors based on similarity score."
msgstr "topk:相似性检索出tok条文档"
#: ../../getting_started/application/kbqa/kbqa.md:61
#: c9fc1f2052c2437e9c4654f8053f02b5
#: ../../getting_started/application/kbqa/kbqa.md:63
#: 0880963aeb9c426187809a7086c224a8
msgid "recall_score:set a threshold score for the retrieval of similar vectors."
msgstr "recall_score:向量检索相关度衡量指标分数"
#: ../../getting_started/application/kbqa/kbqa.md:62
#: 11e837b098ee41ef9d5791b5d27f4891
#: ../../getting_started/application/kbqa/kbqa.md:64
#: a6ead8ae58164749b83e5c972537fe8b
msgid "recall_type:recall type."
msgstr "recall_type:召回类型"
#: ../../getting_started/application/kbqa/kbqa.md:63
#: 01d10823d12c454cad589b37c56b7b2b
#: ../../getting_started/application/kbqa/kbqa.md:65
#: 70d900b0948849d080effcdfc79bb685
msgid "model:A model used to create vector representations of text or other data."
msgstr "model:embdding模型"
#: ../../getting_started/application/kbqa/kbqa.md:64
#: 5fab346c27f24da783b4f30c55f8e291
#: ../../getting_started/application/kbqa/kbqa.md:66
#: 489a58835a5d4c98ad5f6f904f7af370
msgid "chunk_size:The size of the data chunks used in processing."
msgstr "chunk_size:文档切片阈值大小"
#: ../../getting_started/application/kbqa/kbqa.md:65
#: 3a031f1b6d014167aed24584d7fba187
#: ../../getting_started/application/kbqa/kbqa.md:67
#: b42781f2340c478f86137147fd4a6c91
msgid "chunk_overlap:The amount of overlap between adjacent data chunks."
msgstr "chunk_overlap:文本块之间的最大重叠量。保留一些重叠可以保持文本块之间的连续性(例如使用滑动窗口)"
#: ../../getting_started/application/kbqa/kbqa.md:67
#: b14bb51708d644c5a2e1fbbcc53f62c3
#: ../../getting_started/application/kbqa/kbqa.md:69
#: 5f027d4a10394e7da3d4b50fc2663f82
msgid "4.2 Prompt"
msgstr "4.2 Prompt"
#: ../../getting_started/application/kbqa/kbqa.md:68
#: f9d61953773a470ab12a3eb5ef00ad9e
#: ../../getting_started/application/kbqa/kbqa.md:70
#: 9ad62d8626584e80a82fcef239c0f546
msgid ""
"Prompt Argument ![upload](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/9918c9c3-ed64-4804-9e05-fa7d7d177bec)"
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:72
#: fe9923b1f26647718443428ae520bb88
#: ../../getting_started/application/kbqa/kbqa.md:74
#: ec0717b8d210410f9894d2a4f51642e1
msgid "Prompt arguments"
msgstr "Prompt arguments"
#: ../../getting_started/application/kbqa/kbqa.md:73
#: 7de3b80bc5864407b34a66cd655c3af8
#: ../../getting_started/application/kbqa/kbqa.md:75
#: 7cf68eacd5ae4151abdefe44feb239e1
msgid ""
"scene:A contextual parameter used to define the setting or environment in"
" which the prompt is being used."
msgstr "scene:上下文环境的场景定义"
#: ../../getting_started/application/kbqa/kbqa.md:74
#: 4902e2e4a2c14cf9b29ca2084960e964
#: ../../getting_started/application/kbqa/kbqa.md:76
#: 2d9293acfb1a495c9b08261e957b2395
msgid ""
"template:A pre-defined structure or format for the prompt, which can help"
" ensure that the AI system generates responses that are consistent with "
"the desired style or tone."
msgstr "template:预定义的提示结构或格式可以帮助确保AI系统生成与所期望的风格或语气一致的回复。"
#: ../../getting_started/application/kbqa/kbqa.md:75
#: ad60c0ba862443ed80bcdf5895e6b2c1
#: ../../getting_started/application/kbqa/kbqa.md:77
#: 307dd62261214e4e84beb7f19b3e2f26
msgid "max_token:The maximum number of tokens or words allowed in a prompt."
msgstr "max_token: prompt token最大值"
#: ../../getting_started/application/kbqa/kbqa.md:77
#: 1a381bf794b4437c844c71abe0ca1f85
#: ../../getting_started/application/kbqa/kbqa.md:79
#: 07b797731fa74738acb3e1fb4c03deac
msgid "5.Change Vector Database"
msgstr "5.Change Vector Database"
#: ../../getting_started/application/kbqa/kbqa.md:79
#: 567b9c154d6a48b29536600b324ceb8b
#: ../../getting_started/application/kbqa/kbqa.md:81
#: 43fa40ced23842b48007b4264c1423c0
msgid "Vector Store SETTINGS"
msgstr "Vector Store SETTINGS"
#: ../../getting_started/application/kbqa/kbqa.md:80
#: a54cfb386b3244b998c520ca6499883c
#: ../../getting_started/application/kbqa/kbqa.md:82
#: 0ed4e0c6a81e4265b443a2c6d05d440b
msgid "Chroma"
msgstr "Chroma"
#: ../../getting_started/application/kbqa/kbqa.md:81
#: 6cea4f4289c649d49101b2f37f14eae4
#: ../../getting_started/application/kbqa/kbqa.md:83
#: 8ef2d15ffd0c4919bdbfdb52443021eb
msgid "VECTOR_STORE_TYPE=Chroma"
msgstr "VECTOR_STORE_TYPE=Chroma"
#: ../../getting_started/application/kbqa/kbqa.md:82
#: f3c494223e994ba0beb32abc8df19388
#: ../../getting_started/application/kbqa/kbqa.md:84
#: 08504aaae0014bd9992345f036989198
msgid "MILVUS"
msgstr "MILVUS"
#: ../../getting_started/application/kbqa/kbqa.md:83
#: e710aa4a2b184618bfc57f955f6e5b41
#: ../../getting_started/application/kbqa/kbqa.md:85
#: aa30fe77dc3f46bfb153b81e0cbdfb97
msgid "VECTOR_STORE_TYPE=Milvus"
msgstr "VECTOR_STORE_TYPE=Milvus"
#: ../../getting_started/application/kbqa/kbqa.md:84
#: ec6e4c1215f94776b47a5b035f3e82ed
#: ../../getting_started/application/kbqa/kbqa.md:86
#: e14c1646e5ca4677a471645c13dca835
msgid "MILVUS_URL=127.0.0.1"
msgstr "MILVUS_URL=127.0.0.1"
#: ../../getting_started/application/kbqa/kbqa.md:85
#: 14b5fd2ddf5d417a8d12f9dd0de77c4f
#: ../../getting_started/application/kbqa/kbqa.md:87
#: 0f0614f951934fd6abfb0fd45e0b79e7
msgid "MILVUS_PORT=19530"
msgstr "MILVUS_PORT=19530"
#: ../../getting_started/application/kbqa/kbqa.md:86
#: 558fbd783bf647b59af485cf501867ca
#: ../../getting_started/application/kbqa/kbqa.md:88
#: 4dafbbd894e4469e80414b54bde69193
msgid "MILVUS_USERNAME"
msgstr "MILVUS_USERNAME"
#: ../../getting_started/application/kbqa/kbqa.md:87
#: d7778707f4c840979a5a91430b947af8
#: ../../getting_started/application/kbqa/kbqa.md:89
#: 1759ec4d744e42a5a928c6abcc2dd2ac
msgid "MILVUS_PASSWORD"
msgstr "MILVUS_PASSWORD"
#: ../../getting_started/application/kbqa/kbqa.md:88
#: a6bcecc6451749048616031cbc1f07b6
#: ../../getting_started/application/kbqa/kbqa.md:90
#: d44bcea1db4f4388bc2ae7968e541761
msgid "MILVUS_SECURE="
msgstr "MILVUS_SECURE="
#: ../../getting_started/application/kbqa/kbqa.md:90
#: acfde2a941b141d28189c4f63f803234
#: ../../getting_started/application/kbqa/kbqa.md:92
#: f33218e0880f4d888438d3333f4a0895
msgid "WEAVIATE"
msgstr "WEAVIATE"
#: ../../getting_started/application/kbqa/kbqa.md:91
#: 59d8f6f31a5549799e52851eab72bfc2
#: ../../getting_started/application/kbqa/kbqa.md:93
#: 42a2102b236447ff83820d3a1602c3f2
msgid "WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.network"
msgstr "WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.networkc"

View File

@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-29 20:30+0800\n"
"POT-Creation-Date: 2023-08-29 20:50+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@ -20,12 +20,12 @@ msgstr ""
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/faq/deploy/deploy_faq.md:1
#: 7f17a8fdda104de18433b228ec3c1935
#: 0baefc753798469588ea011c12a0bfd3
msgid "Installation FAQ"
msgstr "Installation FAQ"
#: ../../getting_started/faq/deploy/deploy_faq.md:5
#: dc734d91ec464177834ab2e2df25f70a
#: 013bf01a02c842ee8bc576f85d127e22
#, fuzzy
msgid ""
"Q1: execute `pip install -e .` error, found some package cannot find "
@ -35,18 +35,18 @@ msgstr ""
"cannot find correct version."
#: ../../getting_started/faq/deploy/deploy_faq.md:6
#: dd04e05135ee4065b44e6196041fd15f
#: 2729928139484def827143c17f2d968c
msgid "change the pip source."
msgstr "替换pip源."
#: ../../getting_started/faq/deploy/deploy_faq.md:13
#: ../../getting_started/faq/deploy/deploy_faq.md:20
#: 159d77d65f014357ae4caa5aeba32080 7a5250f10d104e73a64012a89fb00e6f
#: 6e8bf02d7117454fbcc28c7ec27e055a acd2186c0320466f95b500dade75591b
msgid "or"
msgstr "或者"
#: ../../getting_started/faq/deploy/deploy_faq.md:27
#: 15af207c532541a2906a260266d8d2ca
#: c5aab9455827416084a1ea6792263add
msgid ""
"Q2: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to"
" open database file"
@ -55,68 +55,68 @@ msgstr ""
" open database file"
#: ../../getting_started/faq/deploy/deploy_faq.md:29
#: b45fd323f5c84a428393e2d0e749ab17
#: 29880cfc6c7f4f6fb14a9cbb9eed07ad
msgid "make sure you pull latest code or create directory with mkdir pilot/data"
msgstr "make sure you pull latest code or create directory with mkdir pilot/data"
#: ../../getting_started/faq/deploy/deploy_faq.md:31
#: 468efa45ffe147ebb3a7bf737823f9da
#: 36264030f5cd41bebd17beae12d9be51
msgid "Q3: The model keeps getting killed."
msgstr "Q3: The model keeps getting killed."
#: ../../getting_started/faq/deploy/deploy_faq.md:33
#: ca4bb900b69740bbb40c7cb78bd8ba00
#: 0cbf6ae0fee14d239cb1cc6ddba134d7
msgid ""
"your GPU VRAM size is not enough, try replace your hardware or replace "
"other llms."
msgstr "GPU显存不够, 增加显存或者换一个显存小的模型"
#: ../../getting_started/faq/deploy/deploy_faq.md:35
#: 13d52bc8a91a4e209d0b90f68f1be853
#: 6f4ce365d20843529195aa6970d6074e
msgid "Q4: How to access website on the public network"
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:37
#: 5f0c0e894fdb477eba928f9c946a9bd8
#: 9f4a119e64c74a0693fa067cd35cd833
msgid ""
"You can try to use gradio's [network](https://github.com/gradio-"
"app/gradio/blob/main/gradio/networking.py) to achieve."
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:48
#: c30ab098c2e547df81038814f870dec4
#: 4c09cfb493ba41fb8590954b986e949d
msgid "Open `url` with your browser to see the website."
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:50
#: d298adc56dce463d8deb3d3550bc382c
#: 7d905a99d1c547eb95d9c619c70bf221
msgid "Q5: (Windows) execute `pip install -e .` error"
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:52
#: a4b683c41fed443a8016b56c24235612
#: fe26218168c4447a8dc89e436cdd1000
msgid "The error log like the following:"
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:71
#: ccbb33427cba40fca68b075cb796c670
#: d15615f7798d4dc0ad49d9b28926fe32
msgid ""
"Download and install `Microsoft C++ Build Tools` from [visual-cpp-build-"
"tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)"
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:75
#: b9535d65d65f4a368f7b0ee6f14c7da8
#: 60ef06d3f99c44c1b568ec7c652905ee
msgid "Q6: `Torch not compiled with CUDA enabled`"
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:82
#: f9cf565e02b240b4a4044293122e9910
#: 830e63627d2c48b8987ed20db3405c41
msgid "Install [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit-archive)"
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:83
#: 6a15c0284228430381f56ac3bc825361
#: 50a1c244ddf747d797825158550026b9
msgid ""
"Reinstall PyTorch [start-locally](https://pytorch.org/get-started/locally"
"/#start-locally) with CUDA support."

View File

@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-29 20:30+0800\n"
"POT-Creation-Date: 2023-08-29 20:50+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@ -20,34 +20,34 @@ msgstr ""
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/deploy/deploy.md:1
#: 963de2c147e4491085e40a367ede1cb3
#: b4f766ca21d241e2849ee0a277a0e8f0
msgid "Installation From Source"
msgstr "源码安装"
#: ../../getting_started/install/deploy/deploy.md:3
#: 3597fef8c1c24663ba9ddf0240dd8a1e
#: 9cf72ef201ba4c7a99da8d7de9249cf4
msgid ""
"This tutorial gives you a quick walkthrough about use DB-GPT with you "
"environment and data."
msgstr "本教程为您提供了关于如何使用DB-GPT的使用指南。"
#: ../../getting_started/install/deploy/deploy.md:5
#: e9bd2165f24e41c2bebb4fed1672fd54
#: b488acb9552043df96e9f01277375b56
msgid "Installation"
msgstr "安装"
#: ../../getting_started/install/deploy/deploy.md:7
#: 7a52cf49d60a4e76a43d8e534dcac6b8
#: e1eb3aafea0c4b82b8d8163b947677dd
msgid "To get started, install DB-GPT with the following steps."
msgstr "请按照以下步骤安装DB-GPT"
#: ../../getting_started/install/deploy/deploy.md:9
#: 4dbc9d0146bb4574b31222a14c45eb46
#: 4139c4e62e874dc58136b1f8fe0715fe
msgid "1. Hardware Requirements"
msgstr "1. 硬件要求"
#: ../../getting_started/install/deploy/deploy.md:10
#: eee7223178704420a179781df476e855
#: c34a204cfa6e4973bfd94e683195c17b
msgid ""
"As our project has the ability to achieve ChatGPT performance of over "
"85%, there are certain hardware requirements. However, overall, the "
@ -56,176 +56,176 @@ msgid ""
msgstr "由于我们的项目有能力达到85%以上的ChatGPT性能所以对硬件有一定的要求。但总体来说我们在消费级的显卡上即可完成项目的部署使用具体部署的硬件说明如下:"
#: ../../getting_started/install/deploy/deploy.md
#: 21613161531642fb8d62d589b8d4feaa
#: 3a92203e861b42c9af3d4b687d83de5e
msgid "GPU"
msgstr "GPU"
#: ../../getting_started/install/deploy/deploy.md
#: cf5a8b7a75034011b7305d8bd09cf69c e8b55944bb9d4b91976027b3a2ae09d0
#: 6050741571574eb8b9e498a5b3a7e347 c0a7e2aecb4b48949c3e5a4d479ee7b5
msgid "VRAM Size"
msgstr "显存"
#: ../../getting_started/install/deploy/deploy.md
#: 36641aaf9b81420fae3c2a2d89816d8a
#: 247159f568e4476ca6c5e78015c7a8f0
msgid "Performance"
msgstr "Performance"
#: ../../getting_started/install/deploy/deploy.md
#: ebc46f5c3f944ca6944774ac264ac801
#: 871113cbc58743ef989a366b76e8c645
msgid "RTX 4090"
msgstr "RTX 4090"
#: ../../getting_started/install/deploy/deploy.md
#: c387fa1e47ec4ae4bb145fbd4c18cd99 efbe071097b4421dbf06080d4ab1ec70
#: 81327b7e9a984ec99cae779743d174df c237f392162c42d28ec694d17c3f281c
msgid "24 GB"
msgstr "24 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 48d5632d53e0413d937bd4c13716591a
#: 6e19f23bae05467ba03f1ebb194e0c03
msgid "Smooth conversation inference"
msgstr "Smooth conversation inference"
#: ../../getting_started/install/deploy/deploy.md
#: 5d7a590b885c4f128716da6347d91304
#: 714a48b2c4a943819819a6af034f1998
msgid "RTX 3090"
msgstr "RTX 3090"
#: ../../getting_started/install/deploy/deploy.md
#: 93a35c5359e9432b9fac149e67087b4b
#: 06dae55d443c48b1b3fbab85222c3adb
msgid "Smooth conversation inference, better than V100"
msgstr "Smooth conversation inference, better than V100"
#: ../../getting_started/install/deploy/deploy.md
#: ec6d393791ae49eea9b2ff79f803e76c
#: 5d50db167b244d65a8be1dab4acda37d
msgid "V100"
msgstr "V100"
#: ../../getting_started/install/deploy/deploy.md
#: 7da6411bc1c642e9a1063e988044e9f2 852dac8854514958b45f7f125c11f625
#: 0d72262c85d148d8b1680d1d9f8fa2c9 e10db632889444a78e123773a30f23cf
msgid "16 GB"
msgstr "16 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 1a3c78ad04114d5ebf25edba7ebdfcb7 b8ed26b4da564660bd70f0a3af1282a8
#: 1c0379e653cf46f19d83535c568c54c8 aee8eb48e7804572af351dcfaea5b0fb
msgid "Conversation inference possible, noticeable stutter"
msgstr "Conversation inference possible, noticeable stutter"
#: ../../getting_started/install/deploy/deploy.md
#: 36603d7af55742eebc2f023fdf798f43
#: 5bc90343dcef48c197438f01efe52bfc
msgid "T4"
msgstr "T4"
#: ../../getting_started/install/deploy/deploy.md:19
#: 0a4d0daf5d44421893db967825c0e158
#: c9b5f973d19645d39b1892c00526afa7
msgid ""
"if your VRAM Size is not enough, DB-GPT supported 8-bit quantization and "
"4-bit quantization."
msgstr "如果你的显存不够DB-GPT支持8-bit和4-bit量化版本"
#: ../../getting_started/install/deploy/deploy.md:21
#: 3523d76dc2a8414495ad46f72c304226
#: 5e488271eede411d882f62ec8524dd4a
msgid ""
"Here are some of the VRAM size usage of the models we tested in some "
"common scenarios."
msgstr "这里是量化版本的相关说明"
#: ../../getting_started/install/deploy/deploy.md
#: 8ff21b833bf64962901e2e7a52e1322a
#: 2cc65f16fa364088bedd0e58b6871ec8
msgid "Model"
msgstr "Model"
#: ../../getting_started/install/deploy/deploy.md
#: be429d17a47643c797d989148d62583a
#: d0e1a0d418f74e4b9f5922b17f0c8fcf
msgid "Quantize"
msgstr "Quantize"
#: ../../getting_started/install/deploy/deploy.md
#: 2ec73c552480461dada4f44763c4623e c51b0f85b34845798d926db31eaf720c
#: 460b418ab7eb402eae7a0f86d1fda4bf 5e456423a9fa4c0392b08d32f3082f6f
msgid "vicuna-7b-v1.5"
msgstr "vicuna-7b-v1.5"
#: ../../getting_started/install/deploy/deploy.md
#: 2550147d36074525b23021a14dc55b24 5200cf5ca316459184cd247285adeb9c
#: 9698f16cfa3d4ac589e7741e2371766c a3c90ef3728240a9927da25688ca500b
#: c197d7bfe2be4f29995d6e6882a73c36 c8742a66fae049f9b2c6d333dd3fc209
#: ed4e5c0cc67c4cd3acbb45def0dbac67
#: 0f290c12b9324a07affcfd66804b82d7 29c81ce163e749b99035942a3b18582a
#: 3a4f4325774d452f8c174cac5fe8de47 584f986a1afb4086a0382a9f7e79c55f
#: 994c744ac67249f4a43b3bba360c0bbf aa9c82f660454143b9212842ffe0e0d6
#: ac7b00313284410b9253c4a768a30f0c
msgid "4-bit"
msgstr "4-bit"
#: ../../getting_started/install/deploy/deploy.md
#: 1985bf309e324843aa849c38c9819b03 998c11e1c9b846d596c4423a22a438c8
#: ffe2badbf75b4f119e22481b47866cc5
#: 27401cbb0f2542e2aaa449a586aad2d1 2a1d2d10001f4d9f9b9961c28c592280
#: b69a59c6e4a7458c91be814a98502632
msgid "8 GB"
msgstr "8 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 23fed1f7361b46139294f979144f55b9 44b5316ad4be45acaca850ac350613dd
#: 5d2642a53e444757b176c8ff86504f45 5e9a0cf9bab54b6ea805dd4c9b209b51
#: 6a2ffb34d35a4d64bcb18d88b6ed3e83 b70e17fc9a184074ac4bc641312fc8e8
#: bb6bd13b7d09417ea26949cf385d037c
#: 0a15518df1b94492b610e47f3c7bb4f6 1f1852ceae0b4c21a020dc9ef4f8b20b
#: 89ad803f6bd24b5d9708a6d4bd48a54f ac7c222678d34637a03546dcb5949668
#: b12e1599bdcb4d27ad4e4a83f12de916 c80ba4ddc1634093842a6f284b7b22bb
#: f63b900e4b844b3196c4c221b36d31f7
msgid "8-bit"
msgstr "8-bit"
#: ../../getting_started/install/deploy/deploy.md
#: 0718757036b04690bd6fa98959ae41d3 2401f442291c4fd8a7fdf69d1745b604
#: 3c3af4923aa5498ca85d87610712f9ff 620ef43ffb5441a7ac0bfcd920729518
#: bf626548d1624794ad4e16f0acbdf59f e24c281f31b44b89bb1a9151b7e0ba9a
#: 02f72ed48b784b05b2fcaf4ea33fcba8 17285314376044bf9d9a82f9001f39dc
#: 403178173a784bdf8d02fe856849a434 4875c6b595484091b622602d9ef0d3e8
#: 4b11125d4b0c40c488bffb130f4f2b9f e2418c76e7e04101821f29650d111a4a
msgid "12 GB"
msgstr "12 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 685bc5ce8e4a411a8f59026674b5967d 8223632d5da5462b977c3bd9cb2731a2
#: 01dfd16f70cf4128a49ca7bc79f77042 a615efffecb24addba759d05ef61a1c0
msgid "vicuna-13b-v1.5"
msgstr "vicuna-13b-v1.5"
#: ../../getting_started/install/deploy/deploy.md
#: 09e37f82a825411191ab74a634706866 80138163ba034d819eeb165384392655
#: da076416fa8c4fb7b28939ca7c76d06b
#: 412ddfa6e6fb4567984f757cf74b3bfc 529650341d96466a93153d58ddef0ec9
#: 6176929d59bb4e31a37cbba8a81a489f
msgid "20 GB"
msgstr "20 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 4551174e7f8449bd93fb14de039ed400 9879cd654383482784992f410ba478aa
#: 566b7aa7bc88421a9364cef6bfbeae48 ae32a218d07e44c796ca511972ea2cb0
msgid "llama-2-7b"
msgstr "llama-2-7b"
#: ../../getting_started/install/deploy/deploy.md
#: 0a74ee29547445e4a1389386249528a4 af6fa6d9dec54179808820fda8b8210b
#: 1ac748eb518b4017accb98873fe1a8e5 528109c765e54b3caf284e7794abd468
msgid "llama-2-13b"
msgstr "llama-2-13b"
#: ../../getting_started/install/deploy/deploy.md
#: 1aadc9ec7645467596e991f770e5bce5 32aa71dcf6d9497ba48af4c2c192e3ef
#: dfb5c0fa9e82423ab1de9256b3b3f215 f861be75871d40849f896859d0b8be4c
msgid "llama-2-70b"
msgstr "llama-2-70b"
#: ../../getting_started/install/deploy/deploy.md
#: 1e63e7f32ec74f4e8290254ce348c50d
#: 5568529a82cd4c49812ab2fd46ff9bf0
msgid "48 GB"
msgstr "48 GB"
#: ../../getting_started/install/deploy/deploy.md
#: d5416570171e49a88a75161f4e5f0c77
#: 4ba730f4faa64df9a0a9f72cb3eb0c88
msgid "80 GB"
msgstr "80 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 598393baadb7468ca649db8de42fd413 f6aaa22133ac4721b6740ab75c43819d
#: 47221748d6d5417abc25e28b6905bc6f 6023d535095a4cb9a99343c2dfddc927
msgid "baichuan-7b"
msgstr "baichuan-7b"
#: ../../getting_started/install/deploy/deploy.md
#: 454e6b05fc2f4f318afc7d7e5f4477b9 ebd9204f511949d695b20caa8d2b5701
#: 55011d4e0bed451dbdda75cb8b258fa5 bc296e4bd582455ca64afc74efb4ebc8
msgid "baichuan-13b"
msgstr "baichuan-13b"
#: ../../getting_started/install/deploy/deploy.md:40
#: a74a45bb031c41a9a88fab4023791430
#: 4bfd52634a974776933c93227f419cdb
msgid "2. Install"
msgstr "2. Install"
#: ../../getting_started/install/deploy/deploy.md:45
#: 480a54d6f3b944b2a26414561ccd5cf4
#: 647f09001d4c4124bed11da272306946
msgid ""
"We use Sqlite as default database, so there is no need for database "
"installation. If you choose to connect to other databases, you can "
@ -240,12 +240,12 @@ msgstr ""
" Miniconda](https://docs.conda.io/en/latest/miniconda.html)"
#: ../../getting_started/install/deploy/deploy.md:54
#: 8d9d99b9b3f6444f8e2b289ff0b3d1cf
#: bf9fcf320ca94dbd855016088800b1a9
msgid "Before use DB-GPT Knowledge"
msgstr "在使用知识库之前"
#: ../../getting_started/install/deploy/deploy.md:60
#: 3ba68f5b12134958ad3495fc03e6261f
#: e0cb6cb46a474c4ca16edf73c82b58ca
msgid ""
"Once the environment is installed, we have to create a new folder "
"\"models\" in the DB-GPT project, and then we can put all the models "
@ -253,27 +253,27 @@ msgid ""
msgstr "如果你已经安装好了环境需要创建models, 然后到huggingface官网下载模型"
#: ../../getting_started/install/deploy/deploy.md:63
#: ae6612ce4b6845cfa9396fcad5549bd0
#: 03b1bf35528d4cdeb735047aa840d6fe
msgid "Notice make sure you have install git-lfs"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:65
#: fbfa0caf0615484bbe861b327a67c847
#: f8183907e7c044f695f86943b412d84a
msgid "centos:yum install git-lfs"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:67
#: ca10fedecab243fba8a0d11b14ccddde
#: 3bc042bd5cac4007afc9f68e7b5044fe
msgid "ubuntu:app-get install git-lfs"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:69
#: a3026601a2304ce3a38e162cb7f44d2f
#: 5915ed1290e84ed9b6782c6733d88891
msgid "macos:brew install git-lfs"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:86
#: ffda61cebb534429bfa1d3a375647077
#: 104f1e75b0a54300af440ca3b64217a3
msgid ""
"The model files are large and will take a long time to download. During "
"the download, let's configure the .env file, which needs to be copied and"
@ -281,7 +281,7 @@ msgid ""
msgstr "模型文件很大,需要很长时间才能下载。在下载过程中,让我们配置.env文件它需要从。env.template中复制和创建。"
#: ../../getting_started/install/deploy/deploy.md:88
#: f30593a4496c498d807bc1bf48fc333e
#: 228c6729c23f4e17b0475b834d7edb01
msgid ""
"if you want to use openai llm service, see [LLM Use FAQ](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)"
@ -290,19 +290,19 @@ msgstr ""
"gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)"
#: ../../getting_started/install/deploy/deploy.md:91
#: adbf4130afb442e0b2ebe9ecadc27927
#: c444514ba77b46468721888fe7df9e74
msgid "cp .env.template .env"
msgstr "cp .env.template .env"
#: ../../getting_started/install/deploy/deploy.md:94
#: 67a0efdf4138465b84f50946c50aef33
#: 1514e937757e461189b369da73884a6c
msgid ""
"You can configure basic parameters in the .env file, for example setting "
"LLM_MODEL to the model to be used"
msgstr "您可以在.env文件中配置基本参数例如将LLM_MODEL设置为要使用的模型。"
#: ../../getting_started/install/deploy/deploy.md:96
#: ec7328fea11344b8ac6b821a1cd10531
#: 4643cdf76bd947fdb86fc4691b98935c
msgid ""
"([Vicuna-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) based on "
"llama-2 has been released, we recommend you set `LLM_MODEL=vicuna-"
@ -313,37 +313,37 @@ msgstr ""
"目前Vicuna-v1.5模型(基于llama2)已经开源了我们推荐你使用这个模型通过设置LLM_MODEL=vicuna-13b-v1.5"
#: ../../getting_started/install/deploy/deploy.md:98
#: 386d91268dc54f3a92172758afe25d3e
#: acf91810f12b4ad0bd830299eb24850f
msgid "3. Run"
msgstr "3. Run"
#: ../../getting_started/install/deploy/deploy.md:100
#: 4672d31b78dc44719ad41f05fe0bf103
#: ea82d67451724c2399f8903ea3c52dff
msgid "**(Optional) load examples into SQLlite**"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:105
#: 174bb3d5c82c41ad92fa2147dbfe0bdc
#: a00987ec21364389b7feec58b878c2a1
msgid "On windows platform:"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:110
#: aa25742904354373a10b8aaa62fa6005
#: db5c000e6abe4e1cb94e6f4f14247eb7
msgid "1.Run db-gpt server"
msgstr "1.Run db-gpt server"
#: ../../getting_started/install/deploy/deploy.md:116
#: 86a196e04cd1459ab83c0402b6c73c25
#: dbeecff230174132b85d1d4549d3c07e
msgid "Open http://localhost:5000 with your browser to see the product."
msgstr "打开浏览器访问http://localhost:5000"
#: ../../getting_started/install/deploy/deploy.md:119
#: e53c6021747e48298a9a94b33161aba2
#: 22d6321e6226472e878a95d3c8a9aad8
msgid "If you want to access an external LLM service, you need to"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:121
#: a5d7891ca43a4da8a60e925cfdff829b
#: 561dfe9a864540d6ac582f0977b2c9ad
msgid ""
"1.set the variables LLM_MODEL=YOUR_MODEL_NAME, "
"MODEL_SERVER=YOUR_MODEL_SERVEReg:http://localhost:5000 in the .env "
@ -351,12 +351,12 @@ msgid ""
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:123
#: adc8ccbdda9743aab58a3fdb8aa593fd
#: 55ceca48e40147a99ab4d23392349156
msgid "2.execute dbgpt_server.py in light mode"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:126
#: 29fb5c398fd94f5eaed6382ec7332097
#: 02d42956a2734c739ad1cb9ce59142ce
msgid ""
"If you want to learn about dbgpt-webui, read https://github./csunny/DB-"
"GPT/tree/new-page-framework/datacenter"
@ -365,13 +365,13 @@ msgstr ""
"framework/datacenter"
#: ../../getting_started/install/deploy/deploy.md:132
#: d5a6113dc2844430bde6d1e62ae70eb9
#: d813eb43b97445a08e058d336249e6f6
#, fuzzy
msgid "Multiple GPUs"
msgstr "4. Multiple GPUs"
#: ../../getting_started/install/deploy/deploy.md:134
#: 15d02416586344d19adfc39f6d05ee9f
#: 0ac795f274d24de7b37f9584763e113d
msgid ""
"DB-GPT will use all available gpu by default. And you can modify the "
"setting `CUDA_VISIBLE_DEVICES=0,1` in `.env` file to use the specific gpu"
@ -379,32 +379,32 @@ msgid ""
msgstr "DB-GPT默认加载可利用的gpu你也可以通过修改 在`.env`文件 `CUDA_VISIBLE_DEVICES=0,1`来指定gpu IDs"
#: ../../getting_started/install/deploy/deploy.md:136
#: d6875130c3ba4bbf80a046a77c815f26
#: 2be557e2b5414d478d375bce0474558d
msgid ""
"Optionally, you can also specify the gpu ID to use before the starting "
"command, as shown below:"
msgstr "你也可以指定gpu ID启动"
#: ../../getting_started/install/deploy/deploy.md:146
#: b2d5855d7cf946a1a00ec55e0328b1a6
#: 222f1ebb5cb64675a0c319552d14303e
msgid ""
"You can modify the setting `MAX_GPU_MEMORY=xxGib` in `.env` file to "
"configure the maximum memory used by each GPU."
msgstr "同时你可以通过在.env文件设置`MAX_GPU_MEMORY=xxGib`修改每个GPU的最大使用内存"
#: ../../getting_started/install/deploy/deploy.md:148
#: 385caf97c7d149b1bb1852198c8e04de
#: fb92349f9fe049d5b23b9ead17caf895
#, fuzzy
msgid "Not Enough Memory"
msgstr "5. Not Enough Memory"
#: ../../getting_started/install/deploy/deploy.md:150
#: 30ec48dfe9f44ef88e55f7218b2d3967
#: 30a1105d728a474c9cd14638feab4b59
msgid "DB-GPT supported 8-bit quantization and 4-bit quantization."
msgstr "DB-GPT 支持 8-bit quantization 和 4-bit quantization."
#: ../../getting_started/install/deploy/deploy.md:152
#: dad6fbec89104dd9b11ae6778942bf69
#: eb2e576379434bfa828c98ee374149f5
msgid ""
"You can modify the setting `QUANTIZE_8bit=True` or `QUANTIZE_4bit=True` "
"in `.env` file to use quantization(8-bit quantization is enabled by "
@ -412,7 +412,7 @@ msgid ""
msgstr "你可以通过在.env文件设置`QUANTIZE_8bit=True` or `QUANTIZE_4bit=True`"
#: ../../getting_started/install/deploy/deploy.md:154
#: d6eebf82de5b410a979e978e489d5317
#: eeaecfd77d8546a6afc1357f9f1684bf
msgid ""
"Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit"
" quantization can run with 48 GB of VRAM."

View File

@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-29 20:30+0800\n"
"POT-Creation-Date: 2023-08-29 20:50+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@ -20,46 +20,46 @@ msgstr ""
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/docker/docker.md:1
#: 6330986214f9407b89928b1fbce7c0e6
#: b1f8f6a0b8974ea09a5fe2812f31d941
msgid "Docker Install"
msgstr "Docker 安装"
#: ../../getting_started/install/docker/docker.md:4
#: bddc5010a8cf437eaaf8bf56a65e8cf7
#: d9145d02ed984d45b33eb46117a2484b
msgid "Docker (Experimental)"
msgstr "Docker (Experimental)"
#: ../../getting_started/install/docker/docker.md:6
#: c6ee3554954f44da81d89038f4b04d2f
#: 7d2a5e4016f543748b80dedcee36f3c6
#, fuzzy
msgid "1. Preparing docker images"
msgstr "1.构建Docker镜像"
#: ../../getting_started/install/docker/docker.md:8
#: 868f0199d9054609a545fdc75171560d
#: 1cb08cc1662f45579b82f0d402c39cc3
msgid ""
"**Pull docker image from the [Eosphoros AI Docker "
"Hub](https://hub.docker.com/u/eosphorosai)**"
msgstr ""
#: ../../getting_started/install/docker/docker.md:14
#: c6ee3554954f44da81d89038f4b04d2f
#: 23781bc56d394d07927186c6cf619a91
#, fuzzy
msgid "**(Optional) Building Docker image**"
msgstr "1.构建Docker镜像"
#: ../../getting_started/install/docker/docker.md:20
#: 7d10f8ae3cba4f13a011c7f0dc2ef479
#: 77a31390bd024d5f87f8d8ec386a23ae
msgid "Review images by listing them:"
msgstr "Review images by listing them:"
#: ../../getting_started/install/docker/docker.md:26
#: 9116b39c10394d4fb0dee3a0ed431eca
#: e5f6c40d68f346e1bbb57a5f7ac2f10b
msgid "Output should look something like the following:"
msgstr "输出日志应该长这样:"
#: ../../getting_started/install/docker/docker.md:33
#: 80faced8a2fb482ab12baddec9ac21b8
#: b26892bf338c484ba8ed34f09c0fda23
msgid ""
"`eosphorosai/dbgpt` is the base image, which contains the project's base "
"dependencies and a sqlite database. `eosphorosai/dbgpt-allinone` build "
@ -67,25 +67,25 @@ msgid ""
msgstr ""
#: ../../getting_started/install/docker/docker.md:35
#: 50e3fa230dbf474fbd756a52169ce1c4
#: 7aabb767dcd8439ea7ca14fd8deccb87
msgid "You can pass some parameters to docker/build_all_images.sh."
msgstr "你也可以docker/build_all_images.sh构建的时候指定参数"
#: ../../getting_started/install/docker/docker.md:43
#: 83eb1b3c3f8f4158879e32ef4e4046ba
#: 36a85e7aca484e5cb0656dea8dc3568c
msgid ""
"You can execute the command `bash docker/build_all_images.sh --help` to "
"see more usage."
msgstr "可以指定命令`bash docker/build_all_images.sh --help`查看如何使用"
#: ../../getting_started/install/docker/docker.md:45
#: 893ed5c6fe854265a3d9a3a77a0c385f
#: c0d04891ed784cd8a8403ea395c56a45
#, fuzzy
msgid "2. Run docker container"
msgstr "2. Run all in one docker container"
#: ../../getting_started/install/docker/docker.md:47
#: e18a4483b9c242b187a9bc61746f2d24
#: 7d5cb8366aa849b684fe5c3805213d0d
#, fuzzy
msgid "**Run with local model and SQLite database**"
msgstr "**Run with local model**"
@ -93,13 +93,14 @@ msgstr "**Run with local model**"
#: ../../getting_started/install/docker/docker.md:61
#: ../../getting_started/install/docker/docker.md:88
#: ../../getting_started/install/docker/docker.md:123
#: 90fda1321aa2468aa681431e36800913
#: 2ef66d0c87cf4ab48f9bbf4473071b43 9b594d33ef9f472d9fccfe1bd07d5564
#: f86c8364bd9948b29224c6ef0e1d6a83
msgid "Open http://localhost:5000 with your browser to see the product."
msgstr "打开浏览器访问http://localhost:5000"
#: ../../getting_started/install/docker/docker.md:64
#: ../../getting_started/install/docker/docker.md:91
#: d9ced1b042494210b961204d5f0a5dea
#: a462082e866f46a8b99a4d95e6fa5b83 d262fdeb9f9c4156898875db75997874
#, fuzzy
msgid ""
"`-e LLM_MODEL=vicuna-13b-v1.5`, means we use vicuna-13b-v1.5 as llm "
@ -108,7 +109,7 @@ msgstr "`-e LLM_MODEL=vicuna-13b` 指定llm model is vicuna-13b "
#: ../../getting_started/install/docker/docker.md:65
#: ../../getting_started/install/docker/docker.md:92
#: 1e7e2121ae6047eea27f6f3e8a2c730b
#: c8c172873ff145aeb4f0f9cb04c209a8 f1b2e1f27cd24ed9ae233445f3fe1301
msgid ""
"`-v /data/models:/app/models`, means we mount the local model file "
"directory `/data/models` to the docker container directory `/app/models`,"
@ -119,23 +120,23 @@ msgstr ""
#: ../../getting_started/install/docker/docker.md:67
#: ../../getting_started/install/docker/docker.md:94
#: 77b7fa4ee51b44718145d8d3bf65e59e
#: 432463d5993a4e6eb34497390d89e891 da8df3262b834b359c080e84b25e431e
msgid "You can see log with command:"
msgstr "你也可以通过命令查看日志"
#: ../../getting_started/install/docker/docker.md:73
#: 23495e4399574a91bd04f7cca44748db
#: a4963b927a87455c90a4fbb1d3814a09
#, fuzzy
msgid "**Run with local model and MySQL database**"
msgstr "**Run with local model**"
#: ../../getting_started/install/docker/docker.md:100
#: 2a8088830699426c9ab4cc890ab8d7bd
#: 81ecd658f7c54070bcb47838d3a9f533
msgid "**Run with openai interface**"
msgstr "**Run with openai interface**"
#: ../../getting_started/install/docker/docker.md:119
#: fd5f1772a3db4fa0929d84ea3451ce4e
#: 2c01b849a0fe4b4d9554a9adf9cbf8fc
msgid ""
"`-e LLM_MODEL=proxyllm`, means we use proxy llm(openai interface, "
"fastchat interface...)"
@ -144,7 +145,7 @@ msgstr ""
"interface..."
#: ../../getting_started/install/docker/docker.md:120
#: 7c9b301bd1a145b1ab622c18df7e7310
#: 262af25e0ec748e3a8b12274004624bb
msgid ""
"`-v /data/models/text2vec-large-chinese:/app/models/text2vec-large-"
"chinese`, means we mount the local text2vec model to the docker "

View File

@ -9,7 +9,7 @@ import os
import matplotlib
import seaborn as sns
# matplotlib.use("Agg")
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontManager

View File

@ -3,8 +3,15 @@ import duckdb
import pandas as pd
import matplotlib
import seaborn as sns
import uuid
from pandas import DataFrame
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
from matplotlib import font_manager
from matplotlib.font_manager import FontManager
matplotlib.use("Agg")
import time
from fsspec import filesystem
import spatial
@ -12,42 +19,145 @@ import spatial
from pilot.scene.chat_data.chat_excel.excel_reader import ExcelReader
def data_pre_classification(df: DataFrame):
## Data pre-classification
columns = df.columns.tolist()
number_columns = []
non_numeric_colums = []
# Count the distinct values of each column (intended: collect columns with fewer than 10 categories)
non_numeric_colums_value_map = {}
numeric_colums_value_map = {}
for column_name in columns:
if pd.to_numeric(df[column_name], errors="coerce").notna().all():
number_columns.append(column_name)
unique_values = df[column_name].unique()
numeric_colums_value_map.update({column_name: len(unique_values)})
else:
non_numeric_colums.append(column_name)
unique_values = df[column_name].unique()
non_numeric_colums_value_map.update({column_name: len(unique_values)})
if len(non_numeric_colums) <= 0:
sorted_colums_value_map = dict(
sorted(numeric_colums_value_map.items(), key=lambda x: x[1])
)
numeric_colums_sort_list = list(sorted_colums_value_map.keys())
x_column = number_columns[0]
hue_column = numeric_colums_sort_list[0]
y_column = numeric_colums_sort_list[1]
elif len(number_columns) <= 0:
raise ValueError("Have No numeric Column")
else:
# Both numeric and non-numeric columns exist; keep the first of each and drop the rest
y_column = number_columns[0]
x_column = non_numeric_colums[0]
# Use a second non-numeric column as the hue dimension when one exists, else no hue
hue_column = non_numeric_colums[1] if len(non_numeric_colums) > 1 else None
return x_column, y_column, hue_column
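# Minimal usage sketch (the tiny DataFrame below is made up for illustration only):
#   demo = pd.DataFrame({"Country": ["US", "CN"], "Sales": [100, 200]})
#   data_pre_classification(demo)  # -> ("Country", "Sales", None)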
if __name__ == "__main__":
# connect = duckdb.connect("/Users/tuyang.yhj/Downloads/example.xlsx")
#
# fonts = fm.findSystemFonts()
# for font in fonts:
# if 'Hei' in font:
# print(font)
# fm = FontManager()
# mat_fonts = set(f.name for f in fm.ttflist)
# for i in mat_fonts:
# print(i)
# print(len(mat_fonts))
# Get the system's default Chinese font name
# default_font = fm.fontManager.defaultFontProperties.get_family()
#
excel_reader = ExcelReader("/Users/tuyang.yhj/Downloads/example.xlsx")
# colunms, datas = excel_reader.run( "SELECT CONCAT(Year, '-', Quarter) AS QuarterYear, SUM(Sales) AS TotalSales FROM example GROUP BY QuarterYear ORDER BY QuarterYear")
colunms, datas = excel_reader.run(
""" SELECT Year, SUM(Sales) AS Total_Sales FROM example GROUP BY Year ORDER BY Year; """
)
#
# # colunms, datas = excel_reader.run( "SELECT CONCAT(Year, '-', Quarter) AS QuarterYear, SUM(Sales) AS TotalSales FROM example GROUP BY QuarterYear ORDER BY QuarterYear")
# # colunms, datas = excel_reader.run( """ SELECT Year, SUM(Sales) AS Total_Sales FROM example GROUP BY Year ORDER BY Year; """)
df = excel_reader.get_df_by_sql_ex(
"SELECT Country, SUM(Profit) AS Total_Profit FROM example GROUP BY Country;"
""" SELECT Segment, Country, SUM(Sales) AS Total_Sales, SUM(Profit) AS Total_Profit FROM example GROUP BY Segment, Country """
)
columns = df.columns.tolist()
plt.rcParams["font.family"] = ["sans-serif"]
rc = {"font.sans-serif": "SimHei", "axes.unicode_minus": False}
sns.set_style(rc={"font.sans-serif": "Microsoft Yahei"})
sns.set(context="notebook", style="ticks", color_codes=True, rc=rc)
sns.set_palette("Set3")  # set the color palette
# fig, ax = plt.pie(df[columns[1]], labels=df[columns[0]], autopct='%1.1f%%', startangle=90)
x, y, hue = data_pre_classification(df)
print(x, y, hue)
columns = df.columns.tolist()
font_names = [
"Heiti TC",
"Songti SC",
"STHeiti Light",
"Microsoft YaHei",
"SimSun",
"SimHei",
"KaiTi",
]
fm = FontManager()
mat_fonts = set(f.name for f in fm.ttflist)
can_use_fonts = []
for font_name in font_names:
if font_name in mat_fonts:
can_use_fonts.append(font_name)
if len(can_use_fonts) > 0:
plt.rcParams["font.sans-serif"] = can_use_fonts
rc = {"font.sans-serif": can_use_fonts}
plt.rcParams["axes.unicode_minus"] = False # 解决无法显示符号的问题
sns.set(font="Heiti TC", font_scale=0.8) # 解决Seaborn中文显示问题
sns.set_palette("Set3") # 设置颜色主题
sns.set_style("dark")
sns.color_palette("hls", 10)
sns.hls_palette(8, l=0.5, s=0.7)
sns.set(context="notebook", style="ticks", rc=rc)
# sns.set_palette("Set3")  # set the color palette
# sns.set_style("dark")
# sns.color_palette("hls", 10)
# sns.hls_palette(8, l=.5, s=.7)
# sns.set(context='notebook', style='ticks', rc=rc)
# Pie-chart experiment from the earlier single-measure query; with the current
# GROUP BY Segment, Country query, columns[1] is the non-numeric "Country" column,
# so plotting it as a pie raises a TypeError. Left commented out (the same code is
# also kept, commented, at the bottom of the file).
# fig, ax = plt.subplots(figsize=(8, 5), dpi=100)
# plt.subplots_adjust(top=0.9)
# ax = df.plot(
#     kind="pie",
#     y=columns[1],
#     ax=ax,
#     labels=df[columns[0]].values,
#     startangle=90,
#     autopct="%1.1f%%",
# )
# # manually set the legend position and font size
# ax.legend(
#     loc="center left", bbox_to_anchor=(-1, 0.5, 0, 0), labels=None, fontsize=10
# )
# plt.axis("equal")  # keep the pie chart circular
# plt.show()
# plt.ticklabel_format(style='plain')
# ax = df.plot(kind='bar', ax=ax)
# sns.barplot(df, x=x, y=y, hue= "Country", ax=ax)
g = sns.catplot(data=df, x=x, y=y, hue="Country", kind="bar")
# Format the y-axis ticks of the bar chart as plain numbers
g.ax.yaxis.set_major_formatter(mtick.FuncFormatter(lambda v, _: "{:,.0f}".format(v)))
# fonts = font_manager.findSystemFonts()
# font_path = ""
# for font in fonts:
# if "Heiti" in font:
# font_path = font
# my_font = font_manager.FontProperties(fname=font_path)
# plt.title("测试", fontproperties=my_font)
# plt.ylabel(columns[1], fontproperties=my_font)
# plt.xlabel(columns[0], fontproperties=my_font)
chart_name = "bar_" + str(uuid.uuid1()) + ".png"
chart_path = chart_name
plt.savefig(chart_path, bbox_inches="tight", dpi=100)
# sns.set(context="notebook", style="ticks", color_codes=True)
# sns.set_palette("Set3")  # set the color palette
#
# # fig, ax = plt.pie(df[columns[1]], labels=df[columns[0]], autopct='%1.1f%%', startangle=90)
# fig, ax = plt.subplots(figsize=(8, 5), dpi=100)
# plt.subplots_adjust(top=0.9)
# ax = df.plot(kind='pie', y=columns[1], ax=ax, labels=df[columns[0]].values, startangle=90, autopct='%1.1f%%')
# # manually set the legend position and font size
# ax.legend(loc='center left', bbox_to_anchor=(-1, 0.5, 0,0), labels=None, fontsize=10)
# plt.axis('equal')  # keep the pie chart circular
# plt.show()
#
#
# def csv_colunm_foramt(val):

View File

@ -21,7 +21,7 @@ Be careful to not query for columns that do not exist. Also, pay attention to wh
Question: {input}
Rrespond in JSON format as following format:
Respond in JSON format as following format:
{response}
Ensure the response is correct json and can be parsed by Python json.loads
"""