feat(editor): ChatExcel

🔥ChatExcel Mode Operation Manual
This commit is contained in:
yhjun1026
2023-08-29 20:57:09 +08:00
parent 0fb09de345
commit 91225e8b25
5 changed files with 362 additions and 178 deletions


@@ -0,0 +1,99 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.6\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-29 20:50+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/application/chatexcel/chatexcel.md:1
#: ../../getting_started/application/chatexcel/chatexcel.md:8
#: 5858f6adf3ac431885c6783f05cfbdba 7236d35e0cc942aaaea23f55fb251c31
msgid "ChatExcel"
msgstr ""
#: ../../getting_started/application/chatexcel/chatexcel.md:3
#: 9d00fa6394d343fa8c1574d6eee13407
msgid ""
"ChatExcel uses natural language to analyze and query Excel data.![db "
"plugins demonstration](../../../../assets/chat_excel/chat_excel_1.png)"
msgstr "使用自然语言进行Excel数据的分析处理"
#: ../../getting_started/application/chatexcel/chatexcel.md:3
#: ../../getting_started/application/chatexcel/chatexcel.md:12
#: ../../getting_started/application/chatexcel/chatexcel.md:16
#: ../../getting_started/application/chatexcel/chatexcel.md:20
#: 3d006359fdfd421c9e687cbf2f135bf6 531d28926d7c43dfbb819d4c8797efcb
#: 8f7261ad60554cad8a50a87920364a4f ad5c0191f7b242068d8482e9ef3212b3
#: ade378c923ba492f947606e6d4b34326 b9dcfb83d84845fda8eebedd1c3b42d0
#: f07b1ce9bc6e4197a8bf3d3aeb983aab
msgid "db plugins demonstration"
msgstr ""
#: ../../getting_started/application/chatexcel/chatexcel.md:5
#: 22dab8f3e69b46d99f7162ce89350a64
msgid "1.Select And Upload Excel or CSV File"
msgstr ""
#: ../../getting_started/application/chatexcel/chatexcel.md:6
#: 0c4cec1f73d346bd911c45b11a78f2a5
msgid "Select your Excel or CSV file to upload and start the conversation."
msgstr "选择你的Excel或者CSV文件上传开始对话"
#: ../../getting_started/application/chatexcel/chatexcel.md:10
#: 889b15602cbb4c1f9a12abe36acbe30d
msgid ""
"The ChatExcel function supports Excel and CSV format files, select the "
"corresponding file to use."
msgstr "ChatExcel功能支持Excel和CSV格式的文件选择对应格式的文件开始使用"
#: ../../getting_started/application/chatexcel/chatexcel.md:12
#: ccabfe8d6195458f964d7652d0bce2db
msgid ""
"![db plugins "
"demonstration](../../../../assets/chat_excel/chat_excel_2.png) ![db "
"plugins demonstration](../../../../assets/chat_excel/chat_excel_3.png)"
msgstr ""
#: ../../getting_started/application/chatexcel/chatexcel.md:15
#: 3cbea48852824b5f937acdd705f558a2
msgid "2.Wait for Data Processing"
msgstr "等待数据处理"
#: ../../getting_started/application/chatexcel/chatexcel.md:16
#: 1f33571f1eb24cc68d66afbf348dc43d
msgid ""
"After the data is uploaded, it will first learn and process the data "
"structure and field meaning. ![db plugins "
"demonstration](../../../../assets/chat_excel/chat_excel_4.png)"
msgstr "数据上传完成后,会先对数据结构和字段含义进行学习和处理"
#: ../../getting_started/application/chatexcel/chatexcel.md:19
#: f3a612db84014bf18fd2b93fb1a5f58c
msgid "3.Use Data Analysis Calculation"
msgstr "开始使用数据分析计算"
#: ../../getting_started/application/chatexcel/chatexcel.md:20
#: c40adb31e4c04a16ab9dbfefc64bde36
msgid ""
"Now you can use natural language to analyze and query data in the dialog "
"box. ![db plugins "
"demonstration](../../../../assets/chat_excel/chat_excel_5.png) ![db "
"plugins demonstration](../../../../assets/chat_excel/chat_excel_6.png) "
"![db plugins "
"demonstration](../../../../assets/chat_excel/chat_excel_7.png)"
msgstr "现在可以在对话框中使用自然语言进行数据的分析查询了"


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-22 13:28+0800\n"
"POT-Creation-Date: 2023-08-29 20:50+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -20,12 +20,12 @@ msgstr ""
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/application/dashboard/dashboard.md:1
#: 8017757596ff4c7faa06f7e7d18902ca
#: 2a1224e675d144269e5cc3695d4d60b4
msgid "Dashboard"
msgstr "Dashboard"
#: ../../getting_started/application/dashboard/dashboard.md:3
#: 5b84a61923404d8c81d5a1430b3fa12c
#: 2b6d2f94f73d43e68806bf4c6d0d9269
msgid ""
"The purpose of the DB-GPT Dashboard is to empower data analysts with "
"efficiency. DB-GPT provides intelligent reporting technology, allowing "
@@ -34,37 +34,37 @@ msgid ""
msgstr "DB-GPT Dashboard目的是赋能数据分析人员。DB-GPT通过提供智能报表技术使得业务分析人员可以直接使用简单的自然语言进行自助分析。"
#: ../../getting_started/application/dashboard/dashboard.md:8
#: 48604cca2b3f482692bb65a01f0297a7
#: 9612fa76c4264bab8e629ac50959faa9
msgid "Dashboard now support Datasource Type"
msgstr "Dashboard目前支持的数据源类型"
#: ../../getting_started/application/dashboard/dashboard.md:9
#: e4371bc220be46f0833dc7d0c804f263
#: bb0b15742ebe41628fb0d1fc38caabe2
msgid "Mysql"
msgstr "Mysql"
#: ../../getting_started/application/dashboard/dashboard.md:10
#: 719c578796fa44a3ad062289aa4650d7
#: 35491581125b4bdd8422f35b11c7bc2c
msgid "Sqlite"
msgstr "Sqlite"
#: ../../getting_started/application/dashboard/dashboard.md:11
#: c7817904bbf34dfca56a19a004937146
#: 8c4389354e0344aa9a781bdfc94c2cac
msgid "DuckDB"
msgstr "DuckDB"
#: ../../getting_started/application/dashboard/dashboard.md:13
#: 1cebeafe853d43809e6ced45d2b68812
#: 18e8c60f5c2f4aa698cec1e8e8b354c8
msgid "Steps to Dashboard In DB-GPT"
msgstr "Dashboard使用步骤"
#: ../../getting_started/application/dashboard/dashboard.md:15
#: 977520bbea44423ea290617712482148
#: 94f98e0f5c2e451ba29b9b77c4139ed9
msgid "1 add datasource"
msgstr "1.添加数据源"
#: ../../getting_started/application/dashboard/dashboard.md:17
#: a8fcef153c68498fa9886051e8d7b072
#: 34e1211e65b940c3beb6234bcfa423a1
#, fuzzy
msgid ""
"If you are using Dashboard for the first time, you need to mock some data"
@@ -75,17 +75,17 @@ msgid ""
msgstr "如果你是第一次使用Dashboard需要构造测试数据DB-GPT在pilot/mock_datas/提供了测试数据,只需要将数据源进行添加即可"
#: ../../getting_started/application/dashboard/dashboard.md:17
#: 1abcaa9d7fad4b53a0622ab3e982e6d5
#: f29905929b32442ba05833b6c52a11be
msgid "add_datasource"
msgstr "添加数据源"
#: ../../getting_started/application/dashboard/dashboard.md:21
#: 21ebb5bf568741a9b3d7a4275dde69fa
#: 367a487dd1d54681a6e83d8fdda5b793
msgid "2.Choose Dashboard Mode"
msgstr "2.进入Dashboard"
#: ../../getting_started/application/dashboard/dashboard.md:23
#: 1b55d97634b44543acf8f367f77d8436
#: 1ee1e980934e4a618591b7c43921c304
msgid ""
"![create_space](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/5e888880-0e97-4b60-8e5c-b7e7224197f0)"
@@ -94,17 +94,17 @@ msgstr ""
"GPT/assets/13723926/5e888880-0e97-4b60-8e5c-b7e7224197f0)"
#: ../../getting_started/application/dashboard/dashboard.md:23
#: 6c97d2aa26fa401cb3c4172bfe4aea6a
#: 12c756afdad740a9afc9cb46cc834af8
msgid "create_space"
msgstr "create_space"
#: ../../getting_started/application/dashboard/dashboard.md:25
#: ff8e96f78698428a9a578b4f90e0feb4
#: 5a575b17408c42fbacd32d8ff792d5a8
msgid "3.Select Datasource"
msgstr "3.选择数据源"
#: ../../getting_started/application/dashboard/dashboard.md:27
#: 277c924a6f2b49f98414cde95310384f
#: ae051f852a5a4044a147c853cc3fba60
msgid ""
"![document](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/da2ac8b5-eca4-48ef-938f-f9dc1ca711b3)"
@@ -114,45 +114,53 @@ msgstr ""
#: ../../getting_started/application/dashboard/dashboard.md:27
#: ../../getting_started/application/dashboard/dashboard.md:31
#: 33164f10fb38452fbf98be5aabaeeb91 3a46cb4427cf4ba386230dff47cf7647
#: d0093988bb414c41a93e8ad6f88e8404
#: 94907bb0dc694bc3a4d2ee57a84b8242 ecc0666385904fce8bb1000735482f65
msgid "document"
msgstr "document"
#: ../../getting_started/application/dashboard/dashboard.md:29
#: 6a57e48482724d23adf51e888d126562
#: c8697e93661c48b19674e63094ba7486
msgid "4.Input your analysis goals"
msgstr "4.输入分析目标"
#: ../../getting_started/application/dashboard/dashboard.md:31
#: cb96df3f9135450fbf71177978c50141
#: 473fc0d00ab54ee6bc5c21e017591cc4
#, fuzzy
msgid ""
"![document](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/3f427350-5bd5-4675-8f89-1bd5c63ff2c6) "
"![document](https://github.com/eosphoros-ai/DB-GPT/assets/13723926"
"/3d14a2da-165e-4b2f-a921-325c20fe5ae9)"
"GPT/assets/13723926/3f427350-5bd5-4675-8f89-1bd5c63ff2c6) ![db plugins "
"demonstration](../../../../assets/chat_dashboard/chat_dashboard_1.png)"
msgstr ""
"![document](https://github.com/eosphoros-ai/DB-"
"GPT/assets/13723926/3f427350-5bd5-4675-8f89-1bd5c63ff2c6) "
"![document](https://github.com/eosphoros-ai/DB-GPT/assets/13723926"
"/3d14a2da-165e-4b2f-a921-325c20fe5ae9)"
#: ../../getting_started/application/dashboard/dashboard.md:31
#: ../../getting_started/application/dashboard/dashboard.md:35
#: 00597e1268544d97a3de368b04d5dcf8 350d04e4b7204823b7a03c0a7606c951
msgid "db plugins demonstration"
msgstr ""
#: ../../getting_started/application/dashboard/dashboard.md:34
#: ed0f008525334a36a900b82339591095
#: b48cc911c1614def9e4738d35e8b754c
msgid "5.Adjust and modify your report"
msgstr "5.调整和修改你的报表"
#: ../../getting_started/application/dashboard/dashboard.md:36
#: 8fc26117a2e1484b9452cfaf8c7f208b
#: ../../getting_started/application/dashboard/dashboard.md:35
#: b0442bbc0f6c4c33914814ac92fc4b13
msgid ""
"![upload](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/cb802b94"
"-041b-44b4-b6ec-891bf8da52a4)"
"![db plugins "
"demonstration](../../../../assets/chat_dashboard/chat_dashboard_2.png)"
msgstr ""
"![upload](https://github.com/eosphoros-ai/DB-GPT/assets/13723926/cb802b94"
"-041b-44b4-b6ec-891bf8da52a4)"
#: ../../getting_started/application/dashboard/dashboard.md:36
#: 6d12166c3c574651a854534cc8c7e997
msgid "upload"
msgstr "upload"
#~ msgid ""
#~ "![upload](https://github.com/eosphoros-ai/DB-"
#~ "GPT/assets/13723926/cb802b94-041b-44b4-b6ec-891bf8da52a4)"
#~ msgstr ""
#~ "![upload](https://github.com/eosphoros-ai/DB-"
#~ "GPT/assets/13723926/cb802b94-041b-44b4-b6ec-891bf8da52a4)"
#~ msgid "upload"
#~ msgstr "upload"


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-21 16:59+0800\n"
"POT-Creation-Date: 2023-08-29 20:50+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -20,12 +20,12 @@ msgstr ""
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/faq/deploy/deploy_faq.md:1
#: 4466dae5cd1048cd9c22450be667b05a
#: 0baefc753798469588ea011c12a0bfd3
msgid "Installation FAQ"
msgstr "Installation FAQ"
#: ../../getting_started/faq/deploy/deploy_faq.md:5
#: dfa13f5fdf1e4fb9af92b58a5bae2ae9
#: 013bf01a02c842ee8bc576f85d127e22
#, fuzzy
msgid ""
"Q1: execute `pip install -e .` error, found some package cannot find "
@@ -35,18 +35,18 @@ msgstr ""
"cannot find correct version."
#: ../../getting_started/faq/deploy/deploy_faq.md:6
#: c694387b681149d18707be047b46fa87
#: 2729928139484def827143c17f2d968c
msgid "change the pip source."
msgstr "替换pip源."
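The "change the pip source" fix this entry describes can be sketched as a one-liner; the Tsinghua mirror URL below is only an example, any PyPI mirror works:

```shell
# Install DB-GPT in editable mode from a PyPI mirror instead of pypi.org
# (mirror URL is an example, not mandated by the FAQ; run from the repo root)
pip install -e . -i https://pypi.tuna.tsinghua.edu.cn/simple
```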
#: ../../getting_started/faq/deploy/deploy_faq.md:13
#: ../../getting_started/faq/deploy/deploy_faq.md:20
#: 5423bc84710c42ee8ba07e95467ce3ac 99aa6bb16764443f801a342eb8f212ce
#: 6e8bf02d7117454fbcc28c7ec27e055a acd2186c0320466f95b500dade75591b
msgid "or"
msgstr "或者"
#: ../../getting_started/faq/deploy/deploy_faq.md:27
#: 6cc878fe282f4a9ab024d0b884c57894
#: c5aab9455827416084a1ea6792263add
msgid ""
"Q2: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to"
" open database file"
@@ -55,39 +55,73 @@ msgstr ""
" open database file"
#: ../../getting_started/faq/deploy/deploy_faq.md:29
#: 18a71bc1062a4b1c8247068d4d49e25d
#: 29880cfc6c7f4f6fb14a9cbb9eed07ad
msgid "make sure you pull latest code or create directory with mkdir pilot/data"
msgstr "请确保已拉取最新代码,或使用 mkdir pilot/data 创建该目录"
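The `mkdir pilot/data` remedy mentioned above, run from the repository root, is simply:

```shell
# Create the directory the default SQLite database is written to
mkdir -p pilot/data
```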
#: ../../getting_started/faq/deploy/deploy_faq.md:31
#: 0987d395af24440a95dd9367e3004a0b
#: 36264030f5cd41bebd17beae12d9be51
msgid "Q3: The model keeps getting killed."
msgstr "Q3: The model keeps getting killed."
#: ../../getting_started/faq/deploy/deploy_faq.md:33
#: bfd90cb8f2914bba84a44573a9acdd6d
#: 0cbf6ae0fee14d239cb1cc6ddba134d7
msgid ""
"Your GPU VRAM size is not enough; try upgrading your hardware or "
"switching to an LLM that needs less VRAM."
msgstr "GPU显存不够, 增加显存或者换一个显存小的模型"
#: ../../getting_started/faq/deploy/deploy_faq.md:35
#: 09a9baca454d4b868fedffa4febe7c5c
#: 6f4ce365d20843529195aa6970d6074e
msgid "Q4: How to access website on the public network"
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:37
#: 3ad8d2cf2b4348a6baed5f3e302cd58c
#: 9f4a119e64c74a0693fa067cd35cd833
msgid ""
"You can try to use gradio's [network](https://github.com/gradio-"
"app/gradio/blob/main/gradio/networking.py) to achieve."
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:48
#: 90b35959c5854b69acad9b701e21e65f
#: 4c09cfb493ba41fb8590954b986e949d
msgid "Open `url` with your browser to see the website."
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:50
#: 7d905a99d1c547eb95d9c619c70bf221
msgid "Q5: (Windows) execute `pip install -e .` error"
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:52
#: fe26218168c4447a8dc89e436cdd1000
msgid "The error log like the following:"
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:71
#: d15615f7798d4dc0ad49d9b28926fe32
msgid ""
"Download and install `Microsoft C++ Build Tools` from [visual-cpp-build-"
"tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)"
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:75
#: 60ef06d3f99c44c1b568ec7c652905ee
msgid "Q6: `Torch not compiled with CUDA enabled`"
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:82
#: 830e63627d2c48b8987ed20db3405c41
msgid "Install [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit-archive)"
msgstr ""
#: ../../getting_started/faq/deploy/deploy_faq.md:83
#: 50a1c244ddf747d797825158550026b9
msgid ""
"Reinstall PyTorch [start-locally](https://pytorch.org/get-started/locally"
"/#start-locally) with CUDA support."
msgstr ""
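The two-step fix for Q6 might look like the following; the cu118 wheel index is an assumption, pick the index matching your installed CUDA Toolkit from the PyTorch "start locally" page:

```shell
# Reinstall PyTorch from a CUDA-enabled wheel index (cu118 is an example)
pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu118
# Verify CUDA is now visible to PyTorch
python -c "import torch; print(torch.cuda.is_available())"
```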
#~ msgid ""
#~ "Q2: When use Mysql, Access denied "
#~ "for user 'root@localhost'(using password :NO)"


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-21 16:59+0800\n"
"POT-Creation-Date: 2023-08-29 20:50+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -20,34 +20,34 @@ msgstr ""
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/deploy/deploy.md:1
#: 14020ee624b545a5a034b7e357f42545
#: b4f766ca21d241e2849ee0a277a0e8f0
msgid "Installation From Source"
msgstr "源码安装"
#: ../../getting_started/install/deploy/deploy.md:3
#: eeafb53bf0e846518457084d84edece7
#: 9cf72ef201ba4c7a99da8d7de9249cf4
msgid ""
"This tutorial gives you a quick walkthrough of using DB-GPT with your "
"environment and data."
msgstr "本教程为您提供在您的环境和数据上使用DB-GPT的快速指南。"
#: ../../getting_started/install/deploy/deploy.md:5
#: 1d6ee2f0f1ae43e9904da4c710b13e28
#: b488acb9552043df96e9f01277375b56
msgid "Installation"
msgstr "安装"
#: ../../getting_started/install/deploy/deploy.md:7
#: 6ebdb4ae390e4077af2388c48a73430d
#: e1eb3aafea0c4b82b8d8163b947677dd
msgid "To get started, install DB-GPT with the following steps."
msgstr "请按照以下步骤安装DB-GPT"
#: ../../getting_started/install/deploy/deploy.md:9
#: 910cfe79d1064bd191d56957b76d37fa
#: 4139c4e62e874dc58136b1f8fe0715fe
msgid "1. Hardware Requirements"
msgstr "1. 硬件要求"
#: ../../getting_started/install/deploy/deploy.md:10
#: 6207b8e32b7c4b669c8874ff9267627e
#: c34a204cfa6e4973bfd94e683195c17b
msgid ""
"As our project has the ability to achieve ChatGPT performance of over "
"85%, there are certain hardware requirements. However, overall, the "
@@ -56,176 +56,176 @@ msgid ""
msgstr "由于我们的项目有能力达到85%以上的ChatGPT性能所以对硬件有一定的要求。但总体来说我们在消费级的显卡上即可完成项目的部署使用具体部署的硬件说明如下:"
#: ../../getting_started/install/deploy/deploy.md
#: 45babe2e028746559e437880fdbcd5d3
#: 3a92203e861b42c9af3d4b687d83de5e
msgid "GPU"
msgstr "GPU"
#: ../../getting_started/install/deploy/deploy.md
#: 7adbc2bb5e384d419b53badfaf36b962 9307790f5c464f58a54c94659451a037
#: 6050741571574eb8b9e498a5b3a7e347 c0a7e2aecb4b48949c3e5a4d479ee7b5
msgid "VRAM Size"
msgstr "显存"
#: ../../getting_started/install/deploy/deploy.md
#: 305fdfdbd4674648a059b65736be191c
#: 247159f568e4476ca6c5e78015c7a8f0
msgid "Performance"
msgstr "Performance"
#: ../../getting_started/install/deploy/deploy.md
#: 0e719a22b08844d9be04b4bcaeb4ad87
#: 871113cbc58743ef989a366b76e8c645
msgid "RTX 4090"
msgstr "RTX 4090"
#: ../../getting_started/install/deploy/deploy.md
#: 482dd0da73f3495198ee1c9c8fb7e8ed ed30edd1a6944d6c8cb6a06c9c12d4db
#: 81327b7e9a984ec99cae779743d174df c237f392162c42d28ec694d17c3f281c
msgid "24 GB"
msgstr "24 GB"
#: ../../getting_started/install/deploy/deploy.md
#: b9a9b3179d844b97a578193eacfec8cc
#: 6e19f23bae05467ba03f1ebb194e0c03
msgid "Smooth conversation inference"
msgstr "Smooth conversation inference"
#: ../../getting_started/install/deploy/deploy.md
#: 171da7c9f0744b5aa335a5411f126eb7
#: 714a48b2c4a943819819a6af034f1998
msgid "RTX 3090"
msgstr "RTX 3090"
#: ../../getting_started/install/deploy/deploy.md
#: fbb497f41e61437ba089008c573b0cc7
#: 06dae55d443c48b1b3fbab85222c3adb
msgid "Smooth conversation inference, better than V100"
msgstr "Smooth conversation inference, better than V100"
#: ../../getting_started/install/deploy/deploy.md
#: 2cb4fba16b664e1e9c22a1076f837a80
#: 5d50db167b244d65a8be1dab4acda37d
msgid "V100"
msgstr "V100"
#: ../../getting_started/install/deploy/deploy.md
#: 05cccda43ffb41d7b73c2d5dfbc7f1c5 8c471150ab0746d8998ddca30ad86404
#: 0d72262c85d148d8b1680d1d9f8fa2c9 e10db632889444a78e123773a30f23cf
msgid "16 GB"
msgstr "16 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 335fa32f77a349abb9a813a3a9dd6974 8546d09080b0421597540e92ab485254
#: 1c0379e653cf46f19d83535c568c54c8 aee8eb48e7804572af351dcfaea5b0fb
msgid "Conversation inference possible, noticeable stutter"
msgstr "Conversation inference possible, noticeable stutter"
#: ../../getting_started/install/deploy/deploy.md
#: dd7a18f4bc144413b90863598c5e9a83
#: 5bc90343dcef48c197438f01efe52bfc
msgid "T4"
msgstr "T4"
#: ../../getting_started/install/deploy/deploy.md:19
#: 932dc4db2fba4272b72c31eb7d319255
#: c9b5f973d19645d39b1892c00526afa7
msgid ""
"If your VRAM size is not enough, DB-GPT supports 8-bit quantization and "
"4-bit quantization."
msgstr "如果你的显存不够DB-GPT支持8-bit和4-bit量化版本"
#: ../../getting_started/install/deploy/deploy.md:21
#: 7dd6fabaf1ea43718f26e8b83a7299e3
#: 5e488271eede411d882f62ec8524dd4a
msgid ""
"Here are some of the VRAM size usage of the models we tested in some "
"common scenarios."
msgstr "以下是我们在一些常见场景下测试的模型显存占用情况"
#: ../../getting_started/install/deploy/deploy.md
#: b50ded065d4943e3a5bfdfdf3a723f82
#: 2cc65f16fa364088bedd0e58b6871ec8
msgid "Model"
msgstr "Model"
#: ../../getting_started/install/deploy/deploy.md
#: 30621cf2407f4beca262eb47023d0b84
#: d0e1a0d418f74e4b9f5922b17f0c8fcf
msgid "Quantize"
msgstr "Quantize"
#: ../../getting_started/install/deploy/deploy.md
#: 492112b927ce46308c50917beaa9e23a 8450f0b95a05475d9136906ec64d43b2
#: 460b418ab7eb402eae7a0f86d1fda4bf 5e456423a9fa4c0392b08d32f3082f6f
msgid "vicuna-7b-v1.5"
msgstr "vicuna-7b-v1.5"
#: ../../getting_started/install/deploy/deploy.md
#: 1379c29cb10340848ce3e9bf9ec67348 39221fd99f0a41d29141fb73e1c9217d
#: 3fd499ced4884e2aa6633784432f085c 6e872d37f2ab4571961465972092f439
#: 917c7d492a4943f5963a210f9c997cb7 fb204224fb484019b344315d03f50571
#: ff9f5aea13d04176912ebf141cc15d44
#: 0f290c12b9324a07affcfd66804b82d7 29c81ce163e749b99035942a3b18582a
#: 3a4f4325774d452f8c174cac5fe8de47 584f986a1afb4086a0382a9f7e79c55f
#: 994c744ac67249f4a43b3bba360c0bbf aa9c82f660454143b9212842ffe0e0d6
#: ac7b00313284410b9253c4a768a30f0c
msgid "4-bit"
msgstr "4-bit"
#: ../../getting_started/install/deploy/deploy.md
#: 13976c780bc3451fae4ad398b39f5245 32fe2b6c2f1e40c8928c8537f9239d07
#: 74f4ff229a314abf97c8fa4d6d73c339
#: 27401cbb0f2542e2aaa449a586aad2d1 2a1d2d10001f4d9f9b9961c28c592280
#: b69a59c6e4a7458c91be814a98502632
msgid "8 GB"
msgstr "8 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 14ca9edebe794c7692738e5d14a15c06 453394769a20437da7a7a758c35345af
#: 860b51d1ab6742c5bdefc6e1ffc923a3 988110ac5e7f418d8a46ea9c42238ecc
#: d446bc441f8946d9a95a2a965157478e d683ae90e3da473f876cb948dd5dce5e
#: f1e2f6fb36624185ab2876bdc87301ec
#: 0a15518df1b94492b610e47f3c7bb4f6 1f1852ceae0b4c21a020dc9ef4f8b20b
#: 89ad803f6bd24b5d9708a6d4bd48a54f ac7c222678d34637a03546dcb5949668
#: b12e1599bdcb4d27ad4e4a83f12de916 c80ba4ddc1634093842a6f284b7b22bb
#: f63b900e4b844b3196c4c221b36d31f7
msgid "8-bit"
msgstr "8-bit"
#: ../../getting_started/install/deploy/deploy.md
#: 1b6230690780434184bab3a68d501b60 734f124ce767437097d9cb3584796df5
#: 7da1c33a6d4746eba7f976cc43e6ad59 c0e7bd4672014afe88be1e11b8e772da
#: d045ee257f8a40e8bdad0e2b91c64018 e231e0b6f7ee4f5b85d28a18bbc32175
#: 02f72ed48b784b05b2fcaf4ea33fcba8 17285314376044bf9d9a82f9001f39dc
#: 403178173a784bdf8d02fe856849a434 4875c6b595484091b622602d9ef0d3e8
#: 4b11125d4b0c40c488bffb130f4f2b9f e2418c76e7e04101821f29650d111a4a
msgid "12 GB"
msgstr "12 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 2ee16dd760e74439bbbec13186f9e44e 65628a09b7bf487eb4b4f0268eab2751
#: 01dfd16f70cf4128a49ca7bc79f77042 a615efffecb24addba759d05ef61a1c0
msgid "vicuna-13b-v1.5"
msgstr "vicuna-13b-v1.5"
#: ../../getting_started/install/deploy/deploy.md
#: 3d633ad2c90547d1b47a06aafe4aa177 4a9e81e6303748ada67fa8b6ec1a8f57
#: d33dc735503649348590e03efabef94d
#: 412ddfa6e6fb4567984f757cf74b3bfc 529650341d96466a93153d58ddef0ec9
#: 6176929d59bb4e31a37cbba8a81a489f
msgid "20 GB"
msgstr "20 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 6c02020d52e94f27b2eb1d10b20e9cca d58f181d10c04980b6d0bdc2be51c01c
#: 566b7aa7bc88421a9364cef6bfbeae48 ae32a218d07e44c796ca511972ea2cb0
msgid "llama-2-7b"
msgstr "llama-2-7b"
#: ../../getting_started/install/deploy/deploy.md
#: b9f25fa66dc04e539dfa30b2297ac8aa e1e80256f5444e9aacebb64e053a6b70
#: 1ac748eb518b4017accb98873fe1a8e5 528109c765e54b3caf284e7794abd468
msgid "llama-2-13b"
msgstr "llama-2-13b"
#: ../../getting_started/install/deploy/deploy.md
#: 077c1d5b6db248d4aa6a3b0c5b2cc237 987cc7f04eea4d94acd7ca3ee0fdfe20
#: dfb5c0fa9e82423ab1de9256b3b3f215 f861be75871d40849f896859d0b8be4c
msgid "llama-2-70b"
msgstr "llama-2-70b"
#: ../../getting_started/install/deploy/deploy.md
#: 18a1070ab2e048c7b3ea90e25d58b38f
#: 5568529a82cd4c49812ab2fd46ff9bf0
msgid "48 GB"
msgstr "48 GB"
#: ../../getting_started/install/deploy/deploy.md
#: da0474d2c8214e678021181191a651e5
#: 4ba730f4faa64df9a0a9f72cb3eb0c88
msgid "80 GB"
msgstr "80 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 49e5fedf491b4b569459c23e4f6ebd69 8b54730a306a46f7a531c13ef825b20a
#: 47221748d6d5417abc25e28b6905bc6f 6023d535095a4cb9a99343c2dfddc927
msgid "baichuan-7b"
msgstr "baichuan-7b"
#: ../../getting_started/install/deploy/deploy.md
#: 2158255ea61c4a0c9e96bec0df21fa06 694d9b2fe90740c5b9704089d7ddac9d
#: 55011d4e0bed451dbdda75cb8b258fa5 bc296e4bd582455ca64afc74efb4ebc8
msgid "baichuan-13b"
msgstr "baichuan-13b"
#: ../../getting_started/install/deploy/deploy.md:40
#: a9f9e470d41f4122a95cbc8bd2bc26dc
#: 4bfd52634a974776933c93227f419cdb
msgid "2. Install"
msgstr "2. Install"
#: ../../getting_started/install/deploy/deploy.md:45
#: c78fd491a6374224ab95dc39849f871f
#: 647f09001d4c4124bed11da272306946
msgid ""
"We use Sqlite as default database, so there is no need for database "
"installation. If you choose to connect to other databases, you can "
@@ -240,12 +240,12 @@ msgstr ""
" Miniconda](https://docs.conda.io/en/latest/miniconda.html)"
#: ../../getting_started/install/deploy/deploy.md:54
#: 12180cd023a04152b1591a87d96d227a
#: bf9fcf320ca94dbd855016088800b1a9
msgid "Before use DB-GPT Knowledge"
msgstr "在使用知识库之前"
#: ../../getting_started/install/deploy/deploy.md:60
#: 2d5c1e241a0b47de81c91eca2c4999c6
#: e0cb6cb46a474c4ca16edf73c82b58ca
msgid ""
"Once the environment is installed, we have to create a new folder "
"\"models\" in the DB-GPT project, and then we can put all the models "
@@ -253,27 +253,27 @@ msgid ""
msgstr "环境安装完成后需要在DB-GPT项目中新建models文件夹然后从Hugging Face官网下载模型放入其中"
#: ../../getting_started/install/deploy/deploy.md:63
#: 8a79092303e74ab2974fb6edd3d14a1c
#: 03b1bf35528d4cdeb735047aa840d6fe
msgid "Notice make sure you have install git-lfs"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:65
#: 68e9b02da9994192856fd4572732041d
#: f8183907e7c044f695f86943b412d84a
msgid "centos:yum install git-lfs"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:67
#: 30cae887feee4ec897d787801d1900db
#: 3bc042bd5cac4007afc9f68e7b5044fe
msgid "ubuntu:apt-get install git-lfs"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:69
#: 292a52e7f50242c59e7e95bafc8102da
#: 5915ed1290e84ed9b6782c6733d88891
msgid "macos:brew install git-lfs"
msgstr ""
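After installing git-lfs with one of the package managers above, the model download the deploy guide describes might look like this (the vicuna model name comes from the doc; the clone needs network access and a lot of disk space):

```shell
# Enable git-lfs once per machine, then fetch a model into the models/ folder
git lfs install
git clone https://huggingface.co/lmsys/vicuna-13b-v1.5 models/vicuna-13b-v1.5
```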
#: ../../getting_started/install/deploy/deploy.md:86
#: a66af00ea6a0403dbc90df23b0e3c40e
#: 104f1e75b0a54300af440ca3b64217a3
msgid ""
"The model files are large and will take a long time to download. During "
"the download, let's configure the .env file, which needs to be copied and"
@@ -281,7 +281,7 @@ msgid ""
msgstr "模型文件很大,需要很长时间才能下载。在下载过程中,让我们配置.env文件它需要从.env.template复制创建。"
#: ../../getting_started/install/deploy/deploy.md:88
#: 98934f9d2dda41e9b9bc20078ed750eb
#: 228c6729c23f4e17b0475b834d7edb01
msgid ""
"if you want to use openai llm service, see [LLM Use FAQ](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)"
@@ -290,19 +290,19 @@ msgstr ""
"gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)"
#: ../../getting_started/install/deploy/deploy.md:91
#: 727fc8be76dc40d59416b20273faee13
#: c444514ba77b46468721888fe7df9e74
msgid "cp .env.template .env"
msgstr "cp .env.template .env"
#: ../../getting_started/install/deploy/deploy.md:94
#: 65a5307640484800b2d667585e92b340
#: 1514e937757e461189b369da73884a6c
msgid ""
"You can configure basic parameters in the .env file, for example setting "
"LLM_MODEL to the model to be used"
msgstr "您可以在.env文件中配置基本参数例如将LLM_MODEL设置为要使用的模型。"
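The `.env` setup described above can be sketched as follows; the template contents here are fabricated just to keep the example self-contained, the real `.env.template` ships with the repository:

```shell
# Create .env from the template and point it at the model to use
printf 'LLM_MODEL=vicuna-13b-v1.5\n' > .env.template   # stand-in template
cp .env.template .env
grep LLM_MODEL .env
```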
#: ../../getting_started/install/deploy/deploy.md:96
#: ab83b0d6663441e588ceb52bb2e5934c
#: 4643cdf76bd947fdb86fc4691b98935c
msgid ""
"([Vicuna-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) based on "
"llama-2 has been released, we recommend you set `LLM_MODEL=vicuna-"
@@ -313,45 +313,50 @@ msgstr ""
"目前Vicuna-v1.5模型(基于llama2)已经开源了我们推荐你使用这个模型通过设置LLM_MODEL=vicuna-13b-v1.5"
#: ../../getting_started/install/deploy/deploy.md:98
#: 4ed331ceacb84a339a7e0038029e356e
#: acf91810f12b4ad0bd830299eb24850f
msgid "3. Run"
msgstr "3. Run"
#: ../../getting_started/install/deploy/deploy.md:100
#: e674fe7a6a9542dfae6e76f8c586cb04
#: ea82d67451724c2399f8903ea3c52dff
msgid "**(Optional) load examples into SQLite**"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:105
#: b2de940ecd9444d2ab9ebf8762565fb6
#: a00987ec21364389b7feec58b878c2a1
msgid "On windows platform:"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:110
#: db5c000e6abe4e1cb94e6f4f14247eb7
msgid "1.Run db-gpt server"
msgstr "1.Run db-gpt server"
#: ../../getting_started/install/deploy/deploy.md:111
#: 929d863e28eb4c5b8bd4c53956d3bc76
#: ../../getting_started/install/deploy/deploy.md:116
#: dbeecff230174132b85d1d4549d3c07e
msgid "Open http://localhost:5000 with your browser to see the product."
msgstr "打开浏览器访问http://localhost:5000"
#: ../../getting_started/install/deploy/deploy.md:114
#: 43dd8b4a017f448f9be4f5432a083c08
#: ../../getting_started/install/deploy/deploy.md:119
#: 22d6321e6226472e878a95d3c8a9aad8
msgid "If you want to access an external LLM service, you need to"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:116
#: 452a8bbc3e7f43e9a89f244ab0910fd6
#: ../../getting_started/install/deploy/deploy.md:121
#: 561dfe9a864540d6ac582f0977b2c9ad
msgid ""
"1.set the variables LLM_MODEL=YOUR_MODEL_NAME, "
"MODEL_SERVER=YOUR_MODEL_SERVER (eg: http://localhost:5000) in the .env "
"file."
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:118
#: 216350f67d1a4056afb1c0277dd46a0c
#: ../../getting_started/install/deploy/deploy.md:123
#: 55ceca48e40147a99ab4d23392349156
msgid "2.execute dbgpt_server.py in light mode"
msgstr ""
#: ../../getting_started/install/deploy/deploy.md:121
#: fc9c90ca4d974ee1ade9e972320debba
#: ../../getting_started/install/deploy/deploy.md:126
#: 02d42956a2734c739ad1cb9ce59142ce
msgid ""
"If you want to learn about dbgpt-webui, read https://github.com/csunny/DB-"
"GPT/tree/new-page-framework/datacenter"
@@ -359,55 +364,55 @@ msgstr ""
"如果你想了解web-ui, 请访问https://github.com/csunny/DB-GPT/tree/new-page-"
"framework/datacenter"
#: ../../getting_started/install/deploy/deploy.md:127
#: c0df0cdc5dea4ef3bef4d4c1f4cc52ad
#: ../../getting_started/install/deploy/deploy.md:132
#: d813eb43b97445a08e058d336249e6f6
#, fuzzy
msgid "Multiple GPUs"
msgstr "4. Multiple GPUs"
#: ../../getting_started/install/deploy/deploy.md:129
#: c3aed00fa8364e6eaab79a23cf649558
#: ../../getting_started/install/deploy/deploy.md:134
#: 0ac795f274d24de7b37f9584763e113d
msgid ""
"DB-GPT will use all available GPUs by default. You can modify the "
"setting `CUDA_VISIBLE_DEVICES=0,1` in the `.env` file to use specific "
"GPU IDs."
msgstr "DB-GPT默认使用所有可用的GPU你也可以通过在`.env`文件中设置`CUDA_VISIBLE_DEVICES=0,1`来指定GPU ID"
#: ../../getting_started/install/deploy/deploy.md:131
#: 88b7510fef5943c5b4807bc92398a604
#: ../../getting_started/install/deploy/deploy.md:136
#: 2be557e2b5414d478d375bce0474558d
msgid ""
"Optionally, you can also specify the gpu ID to use before the starting "
"command, as shown below:"
msgstr "你也可以在启动命令前指定GPU ID启动"
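Specifying the GPU IDs before the start command, as this entry describes, might look like the following (the server script path is taken from the repository layout and is an assumption):

```shell
# Restrict DB-GPT to GPUs 0 and 1 for this run only
CUDA_VISIBLE_DEVICES=0,1 python pilot/server/dbgpt_server.py
```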
#: ../../getting_started/install/deploy/deploy.md:141
#: 2cf93d3291bd4f56a3677e607f6185e7
#: ../../getting_started/install/deploy/deploy.md:146
#: 222f1ebb5cb64675a0c319552d14303e
msgid ""
"You can modify the setting `MAX_GPU_MEMORY=xxGib` in `.env` file to "
"configure the maximum memory used by each GPU."
msgstr "同时你可以通过在.env文件设置`MAX_GPU_MEMORY=xxGib`修改每个GPU的最大使用显存"
#: ../../getting_started/install/deploy/deploy.md:143
#: 00aac35bec094f99bd1f2f5f344cd3f5
#: ../../getting_started/install/deploy/deploy.md:148
#: fb92349f9fe049d5b23b9ead17caf895
#, fuzzy
msgid "Not Enough Memory"
msgstr "5. Not Enough Memory"
#: ../../getting_started/install/deploy/deploy.md:145
#: d3d4b8cd24114a24929f62bbe7bae1a2
#: ../../getting_started/install/deploy/deploy.md:150
#: 30a1105d728a474c9cd14638feab4b59
msgid "DB-GPT supported 8-bit quantization and 4-bit quantization."
msgstr "DB-GPT 支持 8-bit quantization 和 4-bit quantization."
#: ../../getting_started/install/deploy/deploy.md:147
#: eb9412b79f044aebafd59ecf3cc4f873
#: ../../getting_started/install/deploy/deploy.md:152
#: eb2e576379434bfa828c98ee374149f5
msgid ""
"You can modify the setting `QUANTIZE_8bit=True` or `QUANTIZE_4bit=True` "
"in `.env` file to use quantization(8-bit quantization is enabled by "
"default)."
msgstr "你可以通过在.env文件设置`QUANTIZE_8bit=True`或`QUANTIZE_4bit=True`来使用量化8-bit量化默认开启"
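Turning on 4-bit quantization via `.env`, as described above, is a one-line change (flag names come from the doc):

```shell
# Enable 4-bit quantization (8-bit is on by default per the doc)
echo 'QUANTIZE_4bit=True' >> .env
grep QUANTIZE .env
```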
#: ../../getting_started/install/deploy/deploy.md:149
#: b27c4982ec9b4ff0992d477be1100488
#: ../../getting_started/install/deploy/deploy.md:154
#: eeaecfd77d8546a6afc1357f9f1684bf
msgid ""
"Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit"
" quantization can run with 48 GB of VRAM."


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"POT-Creation-Date: 2023-08-29 20:50+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -20,99 +20,137 @@ msgstr ""
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/docker/docker.md:1
#: ea5d6b95dea844b89d2f5d0e8f6ebfd3
#: b1f8f6a0b8974ea09a5fe2812f31d941
msgid "Docker Install"
msgstr "Docker 安装"
#: ../../getting_started/install/docker/docker.md:4
#: c1125facb7334346a8e5b66bab892b1c
#: d9145d02ed984d45b33eb46117a2484b
msgid "Docker (Experimental)"
msgstr "Docker (Experimental)"
#: ../../getting_started/install/docker/docker.md:6
#: 33a64984d0694ec9aee272f8a7ecd4cf
#: 7d2a5e4016f543748b80dedcee36f3c6
msgid "1. Preparing docker images"
msgstr "1. 准备 docker 镜像"
#: ../../getting_started/install/docker/docker.md:12
#: 656beee32e6f4a49ad48a219910ba36c
#: ../../getting_started/install/docker/docker.md:8
#: 1cb08cc1662f45579b82f0d402c39cc3
msgid ""
"**Pull docker image from the [Eosphoros AI Docker "
"Hub](https://hub.docker.com/u/eosphorosai)**"
msgstr "**从 [Eosphoros AI Docker Hub](https://hub.docker.com/u/eosphorosai) 拉取 docker 镜像**"
#: ../../getting_started/install/docker/docker.md:14
#: 23781bc56d394d07927186c6cf619a91
msgid "**(Optional) Building Docker image**"
msgstr "**(可选)构建 Docker 镜像**"
#: ../../getting_started/install/docker/docker.md:20
#: 77a31390bd024d5f87f8d8ec386a23ae
msgid "Review images by listing them:"
msgstr "通过列出镜像进行查看:"
#: ../../getting_started/install/docker/docker.md:18
#: a8fd727500de480299e5bdfc86151473
#: ../../getting_started/install/docker/docker.md:26
#: e5f6c40d68f346e1bbb57a5f7ac2f10b
msgid "Output should look something like the following:"
msgstr "输出日志应该长这样:"
#: ../../getting_started/install/docker/docker.md:25
#: 965edb9fe5184571b8afc5232cfd2773
#: ../../getting_started/install/docker/docker.md:33
#: b26892bf338c484ba8ed34f09c0fda23
msgid ""
"`eosphorosai/dbgpt` is the base image, which contains the project's base "
"dependencies and a sqlite database. `eosphorosai/dbgpt-allinone` build "
"from `eosphorosai/dbgpt`, which contains a mysql database."
msgstr ""
"`eosphorosai/dbgpt` 是基础镜像, 包含项目的基础依赖和一个 sqlite 数据库; `eosphorosai/dbgpt-"
"allinone` 基于 `eosphorosai/dbgpt` 构建, 包含一个 mysql 数据库"
#: ../../getting_started/install/docker/docker.md:35
#: 7aabb767dcd8439ea7ca14fd8deccb87
msgid "You can pass some parameters to docker/build_all_images.sh."
msgstr "你可以给 docker/build_all_images.sh 传递一些参数"
#: ../../getting_started/install/docker/docker.md:33
#: a1743c21a4db468db108a60540dd4754
#: ../../getting_started/install/docker/docker.md:43
#: 36a85e7aca484e5cb0656dea8dc3568c
msgid ""
"You can execute the command `bash docker/build_all_images.sh --help` to "
"see more usage."
msgstr "可以指定命令`bash docker/build_all_images.sh --help`查看如何使用"
#: ../../getting_started/install/docker/docker.md:35
#: cb7f05675c674fcf931a6afa6fb7d24c
#: ../../getting_started/install/docker/docker.md:45
#: c0d04891ed784cd8a8403ea395c56a45
msgid "2. Run docker container"
msgstr "2. 运行 docker 容器"
#: ../../getting_started/install/docker/docker.md:37
#: 2b2d95e668ed428e97eac851c604a74c
#: ../../getting_started/install/docker/docker.md:47
#: 7d5cb8366aa849b684fe5c3805213d0d
msgid "**Run with local model and SQLite database**"
msgstr "**使用本地模型和 SQLite 数据库运行**"
#: ../../getting_started/install/docker/docker.md:52
#: ../../getting_started/install/docker/docker.md:87
#: 731540ca95004dd2bb2c3a9871ddb404 e3ab41d312ea40d492b48cf8629553fe
#: ../../getting_started/install/docker/docker.md:61
#: ../../getting_started/install/docker/docker.md:88
#: ../../getting_started/install/docker/docker.md:123
#: 2ef66d0c87cf4ab48f9bbf4473071b43 9b594d33ef9f472d9fccfe1bd07d5564
#: f86c8364bd9948b29224c6ef0e1d6a83
msgid "Open http://localhost:5000 with your browser to see the product."
msgstr "打开浏览器访问http://localhost:5000"
#: ../../getting_started/install/docker/docker.md:55
#: 2cbab25caaa749f5b753c58947332cb2
#: ../../getting_started/install/docker/docker.md:64
#: ../../getting_started/install/docker/docker.md:91
#: a462082e866f46a8b99a4d95e6fa5b83 d262fdeb9f9c4156898875db75997874
msgid ""
"`-e LLM_MODEL=vicuna-13b-v1.5`, means we use vicuna-13b-v1.5 as llm "
"model, see /pilot/configs/model_config.LLM_MODEL_CONFIG"
msgstr ""
"`-e LLM_MODEL=vicuna-13b-v1.5`, 指定 llm model 为 vicuna-13b-v1.5, 参见 "
"/pilot/configs/model_config.LLM_MODEL_CONFIG"
#: ../../getting_started/install/docker/docker.md:56
#: ed7e79d43ee940dca788f15d520850a3
#: ../../getting_started/install/docker/docker.md:65
#: ../../getting_started/install/docker/docker.md:92
#: c8c172873ff145aeb4f0f9cb04c209a8 f1b2e1f27cd24ed9ae233445f3fe1301
msgid ""
"`-v /data/models:/app/models`, means we mount the local model file "
"directory `/data/models` to the docker container directory `/app/models`,"
" please replace it with your model file directory."
msgstr ""
"`-v /data/models:/app/models`, 将本地模型文件目录 `/data/models` 挂载到 docker 容器目录 "
"`/app/models`, 请替换为你自己的模型文件目录"
#: ../../getting_started/install/docker/docker.md:58
#: 98e7bff5dab04979a0bb15abdf2ac1e0
#: ../../getting_started/install/docker/docker.md:67
#: ../../getting_started/install/docker/docker.md:94
#: 432463d5993a4e6eb34497390d89e891 da8df3262b834b359c080e84b25e431e
msgid "You can see log with command:"
msgstr "你可以通过以下命令查看日志:"
#: ../../getting_started/install/docker/docker.md:64
#: 7ce23cecd6f24a6b8d3e4708d5f6265d
#: ../../getting_started/install/docker/docker.md:73
#: a4963b927a87455c90a4fbb1d3814a09
msgid "**Run with local model and MySQL database**"
msgstr "**使用本地模型和 MySQL 数据库运行**"
#: ../../getting_started/install/docker/docker.md:100
#: 81ecd658f7c54070bcb47838d3a9f533
msgid "**Run with openai interface**"
msgstr "**使用 openai 接口运行**"
#: ../../getting_started/install/docker/docker.md:83
#: bdf315780f454aaf9ead6414723f34c7
#: ../../getting_started/install/docker/docker.md:119
#: 2c01b849a0fe4b4d9554a9adf9cbf8fc
msgid ""
"`-e LLM_MODEL=proxyllm`, means we use proxy llm(openai interface, "
"fastchat interface...)"
msgstr ""
"`-e LLM_MODEL=proxyllm`, 通过设置模型为第三方模型服务API 可以是openai, 也可以是fastchat "
"interface..."
#: ../../getting_started/install/docker/docker.md:84
#: 12b3e78d3ebd47288a8d081eea278b45
#: ../../getting_started/install/docker/docker.md:120
#: 262af25e0ec748e3a8b12274004624bb
msgid ""
"`-v /data/models/text2vec-large-chinese:/app/models/text2vec-large-"
"chinese`, means we mount the local text2vec model to the docker "
"container."
msgstr ""
"`-v /data/models/text2vec-large-chinese:/app/models/text2vec-large-"
"chinese`, 将本地 text2vec 模型挂载到 docker 容器, 作为知识库的 embedding 模型"