update:doc
@@ -179,6 +179,8 @@ In the .env configuration file, modify the LANGUAGE parameter to switch between
1. Place personal knowledge files or folders in the pilot/datasets directory.

We currently support many document formats: txt, pdf, md, html, doc, ppt, and url.

2. In the .env configuration, set your vector store type, e.g. VECTOR_STORE_TYPE=Chroma; Chroma and Milvus (version > 2.1) are currently supported.

3. Run the knowledge repository script in the tools directory, as sketched below.
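As a rough illustration of what that initialization step does, here is a minimal sketch assuming a hypothetical `embed_document` helper and the default Chroma store from step 2; the real script in the tools directory may use different module and function names:

```python
# Hypothetical sketch of a knowledge-initialization script; the actual
# DB-GPT tools script may differ in module names and API.
from pathlib import Path

SUPPORTED = {".txt", ".pdf", ".md", ".html", ".doc", ".ppt"}
DATASET_DIR = Path("pilot/datasets")

def embed_document(path: Path, vector_store_type: str = "Chroma") -> None:
    # Placeholder: the real script would split the document, compute
    # embeddings, and write them into the configured vector store.
    print(f"embedding {path} into {vector_store_type}")

def main() -> None:
    for path in DATASET_DIR.rglob("*"):
        if path.is_file() and path.suffix.lower() in SUPPORTED:
            embed_document(path)

if __name__ == "__main__":
    main()
```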
@@ -18,6 +18,8 @@
DB-GPT is an open-source experimental GPT project built around databases. It uses localized GPT large models to interact with your data and environment, with no risk of data leakage: 100% private and 100% secure.

[DB-GPT video introduction](https://www.bilibili.com/video/BV1SM4y1a7Nj/?buvid=551b023900b290f9497610b2155a2668&is_story_h5=false&mid=%2BVyE%2Fwau5woPcUKieCWS0A%3D%3D&p=1&plat_id=116&share_from=ugc&share_medium=iphone&share_plat=ios&share_session_id=5D08B533-82A4-4D40-9615-7826065B4574&share_source=GENERIC&share_tag=s_i×tamp=1686307943&unique_k=bhO3lgQ&up_id=31375446)
## Latest releases

- [2023/06/01]🔥 Building on the Vicuna-13B base model, task chains can now be invoked through plugins, for example creating a database from a single sentence. [Demo](./assets/dbgpt_bytebase_plugin.gif)
@@ -174,6 +176,8 @@ $ python webserver.py
1. Place personal knowledge files or folders in the pilot/datasets directory.

Currently supported document formats: txt, pdf, md, html, doc, ppt, and url.

2. In the .env file, set your vector store type via VECTOR_STORE_TYPE (default: Chroma); Chroma and Milvus are currently supported (Milvus requires MILVUS_URL and MILVUS_PORT to be set).

Note that the Milvus version must be > 2.1. An illustrative configuration follows below.
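For illustration, a Milvus-backed setup might look like the following .env excerpt; only the variable names come from the step above, and the host and port values are placeholders for your own deployment:

```
# Illustrative values only; point these at your own Milvus instance.
VECTOR_STORE_TYPE=Milvus
MILVUS_URL=127.0.0.1
MILVUS_PORT=19530
```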
@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.1.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-06-11 14:10+0800\n"
"POT-Creation-Date: 2023-06-13 11:38+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -17,17 +17,43 @@ msgstr ""
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.11.0\n"
"Generated-By: Babel 2.12.1\n"

#: ../../getting_started/tutorials.md:1 12b03941d64f4bdf96eaaeec0147a387
#: ../../getting_started/tutorials.md:1 7011a2ab0e7f45ddb1fa85b6479cc442
msgid "Tutorials"
msgstr "教程"

#: ../../getting_started/tutorials.md:4 b966c15b01f94a1e84d4b6142b8f4111
#: ../../getting_started/tutorials.md:4 960f88b9c1b64940bfa0576bab5b0314
msgid "This is a collection of DB-GPT tutorials on Medium."
msgstr "这是知乎上DB-GPT教程的集合。."

#: ../../getting_started/tutorials.md:6 869431aac3864180acb41b852d48d29e
msgid "Comming soon..."
msgstr "未完待续"

#: ../../getting_started/tutorials.md:6 3915395cc45742519bf0c607eeafc489
msgid ""
"###Introduce [What is DB-"
"GPT](https://www.youtube.com/watch?v=QszhVJerc0I) by csunny "
"(https://github.com/csunny/DB-GPT)"
msgstr "###Introduce [什么是DB-GPT](https://www.bilibili.com/video/BV1SM4y1a7Nj/?buvid=551b023900b290f9497610b2155a2668&is_story_h5=false&mid=%2BVyE%2Fwau5woPcUKieCWS0A%3D%3D&p=1&plat_id=116&share_from=ugc&share_medium=iphone&share_plat=ios&share_session_id=5D08B533-82A4-4D40-9615-7826065B4574&share_source=GENERIC&share_tag=s_i×tamp=1686307943&unique_k=bhO3lgQ&up_id=31375446) by csunny (https://github.com/csunny/DB-GPT)"

#: ../../getting_started/tutorials.md:9 e213736923574b2cb039a457d789c27c
msgid "Knowledge"
msgstr "知识库"

#: ../../getting_started/tutorials.md:11 90b5472735a644168d51c054ed882748
msgid ""
"[How to Create your own knowledge repository](https://db-"
"gpt.readthedocs.io/en/latest/modules/knownledge.html)"
msgstr "[怎么创建自己的知识库](https://db-"
"gpt.readthedocs.io/en/latest/modules/knownledge.html)"

#: ../../getting_started/tutorials.md:13 6a851e1e88ea4bcbaf7ee742a12224ef
msgid "[Add new Knowledge demonstration](../../assets/new_knownledge_en.gif)"
msgstr "[新增知识库演示](../../assets/new_knownledge_en.gif)"

#: ../../getting_started/tutorials.md:15 4487ef393e004e7c936f5104727212a4
msgid "DB Plugins"
msgstr "DB Plugins"

#: ../../getting_started/tutorials.md:16 ee5decd8441d40ae8a240a19c1a5a74a
msgid "[db plugins demonstration](../../assets/auto_sql_en.gif)"
msgstr "[db plugins 演示](../../assets/auto_sql_en.gif)"
@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.1.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-06-11 14:10+0800\n"
"POT-Creation-Date: 2023-06-13 11:38+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -17,13 +17,13 @@ msgstr ""
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.11.0\n"
"Generated-By: Babel 2.12.1\n"

#: ../../modules/llms.md:1 9c05a511436b4a408e2d1acd2f2568e7
#: ../../modules/llms.md:1 34386f3fecba48fbbd86718283ba593c
msgid "LLMs"
msgstr "大语言模型"

#: ../../modules/llms.md:3 c6549cbde17e42e596470a537286cedb
#: ../../modules/llms.md:3 241b39ad980f4cfd90a7f0fdae05a1d2
#, python-format
msgid ""
"In the underlying large model integration, we have designed an open "
@@ -34,23 +34,64 @@ msgid ""
" of 85% or higher. We use higher standards to select models, hoping to "
"save users the cumbersome testing and evaluation process in the process "
"of use."
msgstr "在底层大模型接入中,我们设计了开放的接口,支持对接多种大模型。同时对于接入模型的效果,"
"我们有非常严格的把控与评审机制。对大模型能力上与ChatGPT对比,在准确率上需要满足85%"
"以上的能力对齐。我们用更高的标准筛选模型,是期望在用户使用过程中,可以省去前面繁琐的测试评估环节。"
msgstr "在底层大模型接入中,我们设计了开放的接口,支持对接多种大模型。同时对于接入模型的效果,我们有非常严格的把控与评审机制。对大模型能力上与ChatGPT对比,在准确率上需要满足85%以上的能力对齐。我们用更高的标准筛选模型,是期望在用户使用过程中,可以省去前面繁琐的测试评估环节。"

#: ../../modules/llms.md:5 1b18ef91924442f7ab7a117aec6122d5
#: ../../modules/llms.md:5 25175e87a62e41bca86798eb783cefd6
msgid "Multi LLMs Usage"
msgstr "多模型使用"

#: ../../modules/llms.md:6 b14256f1768d45ef929be664b8afb31e
#: ../../modules/llms.md:6 8c35341e9ca94202ba779567813f9973
msgid ""
"To use multiple models, modify the LLM_MODEL parameter in the .env "
"configuration file to switch between the models."
msgstr "如果要使用不同的模型,请修改.env配置文件中的LLM_MODEL参数以在模型之间切换。"

#: ../../modules/llms.md:8 42cbe90a1a524d8381a0a743ef1a927e
#: ../../modules/llms.md:8 2edf3309a6554f39ad74e19faff09cee
msgid ""
"Notice: you can create .env file from .env.template, just use command "
"like this:"
msgstr "注意:你可以从 .env.template 创建 .env 文件。只需使用如下命令:"

#: ../../modules/llms.md:14 5fa7639ef294425e89e13b7c6617fb4b
msgid ""
"now we support models vicuna-13b, vicuna-7b, chatglm-6b, flan-t5-base, "
"guanaco-33b-merged, falcon-40b, gorilla-7b."
msgstr "现在我们支持的模型有vicuna-13b, vicuna-7b, chatglm-6b, flan-t5-base, "
"guanaco-33b-merged, falcon-40b, gorilla-7b."
#: ../../modules/llms.md:16 96c9a5ad00264bd2a07bdbdec87e471e
msgid ""
"DB-GPT provides a model load adapter and chat adapter. load adapter which"
" allows you to easily adapt load different LLM models by inheriting the "
"BaseLLMAdapter. You just implement match() and loader() method."
msgstr "DB-GPT提供了多模型适配器load adapter和chat adapter.load adapter通过继承BaseLLMAdapter类, 实现match和loader方法允许你适配不同的LLM."
#: ../../modules/llms.md:18 1033714691464f50900c04c9e1bb5643
msgid "vicuna llm load adapter"
msgstr "vicuna llm load adapter"

#: ../../modules/llms.md:35 faa6432575be45bcae5deb1cc7fee3fb
msgid "chatglm load adapter"
msgstr "chatglm load adapter"

#: ../../modules/llms.md:62 61c4189cabf04e628132c2bf5f02bb50
msgid ""
"chat adapter which allows you to easily adapt chat different LLM models "
"by inheriting the BaseChatAdpter.you just implement match() and "
"get_generate_stream_func() method"
msgstr "chat adapter通过继承BaseChatAdpter允许你通过实现match和get_generate_stream_func方法允许你适配不同的LLM."
#: ../../modules/llms.md:64 407a67e4e2c6414b9cde346961d850c0
msgid "vicuna llm chat adapter"
msgstr "vicuna llm chat adapter"

#: ../../modules/llms.md:76 53a55238cd90406db58c50dc64465195
msgid "chatglm llm chat adapter"
msgstr "chatglm llm chat adapter"

#: ../../modules/llms.md:89 b0c5ff72c05e40b3b301d6b81205fe63
msgid ""
"if you want to integrate your own model, just need to inheriting "
"BaseLLMAdaper and BaseChatAdpter and implement the methods"
msgstr "如果你想集成自己的模型,只需要继承BaseLLMAdaper和BaseChatAdpter类,然后实现里面的方法即可"
@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.1.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-06-11 14:10+0800\n"
"POT-Creation-Date: 2023-06-13 11:38+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -17,13 +17,13 @@ msgstr ""
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.11.0\n"
"Generated-By: Babel 2.12.1\n"

#: ../../use_cases/knownledge_based_qa.md:1 a03c7a5aa5cc4a3e9bc7bd3734d47176
#: ../../use_cases/knownledge_based_qa.md:1 ddfe412b92e14324bdc11ffe58114e5f
msgid "Knownledge based qa"
msgstr "知识问答"

#: ../../use_cases/knownledge_based_qa.md:3 37607733852c4ade97c80fbcca66d573
#: ../../use_cases/knownledge_based_qa.md:3 48635316cc704a779089ff7b5cb9a836
msgid ""
"Chat with your own knowledge is a very interesting thing. In the usage "
"scenarios of this chapter, we will introduce how to build your own "
@@ -33,25 +33,26 @@ msgid ""
"base, which was introduced in the previous knowledge base module. Of "
"course, you can also call our provided knowledge embedding API to store "
"knowledge."
msgstr "用自己的知识聊天是一件很有趣的事情。在本章的使用场景中,"
"我们将介绍如何通过知识库API构建自己的知识库。首先,"
"构建知识存储目前可以通过执行“python tool/knowledge_init.py”"
"来初始化您自己的知识库的内容,这在前面的知识库模块中已经介绍过了"
"。当然,你也可以调用我们提供的知识嵌入API来存储知识。"
msgstr ""
"用自己的知识聊天是一件很有趣的事情。在本章的使用场景中,我们将介绍如何通过知识库API构建自己的知识库。首先,构建知识存储目前可以通过执行“python"
" "
"tool/knowledge_init.py”来初始化您自己的知识库的内容,这在前面的知识库模块中已经介绍过了。当然,你也可以调用我们提供的知识嵌入API来存储知识。"

#: ../../use_cases/knownledge_based_qa.md:6 ea5ad6cec29d49228c03d57d255c42fe
msgid "We currently support four document formats: txt, pdf, url, and md."
#: ../../use_cases/knownledge_based_qa.md:6 0a5c68429c9343cf8b88f4f1dddb18eb
#, fuzzy
msgid ""
"We currently support many document formats: txt, pdf, md, html, doc, ppt,"
" and url."
msgstr "“我们目前支持四种文件格式: txt, pdf, url, 和md。"

#: ../../use_cases/knownledge_based_qa.md:20 01908d4b18b345908004a251462d42b3
#: ../../use_cases/knownledge_based_qa.md:20 83f3544c06954e5cbc0cc7788f699eb1
msgid ""
"Now we currently support vector databases: Chroma (default) and Milvus. "
"You can switch between them by modifying the \"VECTOR_STORE_TYPE\" field "
"in the .env file."
msgstr "“我们目前支持向量数据库:Chroma(默认)和Milvus。"
"你可以通过修改.env文件中的“VECTOR_STORE_TYPE”参数在它们之间切换。"
msgstr "“我们目前支持向量数据库:Chroma(默认)和Milvus。你可以通过修改.env文件中的“VECTOR_STORE_TYPE”参数在它们之间切换。"

#: ../../use_cases/knownledge_based_qa.md:31 f37d80faa3f84c8cb176a59f4ff8140c
#: ../../use_cases/knownledge_based_qa.md:31 ac12f26b81384fc4bf44ccce1c0d86b4
msgid "Below is an example of using the knowledge base API to query knowledge:"
msgstr "下面是一个使用知识库API进行查询的例子:"
@@ -10,6 +10,9 @@ As the knowledge base is currently the most significant user demand scenario, we
1. Place personal knowledge files or folders in the pilot/datasets directory.

We currently support many document formats: txt, pdf, md, html, doc, ppt, and url.

2. Update your .env and set your vector store type, e.g. VECTOR_STORE_TYPE=Chroma (only Chroma and Milvus are currently supported; if you set Milvus, also set MILVUS_URL and MILVUS_PORT).