Mirror of https://github.com/csunny/DB-GPT.git (synced 2025-07-30 15:21:02 +00:00)
doc:add how to use proxy api (#731)
Commit 02de3d4732
@@ -77,7 +77,7 @@ macos:brew install git-lfs
```

##### Download LLM Model and Embedding Model

-If you use OpenAI llm service, see [LLM Use FAQ](https://db-gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)
+If you use OpenAI llm service, see [How to Use LLM REST API](https://db-gpt.readthedocs.io/en/latest/getting_started/faq/llm/proxyllm/proxyllm.html)

```{tip}
If you use openai or Azure or tongyi llm api service, you don't need to download llm model.
@@ -28,6 +28,7 @@ Multi LLMs Support, Supports multiple large language models, currently supportin
:name: llama_cpp
:hidden:

+./proxyllm/proxyllm.md
./llama/llama_cpp.md
./quantization/quantization.md
./vllm/vllm.md
docs/getting_started/install/llm/proxyllm/proxyllm.md (new file, 74 lines)
@@ -0,0 +1,74 @@
Proxy LLM API
==================================
DB-GPT now supports connecting to an LLM service through a proxy REST API.

The LLM REST API currently supports:
```{note}
* OpenAI
* Azure
* Aliyun tongyi
* Baidu wenxin
* Zhipu
* Baichuan
* Bard
```

### How to Integrate an LLM REST API (OpenAI, Azure, Tongyi, Wenxin, etc.)?
Update your `.env` file:
```commandline
# OpenAI
LLM_MODEL=chatgpt_proxyllm
PROXY_API_KEY={your-openai-sk}
PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions

# Azure
LLM_MODEL=chatgpt_proxyllm
PROXY_API_KEY={your-openai-sk}
PROXY_SERVER_URL=https://xx.openai.azure.com/v1/chat/completions

# Aliyun tongyi
LLM_MODEL=tongyi_proxyllm
TONGYI_PROXY_API_KEY={your-tongyi-sk}
PROXY_SERVER_URL={your_service_url}

# Baidu wenxin
LLM_MODEL=wenxin_proxyllm
PROXY_SERVER_URL={your_service_url}
WEN_XIN_MODEL_VERSION={version}
WEN_XIN_API_KEY={your-wenxin-sk}
WEN_XIN_SECRET_KEY={your-wenxin-sct}

# Zhipu
LLM_MODEL=zhipu_proxyllm
PROXY_SERVER_URL={your_service_url}
ZHIPU_MODEL_VERSION={version}
ZHIPU_PROXY_API_KEY={your-zhipu-sk}

# Baichuan
LLM_MODEL=bc_proxyllm
PROXY_SERVER_URL={your_service_url}
BAICHUN_MODEL_NAME={version}
BAICHUAN_PROXY_API_KEY={your-baichuan-sk}
BAICHUAN_PROXY_API_SECRET={your-baichuan-sct}

# Bard
LLM_MODEL=bard_proxyllm
PROXY_SERVER_URL={your_service_url}
# From https://bard.google.com/: F12 -> Application -> __Secure-1PSID cookie
BARD_PROXY_API_KEY={your-bard-token}
```
```{tip}
Make sure your .env configuration is not overwritten.
```
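
Before starting DB-GPT, it is worth checking that the proxy endpoint and key in `.env` actually respond. The snippet below is a minimal sketch, not part of DB-GPT: it assumes an OpenAI-compatible `/chat/completions` endpoint and reuses the `PROXY_API_KEY` / `PROXY_SERVER_URL` values from above; adjust the model name to whatever your proxy expects.

```python
import os

import requests

# Minimal connectivity check for an OpenAI-compatible proxy endpoint.
# PROXY_API_KEY and PROXY_SERVER_URL mirror the .env entries above.
api_key = os.environ["PROXY_API_KEY"]
url = os.environ.get("PROXY_SERVER_URL", "https://api.openai.com/v1/chat/completions")

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "gpt-3.5-turbo",  # assumed model name; use the one your proxy serves
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 8,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If this request fails, fix the key or URL in `.env` before launching DB-GPT.
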
### How to Integrate an Embedding REST API (OpenAI, Azure, etc.)?

```commandline
# OpenAI embedding model, see /pilot/model/parameter.py
EMBEDDING_MODEL=proxy_openai
proxy_openai_proxy_server_url=https://api.openai.com/v1
proxy_openai_proxy_api_key={your-openai-sk}
proxy_openai_proxy_backend=text-embedding-ada-002
```
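
The embedding proxy can be verified the same way. This is a minimal sketch, assuming an OpenAI-compatible `/embeddings` endpoint; the variable names simply mirror the `proxy_openai_*` entries shown above.

```python
import os

import requests

# Minimal check for the OpenAI-compatible embeddings endpoint configured above.
base_url = os.environ.get("proxy_openai_proxy_server_url", "https://api.openai.com/v1")
api_key = os.environ["proxy_openai_proxy_api_key"]
backend = os.environ.get("proxy_openai_proxy_backend", "text-embedding-ada-002")

resp = requests.post(
    f"{base_url}/embeddings",
    headers={"Authorization": f"Bearer {api_key}"},
    json={"model": backend, "input": ["hello DB-GPT"]},
    timeout=30,
)
resp.raise_for_status()
embedding = resp.json()["data"][0]["embedding"]
print(f"Embedding dimension: {len(embedding)}")
```

A successful call returns one vector per input string (1536 dimensions for text-embedding-ada-002).
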
@@ -8,7 +8,7 @@ msgid ""
|
||||
msgstr ""
|
||||
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
|
||||
"Report-Msgid-Bugs-To: \n"
|
||||
"POT-Creation-Date: 2023-10-20 22:29+0800\n"
|
||||
"POT-Creation-Date: 2023-10-26 00:03+0800\n"
|
||||
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
|
||||
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
|
||||
"Language: zh_CN\n"
|
||||
@@ -20,47 +20,47 @@ msgstr ""
|
||||
"Generated-By: Babel 2.12.1\n"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:1
|
||||
#: 7bcf028ff0884ea88f25b7e2c9608153
|
||||
#: c9d9195862204bb9b526d728b1527a98
|
||||
msgid "Installation From Source"
|
||||
msgstr "源码安装"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:3
|
||||
#: 61f0b1135c84423bbaeb5f9f0942ad7d
|
||||
#: e462f24ec27645c3afd23866fdeea761
|
||||
msgid ""
|
||||
"This tutorial gives you a quick walkthrough about use DB-GPT with you "
|
||||
"environment and data."
|
||||
msgstr "本教程为您提供了关于如何使用DB-GPT的使用指南。"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:5
|
||||
#: d7622cd5f69f4a32b3c8e979c6b9f601
|
||||
#: 065a4cf91565437cbad46726e5aee89c
|
||||
msgid "Installation"
|
||||
msgstr "安装"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:7
|
||||
#: 4368072b6384496ebeaff3c09ca2f888
|
||||
#: 07ebe19dbb5040419c6016258d975904
|
||||
msgid "To get started, install DB-GPT with the following steps."
|
||||
msgstr "请按照以下步骤安装DB-GPT"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:9
|
||||
#: 0dfdf8ac6e314fe7b624a685d9beebd5
|
||||
#: 1a552a8bb4fe481ba7695e1a2f8985f8
|
||||
msgid "1. Hardware Requirements"
|
||||
msgstr "1. 硬件要求"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:10
|
||||
#: cff920f8732f4f1da3063ec2bc099271
|
||||
#: bd292acafdb74b99a570c4a8e126df5d
|
||||
msgid ""
|
||||
"DB-GPT can be deployed on servers with low hardware requirements or on "
|
||||
"servers with high hardware requirements."
|
||||
msgstr "DB-GPT可以部署在对硬件要求不高的服务器,也可以部署在对硬件要求高的服务器"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:12
|
||||
#: 8e3818824d6146c6b265731c277fbd0b
|
||||
#: 913c8d0630f2460997fb856b81967903
|
||||
#, fuzzy
|
||||
msgid "Low hardware requirements"
|
||||
msgstr "1. 硬件要求"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:13
|
||||
#: ca95d66526994173ac1fea20bdea5d67
|
||||
#: 70ca521385c642049789e14ab61bc46b
|
||||
msgid ""
|
||||
"The low hardware requirements mode is suitable for integrating with "
|
||||
"third-party LLM services' APIs, such as OpenAI, Tongyi, Wenxin, or "
|
||||
@@ -68,23 +68,23 @@ msgid ""
|
||||
msgstr "Low hardware requirements模式适用于对接第三方模型服务的api,比如OpenAI, 通义千问, 文心.cpp。"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:15
|
||||
#: 83fc53cc1b4248139f69f490b859ad8d
|
||||
#: 7aacaa2505c447bfa3d0ef6418ae73d2
|
||||
msgid "DB-GPT provides set proxy api to support LLM api."
|
||||
msgstr "DB-GPT可以通过设置proxy api来支持第三方大模型服务"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:17
|
||||
#: 418a9f24eafc4571b74d86c3f1e57a2d
|
||||
#: 6b5a9f7d61d54a559363a9a5d270a580
|
||||
msgid "As our project has the ability to achieve ChatGPT performance of over 85%,"
|
||||
msgstr "由于我们的项目有能力达到85%以上的ChatGPT性能"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:19
|
||||
#: 6f85149ab0024cc99e43804206a595ed
|
||||
#: cf4b1f1115c041cbafb15af61946378c
|
||||
#, fuzzy
|
||||
msgid "High hardware requirements"
|
||||
msgstr "1. 硬件要求"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:20
|
||||
#: 31635ffff5084814a14deb3220dd2c17
|
||||
#: dae1e9a698144a919dc3740cd676eb81
|
||||
#, fuzzy
|
||||
msgid ""
|
||||
"The high hardware requirements mode is suitable for independently "
|
||||
@@ -98,67 +98,67 @@ msgstr ""
|
||||
"chatglm,vicuna等私有大模型所以对硬件有一定的要求。但总体来说,我们在消费级的显卡上即可完成项目的部署使用,具体部署的硬件说明如下:"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: d806b90be1614ad3b2e06c92f4b17e5c
|
||||
#: a6457364eccd49c99dc6a020a9aa5185
|
||||
msgid "GPU"
|
||||
msgstr "GPU"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 4b02f41145484389ace0b547384ac269 bbba2ff3fab94482a1761264264deef9
|
||||
#: 2d737c43c9fd45efbaf1e4204227ab51 bc0aa12f56dd4d26af74d7c10187fc0c
|
||||
msgid "VRAM Size"
|
||||
msgstr "显存"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 0ea63c2dcc0e43858a61e01d59ad09f9
|
||||
#: 8bfd4e58a63a42858b6be2d0ce11b2fa
|
||||
msgid "Performance"
|
||||
msgstr "Performance"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 6521683eb91e450c928a72688550a63d
|
||||
#: 9ff11248bfdc43e18e6c29ed95d4f807
|
||||
msgid "RTX 4090"
|
||||
msgstr "RTX 4090"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: bb6340c9cdc048fbb0ed55defc1aaeb6 d991b39845ee404198e1a1e35cc416f3
|
||||
#: 229e141d0a0e4d558b11c204654f36a9 94a76fe065164a67883a79f75d10139c
|
||||
msgid "24 GB"
|
||||
msgstr "24 GB"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 4134d3a89d364e33b2bdf1c7667e4755
|
||||
#: 2edf99edf84e43ea8a5dce2a5ad0056a
|
||||
msgid "Smooth conversation inference"
|
||||
msgstr "丝滑的对话体验"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 096ff425ac7646a990a7133961c6e6af
|
||||
#: df67002e7eb84939a1f452acc88a1fa2
|
||||
msgid "RTX 3090"
|
||||
msgstr "RTX 3090"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: ecf670cdbec3493f804e6a785a83c608
|
||||
#: 333303d5e7054d2697f11bb2b53e92ec
|
||||
msgid "Smooth conversation inference, better than V100"
|
||||
msgstr "丝滑的对话体验,性能好于V100"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 837b14e0a3d243bda0df7ab35b70b7e7
|
||||
#: acfb9e152ec74bacb715cd758a9be964
|
||||
msgid "V100"
|
||||
msgstr "V100"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 3b20a087c8e342c89ccb807ffc3817c2 b8b6b45253084436a5893896b35a2bd5
|
||||
#: 1127ebc45a1b4fdb828f13d55f99ce79 ed6dd464a7f640cd866712eb7f4d4b1b
|
||||
msgid "16 GB"
|
||||
msgstr "16 GB"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 772e18bb0ace4f7ea68b51bfc05816ce 9351389a1fac479cbe67b1f8c2c37de5
|
||||
#: 0b6ef92756204be6984f39ee1b95d423 a5aad9380cd742c59934e5ead433a22e
|
||||
msgid "Conversation inference possible, noticeable stutter"
|
||||
msgstr "Conversation inference possible, noticeable stutter"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: aadb62bf48bb49d99a714bcdf3092260
|
||||
#: 7a5a394f02b04cdcb3a9785cfa0cfc7c
|
||||
msgid "T4"
|
||||
msgstr "T4"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:30
|
||||
#: 4de80d9fcf34470bae806d829836b7d7
|
||||
#: 055b00b30901485a841d958a63750341
|
||||
#, fuzzy
|
||||
msgid ""
|
||||
"If your VRAM Size is not enough, DB-GPT supported 8-bit quantization and "
|
||||
@@ -166,109 +166,109 @@ msgid ""
|
||||
msgstr "如果你的显存不够,DB-GPT支持8-bit和4-bit量化版本"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:32
|
||||
#: 00d81cbf48b549f3b9128d3840d01b2e
|
||||
#: 34175af61e2e4ecc9e56180b8872a30b
|
||||
msgid ""
|
||||
"Here are some of the VRAM size usage of the models we tested in some "
|
||||
"common scenarios."
|
||||
msgstr "这里是量化版本的相关说明"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: dc346f2bca794bb7ae34b330e82ccbcf
|
||||
#: 1d1adf0f341b4ed7b47887c5923fbe08
|
||||
msgid "Model"
|
||||
msgstr "Model"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 8de6cd40de78460ba774650466f8df26
|
||||
#: 2e36f6ffdb084de3ae3d1568af60cecc
|
||||
msgid "Quantize"
|
||||
msgstr "Quantize"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 3e412b8f4852482ab07a0f546e37ae7f f30054e0558b41a192cc9a2462b299ec
|
||||
#: 54df9f9813274db482a683c001003e86 61380cdbc434467fbf9cb7cb1efe49b7
|
||||
msgid "vicuna-7b-v1.5"
|
||||
msgstr "vicuna-7b-v1.5"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 14358fa40cf94614acf39a803987631f 2a3f52b26b444783be04ffa795246a03
|
||||
#: 3956734b19aa44c3be08d56348b47a38 751034ca7d00447895fda1d9b8a7364f
|
||||
#: a66d16e5424a42a3a1309dfb8ffc33f9 b8ebce0a9e7e481da5f16214f955665d
|
||||
#: f533b3f37e6f4594aec5e0f59f241683
|
||||
#: 4390ae926c094187bf2905361a5d6cff 467d31fb940c4daf9b2afec6bb7ea7f0
|
||||
#: 525896b27182457486018a348c068c01 6788cb202dc044e59e6ae42936b1aca8
|
||||
#: 74e07aaf7fa7461e824c653129240ad1 83b491c17b434fdb910b92c4cbc007a0
|
||||
#: c14cc4d0ac384fc699a196bf62573f01
|
||||
msgid "4-bit"
|
||||
msgstr "4-bit"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 9eac7e866ebe45169c64a952c363ce43 aa56722db3014abd9022067ed5fc4f98
|
||||
#: af4df898fb47471fbb487fcf6e2d40d6
|
||||
#: 25bbb9742e604003aeb2da782e50fa46 334153c71b624064b4f50342ab79c30e
|
||||
#: 6039634dfcfc4ad3a3298938534ef1e4
|
||||
msgid "8 GB"
|
||||
msgstr "8 GB"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 211aaf2e234d46108b5eee5006d5f4bb 40214b2f71ce452db3501ea9d81a0c8a
|
||||
#: 72fcd5e0634e48d79813f1037e6acb45 7756b67568cc40c4b73079b26e79c85d
|
||||
#: 8c21f8e90154407682c093a46b93939d ad937c14bbcd41ac92a3dbbdb8339eed
|
||||
#: d1e7ee217dd64b15b934456c3a72c450
|
||||
#: 0d31a4a235b949eabd8e98c2dcb6d5ff 15bcdb7aedf1497fb0790c1bf3e5ee47
|
||||
#: 3125bc143cb0476db0a07b7788bc9928 be257ed0772d448b95873db9e044a713
|
||||
#: cf4e3ed197a84dba87a60cc5fc70f8ac eb399647bafe418c90212787e695afbb
|
||||
#: ee40d24463a143ca8768da7423d25b9b
|
||||
msgid "8-bit"
|
||||
msgstr "8-bit"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 4812504dfb9a4b25a5db773d9a08f34f 76ae2407ba4e4013953b9f243d9a5d92
|
||||
#: 927054919de047fd8a83df67e1400622 9773e73eb89847f8a85a2dc55b562916
|
||||
#: ce33d0c3792f43398fc7e2694653d8fc d3dc0d4cceb24d2b9dc5c7120fbed94e
|
||||
#: 21efb501692440cf80fd29401e1f0afa 246e4fc9b5f44f42a36bb49fd65c08f2
|
||||
#: 9adc65ad9d9344efa83f3507fb6ed2fd b00fd0d15bf441f48f9ea75d8877d4fd
|
||||
#: d92aa46491b442a69709b9b8d7322c2e f8499e166ca04bc2a84bfa2d42cee890
|
||||
msgid "12 GB"
|
||||
msgstr "12 GB"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 83e6d6ba1aa74946858f0162424752ab b6b99caeaeff44c488e3e819ed337074
|
||||
#: 39950e71d2884d2c8ce4dbc9b0cb4491 ab3349160edf447ea67b15d7a056cc6e
|
||||
msgid "vicuna-13b-v1.5"
|
||||
msgstr "vicuna-13b-v1.5"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 492c5f0d560946fe879f6c339975ba37 970063dda21e4dd8be6f89a3c87832a5
|
||||
#: a66bad6054b24dd99b370312bc8b6fa6
|
||||
#: 53bd397b40e449988d9fdfd201030387 6123ea3af97f4f3dafc6be44ecfed416
|
||||
#: f493c256788e4a318cf57fa9340948a4
|
||||
msgid "20 GB"
|
||||
msgstr "20 GB"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: a75f3405085441d8920db49f159588d2 cf635931c55846aea4cbccd92e4f0377
|
||||
#: 11903db1ac944b60a40c92f752c1f3dc 76831b0fad014348a081f3f64260c73e
|
||||
msgid "llama-2-7b"
|
||||
msgstr "llama-2-7b"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 61d632df8c5149b393d03ac802141125 bc98c895d457495ea26e3537de83b432
|
||||
#: 2576bd18e1714557b33420e5ed56a95b 997fe3a3a46e4891b39171b81386a601
|
||||
msgid "llama-2-13b"
|
||||
msgstr "llama-2-13b"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 3ccb1f6d8a924aeeacb5373edc168103 9ecce68e159a4649a8d5e69157af17a1
|
||||
#: 8a1cf1ae302c4c2d8bfbb1a666cc9ba6 9cbe2dae8bf44181b24d9806b654b80f
|
||||
msgid "llama-2-70b"
|
||||
msgstr "llama-2-70b"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: ca1da6ce08674b3daa0ab9ee0330203f
|
||||
#: d5e875e84f534aac8f2cac4eacde7ead
|
||||
msgid "48 GB"
|
||||
msgstr "48 GB"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 34d4d20e57c1410fbdcabd09a5968cdd
|
||||
#: 2e32ea59cce24ff0a96c8c152afeb09b
|
||||
msgid "80 GB"
|
||||
msgstr "80 GB"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 4ec2213171054c96ac9cd46e259ce7bf 68a1752f76a54287a73e82724723ea75
|
||||
#: 176feab13e554986a1e09ce3c1d060ee edcf6676d3274fd3a578d337894467bf
|
||||
msgid "baichuan-7b"
|
||||
msgstr "baichuan-7b"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md
|
||||
#: 103b020a575744ad964c60a367aa1651 c659a720a1024869b09d7cc161bcd8a2
|
||||
#: 3da976a65887483abc029acb7e7640d4 837745bfb1ac41a99604f55e94fd4099
|
||||
msgid "baichuan-13b"
|
||||
msgstr "baichuan-13b"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:51
|
||||
#: 2259a008d0e14f9e8d1e1d9234b97298
|
||||
#: 3c573a548e6a4767b4acc2e4d2dbd20c
|
||||
msgid "2. Install"
|
||||
msgstr "2. Install"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:56
|
||||
#: 875c7d8e32574552a48199577c78ccdd
|
||||
#: 40d61f4ee30d4d70a45a0ffb97001cd6
|
||||
msgid ""
|
||||
"We use Sqlite as default database, so there is no need for database "
|
||||
"installation. If you choose to connect to other databases, you can "
|
||||
@@ -283,7 +283,7 @@ msgstr ""
|
||||
" Miniconda](https://docs.conda.io/en/latest/miniconda.html)"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:67
|
||||
#: c03e3290e1144320a138d015171ac596
|
||||
#: 69c950c69b204b94a768d1f023cc978a
|
||||
msgid ""
|
||||
"Once the environment is installed, we have to create a new folder "
|
||||
"\"models\" in the DB-GPT project, and then we can put all the models "
|
||||
@@ -291,49 +291,50 @@ msgid ""
|
||||
msgstr "如果你已经安装好了环境需要创建models, 然后到huggingface官网下载模型"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:70
|
||||
#: 933401ac909741ada4acf6bcd4142ed6
|
||||
#: 2ab1a4d9de8e412f80bd04ce6b40cdf6
|
||||
msgid "Notice make sure you have install git-lfs"
|
||||
msgstr "注意确认你已经安装了git-lfs"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:72
|
||||
#: e8e4886a83dd402c85fe3fa989322991
|
||||
#: e718db01f5404485a05857b8403df93c
|
||||
msgid "centos:yum install git-lfs"
|
||||
msgstr ""
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:74
|
||||
#: 5ead7e98bddf4fa4845c3d3955f18054
|
||||
#: e98570774d28430295a094cc5f5220ae
|
||||
msgid "ubuntu:apt-get install git-lfs"
|
||||
msgstr ""
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:76
|
||||
#: 08acfaaaa2544182a59df54cdf61cd84
|
||||
#: 34c33b64b8de4b00be92b525ad038f23
|
||||
msgid "macos:brew install git-lfs"
|
||||
msgstr ""
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:78
|
||||
#: 312ad44170c34531865576067c58701a
|
||||
#: 34bd5fbc8a8c4898a4caa7d630137061
|
||||
msgid "Download LLM Model and Embedding Model"
|
||||
msgstr "下载LLM模型和Embedding模型"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:80
|
||||
#: de54793643434528a417011d2919b2c4
|
||||
#: a67d0e365e684a3bbb5e8618c98884ce
|
||||
#, fuzzy
|
||||
msgid ""
|
||||
"If you use OpenAI llm service, see [LLM Use FAQ](https://db-"
|
||||
"gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)"
|
||||
"If you use OpenAI llm service, see [How to Use LLM REST API](https://db-"
|
||||
"gpt.readthedocs.io/en/latest/getting_started/faq/llm/proxyllm/proxyllm.html)"
|
||||
msgstr ""
|
||||
"如果想使用openai大模型服务, 可以参考[LLM Use FAQ](https://db-"
|
||||
"gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:83
|
||||
#: 50ec1eb7c56a46ac8fbf911c7adc9b0e
|
||||
#: f1a43cd2eba3458c863bfc77cf13ac1f
|
||||
#, fuzzy
|
||||
msgid ""
|
||||
"If you use openai or Azure or tongyi llm api service, you don't need to "
|
||||
"If you use openai or Axzure or tongyi llm api service, you don't need to "
|
||||
"download llm model."
|
||||
msgstr "如果你想通过openai or Azure or tongyi第三方api访问模型服务,你可以不用下载llm模型"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:103
|
||||
#: 03950b2a480149388fb7b88f7d251ef5
|
||||
#: 9f46746726ec4791b6963a2e2c4376c4
|
||||
msgid ""
|
||||
"The model files are large and will take a long time to download. During "
|
||||
"the download, let's configure the .env file, which needs to be copied and"
|
||||
@@ -341,19 +342,19 @@ msgid ""
|
||||
msgstr "模型文件很大,需要很长时间才能下载。在下载过程中,让我们配置.env文件,它需要从。env.template中复制和创建。"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:106
|
||||
#: 441c4333216a402a84fd52f8e56fc81b
|
||||
#: 53e713c8a9664d92a6e4055789c4a7da
|
||||
msgid "cp .env.template .env"
|
||||
msgstr "cp .env.template .env"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:109
|
||||
#: 4eac3d98df6a4e788234ff0ec1ffd03e
|
||||
#: 21c38ebc721242478e7ad4be4d672dc6
|
||||
msgid ""
|
||||
"You can configure basic parameters in the .env file, for example setting "
|
||||
"LLM_MODEL to the model to be used"
|
||||
msgstr "您可以在.env文件中配置基本参数,例如将LLM_MODEL设置为要使用的模型。"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:111
|
||||
#: a36bd6d6236b4c74b161a935ae792b91
|
||||
#: 6ca180c2142b455ebcc3a8b37f3bc25a
|
||||
msgid ""
|
||||
"([Vicuna-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) based on "
|
||||
"llama-2 has been released, we recommend you set `LLM_MODEL=vicuna-"
|
||||
@@ -364,39 +365,39 @@ msgstr ""
|
||||
"目前Vicuna-v1.5模型(基于llama2)已经开源了,我们推荐你使用这个模型通过设置LLM_MODEL=vicuna-13b-v1.5"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:113
|
||||
#: 78334cbf0c364eb3bc41a2a6c55ebb0d
|
||||
#: 1a346658ec4b4074b3458fc806538aae
|
||||
msgid "3. Run"
|
||||
msgstr "3. Run"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:115
|
||||
#: 6d5ad6eb067d4e9fa1c574b7b706233f
|
||||
#: 70f6300673834c9eb3e80145bb5bfcb8
|
||||
msgid "**(Optional) load examples into SQLite**"
|
||||
msgstr ""
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:120
|
||||
#: 07219a4ed3c44e349314ae04ebdf58e1
|
||||
#: c10f16bd300e473a950617b814d743a0
|
||||
msgid "On windows platform:"
|
||||
msgstr ""
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:125
|
||||
#: 819be2bb22044440ae00c2e7687ea249
|
||||
#: 64213616228643a7b0413805061b7a12
|
||||
#, fuzzy
|
||||
msgid "Run db-gpt server"
|
||||
msgstr "1.Run db-gpt server"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:131
|
||||
#: 5ba6d7c9bf9146c797036ab4b9b4f59e
|
||||
#: cb3248ebbade45bfba1366a66e4220f6
|
||||
msgid "Open http://localhost:5000 with your browser to see the product."
|
||||
msgstr "打开浏览器访问http://localhost:5000"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:134
|
||||
#: be3a2729ef3b4742a403017b31bda7e3
|
||||
#: 0499acfd344b4d70a0cdce31f245971c
|
||||
#, fuzzy
|
||||
msgid "Multiple GPUs"
|
||||
msgstr "4. Multiple GPUs"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:136
|
||||
#: 00ffa1cc145e4afa830c592a629246f9
|
||||
#: d56b6c4aa795428aa64e3740401645d3
|
||||
msgid ""
|
||||
"DB-GPT will use all available gpu by default. And you can modify the "
|
||||
"setting `CUDA_VISIBLE_DEVICES=0,1` in `.env` file to use the specific gpu"
|
||||
@@ -404,32 +405,32 @@ msgid ""
|
||||
msgstr "DB-GPT默认加载可利用的gpu,你也可以通过修改 在`.env`文件 `CUDA_VISIBLE_DEVICES=0,1`来指定gpu IDs"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:138
|
||||
#: bde32a5a8fea4350868be579e9ee6baa
|
||||
#: ca208283904441ab802827d94b3b9590
|
||||
msgid ""
|
||||
"Optionally, you can also specify the gpu ID to use before the starting "
|
||||
"command, as shown below:"
|
||||
msgstr "你也可以指定gpu ID启动"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:148
|
||||
#: 791ed2db2cff44c48342a7828cbd4c45
|
||||
#: 30f7c3b3d9784a4698cdd15a0c046e81
|
||||
msgid ""
|
||||
"You can modify the setting `MAX_GPU_MEMORY=xxGib` in `.env` file to "
|
||||
"configure the maximum memory used by each GPU."
|
||||
msgstr "同时你可以通过在.env文件设置`MAX_GPU_MEMORY=xxGib`修改每个GPU的最大使用内存"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:150
|
||||
#: f86b37c8943e4f5595610706e75b4add
|
||||
#: ab02884bb7f24e59b21b797501c3794b
|
||||
#, fuzzy
|
||||
msgid "Not Enough Memory"
|
||||
msgstr "5. Not Enough Memory"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:152
|
||||
#: 8a7bd02cbeca497aa8eecaaf1910a6ad
|
||||
#: 5b246b0456f8448bb0207312a17d40c5
|
||||
msgid "DB-GPT supported 8-bit quantization and 4-bit quantization."
|
||||
msgstr "DB-GPT 支持 8-bit quantization 和 4-bit quantization."
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:154
|
||||
#: 5ad49b99fe774ba79c50de0cd694807c
|
||||
#: a002dc2572d34b0f9b3ca6ac3b3b6147
|
||||
msgid ""
|
||||
"You can modify the setting `QUANTIZE_8bit=True` or `QUANTIZE_4bit=True` "
|
||||
"in `.env` file to use quantization(8-bit quantization is enabled by "
|
||||
@@ -437,7 +438,7 @@ msgid ""
|
||||
msgstr "你可以通过在.env文件设置`QUANTIZE_8bit=True` or `QUANTIZE_4bit=True`"
|
||||
|
||||
#: ../../getting_started/install/deploy/deploy.md:156
|
||||
#: b9c80e92137447da91eb944443144c69
|
||||
#: 16d429bd42a44b02875e505273d35228
|
||||
msgid ""
|
||||
"Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit"
|
||||
" quantization can run with 48 GB of VRAM."
|
||||
|
@@ -0,0 +1,96 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.4.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-10-26 00:03+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:1
#: b006d689cfd2430da6a2b503a4f2fef3
msgid "Proxy LLM API"
msgstr "Proxy LLM API"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:3
#: b60328fb6b074edba31c34825038bbf4
msgid "Now DB-GPT supports connect LLM service through proxy rest api."
msgstr "DB-GPT支持对接第三方的LLM REST API 服务"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:5
#: 82dcc5fc9d314a6f871851c842c3b6b3
msgid "LLM rest api now supports"
msgstr "LLM REST API服务目前支持"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:7
#: 2a894db1f42544b2bdc932b50050eaf4
msgid "OpenAI"
msgstr "OpenAI"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:8
#: 52c288434b6b42a1a376f8d698d0aad1
msgid "Azure"
msgstr "Azure"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:9
#: eeca0b58cf504586b8695e433e1a4458
msgid "Aliyun tongyi"
msgstr "通义千问API"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:10
#: 7b30a85b145545f0b2d8dd3b85f98bcf
msgid "Baidu wenxin"
msgstr "百度文心API"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:11
#: b4cfeba632cb4f898564cf76d9c1551d
msgid "Zhipu"
msgstr "智谱API"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:12
#: faa92560db2b47d9b9a41bbf703fd84d
msgid "Baichuan"
msgstr "百川API"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:13
#: aba2dcc36b854b6193ababca772e1cf0
msgid "bard"
msgstr "bard API"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:17
#: f66f0bfcad2a4f428c953452d5f6963b
msgid ""
"How to Integrate LLM rest API, like OpenAI, Azure, tongyi, wenxin llm "
"api service?"
msgstr "如何集成这些LLM rest API呢"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:18
#: 01284baeb4a24bb18d48e51ad8503997
msgid "update your `.env` file"
msgstr "更新`.env`配置文件"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:62
#: f602876d957446fd8056854b6b2121a1
msgid "Make sure your .env configuration is not overwritten"
msgstr "确保文件配置不会被覆盖"

#: ../../getting_started/install/llm/proxyllm/proxyllm.md:65
#: 51cb501d1500440981b3b93f01ff36f4
msgid "How to Integrate Embedding rest API, like OpenAI, Azure api service?"
msgstr "如何集成想OpenAI Embedding rest api"

#~ msgid "Now DB-GPT support connect LLM service through proxy rest api."
#~ msgstr ""
@@ -8,7 +8,7 @@ msgid ""
|
||||
msgstr ""
|
||||
"Project-Id-Version: DB-GPT 0.3.0\n"
|
||||
"Report-Msgid-Bugs-To: \n"
|
||||
"POT-Creation-Date: 2023-08-17 21:58+0800\n"
|
||||
"POT-Creation-Date: 2023-10-25 23:56+0800\n"
|
||||
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
|
||||
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
|
||||
"Language: zh_CN\n"
|
||||
@@ -19,31 +19,31 @@ msgstr ""
|
||||
"Content-Transfer-Encoding: 8bit\n"
|
||||
"Generated-By: Babel 2.12.1\n"
|
||||
|
||||
#: ../../index.rst:34 ../../index.rst:45 3566ae9872dd4f97844f9e0680de5f5d
|
||||
#: ../../index.rst:34 ../../index.rst:45 71dd3acc56354242aad5a920c2805328
|
||||
msgid "Getting Started"
|
||||
msgstr "开始"
|
||||
|
||||
#: ../../index.rst:59 ../../index.rst:80 6845a928056940b28be8c13d766009f8
|
||||
#: ../../index.rst:60 ../../index.rst:81 0f2fc16a44b043019556f5f3e0d0e2c0
|
||||
msgid "Modules"
|
||||
msgstr "模块"
|
||||
|
||||
#: ../../index.rst:94 ../../index.rst:110 ef3bcf6f837345fca8539d51434e3c2c
|
||||
#: ../../index.rst:95 ../../index.rst:111 2624521b920a4b3b9eac3fec76635ab8
|
||||
msgid "Use Cases"
|
||||
msgstr "示例"
|
||||
|
||||
#: ../../index.rst:124 ../../index.rst:127 b10b8b49e4f5459f8bea881ffc9259d5
|
||||
#: ../../index.rst:125 ../../index.rst:128 accec2bb9c5149f184a87e03955d6b22
|
||||
msgid "Reference"
|
||||
msgstr "参考"
|
||||
|
||||
#: ../../index.rst:137 ../../index.rst:143 12e75881253a4c4383ac7364c1103348
|
||||
#: ../../index.rst:138 ../../index.rst:144 26278dabd4944d1a9f14330e83935162
|
||||
msgid "Resources"
|
||||
msgstr "资源"
|
||||
|
||||
#: ../../index.rst:7 f1a30af655744c0b8fb7197a5fc3a45b
|
||||
#: ../../index.rst:7 9277d505dda74ae0862cd09d05cf5e63
|
||||
msgid "Welcome to DB-GPT!"
|
||||
msgstr "欢迎来到DB-GPT中文文档"
|
||||
|
||||
#: ../../index.rst:8 426686c829a342798eaae9f789260621
|
||||
#: ../../index.rst:8 9fa76a01965746978a00ac411fca13a8
|
||||
msgid ""
|
||||
"As large models are released and iterated upon, they are becoming "
|
||||
"increasingly intelligent. However, in the process of using large models, "
|
||||
@@ -61,7 +61,7 @@ msgstr ""
|
||||
",我们启动了DB-"
|
||||
"GPT项目,为所有基于数据库的场景构建一个完整的私有大模型解决方案。该方案“”支持本地部署,既可应用于“独立私有环境”,又可根据业务模块进行“独立部署”和“隔离”,确保“大模型”的能力绝对私有、安全、可控。"
|
||||
|
||||
#: ../../index.rst:10 0a7fbd17ecfd48cb8e0593e35a225e1b
|
||||
#: ../../index.rst:10 b12b6f91c5664f61aa9e4d7cd500b922
|
||||
msgid ""
|
||||
"**DB-GPT** is an experimental open-source project that uses localized GPT"
|
||||
" large models to interact with your data and environment. With this "
|
||||
@@ -71,102 +71,103 @@ msgstr ""
|
||||
"DB-GPT 是一个开源的以数据库为基础的GPT实验项目,使用本地化的GPT大模型与您的数据和环境进行交互,无数据泄露风险100% 私密,100%"
|
||||
" 安全。"
|
||||
|
||||
#: ../../index.rst:12 50e02964b1e24c4fa598f820796aec61
|
||||
#: ../../index.rst:12 7032e17191394f7090141927644fb512
|
||||
msgid "**Features**"
|
||||
msgstr "特性"
|
||||
|
||||
#: ../../index.rst:13 62b74478b9b046dfa7606785939ca70e
|
||||
#: ../../index.rst:13 5a7a8e5eace34d5f9a4f779bf5122928
|
||||
msgid ""
|
||||
"Currently, we have released multiple key features, which are listed below"
|
||||
" to demonstrate our current capabilities:"
|
||||
msgstr "目前我们已经发布了多种关键的特性,这里一一列举展示一下当前发布的能力。"
|
||||
|
||||
#: ../../index.rst:15 f593159ba8dd4388bd2ba189f9efd5ea
|
||||
#: ../../index.rst:15 5d0f67aacb8b4bc893a306ccbd6a3778
|
||||
msgid "SQL language capabilities - SQL generation - SQL diagnosis"
|
||||
msgstr "SQL语言能力 - SQL生成 - SQL诊断"
|
||||
|
||||
#: ../../index.rst:19 b829655a3ef146528beb9c50538be84e
|
||||
#: ../../index.rst:19 556eaf756fec431ca5c453208292ab4f
|
||||
msgid ""
|
||||
"Private domain Q&A and data processing - Database knowledge Q&A - Data "
|
||||
"processing"
|
||||
msgstr "私有领域问答与数据处理 - 数据库知识问答 - 数据处理"
|
||||
|
||||
#: ../../index.rst:23 43e988a100a740358f1a0be1710d7960
|
||||
#: ../../index.rst:23 5148ad898ec041858eddbeaa646d3f1b
|
||||
msgid ""
|
||||
"Plugins - Support custom plugin execution tasks and natively support the "
|
||||
"Auto-GPT plugin, such as:"
|
||||
msgstr "插件模型 - 支持自定义插件执行任务,并原生支持Auto-GPT插件,例如:* SQL自动执行,获取查询结果 * 自动爬取学习知识"
|
||||
|
||||
#: ../../index.rst:26 888186524aef4fe5a0ba643d55783fd9
|
||||
#: ../../index.rst:26 34c7ff33bc1c401480603a5197ecb1c4
|
||||
msgid ""
|
||||
"Unified vector storage/indexing of knowledge base - Support for "
|
||||
"unstructured data such as PDF, Markdown, CSV, and WebURL"
|
||||
msgstr "知识库统一向量存储/索引 - 非结构化数据支持包括PDF、MarkDown、CSV、WebURL"
|
||||
|
||||
#: ../../index.rst:29 fab4f961fe1746dca3d5e369de714108
|
||||
#: ../../index.rst:29 9d7095e5b08249e6bb5c724929537e6c
|
||||
#, fuzzy
|
||||
msgid ""
|
||||
"Milti LLMs Support - Supports multiple large language models, currently "
|
||||
"Multi LLMs Support - Supports multiple large language models, currently "
|
||||
"supporting Vicuna (7b, 13b), ChatGLM-6b (int4, int8) - TODO: codegen2, "
|
||||
"codet5p"
|
||||
msgstr "多模型支持 - 支持多种大语言模型, 当前已支持Vicuna(7b,13b), ChatGLM-6b(int4, int8)"
|
||||
|
||||
#: ../../index.rst:35 7bc21f280364448da0edc046378be622
|
||||
#: ../../index.rst:35 caa368eab40e4efb953865740a3c9018
|
||||
msgid ""
|
||||
"How to get started using DB-GPT to interact with your data and "
|
||||
"environment."
|
||||
msgstr "开始使用DB-GPT与您的数据环境进行交互。"
|
||||
|
||||
#: ../../index.rst:36 5362221ecdf5427faa51df83d4a939ee
|
||||
#: ../../index.rst:36 34cecad11f8b4a3e96bfa0a31814e3d2
|
||||
#, fuzzy
|
||||
msgid "`Quickstart Guide <./getting_started/getting_started.html>`_"
|
||||
msgstr "`使用指南 <./getting_started/getting_started.html>`_"
|
||||
|
||||
#: ../../index.rst:38 e028ed5afce842fbb76a6ce825d5a8e2
|
||||
#: ../../index.rst:38 892598cdc16d45c68383033b08b7233f
|
||||
msgid "Concepts and terminology"
|
||||
msgstr "相关概念"
|
||||
|
||||
#: ../../index.rst:40 f773ac5e50054c308920c0b95a44b0cb
|
||||
#: ../../index.rst:40 887cc43a3a134aba96eb7ca11e5ca86f
|
||||
#, fuzzy
|
||||
msgid "`Concepts and Terminology <./getting_started/concepts.html>`_"
|
||||
msgstr "`相关概念 <./getting_started/concepts.html>`_"
|
||||
|
||||
#: ../../index.rst:42 afeee818ec45454da12b80161b5f1de0
|
||||
#: ../../index.rst:42 133e25c7dce046b1ab262489ecb60b4a
|
||||
msgid "Coming soon..."
|
||||
msgstr ""
|
||||
|
||||
#: ../../index.rst:44 cb4f911316234b86aad88b83d2784ad3
|
||||
#: ../../index.rst:44 a9e0812d32714a6f81ed75aa70f0c20e
|
||||
msgid "`Tutorials <.getting_started/tutorials.html>`_"
|
||||
msgstr "`教程 <.getting_started/tutorials.html>`_"
|
||||
|
||||
#: ../../index.rst:61 de365722d61442b99f44fcde8a8c9efb
|
||||
#: ../../index.rst:62 4fccfd3082174f58926a9811f39e4d96
|
||||
msgid ""
|
||||
"These modules are the core abstractions with which we can interact with "
|
||||
"data and environment smoothly."
|
||||
msgstr "这些模块是我们可以与数据和环境顺利地进行交互的核心组成。"
|
||||
|
||||
#: ../../index.rst:62 8fbb6303d3cd4fcc956815f44ef1fa8d
|
||||
#: ../../index.rst:63 ecac40207ada454e9a68356f575dbca9
|
||||
msgid ""
|
||||
"It's very important for DB-GPT, DB-GPT also provide standard, extendable "
|
||||
"interfaces."
|
||||
msgstr "DB-GPT还提供了标准的、可扩展的接口。"
|
||||
|
||||
#: ../../index.rst:64 797ac43f459b4662a5097c8cb783c4ba
|
||||
#: ../../index.rst:65 9d852bed582449e89dc13312ddf29eed
|
||||
msgid ""
|
||||
"The docs for each module contain quickstart examples, how to guides, "
|
||||
"reference docs, and conceptual guides."
|
||||
msgstr "每个模块的文档都包含快速入门的例子、操作指南、参考文档和相关概念等内容。"
|
||||
|
||||
#: ../../index.rst:66 70d7b89c1c154c65aabedbb3c94c8771
|
||||
#: ../../index.rst:67 3167446539de449aba2de694fe901bcf
|
||||
msgid "The modules are as follows"
|
||||
msgstr "组成模块如下:"
|
||||
|
||||
#: ../../index.rst:68 794d8a939e274894910fcdbb3ee52429
|
||||
#: ../../index.rst:69 442683a5f154429da87f452e49bcbb5c
|
||||
msgid ""
|
||||
"`LLMs <./modules/llms.html>`_: Supported multi models management and "
|
||||
"integrations."
|
||||
msgstr "`LLMs <./modules/llms.html>`_:基于FastChat提供大模型的运行环境。支持多模型管理和集成。 "
|
||||
|
||||
#: ../../index.rst:70 58b0d2fcac2c471494fe2b6b5b3f1b49
|
||||
#: ../../index.rst:71 3cf320b6199e4ce78235bce8b1be60a2
|
||||
msgid ""
|
||||
"`Prompts <./modules/prompts.html>`_: Prompt management, optimization, and"
|
||||
" serialization for multi database."
|
||||
@@ -174,59 +175,59 @@ msgstr ""
|
||||
"`Prompt自动生成与优化 <./modules/prompts.html>`_: 自动化生成高质量的Prompt "
|
||||
",并进行优化,提高系统的响应效率"
|
||||
|
||||
#: ../../index.rst:72 d433b62e11f64e18995bd334f93992a6
|
||||
#: ../../index.rst:73 4f29ed67ea2a4a3ca824ac8b8b33cae6
|
||||
msgid "`Plugins <./modules/plugins.html>`_: Plugins management, scheduler."
|
||||
msgstr "`Agent与插件: <./modules/plugins.html>`_:提供Agent和插件机制,使得用户可以自定义并增强系统的行为。"
|
||||
|
||||
#: ../../index.rst:74 3e7eb10f64274c07ace90b84ffc904b4
|
||||
#: ../../index.rst:75 d651f9d93bb54b898ef97407501cc6cf
|
||||
#, fuzzy
|
||||
msgid ""
|
||||
"`Knowledge <./modules/knowledge.html>`_: Knowledge management, embedding,"
|
||||
" and search."
|
||||
msgstr "`知识库能力: <./modules/knowledge.html>`_: 支持私域知识库问答能力, "
|
||||
|
||||
#: ../../index.rst:76 3af796f287f54de0869a086cfa24b568
|
||||
#: ../../index.rst:77 fd6dd2adcd844baa84602b650d89e507
|
||||
msgid ""
|
||||
"`Connections <./modules/connections.html>`_: Supported multi databases "
|
||||
"connection. management connections and interact with this."
|
||||
msgstr "`连接模块 <./modules/connections.html>`_: 用于连接不同的模块和数据源,实现数据的流转和交互 "
|
||||
|
||||
#: ../../index.rst:78 27273a4020a540f4b28e7c54ea9c9232
|
||||
#: ../../index.rst:79 7e388a9d8c044169923508ccdeb2d9a5
|
||||
#, fuzzy
|
||||
msgid "`Vector <./modules/vector.html>`_: Supported multi vector database."
|
||||
msgstr "`LLMs <./modules/llms.html>`_:基于FastChat提供大模型的运行环境。支持多模型管理和集成。 "
|
||||
|
||||
#: ../../index.rst:96 d887747669d1429f950c56131cd35a62
|
||||
#: ../../index.rst:97 9d37bf061a784d5ca92d1de33b0834f3
|
||||
msgid "Best Practices and built-in implementations for common DB-GPT use cases:"
|
||||
msgstr "DB-GPT用例的最佳实践和内置方法:"
|
||||
|
||||
#: ../../index.rst:98 63ea6a449012432baeeef975db5c3ac1
|
||||
#: ../../index.rst:99 ba264bbe31d24c7887a30cbd5442e157
|
||||
msgid ""
|
||||
"`Sql generation and diagnosis "
|
||||
"<./use_cases/sql_generation_and_diagnosis.html>`_: SQL generation and "
|
||||
"diagnosis."
|
||||
msgstr "`Sql生成和诊断 <./use_cases/sql_generation_and_diagnosis.html>`_: Sql生成和诊断。"
|
||||
|
||||
#: ../../index.rst:100 c7388a2147af48d1a6619492a3b926db
|
||||
#: ../../index.rst:101 1166f6aeba064a3990d4b0caa87db274
|
||||
msgid ""
|
||||
"`knownledge Based QA <./use_cases/knownledge_based_qa.html>`_: A "
|
||||
"important scene for user to chat with database documents, codes, bugs and"
|
||||
" schemas."
|
||||
msgstr "`知识库问答 <./use_cases/knownledge_based_qa.html>`_: 用户与数据库文档、代码和bug聊天的重要场景\""
|
||||
|
||||
#: ../../index.rst:102 f5a459f5a84241648a6a05f7ba3026c0
|
||||
#: ../../index.rst:103 b32610ada3a0440e9b029b8dffe7c79e
|
||||
msgid ""
|
||||
"`Chatbots <./use_cases/chatbots.html>`_: Language model love to chat, use"
|
||||
" multi models to chat."
|
||||
msgstr "`聊天机器人 <./use_cases/chatbots.html>`_: 使用多模型进行对话"
|
||||
|
||||
#: ../../index.rst:104 d566174db6eb4834854c00ce7295c297
|
||||
#: ../../index.rst:105 d9348b8112df4839ab14a74a42b63715
|
||||
msgid ""
|
||||
"`Querying Database Data <./use_cases/query_database_data.html>`_: Query "
|
||||
"and Analysis data from databases and give charts."
|
||||
msgstr "`查询数据库数据 <./use_cases/query_database_data.html>`_:从数据库中查询和分析数据并给出图表。"
|
||||
|
||||
#: ../../index.rst:106 89aac5a738ae4aeb84bc324803ada354
|
||||
#: ../../index.rst:107 ef978dc1f4254e5eb4ca487c31c03f7c
|
||||
msgid ""
|
||||
"`Interacting with apis <./use_cases/interacting_with_api.html>`_: "
|
||||
"Interact with apis, such as create a table, deploy a database cluster, "
|
||||
@@ -235,25 +236,25 @@ msgstr ""
|
||||
"`API交互 <./use_cases/interacting_with_api.html>`_: "
|
||||
"与API交互,例如创建表、部署数据库集群、创建数据库等。"
|
||||
|
||||
#: ../../index.rst:108 f100fb0cdd264cf186bf554771488aa1
|
||||
#: ../../index.rst:109 49a549d7d38f493ba48e162785b4ac5d
|
||||
msgid ""
|
||||
"`Tool use with plugins <./use_cases/tool_use_with_plugin>`_: According to"
|
||||
" Plugin use tools to manage databases autonomoly."
|
||||
msgstr "`插件工具 <./use_cases/tool_use_with_plugin>`_: 根据插件使用工具自主管理数据库。"
|
||||
|
||||
#: ../../index.rst:125 93809b95cc4a41249d7ab6b264981167
|
||||
#: ../../index.rst:126 db86500484fa4f14918b0ad4e5a7326d
|
||||
msgid ""
|
||||
"Full documentation on all methods, classes, installation methods, and "
|
||||
"integration setups for DB-GPT."
|
||||
msgstr "关于DB-GPT的所有方法、类、安装方法和集成设置的完整文档。"
|
||||
|
||||
#: ../../index.rst:139 4a95e65e128e4fc0907a3f51f1f2611b
|
||||
#: ../../index.rst:140 a44abbb370a841658801bb2729fa62c9
|
||||
msgid ""
|
||||
"Additional resources we think may be useful as you develop your "
|
||||
"application!"
|
||||
msgstr "“我们认为在您开发应用程序时可能有用的其他资源!”"
|
||||
|
||||
#: ../../index.rst:141 4934fbae909644769dd83c7f99c0fcd0
|
||||
#: ../../index.rst:142 4d0da8471db240dba842949b6796be7a
|
||||
msgid ""
|
||||
"`Discord <https://discord.gg/eZHE94MN>`_: if your have some problem or "
|
||||
"ideas, you can talk from discord."
|
||||
|