doc:llm faq

Commit 486b30a8c5 by aries_ckt, 2023-09-25 21:09:17 +08:00 (parent efc89348da)
2 changed files with 35 additions and 27 deletions


@@ -8,7 +8,7 @@ msgid ""
 msgstr ""
 "Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2023-09-14 14:35+0800\n"
+"POT-Creation-Date: 2023-09-25 20:58+0800\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language: zh_CN\n"
@@ -19,33 +19,33 @@ msgstr ""
 "Content-Transfer-Encoding: 8bit\n"
 "Generated-By: Babel 2.12.1\n"
 
-#: ../../getting_started/faq/llm/llm_faq.md:1 73790502b62745ec88bbe9fe124254f0
+#: ../../getting_started/faq/llm/llm_faq.md:1 0d4fc79dbfce4f968ab310de12d69f3b
 msgid "LLM USE FAQ"
 msgstr "LLM模型使用FAQ"
 
-#: ../../getting_started/faq/llm/llm_faq.md:3 473bdd77bbb242f497f514e6e63d0c5f
+#: ../../getting_started/faq/llm/llm_faq.md:3 08873df3ef2741dca8916c4c0d503b4f
 msgid "Q1:how to use openai chatgpt service"
 msgstr "我怎么使用OPENAI服务"
 
-#: ../../getting_started/faq/llm/llm_faq.md:4 6e073181e48e4604a301f3d7359c91ef
+#: ../../getting_started/faq/llm/llm_faq.md:4 7741b098acd347659ccf663b5323666c
 msgid "change your LLM_MODEL"
 msgstr "通过在.env文件设置LLM_MODEL"
 
-#: ../../getting_started/faq/llm/llm_faq.md:9 a88cd162dca448b198c0551a70e70da3
+#: ../../getting_started/faq/llm/llm_faq.md:9 018115ec074c48739b730310a8bafa44
 msgid "set your OPENAPI KEY"
 msgstr "set your OPENAPI KEY"
 
-#: ../../getting_started/faq/llm/llm_faq.md:16 ebaa67e9d31f4c70b4bccbd4394d1c27
+#: ../../getting_started/faq/llm/llm_faq.md:16 42408d9c11994a848da41c3ab87d7a78
 msgid "make sure your openapi API_KEY is available"
 msgstr "确认openapi API_KEY是否可用"
 
-#: ../../getting_started/faq/llm/llm_faq.md:18 8e88363a43b9460dae90a772360dcc5a
+#: ../../getting_started/faq/llm/llm_faq.md:18 d9aedc07578d4562bad0ba1f130651de
 msgid ""
 "Q2 What difference between `python dbgpt_server --light` and `python "
 "dbgpt_server`"
 msgstr "Q2 `python dbgpt_server --light` 和 `python dbgpt_server`的区别是什么?"
 
-#: ../../getting_started/faq/llm/llm_faq.md:21 1bbf3891883b43659b7ef39ce5e91918
+#: ../../getting_started/faq/llm/llm_faq.md:21 03c03fedaa2f4bfdaefb42fd4164c902
 msgid ""
 "`python dbgpt_server --light` dbgpt_server does not start the llm "
 "service. Users can deploy the llm service separately by using `python "
@@ -57,54 +57,54 @@ msgstr ""
 "用户可以通过`python "
 "llmserver`单独部署模型服务dbgpt_server通过LLM_SERVER环境变量来访问模型服务。目的是为了可以将dbgpt后台服务和大模型服务分离部署。"
 
-#: ../../getting_started/faq/llm/llm_faq.md:23 96a6b6be655c4f85a7c18e813f67517e
+#: ../../getting_started/faq/llm/llm_faq.md:23 61354a0859284346adc3e07c820aa61a
 msgid ""
 "`python dbgpt_server` dbgpt_server service and the llm service are "
 "deployed on the same instance. when dbgpt_server starts the service, it "
 "also starts the llm service at the same time."
 msgstr "`python dbgpt_server` 是将后台服务和模型服务部署在同一台实例上.dbgpt_server在启动服务的时候同时开启模型服务."
 
-#: ../../getting_started/faq/llm/llm_faq.md:27 8a0138f4ceab476a97f112776669c7ca
+#: ../../getting_started/faq/llm/llm_faq.md:27 41ee95bf0b224be995f7530d0b67f712
 #, fuzzy
 msgid "Q3 how to use MultiGPUs"
 msgstr "Q2 怎么使用 MultiGPUs"
 
-#: ../../getting_started/faq/llm/llm_faq.md:29 6b2f25a5a2b243f78c2f96e3b045bf97
+#: ../../getting_started/faq/llm/llm_faq.md:29 7fce22f0327646399b98b0e20574a2fd
 msgid ""
 "DB-GPT will use all available gpu by default. And you can modify the "
 "setting `CUDA_VISIBLE_DEVICES=0,1` in `.env` file to use the specific gpu"
 " IDs."
 msgstr "DB-GPT默认加载可利用的gpu你也可以通过修改 在`.env`文件 `CUDA_VISIBLE_DEVICES=0,1`来指定gpu IDs"
 
-#: ../../getting_started/faq/llm/llm_faq.md:32 2adf75ffb0ab451999d2f446389eea6c
+#: ../../getting_started/faq/llm/llm_faq.md:32 3f4eb824dc924d7ca309dc5057f8360a
 msgid ""
 "Optionally, you can also specify the gpu ID to use before the starting "
 "command, as shown below:"
 msgstr "你也可以指定gpu ID启动"
 
-#: ../../getting_started/faq/llm/llm_faq.md:42 793d6d8503b74323b4997cf2981cc098
+#: ../../getting_started/faq/llm/llm_faq.md:42 a77d72f91b864d0aac344b317c100950
 msgid ""
 "You can modify the setting `MAX_GPU_MEMORY=xxGib` in `.env` file to "
 "configure the maximum memory used by each GPU."
 msgstr "同时你可以通过在.env文件设置`MAX_GPU_MEMORY=xxGib`修改每个GPU的最大使用内存"
 
-#: ../../getting_started/faq/llm/llm_faq.md:44 bdfc8eb5bc89460ea3979f61b8aeca7f
+#: ../../getting_started/faq/llm/llm_faq.md:44 b3bb92777a1244d5967a4308d14722fc
 #, fuzzy
 msgid "Q4 Not Enough Memory"
 msgstr "Q3 机器显存不够 "
 
-#: ../../getting_started/faq/llm/llm_faq.md:46 e0e60f0263d34eec818b72c38d214b8f
+#: ../../getting_started/faq/llm/llm_faq.md:46 c3976d81aafa4c6081e37c0d0a115d96
 msgid "DB-GPT supported 8-bit quantization and 4-bit quantization."
 msgstr "DB-GPT 支持 8-bit quantization 和 4-bit quantization."
 
-#: ../../getting_started/faq/llm/llm_faq.md:48 98c954d9fcf449f4b47610fc96091c4f
+#: ../../getting_started/faq/llm/llm_faq.md:48 93ade142f949449d8f54c0b6d8c8d261
 msgid ""
 "You can modify the setting `QUANTIZE_8bit=True` or `QUANTIZE_4bit=True` "
 "in `.env` file to use quantization(8-bit quantization is enabled by "
 "default)."
 msgstr "你可以通过在.env文件设置`QUANTIZE_8bit=True` or `QUANTIZE_4bit=True`"
 
-#: ../../getting_started/faq/llm/llm_faq.md:50 2568b441f7e54654b405c7791f08036a
+#: ../../getting_started/faq/llm/llm_faq.md:50 be2573907d624ebf8c901301f938577b
 msgid ""
 "Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit"
 " quantization can run with 48 GB of VRAM."
@@ -112,41 +112,49 @@ msgstr ""
 "Llama-2-70b with 8-bit quantization 可以运行在 80 GB VRAM机器 4-bit "
 "quantization可以运行在 48 GB VRAM"
 
-#: ../../getting_started/faq/llm/llm_faq.md:52 f8d1e4312f9743c7b03820b4a8dbf992
+#: ../../getting_started/faq/llm/llm_faq.md:52 c084d4624e794f7e8ceebadb6f260b49
 msgid ""
 "Note: you need to install the latest dependencies according to "
 "[requirements.txt](https://github.com/eosphoros-ai/DB-"
 "GPT/blob/main/requirements.txt)."
 msgstr ""
 
-#: ../../getting_started/faq/llm/llm_faq.md:54 5fe0d9ced7e848799f4d7bce92a5c130
+#: ../../getting_started/faq/llm/llm_faq.md:54 559bcd62af7340f79f5eca817187e13e
 #, fuzzy
 msgid "Q5 How to Add LLM Service dynamic local mode"
 msgstr "Q5 怎样动态新增模型服务"
 
-#: ../../getting_started/faq/llm/llm_faq.md:56 fd921148e3e547beb6c74035a6b6a8b0
+#: ../../getting_started/faq/llm/llm_faq.md:56 e47101d7d47e486e8572f6acd609fa92
 msgid ""
 "Now DB-GPT through multi-llm service switch, so how to add llm service "
 "dynamic,"
 msgstr "DB-GPT支持多个模型服务切换, 怎样添加一个模型服务呢"
 
-#: ../../getting_started/faq/llm/llm_faq.md:67 5fe0d9ced7e848799f4d7bce92a5c130
+#: ../../getting_started/faq/llm/llm_faq.md:67 5710dd9bf8f54bd388354079b29acdd2
 #, fuzzy
 msgid "Q6 How to Add LLM Service dynamic in remote mode"
 msgstr "Q5 怎样动态新增模型服务"
 
-#: ../../getting_started/faq/llm/llm_faq.md:68 bd29cd6d29a64908af15b391d73ea82a
+#: ../../getting_started/faq/llm/llm_faq.md:68 9c9311d6daad402a8e0748f00e69e8cf
 msgid ""
 "If you deploy llm service in remote machine instance, and you want to "
 "add model service to dbgpt server to manage"
 msgstr "如果你想在远程机器实例部署大模型服务并添加到本地dbgpt_server进行管理"
 
-#: ../../getting_started/faq/llm/llm_faq.md:70 ace16dfc4326431dbe4a9a32e4a83ba4
+#: ../../getting_started/faq/llm/llm_faq.md:70 3ec1565e74384beab23df9d8d4a19a39
 msgid "use dbgpt start worker and set --controller_addr."
 msgstr "使用1`dbgpt start worker`命令并设置注册地址--controller_addr"
 
-#: ../../getting_started/faq/llm/llm_faq.md:81 f8c024339da447ce8160a4eb9f87c125
+#: ../../getting_started/faq/llm/llm_faq.md:80 e2b8a9119f7843beb787d021c973eea4
 #, fuzzy
 msgid "Q7 dbgpt command not found"
 msgstr "Q6 dbgpt command not found"
+
+#: ../../getting_started/faq/llm/llm_faq.md:86 257ae9c462cd4a9abe7d2ff00f6bc891
+msgid ""
+"Q8 When starting the worker_manager on a cloud server and registering it "
+"with the controller, it is noticed that the worker's exposed IP is a "
+"private IP instead of a public IP, which leads to the inability to access"
+" the service."
+msgstr "云服务器启动worker_manager注册到controller时发现worker暴露的ip是私网ip, 没有以公网ip暴露导致服务访问不到"
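The Q8 entry added above describes a worker on a cloud server registering a private IP with the controller. A common cause is that services autodetect their own address with the UDP-socket trick, which returns the IP of the interface used for the default route; on a cloud VM that is the private NIC address, since the public IP usually lives on the provider's NAT. The sketch below illustrates that mechanism only — `detect_local_ip` and `worker_register_host` are hypothetical names, not DB-GPT's actual API.

```python
import socket


def detect_local_ip() -> str:
    """Autodetect this machine's outbound IP.

    Connecting a UDP socket sends no packets; it only selects a route,
    after which getsockname() reveals the local interface address.
    On a cloud VM this is typically the *private* address.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # no traffic is actually sent
        return s.getsockname()[0]
    finally:
        s.close()


def worker_register_host(advertised_host=None) -> str:
    """Hypothetical helper: prefer an explicitly configured public
    address over the autodetected (likely private) one."""
    return advertised_host or detect_local_ip()
```

This is why the usual remedy is to pass the public IP explicitly when starting the worker, rather than relying on autodetection.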


@@ -66,15 +66,15 @@ async def model_list():
                 last_heartbeat=model.last_heartbeat,
                 prompt_template=model.prompt_template,
             )
-            response.manager_host = model.host if manager_map[model.host] else None
+            response.manager_host = model.host if manager_map.get(model.host) else None
             response.manager_port = (
-                manager_map[model.host].port if manager_map[model.host] else None
+                manager_map[model.host].port if manager_map.get(model.host) else None
             )
             responses.append(response)
         return Result.succ(responses)
     except Exception as e:
-        return Result.faild(code="E000X", msg=f"space list error {e}")
+        return Result.faild(code="E000X", msg=f"model list error {e}")
 
 
 @router.post("/v1/worker/model/stop")
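The Python change above swaps `manager_map[model.host]` for `manager_map.get(model.host)` in the truth tests: plain indexing raises `KeyError` when a model's host has no registered manager, while `.get()` returns `None`, so the conditional expression safely falls through to `None`. A standalone sketch of the difference (the `Manager`/`manager_map` names mirror the diff, but the surrounding application is not reproduced):

```python
from dataclasses import dataclass


@dataclass
class Manager:
    host: str
    port: int


manager_map = {"10.0.0.5": Manager("10.0.0.5", 8001)}


def manager_port_buggy(host):
    # Pre-fix logic: indexing inside the condition raises KeyError
    # for any host that never registered a manager.
    return manager_map[host].port if manager_map[host] else None


def manager_port_fixed(host):
    # Post-fix logic: .get() returns None for a missing key, the
    # condition is falsy, and we return None instead of crashing.
    # The indexing in the true branch only runs when the key exists.
    return manager_map[host].port if manager_map.get(host) else None


print(manager_port_fixed("10.0.0.5"))  # 8001
print(manager_port_fixed("10.0.0.9"))  # None
try:
    manager_port_buggy("10.0.0.9")
except KeyError as e:
    print("buggy version raised KeyError:", e)
```

Inside `model_list`, the pre-fix `KeyError` would be swallowed by the broad `except Exception` and turned into the generic `E000X` result, hiding the real problem, which is why the commit also corrects the error message from "space list error" to "model list error".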