feat(model): multi-model support for embedding models and a simple component design implementation

FangYin Cheng
2023-09-13 12:14:03 +08:00
parent 68d30dd4bb
commit 581cf361bf
47 changed files with 1050 additions and 211 deletions

Binary image added, 361 KiB (cluster architecture diagram: muti-model-cluster-overview.png)

@@ -9,6 +9,7 @@ DB-GPT product is a Web application that you can chat database, chat knowledge,
- docker
- docker_compose
- environment
- cluster deployment
- deploy_faq
.. toctree::
@@ -20,6 +21,7 @@ DB-GPT product is a Web application that you can chat database, chat knowledge,
./install/deploy/deploy.md
./install/docker/docker.md
./install/docker_compose/docker_compose.md
./install/cluster/cluster.rst
./install/llm/llm.rst
./install/environment/environment.md
./install/faq/deploy_faq.md


@@ -0,0 +1,19 @@
Cluster deployment
==================================
In order to deploy DB-GPT to multiple nodes, you can deploy a cluster. The cluster architecture diagram is as follows:
.. raw:: html
<img src="../../../_static/img/muti-model-cluster-overview.png" />
* :ref:`Deploying on a local machine <local-cluster-index>`: local cluster deployment.
.. toctree::
:maxdepth: 2
:caption: Cluster deployment
:name: cluster_deploy
:hidden:
./vms/index.md
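For orientation, the full startup sequence described in the local deployment guide below reduces to a few commands. A minimal sketch, assuming a single controller host (the host name and model path are placeholders):

```bash
# 1. Start the Model Controller (listens on port 8000 by default).
dbgpt start controller

# 2. Start a Model Worker on each serving node and register it with the controller.
dbgpt start worker --model_name vicuna-13b-v1.5 \
    --model_path /app/models/vicuna-13b-v1.5 \
    --port 8002 \
    --controller_addr http://<controller-host>:8000

# 3. Point the webserver at the controller (MODEL_SERVER in .env) and start it
#    without the embedded model service.
dbgpt start webserver --light
```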


@@ -0,0 +1,3 @@
Kubernetes cluster deployment
==================================
(kubernetes-cluster-index)=


@@ -1,6 +1,6 @@
Cluster deployment
Local cluster deployment
==================================
(local-cluster-index)=
## Model cluster deployment
@@ -17,7 +17,7 @@ dbgpt start controller
By default, the Model Controller starts on port 8000.
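A minimal sketch of starting the controller; workers on other machines later register against this address via `--controller_addr` (the host is a placeholder):

```bash
# Start the Model Controller; it listens on port 8000 by default.
dbgpt start controller

# Workers then register with: --controller_addr http://<controller-host>:8000
```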
### Launch Model Worker
### Launch LLM Model Worker
If you are starting `chatglm2-6b`:
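A launch command for `chatglm2-6b` following the same pattern as the other workers in this guide; the model path is a placeholder and the port matches the instance list shown below:

```bash
dbgpt start worker --model_name chatglm2-6b \
    --model_path /app/models/chatglm2-6b \
    --port 8001 \
    --controller_addr http://127.0.0.1:8000
```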
@@ -39,6 +39,18 @@ dbgpt start worker --model_name vicuna-13b-v1.5 \
Note: Be sure to use your own model name and model path.
### Launch Embedding Model Worker
```bash
dbgpt start worker --model_name text2vec \
--model_path /app/models/text2vec-large-chinese \
--worker_type text2vec \
--port 8003 \
--controller_addr http://127.0.0.1:8000
```
Note: Be sure to use your own model name and model path.
Check your model:
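A sketch of the check, using the `dbgpt model` client documented later on this page (the address is the default controller address):

```bash
dbgpt model list --address http://127.0.0.1:8000
```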
@@ -51,8 +63,12 @@ You will see the following output:
+-----------------+------------+------------+------+---------+---------+-----------------+----------------------------+
| Model Name | Model Type | Host | Port | Healthy | Enabled | Prompt Template | Last Heartbeat |
+-----------------+------------+------------+------+---------+---------+-----------------+----------------------------+
| chatglm2-6b | llm | 172.17.0.6 | 8001 | True | True | None | 2023-08-31T04:48:45.252939 |
| vicuna-13b-v1.5 | llm | 172.17.0.6 | 8002 | True | True | None | 2023-08-31T04:48:55.136676 |
| chatglm2-6b | llm | 172.17.0.2 | 8001 | True | True | | 2023-09-12T23:04:31.287654 |
| WorkerManager | service | 172.17.0.2 | 8001 | True | True | | 2023-09-12T23:04:31.286668 |
| WorkerManager | service | 172.17.0.2 | 8003 | True | True | | 2023-09-12T23:04:29.845617 |
| WorkerManager | service | 172.17.0.2 | 8002 | True | True | | 2023-09-12T23:04:24.598439 |
| text2vec | text2vec | 172.17.0.2 | 8003 | True | True | | 2023-09-12T23:04:29.844796 |
| vicuna-13b-v1.5 | llm | 172.17.0.2 | 8002 | True | True | | 2023-09-12T23:04:24.597775 |
+-----------------+------------+------------+------+---------+---------+-----------------+----------------------------+
```
@@ -69,7 +85,7 @@ MODEL_SERVER=http://127.0.0.1:8000
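The relevant `.env` entries would look roughly like this sketch; set `LLM_MODEL` to whichever model your workers serve:

```bash
# .env
LLM_MODEL=vicuna-13b-v1.5
# Address of the Model Controller
MODEL_SERVER=http://127.0.0.1:8000
```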
#### Start the webserver
```bash
python pilot/server/dbgpt_server.py --light
dbgpt start webserver --light
```
`--light` indicates not to start the embedded model service.
@@ -77,7 +93,7 @@ python pilot/server/dbgpt_server.py --light
Alternatively, you can prepend the command with `LLM_MODEL=chatglm2-6b` to start:
```bash
LLM_MODEL=chatglm2-6b python pilot/server/dbgpt_server.py --light
LLM_MODEL=chatglm2-6b dbgpt start webserver --light
```
@@ -101,9 +117,11 @@ Options:
--help Show this message and exit.
Commands:
model Clients that manage model serving
start Start specific server.
stop Start specific server.
install Install dependencies, plugins, etc.
knowledge Knowledge command line tool
model Clients that manage model serving
start Start specific server.
stop Start specific server.
```
**View the `dbgpt start` help**
@@ -146,10 +164,11 @@ Options:
--model_name TEXT Model name [required]
--model_path TEXT Model path [required]
--worker_type TEXT Worker type
--worker_class TEXT Model worker class, pilot.model.worker.defau
lt_worker.DefaultModelWorker
--worker_class TEXT Model worker class,
pilot.model.cluster.DefaultModelWorker
--host TEXT Model worker deploy host [default: 0.0.0.0]
--port INTEGER Model worker deploy port [default: 8000]
--port INTEGER Model worker deploy port [default: 8001]
--daemon Run Model Worker in background
--limit_model_concurrency INTEGER
Model concurrency limit [default: 5]
--standalone Standalone mode. If True, embedded Run
@@ -166,7 +185,7 @@ Options:
(seconds) [default: 20]
--device TEXT Device to run model. If None, the device is
automatically determined
--model_type TEXT Model type, huggingface or llama.cpp
--model_type TEXT Model type, huggingface, llama.cpp and proxy
[default: huggingface]
--prompt_template TEXT Prompt template. If None, the prompt
template is automatically determined from
@@ -190,7 +209,7 @@ Options:
--compute_dtype TEXT Model compute type
--trust_remote_code Trust remote code [default: True]
--verbose Show verbose output.
--help Show this message and exit.
--help Show this message and exit.
```
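Putting several of these options together, a worker launch might look like the following sketch (the path, device, and concurrency values are illustrative):

```bash
dbgpt start worker --model_name vicuna-13b-v1.5 \
    --model_path /app/models/vicuna-13b-v1.5 \
    --port 8002 \
    --limit_model_concurrency 5 \
    --device cuda \
    --controller_addr http://127.0.0.1:8000 \
    --daemon
```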
**View the `dbgpt model` help**
@@ -208,10 +227,13 @@ Usage: dbgpt model [OPTIONS] COMMAND [ARGS]...
Options:
--address TEXT Address of the Model Controller to connect to. Just support
light deploy model [default: http://127.0.0.1:8000]
light deploy model, If the environment variable
CONTROLLER_ADDRESS is configured, read from the environment
variable
--help Show this message and exit.
Commands:
chat Interact with your bot from the command line
list List model instances
restart Restart model instances
start Start model instances
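As the help text notes, the controller address can also be supplied through the `CONTROLLER_ADDRESS` environment variable instead of `--address`; a minimal sketch:

```bash
export CONTROLLER_ADDRESS=http://127.0.0.1:8000
dbgpt model list
```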


@@ -6,6 +6,7 @@ DB-GPT provides a management and deployment solution for multiple models. This c
Multi LLMs Support, Supports multiple large language models, currently supporting
- 🔥 Baichuan2(7b,13b)
- 🔥 Vicuna-v1.5(7b,13b)
- 🔥 llama-2(7b,13b,70b)
- WizardLM-v1.2(13b)
@@ -19,7 +20,6 @@ Multi LLMs Support, Supports multiple large language models, currently supportin
- llama_cpp
- quantization
- cluster deployment
.. toctree::
:maxdepth: 2
@@ -29,4 +29,3 @@ Multi LLMs Support, Supports multiple large language models, currently supportin
./llama/llama_cpp.md
./quantization/quantization.md
./cluster/model_cluster.md


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"POT-Creation-Date: 2023-09-13 09:06+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -19,34 +19,38 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install.rst:2 ../../getting_started/install.rst:14
#: 2861085e63144eaca1bb825e5f05d089
#: ../../getting_started/install.rst:2 ../../getting_started/install.rst:15
#: e2c13385046b4da6b6838db6ba2ea59c
msgid "Install"
msgstr "Install"
#: ../../getting_started/install.rst:3 01a6603d91fa4520b0f839379d4eda23
#: ../../getting_started/install.rst:3 3cb6cd251ed440dabe5d4f556435f405
msgid ""
"DB-GPT product is a Web application that you can chat database, chat "
"knowledge, text2dashboard."
msgstr "DB-GPT 可以生成sql智能报表, 知识库问答的产品"
#: ../../getting_started/install.rst:8 beca85cddc9b4406aecf83d5dfcce1f7
#: ../../getting_started/install.rst:8 6fe8104b70d24f5fbfe2ad9ebf3bc3ba
msgid "deploy"
msgstr "部署"
#: ../../getting_started/install.rst:9 601e9b9eb91f445fb07d2f1c807f0370
#: ../../getting_started/install.rst:9 e67974b3672346809febf99a3b9a55d3
msgid "docker"
msgstr "docker"
#: ../../getting_started/install.rst:10 6d1e094ac9284458a32a3e7fa6241c81
#: ../../getting_started/install.rst:10 64de16a047c74598966e19a656bf6c4f
msgid "docker_compose"
msgstr "docker_compose"
#: ../../getting_started/install.rst:11 ff1d1c60bbdc4e8ca82b7a9f303dd167
#: ../../getting_started/install.rst:11 9f87d65e8675435b87cb9376a5bfd85c
msgid "environment"
msgstr "environment"
#: ../../getting_started/install.rst:12 33bfbe8defd74244bfc24e8fbfd640f6
#: ../../getting_started/install.rst:12 e60fa13bb24544ed9d4f902337093ebc
msgid "cluster deployment"
msgstr "集群部署"
#: ../../getting_started/install.rst:13 7451712679c2412e858e7d3e2af6b174
msgid "deploy_faq"
msgstr "deploy_faq"


@@ -0,0 +1,42 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.6\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-09-13 10:11+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/cluster/cluster.rst:2
#: ../../getting_started/install/cluster/cluster.rst:13
#: 69804208b580447798d6946150da7bdf
msgid "Cluster deployment"
msgstr "集群部署"
#: ../../getting_started/install/cluster/cluster.rst:4
#: fa3e4e0ae60a45eb836bcd256baa9d91
msgid ""
"In order to deploy DB-GPT to multiple nodes, you can deploy a cluster. "
"The cluster architecture diagram is as follows:"
msgstr "为了能将 DB-GPT 部署到多个节点上,你可以部署一个集群,集群的架构图如下:"
#: ../../getting_started/install/cluster/cluster.rst:11
#: e739449099ca43cabe9883233ca7e572
#, fuzzy
msgid ""
"On :ref:`Deploying on local machine <local-cluster-index>`. Local cluster"
" deployment."
msgstr "关于 :ref:`在本地机器上部署 <local-cluster-index>`。本地集群部署。"


@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.6\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-09-13 09:06+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/cluster/kubernetes/index.md:1
#: 48e6f08f27c74f31a8b12758fe33dc24
msgid "Kubernetes cluster deployment"
msgstr "Kubernetes 集群部署"


@@ -0,0 +1,176 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.6\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-09-13 09:06+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/cluster/vms/index.md:1
#: 2d2e04ba49364eae9b8493bb274765a6
msgid "Local cluster deployment"
msgstr "本地集群部署"
#: ../../getting_started/install/cluster/vms/index.md:4
#: e405d0e7ad8c4b2da4b4ca27c77f5fea
msgid "Model cluster deployment"
msgstr "模型集群部署"
#: ../../getting_started/install/cluster/vms/index.md:7
#: bba397ddac754a2bab8edca163875b65
msgid "**Installing Command-Line Tool**"
msgstr "**安装命令行工具**"
#: ../../getting_started/install/cluster/vms/index.md:9
#: bc45851124354522af8c9bb9748ff1fa
msgid ""
"All operations below are performed using the `dbgpt` command. To use the "
"`dbgpt` command, you need to install the DB-GPT project with `pip install"
" -e .`. Alternatively, you can use `python pilot/scripts/cli_scripts.py` "
"as a substitute for the `dbgpt` command."
msgstr ""
"以下所有操作都使用 `dbgpt` 命令完成。要使用 `dbgpt` 命令您需要安装DB-GPT项目方法是使用`pip install -e .`。或者,您可以使用 `python pilot/scripts/cli_scripts.py` 作为 `dbgpt` 命令的替代。"
#: ../../getting_started/install/cluster/vms/index.md:11
#: 9d11f7807fd140c8949b634700adc966
msgid "Launch Model Controller"
msgstr "启动 Model Controller"
#: ../../getting_started/install/cluster/vms/index.md:17
#: 97716be92ba64ce9a215433bddf77add
msgid "By default, the Model Controller starts on port 8000."
msgstr "默认情况下Model Controller 启动在 8000 端口。"
#: ../../getting_started/install/cluster/vms/index.md:20
#: 3f65e6a1e59248a59c033891d1ab7ba8
msgid "Launch LLM Model Worker"
msgstr "启动 LLM Model Worker"
#: ../../getting_started/install/cluster/vms/index.md:22
#: 60241d97573e4265b7fb150c378c4a08
msgid "If you are starting `chatglm2-6b`:"
msgstr "如果您启动的是 `chatglm2-6b`"
#: ../../getting_started/install/cluster/vms/index.md:31
#: 18bbeb1de110438fa96dd5c736b9a7b1
msgid "If you are starting `vicuna-13b-v1.5`:"
msgstr "如果您启动的是 `vicuna-13b-v1.5`"
#: ../../getting_started/install/cluster/vms/index.md:40
#: ../../getting_started/install/cluster/vms/index.md:53
#: 24b1a27313c64224aaeab6cbfad1fe19 fc94a698a7904c6893eef7e7a6e52972
msgid "Note: Be sure to use your own model name and model path."
msgstr "注意:确保使用您自己的模型名称和模型路径。"
#: ../../getting_started/install/cluster/vms/index.md:42
#: 19746195e85f4784bf66a9e67378c04b
msgid "Launch Embedding Model Worker"
msgstr "启动 Embedding Model Worker"
#: ../../getting_started/install/cluster/vms/index.md:55
#: e93ce68091f64d0294b3f912a66cc18b
msgid "Check your model:"
msgstr "检查您的模型:"
#: ../../getting_started/install/cluster/vms/index.md:61
#: fa0b8f3a18fe4bab88fbf002bf26d32e
msgid "You will see the following output:"
msgstr "您将看到以下输出:"
#: ../../getting_started/install/cluster/vms/index.md:75
#: 695262fb4f224101902bc7865ac7871f
msgid "Connect to the model service in the webserver (dbgpt_server)"
msgstr "在 webserver (dbgpt_server) 中连接到模型服务 (dbgpt_server)"
#: ../../getting_started/install/cluster/vms/index.md:77
#: 73bf4c2ae5c64d938e3b7e77c06fa21e
msgid ""
"**First, modify the `.env` file to change the model name and the Model "
"Controller connection address.**"
msgstr ""
"**首先,修改 `.env` 文件以更改模型名称和模型控制器连接地址。**"
#: ../../getting_started/install/cluster/vms/index.md:85
#: 8ab126fd72ed4368a79b821ba50e62c8
msgid "Start the webserver"
msgstr "启动 webserver"
#: ../../getting_started/install/cluster/vms/index.md:91
#: 5a7e25c84ca2412bb64310bfad9e2403
msgid "`--light` indicates not to start the embedded model service."
msgstr "`--light` 表示不启动嵌入式模型服务。"
#: ../../getting_started/install/cluster/vms/index.md:93
#: 8cd9ec4fa9cb4c0fa8ff05c05a85ea7f
msgid ""
"Alternatively, you can prepend the command with `LLM_MODEL=chatglm2-6b` "
"to start:"
msgstr ""
"或者,您可以在命令前加上 `LLM_MODEL=chatglm2-6b` 来启动:"
#: ../../getting_started/install/cluster/vms/index.md:100
#: 13ed16758a104860b5fc982d36638b17
msgid "More Command-Line Usages"
msgstr "更多命令行用法"
#: ../../getting_started/install/cluster/vms/index.md:102
#: 175f614d547a4391bab9a77762f9174e
msgid "You can view more command-line usages through the help command."
msgstr "您可以通过帮助命令查看更多命令行用法。"
#: ../../getting_started/install/cluster/vms/index.md:104
#: 6a4475d271c347fbbb35f2936a86823f
msgid "**View the `dbgpt` help**"
msgstr "**查看 `dbgpt` 帮助**"
#: ../../getting_started/install/cluster/vms/index.md:109
#: 3eb11234cf504cc9ac369d8462daa14b
msgid "You will see the basic command parameters and usage:"
msgstr "您将看到基本的命令参数和用法:"
#: ../../getting_started/install/cluster/vms/index.md:127
#: 6eb47aecceec414e8510fe022b6fddbd
msgid "**View the `dbgpt start` help**"
msgstr "**查看 `dbgpt start` 帮助**"
#: ../../getting_started/install/cluster/vms/index.md:133
#: 1f4c0a4ce0704ca8ac33178bd13c69ad
msgid "Here you can see the related commands and usage for start:"
msgstr "在这里,您可以看到启动的相关命令和用法:"
#: ../../getting_started/install/cluster/vms/index.md:150
#: 22e8e67bc55244e79764d091f334560b
msgid "**View the `dbgpt start worker`help**"
msgstr "**查看 `dbgpt start worker` 帮助**"
#: ../../getting_started/install/cluster/vms/index.md:156
#: 5631b83fda714780855e99e90d4eb542
msgid "Here you can see the parameters to start Model Worker:"
msgstr "在这里,您可以看到启动 Model Worker 的参数:"
#: ../../getting_started/install/cluster/vms/index.md:215
#: cf4a31fd3368481cba1b3ab382615f53
msgid "**View the `dbgpt model`help**"
msgstr "**查看 `dbgpt model` 帮助**"
#: ../../getting_started/install/cluster/vms/index.md:221
#: 3740774ec4b240f2882b5b59da224d55
msgid ""
"The `dbgpt model ` command can connect to the Model Controller via the "
"Model Controller address and then manage a remote model:"
msgstr ""
"`dbgpt model` 命令可以通过 Model Controller 地址连接到 Model Controller然后管理远程模型"


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-31 16:38+0800\n"
"POT-Creation-Date: 2023-09-13 10:46+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -21,84 +21,88 @@ msgstr ""
#: ../../getting_started/install/llm/llm.rst:2
#: ../../getting_started/install/llm/llm.rst:24
#: b348d4df8ca44dd78b42157a8ff6d33d
#: e693a8d3769b4d9e99c4442ca77dc43c
msgid "LLM Usage"
msgstr "LLM使用"
#: ../../getting_started/install/llm/llm.rst:3 7f5960a7e5634254b330da27be87594b
#: ../../getting_started/install/llm/llm.rst:3 0a73562d18ba455bab04277b715c3840
msgid ""
"DB-GPT provides a management and deployment solution for multiple models."
" This chapter mainly discusses how to deploy different models."
msgstr "DB-GPT提供了多模型的管理和部署方案本章主要讲解针对不同的模型该怎么部署"
#: ../../getting_started/install/llm/llm.rst:18
#: b844ab204ec740ec9d7d191bb841f09e
#: ../../getting_started/install/llm/llm.rst:19
#: d7e4de2a7e004888897204ec76b6030b
msgid ""
"Multi LLMs Support, Supports multiple large language models, currently "
"supporting"
msgstr "目前DB-GPT已适配如下模型"
#: ../../getting_started/install/llm/llm.rst:9 c141437ddaf84c079360008343041b2f
#: ../../getting_started/install/llm/llm.rst:9 4616886b8b2244bd93355e871356d89e
#, fuzzy
msgid "🔥 Baichuan2(7b,13b)"
msgstr "Baichuan(7b,13b)"
#: ../../getting_started/install/llm/llm.rst:10
#: ad0e4793d4e744c1bdf59f5a3d9c84be
msgid "🔥 Vicuna-v1.5(7b,13b)"
msgstr "🔥 Vicuna-v1.5(7b,13b)"
#: ../../getting_started/install/llm/llm.rst:10
#: d32b1e3f114c4eab8782b497097c1b37
#: ../../getting_started/install/llm/llm.rst:11
#: d291e58001ae487bbbf2a1f9f889f5fd
msgid "🔥 llama-2(7b,13b,70b)"
msgstr "🔥 llama-2(7b,13b,70b)"
#: ../../getting_started/install/llm/llm.rst:11
#: 0a417ee4d008421da07fff7add5d05eb
#: ../../getting_started/install/llm/llm.rst:12
#: 1e49702ee40b4655945a2a13efaad536
msgid "WizardLM-v1.2(13b)"
msgstr "WizardLM-v1.2(13b)"
#: ../../getting_started/install/llm/llm.rst:12
#: 199e1a9fe3324dc8a1bcd9cd0b1ef047
#: ../../getting_started/install/llm/llm.rst:13
#: 4ef5913ddfe840d7a12289e6e1d4cb60
msgid "Vicuna (7b,13b)"
msgstr "Vicuna (7b,13b)"
#: ../../getting_started/install/llm/llm.rst:13
#: a9e4c5100534450db3a583fa5850e4be
#: ../../getting_started/install/llm/llm.rst:14
#: ea46c2211257459285fa48083cb59561
msgid "ChatGLM-6b (int4,int8)"
msgstr "ChatGLM-6b (int4,int8)"
#: ../../getting_started/install/llm/llm.rst:14
#: 943324289eb94042b52fd824189cd93f
#: ../../getting_started/install/llm/llm.rst:15
#: 90688302bae4452a84f14e8ecb7f1a21
msgid "ChatGLM2-6b (int4,int8)"
msgstr "ChatGLM2-6b (int4,int8)"
#: ../../getting_started/install/llm/llm.rst:15
#: f1226fdfac3b4e9d88642ffa69d75682
#: ../../getting_started/install/llm/llm.rst:16
#: ee1469545a314696a36e7296c7b71960
msgid "guanaco(7b,13b,33b)"
msgstr "guanaco(7b,13b,33b)"
#: ../../getting_started/install/llm/llm.rst:16
#: 3f2457f56eb341b6bc431c9beca8f4df
#: ../../getting_started/install/llm/llm.rst:17
#: 25abad241f4d4eee970d5938bf71311f
msgid "Gorilla(7b,13b)"
msgstr "Gorilla(7b,13b)"
#: ../../getting_started/install/llm/llm.rst:17
#: 86c8ce37be1c4a7ea3fc382100d77a9c
#: ../../getting_started/install/llm/llm.rst:18
#: 8e3d0399431a4c6a9065a8ae0ad3c8ac
msgid "Baichuan(7b,13b)"
msgstr "Baichuan(7b,13b)"
#: ../../getting_started/install/llm/llm.rst:18
#: 538111af95ad414cb2e631a89f9af379
#: ../../getting_started/install/llm/llm.rst:19
#: c285fa7c9c6c4e3e9840761a09955348
msgid "OpenAI"
msgstr "OpenAI"
#: ../../getting_started/install/llm/llm.rst:20
#: a203325b7ec248f7bff61ae89226a000
#: ../../getting_started/install/llm/llm.rst:21
#: 4ac13a21f323455982750bd2e0243b72
msgid "llama_cpp"
msgstr "llama_cpp"
#: ../../getting_started/install/llm/llm.rst:21
#: 21a50634198047228bc51a03d2c31292
#: ../../getting_started/install/llm/llm.rst:22
#: 7231edceef584724a6f569c6b363e083
msgid "quantization"
msgstr "quantization"
#: ../../getting_started/install/llm/llm.rst:22
#: dfaec4b04e6e45ff9c884b41534b1a79
msgid "cluster deployment"
msgstr ""
#~ msgid "cluster deployment"
#~ msgstr ""


@@ -1,4 +1,4 @@
autodoc_pydantic==1.8.0
autodoc_pydantic
myst_parser
nbsphinx==0.8.9
sphinx==4.5.0