feat(model): Support vLLM

FangYin Cheng
2023-10-09 20:01:29 +08:00
parent 1cdaaeb820
commit d5a52f79f1
32 changed files with 957 additions and 155 deletions

View File

@@ -47,7 +47,7 @@ You can execute the command `bash docker/build_all_images.sh --help` to see more
**Run with local model and SQLite database**
```bash
-docker run --gpus all -d \
+docker run --ipc host --gpus all -d \
-p 5000:5000 \
-e LOCAL_DB_TYPE=sqlite \
-e LOCAL_DB_PATH=data/default_sqlite.db \
@@ -73,7 +73,7 @@ docker logs dbgpt -f
**Run with local model and MySQL database**
```bash
-docker run --gpus all -d -p 3306:3306 \
+docker run --ipc host --gpus all -d -p 3306:3306 \
-p 5000:5000 \
-e LOCAL_DB_HOST=127.0.0.1 \
-e LOCAL_DB_PASSWORD=aa123456 \
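
The `--ipc host` flag added above shares the host's IPC namespace with the container. vLLM builds on PyTorch, which uses shared memory to exchange data between processes, and Docker's default shared-memory allocation is usually too small for that. If sharing the host IPC namespace is not desirable, a common alternative is to enlarge the container's shared memory instead; the sketch below uses a placeholder image name and an assumed size, and the remaining options from the full commands above still apply:

```bash
# Alternative to --ipc host: give the container a larger /dev/shm.
# <dbgpt-image> and the 16g size are placeholders; keep the other
# options (ports, environment variables, volumes) from the commands above.
docker run --gpus all --shm-size=16g -d -p 5000:5000 <dbgpt-image>
```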

View File

@@ -30,3 +30,4 @@ Multi LLMs Support, Supports multiple large language models, currently supportin
./llama/llama_cpp.md
./quantization/quantization.md
+./vllm/vllm.md

View File

@@ -0,0 +1,26 @@
vLLM
==================================
[vLLM](https://github.com/vllm-project/vllm) is a fast and easy-to-use library for LLM inference and serving.
## Running vLLM
### Installing Dependencies
vLLM is an optional dependency of DB-GPT; you can install it manually with the following command:
```bash
pip install -e ".[vllm]"
```
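Once it is installed, you can optionally confirm that the package imports correctly before wiring it into DB-GPT. A minimal check, assuming the same Python environment that runs DB-GPT:
```bash
# Quick sanity check that vLLM is importable in this environment
python -c "import vllm; print(vllm.__version__)"
```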
### Modifying the Configuration File
Next, you can directly modify your `.env` file to enable vLLM.
```env
LLM_MODEL=vicuna-13b-v1.5
MODEL_TYPE=vllm
```
You can view the models supported by vLLM [here](https://vllm.readthedocs.io/en/latest/models/supported_models.html#supported-models).
Then you can run it according to [Run](https://db-gpt.readthedocs.io/en/latest/getting_started/install/deploy/deploy.html#run).
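For reference, the deployment guide of this era starts the webserver roughly as follows; the entry point path is an assumption, so follow the [Run](https://db-gpt.readthedocs.io/en/latest/getting_started/install/deploy/deploy.html#run) link above for the authoritative steps:
```bash
# Start the DB-GPT webserver with the vLLM settings from .env
# (entry point path assumed from the deploy guide; see the Run link above)
python pilot/server/dbgpt_server.py
```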

View File

@@ -0,0 +1,79 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.9\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-10-09 19:46+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/llm/vllm/vllm.md:1
#: 9193438ba52148a3b71f190beeb4ef42
msgid "vLLM"
msgstr ""
#: ../../getting_started/install/llm/vllm/vllm.md:4
#: c30d032965794d7e81636581324be45d
msgid ""
"[vLLM](https://github.com/vllm-project/vllm) is a fast and easy-to-use "
"library for LLM inference and serving."
msgstr "[vLLM](https://github.com/vllm-project/vllm) 是一个快速且易于使用的 LLM 推理和服务的库。"
#: ../../getting_started/install/llm/vllm/vllm.md:6
#: b399c7268e0448cb893fbcb11a480849
msgid "Running vLLM"
msgstr "运行 vLLM"
#: ../../getting_started/install/llm/vllm/vllm.md:8
#: 7bed52b8bac946069a24df9e94098df5
msgid "Installing Dependencies"
msgstr "安装依赖"
#: ../../getting_started/install/llm/vllm/vllm.md:10
#: fd50a9f3e1b1459daa3b1a0cd610d1a3
msgid ""
"vLLM is an optional dependency in DB-GPT, and you can manually install it"
" using the following command:"
msgstr "vLLM 在 DB-GPT 是一个可选依赖, 你可以使用下面的命令手动安装它:"
#: ../../getting_started/install/llm/vllm/vllm.md:16
#: 44b251bc6b2c41ebaad9fd5a6a204c7c
msgid "Modifying the Configuration File"
msgstr "修改配置文件"
#: ../../getting_started/install/llm/vllm/vllm.md:18
#: 37f4e65148fa4339969265107b70b8fe
msgid "Next, you can directly modify your `.env` file to enable vllm."
msgstr "你可以直接修改你的 `.env` 文件。"
#: ../../getting_started/install/llm/vllm/vllm.md:24
#: 15d79c9417d04e779fa00a08a05e30d7
msgid ""
"You can view the models supported by vLLM "
"[here](https://vllm.readthedocs.io/en/latest/models/supported_models.html"
"#supported-models)"
msgstr ""
"你可以在 "
"[这里](https://vllm.readthedocs.io/en/latest/models/supported_models.html"
"#supported-models) 查看 vLLM 支持的模型。"
#: ../../getting_started/install/llm/vllm/vllm.md:26
#: 28d90b1fdf6943d9969c8668a7c1094b
msgid ""
"Then you can run it according to [Run](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/install/deploy/deploy.html#run)."
msgstr ""
"然后你可以根据[运行]"
"(https://db-gpt.readthedocs.io/projects/db-gpt-docs-zh-cn/zh_CN/latest/getting_started/install/deploy/deploy.html#run)来启动项目。"