doc: deploy doc

LLM USE FAQ
==================================

##### Q1: How to use the OpenAI ChatGPT service

Change your `LLM_MODEL` in `.env`:

```shell
LLM_MODEL=proxyllm
PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions
```

Make sure your OpenAI API_KEY is available.
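
A quick way to verify the key before starting DB-GPT (a sketch, assuming direct access to the OpenAI endpoint; substitute your own key):

```shell
# list available models; a JSON model list (rather than an error) means the key works
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | head
```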

##### Q2: What is the difference between `python dbgpt_server --light` and `python dbgpt_server`?

```{note}
* `python dbgpt_server --light` does not start the LLM service. You can deploy the LLM service separately with `python llmserver`, and dbgpt_server will access it through the LLM_SERVER environment variable set in `.env`. The purpose is to allow DB-GPT's backend service and LLM service to be deployed separately.
* `python dbgpt_server` starts both the backend service and the LLM service in a single process.
```
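
A sketch of the split deployment (the `llmserver.py` path is an assumption based on the repository layout used elsewhere in this guide; adjust it to your checkout):

```shell
# terminal 1: start the LLM service on its own
python pilot/server/llmserver.py

# terminal 2: start the backend only; it reaches the LLM service
# through the LLM_SERVER variable set in .env
python pilot/server/dbgpt_server.py --light
```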

```{tip}
If you want to access an external LLM service (deployed by DB-GPT), you need to:

1. Set the variables LLM_MODEL=YOUR_MODEL_NAME and MODEL_SERVER=YOUR_MODEL_SERVER (e.g. http://localhost:5000) in the `.env` file.

2. Run dbgpt_server.py in light mode:

python pilot/server/dbgpt_server.py --light
```

##### Q3: How to use multiple GPUs

DB-GPT uses all available GPUs by default. You can set `CUDA_VISIBLE_DEVICES=0,1` in the `.env` file to restrict it to specific GPU IDs.
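
For example, to expose only the first GPU (a sketch; device IDs depend on your machine and can be checked with `nvidia-smi`):

```shell
# .env — make only GPU 0 visible to DB-GPT
CUDA_VISIBLE_DEVICES=0
```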

For the entire installation process of DB-GPT, we use the miniconda3 virtual environment:

```bash
# python >= 3.10 is required
conda create -n dbgpt_env python=3.10
conda activate dbgpt_env
# installing the dependencies will take a few minutes
pip install -e ".[default]"
```

Before using DB-GPT Knowledge, download the spaCy model:

```bash
python -m spacy download zh_core_web_sm
```

Once the environment is installed, create a new folder "models" in the DB-GPT project, and put all the models downloaded from Hugging Face into this directory.

Make sure git-lfs is installed, since the model repositories are cloned with it:

```bash
centos: yum install git-lfs
ubuntu: apt-get install git-lfs
macos: brew install git-lfs
```
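
After installing the package, enable git-lfs once for your user (standard git-lfs setup, not shown in the original snippet):

```bash
git lfs install
```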

##### Download LLM Model and Embedding Model

If you use the OpenAI LLM service, see [LLM Use FAQ](https://db-gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html).

```{tip}
If you use the OpenAI, Azure, or Tongyi LLM API service, you don't need to download an LLM model.
```

```bash
cd DB-GPT
mkdir models && cd models

#### embedding model
git clone https://huggingface.co/GanymedeNil/text2vec-large-chinese
or
git clone https://huggingface.co/moka-ai/m3e-large

#### llm model, not needed if you use the OpenAI, Azure, or Tongyi LLM API service
git clone https://huggingface.co/lmsys/vicuna-13b-v1.5
or
git clone https://huggingface.co/THUDM/chatglm2-6b
```

The model files are large and will take a long time to download. While they download, configure the `.env` file, which is copied and created from `.env.template`:

```{tip}
cp .env.template .env
```

You can configure basic parameters in the `.env` file, for example setting `LLM_MODEL` to the model you want to use.
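
A sketch of the basic settings (`LLM_MODEL` matches the model folder name under `models/`; the embedding variable name is an assumption, so verify it against your `.env.template`):

```shell
# .env — basic settings (sketch; verify names in .env.template)
LLM_MODEL=vicuna-13b-v1.5
EMBEDDING_MODEL=text2vec
```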

### 3. Run

**(Optional) load examples into SQLite**

```bash
bash ./scripts/examples/load_examples.sh
```

On the Windows platform:

```powershell
.\scripts\examples\load_examples.bat
```

Run the db-gpt server:

```bash
python pilot/server/dbgpt_server.py
```

Open http://localhost:5000 with your browser to see the product.

```{tip}
If you want to access an external LLM service, you need to:

1. Set the variables LLM_MODEL=YOUR_MODEL_NAME and MODEL_SERVER=YOUR_MODEL_SERVER (e.g. http://localhost:5000) in the `.env` file.

2. Run dbgpt_server.py in light mode, as shown below.
```

If you want to learn about dbgpt-webui, read https://github.com/csunny/DB-GPT/tree/new-page-framework/datacenter

```bash
python pilot/server/dbgpt_server.py --light
```

### Multiple GPUs