Merge remote-tracking branch 'origin/main' into dbgpt_api

aries_ckt
2023-08-18 17:58:42 +08:00
8 changed files with 235 additions and 13 deletions


@@ -2,26 +2,26 @@ Installation FAQ
==================================
##### Q1: `pip install -r requirements.txt` fails because pip cannot find a suitable version for some packages.
##### Q1: `pip install -e .` fails because pip cannot find a suitable version for some packages.
Switch to a different pip index, for example:
```bash
# pypi
$ pip install -r requirements.txt -i https://pypi.python.org/simple
$ pip install -e . -i https://pypi.python.org/simple
```
or
```bash
# tsinghua
$ pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/
$ pip install -e . -i https://pypi.tuna.tsinghua.edu.cn/simple/
```
or
```bash
# aliyun
$ pip install -r requirements.txt -i http://mirrors.aliyun.com/pypi/simple/
$ pip install -e . -i http://mirrors.aliyun.com/pypi/simple/
```
##### Q2: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file
@@ -29,5 +29,20 @@ $ pip install -r requirements.txt -i http://mirrors.aliyun.com/pypi/simple/
Make sure you have pulled the latest code, or create the directory manually with `mkdir pilot/data`.
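If the directory is missing, creating it by hand is enough. A minimal sketch, run from the repository root (`-p` only makes the command safe to re-run):
```bash
# Create the data directory that SQLite expects; harmless if it already exists.
mkdir -p pilot/data
```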
##### Q3: The model keeps getting killed.
Your GPU does not have enough VRAM. Use a GPU with more memory or switch to a smaller LLM.
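As a hedged sketch, you can first check how much GPU memory is actually available, then point `LLM_MODEL` in `.env` at a smaller model (the model name below is a placeholder, not a recommendation):
```bash
# Show total and used GPU memory (requires an NVIDIA GPU with drivers installed).
nvidia-smi --query-gpu=memory.total,memory.used --format=csv

# Then edit .env and pick a model that fits your VRAM, e.g.:
# LLM_MODEL=YOUR_SMALLER_MODEL_NAME   # placeholder name
```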
##### Q4: How to access the website from the public network
You can try Gradio's [networking](https://github.com/gradio-app/gradio/blob/main/gradio/networking.py) module to create a public tunnel:
```python
import secrets
import time

from gradio import networking

# Create a random token and open a public tunnel to the local port.
token = secrets.token_urlsafe(32)
local_port = 5000
url = networking.setup_tunnel('0.0.0.0', local_port, token)
print(f'Public url: {url}')

# Keep the process alive so the tunnel stays open (here: 24 hours).
time.sleep(60 * 60 * 24)
```
Open `url` with your browser to see the website.


@@ -49,7 +49,7 @@ For the entire installation process of DB-GPT, we use the miniconda3 virtual env
python>=3.10
conda create -n dbgpt_env python=3.10
conda activate dbgpt_env
pip install -r requirements.txt
pip install -e .
```
Before using DB-GPT Knowledge, run:
```bash
@@ -97,15 +97,20 @@ You can configure basic parameters in the .env file, for example setting LLM_MOD
### 3. Run
**(Optional) Load examples into SQLite**
```bash
bash ./scripts/examples/load_examples.sh
```
1. Run the DB-GPT server
```bash
$ python pilot/server/dbgpt_server.py
python pilot/server/dbgpt_server.py
```
Open http://localhost:5000 with your browser to see the product.
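If the page does not load, a quick optional check that the server is listening (assuming `curl` is available):
```bash
# Should print an HTTP response header when dbgpt_server is up.
curl -I http://localhost:5000
```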
```tip
```{tip}
If you want to access an external LLM service, you need to:
1. Set the variables LLM_MODEL=YOUR_MODEL_NAME and MODEL_SERVER=YOUR_MODEL_SERVER (e.g. http://localhost:5000) in the .env file, as sketched below.
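A minimal `.env` sketch for this step; both values are placeholders for your own model name and model service address:
```bash
# .env – replace with your real model name and model service URL
LLM_MODEL=YOUR_MODEL_NAME
MODEL_SERVER=http://localhost:5000
```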
@@ -116,7 +121,7 @@ If you want to access an external LLM service, you need to
If you want to learn about dbgpt-webui, read https://github.com/csunny/DB-GPT/tree/new-page-framework/datacenter
```bash
$ python pilot/server/dbgpt_server.py --light
python pilot/server/dbgpt_server.py --light
```
### Multiple GPUs
@@ -141,6 +146,4 @@ DB-GPT supported 8-bit quantization and 4-bit quantization.
You can set `QUANTIZE_8bit=True` or `QUANTIZE_4bit=True` in the `.env` file to enable quantization (8-bit quantization is enabled by default), as sketched below.
Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit quantization can run with 48 GB of VRAM.
Note: you need to install the latest dependencies according to [requirements.txt](https://github.com/eosphoros-ai/DB-GPT/blob/main/requirements.txt).
Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit quantization can run with 48 GB of VRAM.
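A minimal `.env` sketch for switching to 4-bit quantization (illustrative; keep the default if 8-bit already fits in your VRAM):
```bash
# .env – use 4-bit quantization instead of the default 8-bit
QUANTIZE_8bit=False
QUANTIZE_4bit=True
```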


@@ -1,6 +1,8 @@
### llama.cpp
llama.cpp
==================================
DB-GPT is now supported by [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) through [llama.cpp](https://github.com/ggerganov/llama.cpp).
DB-GPT already supports [llama.cpp](https://github.com/ggerganov/llama.cpp) via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python).
## Running llama.cpp
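A hedged sketch of installing the binding named above; pip builds llama.cpp during installation, so a working C/C++ toolchain is assumed:
```bash
# Installs llama-cpp-python; compiles llama.cpp locally during installation.
pip install llama-cpp-python
```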