doc:refactor install document and application document

This commit is contained in:
aries_ckt
2023-08-16 23:20:08 +08:00
parent 732fd0e7e7
commit 63af66ccc1
39 changed files with 3031 additions and 458 deletions

View File

@@ -65,18 +65,6 @@ LOCAL_DB_TYPE=sqlite
# LOCAL_DB_HOST=127.0.0.1
# LOCAL_DB_PORT=3306
### MILVUS
## MILVUS_ADDR - Milvus remote address (e.g. localhost:19530)
## MILVUS_USERNAME - username for your Milvus database
## MILVUS_PASSWORD - password for your Milvus database
## MILVUS_SECURE - True to enable TLS. (Default: False)
## Setting MILVUS_ADDR to a `https://` URL will override this setting.
## MILVUS_COLLECTION - Milvus collection, change it if you want to start a new memory and retain the old memory.
# MILVUS_ADDR=localhost:19530
# MILVUS_USERNAME=
# MILVUS_PASSWORD=
# MILVUS_SECURE=
# MILVUS_COLLECTION=dbgpt
#*******************************************************************#
#** COMMANDS **#

View File

@@ -1,60 +1,9 @@
# FAQ
##### Q1: text2vec-large-chinese not found
##### A1: Make sure you have downloaded the text2vec-large-chinese embedding model correctly. git-lfs is required:
```{tip}
centos: yum install git-lfs
ubuntu: apt-get install git-lfs -y
macos: brew install git-lfs
```
```bash
cd models
git lfs clone https://huggingface.co/GanymedeNil/text2vec-large-chinese
```
##### Q2: Executing `pip install -r requirements.txt` fails because some packages cannot be found at the correct version.
##### A2: Change the pip index to another mirror:
```bash
# pypi
$ pip install -r requirements.txt -i https://pypi.python.org/simple
```
or
```bash
# tsinghua
$ pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/
```
or
```bash
# aliyun
$ pip install -r requirements.txt -i http://mirrors.aliyun.com/pypi/simple/
```
##### Q3: Access denied for user 'root'@'localhost' (using password: NO)
##### A3: Make sure your MySQL instance is installed and running correctly.
Docker:
```bash
docker run --name=mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=aa12345678 -dit mysql:latest
```
Manual installation:
[Download MySQL](https://dev.mysql.com/downloads/mysql/)
##### Q4: When I use OpenAI (MODEL_SERVER=proxyllm) to chat, I see the following error:
<p align="left">
<img src="../../assets/faq/proxyerror.png" width="800px" />
</p>
##### A4: Make sure your OpenAI API key is valid.
##### Q5: When I use Chat Data and Chat Meta Data, I see the following error:
<p align="left">
@@ -82,48 +31,8 @@ mysql>CREATE TABLE `users` (
) ENGINE=InnoDB AUTO_INCREMENT=101 DEFAULT CHARSET=utf8mb4 COMMENT='聊天用户表'
```
##### Q6: How to change the vector DB type in DB-GPT.
##### A6: Update the .env file and set VECTOR_STORE_TYPE.
DB-GPT currently supports the Chroma (default), Milvus (>2.1), and Weaviate vector databases.
To change the vector DB, set VECTOR_STORE_TYPE in your .env file, e.g. VECTOR_STORE_TYPE=Chroma. If you choose Milvus, also set MILVUS_URL and MILVUS_PORT.
To support another vector DB, you can integrate it yourself: [how to integrate](https://db-gpt.readthedocs.io/en/latest/modules/vector.html)
```commandline
#*******************************************************************#
#** VECTOR STORE SETTINGS **#
#*******************************************************************#
VECTOR_STORE_TYPE=Chroma
#MILVUS_URL=127.0.0.1
#MILVUS_PORT=19530
#MILVUS_USERNAME
#MILVUS_PASSWORD
#MILVUS_SECURE=
#WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.network
```
##### Q7: When I use vicuna-13b, the output contains illegal characters like this:
<p align="left">
<img src="../../assets/faq/illegal_character.png" width="800px" />
</p>
##### A7: Set KNOWLEDGE_SEARCH_TOP_SIZE or KNOWLEDGE_CHUNK_SIZE to a smaller value, then restart the server.
##### Q8: Adding a knowledge space fails with (pymysql.err.OperationalError) (1054, "Unknown column 'knowledge_space.context' in 'field list'")
##### A8:
1. Shut down dbgpt_server (Ctrl+C).
2. Add a `context` column to the `knowledge_space` table:
```commandline
mysql -h127.0.0.1 -uroot -paa12345678
```
3. Execute the SQL DDL:
```commandline
mysql> use knowledge_management;
mysql> ALTER TABLE knowledge_space ADD COLUMN context TEXT COMMENT "arguments context";
```
4. Restart the dbgpt server.

View File

@@ -0,0 +1,20 @@
Applications
==================================
DB-GPT is a web application that lets you chat with your database, chat with your knowledge base, and turn text into dashboards.
.. image:: ./assets/DB-GPT-Product.jpg
- Chat DB
- Chat Knowledge
- Dashboard
- Plugins
.. toctree::
:maxdepth: 2
:caption: Application
:name: chatdb
:hidden:
./application/chatdb/chatdb.md
./application/kbqa/kbqa.md

View File

@@ -0,0 +1,19 @@
ChatData & ChatDB
==================================
ChatData generates SQL from natural language and executes it. ChatDB lets you converse with metadata from the database, including metadata about databases, tables, and fields.

![db plugins demonstration](../../../../assets/chat_data/chat_data.jpg)
### 1. Choose a Datasource
If you are using DB-GPT for the first time, add a data source and set its connection information.
#### 1.1 Datasource management
![db plugins demonstration](../../../../assets/chat_data/db_entry.png)
#### 1.2 Connection management
![db plugins demonstration](../../../../assets/chat_data/db_connect.png)
#### 1.3 Add Datasource
![db plugins demonstration](../../../../assets/chat_data/add_datasource.png)
### 2. ChatData
After successfully setting up the data source, you can start conversing with the database: ask it to generate SQL for you, or inquire about the database's metadata.
![db plugins demonstration](../../../../assets/chat_data/chatdata_eg.png)
### 3. ChatDB
![db plugins demonstration](../../../../assets/chat_data/chatdb_eg.png)

View File

@@ -0,0 +1,80 @@
KBQA
==================================
DB-GPT supports a knowledge question-answering module, which aims to create an intelligent expert in the field of databases and provide professional knowledge-based answers to database practitioners.
![chat_knowledge](../../../../assets/chat_knowledge.png)
## KBQA abilities
```{admonition} KBQA abilities
* Knowledge Space
* Multi-source knowledge embedding
* Embedding argument adjustment
* Chat Knowledge
* Multiple vector DBs
```
## Steps to KBQA In DB-GPT
#### 1. Create a Knowledge Space
If you are using the Knowledge Space for the first time, create one and set its name, owner, and description.
![create_space](../../../../assets/kbqa/create_space.png)
#### 2. Create a Knowledge Document
DB-GPT supports multiple knowledge sources, including plain text, web URLs, and documents (PDF, Markdown, Word, PPT, HTML, and CSV).
After a document is uploaded, the backend automatically reads it, splits it into chunks, and imports it into the vector database. Alternatively, you can synchronize the document manually. You can also click "details" to view the document's chunked content.
##### 2.1 Choose Knowledge Type:
![document](../../../../assets/kbqa/document.jpg)
##### 2.2 Upload Document:
![upload](../../../../assets/kbqa/upload.jpg)
#### 3.Chat With Knowledge
![upload](../../../../assets/kbqa/begin_chat.jpg)
#### 4. Adjust Space Arguments
Each knowledge space supports argument customization, including arguments for vector retrieval and for the knowledge question-answering prompt.
##### 4.1 Embedding
Embedding Argument
![upload](../../../../assets/kbqa/embedding.png)
```{tip} Embedding arguments
* topk: return the top k vectors ranked by similarity score.
* recall_score: the minimum similarity score for a vector to be retrieved.
* recall_type: the recall strategy to use.
* model: the model used to create vector representations of text or other data.
* chunk_size: the size of the data chunks used in processing.
* chunk_overlap: the amount of overlap between adjacent data chunks.
```
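To make the chunking and retrieval arguments concrete, here is a minimal Python sketch of how they interact (an illustrative example, not DB-GPT's actual implementation):

```python
def split_into_chunks(text, chunk_size, chunk_overlap):
    # Slide a window of chunk_size characters, stepping by chunk_size - chunk_overlap,
    # so adjacent chunks share chunk_overlap characters.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

def recall(scored_chunks, topk, recall_score):
    # Keep (chunk, score) pairs whose similarity is at least recall_score,
    # then return the top k by score.
    kept = [c for c in scored_chunks if c[1] >= recall_score]
    return sorted(kept, key=lambda c: c[1], reverse=True)[:topk]
```

With chunk_size=500 and chunk_overlap=100, consecutive chunks share 100 characters, which helps keep a sentence that straddles a chunk boundary retrievable.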
##### 4.2 Prompt
Prompt Argument
![upload](../../../../assets/kbqa/prompt.png)
```{tip} Prompt arguments
* scene: a contextual parameter that defines the setting or environment in which the prompt is used.
* template: a pre-defined structure or format for the prompt, which helps keep generated responses consistent in style and tone.
* max_token: the maximum number of tokens allowed in a prompt.
```
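As an illustration of how a template and max_token might fit together (a hypothetical sketch with word-based truncation; real tokenizers count differently and DB-GPT's actual code may differ):

```python
def build_prompt(template, context, question, max_token):
    # Fill a QA-style template, then truncate to max_token whitespace-separated
    # tokens (a simplification of real token counting).
    prompt = template.format(context=context, question=question)
    return " ".join(prompt.split()[:max_token])
```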
#### 5. Change the Vector Database
```{admonition} Vector Store SETTINGS
#### Chroma
* VECTOR_STORE_TYPE=Chroma
#### MILVUS
* VECTOR_STORE_TYPE=Milvus
* MILVUS_URL=127.0.0.1
* MILVUS_PORT=19530
* MILVUS_USERNAME
* MILVUS_PASSWORD
* MILVUS_SECURE=
#### WEAVIATE
* WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.network
```

View File

@@ -0,0 +1,22 @@
FAQ
==================================
DB-GPT is a web application that lets you chat with your database, chat with your knowledge base, and turn text into dashboards.
.. image:: ./assets/DB-GPT-Product.jpg
- deploy
- llm
- chatdb
- kbqa
.. toctree::
:maxdepth: 2
:caption: Deploy
:name: deploy
:hidden:
./faq/deploy/deploy_faq.md
./faq/llm/llm_faq.md
./faq/chatdb/chatdb_faq.md
./faq/kbqa/kbqa_faq.md

View File

@@ -0,0 +1,10 @@
Chat DB FAQ
==================================
##### Q1: What is the difference between ChatData and ChatDB?
ChatData generates SQL from natural language and executes it. ChatDB lets you converse with metadata from the database, including metadata about databases, tables, and fields.
##### Q2: Which LLMs currently work best for text-to-SQL?
Currently, vicuna-13b-v1.5 and llama-2-70b are the most suitable for text-to-SQL.
##### Q3: How to fine-tune Text-to-SQL in DB-GPT?
There is a companion GitHub project for Text-to-SQL fine-tuning: [DB-GPT-Hub](https://github.com/eosphoros-ai/DB-GPT-Hub)

View File

@@ -0,0 +1,33 @@
Installation FAQ
==================================
##### Q1: Executing `pip install -r requirements.txt` fails because some packages cannot be found at the correct version.
Change the pip index to another mirror:
```bash
# pypi
$ pip install -r requirements.txt -i https://pypi.python.org/simple
```
or
```bash
# tsinghua
$ pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/
```
or
```bash
# aliyun
$ pip install -r requirements.txt -i http://mirrors.aliyun.com/pypi/simple/
```
##### Q2: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file
Make sure you have pulled the latest code, or create the data directory with `mkdir pilot/data`.
##### Q3: The model keeps getting killed.
Your GPU does not have enough VRAM; use hardware with more VRAM or switch to a smaller LLM.

View File

@@ -0,0 +1,58 @@
KBQA FAQ
==================================
##### Q1: text2vec-large-chinese not found
Make sure you have downloaded the text2vec-large-chinese embedding model correctly. git-lfs is required:
```{tip}
centos: yum install git-lfs
ubuntu: apt-get install git-lfs -y
macos: brew install git-lfs
```
```bash
cd models
git lfs clone https://huggingface.co/GanymedeNil/text2vec-large-chinese
```
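A quick way to confirm the weights actually downloaded is to check file sizes: without git-lfs, the clone leaves tiny pointer stubs instead of the real model files. A small sketch (the 1 KiB threshold is an assumption; real weight files are hundreds of MB):

```python
import os

def looks_like_lfs_pointer(path, threshold=1024):
    # A git-lfs pointer stub is a few hundred bytes of text;
    # real model weight files are far larger.
    return os.path.getsize(path) < threshold
```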
##### Q2: How to change the vector DB type in DB-GPT.
Update the .env file and set VECTOR_STORE_TYPE.
DB-GPT currently supports the Chroma (default), Milvus (>2.1), and Weaviate vector databases.
To change the vector DB, set VECTOR_STORE_TYPE in your .env file, e.g. VECTOR_STORE_TYPE=Chroma. If you choose Milvus, also set MILVUS_URL and MILVUS_PORT.
To support another vector DB, you can integrate it yourself: [how to integrate](https://db-gpt.readthedocs.io/en/latest/modules/vector.html)
```commandline
#*******************************************************************#
#** VECTOR STORE SETTINGS **#
#*******************************************************************#
VECTOR_STORE_TYPE=Chroma
#MILVUS_URL=127.0.0.1
#MILVUS_PORT=19530
#MILVUS_USERNAME
#MILVUS_PASSWORD
#MILVUS_SECURE=
#WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.network
```
##### Q3: When I use vicuna-13b, the output contains illegal characters like this:
<p align="left">
<img src="../../assets/faq/illegal_character.png" width="800px" />
</p>
Set KNOWLEDGE_SEARCH_TOP_SIZE or KNOWLEDGE_CHUNK_SIZE to a smaller value, then restart the server.
##### Q4: Adding a knowledge space fails with (pymysql.err.OperationalError) (1054, "Unknown column 'knowledge_space.context' in 'field list'")
1. Shut down dbgpt_server (Ctrl+C).
2. Add a `context` column to the `knowledge_space` table:
```commandline
mysql -h127.0.0.1 -uroot -paa12345678
```
3. Execute the SQL DDL:
```commandline
mysql> use knowledge_management;
mysql> ALTER TABLE knowledge_space ADD COLUMN context TEXT COMMENT "arguments context";
```
4. Restart the dbgpt server.

View File

@@ -0,0 +1,40 @@
LLM USE FAQ
==================================
##### Q1: How to use the OpenAI ChatGPT service
Change your LLM_MODEL:
````shell
LLM_MODEL=proxyllm
````
Set your OpenAI API key:
````shell
PROXY_API_KEY={your-openai-sk}
PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions
````
Make sure your OpenAI API key is valid.
##### Q2: How to use multiple GPUs
DB-GPT uses all available GPUs by default. You can set `CUDA_VISIBLE_DEVICES=0,1` in the `.env` file to use specific GPU IDs.
Optionally, you can specify the GPU IDs to use before the start command, as shown below:
````shell
# Specify 1 gpu
CUDA_VISIBLE_DEVICES=0 python3 pilot/server/dbgpt_server.py
# Specify 4 gpus
CUDA_VISIBLE_DEVICES=3,4,5,6 python3 pilot/server/dbgpt_server.py
````
You can set `MAX_GPU_MEMORY=xxGiB` in the `.env` file to configure the maximum memory used by each GPU.
##### Q3: Not enough memory
DB-GPT supports 8-bit and 4-bit quantization.
You can set `QUANTIZE_8bit=True` or `QUANTIZE_4bit=True` in the `.env` file to enable quantization (8-bit quantization is enabled by default).
Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit quantization can run with 48 GB of VRAM.
Note: you need to install the latest dependencies according to [requirements.txt](https://github.com/eosphoros-ai/DB-GPT/blob/main/requirements.txt).
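The VRAM figures above can be sanity-checked with back-of-the-envelope arithmetic: the weights alone take roughly (parameters × bits per weight ÷ 8) bytes, plus room for activations and the KV cache. A rough sketch (the 20% overhead factor is an assumption, not a measured value):

```python
def approx_vram_gib(params_billion, bits_per_weight, overhead=1.2):
    # Weight bytes = params * bits / 8; multiply by an assumed ~20% overhead
    # for activations and KV cache, then convert bytes to GiB.
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# Llama-2-70b at 8-bit comes out around 78 GiB, in line with the 80 GB figure above.
```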

View File

@@ -0,0 +1,24 @@
Install
==================================
DB-GPT is a web application that lets you chat with your database, chat with your knowledge base, and turn text into dashboards.
.. image:: ./assets/DB-GPT-Product.jpg
- deploy
- docker
- docker_compose
- environment
- deploy_faq
.. toctree::
:maxdepth: 2
:caption: Install
:name: deploy
:hidden:
./install/deploy/deploy.md
./install/docker/docker.md
./install/docker_compose/docker_compose.md
./install/environment/environment.md
./install/faq/deploy_faq.md

View File

@@ -0,0 +1,144 @@
# Installation From Source
This tutorial gives you a quick walkthrough of using DB-GPT with your own environment and data.
## Installation
To get started, install DB-GPT with the following steps.
### 1. Hardware Requirements
Because the project aims to achieve over 85% of ChatGPT's performance, there are certain hardware requirements. Overall, though, it can be deployed and used on consumer-grade graphics cards. The specific hardware requirements for deployment are as follows:
| GPU | VRAM Size | Performance |
|----------|-----------| ------------------------------------------- |
| RTX 4090 | 24 GB | Smooth conversation inference |
| RTX 3090 | 24 GB | Smooth conversation inference, better than V100 |
| V100 | 16 GB | Conversation inference possible, noticeable stutter |
| T4 | 16 GB | Conversation inference possible, noticeable stutter |
If your VRAM is not large enough, DB-GPT supports 8-bit and 4-bit quantization.
Here is the VRAM usage of the models we tested in some common scenarios:
| Model | Quantize | VRAM Size |
| --------- | --------- | --------- |
| vicuna-7b-v1.5 | 4-bit | 8 GB |
| vicuna-7b-v1.5 | 8-bit | 12 GB |
| vicuna-13b-v1.5 | 4-bit | 12 GB |
| vicuna-13b-v1.5 | 8-bit | 20 GB |
| llama-2-7b | 4-bit | 8 GB |
| llama-2-7b | 8-bit | 12 GB |
| llama-2-13b | 4-bit | 12 GB |
| llama-2-13b | 8-bit | 20 GB |
| llama-2-70b | 4-bit | 48 GB |
| llama-2-70b | 8-bit | 80 GB |
| baichuan-7b | 4-bit | 8 GB |
| baichuan-7b | 8-bit | 12 GB |
| baichuan-13b | 4-bit | 12 GB |
| baichuan-13b | 8-bit | 20 GB |
### 2. Install
```bash
git clone https://github.com/eosphoros-ai/DB-GPT.git
```
We use SQLite as the default database, so no database installation is needed. If you choose to connect to another database, follow our tutorial to install and configure it.
For the entire installation process of DB-GPT, we use a miniconda3 virtual environment. Create a virtual environment and install the Python dependencies:
[How to install Miniconda](https://docs.conda.io/en/latest/miniconda.html)
```bash
# requires python >= 3.10
conda create -n dbgpt_env python=3.10
conda activate dbgpt_env
pip install -r requirements.txt
```
Before using the DB-GPT knowledge module, download the required spaCy model:
```bash
python -m spacy download zh_core_web_sm
```
Once the environment is installed, create a new folder "models" in the DB-GPT project root; put all models downloaded from Hugging Face in this directory.
```{tip}
Notice: make sure you have installed git-lfs.
centos: yum install git-lfs
ubuntu: apt-get install git-lfs -y
macos:brew install git-lfs
```
```bash
cd DB-GPT
mkdir models && cd models

#### llm model
git clone https://huggingface.co/lmsys/vicuna-13b-v1.5
# or
git clone https://huggingface.co/THUDM/chatglm2-6b

#### embedding model
git clone https://huggingface.co/GanymedeNil/text2vec-large-chinese
# or
git clone https://huggingface.co/moka-ai/m3e-large
```
The model files are large and take a long time to download. While they download, configure the .env file, which is created by copying .env.template:
If you want to use the OpenAI LLM service, see the [LLM Use FAQ](https://db-gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html).
```{tip}
cp .env.template .env
```
You can configure basic parameters in the .env file, for example setting LLM_MODEL to the model to be used.
([Vicuna-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5), based on llama-2, has been released; we recommend setting `LLM_MODEL=vicuna-13b-v1.5` to try this model.)
### 3. Run
You can refer to this document to obtain the Vicuna weights: [Vicuna](https://github.com/lm-sys/FastChat/blob/main/README.md#model-weights) .
If you have difficulty with this step, you can also directly use the model from [this link](https://huggingface.co/Tribbiani/vicuna-7b) as a replacement.
In the .env file, set your vector store type, e.g. VECTOR_STORE_TYPE=Chroma. Chroma and Milvus (version > 2.1) are currently supported.
1. Run the DB-GPT server:
```bash
$ python pilot/server/dbgpt_server.py
```
Open http://localhost:5000 with your browser to see the product.
If you want to access an external LLM service:
1. Set LLM_MODEL=YOUR_MODEL_NAME and MODEL_SERVER=YOUR_MODEL_SERVER (e.g. http://localhost:5000) in the .env file.
2. Run dbgpt_server.py in light mode:
If you want to learn about dbgpt-webui, see https://github.com/csunny/DB-GPT/tree/new-page-framework/datacenter
```bash
$ python pilot/server/dbgpt_server.py --light
```
### 4. Multiple GPUs
DB-GPT uses all available GPUs by default. You can set `CUDA_VISIBLE_DEVICES=0,1` in the `.env` file to use specific GPU IDs.
Optionally, you can specify the GPU IDs to use before the start command, as shown below:
````shell
# Specify 1 gpu
CUDA_VISIBLE_DEVICES=0 python3 pilot/server/dbgpt_server.py
# Specify 4 gpus
CUDA_VISIBLE_DEVICES=3,4,5,6 python3 pilot/server/dbgpt_server.py
````
You can set `MAX_GPU_MEMORY=xxGiB` in the `.env` file to configure the maximum memory used by each GPU.
### 5. Not Enough Memory
DB-GPT supports 8-bit and 4-bit quantization.
You can set `QUANTIZE_8bit=True` or `QUANTIZE_4bit=True` in the `.env` file to enable quantization (8-bit quantization is enabled by default).
Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit quantization can run with 48 GB of VRAM.
Note: you need to install the latest dependencies according to [requirements.txt](https://github.com/eosphoros-ai/DB-GPT/blob/main/requirements.txt).

View File

@@ -0,0 +1,87 @@
Docker Install
==================================
### Docker (Experimental)
#### 1. Building Docker image
```bash
$ bash docker/build_all_images.sh
```
Review images by listing them:
```bash
$ docker images|grep db-gpt
```
The output should look something like this:
```
db-gpt-allinone latest e1ffd20b85ac 45 minutes ago 14.5GB
db-gpt latest e36fb0cca5d9 3 hours ago 14GB
```
You can pass some parameters to docker/build_all_images.sh.
```bash
$ bash docker/build_all_images.sh \
--base-image nvidia/cuda:11.8.0-devel-ubuntu22.04 \
--pip-index-url https://pypi.tuna.tsinghua.edu.cn/simple \
--language zh
```
Run `bash docker/build_all_images.sh --help` to see more usage options.
#### 2. Run the all-in-one docker container
**Run with local model**
```bash
$ docker run --gpus "device=0" -d -p 3306:3306 \
-p 5000:5000 \
-e LOCAL_DB_HOST=127.0.0.1 \
-e LOCAL_DB_PASSWORD=aa123456 \
-e MYSQL_ROOT_PASSWORD=aa123456 \
-e LLM_MODEL=vicuna-13b \
-e LANGUAGE=zh \
-v /data/models:/app/models \
--name db-gpt-allinone \
db-gpt-allinone
```
Open http://localhost:5000 with your browser to see the product.
- `-e LLM_MODEL=vicuna-13b` means we use vicuna-13b as the LLM model; see /pilot/configs/model_config.LLM_MODEL_CONFIG
- `-v /data/models:/app/models` mounts the local model directory `/data/models` into the container at `/app/models`; replace it with your own model directory.
You can view the logs with:
```bash
$ docker logs db-gpt-allinone -f
```
**Run with openai interface**
```bash
$ PROXY_API_KEY="Your api key"
$ PROXY_SERVER_URL="https://api.openai.com/v1/chat/completions"
$ docker run --gpus "device=0" -d -p 3306:3306 \
-p 5000:5000 \
-e LOCAL_DB_HOST=127.0.0.1 \
-e LOCAL_DB_PASSWORD=aa123456 \
-e MYSQL_ROOT_PASSWORD=aa123456 \
-e LLM_MODEL=proxyllm \
-e PROXY_API_KEY=$PROXY_API_KEY \
-e PROXY_SERVER_URL=$PROXY_SERVER_URL \
-e LANGUAGE=zh \
-v /data/models/text2vec-large-chinese:/app/models/text2vec-large-chinese \
--name db-gpt-allinone \
db-gpt-allinone
```
- `-e LLM_MODEL=proxyllm` means we use a proxy LLM (the OpenAI interface, FastChat interface, etc.)
- `-v /data/models/text2vec-large-chinese:/app/models/text2vec-large-chinese` mounts the local text2vec model into the container.
Open http://localhost:5000 with your browser to see the product.

View File

@@ -0,0 +1,26 @@
Docker Compose
==================================
#### Run with docker compose
```bash
$ docker compose up -d
```
The output should look something like this:
```
[+] Building 0.0s (0/0)
[+] Running 2/2
✔ Container db-gpt-db-1 Started 0.4s
✔ Container db-gpt-webserver-1 Started
```
You can view the logs with:
```bash
$ docker logs db-gpt-webserver-1 -f
```
Open http://localhost:5000 with your browser to see the product.
You can open docker-compose.yml in the project root directory for more details.

View File

@@ -0,0 +1,122 @@
Env Parameter
==================================
```{admonition} LLM MODEL Config
LLM Model Name, see /pilot/configs/model_config.LLM_MODEL_CONFIG
* LLM_MODEL=vicuna-13b
MODEL_SERVER_ADDRESS
* MODEL_SERVER=http://127.0.0.1:8000
LIMIT_MODEL_CONCURRENCY
* LIMIT_MODEL_CONCURRENCY=5
MAX_POSITION_EMBEDDINGS
* MAX_POSITION_EMBEDDINGS=4096
QUANTIZE_QLORA
* QUANTIZE_QLORA=True
QUANTIZE_8bit
* QUANTIZE_8bit=True
```
```{admonition} LLM PROXY Settings
OPENAI Key
* PROXY_API_KEY={your-openai-sk}
* PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions
From https://bard.google.com/: press F12 -> Application -> copy the __Secure-1PSID cookie
* BARD_PROXY_API_KEY={your-bard-token}
```
```{admonition} DATABASE SETTINGS
### SQLite database (Current default database)
* LOCAL_DB_PATH=data/default_sqlite.db
* LOCAL_DB_TYPE=sqlite # Database Type default:sqlite
### MYSQL database
* LOCAL_DB_TYPE=mysql
* LOCAL_DB_USER=root
* LOCAL_DB_PASSWORD=aa12345678
* LOCAL_DB_HOST=127.0.0.1
* LOCAL_DB_PORT=3306
```
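As an illustration, these settings typically map onto a SQLAlchemy-style connection URL along these lines (a hypothetical sketch; DB-GPT's own connection code may differ):

```python
def db_url(db_type="sqlite", user=None, password=None, host=None, port=None,
           path="data/default_sqlite.db"):
    # Build a SQLAlchemy-style URL from the documented LOCAL_DB_* settings.
    if db_type == "sqlite":
        return f"sqlite:///{path}"
    return f"mysql+pymysql://{user}:{password}@{host}:{port}"
```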
```{admonition} EMBEDDING SETTINGS
EMBEDDING MODEL Name, see /pilot/configs/model_config.LLM_MODEL_CONFIG
* EMBEDDING_MODEL=text2vec
Embedding Chunk size, default 500
* KNOWLEDGE_CHUNK_SIZE=500
Embedding Chunk Overlap, default 100
* KNOWLEDGE_CHUNK_OVERLAP=100
Embedding recall top k, default 5
* KNOWLEDGE_SEARCH_TOP_SIZE=5
Embedding recall max tokens, default 2000
* KNOWLEDGE_SEARCH_MAX_TOKEN=2000
```
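A minimal sketch of reading these settings with their documented defaults (illustrative; DB-GPT's actual config class may differ):

```python
import os

def knowledge_settings():
    # Read the documented embedding settings, falling back to their defaults.
    return {
        "chunk_size": int(os.getenv("KNOWLEDGE_CHUNK_SIZE", "500")),
        "chunk_overlap": int(os.getenv("KNOWLEDGE_CHUNK_OVERLAP", "100")),
        "top_k": int(os.getenv("KNOWLEDGE_SEARCH_TOP_SIZE", "5")),
        "max_token": int(os.getenv("KNOWLEDGE_SEARCH_MAX_TOKEN", "2000")),
    }
```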
```{admonition} Vector Store SETTINGS
#### Chroma
* VECTOR_STORE_TYPE=Chroma
#### MILVUS
* VECTOR_STORE_TYPE=Milvus
* MILVUS_URL=127.0.0.1
* MILVUS_PORT=19530
* MILVUS_USERNAME
* MILVUS_PASSWORD
* MILVUS_SECURE=
#### WEAVIATE
* VECTOR_STORE_TYPE=Weaviate
* WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.network
```
```{admonition} Multi-GPU Setting
See https://developer.nvidia.com/blog/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/
If CUDA_VISIBLE_DEVICES is not configured, all available GPUs will be used
* CUDA_VISIBLE_DEVICES=0
Optionally, specify the GPU IDs to use before the start command
* CUDA_VISIBLE_DEVICES=3,4,5,6
You can configure the maximum memory used by each GPU
* MAX_GPU_MEMORY=16GiB
```
```{admonition} Other Setting
#### Language Settings (influences prompt language)
* LANGUAGE=en
* LANGUAGE=zh
```

View File

@@ -1,4 +1,4 @@
# Installation
# Python SDK
DB-GPT provides a third-party Python API package that you can integrate into your own code.
### Installation from Pip

View File

@@ -6,19 +6,18 @@ This is a collection of DB-GPT tutorials on Medium.
DB-GPT is divided into several functions, including chat with knowledge base, execute SQL, chat with database, and execute plugins.
### Introduction
#### youtube
[What is DB-GPT](https://www.youtube.com/watch?v=QszhVJerc0I)
### Knowledge
[How to deploy DB-GPT step by step](https://www.youtube.com/watch?v=OJGU4fQCqPs)
[How to Create your own knowledge repository](https://db-gpt.readthedocs.io/en/latest/modules/knownledge.html)
![Add new Knowledge demonstration](../../assets/new_knownledge.gif)
#### bilibili
[What is DB-GPT](https://www.bilibili.com/video/BV1SM4y1a7Nj/?spm_id_from=333.788&vd_source=7792e22c03b7da3c556a450eb42c8a0f)
[How to deploy DB-GPT step by step](https://www.bilibili.com/video/BV1mu411Y7ve/?spm_id_from=pageDriver&vd_source=7792e22c03b7da3c556a450eb42c8a0f)
### SQL Generation
![sql generation demonstration](../../assets/demo_en.gif)
### SQL Execute
![sql execute demonstration](../../assets/auto_sql_en.gif)
### Plugins
![db plugins demonstration](../../assets/dashboard.png)

View File

@@ -47,10 +47,12 @@ Getting Started
:caption: Getting Started
:hidden:
getting_started/getting_started.md
getting_started/install.rst
getting_started/application.md
getting_started/installation.md
getting_started/concepts.md
getting_started/tutorials.md
getting_started/faq.rst
Modules

View File

@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-10 16:40+0800\n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -19,135 +19,123 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../faq.md:1 c4b5b298c447462ba7aaffd954549def
msgid "FAQ"
msgstr "FAQ"
#: ../../faq.md:2 533e388b78594244aa0acbf2b0263f60
msgid "Q1: text2vec-large-chinese not found"
msgstr "Q1: text2vec-large-chinese not found"
#: ../../faq.md:4 e1c7e28c60f24f7a983f30ee43bad32e
msgid ""
"A1: make sure you have download text2vec-large-chinese embedding model in"
" right way"
msgstr "按照正确的姿势下载text2vec-large-chinese模型"
#: ../../faq.md:16 7cbfd6629267423a879735dd0dbba24e
msgid ""
"Q2: execute `pip install -r requirements.txt` error, found some package "
"cannot find correct version."
msgstr "执行`pip install -r requirements.txt`报错"
#: ../../faq.md:19 a2bedf7cf2984d35bc9e3edffd9cb991
msgid "A2: change the pip source."
msgstr "修改pip源"
#: ../../faq.md:26 ../../faq.md:33 307385dc581841148bad0dfa95541722
#: 55a9cd0665e74b47bac71e652a80c8bd
msgid "or"
msgstr "或"
#: ../../faq.md:41 8ec34a9f22744313825bd53e90e86695
msgid "Q3:Access denied for user 'root@localhost'(using password :NO)"
msgstr "或"
#: ../../faq.md:43 85d67326827145859f5af707c96c852f
msgid "A3: make sure you have installed mysql instance in right way"
msgstr "按照正确姿势安装mysql"
#: ../../faq.md:45 c51cffb2777c49c6b15485e6151f8b70
msgid "Docker:"
msgstr "Docker:"
#: ../../faq.md:49 09f16d57dbeb4f34a1274b2ac52aacf5
msgid "Normal: [download mysql instance](https://dev.mysql.com/downloads/mysql/)"
msgstr "[download mysql instance](https://dev.mysql.com/downloads/mysql/)"
#: ../../faq.md:52 ab730de8f02f4cb895510a52ca90c02f
msgid "Q4:When I use openai(MODEL_SERVER=proxyllm) to chat"
msgstr "使用openai-chatgpt模型时(MODEL_SERVER=proxyllm)"
#: ../../faq.md:57 8308cf21e0654e5798c9a2638b8565ec
msgid "A4: make sure your openapi API_KEY is available"
msgstr "确认openapi API_KEY是否可用"
#: ../../faq.md:59 ff6361a5438943c2b9e613aa9e2322bf
#: ../../faq.md:8 ded9afcc91594bce8950aa688058a5b6
msgid "Q5:When I Chat Data and Chat Meta Data, I found the error"
msgstr "Chat Data and Chat Meta Data报如下错"
#: ../../faq.md:64 1a89acc3bf074b77b9e44d20f9b9c3cb
#: ../../faq.md:13 25237221f65c47a2b62f5afbe637d6e7
msgid "A5: you have not create your database and table"
msgstr "需要创建自己的数据库"
#: ../../faq.md:65 e7d63e2aa0d24c5d90dd1865753c7d52
#: ../../faq.md:14 8c9024f1f4d7414499587e3bdf7d56d1
msgid "1.create your database."
msgstr "1.先创建数据库"
#: ../../faq.md:71 8b0de08106a144b98538b12bc671cbb2
#: ../../faq.md:20 afc7299d3b4e4d98b17fd6157d440970
msgid "2.create table {$your_table} and insert your data. eg:"
msgstr "然后创建数据表,模拟数据"
#: ../../faq.md:85 abb226843354448a925de4deb52db555
msgid "Q6:How to change Vector DB Type in DB-GPT."
msgstr ""
#~ msgid "FAQ"
#~ msgstr "FAQ"
#: ../../faq.md:87 cc80165e2f6b46a28e53359ea7b3e0af
msgid "A6: Update .env file and set VECTOR_STORE_TYPE."
msgstr ""
#~ msgid "Q1: text2vec-large-chinese not found"
#~ msgstr "Q1: text2vec-large-chinese not found"
#: ../../faq.md:88 d30fa1e2fdd94e00a72ef247af41fb43
#, fuzzy
msgid ""
"DB-GPT currently support Chroma(Default), Milvus(>2.1), Weaviate vector "
"database. If you want to change vector db, Update your .env, set your "
"vector store type, VECTOR_STORE_TYPE=Chroma (now only support Chroma and "
"Milvus(>2.1), if you set Milvus, please set MILVUS_URL and MILVUS_PORT) "
"If you want to support more vector db, you can integrate yourself.[how to"
" integrate](https://db-gpt.readthedocs.io/en/latest/modules/vector.html)"
msgstr ""
"DB-GPT当前支持Chroma(默认),如果你想替换向量数据库,需要更新.env文件VECTOR_STORE_TYPE=Chroma (now"
" only support Chroma, Milvus Weaviate, if you set Milvus(>2.1), please "
"set MILVUS_URL and "
"MILVUS_PORT)。如果当前支持向量数据库无法满足你的需求,可以集成使用自己的向量数据库。[怎样集成](https://db-"
"gpt.readthedocs.io/en/latest/modules/vector.html)"
#~ msgid ""
#~ "A1: make sure you have download "
#~ "text2vec-large-chinese embedding model in"
#~ " right way"
#~ msgstr "按照正确的姿势下载text2vec-large-chinese模型"
#: ../../faq.md:104 bed01b4f96e04f61aa536cf615254c87
#, fuzzy
msgid "Q7:When I use vicuna-13b, found some illegal character like this."
msgstr "使用vicuna-13b知识库问答出现乱码"
#~ msgid ""
#~ "Q2: execute `pip install -r "
#~ "requirements.txt` error, found some package"
#~ " cannot find correct version."
#~ msgstr "执行`pip install -r requirements.txt`报错"
#: ../../faq.md:109 1ea68292da6d46a89944212948d1719d
#, fuzzy
msgid ""
"A7: set KNOWLEDGE_SEARCH_TOP_SIZE smaller or set KNOWLEDGE_CHUNK_SIZE "
"smaller, and reboot server."
msgstr "将KNOWLEDGE_SEARCH_TOP_SIZE和KNOWLEDGE_CHUNK_SIZE设置小点然后重启"
#~ msgid "A2: change the pip source."
#~ msgstr "修改pip源"
#: ../../faq.md:111 256b58e3749a4a17acbc456c71581597
msgid ""
"Q8:space add error (pymysql.err.OperationalError) (1054, \"Unknown column"
" 'knowledge_space.context' in 'field list'\")"
msgstr "Q8:space add error (pymysql.err.OperationalError) (1054, \"Unknown column"
" 'knowledge_space.context' in 'field list'\")"
#~ msgid "or"
#~ msgstr ""
#: ../../faq.md:114 febe9ec0c51b4e10b3a2bb0aece94e7c
msgid "A8:"
msgstr "A8:"
#~ msgid "Q3:Access denied for user 'root@localhost'(using password :NO)"
#~ msgstr "或"
#: ../../faq.md:115 d713a368fe8147feb6db8f89938f3dcd
msgid "1.shutdown dbgpt_server(ctrl c)"
msgstr ""
#~ msgid "A3: make sure you have installed mysql instance in right way"
#~ msgstr "按照正确姿势安装mysql"
#: ../../faq.md:117 4c6ad307caff4b57bb6c7085cc42fb64
msgid "2.add column context for table knowledge_space"
msgstr "2.add column context for table knowledge_space"
#~ msgid "Docker:"
#~ msgstr "Docker:"
#: ../../faq.md:121 a3f955c090be4163bdb934eb32c25fd5
msgid "3.execute sql ddl"
msgstr "3.执行 sql ddl"
#~ msgid ""
#~ "Normal: [download mysql "
#~ "instance](https://dev.mysql.com/downloads/mysql/)"
#~ msgstr "[download mysql instance](https://dev.mysql.com/downloads/mysql/)"
#: ../../faq.md:126 1861332d1c0342e2b968732fba55fa54
msgid "4.restart dbgpt server"
msgstr "4.重启 dbgpt server"
#~ msgid "Q4:When I use openai(MODEL_SERVER=proxyllm) to chat"
#~ msgstr "使用openai-chatgpt模型时(MODEL_SERVER=proxyllm)"
#~ msgid "A4: make sure your openapi API_KEY is available"
#~ msgstr "确认openapi API_KEY是否可用"
#~ msgid "Q6:How to change Vector DB Type in DB-GPT."
#~ msgstr ""
#~ msgid "A6: Update .env file and set VECTOR_STORE_TYPE."
#~ msgstr ""
#~ msgid ""
#~ "DB-GPT currently support Chroma(Default), "
#~ "Milvus(>2.1), Weaviate vector database. If "
#~ "you want to change vector db, "
#~ "Update your .env, set your vector "
#~ "store type, VECTOR_STORE_TYPE=Chroma (now only"
#~ " support Chroma and Milvus(>2.1), if "
#~ "you set Milvus, please set MILVUS_URL"
#~ " and MILVUS_PORT) If you want to "
#~ "support more vector db, you can "
#~ "integrate yourself.[how to integrate](https://db-"
#~ "gpt.readthedocs.io/en/latest/modules/vector.html)"
#~ msgstr ""
#~ "DB-"
#~ "GPT当前支持Chroma(默认),如果你想替换向量数据库,需要更新.env文件VECTOR_STORE_TYPE=Chroma "
#~ "(now only support Chroma, Milvus "
#~ "Weaviate, if you set Milvus(>2.1), "
#~ "please set MILVUS_URL and "
#~ "MILVUS_PORT)。如果当前支持向量数据库无法满足你的需求,可以集成使用自己的向量数据库。[怎样集成](https://db-"
#~ "gpt.readthedocs.io/en/latest/modules/vector.html)"
#~ msgid "Q7:When I use vicuna-13b, found some illegal character like this."
#~ msgstr "使用vicuna-13b知识库问答出现乱码"
#~ msgid ""
#~ "A7: set KNOWLEDGE_SEARCH_TOP_SIZE smaller or"
#~ " set KNOWLEDGE_CHUNK_SIZE smaller, and "
#~ "reboot server."
#~ msgstr "将KNOWLEDGE_SEARCH_TOP_SIZE和KNOWLEDGE_CHUNK_SIZE设置小点然后重启"
#~ msgid ""
#~ "Q8:space add error (pymysql.err.OperationalError)"
#~ " (1054, \"Unknown column "
#~ "'knowledge_space.context' in 'field list'\")"
#~ msgstr ""
#~ "Q8:space add error (pymysql.err.OperationalError)"
#~ " (1054, \"Unknown column "
#~ "'knowledge_space.context' in 'field list'\")"
#~ msgid "A8:"
#~ msgstr "A8:"
#~ msgid "1.shutdown dbgpt_server(ctrl c)"
#~ msgstr ""
#~ msgid "2.add column context for table knowledge_space"
#~ msgstr "2.add column context for table knowledge_space"
#~ msgid "3.execute sql ddl"
#~ msgstr "3.执行 sql ddl"
#~ msgid "4.restart dbgpt server"
#~ msgstr "4.重启 dbgpt server"


@@ -0,0 +1,51 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/application.rst:13
msgid "Application"
msgstr ""
#: ../../getting_started/application.rst:2 4de9609785bc406aa6f3965f057f90d8
msgid "Applications"
msgstr ""
#: ../../getting_started/application.rst:3 1ef4f6d81f7c4a2f9ee330fc890017dd
msgid ""
"DB-GPT product is a Web application that you can chat database, chat "
"knowledge, text2dashboard."
msgstr ""
#: ../../getting_started/application.rst:8 0451376d435b47bfa7b3f81c87683610
msgid "Chat DB"
msgstr ""
#: ../../getting_started/application.rst:9 4b1372dd4ae34881891c1dbcd83b92bf
msgid "Chat Knowledge"
msgstr ""
#: ../../getting_started/application.rst:10 d8807a357f1144ccba933338cd0f619a
msgid "Dashboard"
msgstr ""
#: ../../getting_started/application.rst:11 f6a5806d4f0d4271bb0964935d9c2ff3
msgid "Plugins"
msgstr ""


@@ -0,0 +1,118 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/application/chatdb/chatdb.md:1
#: e3984d7305214fc59fc356bd2e382543
msgid "ChatData & ChatDB"
msgstr "ChatData & ChatDB"
#: ../../getting_started/application/chatdb/chatdb.md:3
#: e74daab738984be990aaa21ab5af7046
msgid ""
"ChatData generates SQL from natural language and executes it. ChatDB "
"involves conversing with metadata from the Database, including metadata "
"about databases, tables, and fields.![db plugins "
"demonstration](../../../../assets/chat_data/chat_data.jpg)"
msgstr "ChatData 是会将自然语言生成SQL并将其执行。ChatDB是与Database里面的元数据包括库、表、字段的元数据进行对话.![db plugins "
"demonstration](../../../../assets/chat_data/chat_data.jpg)"
#: ../../getting_started/application/chatdb/chatdb.md:3
#: ../../getting_started/application/chatdb/chatdb.md:7
#: ../../getting_started/application/chatdb/chatdb.md:9
#: ../../getting_started/application/chatdb/chatdb.md:11
#: ../../getting_started/application/chatdb/chatdb.md:13
#: ../../getting_started/application/chatdb/chatdb.md:17
#: 109d4292c91f4ca5a6b682189f10620a 23b1865780a24ad2b765196faff15550
#: 3c87a8907e854e5d9a61b1ef658e5935 40eefe75a51e4444b1e4b67bdddc6da9
#: 73eee6111545418e8f9ae533a255d589 876ee62e4e404da6b5d6c5cba0259e5c
msgid "db plugins demonstration"
msgstr "db plugins demonstration"
#: ../../getting_started/application/chatdb/chatdb.md:4
#: a760078b833e434681da72c87342083f
msgid "1.Choose Datasource"
msgstr "1.Choose Datasource"
#: ../../getting_started/application/chatdb/chatdb.md:5
#: b68ac469c7d34026b1cb8b587a446737
msgid ""
"If you are using DB-GPT for the first time, you need to add a data source"
" and set the relevant connection information for the data source."
msgstr "如果你是第一次使用DB-GPT, 首先需要添加数据源,设置数据源的相关连接信息"
#: ../../getting_started/application/chatdb/chatdb.md:6
#: 8edecf9287ce4077a984617f0a06e30c
msgid "1.1 Datasource management"
msgstr "1.1 Datasource management"
#: ../../getting_started/application/chatdb/chatdb.md:7
#: af7b5d2f6c9b4666a3466fed05968258
msgid "![db plugins demonstration](../../../../assets/chat_data/db_entry.png)"
msgstr "![db plugins demonstration](../../../../assets/chat_data/db_entry.png)"
#: ../../getting_started/application/chatdb/chatdb.md:8
#: f7d49ffe448d49169713452f54b3437c
msgid "1.2 Connection management"
msgstr "1.2 Connection管理"
#: ../../getting_started/application/chatdb/chatdb.md:9
#: 109a8c9da05d4ec7a80ae1665b8d48bb
msgid "![db plugins demonstration](../../../../assets/chat_data/db_connect.png)"
msgstr "![db plugins demonstration](../../../../assets/chat_data/db_connect.png)"
#: ../../getting_started/application/chatdb/chatdb.md:10
#: ac283fff85bf41efbd2cfeeda5980f9a
msgid "1.3 Add Datasource"
msgstr "1.3 添加Datasource"
#: ../../getting_started/application/chatdb/chatdb.md:11
#: a25dec01f4824bfe88cc801c93bb3e3b
msgid ""
"![db plugins "
"demonstration](../../../../assets/chat_data/add_datasource.png)"
msgstr "![db plugins "
"demonstration](../../../../assets/chat_data/add_datasource.png)"
#: ../../getting_started/application/chatdb/chatdb.md:12
#: 4751d6d3744148cab130a901594f673a
msgid "2.ChatData"
msgstr "2.ChatData"
#: ../../getting_started/application/chatdb/chatdb.md:13
#: bad82b572ba34ffb8669f0f8ff9b0a05
msgid ""
"After successfully setting up the data source, you can start conversing "
"with the database. You can ask it to generate SQL for you or inquire "
"about relevant information on the database's metadata. ![db plugins "
"demonstration](../../../../assets/chat_data/chatdata_eg.png)"
msgstr "设置数据源成功后就可以和数据库进行对话了。你可以让它帮你生成SQL也可以和问它数据库元数据的相关信息。 ![db plugins "
"demonstration](../../../../assets/chat_data/chatdata_eg.png)"
#: ../../getting_started/application/chatdb/chatdb.md:16
#: 0bf9c5a2f3764b36abc82102c53d3cd4
msgid "3.ChatDB"
msgstr "3.ChatDB"
#: ../../getting_started/application/chatdb/chatdb.md:17
#: f06e7583c2484a959c1681b1db0acfaa
msgid "![db plugins demonstration](../../../../assets/chat_data/chatdb_eg.png)"
msgstr "![db plugins demonstration](../../../../assets/chat_data/chatdb_eg.png)"


@@ -0,0 +1,21 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"


@@ -0,0 +1,318 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/application/kbqa/kbqa.md:1
#: a06d9329c98f44ffaaf6fc09ba53d97e
msgid "KBQA"
msgstr ""
#: ../../getting_started/application/kbqa/kbqa.md:3
#: bb6943c115754b1fafb7467313100753
msgid ""
"DB-GPT supports a knowledge question-answering module, which aims to "
"create an intelligent expert in the field of databases and provide "
"professional knowledge-based answers to database practitioners."
msgstr " DB-GPT支持知识问答模块知识问答的初衷是打造DB领域的智能专家为数据库从业人员解决专业的知识问题回答"
#: ../../getting_started/application/kbqa/kbqa.md:5
#: a13ac963479e4d4fbcbcc4fec7863274
msgid "![chat_knowledge](../../../../assets/chat_knowledge.png)"
msgstr "![chat_knowledge](../../../../assets/chat_knowledge.png)"
#: ../../getting_started/application/kbqa/kbqa.md:5
#: f4db3a8d04634059a74be1b2b3c948ef
msgid "chat_knowledge"
msgstr "chat_knowledge"
#: ../../getting_started/application/kbqa/kbqa.md:7
#: ../../getting_started/application/kbqa/kbqa.md:10
#: 86856cce95c845eb83ed44d8713b0ef6 f787e9ad9f7444a5923fe3476ab4d287
msgid "KBQA abilities"
msgstr "KBQA现有能力"
#: ../../getting_started/application/kbqa/kbqa.md:11
#: df1c88b28a2d46b2b3f5fae1caec000e
msgid "Knowledge Space."
msgstr "知识空间"
#: ../../getting_started/application/kbqa/kbqa.md:12
#: 7f6c99d0c1394f08b246daa1343c24b2
msgid "Multi Source Knowledge Source Embedding."
msgstr "多数据源Embedding"
#: ../../getting_started/application/kbqa/kbqa.md:13
#: 3c1274c3e65f426bbee219227f681e27
msgid "Embedding Argument Adjust"
msgstr "Embedding参数自定义"
#: ../../getting_started/application/kbqa/kbqa.md:14
#: 52fb29f7ee8745e6b78145f5be23b8ce
msgid "Chat Knowledge"
msgstr "知识问答"
#: ../../getting_started/application/kbqa/kbqa.md:15
#: 83e427c46d844c6a9bb5954342ae7c42
msgid "Multi Vector DB"
msgstr "多向量数据库管理"
#: ../../getting_started/application/kbqa/kbqa.md:19
#: 07c1e189d0f84db69b6a51cf23ede7dc
msgid "Steps to KBQA In DB-GPT"
msgstr "怎样一步一步使用KBQA"
#: ../../getting_started/application/kbqa/kbqa.md:21
#: 36e4195e5c3a4bedb7d11243e5705f0a
msgid "1.Create Knowledge Space"
msgstr "1.首先创建知识空间"
#: ../../getting_started/application/kbqa/kbqa.md:22
#: 5c92d41df2b04374a6c7ce40308f738b
msgid ""
"If you are using Knowledge Space for the first time, you need to create a"
" Knowledge Space and set your name, owner, description. "
"![create_space](../../../../assets/kbqa/create_space.png)"
msgstr "如果你是第一次使用,先创建知识空间,指定名字,拥有者和描述信息"
#: ../../getting_started/application/kbqa/kbqa.md:22
#: c5f6e842be384977be1bd667b1e0ab5d
msgid "create_space"
msgstr "create_space"
#: ../../getting_started/application/kbqa/kbqa.md:27
#: c98434920955416ca5273892f9086bc5
msgid "2.Create Knowledge Document"
msgstr "2.上传知识"
#: ../../getting_started/application/kbqa/kbqa.md:28
#: d63f798e8270455587d1afd98c72e995
msgid ""
"DB-GPT now support Multi Knowledge Source, including Text, WebUrl, and "
"Document(PDF, Markdown, Word, PPT, HTML and CSV). After successfully "
"uploading a document for translation, the backend system will "
"automatically read and split and chunk the document, and then import it "
"into the vector database. Alternatively, you can manually synchronize the"
" document. You can also click on details to view the specific document "
"slicing content."
msgstr "DB-GPT支持多数据源包括Text纯文本, WebUrl和Document(PDF, Markdown, Word, PPT, HTML and CSV)。上传文档成功后后台会自动将文档内容进行读取,切片,然后导入到向量数据库中,当然你也可以手动进行同步,你也可以点击详情查看具体的文档切片内容"
#: ../../getting_started/application/kbqa/kbqa.md:30
#: 18becde7bdc34e9cb7017ff7711d1634
msgid "2.1 Choose Knowledge Type:"
msgstr "2.1 选择知识类型"
#: ../../getting_started/application/kbqa/kbqa.md:31
#: 755dfd18812249e591647d233ec9253b
msgid "![document](../../../../assets/kbqa/document.jpg)"
msgstr "![document](../../../../assets/kbqa/document.jpg)"
#: ../../getting_started/application/kbqa/kbqa.md:31
#: 10faa2236fa84dc284f7ce6d68acc43c
msgid "document"
msgstr "document"
#: ../../getting_started/application/kbqa/kbqa.md:33
#: 04449bfa78b14d1cb42c1259f67531d1
msgid "2.2 Upload Document:"
msgstr "2.2上传文档"
#: ../../getting_started/application/kbqa/kbqa.md:34
#: de14e2d7a6c54623bfc8af2fc6e20c62
msgid "![upload](../../../../assets/kbqa/upload.jpg)"
msgstr "![upload](../../../../assets/kbqa/upload.jpg)"
#: ../../getting_started/application/kbqa/kbqa.md:34
#: ../../getting_started/application/kbqa/kbqa.md:38
#: ../../getting_started/application/kbqa/kbqa.md:43
#: ../../getting_started/application/kbqa/kbqa.md:56
#: 0e3ab13b9d064b238fba283d6f466051 65ea0b9b43ef4c64897a6e65781129c7
#: cb65cb968a91492d9526f47e74b179e1 f0ff29911159497fb542942b6deb6972
msgid "upload"
msgstr "upload"
#: ../../getting_started/application/kbqa/kbqa.md:37
#: 77d936d2502f41feb2721da86cc1ebb1
msgid "3.Chat With Knowledge"
msgstr "3.知识问答"
#: ../../getting_started/application/kbqa/kbqa.md:38
#: ae72c3236e564949b142a945f3474425
msgid "![upload](../../../../assets/kbqa/begin_chat.jpg)"
msgstr "![upload](../../../../assets/kbqa/begin_chat.jpg)"
#: ../../getting_started/application/kbqa/kbqa.md:40
#: b9d701985d404a9196bb896718f41e36
msgid "4.Adjust Space arguments"
msgstr "4.调整知识参数"
#: ../../getting_started/application/kbqa/kbqa.md:41
#: 6155bbf401ef44ea8d761c83ae5ceb9a
msgid ""
"Each knowledge space supports argument customization, including the "
"relevant arguments for vector retrieval and the arguments for knowledge "
"question-answering prompts."
msgstr "每一个知识空间都支持参数自定义, 包括向量召回的相关参数以及知识问答Promp参数"
#: ../../getting_started/application/kbqa/kbqa.md:42
#: b9ac70b5ee99435a962dcb040b5ba4fc
msgid "4.1 Embedding"
msgstr "4.1 Embedding"
#: ../../getting_started/application/kbqa/kbqa.md:43
#: 0ca2398756924828a904921456b863b5
msgid "Embedding Argument ![upload](../../../../assets/kbqa/embedding.png)"
msgstr "Embedding Argument ![upload](../../../../assets/kbqa/embedding.png)"
#: ../../getting_started/application/kbqa/kbqa.md:47
#: e8b3e973e3b940e4bbf3180b7d6057ec
msgid "Embedding arguments"
msgstr "Embedding arguments"
#: ../../getting_started/application/kbqa/kbqa.md:48
#: 1cff4790d546455abeca725ae8b53c0d
msgid "topk:the top k vectors based on similarity score."
msgstr "topk:相似性检索出tok条文档"
#: ../../getting_started/application/kbqa/kbqa.md:49
#: 525aaa02028b4900919526290b6da9ef
msgid "recall_score:set a threshold score for the retrieval of similar vectors."
msgstr "recall_score:向量检索相关度衡量指标分数"
#: ../../getting_started/application/kbqa/kbqa.md:50
#: dead3b559b134e52a3cf38e29b1982e1
msgid "recall_type:recall type."
msgstr "recall_type:召回类型"
#: ../../getting_started/application/kbqa/kbqa.md:51
#: 7c48de3186a14fcf96ee2bc6d715bd4c
msgid "model:A model used to create vector representations of text or other data."
msgstr "model:embdding模型"
#: ../../getting_started/application/kbqa/kbqa.md:52
#: 875ba3d051b847a5b3357a4eef583c0a
msgid "chunk_size:The size of the data chunks used in processing."
msgstr "chunk_size:文档切片阈值大小"
#: ../../getting_started/application/kbqa/kbqa.md:53
#: e43f2bb091594532ba69ed3e3a385cdd
msgid "chunk_overlap:The amount of overlap between adjacent data chunks."
msgstr "chunk_overlap:文本块之间的最大重叠量。保留一些重叠可以保持文本块之间的连续性(例如使用滑动窗口)"
#: ../../getting_started/application/kbqa/kbqa.md:55
#: e21e19688fa042a4b60860ffa6bcf119
msgid "4.2 Prompt"
msgstr "4.2 Prompt"
#: ../../getting_started/application/kbqa/kbqa.md:56
#: 00f0bddcd9174b7a84a25b7fe6d286e9
msgid "Prompt Argument ![upload](../../../../assets/kbqa/prompt.png)"
msgstr "Prompt Argument ![upload](../../../../assets/kbqa/prompt.png)"
#: ../../getting_started/application/kbqa/kbqa.md:60
#: d5ff2b76a04949708b3c9050db647927
msgid "Prompt arguments"
msgstr "Prompt arguments"
#: ../../getting_started/application/kbqa/kbqa.md:61
#: 817a8744378546a2b049f44793b4554b
msgid ""
"scene:A contextual parameter used to define the setting or environment in"
" which the prompt is being used."
msgstr "scene:上下文环境的场景定义"
#: ../../getting_started/application/kbqa/kbqa.md:62
#: 5330c26a4fc34ff6ba5fea1910cfbdc0
msgid ""
"template:A pre-defined structure or format for the prompt, which can help"
" ensure that the AI system generates responses that are consistent with "
"the desired style or tone."
msgstr ""
"template:预定义的提示结构或格式可以帮助确保AI系统生成与所期望的风格或语气一致的回复。"
#: ../../getting_started/application/kbqa/kbqa.md:63
#: a34cd8248e6b44228868c2a02e12466f
msgid "max_token:The maximum number of tokens or words allowed in a prompt."
msgstr "max_token: prompt token最大值"
#: ../../getting_started/application/kbqa/kbqa.md:65
#: 7da49aac293f462b9e3968fa1493dba1
msgid "5.Change Vector Database"
msgstr "5.Change Vector Database"
#: ../../getting_started/application/kbqa/kbqa.md:67
#: 680bfb451eb040e0aa55f8faa12bb75a
msgid "Vector Store SETTINGS"
msgstr "Vector Store SETTINGS"
#: ../../getting_started/application/kbqa/kbqa.md:68
#: 32820ae6807840119a786e92124ad209
msgid "Chroma"
msgstr "Chroma"
#: ../../getting_started/application/kbqa/kbqa.md:69
#: 7573aa05bc914fc2993b2531475b3b99
msgid "VECTOR_STORE_TYPE=Chroma"
msgstr "VECTOR_STORE_TYPE=Chroma"
#: ../../getting_started/application/kbqa/kbqa.md:70
#: cc2c5521ba6e417eb5b3cf96375c2a08
msgid "MILVUS"
msgstr "MILVUS"
#: ../../getting_started/application/kbqa/kbqa.md:71
#: fc5dfa54c8a24b069e2709c309e43faa
msgid "VECTOR_STORE_TYPE=Milvus"
msgstr "VECTOR_STORE_TYPE=Milvus"
#: ../../getting_started/application/kbqa/kbqa.md:72
#: 8661384f0c4a40a0b79fad94bbee1792
msgid "MILVUS_URL=127.0.0.1"
msgstr "MILVUS_URL=127.0.0.1"
#: ../../getting_started/application/kbqa/kbqa.md:73
#: 2035d343d34f47fa9fa84eb841afa2f8
msgid "MILVUS_PORT=19530"
msgstr "MILVUS_PORT=19530"
#: ../../getting_started/application/kbqa/kbqa.md:74
#: e0d8bd8d1dfb425bb50d5c8bf4a134ab
msgid "MILVUS_USERNAME"
msgstr "MILVUS_USERNAME"
#: ../../getting_started/application/kbqa/kbqa.md:75
#: dec547e6b4354c71857fa7c92c08bfa6
msgid "MILVUS_PASSWORD"
msgstr "MILVUS_PASSWORD"
#: ../../getting_started/application/kbqa/kbqa.md:76
#: 9ad3f325f79e49309bd3419f48015e90
msgid "MILVUS_SECURE="
msgstr "MILVUS_SECURE="
#: ../../getting_started/application/kbqa/kbqa.md:78
#: f6e15bf479da496fbcdf7621bad4dda8
msgid "WEAVIATE"
msgstr "WEAVIATE"
#: ../../getting_started/application/kbqa/kbqa.md:79
#: f7838d5de0a9452486adceb3a26369ee
msgid "WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.network"
msgstr "WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.networkc"


@@ -0,0 +1,51 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/faq.rst:13
msgid "Deloy"
msgstr "Deloy"
#: ../../getting_started/faq.rst:2 136e86216b564bcb91709d97ae03013c
msgid "FAQ"
msgstr "FAQ"
#: ../../getting_started/faq.rst:3 9350e190f83648248a4e53a4703529a0
msgid ""
"DB-GPT product is a Web application that you can chat database, chat "
"knowledge, text2dashboard."
msgstr ""
#: ../../getting_started/faq.rst:8 c4d7887a869f48ddb1819a5df9206005
msgid "deploy"
msgstr "deploy"
#: ../../getting_started/faq.rst:9 3859638a380043dfa466d6f9c593ff14
msgid "llm"
msgstr "llm"
#: ../../getting_started/faq.rst:10 2039dce8389d4f9fb8fb7a8afa393809
msgid "chatdb"
msgstr "chatdb"
#: ../../getting_started/faq.rst:11 24a115044f984a71a83dea6000a98de8
msgid "kbqa"
msgstr "kbqa"


@@ -0,0 +1,61 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 23:15+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/faq/chatdb/chatdb_faq.md:1
#: ffd1abb6b8f34e53a8ed83ede845b141
msgid "Chat DB FAQ"
msgstr ""
#: ../../getting_started/faq/chatdb/chatdb_faq.md:3
#: 2467f6603ab341bbb1ce17f75dc06e5e
msgid "Q1: What difference between ChatData and ChatDB"
msgstr ""
#: ../../getting_started/faq/chatdb/chatdb_faq.md:4
#: 6e47803e92064e379a1bb74e2b3d347a
msgid ""
"ChatData generates SQL from natural language and executes it. ChatDB "
"involves conversing with metadata from the Database, including metadata "
"about databases, tables, and fields."
msgstr ""
#: ../../getting_started/faq/chatdb/chatdb_faq.md:6
#: fa3eef15697f43db963f6c875e85323b
msgid "Q2: The suitable llm model currently supported for text-to-SQL is?"
msgstr ""
#: ../../getting_started/faq/chatdb/chatdb_faq.md:7
#: 47f36a0a48c045269bcf790873d55f8c
msgid "Now vicunna-13b-1.5 and llama2-70b is more suitable for text-to-SQL."
msgstr ""
#: ../../getting_started/faq/chatdb/chatdb_faq.md:9
#: 9f5ef9a05fac4eb3b27875171ec4e763
msgid "Q3: How to fine-tune Text-to-SQL in DB-GPT"
msgstr ""
#: ../../getting_started/faq/chatdb/chatdb_faq.md:10
#: c9b6f6e969e04805a59d9ccabc03c0b8
msgid ""
"there is another github project for Text-to-SQL fine-tune "
"(https://github.com/eosphoros-ai/DB-GPT-Hub)"
msgstr ""


@@ -0,0 +1,86 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 23:15+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/faq/deploy/deploy_faq.md:1
#: 6aa73265a43d4e6ea287e6265ef4efe5
msgid "Installation FAQ"
msgstr "Installation FAQ"
#: ../../getting_started/faq/deploy/deploy_faq.md:5
#: 4efe241d5e724db5ab22548cfb88f8b6
msgid ""
"Q1: execute `pip install -r requirements.txt` error, found some package "
"cannot find correct version."
msgstr "Q1: execute `pip install -r requirements.txt` error, found some package "
"cannot find correct version."
#: ../../getting_started/faq/deploy/deploy_faq.md:6
#: e837e10bdcfa49cebb71b32eece4831b
msgid "change the pip source."
msgstr "替换pip源."
#: ../../getting_started/faq/deploy/deploy_faq.md:13
#: ../../getting_started/faq/deploy/deploy_faq.md:20
#: 84310ec0c54e4a02949da2e0b35c8c7d e8a7a8b38b7849b88c14fb6d647f9b63
msgid "or"
msgstr "或者"
#: ../../getting_started/faq/deploy/deploy_faq.md:27
#: 87797a5dafef47c8884f6f1be9a1fbd2
msgid ""
"Q2: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to"
" open database file"
msgstr "Q2: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to"
" open database file"
#: ../../getting_started/faq/deploy/deploy_faq.md:29
#: bc96b22e201c47ec999c8d98227a956d
msgid "make sure you pull latest code or create directory with mkdir pilot/data"
msgstr "make sure you pull latest code or create directory with mkdir pilot/data"
#: ../../getting_started/faq/deploy/deploy_faq.md:31
#: d7938f1c70a64efa9948080a6d416964
msgid "Q3: The model keeps getting killed."
msgstr "Q3: The model keeps getting killed."
#: ../../getting_started/faq/deploy/deploy_faq.md:32
#: b072386586a64b2289c0fcdf6857b2b7
msgid ""
"your GPU VRAM size is not enough, try replace your hardware or replace "
"other llms."
msgstr "GPU显存不够, 增加显存或者换一个显存小的模型"
#~ msgid ""
#~ "Q2: When use Mysql, Access denied "
#~ "for user 'root@localhost'(using password :NO)"
#~ msgstr ""
#~ msgid "A3: make sure you have installed mysql instance in right way"
#~ msgstr ""
#~ msgid "Docker:"
#~ msgstr ""
#~ msgid ""
#~ "Normal: [download mysql "
#~ "instance](https://dev.mysql.com/downloads/mysql/)"
#~ msgstr ""


@@ -0,0 +1,98 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/faq/kbqa/kbqa_faq.md:1
#: bce4fd8751dd49bb8ab89a651094cfe1
msgid "KBQA FAQ"
msgstr ""
#: ../../getting_started/faq/kbqa/kbqa_faq.md:4
#: c9714f70a03f422fae7bdb2b3b3c8be5
msgid "Q1: text2vec-large-chinese not found"
msgstr ""
#: ../../getting_started/faq/kbqa/kbqa_faq.md:6
#: 1c56b9081adb46b29063947659e67083
msgid ""
"make sure you have download text2vec-large-chinese embedding model in "
"right way"
msgstr ""
#: ../../getting_started/faq/kbqa/kbqa_faq.md:18
#: 39763bdbc6824675b293f0f219fc05bb
msgid "Q2:How to change Vector DB Type in DB-GPT."
msgstr ""
#: ../../getting_started/faq/kbqa/kbqa_faq.md:20
#: 225cc6e5b6944ce3b675845aa93afb66
msgid "Update .env file and set VECTOR_STORE_TYPE."
msgstr ""
#: ../../getting_started/faq/kbqa/kbqa_faq.md:22
#: 5ad12db4aed44f4890911b5aea9529ce
msgid ""
"DB-GPT currently support Chroma(Default), Milvus(>2.1), Weaviate vector "
"database. If you want to change vector db, Update your .env, set your "
"vector store type, VECTOR_STORE_TYPE=Chroma (now only support Chroma and "
"Milvus(>2.1), if you set Milvus, please set MILVUS_URL and MILVUS_PORT) "
"If you want to support more vector db, you can integrate yourself.[how to"
" integrate](https://db-gpt.readthedocs.io/en/latest/modules/vector.html)"
msgstr ""
#: ../../getting_started/faq/kbqa/kbqa_faq.md:38
#: f8dd19ff86244cc7be91b7ab9190baeb
msgid "Q3:When I use vicuna-13b, found some illegal character like this."
msgstr ""
#: ../../getting_started/faq/kbqa/kbqa_faq.md:43
#: ec4a69476d4e4e4d92619ade829c2b39
msgid ""
"Set KNOWLEDGE_SEARCH_TOP_SIZE smaller or set KNOWLEDGE_CHUNK_SIZE "
"smaller, and reboot server."
msgstr ""
#: ../../getting_started/faq/kbqa/kbqa_faq.md:45
#: 5b039a860c0b4cb088d042467ad2a49c
msgid ""
"Q4:space add error (pymysql.err.OperationalError) (1054, \"Unknown column"
" 'knowledge_space.context' in 'field list'\")"
msgstr ""
#: ../../getting_started/faq/kbqa/kbqa_faq.md:47
#: bafac5e248894263a77ab688922df520
msgid "1.shutdown dbgpt_server(ctrl c)"
msgstr ""
#: ../../getting_started/faq/kbqa/kbqa_faq.md:49
#: 8ff5ca901f684e8ebe951f8c053ab925
msgid "2.add column context for table knowledge_space"
msgstr ""
#: ../../getting_started/faq/kbqa/kbqa_faq.md:53
#: ea9f945678aa426894f0a12013f8fe5c
msgid "3.execute sql ddl"
msgstr ""
#: ../../getting_started/faq/kbqa/kbqa_faq.md:58
#: 1a7941f2f1e94d359095e92b470ee02c
msgid "4.restart dbgpt serve"
msgstr ""


@@ -0,0 +1,92 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/faq/llm/llm_faq.md:1 f79c82f385904385b08618436e600d9f
msgid "LLM USE FAQ"
msgstr ""
#: ../../getting_started/faq/llm/llm_faq.md:3 1fc802fa69224062b02403bc35084c18
msgid "Q1:how to use openai chatgpt service"
msgstr ""
#: ../../getting_started/faq/llm/llm_faq.md:4 9094902d148a4cc99fe72aa0e41062ae
msgid "change your LLM_MODEL"
msgstr ""
#: ../../getting_started/faq/llm/llm_faq.md:9 07073eb8d9eb4988a3b035666c63d3fb
msgid "set your OPENAPI KEY"
msgstr ""
#: ../../getting_started/faq/llm/llm_faq.md:15 a71bb0d1181e47368a286b5694a00056
msgid "make sure your openapi API_KEY is available"
msgstr ""
#: ../../getting_started/faq/llm/llm_faq.md:17 789b003864824970923bac474a9ab0cd
msgid "Q2 how to use MultiGPUs"
msgstr ""
#: ../../getting_started/faq/llm/llm_faq.md:18 4be3dd71a8654202a210bcb11c50cc79
msgid ""
"DB-GPT will use all available gpu by default. And you can modify the "
"setting `CUDA_VISIBLE_DEVICES=0,1` in `.env` file to use the specific gpu"
" IDs."
msgstr ""
#: ../../getting_started/faq/llm/llm_faq.md:20 3a00c5fff666451cacda7f9af37564b9
msgid ""
"Optionally, you can also specify the gpu ID to use before the starting "
"command, as shown below:"
msgstr ""
#: ../../getting_started/faq/llm/llm_faq.md:30 7fef386f3a3443569042e7d8b9a3ff15
msgid ""
"You can modify the setting `MAX_GPU_MEMORY=xxGib` in `.env` file to "
"configure the maximum memory used by each GPU."
msgstr ""
#: ../../getting_started/faq/llm/llm_faq.md:32 d75d440da8ab49a3944c3a456db25bee
msgid "Q3 Not Enough Memory"
msgstr ""
#: ../../getting_started/faq/llm/llm_faq.md:34 2a2cd59382e149ffb623cb3d42754dca
msgid "DB-GPT supported 8-bit quantization and 4-bit quantization."
msgstr ""
#: ../../getting_started/faq/llm/llm_faq.md:36 755131812baa4f5a99b706849459e10a
msgid ""
"You can modify the setting `QUANTIZE_8bit=True` or `QUANTIZE_4bit=True` "
"in `.env` file to use quantization(8-bit quantization is enabled by "
"default)."
msgstr ""
#: ../../getting_started/faq/llm/llm_faq.md:38 b85424bc11134af985f687d8ee8d2c9f
msgid ""
"Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit"
" quantization can run with 48 GB of VRAM."
msgstr ""
#: ../../getting_started/faq/llm/llm_faq.md:40 b6e4db679636492c9e4170a33fd6f638
msgid ""
"Note: you need to install the latest dependencies according to "
"[requirements.txt](https://github.com/eosphoros-ai/DB-"
"GPT/blob/main/requirements.txt)."
msgstr ""


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.3.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-10 16:38+0800\n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -19,29 +19,29 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/getting_started.md:1 b5ce6e0075114e669c78cda63f22dfe6
#: ../../getting_started/getting_started.md:1 70e40ad608d54bcfae6faf0437c09b6f
msgid "Quickstart Guide"
msgstr "使用指南"
#: ../../getting_started/getting_started.md:3 caf6a35c0ddc43199750b2faae2bf95d
#: ../../getting_started/getting_started.md:3 c22ff099d6e940f7938dcea0e2265f11
msgid ""
"This tutorial gives you a quick walkthrough about use DB-GPT with you "
"environment and data."
msgstr "本教程将带您快速上手如何结合您的环境与数据使用DB-GPT。"
#: ../../getting_started/getting_started.md:5 338aae168e63461f84c8233c4cbf4bcc
#: ../../getting_started/getting_started.md:5 dc717e76b3194a85ac5b9e8a4479b197
msgid "Installation"
msgstr "安装"
#: ../../getting_started/getting_started.md:7 62519550906f4fdb99aad61502e7a5e6
#: ../../getting_started/getting_started.md:7 a1d30c3d01b94310b89fae16ac581157
msgid "To get started, install DB-GPT with the following steps."
msgstr "请按照以下步骤安装DB-GPT"
#: ../../getting_started/getting_started.md:9 f9072092ebf24ebc893604ca86116cd3
#: ../../getting_started/getting_started.md:9 bab013745cb24538ac568b97045b72cc
msgid "1. Hardware Requirements"
msgstr "1. 硬件要求"
#: ../../getting_started/getting_started.md:10 071c54f9a38e4ef5bb5dcfbf7d4a6004
#: ../../getting_started/getting_started.md:10 7dd84870db394338a5c7e63b171207e0
msgid ""
"As our project has the ability to achieve ChatGPT performance of over "
"85%, there are certain hardware requirements. However, overall, the "
@@ -49,86 +49,76 @@ msgid ""
"specific hardware requirements for deployment are as follows:"
msgstr "由于我们的项目有能力达到85%以上的ChatGPT性能所以对硬件有一定的要求。但总体来说我们在消费级的显卡上即可完成项目的部署使用具体部署的硬件说明如下:"
#: ../../getting_started/getting_started.md 4757df7812c64eb4b511aa3b8950c899
#: ../../getting_started/getting_started.md 055d07f830c54303a0b5596601c58870
msgid "GPU"
msgstr "GPU"
#: ../../getting_started/getting_started.md
#: ../../getting_started/getting_started.md:60 8f316fbd032f474b88d823978d134328
#: cb3d34d1ed074e1187b60c6f74cf1468
#: ../../getting_started/getting_started.md:50 7323cee42940438b8a0752d3c2355e59
#: e3fc2c10f81b4fe2ac0bfd9fe78feed9
msgid "VRAM Size"
msgstr "显存大小"
#: ../../getting_started/getting_started.md ac41e6ccf820424b9c497156a317287d
#: ../../getting_started/getting_started.md 68831daf63f14dcd92088dd6c866f110
msgid "Performance"
msgstr "性能"
#: ../../getting_started/getting_started.md 3ca06ce116754dde958ba21beaedac6f
#: ../../getting_started/getting_started.md 8a7197a5e92c40a9ba160b43656983d2
msgid "RTX 4090"
msgstr "RTX 4090"
#: ../../getting_started/getting_started.md 54e3410cc9a241098ae4412356a89f02
#: e3f31b30c4c84b45bc63b0ea193d0ed1
#: ../../getting_started/getting_started.md 69b2bfdec17e43fbb57f47e3e10a5f5a
#: 81d0c2444bbb4a2fb834a628689e4b68
msgid "24 GB"
msgstr "24 GB"
#: ../../getting_started/getting_started.md ff0c37e041a848899b1c3a63bab43404
#: ../../getting_started/getting_started.md 49b65b0eaffb41d78a1227bcc3e836e0
msgid "Smooth conversation inference"
msgstr "可以流畅的进行对话推理,无卡顿"
#: ../../getting_started/getting_started.md 09120e60cfc640e182e8069a372038cc
#: ../../getting_started/getting_started.md 527def44c8b24e4bbe589d110bd43e91
msgid "RTX 3090"
msgstr "RTX 3090"
#: ../../getting_started/getting_started.md 0545dfec3b6d47e1bffc87301a0f05b3
#: ../../getting_started/getting_started.md ad9666e0521c4ef0ad19c2e224daecda
msgid "Smooth conversation inference, better than V100"
msgstr "可以流畅进行对话推理优于V100"
#: ../../getting_started/getting_started.md d41078c0561f47a981a488eb4b9c7a54
#: ../../getting_started/getting_started.md c090521969f042398236f6d04b017295
msgid "V100"
msgstr "V100"
#: ../../getting_started/getting_started.md 6aefb19420f4452999544a0da646f067
#: ../../getting_started/getting_started.md 60605e0de3fc494cbb7199cef8f831ad
msgid "16 GB"
msgstr "16 GB"
#: ../../getting_started/getting_started.md 8e06966b88234f4898729750d724e272
#: ../../getting_started/getting_started.md b82747aee44e444984565aab0faa2a64
msgid "Conversation inference possible, noticeable stutter"
msgstr "可以进行对话推理,有明显卡顿"
#: ../../getting_started/getting_started.md:18 113957ae638e4799a47e58a2ca259ce8
#: ../../getting_started/getting_started.md:18 b9df8ec7b2c34233b28db81f3c90f8c7
msgid "2. Install"
msgstr "2. 安装"
#: ../../getting_started/getting_started.md:20 e068b950171f4c48a9e125ece31f75b9
#: ../../getting_started/getting_started.md:20 bd0f0159a49b4493ab387e124fef15fa
#, fuzzy
msgid ""
"1.This project relies on a local MySQL database service, which you need "
"to install locally. We recommend using Docker for installation."
msgstr "本项目依赖一个本地的 MySQL 数据库服务,你需要本地安装,推荐直接使用 Docker 安装。"
#: ../../getting_started/getting_started.md:24 f115b61438ad403aaaae0fc4970a982b
msgid "prepare server sql script"
msgstr "准备db-gpt server sql脚本"
#: ../../getting_started/getting_started.md:29 c5f707269582464e92e5fd57365f5a51
msgid ""
"We use [Chroma embedding database](https://github.com/chroma-core/chroma)"
" as the default for our vector database, so there is no need for special "
"installation. If you choose to connect to other databases, you can follow"
" our tutorial for installation and configuration. For the entire "
"installation process of DB-GPT, we use the miniconda3 virtual "
"environment. Create a virtual environment and install the Python "
"dependencies."
" as the default for our vector database and use SQLite as the default for"
" our database, so there is no need for special installation. If you "
"choose to connect to other databases, you can follow our tutorial for "
"installation and configuration. For the entire installation process of "
"DB-GPT, we use the miniconda3 virtual environment. Create a virtual "
"environment and install the Python dependencies."
msgstr ""
"向量数据库我们默认使用Chroma关系数据库默认使用SQLite均无需特殊安装如果需要连接其他数据库可以按照我们的教程进行安装配置。"
"整个DB-GPT的安装过程我们使用的是miniconda3虚拟环境。创建虚拟环境并安装python依赖包"
#: ../../getting_started/getting_started.md:38 377381c76f32409298f4184f23653205
#: ../../getting_started/getting_started.md:29 bc8f2ee7894b4cad858d5e2cfbee10a9
msgid "Before use DB-GPT Knowledge Management"
msgstr "使用知识库管理功能之前"
#: ../../getting_started/getting_started.md:44 62e7e96a18ca495595b3dfab97ca387e
#: ../../getting_started/getting_started.md:34 03e1652f724946ec8d01a97811883b5f
msgid ""
"Once the environment is installed, we have to create a new folder "
"\"models\" in the DB-GPT project, and then we can put all the models "
@@ -137,40 +127,40 @@ msgstr ""
"环境安装完成后我们必须在DB-"
"GPT项目中创建一个新文件夹\"models\"然后我们可以把从huggingface下载的所有模型放到这个目录下。"
#: ../../getting_started/getting_started.md:47 e34d946ab06e47dba81cf50348987594
#: ../../getting_started/getting_started.md:37 e26a827f1d5b4d2cbf92b42bce461082
#, fuzzy
msgid "Notice make sure you have install git-lfs"
msgstr "确保你已经安装了git-lfs"
#: ../../getting_started/getting_started.md:58 07149cfac5314daabbd37dc1cef82905
#: ../../getting_started/getting_started.md:48 af91349fe332472ca7a2e592a0c582f7
msgid ""
"The model files are large and will take a long time to download. During "
"the download, let's configure the .env file, which needs to be copied and"
" created from the .env.template"
msgstr "模型文件很大,需要很长时间才能下载。在下载过程中,我们可以先配置.env文件它需要从.env.template复制创建。"
#: ../../getting_started/getting_started.md:61 b8da240286e04573a2d731d228fbdb1e
#: ../../getting_started/getting_started.md:51 84d2d93130034c2f94f4d0bebcc2b0d2
msgid "cp .env.template .env"
msgstr "cp .env.template .env"
#: ../../getting_started/getting_started.md:64 c2b05910bba14e5b9b49f8ad8c436412
#: ../../getting_started/getting_started.md:54 b3adfec002354f00baba9c43a1a3e381
msgid ""
"You can configure basic parameters in the .env file, for example setting "
"LLM_MODEL to the model to be used"
msgstr "您可以在.env文件中配置基本参数例如将LLM_MODEL设置为要使用的模型。"
#: ../../getting_started/getting_started.md:66 4af57c4cbf4744a8af83083d051098b3
#: ../../getting_started/getting_started.md:56 1b11268133c0440cb3c26981f5d0c1fe
msgid ""
"([Vicuna-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) based on "
"llama-2 has been released, we recommend you set `LLM_MODEL=vicuna-"
"13b-v1.5` to try this model)"
msgstr ""
#: ../../getting_started/getting_started.md:68 62287c3828ba48c4871dd831af7d819a
#: ../../getting_started/getting_started.md:58 511d47f08bab42e0bd3f34df58c5a822
msgid "3. Run"
msgstr "3. 运行"
#: ../../getting_started/getting_started.md:69 7a04f6c514c34d87a03b105de8a03f0b
#: ../../getting_started/getting_started.md:59 eba1922d852b4d548370884987237c09
msgid ""
"You can refer to this document to obtain the Vicuna weights: "
"[Vicuna](https://github.com/lm-sys/FastChat/blob/main/README.md#model-"
@@ -179,7 +169,7 @@ msgstr ""
"关于基础模型, 可以根据[Vicuna](https://github.com/lm-"
"sys/FastChat/blob/main/README.md#model-weights) 合成教程进行合成。"
#: ../../getting_started/getting_started.md:71 72c9ad9d95c34b42807ed0a1e46adb73
#: ../../getting_started/getting_started.md:61 bb51e0506f4e44c3a4daf1d0fbd5a4ef
msgid ""
"If you have difficulty with this step, you can also directly use the "
"model from [this link](https://huggingface.co/Tribbiani/vicuna-7b) as a "
@@ -188,7 +178,7 @@ msgstr ""
"如果此步有困难的同学,也可以直接使用[此链接](https://huggingface.co/Tribbiani/vicuna-"
"7b)上的模型进行替代。"
#: ../../getting_started/getting_started.md:73 889cf1a4dbd2422580fe59d9f3eab6bf
#: ../../getting_started/getting_started.md:63 0d532891bd754e78a6390fcc97d0f59d
msgid ""
"set .env configuration set your vector store type, "
"eg:VECTOR_STORE_TYPE=Chroma, now we support Chroma and Milvus(version > "
@@ -197,21 +187,21 @@ msgstr ""
"在.env文件中设置向量数据库环境变量例如VECTOR_STORE_TYPE=Chroma目前我们支持 Chroma 和 Milvus版本 > 2.1)。"
#: ../../getting_started/getting_started.md:76 28a2ec60d35c43edab71bac103ccb0e0
#: ../../getting_started/getting_started.md:66 8beb3199650e4d2b8f567d0de20e2cb6
#, fuzzy
msgid "1.Run db-gpt server"
msgstr "运行模型服务"
#: ../../getting_started/getting_started.md:81
#: ../../getting_started/getting_started.md:140
#: ../../getting_started/getting_started.md:194
#: 04ed6e8729604ee59edd16995301f0d5 225139430ee947cb8a191f1641f581b3
#: b33473644c124fdf9fe9912307cb6a9d
#: ../../getting_started/getting_started.md:71
#: ../../getting_started/getting_started.md:131
#: ../../getting_started/getting_started.md:200
#: 41984cf5289f4cefbead6b84fb011e92 5c3eb807af36425a8b0620705fc9c4e9
#: 7c420ffa305d4eb1a052b25101e80011
#, fuzzy
msgid "Open http://localhost:5000 with your browser to see the product."
msgstr "用浏览器打开 http://localhost:5000 即可看到产品页面。"
#: ../../getting_started/getting_started.md:83 8068aef8a84e46b59caf59de32ad6e2b
#: ../../getting_started/getting_started.md:73 802ca13fbe0542a19258d40c08da509e
msgid ""
"If you want to access an external LLM service, you need to 1.set the "
"variables LLM_MODEL=YOUR_MODEL_NAME "
@@ -219,7 +209,7 @@ msgid ""
"file. 2.execute dbgpt_server.py in light mode"
msgstr "如果你想访问外部的大模型服务1.需要在.env文件中设置模型名和外部模型服务地址2.使用light模式启动服务。"
#: ../../getting_started/getting_started.md:86 4a11e53950ac489a891ca8dc67c26ca0
#: ../../getting_started/getting_started.md:76 7808d30d1642422fbed42f1929e37343
#, fuzzy
msgid ""
"If you want to learn about dbgpt-webui, read https://github./csunny/DB-"
@@ -228,268 +218,304 @@ msgstr ""
"如果你想了解DB-GPT前端服务访问https://github.com/csunny/DB-GPT/tree/new-page-"
"framework/datacenter"
#: ../../getting_started/getting_started.md:92 6a2b09ea9ba64c879d3e7ca154d95095
#: ../../getting_started/getting_started.md:82 094ed036720c406c904c8a9ca6e7b8ae
msgid "4. Docker (Experimental)"
msgstr "4. Docker (Experimental)"
#: ../../getting_started/getting_started.md:94 5cef89f829ce4da299f9b430fd52c446
#: ../../getting_started/getting_started.md:84 9428d68d1eef42c28bf33d9bd7022c85
msgid "4.1 Building Docker image"
msgstr "4.1 Building Docker image"
#: ../../getting_started/getting_started.md:100
#: b3999a9169694b209bd4848c6458ff41
#: ../../getting_started/getting_started.md:90 16bfb175a85c455e847c3cafe2d94ce1
msgid "Review images by listing them:"
msgstr "Review images by listing them:"
#: ../../getting_started/getting_started.md:106
#: ../../getting_started/getting_started.md:180
#: 10486e7f872f4c13a384b75e564e37fe 5b142d0a103848fa82fafd40a557f998
#: ../../getting_started/getting_started.md:96
#: ../../getting_started/getting_started.md:186
#: 46438a8f73f344ca84f5c77c8fe7434b eaca8cbd8bdd4c03863ef5ea218218b1
msgid "Output should look something like the following:"
msgstr "Output should look something like the following:"
#: ../../getting_started/getting_started.md:113
#: 83dd5525d0f7481fac57e5898605e7f3
#: ../../getting_started/getting_started.md:103
#: 47d72ad65ce0455ca0d85580eb0db619
msgid ""
"`eosphorosai/dbgpt` is the base image, which contains the project's base "
"dependencies and a sqlite database. `eosphorosai/dbgpt-allinone` build "
"from `eosphorosai/dbgpt`, which contains a mysql database."
msgstr ""
#: ../../getting_started/getting_started.md:105
#: 980f39c088564de288cab120a4004d53
msgid "You can pass some parameters to docker/build_all_images.sh."
msgstr "You can pass some parameters to docker/build_all_images.sh."
#: ../../getting_started/getting_started.md:121
#: d5b916efd7b84972b68a4aa50ebc047c
#: ../../getting_started/getting_started.md:113
#: fad33767622f4b639881c4124bdc92cc
msgid ""
"You can execute the command `bash docker/build_all_images.sh --help` to "
"see more usage."
msgstr "You can execute the command `bash docker/build_all_images.sh --help` to "
msgstr ""
"You can execute the command `bash docker/build_all_images.sh --help` to "
"see more usage."
#: ../../getting_started/getting_started.md:123
#: 0d01cafcfde04c92b4115d804dd9e912
#: ../../getting_started/getting_started.md:115
#: 878bb7ecfbe74a96bf42adef951bec44
msgid "4.2. Run all in one docker container"
msgstr "4.2. Run all in one docker container"
#: ../../getting_started/getting_started.md:125
#: 90d49baea1d34f01a0db696fd567bfba
msgid "**Run with local model**"
#: ../../getting_started/getting_started.md:117
#: 2e8279c88e814e8db5dc16d4b05c62e1
#, fuzzy
msgid "**Run with local model and SQLite database**"
msgstr "**Run with local model**"
#: ../../getting_started/getting_started.md:143
#: 422a237241014cac9917832ac053aed5
#: ../../getting_started/getting_started.md:134
#: fb87d01dc56c48ac937b537dcc29627c
msgid ""
"`-e LLM_MODEL=vicuna-13b`, means we use vicuna-13b as llm model, see "
"/pilot/configs/model_config.LLM_MODEL_CONFIG"
msgstr "`-e LLM_MODEL=vicuna-13b`, means we use vicuna-13b as llm model, see /pilot/configs/model_config.LLM_MODEL_CONFIG"
msgstr ""
"`-e LLM_MODEL=vicuna-13b`, means we use vicuna-13b as llm model, see "
"/pilot/configs/model_config.LLM_MODEL_CONFIG"
#: ../../getting_started/getting_started.md:144
#: af5fb303a0c64567b9c82a3ce3b00aaa
#: ../../getting_started/getting_started.md:135
#: b2517db39b0b473ab4f302a925d86879
msgid ""
"`-v /data/models:/app/models`, means we mount the local model file "
"directory `/data/models` to the docker container directory `/app/models`,"
" please replace it with your model file directory."
msgstr "`-v /data/models:/app/models`, means we mount the local model file "
msgstr ""
"`-v /data/models:/app/models`, means we mount the local model file "
"directory `/data/models` to the docker container directory `/app/models`,"
" please replace it with your model file directory."
#: ../../getting_started/getting_started.md:146
#: ../../getting_started/getting_started.md:188
#: 05652e117c3f4d3aaea0269f980cf975 3844ae7eeec54804b05bc8aea01c3e36
#: ../../getting_started/getting_started.md:137
#: ../../getting_started/getting_started.md:194
#: 140b925a0e5b4daeb6880fef503d6aac 67636495385943e3b81af74b5433c74b
msgid "You can see log with command:"
msgstr "You can see log with command:"
#: ../../getting_started/getting_started.md:152
#: 665aa47936aa40458fe33dd73c76513d
#: ../../getting_started/getting_started.md:143
#: 4ba6f304021746f6aa63fe196e752092
#, fuzzy
msgid "**Run with local model and MySQL database**"
msgstr "**Run with local model**"
#: ../../getting_started/getting_started.md:158
#: 4f2fc76b150b4f248356418cf0183c83
msgid "**Run with openai interface**"
msgstr "**Run with openai interface**"
#: ../../getting_started/getting_started.md:171
#: 98d627b0f7f149dfb9c7365e3ba082a1
#: ../../getting_started/getting_started.md:177
#: ed6edd5016444324a98200f054c7d2a5
msgid ""
"`-e LLM_MODEL=proxyllm`, means we use proxy llm(openai interface, "
"fastchat interface...)"
msgstr "`-e LLM_MODEL=proxyllm`, means we use proxy llm(openai interface, "
msgstr ""
"`-e LLM_MODEL=proxyllm`, means we use proxy llm(openai interface, "
"fastchat interface...)"
#: ../../getting_started/getting_started.md:172
#: b293dedb88034aaab9a3b740e82d7adc
#: ../../getting_started/getting_started.md:178
#: 7bc17c46292f43c488dcec172c835d49
msgid ""
"`-v /data/models/text2vec-large-chinese:/app/models/text2vec-large-"
"chinese`, means we mount the local text2vec model to the docker "
"container."
msgstr "`-v /data/models/text2vec-large-chinese:/app/models/text2vec-large-"
msgstr ""
"`-v /data/models/text2vec-large-chinese:/app/models/text2vec-large-"
"chinese`, means we mount the local text2vec model to the docker "
"container."
#: ../../getting_started/getting_started.md:174
#: e48085324a324749af8a21bad8e7ca0e
#: ../../getting_started/getting_started.md:180
#: 257860df855843eb85383478d2e8baca
msgid "4.3. Run with docker compose"
msgstr ""
#: ../../getting_started/getting_started.md:196
#: a356d9ae292f4dc3913c5999e561d533
#: ../../getting_started/getting_started.md:202
#: 143be856423e412c8605488c0e50d2dc
msgid ""
"You can open docker-compose.yml in the project root directory to see more"
" details."
msgstr "You can open docker-compose.yml in the project root directory to see more"
msgstr ""
"You can open docker-compose.yml in the project root directory to see more"
" details."
#: ../../getting_started/getting_started.md:199
#: 0c644409bce34bf4b898c26e24f7a1ac
#: ../../getting_started/getting_started.md:205
#: 0dd3dc0d7a7f4565a041ab5855808fd3
msgid "5. Multiple GPUs"
msgstr "5. Multiple GPUs"
#: ../../getting_started/getting_started.md:201
#: af435b17984341e5abfd2d9b64fbde20
#: ../../getting_started/getting_started.md:207
#: 65b1d32a91634c069a80269c44728727
msgid ""
"DB-GPT will use all available gpu by default. And you can modify the "
"setting `CUDA_VISIBLE_DEVICES=0,1` in `.env` file to use the specific gpu"
" IDs."
msgstr "DB-GPT will use all available gpu by default. And you can modify the "
msgstr ""
"DB-GPT will use all available gpu by default. And you can modify the "
"setting `CUDA_VISIBLE_DEVICES=0,1` in `.env` file to use the specific gpu"
" IDs."
#: ../../getting_started/getting_started.md:203
#: aac3e9cec74344fd963e2136a26384af
#: ../../getting_started/getting_started.md:209
#: 1ef71b0a976640319211a245943d99d5
msgid ""
"Optionally, you can also specify the gpu ID to use before the starting "
"command, as shown below:"
msgstr "Optionally, you can also specify the gpu ID to use before the starting "
msgstr ""
"Optionally, you can also specify the gpu ID to use before the starting "
"command, as shown below:"
#: ../../getting_started/getting_started.md:213
#: 3351daf4f5a04b76bae2e9cd8740549e
#: ../../getting_started/getting_started.md:219
#: efbc9ca4b8f44b219460577b691d2c0a
msgid ""
"You can modify the setting `MAX_GPU_MEMORY=xxGib` in `.env` file to "
"configure the maximum memory used by each GPU."
msgstr ""
#: ../../getting_started/getting_started.md:215
#: fabf9090b7c649b59b91742ba761339c
#: ../../getting_started/getting_started.md:221
#: c80f5e6d2a6042ad9ac0ef320cc6d987
msgid "6. Not Enough Memory"
msgstr ""
#: ../../getting_started/getting_started.md:217
#: d23322783fd34364acebe1679d7ca554
#: ../../getting_started/getting_started.md:223
#: b8151a333f804489ad6a2b59e739a8ed
msgid "DB-GPT supported 8-bit quantization and 4-bit quantization."
msgstr ""
#: ../../getting_started/getting_started.md:219
#: 8699aa0b5fe049099fb1e8038f9ad1ee
#: ../../getting_started/getting_started.md:225
#: be32dacd8bb84a19985417bd1b78db0f
msgid ""
"You can modify the setting `QUANTIZE_8bit=True` or `QUANTIZE_4bit=True` "
"in `.env` file to use quantization(8-bit quantization is enabled by "
"default)."
msgstr ""
#: ../../getting_started/getting_started.md:221
#: b115800c207b49b39de5e1a548feff58
#: ../../getting_started/getting_started.md:227
#: 247509e1392e4b4a8d8cdc59f2f94d37
msgid ""
"Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit"
" quantization can run with 48 GB of VRAM."
msgstr "Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit"
msgstr ""
"Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit"
" quantization can run with 48 GB of VRAM."
#: ../../getting_started/getting_started.md:223
#: ae7146f0995d4eca9f775c39898d5313
#: ../../getting_started/getting_started.md:229
#: 4e8def991b8a491c83c29214e5f80669
msgid ""
"Note: you need to install the latest dependencies according to "
"[requirements.txt](https://github.com/eosphoros-ai/DB-"
"GPT/blob/main/requirements.txt)."
msgstr "Note: you need to install the latest dependencies according to "
msgstr ""
"Note: you need to install the latest dependencies according to "
"[requirements.txt](https://github.com/eosphoros-ai/DB-"
"GPT/blob/main/requirements.txt)."
#: ../../getting_started/getting_started.md:226
#: 58987c486d2641cda3180a77af26c56d
#: ../../getting_started/getting_started.md:232
#: 85f7bb08aa714f5db6db00d905dd9dc8
msgid ""
"Here are some of the VRAM size usage of the models we tested in some "
"common scenarios."
msgstr "Here are some of the VRAM size usage of the models we tested in some "
msgstr ""
"Here are some of the VRAM size usage of the models we tested in some "
"common scenarios."
#: ../../getting_started/getting_started.md:60 4ca1a7e65f7e46c08ef12a1357f68817
#: ../../getting_started/getting_started.md:50 165d0902ed064bbaa8b0bbe84befe139
msgid "Model"
msgstr "Model"
#: ../../getting_started/getting_started.md:60 21ab5b38f81a49aeacadbcde55082adf
#: ../../getting_started/getting_started.md:50 d077bca9cada4a9b89037ef5ab494c26
msgid "Quantize"
msgstr "Quantize"
#: ../../getting_started/getting_started.md:60 a2ad84e7807c4d3ebc9d2f9aec323962
#: b4e2d7a107074e58b91b263d815b2936
#: ../../getting_started/getting_started.md:50 42723863614e42b1aa6c664bfd197474
#: b71ce0c19de7471787cbbc09d6137a4b
msgid "vicuna-7b-v1.5"
msgstr "vicuna-7b-v1.5"
#: ../../getting_started/getting_started.md:60 0134b7564c3f4cd9a605c3e7dabf5c78
#: 12a931b938ee4e938a4ed166532946e9 9cc80bc860c7430cbc2b4aff3c4c67d2
#: b34fb93b644a419d8073203a1a0091fc bb61c197b1c74eb2bf4e7e42ad05850d
#: e3ab5cabd6f54faa9de0ec6c334c0d44 f942635d6f5e483293833ac5f92b6ab9
#: ../../getting_started/getting_started.md:50 2654d358528a48e38bec445644ffd20a
#: 2ad34cd14f54422491464967331d83fc 3b42a055e72847ec99d8d131fa9f5f84
#: 6641e98d9bed44ee853fe7a11263a88b a69801bc92f143aa9e0818886ef6eade
#: daa418b33705484ba944b0d92981a501 fc26bd1dace2432ca64b99ca16c2c80f
msgid "4-bit"
msgstr "4-bit"
#: ../../getting_started/getting_started.md:60 a8556b0ec7b3488585efcdc2ba5e8565
#: afb4365dab1e4be18f486fafbd5c6256 b50b8bc0e8084e6ebe4f9678d1f2af9d
#: ../../getting_started/getting_started.md:50 1688f8a0972c4026abf69b5186d92137
#: 683de09d524a4084bcdad1a184206f87 74586e40b480444591dc884e7f3f683f
#, fuzzy
msgid "8 GB"
msgstr "24 GB"
#: ../../getting_started/getting_started.md:60 259b43952ea5424a970dfa29c84cc83d
#: 64c6a8282eaf4e229d4196ccdbc2527f 7a426b8af2a6406fa4144142450e392e
#: 8e5742b0c4b44bf6bd0536aade037671 a96071a0c713493fae2b381408ceaad3
#: dcd6861abd2049a38f32957dd16408ab f334903a72934a2a86c3b24920f8cfdb
#: ../../getting_started/getting_started.md:50 13aa8cac7a784ad1a1b58b166be99711
#: 32633a38c1f44aac92f9647ee7867cd1 3b7fea4236174e2bb33894fd8234eddb
#: 3b8104af560e41f285d9c433e19f6cb7 5862f0c57e2c411dada47fe71f6a74bd
#: 6425f53f197742c8b3153a79cf4a220a d1fa83af3a884714a72f7a0af5f3be23
msgid "8-bit"
msgstr "8-bit"
#: ../../getting_started/getting_started.md:60 09c7435ba68248bd885a41366a08f557
#: 328ad98c8cb94f9083e8b5f158d7c14e 633e1f6d17404732a0c2f1cdf04f6591
#: c351117229304ae88561cdd284a77f55 c77a5a73cde34e3195a6389fdd81deda
#: f28704154fad48a8a08e16e136877ca7
#: ../../getting_started/getting_started.md:50 107ed4fb15c44cb5bb7020a8092f7341
#: 1d1791894226418ea623abdecd3107ba 1fdc42ef874f4022be31df7696e6a5de
#: 718c49ddac8e4086a75cdbba166dc3cb 99404c8333974ae7a1e68885f4471b32
#: da59674e4418488583cf4865545ad752
#, fuzzy
msgid "12 GB"
msgstr "24 GB"
#: ../../getting_started/getting_started.md:60 63564bdcfa4249438d0245922068924f
#: 9ad1bc9af45b4f52a1a0a9e7c8d14cce
#: ../../getting_started/getting_started.md:50 79bf82a6dc9f4c22af779be4b1d2d13c
#: b5bde8f01fc343baa567806fd53070dc
msgid "vicuna-13b-v1.5"
msgstr "vicuna-13b-v1.5"
#: ../../getting_started/getting_started.md:60 0e05753ff4ec4c1a94fc68b63254115b
#: 68a9e76f2089461692c8f396774e7634 a3136e5685cf49c4adbf79224cb0eb7d
#: ../../getting_started/getting_started.md:50 362720115dd64143a214b3b9d3069512
#: e0545557159b44dea0522a1848170216 e3c6ab8f25bc4adbae3b3087716a1efe
#, fuzzy
msgid "20 GB"
msgstr "24 GB"
#: ../../getting_started/getting_started.md:60 173b58152fde49f4b3e03103323c4dd7
#: 7ab8309661024d1cb656911acb7b0d58
#: ../../getting_started/getting_started.md:50 295fd7d6e6c846748a1f2c82f8c79ba0
#: 55efa8e8e4e74efc865df87fcfade84d
msgid "llama-2-7b"
msgstr "llama-2-7b"
#: ../../getting_started/getting_started.md:60 069459a955274376a3d9a4021a032424
#: 619257e4671141488feb6e71bd002880
#: ../../getting_started/getting_started.md:50 15b1e541bdac43fda1dcccf2aaeaa40f
#: 5785954810bc45369ed1745f5c503c9c
msgid "llama-2-13b"
msgstr "llama-2-13b"
#: ../../getting_started/getting_started.md:60 ce39526cfbcc4c8c910928dc69293720
#: f5e3fb53bd964e328aac908ae6fc06a4
#: ../../getting_started/getting_started.md:50 185892d421684c1b903c04ea9b6653d7
#: d7970a5fe574434798d72427436c82d5
msgid "llama-2-70b"
msgstr "llama-2-70b"
#: ../../getting_started/getting_started.md:60 c09fb995952048d290dde484a0e09478
#: ../../getting_started/getting_started.md:50 f732a7f73a504bd1b56b42dab1114d04
#, fuzzy
msgid "48 GB"
msgstr "24 GB"
#: ../../getting_started/getting_started.md:60 4d9bc275dbc548918cda210f5f5d7722
#: ../../getting_started/getting_started.md:50 04642b0dd4bc4563a45c6d15fa1d8f07
#, fuzzy
msgid "80 GB"
msgstr "24 GB"
#: ../../getting_started/getting_started.md:60 962d13ed427041a19f20d9a1c8ca26ff
#: dedfece8bc8c4ffabf9cc7166e4ca4db
#: ../../getting_started/getting_started.md:50 b9c4d8b71b1e4185bab24a857433f884
#: fc1b1927cc344e2e91bb7047c79ad227
msgid "baichuan-7b"
msgstr ""
#: ../../getting_started/getting_started.md:60 43f629be73914d4cbb1d75c7a06f88e8
#: 5cba8d23c6e847579986b7019e073eaf
#: ../../getting_started/getting_started.md:50 c92417b527a04d82ac9caa837884113c
#: dc17ae982c154b988433a0c623301bcb
msgid "baichuan-13b"
msgstr "baichuan-13b"
#~ msgid "4.2. Run with docker compose"
#~ msgstr "4.2. Run with docker compose"
#~ msgid ""
#~ "1.This project relies on a local "
#~ "MySQL database service, which you need"
#~ " to install locally. We recommend "
#~ "using Docker for installation."
#~ msgstr "本项目依赖一个本地的 MySQL 数据库服务,你需要本地安装,推荐直接使用 Docker 安装。"
#~ msgid "prepare server sql script"
#~ msgstr "准备db-gpt server sql脚本"


@@ -0,0 +1,52 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install.rst:2 ../../getting_started/install.rst:14
#: 2861085e63144eaca1bb825e5f05d089
msgid "Install"
msgstr "Install"
#: ../../getting_started/install.rst:3 01a6603d91fa4520b0f839379d4eda23
msgid ""
"DB-GPT product is a Web application that you can chat database, chat "
"knowledge, text2dashboard."
msgstr "DB-GPT 是一个Web应用产品支持数据库对话、知识库问答和智能报表text2dashboard功能。"
#: ../../getting_started/install.rst:8 beca85cddc9b4406aecf83d5dfcce1f7
msgid "deploy"
msgstr "部署"
#: ../../getting_started/install.rst:9 601e9b9eb91f445fb07d2f1c807f0370
msgid "docker"
msgstr "docker"
#: ../../getting_started/install.rst:10 6d1e094ac9284458a32a3e7fa6241c81
msgid "docker_compose"
msgstr "docker_compose"
#: ../../getting_started/install.rst:11 ff1d1c60bbdc4e8ca82b7a9f303dd167
msgid "environment"
msgstr "environment"
#: ../../getting_started/install.rst:12 33bfbe8defd74244bfc24e8fbfd640f6
msgid "deploy_faq"
msgstr "deploy_faq"


@@ -0,0 +1,424 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 23:15+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/deploy/deploy.md:1
#: de443fce549545518824a89604028a2e
msgid "Installation From Source"
msgstr "源码安装"
#: ../../getting_started/install/deploy/deploy.md:3
#: d7b1a80599004c589c9045eba98cc5c9
msgid ""
"This tutorial gives you a quick walkthrough about use DB-GPT with you "
"environment and data."
msgstr "本教程将带您快速上手如何结合您的环境与数据使用DB-GPT。"
#: ../../getting_started/install/deploy/deploy.md:5
#: 0ba98573194c4108aedaa2669915e949
msgid "Installation"
msgstr "安装"
#: ../../getting_started/install/deploy/deploy.md:7
#: b8f465fcee2b45009bb1c6356df06b20
msgid "To get started, install DB-GPT with the following steps."
msgstr "请按照以下步骤安装DB-GPT"
#: ../../getting_started/install/deploy/deploy.md:9
#: fd5031c97e304023bd6880cd10d58413
msgid "1. Hardware Requirements"
msgstr "1. 硬件要求"
#: ../../getting_started/install/deploy/deploy.md:10
#: 05f570b3999f465982c2648f658aed82
msgid ""
"As our project has the ability to achieve ChatGPT performance of over "
"85%, there are certain hardware requirements. However, overall, the "
"project can be deployed and used on consumer-grade graphics cards. The "
"specific hardware requirements for deployment are as follows:"
msgstr "由于我们的项目有能力达到85%以上的ChatGPT性能所以对硬件有一定的要求。但总体来说我们在消费级的显卡上即可完成项目的部署使用具体部署的硬件说明如下:"
#: ../../getting_started/install/deploy/deploy.md
#: 5c5ee902c51d4e44aeeac3fa99910098
msgid "GPU"
msgstr "GPU"
#: ../../getting_started/install/deploy/deploy.md
#: a3199d1f11474451a06a11503c4e8c74 e3d7c2003b444cb886aec34aaba4acfe
msgid "VRAM Size"
msgstr "显存"
#: ../../getting_started/install/deploy/deploy.md
#: 3bd4ce6f9201483fa579d42ebf8cf556
msgid "Performance"
msgstr "Performance"
#: ../../getting_started/install/deploy/deploy.md
#: 8256a27b6a534edea5646589d65eb34e
msgid "RTX 4090"
msgstr "RTX 4090"
#: ../../getting_started/install/deploy/deploy.md
#: 25c1f69adc5d4a058dbd28ea4414c3f8 ed85dab6725b4f0baf13ff67a7032777
msgid "24 GB"
msgstr "24 GB"
#: ../../getting_started/install/deploy/deploy.md
#: f57d2a02d8344a3d9870c1c21728249d
msgid "Smooth conversation inference"
msgstr "Smooth conversation inference"
#: ../../getting_started/install/deploy/deploy.md
#: aa1e607b65964d43ad93fc9b3cff7712
msgid "RTX 3090"
msgstr "RTX 3090"
#: ../../getting_started/install/deploy/deploy.md
#: c0220f95d58543b498bdf896b2c1a2a1
msgid "Smooth conversation inference, better than V100"
msgstr "Smooth conversation inference, better than V100"
#: ../../getting_started/install/deploy/deploy.md
#: acf0daf6aa764953b43464c8d6688dd8
msgid "V100"
msgstr "V100"
#: ../../getting_started/install/deploy/deploy.md
#: 902f8c48bdad47d587acb1990b4d45b7 e53c23b23b414025be52191beb6d33da
msgid "16 GB"
msgstr "16 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 68f4b835131c4753b1ba690f3b34daea fac3351a3901481c9e0c5204d6790c75
msgid "Conversation inference possible, noticeable stutter"
msgstr "Conversation inference possible, noticeable stutter"
#: ../../getting_started/install/deploy/deploy.md
#: d4b9ff72353b4a10bff0647bf50bfe5c
msgid "T4"
msgstr "T4"
#: ../../getting_started/install/deploy/deploy.md:19
#: ddc9544667654f539ca91ac7e8af1268
msgid ""
"if your VRAM Size is not enough, DB-GPT supported 8-bit quantization and "
"4-bit quantization."
msgstr "如果你的显存不够DB-GPT支持8-bit和4-bit量化版本"
#: ../../getting_started/install/deploy/deploy.md:21
#: a6ec9822bc754670bbfc1a8a75e71eb2
msgid ""
"Here are some of the VRAM size usage of the models we tested in some "
"common scenarios."
msgstr "以下是我们在一些常见场景下测试的模型显存占用情况。"
#: ../../getting_started/install/deploy/deploy.md
#: b307fe62a5564cadbf3f2d1387165c6b
msgid "Model"
msgstr "Model"
#: ../../getting_started/install/deploy/deploy.md
#: 718fb2ff4fcc488aba8963fc6ad5ea8c
msgid "Quantize"
msgstr "Quantize"
#: ../../getting_started/install/deploy/deploy.md
#: 6079a14fca3d43bfbf14021fcd1534c7 785489b458ca4578bfd586c495b5abb9
msgid "vicuna-7b-v1.5"
msgstr "vicuna-7b-v1.5"
#: ../../getting_started/install/deploy/deploy.md
#: 1d6a5c19584247d89fb2eb98bcaecc83 278d03ee54e749e1b5f20204ddc36149
#: 69c01cb441894f059e91400502cd33ae 7fa7d4922bfb4b3bb44b98ea02ff7e78
#: b8b4566e3a994919b9821cd536504936 d6f4afc865cb40b085b5fc79a09bc7f9
#: ef05aa05a2d2411a91449ccc18a76211
msgid "4-bit"
msgstr "4-bit"
#: ../../getting_started/install/deploy/deploy.md
#: 1266b6e1dde64dab9e6d8bba2f3f6d09 8ab98ed2c80c48ab9e9694131ffcac67
#: b94deb7b80c24ce8a694984511e5a02a
msgid "8 GB"
msgstr "8 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 065f1cf1a1b94ad5803f95f8f019d882 0689708416e14942a76c2808a26bc26e
#: 29dc55e7659a4d6a999a347c346e1327 5f0fa6c729db4cd7ab42dbdc73ca4e40
#: 6401e59dc85541a0b20cb2d2c26e4fd0 9071acd973b24d5582f8d879d5e55931
#: 96f12483ac7447baab6592538cfd567c
msgid "8-bit"
msgstr "8-bit"
#: ../../getting_started/install/deploy/deploy.md
#: 2d56e3dc1f6a4035a770f7b94c8e0f96 5eebdf37bc544624be5d1b6dabda4716
#: b9fd2505b4644257b91777bc68d5f41e e7056c195656413f92a0c78b5d14219c
#: e7b87586700e4da0aaccff0b4c7c54f7 eb5ad729ae784c7cb8dd52fbb12699ae
msgid "12 GB"
msgstr "12 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 529ead731c98461b8cb5452c4e72ab23 7cce32961a654ed2a31edc82724e6a1f
msgid "vicuna-13b-v1.5"
msgstr "vicuna-13b-v1.5"
#: ../../getting_started/install/deploy/deploy.md
#: 0085b850f3574ba6bf3b3654123882dd 69b2df6df91c49b2b26f6749bf6dc657
#: 714e9441566e4c8bbdeaad944e64c699
msgid "20 GB"
msgstr "20 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 133b65fb88f74645ae5db5cd0009bb35 1e7dedf510e94a47b23eaef61f9687b1
msgid "llama-2-7b"
msgstr "llama-2-7b"
#: ../../getting_started/install/deploy/deploy.md
#: 0951d03bb6544a2391dcd72eea47c1a7 89f93c8aadc84a0d97d3d89ee55d06bf
msgid "llama-2-13b"
msgstr "llama-2-13b"
#: ../../getting_started/install/deploy/deploy.md
#: 6e5a32858b20441daa4b2584faa46ec4 8bcd62d8cf4f49aebb7d97cd9e015252
msgid "llama-2-70b"
msgstr "llama-2-70b"
#: ../../getting_started/install/deploy/deploy.md
#: 7f7333221b014cc6857fd9a9e358d85c
msgid "48 GB"
msgstr "48 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 77c24e304e9e4de7b62f99ce29a66a70
msgid "80 GB"
msgstr "80 GB"
#: ../../getting_started/install/deploy/deploy.md
#: 32c04dc45efb45bcb516640a6d15cce1 e04ad78be6774c32bc53ddd7951cedae
msgid "baichuan-7b"
msgstr "baichuan-7b"
#: ../../getting_started/install/deploy/deploy.md
#: 0fe379939b164e56b0d93113e85fbd98 3400143cf1b94edfbf5da63ed388b08c
msgid "baichuan-13b"
msgstr "baichuan-13b"
#: ../../getting_started/install/deploy/deploy.md:40
#: 7a05f116e0904d0d84d9fc98e5465494
msgid "2. Install"
msgstr "2. Install"
#: ../../getting_started/install/deploy/deploy.md:45
#: 8f4d6c2b69cb46288f593b6c2aa7701e
msgid ""
"We use Sqlite as default database, so there is no need for database "
"installation. If you choose to connect to other databases, you can "
"follow our tutorial for installation and configuration. For the entire "
"installation process of DB-GPT, we use the miniconda3 virtual "
"environment. Create a virtual environment and install the Python "
"dependencies. [How to install "
"Miniconda](https://docs.conda.io/en/latest/miniconda.html)"
msgstr ""
"目前使用Sqlite作为默认数据库因此DB-GPT快速部署不需要部署相关数据库服务。如果你选择连接其他数据库,"
"可以按照我们的教程进行安装和配置。整个DB-GPT安装过程使用miniconda3虚拟环境"
"创建虚拟环境并安装Python依赖。[如何安装 Miniconda](https://docs.conda.io/en/latest/miniconda.html)"
#: ../../getting_started/install/deploy/deploy.md:54
#: 3ffaf7fed0c8422b9ceb2ab82d6ddd4d
msgid "Before use DB-GPT Knowledge"
msgstr "在使用知识库之前"
#: ../../getting_started/install/deploy/deploy.md:60
#: 2c2ef86e379d4db18bdfdba6133a0b2f
msgid ""
"Once the environment is installed, we have to create a new folder "
"\"models\" in the DB-GPT project, and then we can put all the models "
"downloaded from huggingface in this directory"
msgstr "环境安装完成后需要在DB-GPT项目中新建models文件夹然后将从huggingface下载的模型都放到该目录下"
#: ../../getting_started/install/deploy/deploy.md:63
#: 73a766538b3d4cfaa8d7a68b3c9915b8
msgid ""
"Notice make sure you have install git-lfs centos:yum install git-lfs "
"ubuntu:apt-get install git-lfs macos:brew install git-lfs"
msgstr ""
"注意:下载模型之前确保已经安装git-lfscentosyum install git-lfs ubuntuapt-get "
"install git-lfs macosbrew install git-lfs"
#: ../../getting_started/install/deploy/deploy.md:83
#: 3c26909ece094ecb9f6343d15cca394a
msgid ""
"The model files are large and will take a long time to download. During "
"the download, let's configure the .env file, which needs to be copied and"
" created from the .env.template"
msgstr "模型文件很大,需要很长时间才能下载。在下载过程中,让我们配置.env文件它需要从.env.template复制创建。"
#: ../../getting_started/install/deploy/deploy.md:85
#: efab7120927d4b3f90e591d736b927a3
msgid ""
"if you want to use openai llm service, see [LLM Use FAQ](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)"
msgstr ""
"如果你想使用openai大模型服务请参考[LLM Use FAQ](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)"
#: ../../getting_started/install/deploy/deploy.md:88
#: 2009fcaad7c34ebfaa900215650256fc
msgid "cp .env.template .env"
msgstr "cp .env.template .env"
#: ../../getting_started/install/deploy/deploy.md:91
#: ee97ddf25daf45e3bc32b33693af447a
msgid ""
"You can configure basic parameters in the .env file, for example setting "
"LLM_MODEL to the model to be used"
msgstr "您可以在.env文件中配置基本参数例如将LLM_MODEL设置为要使用的模型。"
#: ../../getting_started/install/deploy/deploy.md:93
#: a86fd88e1d0f4925b8d0dbc27535663b
msgid ""
"([Vicuna-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) based on "
"llama-2 has been released, we recommend you set `LLM_MODEL=vicuna-"
"13b-v1.5` to try this model)"
msgstr ""
"(基于llama-2的[Vicuna-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5)"
"已经发布,我们推荐你设置`LLM_MODEL=vicuna-13b-v1.5`来尝试这个模型)"
#: ../../getting_started/install/deploy/deploy.md:95
#: 5395445ea6324e7c9e15485fad084937
msgid "3. Run"
msgstr "3. Run"
#: ../../getting_started/install/deploy/deploy.md:96
#: cbbc83183f0d49bdb16a3df18adbe8b2
msgid ""
"You can refer to this document to obtain the Vicuna weights: "
"[Vicuna](https://github.com/lm-sys/FastChat/blob/main/README.md#model-"
"weights) ."
msgstr ""
"你可以参考此文档获取Vicuna权重[Vicuna](https://github.com/lm-"
"sys/FastChat/blob/main/README.md#model-weights)。"
#: ../../getting_started/install/deploy/deploy.md:98
#: e0ffb578c7894520bbb850b257e7773c
msgid ""
"If you have difficulty with this step, you can also directly use the "
"model from [this link](https://huggingface.co/Tribbiani/vicuna-7b) as a "
"replacement."
msgstr ""
"如果此步骤有困难,你也可以直接使用[此链接](https://huggingface.co/Tribbiani/vicuna-7b)"
"中的模型作为替代。"
#: ../../getting_started/install/deploy/deploy.md:100
#: 2a32ee94d4404dc2bf4c57aae21b5ec3
msgid ""
"set .env configuration set your vector store type, "
"eg:VECTOR_STORE_TYPE=Chroma, now we support Chroma and Milvus(version > "
"2.1)"
msgstr ""
"在.env文件中设置向量数据库类型例如VECTOR_STORE_TYPE=Chroma"
"目前支持Chroma和Milvus版本 > 2.1"
#: ../../getting_started/install/deploy/deploy.md:103
#: 590c7c07cf5347b4aeee0809185c7f45
msgid "1.Run db-gpt server"
msgstr "1.Run db-gpt server"
#: ../../getting_started/install/deploy/deploy.md:108
#: cc1f6d2e37464a4291ee7d33d9ebd75f
msgid "Open http://localhost:5000 with your browser to see the product."
msgstr "打开浏览器访问http://localhost:5000"
#: ../../getting_started/install/deploy/deploy.md:110
#: 7eef6b17573e4300aa6b693200461f58
msgid ""
"If you want to access an external LLM service, you need to 1.set the "
"variables LLM_MODEL=YOUR_MODEL_NAME "
"MODEL_SERVER=YOUR_MODEL_SERVEReg:http://localhost:5000 in the .env "
"file. 2.execute dbgpt_server.py in light mode"
msgstr ""
"如果你想访问外部的大模型服务即通过DB-GPT/pilot/server/llmserver.py启动的模型服务"
"1.需要在.env文件中设置LLM_MODEL=YOUR_MODEL_NAME和MODEL_SERVER=YOUR_MODEL_SERVER"
"例如http://localhost:50002.以light模式启动dbgpt_server.py"
#: ../../getting_started/install/deploy/deploy.md:113
#: 2fa89081574d4d3a92a4c7d33b090d02
msgid ""
"If you want to learn about dbgpt-webui, read https://github.com/csunny"
"/DB-GPT/tree/new-page-framework/datacenter"
msgstr ""
"如果你想了解web-ui, 请访问https://github.com/csunny/DB-GPT/tree/new-page-"
"framework/datacenter"
#: ../../getting_started/install/deploy/deploy.md:120
#: 3b825bc956a0406fb8464e51cfeb769e
msgid "4. Multiple GPUs"
msgstr "4. Multiple GPUs"
#: ../../getting_started/install/deploy/deploy.md:122
#: 568ea5e67ad745858870e66c42ba6833
msgid ""
"DB-GPT will use all available gpu by default. And you can modify the "
"setting `CUDA_VISIBLE_DEVICES=0,1` in `.env` file to use the specific gpu"
" IDs."
msgstr "DB-GPT默认使用所有可用的gpu你也可以通过在`.env`文件中修改`CUDA_VISIBLE_DEVICES=0,1`来指定gpu ID"
#: ../../getting_started/install/deploy/deploy.md:124
#: c5b980733d7a4c8d997123ff5524a055
msgid ""
"Optionally, you can also specify the gpu ID to use before the starting "
"command, as shown below:"
msgstr "你也可以在启动命令前指定要使用的gpu ID如下所示"
#: ../../getting_started/install/deploy/deploy.md:134
#: 2a5d283a614644d1bb98bbe721aee8e1
msgid ""
"You can modify the setting `MAX_GPU_MEMORY=xxGib` in `.env` file to "
"configure the maximum memory used by each GPU."
msgstr "同时你可以通过在.env文件中设置`MAX_GPU_MEMORY=xxGib`来配置每个GPU的最大可用显存"
#: ../../getting_started/install/deploy/deploy.md:136
#: c29c956d3071455bb11694df721e6612
msgid "5. Not Enough Memory"
msgstr "5. Not Enough Memory"
#: ../../getting_started/install/deploy/deploy.md:138
#: 0174e92fdbfa4af08063c89f6bbe3957
msgid "DB-GPT supported 8-bit quantization and 4-bit quantization."
msgstr "DB-GPT 支持 8-bit quantization 和 4-bit quantization."
#: ../../getting_started/install/deploy/deploy.md:140
#: 277f67fa08a541b3bd1fe77cdab39757
msgid ""
"You can modify the setting `QUANTIZE_8bit=True` or `QUANTIZE_4bit=True` "
"in `.env` file to use quantization(8-bit quantization is enabled by "
"default)."
msgstr "你可以通过在.env文件中设置`QUANTIZE_8bit=True`或`QUANTIZE_4bit=True`来使用量化8-bit量化默认开启"
#: ../../getting_started/install/deploy/deploy.md:142
#: 00884fdf7c9a4f8c983ee52bfbb820aa
msgid ""
"Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit"
" quantization can run with 48 GB of VRAM."
msgstr ""
"Llama-2-70b with 8-bit quantization 可以运行在 80 GB VRAM机器 4-bit "
"quantization 可以运行在 48 GB VRAM"
#: ../../getting_started/install/deploy/deploy.md:144
#: a73698444bb4426ca779cc126497a2e0
msgid ""
"Note: you need to install the latest dependencies according to "
"[requirements.txt](https://github.com/eosphoros-ai/DB-"
"GPT/blob/main/requirements.txt)."
msgstr ""
"注意,需要安装[requirements.txt](https://github.com/eosphoros-ai/DB-"
"GPT/blob/main/requirements.txt)涉及的所有的依赖"


@@ -0,0 +1,118 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/docker/docker.md:1
#: ea5d6b95dea844b89d2f5d0e8f6ebfd3
msgid "Docker Install"
msgstr "Docker Install"
#: ../../getting_started/install/docker/docker.md:4
#: c1125facb7334346a8e5b66bab892b1c
msgid "Docker (Experimental)"
msgstr "Docker (Experimental)"
#: ../../getting_started/install/docker/docker.md:6
#: 33a64984d0694ec9aee272f8a7ecd4cf
msgid "1. Building Docker image"
msgstr "1.构建Docker镜像"
#: ../../getting_started/install/docker/docker.md:12
#: 656beee32e6f4a49ad48a219910ba36c
msgid "Review images by listing them:"
msgstr "通过以下命令查看镜像列表:"
#: ../../getting_started/install/docker/docker.md:18
#: a8fd727500de480299e5bdfc86151473
msgid "Output should look something like the following:"
msgstr "输出日志应该长这样:"
#: ../../getting_started/install/docker/docker.md:25
#: 965edb9fe5184571b8afc5232cfd2773
msgid "You can pass some parameters to docker/build_all_images.sh."
msgstr "你可以在构建时向docker/build_all_images.sh传递一些参数"
#: ../../getting_started/install/docker/docker.md:33
#: a1743c21a4db468db108a60540dd4754
msgid ""
"You can execute the command `bash docker/build_all_images.sh --help` to "
"see more usage."
msgstr "可以执行命令`bash docker/build_all_images.sh --help`查看更多用法"
#: ../../getting_started/install/docker/docker.md:35
#: cb7f05675c674fcf931a6afa6fb7d24c
msgid "2. Run all in one docker container"
msgstr "2. Run all in one docker container"
#: ../../getting_started/install/docker/docker.md:37
#: 2b2d95e668ed428e97eac851c604a74c
msgid "**Run with local model**"
msgstr "**Run with local model**"
#: ../../getting_started/install/docker/docker.md:52
#: ../../getting_started/install/docker/docker.md:87
#: 731540ca95004dd2bb2c3a9871ddb404 e3ab41d312ea40d492b48cf8629553fe
msgid "Open http://localhost:5000 with your browser to see the product."
msgstr "打开浏览器访问http://localhost:5000"
#: ../../getting_started/install/docker/docker.md:55
#: 2cbab25caaa749f5b753c58947332cb2
msgid ""
"`-e LLM_MODEL=vicuna-13b`, means we use vicuna-13b as llm model, see "
"/pilot/configs/model_config.LLM_MODEL_CONFIG"
msgstr "`-e LLM_MODEL=vicuna-13b`表示使用vicuna-13b作为llm模型详见/pilot/configs/model_config.LLM_MODEL_CONFIG"
#: ../../getting_started/install/docker/docker.md:56
#: ed7e79d43ee940dca788f15d520850a3
msgid ""
"`-v /data/models:/app/models`, means we mount the local model file "
"directory `/data/models` to the docker container directory `/app/models`,"
" please replace it with your model file directory."
msgstr ""
"`-v /data/models:/app/models`表示将本地模型文件目录`/data/models`挂载到docker容器目录"
"`/app/models`,请替换成你自己的模型文件目录。"
#: ../../getting_started/install/docker/docker.md:58
#: 98e7bff5dab04979a0bb15abdf2ac1e0
msgid "You can see log with command:"
msgstr "你也可以通过命令查看日志"
#: ../../getting_started/install/docker/docker.md:64
#: 7ce23cecd6f24a6b8d3e4708d5f6265d
msgid "**Run with openai interface**"
msgstr "**Run with openai interface**"
#: ../../getting_started/install/docker/docker.md:83
#: bdf315780f454aaf9ead6414723f34c7
msgid ""
"`-e LLM_MODEL=proxyllm`, means we use proxy llm(openai interface, "
"fastchat interface...)"
msgstr "`-e LLM_MODEL=proxyllm`表示使用代理llm可以是openai接口也可以是fastchat接口等第三方模型服务API"
#: ../../getting_started/install/docker/docker.md:84
#: 12b3e78d3ebd47288a8d081eea278b45
msgid ""
"`-v /data/models/text2vec-large-chinese:/app/models/text2vec-large-"
"chinese`, means we mount the local text2vec model to the docker "
"container."
msgstr ""
"`-v /data/models/text2vec-large-chinese:/app/models/text2vec-large-chinese`"
"表示将本地的text2vec模型挂载到docker容器中作为知识库embedding模型。"


@@ -0,0 +1,53 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/docker_compose/docker_compose.md:1
#: f1092bd1601c48789cb6f35ff9141c69
msgid "Docker Compose"
msgstr "Docker Compose"
#: ../../getting_started/install/docker_compose/docker_compose.md:4
#: 63460a1c4e0f46378c6a48643684fa17
msgid "Run with docker compose"
msgstr "Run with docker compose"
#: ../../getting_started/install/docker_compose/docker_compose.md:10
#: fa53c93a8c8b451788987461e6b296b6
msgid "Output should look something like the following:"
msgstr "输出应该类似如下:"
#: ../../getting_started/install/docker_compose/docker_compose.md:18
#: 6c56a6ef5367442ab924f9a49d955d11
msgid "You can see log with command:"
msgstr "你可以通过命令查看日志"
#: ../../getting_started/install/docker_compose/docker_compose.md:24
#: 1bd9fac3803641fa9ec1730c7d15d3f1
msgid "Open http://localhost:5000 with your browser to see the product."
msgstr "打开浏览器访问http://localhost:5000"
#: ../../getting_started/install/docker_compose/docker_compose.md:26
#: 21214a8bf62746078ccf7ab83ccca4f7
msgid ""
"You can open docker-compose.yml in the project root directory to see more"
" details."
msgstr "可以打开项目根目录下的docker-compose.yml查看更多内容"


@@ -0,0 +1,358 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/environment/environment.md:1
#: f53e1cd4e7f14be68fe574f62b390e72
msgid "Env Parameter"
msgstr ""
#: ../../getting_started/install/environment/environment.md:4
#: 47136d85ee0e4d9b95ffbf8333bec51d
msgid "LLM MODEL Config"
msgstr ""
#: ../../getting_started/install/environment/environment.md:5
#: 3b15da40362e4a0182cfab06d0d7832e
msgid "LLM Model Name, see /pilot/configs/model_config.LLM_MODEL_CONFIG"
msgstr ""
#: ../../getting_started/install/environment/environment.md:6
#: d65fcfb91f254be995e26b80a9b369f8
msgid "LLM_MODEL=vicuna-13b"
msgstr ""
#: ../../getting_started/install/environment/environment.md:8
#: 5b1a0f7f9b81469ca43a328d18250c2d
msgid "MODEL_SERVER_ADDRESS"
msgstr ""
#: ../../getting_started/install/environment/environment.md:9
#: 2d99674a9aa44254a54548a746369731
msgid "MODEL_SERVER=http://127.0.0.1:8000 LIMIT_MODEL_CONCURRENCY"
msgstr ""
#: ../../getting_started/install/environment/environment.md:12
#: 9783efba49c24c38ba7c998fdba1f469
msgid "LIMIT_MODEL_CONCURRENCY=5"
msgstr ""
#: ../../getting_started/install/environment/environment.md:14
#: 6c9a05252b114942912c2f4123c6e5aa
msgid "MAX_POSITION_EMBEDDINGS"
msgstr ""
#: ../../getting_started/install/environment/environment.md:16
#: d8e6a0b293c4415b91f1c927c2b9313a
msgid "MAX_POSITION_EMBEDDINGS=4096"
msgstr ""
#: ../../getting_started/install/environment/environment.md:18
#: b206ac79475a40f3add1fabd26fe2f16
msgid "QUANTIZE_QLORA"
msgstr ""
#: ../../getting_started/install/environment/environment.md:20
#: fb7702fe165441dfb29d18e20a7d65e5
msgid "QUANTIZE_QLORA=True"
msgstr ""
#: ../../getting_started/install/environment/environment.md:22
#: 335beebb5fa34878967385bbe6c6aba6
msgid "QUANTIZE_8bit"
msgstr ""
#: ../../getting_started/install/environment/environment.md:24
#: 8ee6cf930d284c149eff21215795718c
msgid "QUANTIZE_8bit=True"
msgstr ""
#: ../../getting_started/install/environment/environment.md:27
#: ad49e465cfe44d13b7172a50ee335875
msgid "LLM PROXY Settings"
msgstr ""
#: ../../getting_started/install/environment/environment.md:28
#: 5804e5a550694f35ba7b3710b6e053a1
msgid "OPENAI Key"
msgstr ""
#: ../../getting_started/install/environment/environment.md:30
#: 06f7ff947bc14784b304774946b621fa
msgid "PROXY_API_KEY={your-openai-sk}"
msgstr ""
#: ../../getting_started/install/environment/environment.md:31
#: ae34d3c9a4914ce29744ceb1589b18c9
msgid "PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions"
msgstr ""
#: ../../getting_started/install/environment/environment.md:33
#: 1014a1617e7d46a6bccb28fdb0292134
msgid "from https://bard.google.com/ f12-> application-> __Secure-1PSID"
msgstr ""
#: ../../getting_started/install/environment/environment.md:35
#: 551d0dc883f141d4a692a0062c766106
msgid "BARD_PROXY_API_KEY={your-bard-token}"
msgstr ""
#: ../../getting_started/install/environment/environment.md:38
#: c0b88491521c45f0a5320c676b2bb72c
msgid "DATABASE SETTINGS"
msgstr ""
#: ../../getting_started/install/environment/environment.md:39
#: 831c7cba9b09499a92c4072cae486a93
msgid "SQLite database (Current default database)"
msgstr ""
#: ../../getting_started/install/environment/environment.md:40
#: 2eaaa662918244738f5a91b090b34c57
msgid "LOCAL_DB_PATH=data/default_sqlite.db"
msgstr ""
#: ../../getting_started/install/environment/environment.md:41
#: f2a89570e9334db6b0a274e4880c63ce
msgid "LOCAL_DB_TYPE=sqlite # Database Type default:sqlite"
msgstr ""
#: ../../getting_started/install/environment/environment.md:43
#: ed93685f20634c05b8cf11fd0dacce1b
msgid "MYSQL database"
msgstr ""
#: ../../getting_started/install/environment/environment.md:44
#: 769eb44abb0c4960a40e487bed7c42a0
msgid "LOCAL_DB_TYPE=mysql"
msgstr ""
#: ../../getting_started/install/environment/environment.md:45
#: 03219c94db144664894faddc398bf0ef
msgid "LOCAL_DB_USER=root"
msgstr ""
#: ../../getting_started/install/environment/environment.md:46
#: 951fcfc3621f45a8a12a2dd9c4b171e6
msgid "LOCAL_DB_PASSWORD=aa12345678"
msgstr ""
#: ../../getting_started/install/environment/environment.md:47
#: f60ad85f7deb497c9fe582c735dad911
msgid "LOCAL_DB_HOST=127.0.0.1"
msgstr ""
#: ../../getting_started/install/environment/environment.md:48
#: 47500d1a30124f07b612bf1038b6563f
msgid "LOCAL_DB_PORT=3306"
msgstr ""
#: ../../getting_started/install/environment/environment.md:51
#: 13f2e7f37f864f32ae5463a760790f5e
msgid "EMBEDDING SETTINGS"
msgstr ""
#: ../../getting_started/install/environment/environment.md:52
#: 7a5407c2a32645b1aaaf31237622a404
msgid "EMBEDDING MODEL Name, see /pilot/configs/model_config.LLM_MODEL_CONFIG"
msgstr ""
#: ../../getting_started/install/environment/environment.md:53
#: b4ca40b8dbfe467686d1a2634f2960f9
msgid "EMBEDDING_MODEL=text2vec"
msgstr ""
#: ../../getting_started/install/environment/environment.md:55
#: e1644a780b5f4070a27694d6015865e8
msgid "Embedding Chunk size, default 500"
msgstr ""
#: ../../getting_started/install/environment/environment.md:57
#: 9f462ca4c25c4b8bb9809425ec9cfb66
msgid "KNOWLEDGE_CHUNK_SIZE=500"
msgstr ""
#: ../../getting_started/install/environment/environment.md:59
#: 14ce15c1593b42db99f6b1891f1e8a46
msgid "Embedding Chunk Overlap, default 100"
msgstr ""
#: ../../getting_started/install/environment/environment.md:60
#: c21856a592924271bf1a655a8d552098
msgid "KNOWLEDGE_CHUNK_OVERLAP=100"
msgstr ""
#: ../../getting_started/install/environment/environment.md:62
#: 652b5c9891444333861d49d1f5a0029e
msgid "embedding recall top k, 5"
msgstr ""
#: ../../getting_started/install/environment/environment.md:64
#: 3d4e24e414ac4d2ca44a92a9481abd94
msgid "KNOWLEDGE_SEARCH_TOP_SIZE=5"
msgstr ""
#: ../../getting_started/install/environment/environment.md:66
#: 1407a7fd19304c15bd5f71cb4e3c5871
msgid "embedding recall max token, 2000"
msgstr ""
#: ../../getting_started/install/environment/environment.md:68
#: 1b0a31eaaf554cd087646bf384d5ddbf
msgid "KNOWLEDGE_SEARCH_MAX_TOKEN=5"
msgstr ""
#: ../../getting_started/install/environment/environment.md:71
#: ../../getting_started/install/environment/environment.md:87
#: 63dd3c374601464e8eae33fd7b2e28cc 94e2b09a85ea4883ab725dcd835ddd42
msgid "Vector Store SETTINGS"
msgstr ""
#: ../../getting_started/install/environment/environment.md:72
#: ../../getting_started/install/environment/environment.md:88
#: 3ed000578b014afb9f04fdb64bbc03c4 7f99a645141544b783967ae5f0683087
msgid "Chroma"
msgstr ""
#: ../../getting_started/install/environment/environment.md:73
#: ../../getting_started/install/environment/environment.md:89
#: 4e9e3e7bb12249bda66f5f87b9f722c4 645cf6085f924837836ec17dc895c498
msgid "VECTOR_STORE_TYPE=Chroma"
msgstr ""
#: ../../getting_started/install/environment/environment.md:74
#: ../../getting_started/install/environment/environment.md:90
#: 5f0b5c13d8d241de89260842587f029a fa090522f5a941179c1240c2f31a6d6b
msgid "MILVUS"
msgstr ""
#: ../../getting_started/install/environment/environment.md:75
#: ../../getting_started/install/environment/environment.md:91
#: 95a7301dbcfc42e38933a5b720c58477 e240454173ce4c9b9926cea648d63891
msgid "VECTOR_STORE_TYPE=Milvus"
msgstr ""
#: ../../getting_started/install/environment/environment.md:76
#: ../../getting_started/install/environment/environment.md:92
#: 249a91c0245d42ce891ff0d7217fb0d5 8d8ca097f2ba49af802bbcb5bfe02a8a
msgid "MILVUS_URL=127.0.0.1"
msgstr ""
#: ../../getting_started/install/environment/environment.md:77
#: ../../getting_started/install/environment/environment.md:93
#: 3180099b455a47aeb45701fbdc5e4e4d 9ebf3daaaa994b20880a77df05c55246
msgid "MILVUS_PORT=19530"
msgstr ""
#: ../../getting_started/install/environment/environment.md:78
#: ../../getting_started/install/environment/environment.md:94
#: 7a6bf87bbc354d75bfb7cdbb19a79db5 ed7b14490c29444bb6aca8f8052c3fd6
msgid "MILVUS_USERNAME"
msgstr ""
#: ../../getting_started/install/environment/environment.md:79
#: ../../getting_started/install/environment/environment.md:95
#: ca0ea23663d14ca884c35c75f3ec6762 fae03ba65210435fb4e1a840d0bf032c
msgid "MILVUS_PASSWORD"
msgstr ""
#: ../../getting_started/install/environment/environment.md:80
#: ../../getting_started/install/environment/environment.md:96
#: 4112d34ad70c4e2281b16304cbe7d6b6 ab17c60241b0455580057b895041692c
msgid "MILVUS_SECURE="
msgstr ""
#: ../../getting_started/install/environment/environment.md:82
#: ../../getting_started/install/environment/environment.md:98
#: 48909fb9e190460b9c1b95534ffa0424 7e17a2d67a64471ebce8287c6c080afb
msgid "WEAVIATE"
msgstr ""
#: ../../getting_started/install/environment/environment.md:83
#: 3cade72e7b5d4befa3fc049cf21521cb
msgid "VECTOR_STORE_TYPE=Weaviate"
msgstr ""
#: ../../getting_started/install/environment/environment.md:84
#: ../../getting_started/install/environment/environment.md:99
#: 264720556c5746a59dadad73427bcabd ff741c65112b45f0bed89390bf33cd03
msgid "WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.network"
msgstr ""
#: ../../getting_started/install/environment/environment.md:102
#: 9536be75496642c6b5302f6afb60c340
msgid "Multi-GPU Setting"
msgstr ""
#: ../../getting_started/install/environment/environment.md:103
#: 5986cb5fb6b34decad59f3a161c23b07
msgid ""
"See https://developer.nvidia.com/blog/cuda-pro-tip-control-gpu-"
"visibility-cuda_visible_devices/ If CUDA_VISIBLE_DEVICES is not "
"configured, all available gpus will be used"
msgstr ""
#: ../../getting_started/install/environment/environment.md:106
#: 5df0644d4a5f419287a9146eaddaffb6
msgid "CUDA_VISIBLE_DEVICES=0"
msgstr ""
#: ../../getting_started/install/environment/environment.md:108
#: 885df03d13914ea6a11a74063fc35b0a
msgid ""
"Optionally, you can also specify the gpu ID to use before the starting "
"command"
msgstr ""
#: ../../getting_started/install/environment/environment.md:110
#: 45b1a0051a5c402f863e14dc6fca47e8
msgid "CUDA_VISIBLE_DEVICES=3,4,5,6"
msgstr ""
#: ../../getting_started/install/environment/environment.md:112
#: ee9502fea64449e48c59948e8b4ecfb5
msgid "You can configure the maximum memory used by each GPU."
msgstr ""
#: ../../getting_started/install/environment/environment.md:114
#: af8a98833c064fcfa7260ef0bf889c56
msgid "MAX_GPU_MEMORY=16Gib"
msgstr ""
#: ../../getting_started/install/environment/environment.md:117
#: 473f789502f945688862d2d9c9f2b4df
msgid "Other Setting"
msgstr ""
#: ../../getting_started/install/environment/environment.md:118
#: 04871b36ae4f47edbf20b5abdeb92cb2
msgid "Language Settings(influence prompt language)"
msgstr ""
#: ../../getting_started/install/environment/environment.md:119
#: 54b4cba342014d9ca1b35acc13d71e6e
msgid "LANGUAGE=en"
msgstr ""
#: ../../getting_started/install/environment/environment.md:120
#: db66e92d5c1447abb725c5286608a646
msgid "LANGUAGE=zh"
msgstr ""


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-07-13 15:39+0800\n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -19,67 +19,76 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/installation.md:1 bc5bfc8ebfc847c5a22f2346357cf747
msgid "Installation"
msgstr "安装dbgpt包指南"
#: ../../getting_started/installation.md:1 f9e65f84d13249098aa2768bb7cc69ed
msgid "Python SDK"
msgstr ""
#: ../../getting_started/installation.md:2 1aaef0db5ee9426aa337021d782666af
#: ../../getting_started/installation.md:2 1902cffb20fe4994aecb577c1a3fa8bf
msgid ""
"DB-GPT provides a third-party Python API package that you can integrate "
"into your own code."
msgstr "DB-GPT提供了python第三方包你可以在你的代码中引入"
#: ../../getting_started/installation.md:4 de542f259e20441991a0e5a7d52769b8
#: ../../getting_started/installation.md:4 908a7c1971d04e37bd5e88171bd93d3b
msgid "Installation from Pip"
msgstr "使用pip安装"
#: ../../getting_started/installation.md:6 3357f019aa8249b292162de92757eec4
#: ../../getting_started/installation.md:6 0eff7cb3c9c643e0884209daddf5d709
msgid "You can simply pip install:"
msgstr "你可以使用pip install"
#: ../../getting_started/installation.md:12 9c610d593608452f9d7d8d7e462251e3
#: ../../getting_started/installation.md:12 1b0f308412cd44bda55efff5830b99f3
msgid "Notice:make sure python>=3.10"
msgstr "注意:确保你的python版本>=3.10"
#: ../../getting_started/installation.md:15 b2ed238c29bb40cba990068e8d7ceae7
#: ../../getting_started/installation.md:15 bec8afbfa9af4594bb09307f92c617a4
msgid "Environment Setup"
msgstr "环境设置"
#: ../../getting_started/installation.md:17 4804ad4d8edf44f49b1d35b271635fad
#: ../../getting_started/installation.md:17 61f60acc7a6449ac974593bd5f164e3e
msgid "By default, if you use the EmbeddingEngine api"
msgstr "如果你想使用EmbeddingEngine api"
#: ../../getting_started/installation.md:19 2205f69ec60d4f73bb3a93a583928455
#: ../../getting_started/installation.md:19 f31d9989e0514881baed8c690bc8035a
msgid "you will prepare embedding models from huggingface"
msgstr "你需要从huggingface下载embedding models"
#: ../../getting_started/installation.md:22 693c18a83f034dcc8c263674418bcde2
#: ../../getting_started/installation.md:22 02eb7897412c4223a013e7921eaed253
msgid "Notice make sure you have install git-lfs"
msgstr "确保你已经安装了git-lfs"
#: ../../getting_started/installation.md:30 dd8d0880b55e4c48bfc414f8cbdda268
#: ../../getting_started/installation.md:30 ced612323d5b47d3b93febbc637515bd
msgid "version:"
msgstr "版本:"
#: ../../getting_started/installation.md:31 731e634b96164efbbc1ce9fa88361b12
#: ../../getting_started/installation.md:31 4c168bdddb664c76ad4c2be177ef01a0
msgid "db-gpt0.3.0"
msgstr "db-gpt0.3.0"
#: ../../getting_started/installation.md:32 38fb635be4554d94b527c6762253d46d
#: ../../getting_started/installation.md:32 9bc5bb66e7214ca0b945b6b46a989307
msgid ""
"[embedding_engine api](https://db-"
"gpt.readthedocs.io/en/latest/modules/knowledge.html)"
msgstr ""
"[embedding_engine api](https://db-"
"gpt.readthedocs.io/en/latest/modules/knowledge.html)"
#: ../../getting_started/installation.md:33 a60b0ffe21a74ebca05529dc1dd1ba99
#: ../../getting_started/installation.md:33 52e656e2f0ae46688a6456a441f9c1e5
msgid ""
"[multi source embedding](https://db-"
"gpt.readthedocs.io/en/latest/modules/knowledge/pdf/pdf_embedding.html)"
msgstr ""
"[multi source embedding](https://db-"
"gpt.readthedocs.io/en/latest/modules/knowledge/pdf/pdf_embedding.html)"
#: ../../getting_started/installation.md:34 3c752c9305414719bc3f561cf18a75af
#: ../../getting_started/installation.md:34 726c87e53d8e4cd799d422aaf562fa29
msgid ""
"[vector connector](https://db-"
"gpt.readthedocs.io/en/latest/modules/vector.html)"
msgstr ""
"[vector connector](https://db-"
"gpt.readthedocs.io/en/latest/modules/vector.html)"
#~ msgid "Installation"
#~ msgstr "安装dbgpt包指南"


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.3.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -19,25 +19,29 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/tutorials.md:1 cb100b89a2a747cd90759e415c737070
#: ../../getting_started/tutorials.md:1 3e4b863de01942d5823d5dd3975bcb05
msgid "Tutorials"
msgstr "教程"
#: ../../getting_started/tutorials.md:4 dbc2a2346b384cc3930086f97181b14b
#: ../../getting_started/tutorials.md:4 bf8a3dc2c8d045d0b4668ecbd25c954d
msgid "This is a collection of DB-GPT tutorials on Medium."
msgstr "这是知乎上DB-GPT教程的集合。"
#: ../../getting_started/tutorials.md:6 67e5b6dbac654d428e6a8be9d1ec6473
#: ../../getting_started/tutorials.md:6 e72430ed63ca4c13bab165d61293585a
msgid ""
"DB-GPT is divided into several functions, including chat with knowledge "
"base, execute SQL, chat with database, and execute plugins."
msgstr "DB-GPT包含以下功能:和知识库聊天、执行SQL、和数据库聊天,以及执行插件。"
#: ../../getting_started/tutorials.md:8 744aaec68aa3413c9b17b09714476d32
#: ../../getting_started/tutorials.md:8 29cb77fcc97e494ba26ecd61fd21d84f
msgid "Introduction"
msgstr "介绍"
#: ../../getting_started/tutorials.md:9 305bcf5e847a4322a2834b84fa3c694a
#: ../../getting_started/tutorials.md:10 0158e41b603a47d8a8d0debcddc1d99c
msgid "youtube"
msgstr ""
#: ../../getting_started/tutorials.md:11 bd8078ee97094559b3e9b4a90a89651a
#, fuzzy
msgid "[What is DB-GPT](https://www.youtube.com/watch?v=QszhVJerc0I)"
msgstr ""
@@ -45,69 +49,75 @@ msgstr ""
"GPT](https://www.bilibili.com/video/BV1SM4y1a7Nj/?buvid=551b023900b290f9497610b2155a2668&is_story_h5=false&mid=%2BVyE%2Fwau5woPcUKieCWS0A%3D%3D&p=1&plat_id=116&share_from=ugc&share_medium=iphone&share_plat=ios&share_session_id=5D08B533-82A4-4D40-9615-7826065B4574&share_source=GENERIC&share_tag=s_i&timestamp=1686307943&unique_k=bhO3lgQ&up_id=31375446)"
" by csunny (https://github.com/csunny/DB-GPT)"
#: ../../getting_started/tutorials.md:11 22fdc6937b2248ae8f5a7ef385aa55d9
#, fuzzy
msgid "Knowledge"
msgstr "知识库"
#: ../../getting_started/tutorials.md:13 9bbf0f5aece64389b93b16235abda58e
#: ../../getting_started/tutorials.md:13 0330432860384964804cc6ea9054d06b
#, fuzzy
msgid ""
"[How to deploy DB-GPT step by "
"step](https://www.youtube.com/watch?v=OJGU4fQCqPs)"
msgstr ""
"###Introduction [什么是DB-"
"GPT](https://www.bilibili.com/video/BV1SM4y1a7Nj/?buvid=551b023900b290f9497610b2155a2668&is_story_h5=false&mid=%2BVyE%2Fwau5woPcUKieCWS0A%3D%3D&p=1&plat_id=116&share_from=ugc&share_medium=iphone&share_plat=ios&share_session_id=5D08B533-82A4-4D40-9615-7826065B4574&share_source=GENERIC&share_tag=s_i&timestamp=1686307943&unique_k=bhO3lgQ&up_id=31375446)"
" by csunny (https://github.com/csunny/DB-GPT)"
#: ../../getting_started/tutorials.md:15 ae201d75a3aa485e99b258103245db1c
#, fuzzy
msgid "![Add new Knowledge demonstration](../../assets/new_knownledge.gif)"
msgstr "[新增知识库演示](../../assets/new_knownledge_en.gif)"
#: ../../getting_started/tutorials.md:16 a5b7b6e46df040ddaea184487b6a6ba3
msgid "bilibili"
msgstr "bilibili"
#: ../../getting_started/tutorials.md:15 e7bfb3396f7b42f1a1be9f29df1773a2
#, fuzzy
msgid "Add new Knowledge demonstration"
msgstr "[新增知识库演示](../../assets/new_knownledge_en.gif)"
#: ../../getting_started/tutorials.md:17 9da89100917742fca40ab71bc1283504
msgid ""
"[What is DB-"
"GPT](https://www.bilibili.com/video/BV1SM4y1a7Nj/?spm_id_from=333.788&vd_source=7792e22c03b7da3c556a450eb42c8a0f)"
msgstr "[DB-"
"GPT介绍](https://www.bilibili.com/video/BV1SM4y1a7Nj/?spm_id_from=333.788&vd_source=7792e22c03b7da3c556a450eb42c8a0f)"
#: ../../getting_started/tutorials.md:17 d37acc0486ec40309e7e944bb0458b0a
msgid "SQL Generation"
msgstr "SQL生成"
#: ../../getting_started/tutorials.md:19 d0b58d505c774d0e867dbd87e703d4ed
msgid ""
"[How to deploy DB-GPT step by "
"step](https://www.bilibili.com/video/BV1mu411Y7ve/?spm_id_from=pageDriver&vd_source=7792e22c03b7da3c556a450eb42c8a0f)"
msgstr "[How to deploy DB-GPT step by "
"step](https://www.bilibili.com/video/BV1mu411Y7ve/?spm_id_from=pageDriver&vd_source=7792e22c03b7da3c556a450eb42c8a0f)"
#: ../../getting_started/tutorials.md:18 86a328c9e15f46679a2611f7162f9fbe
#, fuzzy
msgid "![sql generation demonstration](../../assets/demo_en.gif)"
msgstr "[sql生成演示](../../assets/demo_en.gif)"
#~ msgid "Knowledge"
#~ msgstr "知识库"
#: ../../getting_started/tutorials.md:18 03bc8d7320be44f0879a553a324ec26f
#, fuzzy
msgid "sql generation demonstration"
msgstr "[sql生成演示](../../assets/demo_en.gif)"
#~ msgid ""
#~ "[How to Create your own knowledge "
#~ "repository](https://db-"
#~ "gpt.readthedocs.io/en/latest/modules/knownledge.html)"
#~ msgstr ""
#~ "[怎么创建自己的知识库](https://db-"
#~ "gpt.readthedocs.io/en/latest/modules/knowledge.html)"
#: ../../getting_started/tutorials.md:20 5f3b241f24634c09880d5de014f64f1b
msgid "SQL Execute"
msgstr "SQL执行"
#~ msgid "![Add new Knowledge demonstration](../../assets/new_knownledge.gif)"
#~ msgstr "[新增知识库演示](../../assets/new_knownledge_en.gif)"
#: ../../getting_started/tutorials.md:21 13a16debf2624f44bfb2e0453c11572d
#, fuzzy
msgid "![sql execute demonstration](../../assets/auto_sql_en.gif)"
msgstr "[sql execute 演示](../../assets/auto_sql_en.gif)"
#~ msgid "Add new Knowledge demonstration"
#~ msgstr "[新增知识库演示](../../assets/new_knownledge_en.gif)"
#: ../../getting_started/tutorials.md:21 2d9673cfd48b49a5b1942fdc9de292bf
#, fuzzy
msgid "sql execute demonstration"
msgstr "SQL执行演示"
#~ msgid "SQL Generation"
#~ msgstr "SQL生成"
#: ../../getting_started/tutorials.md:23 8cc0c647ad804969b470b133708de37f
#, fuzzy
msgid "Plugins"
msgstr "DB插件"
#~ msgid "![sql generation demonstration](../../assets/demo_en.gif)"
#~ msgstr "[sql生成演示](../../assets/demo_en.gif)"
#: ../../getting_started/tutorials.md:24 cad5cc0cb94b42a1a6619bbd2a8b9f4c
#, fuzzy
msgid "![db plugins demonstration](../../assets/dashboard.png)"
msgstr "[db plugins 演示](../../assets/dbgpt_bytebase_plugin.gif)"
#~ msgid "sql generation demonstration"
#~ msgstr "[sql生成演示](../../assets/demo_en.gif)"
#: ../../getting_started/tutorials.md:24 adeee7ea37b743c9b251976124520725
msgid "db plugins demonstration"
msgstr "DB插件演示"
#~ msgid "SQL Execute"
#~ msgstr "SQL执行"
#~ msgid "![sql execute demonstration](../../assets/auto_sql_en.gif)"
#~ msgstr "[sql execute 演示](../../assets/auto_sql_en.gif)"
#~ msgid "sql execute demonstration"
#~ msgstr "SQL执行演示"
#~ msgid "Plugins"
#~ msgstr "DB插件"
#~ msgid "![db plugins demonstration](../../assets/dashboard.png)"
#~ msgstr "[db plugins 演示](../../assets/dbgpt_bytebase_plugin.gif)"
#~ msgid "db plugins demonstration"
#~ msgstr "DB插件演示"


@@ -0,0 +1,52 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-07-13 15:39+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../modules/knowledge/string/string_embedding.md:1
#: 0dbae4a197aa4e45a16cfd90c137c8f2
msgid "String"
msgstr ""
#: ../../modules/knowledge/string/string_embedding.md:3
#: bf9f445b1d2045848ad3c663e4551c1f
msgid ""
"string embedding can import a long raw text into a vector knowledge base."
" The entire embedding process includes the read (loading data), "
"data_process (data processing), and index_to_store (embedding to the "
"vector database) methods."
msgstr ""
#: ../../modules/knowledge/string/string_embedding.md:5
#: 4418539511024c2fa8cf1e73ade226a8
msgid "inheriting the SourceEmbedding"
msgstr ""
#: ../../modules/knowledge/string/string_embedding.md:23
#: 178050a4cdd6429aa73406043d89869b
msgid ""
"implement read() and data_process() read() method allows you to read data"
" and split data into chunk"
msgstr ""
#: ../../modules/knowledge/string/string_embedding.md:32
#: fdaf95f201624242bbfcff36480e1819
msgid "data_process() method allows you to pre processing your ways"
msgstr ""

View File

@@ -37,6 +37,10 @@ LLM_MODEL_CONFIG = {
"vicuna-13b-v1.5": os.path.join(MODEL_PATH, "vicuna-13b-v1.5"),
"vicuna-7b-v1.5": os.path.join(MODEL_PATH, "vicuna-7b-v1.5"),
"text2vec": os.path.join(MODEL_PATH, "text2vec-large-chinese"),
# https://huggingface.co/moka-ai/m3e-base
"m3e-base": os.path.join(MODEL_PATH, "m3e-base"),
# https://huggingface.co/moka-ai/m3e-large
"m3e-large": os.path.join(MODEL_PATH, "m3e-large"),
"sentence-transforms": os.path.join(MODEL_PATH, "all-MiniLM-L6-v2"),
"codegen2-1b": os.path.join(MODEL_PATH, "codegen2-1B"),
"codet5p-2b": os.path.join(MODEL_PATH, "codet5p-2b"),
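The hunk above keys short model names to local checkout paths under `MODEL_PATH`. A small hedged sketch of how such a mapping can be resolved at startup — the helper name and error handling are illustrative, not DB-GPT's actual API:

```python
import os

MODEL_PATH = os.path.join(os.getcwd(), "models")

# Illustrative subset of the mapping from the diff above.
LLM_MODEL_CONFIG = {
    "text2vec": os.path.join(MODEL_PATH, "text2vec-large-chinese"),
    # https://huggingface.co/moka-ai/m3e-base
    "m3e-base": os.path.join(MODEL_PATH, "m3e-base"),
    # https://huggingface.co/moka-ai/m3e-large
    "m3e-large": os.path.join(MODEL_PATH, "m3e-large"),
}


def resolve_model_path(name: str) -> str:
    """Return the configured local path for a model name, with a clear error."""
    try:
        return LLM_MODEL_CONFIG[name]
    except KeyError:
        known = ", ".join(sorted(LLM_MODEL_CONFIG))
        raise KeyError(f"unknown model {name!r}; configured models: {known}") from None


print(resolve_model_path("m3e-base"))
```

Failing fast with the list of configured names makes a typo in `LLM_MODEL` in `.env` easy to diagnose.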