Mirror of https://github.com/csunny/DB-GPT.git (synced 2025-07-30 15:21:02 +00:00)

Merge remote-tracking branch 'origin/main' into feat_rag_graph

This commit is contained in: commit b0dd2e7d6a

README.md (68 lines changed)
@@ -39,7 +39,17 @@

## What is DB-GPT?

DB-GPT is an experimental open-source project that uses localized GPT large models to interact with your data and environment. With this solution, you can be assured that there is no risk of data leakage, and your data is 100% private and secure.

DB-GPT is an open-source framework for large models in the database field. Its purpose is to build infrastructure for the domain of large models, making it easier and more convenient to develop applications around databases. By developing various technical capabilities such as:

1. **SMMF (Service-oriented Multi-model Management Framework)**
2. **Text2SQL Fine-tuning**
3. **RAG (Retrieval Augmented Generation) framework and optimization**
4. **Data-Driven Agents framework collaboration**
5. **GBI (Generative Business Intelligence)**

etc., DB-GPT simplifies the construction of large model applications based on databases.

In the era of Data 3.0, enterprises and developers can build their own customized applications with less code, leveraging models and databases.

## Contents
@@ -57,16 +67,6 @@ DB-GPT is an experimental open-source project that uses localized GPT large mode

Run on an RTX 4090 GPU.

##### Chat Excel


##### Chat Plugin


##### LLM Management


##### FastChat && vLLM


##### Trace


##### Chat Knowledge


## Install

@@ -97,23 +97,23 @@ Run on an RTX 4090 GPU.

## Features

Currently, we have released multiple key features, which are listed below to demonstrate our current capabilities:

- Private KBQA & data processing
- **Private Domain Q&A & Data Processing**

The DB-GPT project offers a range of features to enhance knowledge base construction and enable efficient storage and retrieval of both structured and unstructured data. These include built-in support for uploading multiple file formats, the ability to integrate plug-ins for custom data extraction, and unified vector storage and retrieval capabilities for managing large volumes of information. (A minimal sketch of such a flow appears after this feature list.)

- Multiple data sources & visualization
- **Multi-Data Source & GBI (Generative Business Intelligence)**

The DB-GPT project enables seamless natural language interaction with various data sources, including Excel, databases, and data warehouses. It facilitates effortless querying and retrieval of information from these sources, allowing users to engage in intuitive conversations and obtain insights. Additionally, DB-GPT supports the generation of analysis reports, providing users with valuable summaries and interpretations of the data.

- Multi-Agents & Plugins
- **Multi-Agents & Plugins**

It supports custom plug-ins to perform tasks, natively supports the Auto-GPT plug-in model, and its agents protocol adopts the Agent Protocol standard.

- Fine-tuning text2SQL
- **Automated Text2SQL Fine-tuning**

An automated, lightweight fine-tuning framework built around large language models, Text2SQL datasets, LoRA/QLoRA/P-Tuning, and other fine-tuning methods, making Text2SQL fine-tuning as convenient as an assembly line. [DB-GPT-Hub](https://github.com/eosphoros-ai/DB-GPT-Hub)

- Multi LLMs Support, Supports multiple large language models, currently supporting
- **SMMF (Service-oriented Multi-model Management Framework)**

Massive model support, including dozens of open-source and API-proxy large language models, such as LLaMA/LLaMA2, Baichuan, ChatGLM, Wenxin, Tongyi, Zhipu, etc.
- [Vicuna](https://huggingface.co/Tribbiani/vicuna-13b)
@@ -126,22 +126,6 @@ Currently, we have released multiple key features, which are listed below to dem

- [falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
- [internlm-chat-7b](https://huggingface.co/internlm/internlm-chat-7b)
- [Qwen-7B-Chat/Qwen-14B-Chat](https://huggingface.co/Qwen/)
- [RWKV-4-Raven](https://huggingface.co/BlinkDL/rwkv-4-raven)
- [CAMEL-13B-Combined-Data](https://huggingface.co/camel-ai/CAMEL-13B-Combined-Data)
- [dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b)
- [h2ogpt-gm-oasst1-en-2048-open-llama-7b](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b)
- [fastchat-t5-3b-v1.0](https://huggingface.co/lmsys/fastchat-t5)
- [mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat)
- [gpt4all-13b-snoozy](https://huggingface.co/nomic-ai/gpt4all-13b-snoozy)
- [Nous-Hermes-13b](https://huggingface.co/NousResearch/Nous-Hermes-13b)
- [codet5p-6b](https://huggingface.co/Salesforce/codet5p-6b)
- [guanaco-33b-merged](https://huggingface.co/timdettmers/guanaco-33b-merged)
- [WizardLM-13B-V1.0](https://huggingface.co/WizardLM/WizardLM-13B-V1.0)
- [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)
- [Llama2-Chinese-13b-Chat](https://huggingface.co/FlagAlpha/Llama2-Chinese-13b-Chat)
- [OpenLLaMa OpenInstruct](https://huggingface.co/VMware/open-llama-7b-open-instruct)

Etc.

- Support API Proxy LLMs
  - [x] [ChatGPT](https://api.openai.com/)
@@ -149,7 +133,7 @@ Currently, we have released multiple key features, which are listed below to dem

  - [x] [Wenxin](https://cloud.baidu.com/product/wenxinworkshop?track=dingbutonglan)
  - [x] [ChatGLM](http://open.bigmodel.cn/)

- Privacy and security
- **Privacy and Security**

The privacy and security of data are ensured through various technologies, such as privatized large models and proxy desensitization.

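To make the knowledge-base feature above more concrete, here is a minimal, hypothetical ingest-and-retrieve sketch of the kind of flow it describes: split documents into chunks, embed them into a vector store, then retrieve the most similar chunks for a question. This is not DB-GPT's own implementation; it uses the generic `chromadb` client API, and the collection name, sample documents, and parameters are illustrative.

```python
# Minimal RAG-style ingest/retrieve sketch (illustrative; not DB-GPT's actual code).
import chromadb

client = chromadb.Client()  # in-memory vector store, enough for the example
collection = client.create_collection("knowledge_base")

documents = [
    "DB-GPT builds infrastructure for large model applications around databases.",
    "KNOWLEDGE_CHUNK_SIZE controls how documents are split before embedding.",
]

# Store each chunk; Chroma embeds documents with its default embedding function.
collection.add(
    documents=documents,
    ids=[f"doc-{i}" for i in range(len(documents))],
)

# Retrieve the chunks most similar to a user question (top-k recall).
results = collection.query(query_texts=["How does DB-GPT split documents?"], n_results=2)
print(results["documents"])
```

The same idea sits behind the `KNOWLEDGE_CHUNK_SIZE`, `KNOWLEDGE_CHUNK_OVERLAP`, and `KNOWLEDGE_SEARCH_TOP_SIZE` settings documented later in this diff.
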
@@ -192,11 +176,6 @@ The core capabilities mainly consist of the following parts:

6. Privacy & Security: You can be assured that there is no risk of data leakage, and your data is 100% private and secure.
7. Text2SQL: We enhance the Text-to-SQL performance by applying Supervised Fine-Tuning (SFT) on large language models.

### RAG-IN-Action
<p align="center">
  <img src="./assets/RAG-IN-ACTION.jpg" width="800px" />
</p>

### SubModule
- [DB-GPT-Hub](https://github.com/eosphoros-ai/DB-GPT-Hub) Improves Text-to-SQL performance by applying Supervised Fine-Tuning (SFT) on large language models.
- [DB-GPT-Plugins](https://github.com/eosphoros-ai/DB-GPT-Plugins) DB-GPT plugins; can run Auto-GPT plugins directly.
@@ -310,19 +289,8 @@ The core capabilities mainly consist of the following parts:

- [x] ChatGLM2

- SFT Accuracy

As of October 10, 2023, by fine-tuning an open-source model of 13 billion parameters using this project, the execution accuracy on the Spider evaluation dataset has surpassed that of GPT-4!

| Name | Execution Accuracy | Reference |
| ---- | ------------------ | --------- |
| **GPT-4** | **0.762** | [numbersstation-eval-res](https://www.numbersstation.ai/post/nsql-llama-2-7b) |
| ChatGPT | 0.728 | [numbersstation-eval-res](https://www.numbersstation.ai/post/nsql-llama-2-7b) |
| **CodeLlama-13b-Instruct-hf_lora** | **0.789** | SFT-trained by this project on the Spider train set only, evaluated with this project's method (LoRA SFT) |
| CodeLlama-13b-Instruct-hf_qlora | 0.774 | SFT-trained by this project on the Spider train set only, evaluated with this project's method (QLoRA, NF4, 4-bit SFT) |
| wizardcoder | 0.610 | [text-to-sql-wizardcoder](https://github.com/cuplv/text-to-sql-wizardcoder/tree/main) |
| CodeLlama-13b-Instruct-hf | 0.556 | evaluated in this project with default parameters |
| llama2_13b_hf_lora_best | 0.744 | SFT-trained by this project on the Spider train set only, evaluated with this project's method |
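For readers unfamiliar with the metric, execution accuracy here is presumably the standard Spider-style measure: the fraction of questions for which executing the predicted SQL returns the same result as executing the reference SQL,

$$\mathrm{EX} = \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}\big[\,\mathrm{exec}(\hat{q}_i) = \mathrm{exec}(q_i)\,\big]$$

where $\hat{q}_i$ is the predicted query for question $i$ and $q_i$ is the gold query.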

[More information about Text2SQL fine-tuning](https://github.com/eosphoros-ai/DB-GPT-Hub)

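The `_lora` and `_qlora` rows above come from the DB-GPT-Hub fine-tuning pipeline. As a rough orientation only (this is not DB-GPT-Hub's actual code, and the hyperparameter values are placeholders), a LoRA setup for such a Text2SQL SFT run in the Hugging Face PEFT ecosystem typically looks like this:

```python
# Hypothetical LoRA configuration sketch for Text2SQL SFT (illustrative only).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "codellama/CodeLlama-13b-Instruct-hf"  # same base model as in the table
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Low-rank adapters are attached to the attention projections; only these
# adapter weights are trained, which is what makes 13B-scale SFT affordable.
lora_config = LoraConfig(
    r=64,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Training itself would iterate over (question + schema, SQL) pairs from the
# Spider train split, e.g. with transformers.Trainer or trl's SFTTrainer.
```
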
## Licence
README.zh.md (90 lines changed)
@@ -34,12 +34,9 @@

</div>

## DB-GPT 是什么?

DB-GPT是一个开源的数据库领域大模型框架。目的是构建大模型领域的基础设施,通过开发多模型管理、Text2SQL效果优化、RAG框架以及优化、Multi-Agents框架协作等多种技术能力,让围绕数据库构建大模型应用更简单,更方便。

随着大模型的发布迭代,大模型变得越来越智能,在使用大模型的过程当中,遇到极大的数据安全与隐私挑战。在利用大模型能力的过程中我们的私密数据跟环境需要掌握自己的手里,完全可控,避免任何的数据隐私泄露以及安全风险。基于此,我们发起了DB-GPT项目,为所有以数据库为基础的场景,构建一套完整的私有大模型解决方案。此方案因为支持本地部署,所以不仅仅可以应用于独立私有环境,而且还可以根据业务模块独立部署隔离,让大模型的能力绝对私有、安全、可控。我们的愿景是让围绕数据库构建大模型应用更简单,更方便。

DB-GPT 是一个开源的以数据库为基础的GPT实验项目,使用本地化的GPT大模型与您的数据和环境进行交互,无数据泄露风险,100% 私密

数据3.0 时代,基于模型、数据库,企业/开发者可以用更少的代码搭建自己的专属应用。

## 目录
@@ -59,19 +56,8 @@ DB-GPT 是一个开源的以数据库为基础的GPT实验项目,使用本地

##### Chat Excel


#### Chat Plugin


#### LLM Management


#### FastChat && vLLM


#### Trace


#### Chat Knowledge


#### 根据自然语言对话生成分析图表

<p align="left">
  <img src="./assets/chat_excel/chat_excel_6.png" width="800px" />
</p>

@@ -80,10 +66,6 @@ DB-GPT 是一个开源的以数据库为基础的GPT实验项目,使用本地
  <img src="./assets/dashboard.png" width="800px" />
</p>

<p align="left">
  <img src="./assets/chat_dashboard/chat_dashboard_2.png" width="800px" />
</p>

## 安装


@@ -111,26 +93,23 @@ DB-GPT 是一个开源的以数据库为基础的GPT实验项目,使用本地

- [**FAQ**](https://db-gpt.readthedocs.io/en/latest/getting_started/faq/deploy/deploy_faq.html)

## 特性一览

目前我们已经发布了多种关键的特性,这里一一列举展示一下当前发布的能力。

- 私域问答&数据处理
- **私域问答&数据处理&RAG**

支持内置、多文件格式上传、插件自抓取等方式自定义构建知识库,对海量结构化、非结构化数据做统一向量存储与检索

- 多数据源&可视化

- **多数据源&GBI**

支持自然语言与Excel、数据库、数仓等多种数据源交互,并支持分析报告。

- 自动化微调
- **自动化微调**

围绕大语言模型、Text2SQL数据集、LoRA/QLoRA/P-Tuning等微调方法构建的自动化微调轻量框架,让Text2SQL微调像流水线一样方便。详见: [DB-GPT-Hub](https://github.com/eosphoros-ai/DB-GPT-Hub)

- Multi-Agents&Plugins
- **Data-Driven Multi-Agents&Plugins**

支持自定义插件执行任务,原生支持Auto-GPT插件模型,Agents协议采用Agent Protocol标准

- 多模型支持与管理
- **多模型支持与管理**

海量模型支持,包括开源、API代理等几十种大语言模型。如LLaMA/LLaMA2、Baichuan、ChatGLM、文心、通义、智谱等。
- 支持多种大语言模型, 当前已支持如下模型:
@@ -141,30 +120,14 @@ DB-GPT 是一个开源的以数据库为基础的GPT实验项目,使用本地

- [baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B)
- [chatglm-6b](https://huggingface.co/THUDM/chatglm-6b)
- [chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b)
- [falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
- [internlm-chat-7b](https://huggingface.co/internlm/internlm-chat-7b)
- [Qwen-7B-Chat/Qwen-14B-Chat](https://huggingface.co/Qwen/)
- [RWKV-4-Raven](https://huggingface.co/BlinkDL/rwkv-4-raven)
- [CAMEL-13B-Combined-Data](https://huggingface.co/camel-ai/CAMEL-13B-Combined-Data)
- [dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b)
- [h2ogpt-gm-oasst1-en-2048-open-llama-7b](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b)
- [fastchat-t5-3b-v1.0](https://huggingface.co/lmsys/fastchat-t5)
- [mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat)
- [gpt4all-13b-snoozy](https://huggingface.co/nomic-ai/gpt4all-13b-snoozy)
- [Nous-Hermes-13b](https://huggingface.co/NousResearch/Nous-Hermes-13b)
- [codet5p-6b](https://huggingface.co/Salesforce/codet5p-6b)
- [guanaco-33b-merged](https://huggingface.co/timdettmers/guanaco-33b-merged)
- [WizardLM-13B-V1.0](https://huggingface.co/WizardLM/WizardLM-13B-V1.0)
- [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)
- [Llama2-Chinese-13b-Chat](https://huggingface.co/FlagAlpha/Llama2-Chinese-13b-Chat)
- [OpenLLaMa OpenInstruct](https://huggingface.co/VMware/open-llama-7b-open-instruct)

- 支持在线代理模型
  - [x] [ChatGPT](https://api.openai.com/)
  - [x] [Tongyi](https://www.aliyun.com/product/dashscope)
  - [x] [Wenxin](https://cloud.baidu.com/product/wenxinworkshop?track=dingbutonglan)
  - [x] [ChatGLM](http://open.bigmodel.cn/)

- 隐私安全
- **隐私安全**

通过私有化大模型、代理脱敏等多种技术保障数据的隐私安全。

@@ -192,22 +155,23 @@ DB-GPT 是一个开源的以数据库为基础的GPT实验项目,使用本地

| [StarRocks](https://github.com/StarRocks/starrocks) | No | TODO |

## 架构方案

DB-GPT基于 [FastChat](https://github.com/lm-sys/FastChat) 构建大模型运行环境。此外,我们通过LangChain提供私域知识库问答能力。同时我们支持插件模式,在设计上原生支持Auto-GPT插件。我们的愿景是让围绕数据库和LLM构建应用程序更加简便和便捷。

整个DB-GPT的架构,如下图所示

<p align="center">
  <img src="./assets/DB-GPT_zh.png" width="800px" />
</p>

核心能力主要有以下几个部分。
1. 多模型:支持多LLM,如LLaMA/LLaMA2、CodeLLaMA、ChatGLM、QWen、Vicuna以及代理模型ChatGPT、Baichuan、tongyi、wenxin等
2. 私域知识库问答: 可以根据本地文档(如pdf、word、excel等数据)进行高质量的智能问答。
3. 统一数据向量存储和索引: 将数据嵌入为向量并存储在向量数据库中,提供内容相似性搜索。
4. 多数据源: 用于连接不同的模块和数据源,实现数据的流动和交互。
5. Agent与插件: 提供Agent和插件机制,使得用户可以自定义并增强系统的行为。
6. 隐私和安全: 您可以放心,没有数据泄露的风险,您的数据100%私密和安全。
7. Text2SQL: 我们通过在大型语言模型监督微调(SFT)来增强文本到SQL的性能

核心能力主要有以下几个部分:

- **RAG(Retrieval Augmented Generation)**:RAG是当下落地实践最多,也是最迫切的领域,DB-GPT目前已经实现了一套基于RAG的框架,用户可以基于DB-GPT的RAG能力构建知识类应用。

- **GBI**:生成式BI是DB-GPT项目的核心能力之一,为构建企业报表分析、业务洞察提供基础的数智化技术保障。

- **Fine-tune框架**:模型微调是任何一个企业在垂直、细分领域落地不可或缺的能力,DB-GPT提供了完整的微调框架,实现与DB-GPT项目的无缝打通,在最近的微调中,基于Spider的准确率已经做到了82.5%。

- **数据驱动的Multi-Agents框架**:DB-GPT提供了数据驱动的自进化微调框架,目标是可以持续基于数据做决策与执行。

- **数据工厂**:数据工厂主要是在大模型时代,做可信知识、数据的清洗加工。

- **数据源**:对接各类数据源,实现生产业务数据无缝对接到DB-GPT核心能力。

### RAG生产落地实践架构
<p align="center">
@@ -345,16 +309,6 @@ The MIT License (MIT)

- SFT模型准确率

截止20231010,我们利用本项目基于开源的13B大小的模型微调后,在Spider的评估集上的执行准确率,已经超越GPT-4!

| 模型名称 | 执行准确率 | 说明 |
| ---- | ---- | ---- |
| **GPT-4** | **0.762** | [numbersstation-eval-res](https://www.numbersstation.ai/post/nsql-llama-2-7b) |
| ChatGPT | 0.728 | [numbersstation-eval-res](https://www.numbersstation.ai/post/nsql-llama-2-7b) |
| **CodeLlama-13b-Instruct-hf_lora** | **0.789** | SFT-trained by this project on the Spider train set only, evaluated with this project's method (LoRA SFT) |
| CodeLlama-13b-Instruct-hf_qlora | 0.774 | SFT-trained by this project on the Spider train set only, evaluated with this project's method (QLoRA, NF4, 4-bit SFT) |
| wizardcoder | 0.610 | [text-to-sql-wizardcoder](https://github.com/cuplv/text-to-sql-wizardcoder/tree/main) |
| CodeLlama-13b-Instruct-hf | 0.556 | evaluated in this project with default parameters |
| llama2_13b_hf_lora_best | 0.744 | SFT-trained by this project on the Spider train set only, evaluated with this project's method |

[More information about Text2SQL fine-tuning](https://github.com/eosphoros-ai/DB-GPT-Hub)

## 联系我们

@@ -3,48 +3,58 @@

   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to DB-GPT!
==================================
| As large models are released and iterated upon, they are becoming increasingly intelligent. However, in the process of using large models, we face significant challenges in data security and privacy. We need to ensure that our sensitive data and environments remain completely controlled and avoid any data privacy leaks or security risks. Based on this, we have launched the DB-GPT project to build a complete private large model solution for all database-based scenarios. This solution supports local deployment, allowing it to be applied not only in independent private environments but also to be independently deployed and isolated according to business modules, ensuring that the ability of large models is absolutely private, secure, and controllable.
Overview
------------------

| **DB-GPT** is an experimental open-source project that uses localized GPT large models to interact with your data and environment. With this solution, you can be assured that there is no risk of data leakage, and your data is 100% private and secure.
| DB-GPT is an open-source framework for large models in the database field. Its purpose is to build infrastructure for the domain of large models, making it easier and more convenient to develop applications around databases. By developing various technical capabilities such as:

| **Features**
Currently, we have released multiple key features, which are listed below to demonstrate our current capabilities:
1. **SMMF (Service-oriented Multi-model Management Framework)**
2. **Text2SQL Fine-tuning**
3. **RAG (Retrieval Augmented Generation) framework and optimization**
4. **Data-Driven Agents framework collaboration**
5. **GBI (Generative Business Intelligence)**

- SQL language capabilities
  - SQL generation
  - SQL diagnosis
etc., DB-GPT simplifies the construction of large model applications based on databases.

- Private domain Q&A and data processing
  - Database knowledge Q&A
  - Data processing
| In the era of Data 3.0, enterprises and developers can build their own customized applications with less code, leveraging models and databases.

- Plugins
  - Support custom plugin execution tasks and natively support the Auto-GPT plugin, such as:
Features
^^^^^^^^^^^

- Unified vector storage/indexing of knowledge base
  - Support for unstructured data such as PDF, Markdown, CSV, and WebURL
| **1. Private Domain Q&A & Data Processing**
| Supports custom construction of knowledge bases through methods such as built-in, multi-file format uploads, and plugin-based web scraping. Enables unified vector storage and retrieval of massive structured and unstructured data.

| **2. Multi-Data Source & GBI (Generative Business Intelligence)**
| Supports interaction between natural language and various data sources such as Excel, databases, and data warehouses. Also supports analysis reporting.

| **3. SMMF (Service-oriented Multi-model Management Framework)**
| Supports a wide range of models, including dozens of large language models such as open-source models and API proxies. Examples include LLaMA/LLaMA2, Baichuan, ChatGLM, Wenxin, Tongyi, Zhipu, Xinghuo, etc.

| **4. Automated Fine-tuning**
| A lightweight framework for automated fine-tuning built around large language models, Text2SQL datasets, and methods like LoRA/QLoRA/P-Tuning. Makes Text2SQL fine-tuning as convenient as a production line.

| **5. Data-Driven Multi-Agents & Plugins**
| Supports executing tasks through custom plugins and natively supports the Auto-GPT plugin model. The agents protocol follows the Agent Protocol standard.

| **6. Privacy and Security**
| Ensures data privacy and security through techniques such as privatizing large models and proxy de-identification.

- Multi LLMs Support
  - Supports multiple large language models, currently supporting Vicuna (7b, 13b), ChatGLM-6b (int4, int8)
  - TODO: codegen2, codet5p

Getting Started
-----------------
| How to get started using DB-GPT to interact with your data and environment.
- `Quickstart Guide <./getting_started/getting_started.html>`_
^^^^^^^^^^^^^^^^^

| Quickstart

- `Quickstart Guide <./getting_started/getting_started.html>`_

| Concepts and terminology

- `Concepts and Terminology <./getting_started/concepts.html>`_

| Coming soon...

- `Tutorials <.getting_started/tutorials.html>`_
.. toctree::
   :maxdepth: 2
   :caption: Getting Started
   :name: getting_started
   :hidden:

   getting_started/install.rst
@@ -57,10 +67,9 @@ Getting Started


Modules
---------
^^^^^^^^^

| These modules are the core abstractions with which we can interact with data and environment smoothly.
They are very important for DB-GPT, and DB-GPT also provides standard, extendable interfaces.
| These modules are the core abstractions with which we can interact with data and environment smoothly. They are very important for DB-GPT, and DB-GPT also provides standard, extendable interfaces.

| The docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides.

@@ -78,35 +87,23 @@ It's very important for DB-GPT, DB-GPT also provide standard, extendable interfa

- `Vector <./modules/vector.html>`_: Supported multi vector database.

-------------

.. toctree::
   :maxdepth: 2
   :caption: Modules
   :name: modules
   :hidden:

   ./modules/llms.md
   ./modules/prompts.md
   ./modules/plugins.md
   ./modules/connections.rst
   ./modules/knowledge.rst
   ./modules/vector.rst


Reference
-----------
| Full documentation on all methods, classes, installation methods, and integration setups for DB-GPT.

.. toctree::
   :maxdepth: 1
   :caption: Reference
   :name: reference
   :hidden:

   ./reference.md

   modules/llms.md
   modules/prompts.md
   modules/plugins.md
   modules/connections.rst
   modules/knowledge.rst
   modules/vector.rst

Resources
----------
-----------------

| Additional resources we think may be useful as you develop your application!
@@ -8,7 +8,7 @@ msgid ""
|
||||
msgstr ""
|
||||
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
|
||||
"Report-Msgid-Bugs-To: \n"
|
||||
"POT-Creation-Date: 2023-11-02 21:04+0800\n"
|
||||
"POT-Creation-Date: 2023-11-14 16:08+0800\n"
|
||||
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
|
||||
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
|
||||
"Language: zh_CN\n"
|
||||
@@ -20,292 +20,287 @@ msgstr ""
|
||||
"Generated-By: Babel 2.12.1\n"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:1
|
||||
#: a17719d2f4374285a7beb4d1db470146
|
||||
#: e4787ab6eacc4362802752528bb786ec
|
||||
#, fuzzy
|
||||
msgid "Environment Parameter"
|
||||
msgstr "环境变量说明"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:4
|
||||
#: 9a62e6fff7914eeaa2d195ddef4fcb61
|
||||
#: 4682a0734a034e0e9f2c22fa061b889e
|
||||
msgid "LLM MODEL Config"
|
||||
msgstr "模型配置"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:5
|
||||
#: 90e3991538324ecfac8cac7ef2103ac2
|
||||
#: c148f178b2964344a570bb2b3713fba3
|
||||
msgid "LLM Model Name, see /pilot/configs/model_config.LLM_MODEL_CONFIG"
|
||||
msgstr "LLM Model Name, see /pilot/configs/model_config.LLM_MODEL_CONFIG"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:6
|
||||
#: 1f45af01100c4586acbc05469e3006bc
|
||||
#: 9ab8d82fb338439a8c0042b92ad2f7c4
|
||||
msgid "LLM_MODEL=vicuna-13b"
|
||||
msgstr "LLM_MODEL=vicuna-13b"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:8
|
||||
#: bed14b704f154c2db525f7fafd3aa5a4
|
||||
#: 76fb3b1299694730852f120db6fec7f9
|
||||
msgid "MODEL_SERVER_ADDRESS"
|
||||
msgstr "MODEL_SERVER_ADDRESS"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:9
|
||||
#: ea42946cfe4f4ad996bf82c1996e7344
|
||||
msgid "MODEL_SERVER=http://127.0.0.1:8000 LIMIT_MODEL_CONCURRENCY"
|
||||
#: ../../getting_started/install/environment/environment.md:10
|
||||
#: 7476a0ee342f4517bbf999abecec029e
|
||||
#, fuzzy
|
||||
msgid "MODEL_SERVER=http://127.0.0.1:8000"
|
||||
msgstr "MODEL_SERVER=http://127.0.0.1:8000 LIMIT_MODEL_CONCURRENCY"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:12
|
||||
#: 021c261231f342fdba34098b1baa06fd
|
||||
msgid "LIMIT_MODEL_CONCURRENCY=5"
|
||||
#: fb3c73990a6443e8b63c35d61175e467
|
||||
#, fuzzy
|
||||
msgid "LIMIT_MODEL_CONCURRENCY"
|
||||
msgstr "LIMIT_MODEL_CONCURRENCY=5"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:14
|
||||
#: afaf0ba7fd09463d8ff74b514ed7264c
|
||||
#: 0eb187fffa3643dbac4bbe7237d2e011
|
||||
msgid "LIMIT_MODEL_CONCURRENCY=5"
|
||||
msgstr "LIMIT_MODEL_CONCURRENCY=5"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:16
|
||||
#: 1d7b8bf89c1b44e9871d9d0c382db114
|
||||
msgid "MAX_POSITION_EMBEDDINGS"
|
||||
msgstr "MAX_POSITION_EMBEDDINGS"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:16
|
||||
#: e4517a942bca4361a64a00408f993f5b
|
||||
#: ../../getting_started/install/environment/environment.md:18
|
||||
#: 50d0b3f760fd4ff9829cd1ba0653fd79
|
||||
msgid "MAX_POSITION_EMBEDDINGS=4096"
|
||||
msgstr "MAX_POSITION_EMBEDDINGS=4096"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:18
|
||||
#: 78d2ef04ed4548b9b7b0fb8ae35c9d5c
|
||||
#: ../../getting_started/install/environment/environment.md:20
|
||||
#: d07c4bbcde214f5993d73ac2bfb1bf9e
|
||||
msgid "QUANTIZE_QLORA"
|
||||
msgstr "QUANTIZE_QLORA"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:20
|
||||
#: bfa65db03c6d46bba293331f03ab15ac
|
||||
#: ../../getting_started/install/environment/environment.md:22
|
||||
#: 6bceef51780f45d9805270d16847ddc2
|
||||
msgid "QUANTIZE_QLORA=True"
|
||||
msgstr "QUANTIZE_QLORA=True"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:22
|
||||
#: 1947d45a7f184821910b4834ad5f1897
|
||||
#: ../../getting_started/install/environment/environment.md:24
|
||||
#: df9d560f69334e4aa3f6803e40a7f38d
|
||||
msgid "QUANTIZE_8bit"
|
||||
msgstr "QUANTIZE_8bit"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:24
|
||||
#: 4a2ee2919d0e4bdaa13c9d92eefd2aac
|
||||
#: ../../getting_started/install/environment/environment.md:26
|
||||
#: ac433b8574574432add7315558b845ea
|
||||
msgid "QUANTIZE_8bit=True"
|
||||
msgstr "QUANTIZE_8bit=True"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:27
|
||||
#: 348dc1e411b54ab09414f40a20e934e4
|
||||
#: ../../getting_started/install/environment/environment.md:29
|
||||
#: 7b1c407517984bff9f4d509c5f45b92e
|
||||
msgid "LLM PROXY Settings"
|
||||
msgstr "LLM PROXY Settings"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:28
|
||||
#: a692e78425a040f5828ab54ff9a33f77
|
||||
#: ../../getting_started/install/environment/environment.md:30
|
||||
#: ba7d52c0e95143ebb973e7eda69f0bc1
|
||||
msgid "OPENAI Key"
|
||||
msgstr "OPENAI Key"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:30
|
||||
#: 940d00e25a424acf92951a314a64e5ea
|
||||
#: ../../getting_started/install/environment/environment.md:32
|
||||
#: 0f0bd20a7a60461e8bcfc91297cc3666
|
||||
msgid "PROXY_API_KEY={your-openai-sk}"
|
||||
msgstr "PROXY_API_KEY={your-openai-sk}"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:31
|
||||
#: 4bd27547ae6041679e91f2a363cd1deb
|
||||
#: ../../getting_started/install/environment/environment.md:33
|
||||
#: d9c03e0b3316415eb2ca59ad9c419b8c
|
||||
msgid "PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions"
|
||||
msgstr "PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:33
|
||||
#: cfa3071afb0b47baad6bd729d4a02cb9
|
||||
#: ../../getting_started/install/environment/environment.md:35
|
||||
#: 45883f99c1fd494ea513f3c0f92562a3
|
||||
msgid "from https://bard.google.com/ f12-> application-> __Secure-1PSID"
|
||||
msgstr "from https://bard.google.com/ f12-> application-> __Secure-1PSID"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:35
|
||||
#: a17efa03b10f47f68afac9e865982a75
|
||||
#: ../../getting_started/install/environment/environment.md:37
|
||||
#: 70665dbe72c545a3b61c6efe37dfa7d5
|
||||
msgid "BARD_PROXY_API_KEY={your-bard-token}"
|
||||
msgstr "BARD_PROXY_API_KEY={your-bard-token}"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:38
|
||||
#: 6bcfe90574da4d82a459e8e11bf73cba
|
||||
#: ../../getting_started/install/environment/environment.md:40
|
||||
#: 782f8a9c9cd745a4990542ba8130c66a
|
||||
msgid "DATABASE SETTINGS"
|
||||
msgstr "DATABASE SETTINGS"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:39
|
||||
#: 2b1e62d9bf5d4af5a22f68c8248eaafb
|
||||
#: ../../getting_started/install/environment/environment.md:41
|
||||
#: 50ad9eae827a407c8c77692f48b9d423
|
||||
msgid "SQLite database (Current default database)"
|
||||
msgstr "SQLite database (Current default database)"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:40
|
||||
#: 8a909ac3b3c943da8dbc4e8dd596c80c
|
||||
#: ../../getting_started/install/environment/environment.md:42
|
||||
#: 410041683b664cabbe7ce6cb2050c629
|
||||
msgid "LOCAL_DB_PATH=data/default_sqlite.db"
|
||||
msgstr "LOCAL_DB_PATH=data/default_sqlite.db"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:41
|
||||
#: 90ae6507932f4815b6e180051738bb93
|
||||
#: ../../getting_started/install/environment/environment.md:43
|
||||
#: 0fcf0f9da84d4e4a8a1503a96dd6734b
|
||||
msgid "LOCAL_DB_TYPE=sqlite # Database Type default:sqlite"
|
||||
msgstr "LOCAL_DB_TYPE=sqlite # Database Type default:sqlite"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:43
|
||||
#: d2ce34e0dcf44ccf9e8007d548ba7b0a
|
||||
#: ../../getting_started/install/environment/environment.md:45
|
||||
#: 15fb9cdc51e44b71a1a375e49fb7bc6d
|
||||
msgid "MYSQL database"
|
||||
msgstr "MYSQL database"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:44
|
||||
#: c07159d63c334f6cbb95fcc30bfb7ea5
|
||||
#: ../../getting_started/install/environment/environment.md:46
|
||||
#: c8cc4cb61d1c44cd9ef3546455929ef6
|
||||
msgid "LOCAL_DB_TYPE=mysql"
|
||||
msgstr "LOCAL_DB_TYPE=mysql"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:45
|
||||
#: e16700b2ea8d411e91d010c1cde7aecc
|
||||
#: ../../getting_started/install/environment/environment.md:47
|
||||
#: a6caf3cabc4041b5879ec3af25c85139
|
||||
msgid "LOCAL_DB_USER=root"
|
||||
msgstr "LOCAL_DB_USER=root"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:46
|
||||
#: bfc2dce1bf374121b6861e677b4e1ffa
|
||||
#: ../../getting_started/install/environment/environment.md:48
|
||||
#: b839bde122374e299086f120fce0144c
|
||||
msgid "LOCAL_DB_PASSWORD=aa12345678"
|
||||
msgstr "LOCAL_DB_PASSWORD=aa12345678"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:47
|
||||
#: bc384739f5b04e21a34d0d2b78e7906c
|
||||
#: ../../getting_started/install/environment/environment.md:49
|
||||
#: 52cdbfdefda142b4a3b5cb3b060916a8
|
||||
msgid "LOCAL_DB_HOST=127.0.0.1"
|
||||
msgstr "LOCAL_DB_HOST=127.0.0.1"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:48
|
||||
#: e5253d452e0d42b7ac308fe6fbfb5017
|
||||
#: ../../getting_started/install/environment/environment.md:50
|
||||
#: 492db6e5c13b40898f38063980c5897c
|
||||
msgid "LOCAL_DB_PORT=3306"
|
||||
msgstr "LOCAL_DB_PORT=3306"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:51
|
||||
#: 9ca8f6fe06ed4cbab390f94be252e165
|
||||
#: ../../getting_started/install/environment/environment.md:53
|
||||
#: 20b101603f054c70af633439abddefec
|
||||
msgid "EMBEDDING SETTINGS"
|
||||
msgstr "EMBEDDING SETTINGS"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:52
|
||||
#: 76c7c260293c4b49bae057143fd48377
|
||||
#: ../../getting_started/install/environment/environment.md:54
|
||||
#: 3463a5a74cea494c8442100c0069285c
|
||||
msgid "EMBEDDING MODEL Name, see /pilot/configs/model_config.LLM_MODEL_CONFIG"
|
||||
msgstr "EMBEDDING模型, 参考see /pilot/configs/model_config.LLM_MODEL_CONFIG"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:53
|
||||
#: f1d63a0128ce493cae37d34f1976bcca
|
||||
#: ../../getting_started/install/environment/environment.md:55
|
||||
#: 4c8adbf52110474bbfcd3b63cf2839f6
|
||||
msgid "EMBEDDING_MODEL=text2vec"
|
||||
msgstr "EMBEDDING_MODEL=text2vec"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:55
|
||||
#: b8fbb99109d04781b2dd5bc5d6efa5bd
|
||||
#: ../../getting_started/install/environment/environment.md:57
|
||||
#: 8a85a75151e64827971b1a367b31ecfa
|
||||
msgid "Embedding Chunk size, default 500"
|
||||
msgstr "Embedding 切片大小, 默认500"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:57
|
||||
#: bf8256576ea34f6a9c5f261ab9aab676
|
||||
#: ../../getting_started/install/environment/environment.md:59
|
||||
#: 947939b0fa7e46de97d48eadf5c443d2
|
||||
msgid "KNOWLEDGE_CHUNK_SIZE=500"
|
||||
msgstr "KNOWLEDGE_CHUNK_SIZE=500"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:59
|
||||
#: 9b156c6b599b4c02a58ce023b4ff25f2
|
||||
#: ../../getting_started/install/environment/environment.md:61
|
||||
#: 2785ad6bb0de4534a6523ac420f2c84c
|
||||
msgid "Embedding Chunk Overlap, default 100"
|
||||
msgstr "Embedding chunk Overlap, 文本块之间的最大重叠量。保留一些重叠可以保持文本块之间的连续性(例如使用滑动窗口),默认100"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:60
|
||||
#: dcafd903c36041ac85ac99a14dbee512
|
||||
#: ../../getting_started/install/environment/environment.md:62
|
||||
#: 40b6a8f57ee14ec1ab73143ba1516e78
|
||||
msgid "KNOWLEDGE_CHUNK_OVERLAP=100"
|
||||
msgstr "KNOWLEDGE_CHUNK_OVERLAP=100"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:62
|
||||
#: 6c3244b7e5e24b0188c7af4bb52e9134
|
||||
#: ../../getting_started/install/environment/environment.md:64
|
||||
#: e410faa1087c45639ee210be99cf9336
|
||||
#, fuzzy
|
||||
msgid "embedding recall top k,5"
|
||||
msgstr "embedding 召回topk, 默认5"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:64
|
||||
#: f4a2f30551cf4fe1a7ff3c7c74ec77be
|
||||
#: ../../getting_started/install/environment/environment.md:66
|
||||
#: abfca38fe2a04161a11259588fa4d205
|
||||
msgid "KNOWLEDGE_SEARCH_TOP_SIZE=5"
|
||||
msgstr "KNOWLEDGE_SEARCH_TOP_SIZE=5"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:66
|
||||
#: 593f2512362f467e92fdaa60dd5903a0
|
||||
#: ../../getting_started/install/environment/environment.md:68
|
||||
#: 31182c38607b4c3bbc657b5fe5b7a4f6
|
||||
#, fuzzy
|
||||
msgid "embedding recall max token ,2000"
|
||||
msgstr "embedding向量召回最大token, 默认2000"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:68
|
||||
#: 83d6d28914be4d6282d457272e508ddc
|
||||
#: ../../getting_started/install/environment/environment.md:70
|
||||
#: 96cd042635bc468e90c792fd9d1a7f4d
|
||||
msgid "KNOWLEDGE_SEARCH_MAX_TOKEN=5"
|
||||
msgstr "KNOWLEDGE_SEARCH_MAX_TOKEN=5"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:71
|
||||
#: ../../getting_started/install/environment/environment.md:87
|
||||
#: 6bc1b9d995e74294a1c78e783c550db7 d33c77ded834438e9f4a2df06e7e041a
|
||||
#: ../../getting_started/install/environment/environment.md:73
|
||||
#: d43b408ad9bc46f2b3c97aa91627f6b3
|
||||
msgid "Vector Store SETTINGS"
|
||||
msgstr "Vector Store SETTINGS"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:72
|
||||
#: ../../getting_started/install/environment/environment.md:88
|
||||
#: 9cafa06e2d584f70afd848184e0fa52a f01057251b8b4ffea806192dfe1048ed
|
||||
#: ../../getting_started/install/environment/environment.md:74
|
||||
#: b1fcbf6049af4eeea91edd3de58c8512
|
||||
msgid "Chroma"
|
||||
msgstr "Chroma"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:73
|
||||
#: ../../getting_started/install/environment/environment.md:89
|
||||
#: e6c16fab37484769b819aeecbc13e6db faad299722e5400e95ec6ac3c1e018b8
|
||||
#: ../../getting_started/install/environment/environment.md:75
|
||||
#: 2fb31575b274448fb945d47ee0eb108c
|
||||
msgid "VECTOR_STORE_TYPE=Chroma"
|
||||
msgstr "VECTOR_STORE_TYPE=Chroma"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:74
|
||||
#: ../../getting_started/install/environment/environment.md:90
|
||||
#: 4eca3a51716d406f8ffd49c06550e871 581ee9dd38064b119660c44bdd00cbaa
|
||||
#: ../../getting_started/install/environment/environment.md:76
|
||||
#: 601b87cc6f1d4732b935747e907cba5a
|
||||
msgid "MILVUS"
|
||||
msgstr "MILVUS"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:75
|
||||
#: ../../getting_started/install/environment/environment.md:91
|
||||
#: 814c93048bed46589358a854d6c99683 b72b1269a2224f5f961214e41c019f21
|
||||
#: ../../getting_started/install/environment/environment.md:77
|
||||
#: fde6cf6982764020aa1174f7fe3a5b3e
|
||||
msgid "VECTOR_STORE_TYPE=Milvus"
|
||||
msgstr "VECTOR_STORE_TYPE=Milvus"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:76
|
||||
#: ../../getting_started/install/environment/environment.md:92
|
||||
#: 73ae665f1db9402883662734588fd02c c4da20319c994e83ba5a7706db967178
|
||||
#: ../../getting_started/install/environment/environment.md:78
|
||||
#: 40c6206c7a614edf9b0af82c2c76f518
|
||||
msgid "MILVUS_URL=127.0.0.1"
|
||||
msgstr "MILVUS_URL=127.0.0.1"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:77
|
||||
#: ../../getting_started/install/environment/environment.md:93
|
||||
#: e30c5288516d42aa858a485db50490c1 f843b2e58bcb4e4594e3c28499c341d0
|
||||
#: ../../getting_started/install/environment/environment.md:79
|
||||
#: abde3c75269442cbb94a59c657d847a9
|
||||
msgid "MILVUS_PORT=19530"
|
||||
msgstr "MILVUS_PORT=19530"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:78
|
||||
#: ../../getting_started/install/environment/environment.md:94
|
||||
#: 158669efcc7d4bcaac1c8dd01b499029 24e88ffd32f242f281c56c0ec3ad2639
|
||||
#: ../../getting_started/install/environment/environment.md:80
|
||||
#: 375a837cbf6d4d65891612a7f073414a
|
||||
msgid "MILVUS_USERNAME"
|
||||
msgstr "MILVUS_USERNAME"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:79
|
||||
#: ../../getting_started/install/environment/environment.md:95
|
||||
#: 111a985297184c8aa5a0dd8e14a58445 6602093a6bb24d6792548e2392105c82
|
||||
#: ../../getting_started/install/environment/environment.md:81
|
||||
#: f785a796c8d3452c802d9a637f34cb57
|
||||
msgid "MILVUS_PASSWORD"
|
||||
msgstr "MILVUS_PASSWORD"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:80
|
||||
#: ../../getting_started/install/environment/environment.md:96
|
||||
#: 47bdfcd78fbe4ccdb5f49b717a6d01a6 b96c0545b2044926a8a8190caf94ad25
|
||||
#: ../../getting_started/install/environment/environment.md:82
|
||||
#: 18cd17a50dc14add9b31f6b4c55069ef
|
||||
msgid "MILVUS_SECURE="
|
||||
msgstr "MILVUS_SECURE="
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:82
|
||||
#: ../../getting_started/install/environment/environment.md:98
|
||||
#: 755c32b5d6c54607907a138b5474c0ec ff4f2a7ddaa14f089dda7a14e1062c36
|
||||
#: ../../getting_started/install/environment/environment.md:84
|
||||
#: a4783d775bf2444788b758a71bd5a7e7
|
||||
msgid "WEAVIATE"
|
||||
msgstr "WEAVIATE"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:83
|
||||
#: 23b2ce83385d40a589a004709f9864be
|
||||
#: ../../getting_started/install/environment/environment.md:85
|
||||
#: 3cc5ca99670947e6868e27db588031e0
|
||||
msgid "VECTOR_STORE_TYPE=Weaviate"
|
||||
msgstr "VECTOR_STORE_TYPE=Weaviate"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:84
|
||||
#: ../../getting_started/install/environment/environment.md:99
|
||||
#: 9acef304d89a448a9e734346705ba872 cf5151b6c1594ccd8beb1c3f77769acb
|
||||
#: ../../getting_started/install/environment/environment.md:86
|
||||
#: 141a3da2e36e40ffaa0fb863081a4c07
|
||||
msgid "WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.network"
|
||||
msgstr "WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.network"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:102
|
||||
#: c3003516b2364051bf34f8c3086e348a
|
||||
#: ../../getting_started/install/environment/environment.md:89
|
||||
#: fde1941617ec4148b33c298bebeb45e4
|
||||
msgid "Multi-GPU Setting"
|
||||
msgstr "Multi-GPU Setting"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:103
|
||||
#: ade8fc381c5e438aa29d159c10041713
|
||||
#: ../../getting_started/install/environment/environment.md:90
|
||||
#: fe162354e15e42cda54f6c9322409321
|
||||
msgid ""
|
||||
"See https://developer.nvidia.com/blog/cuda-pro-tip-control-gpu-"
|
||||
"visibility-cuda_visible_devices/ If CUDA_VISIBLE_DEVICES is not "
|
||||
@@ -314,50 +309,50 @@ msgstr ""
|
||||
"参考 https://developer.nvidia.com/blog/cuda-pro-tip-control-gpu-visibility-"
|
||||
"cuda_visible_devices/ 如果 CUDA_VISIBLE_DEVICES没有设置, 会使用所有可用的gpu"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:106
|
||||
#: e137bd19be5e410ba6709027dbf2923a
|
||||
#: ../../getting_started/install/environment/environment.md:93
|
||||
#: c8a83b09bfc94dab8226840b275ca034
|
||||
msgid "CUDA_VISIBLE_DEVICES=0"
|
||||
msgstr "CUDA_VISIBLE_DEVICES=0"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:108
|
||||
#: 7669947acbdc4b1d92bcc029a8353a5d
|
||||
#: ../../getting_started/install/environment/environment.md:95
|
||||
#: a1d33bd2492a4a80bd8b679c1331280a
|
||||
msgid ""
|
||||
"Optionally, you can also specify the gpu ID to use before the starting "
|
||||
"command"
|
||||
msgstr "你也可以通过启动命令设置gpu ID"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:110
|
||||
#: 751743d1753b4051beea46371278d793
|
||||
#: ../../getting_started/install/environment/environment.md:97
|
||||
#: 961087a5cf1b45168c7439e3a2103253
|
||||
msgid "CUDA_VISIBLE_DEVICES=3,4,5,6"
|
||||
msgstr "CUDA_VISIBLE_DEVICES=3,4,5,6"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:112
|
||||
#: 3acc3de0af0d4df2bb575e161e377f85
|
||||
#: ../../getting_started/install/environment/environment.md:99
|
||||
#: 545b438ecb9d46edacbd8b4cc95886f9
|
||||
msgid "You can configure the maximum memory used by each GPU."
|
||||
msgstr "可以设置GPU的最大内存"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:114
|
||||
#: 67f1d9b172b84294a44ecace5436e6e0
|
||||
#: ../../getting_started/install/environment/environment.md:101
|
||||
#: a78dc8082fa04e13a7a3e43302830c26
|
||||
msgid "MAX_GPU_MEMORY=16Gib"
|
||||
msgstr "MAX_GPU_MEMORY=16Gib"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:117
|
||||
#: 3c69dfe48bcf46b89b76cac1e7849a66
|
||||
#: ../../getting_started/install/environment/environment.md:104
|
||||
#: eaebcb1784be4047b739ff1b8a78faa1
|
||||
msgid "Other Setting"
|
||||
msgstr "Other Setting"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:118
|
||||
#: d5015b70f4fe4d20a63de9d87f86957a
|
||||
#: ../../getting_started/install/environment/environment.md:105
|
||||
#: 21f524662fa34bfa9cfb8855bc191cc7
|
||||
msgid "Language Settings(influence prompt language)"
|
||||
msgstr "Language Settings(涉及prompt语言以及知识切片方式)"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:119
|
||||
#: 5543c28bb8e34c9fb3bb6b063c2b1750
|
||||
#: ../../getting_started/install/environment/environment.md:106
|
||||
#: bb5ce4a6ee794f0e910363673e54055a
|
||||
msgid "LANGUAGE=en"
|
||||
msgstr "LANGUAGE=en"
|
||||
|
||||
#: ../../getting_started/install/environment/environment.md:120
|
||||
#: cb4ed5b892ee41068c1ca76cb29aa400
|
||||
#: ../../getting_started/install/environment/environment.md:107
|
||||
#: 862f113d63b94084b89bfef29f8ab48d
|
||||
msgid "LANGUAGE=zh"
|
||||
msgstr "LANGUAGE=zh"
|
||||
|
||||
|
@@ -8,7 +8,7 @@ msgid ""
|
||||
msgstr ""
|
||||
"Project-Id-Version: DB-GPT 0.3.0\n"
|
||||
"Report-Msgid-Bugs-To: \n"
|
||||
"POT-Creation-Date: 2023-11-06 19:00+0800\n"
|
||||
"POT-Creation-Date: 2023-11-14 17:55+0800\n"
|
||||
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
|
||||
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
|
||||
"Language: zh_CN\n"
|
||||
@@ -19,151 +19,176 @@ msgstr ""
|
||||
"Content-Transfer-Encoding: 8bit\n"
|
||||
"Generated-By: Babel 2.12.1\n"
|
||||
|
||||
#: ../../index.rst:34 ../../index.rst:45 8bc3a47457a34995816985436034e233
|
||||
#: ../../index.rst:44 ../../index.rst:54 fb98707559574eb29f01bd8f6ebfac60
|
||||
msgid "Getting Started"
|
||||
msgstr "开始"
|
||||
|
||||
#: ../../index.rst:60 ../../index.rst:81 1a4e8a5dc7754967a0af9fb3d2e53017
|
||||
#: ../../index.rst:70 ../../index.rst:92 6d9603b978d44e54a257fb359c871867
|
||||
msgid "Modules"
|
||||
msgstr "模块"
|
||||
|
||||
#: ../../index.rst:96 ../../index.rst:99 c815772ae8514f0c9b26911b0dd73f54
|
||||
msgid "Reference"
|
||||
msgstr "参考"
|
||||
|
||||
#: ../../index.rst:109 ../../index.rst:115 dabe4c3409df489f84e4ec588f2b34a5
|
||||
#: ../../index.rst:106 ../../index.rst:112 9df06739ca4446bc86ec2ff6907763ce
|
||||
msgid "Resources"
|
||||
msgstr "资源"
|
||||
|
||||
#: ../../index.rst:7 7626b01b253546ac83ca0cf130dfa091
|
||||
msgid "Welcome to DB-GPT!"
|
||||
msgstr "欢迎来到DB-GPT中文文档"
|
||||
#: ../../index.rst:7 7de875cfbc764937ab7f8b362d997952
|
||||
msgid "Overview"
|
||||
msgstr "概览"
|
||||
|
||||
#: ../../index.rst:8 6037e5e0d7f7428ba92315a91ccfd53f
|
||||
#: ../../index.rst:9 770a756bd0b640ef863fd72b8d7e882a
|
||||
msgid ""
|
||||
"As large models are released and iterated upon, they are becoming "
|
||||
"increasingly intelligent. However, in the process of using large models, "
|
||||
"we face significant challenges in data security and privacy. We need to "
|
||||
"ensure that our sensitive data and environments remain completely "
|
||||
"controlled and avoid any data privacy leaks or security risks. Based on "
|
||||
"this, we have launched the DB-GPT project to build a complete private "
|
||||
"large model solution for all database-based scenarios. This solution "
|
||||
"supports local deployment, allowing it to be applied not only in "
|
||||
"independent private environments but also to be independently deployed "
|
||||
"and isolated according to business modules, ensuring that the ability of "
|
||||
"large models is absolutely private, secure, and controllable."
|
||||
msgstr ""
|
||||
"随着大型模型的发布和迭代,它们变得越来越智能。然而,在使用大型模型的过程中,我们在数据安全和隐私方面面临着重大挑战。我们需要确保我们的敏感数据和环境得到完全控制,避免任何数据隐私泄露或安全风险。基于此"
|
||||
",我们启动了DB-"
|
||||
"GPT项目,为所有基于数据库的场景构建一个完整的私有大模型解决方案。该方案“”支持本地部署,既可应用于“独立私有环境”,又可根据业务模块进行“独立部署”和“隔离”,确保“大模型”的能力绝对私有、安全、可控。"
|
||||
"DB-GPT is an open-source framework for large models in the database "
|
||||
"field. Its purpose is to build infrastructure for the domain of large "
|
||||
"models, making it easier and more convenient to develop applications "
|
||||
"around databases. By developing various technical capabilities such as:"
|
||||
msgstr "DB-GPT是一个开源的数据库领域大模型框架。目的是构建大模型领域的基础设施,通过开发如"
|
||||
|
||||
#: ../../index.rst:10 ab2a181d517047e6992171786c83f8e3
|
||||
#: ../../index.rst:11 8774a5ad5ce14baf9eae35fefd62e40b
|
||||
msgid "**SMMF(Service-oriented Multi-model Management Framework)**"
|
||||
msgstr "**服务化多模型管理**"
|
||||
|
||||
#: ../../index.rst:12 b2ba120fc994436db7066486c9acd6ad
|
||||
msgid "**Text2SQL Fine-tuning**"
|
||||
msgstr "**Text2SQL微调**"
|
||||
|
||||
#: ../../index.rst:13 d55efe86dd6b40ebbe63079edb60e421
|
||||
msgid "**RAG(Retrieval Augmented Generation) framework and optimization**"
|
||||
msgstr "**检索增强**"
|
||||
|
||||
#: ../../index.rst:14 3eca943c44464c9cb9bbc5724c27ad1c
|
||||
msgid "**Data-Driven Agents framework collaboration**"
|
||||
msgstr "**数据驱动的Agents协作框架**"
|
||||
|
||||
#: ../../index.rst:15 bf41d57cbc474e2c9829f09d6b983ae1
|
||||
msgid "**GBI(Generative Business intelligence)**"
|
||||
msgstr "**生成式报表分析**"
|
||||
|
||||
#: ../../index.rst:17 36630469cc064317a1c196dd377c3d93
|
||||
msgid ""
|
||||
"**DB-GPT** is an experimental open-source project that uses localized GPT"
|
||||
" large models to interact with your data and environment. With this "
|
||||
"solution, you can be assured that there is no risk of data leakage, and "
|
||||
"your data is 100% private and secure."
|
||||
msgstr ""
|
||||
"DB-GPT 是一个开源的以数据库为基础的GPT实验项目,使用本地化的GPT大模型与您的数据和环境进行交互,无数据泄露风险100% 私密,100%"
|
||||
" 安全。"
|
||||
"etc, DB-GPT simplifies the construction of large model applications based"
|
||||
" on databases."
|
||||
msgstr "等能力, 让围绕数据库构建大模型应用更简单,更方便。"
|
||||
|
||||
#: ../../index.rst:12 9cfb7515430d49af8a1ca47f60264a58
|
||||
msgid "**Features**"
|
||||
#: ../../index.rst:19 82f03535c6914ebfa8b3adad34eeed2f
|
||||
msgid ""
|
||||
"In the era of Data 3.0, enterprises and developers can build their own "
|
||||
"customized applications with less code, leveraging models and databases."
|
||||
msgstr "*数据3.0 时代,基于模型、数据库,企业/开发者可以用更少的代码搭建自己的专属应用*。"
|
||||
|
||||
#: ../../index.rst:22 daf64ec39c28458087d542879d106d1b
|
||||
msgid "Features"
|
||||
msgstr "特性"
|
||||
|
||||
#: ../../index.rst:13 2a1f84e455c84d9ca66c65f92e5b0d78
|
||||
#: ../../index.rst:24 7ceb41b710f847e683479dc892baa3d5
|
||||
msgid "**1. Private Domain Q&A & Data Processing**"
|
||||
msgstr "**1. 私域问答&数据处理**"
|
||||
|
||||
#: ../../index.rst:25 3f480e259ee9432b934ee6474bc8de79
|
||||
msgid ""
|
||||
"Currently, we have released multiple key features, which are listed below"
|
||||
" to demonstrate our current capabilities:"
|
||||
msgstr "目前我们已经发布了多种关键的特性,这里一一列举展示一下当前发布的能力。"
|
||||
"Supports custom construction of knowledge bases through methods such as "
|
||||
"built-in, multi-file format uploads, and plugin-based web scraping. "
|
||||
"Enables unified vector storage and retrieval of massive structured and "
|
||||
"unstructured data."
|
||||
msgstr "支持内置、多文件格式上传、插件自抓取等方式自定义构建知识库,对海量结构化,非结构化数据做统一向量存储与检索"
|
||||
|
||||
#: ../../index.rst:15 43de30ce92da4c3cbe43ae4e4c9f1869
|
||||
msgid "SQL language capabilities - SQL generation - SQL diagnosis"
|
||||
msgstr "SQL语言能力 - SQL生成 - SQL诊断"
|
||||
#: ../../index.rst:27 1f9f12be761a4a6c996788051a3fa4dd
|
||||
msgid "**2.Multi-Data Source & GBI(Generative Business intelligence)**"
|
||||
msgstr "**2.多数据源与可视化**"
|
||||
|
||||
#: ../../index.rst:19 edfeef5284e7426a9e551e782bc5702c
|
||||
#: ../../index.rst:28 e597e6c2d4ad4d1bbcc440b3afb7c0fa
|
||||
msgid ""
|
||||
"Private domain Q&A and data processing - Database knowledge Q&A - Data "
|
||||
"processing"
|
||||
msgstr "私有领域问答与数据处理 - 数据库知识问答 - 数据处理"
|
||||
"Supports interaction between natural language and various data sources "
|
||||
"such as Excel, databases, and data warehouses. Also supports analysis "
|
||||
"reporting."
|
||||
msgstr "支持自然语言与Excel、数据库、数仓等多种数据源交互,并支持分析报告。"
|
||||
|
||||
#: ../../index.rst:23 7a42f17049b943f88dd8f17baa440144
|
||||
#: ../../index.rst:30 9c63ecf927874f9ea79f1ef5c1535e67
|
||||
msgid "**3.SMMF(Service-oriented Multi-model Management Framework)**"
|
||||
msgstr "**3.多模型管理**"
|
||||
|
||||
#: ../../index.rst:31 d6cfb9b69f9743d083c4644c90fd6108
|
||||
msgid ""
|
||||
"Plugins - Support custom plugin execution tasks and natively support the "
|
||||
"Auto-GPT plugin, such as:"
|
||||
msgstr "插件模型 - 支持自定义插件执行任务,并原生支持Auto-GPT插件,例如:* SQL自动执行,获取查询结果 * 自动爬取学习知识"
|
||||
"Supports a wide range of models, including dozens of large language "
|
||||
"models such as open-source models and API proxies. Examples include "
|
||||
"LLaMA/LLaMA2, Baichuan, ChatGLM, Wenxin, Tongyi, Zhipu, Xinghuo, etc."
|
||||
msgstr "海量模型支持,包括开源、API代理等几十种大语言模型。如LLaMA/LLaMA2、Baichuan、ChatGLM、文心、通义、智谱、星火等。"
|
||||
|
||||
#: ../../index.rst:26 8b48d7b60bbc439da50a624c4048e6f6
|
||||
#: ../../index.rst:33 dda6cec4316e48f2afe77005baa53a06
|
||||
msgid "**4.Automated Fine-tuning**"
|
||||
msgstr "**4.自动化微调**"
|
||||
|
||||
#: ../../index.rst:34 7cf1654a9779444ab3982435887d087b
|
||||
msgid ""
|
||||
"Unified vector storage/indexing of knowledge base - Support for "
|
||||
"unstructured data such as PDF, Markdown, CSV, and WebURL"
|
||||
msgstr "知识库统一向量存储/索引 - 非结构化数据支持包括PDF、MarkDown、CSV、WebURL"
|
||||
"A lightweight framework for automated fine-tuning built around large "
|
||||
"language models, Text2SQL datasets, and methods like LoRA/QLoRA/Pturning."
|
||||
" Makes TextSQL fine-tuning as convenient as a production line."
|
||||
msgstr ""
|
||||
"围绕大语言模型、Text2SQL数据集、LoRA/QLoRA/Pturning等微调方法构建的自动化微调轻量框架, "
|
||||
"让TextSQL微调像流水线一样方便。"
|
||||
|
||||
#: ../../index.rst:29 97df482893924bd18e9a101922e7c374
|
||||
#, fuzzy
|
||||
#: ../../index.rst:36 f58f114546f04b658aaa67fd895fba2b
|
||||
msgid "**5.Data-Driven Multi-Agents & Plugins**"
|
||||
msgstr "**5.数据驱动的插件模型**"
|
||||
|
||||
#: ../../index.rst:37 a93fdca3de054cb0812d7f5ca3d12375
|
||||
msgid ""
|
||||
"Multi LLMs Support - Supports multiple large language models, currently "
|
||||
"supporting Vicuna (7b, 13b), ChatGLM-6b (int4, int8) - TODO: codegen2, "
|
||||
"codet5p"
|
||||
msgstr "多模型支持 - 支持多种大语言模型, 当前已支持Vicuna(7b,13b), ChatGLM-6b(int4, int8)"
|
||||
"Supports executing tasks through custom plugins and natively supports the"
|
||||
" Auto-GPT plugin model. Agents protocol follows the Agent Protocol "
|
||||
"standard."
|
||||
msgstr "支持自定义插件执行任务,原生支持Auto-GPT插件模型,Agents协议采用Agent Protocol标准"
|
||||
|
||||
#: ../../index.rst:35 1ef26ead30ed4b7fb966c8a17307cdc5
|
||||
#: ../../index.rst:39 3a0e89b151694e4b8e87646efe313568
|
||||
msgid "**6.Privacy and Security**"
|
||||
msgstr "**6.隐私安全**"
|
||||
|
||||
#: ../../index.rst:40 aa50fc40f22f4fae8225a0a0a97c17dc
|
||||
msgid ""
|
||||
"How to get started using DB-GPT to interact with your data and "
|
||||
"environment."
|
||||
msgstr "开始使用DB-GPT与您的数据环境进行交互。"
|
||||
"Ensures data privacy and security through techniques such as privatizing "
|
||||
"large models and proxy de-identification."
|
||||
msgstr "通过私有化大模型、代理脱敏等多种技术保障数据的隐私安全"
|
||||
|
||||
#: ../../index.rst:36 3b44ab3576944bf6aa221f35bc051f4e
|
||||
#: ../../index.rst:46 d8bf21a7abd749608cddcdb2e358f3be
|
||||
msgid "Quickstart"
|
||||
msgstr "快速开始"
|
||||
|
||||
#: ../../index.rst:48 d1f117a7cbb94c80afc0660e899d8154
|
||||
#, fuzzy
|
||||
msgid "`Quickstart Guide <./getting_started/getting_started.html>`_"
|
||||
msgstr "`使用指南 <./getting_started/getting_started.html>`_"
|
||||
|
||||
#: ../../index.rst:38 430cb239cdce42a0b62db46aba3f3bdb
|
||||
#: ../../index.rst:50 5fd56979f31b4a0b93082004f1cb90c7
|
||||
msgid "Concepts and terminology"
|
||||
msgstr "相关概念"
|
||||
|
||||
#: ../../index.rst:40 ded4d9f80066498e90ba6214520013f7
|
||||
#: ../../index.rst:52 09c6889d02fa417c9ffde312211726f0
|
||||
#, fuzzy
|
||||
msgid "`Concepts and Terminology <./getting_started/concepts.html>`_"
|
||||
msgstr "`相关概念 <./getting_started/concepts.html>`_"
|
||||
|
||||
#: ../../index.rst:42 cd662e53621e474d901146813c750044
|
||||
msgid "Coming soon..."
|
||||
msgstr ""
|
||||
|
||||
#: ../../index.rst:44 15edba57f1de44af8aff76735a2593de
|
||||
msgid "`Tutorials <.getting_started/tutorials.html>`_"
|
||||
msgstr "`教程 <.getting_started/tutorials.html>`_"
|
||||
|
||||
#: ../../index.rst:62 779454b29d8e4e6eb21497025922d1b8
|
||||
#: ../../index.rst:72 5bd727134fc94cfb88abb755ccceac03
|
||||
msgid ""
|
||||
"These modules are the core abstractions with which we can interact with "
|
||||
"data and environment smoothly."
|
||||
msgstr "这些模块是我们可以与数据和环境顺利地进行交互的核心组成。"
|
||||
"data and environment smoothly. It's very important for DB-GPT, DB-GPT "
|
||||
"also provide standard, extendable interfaces."
|
||||
msgstr "这些模块是我们能够与数据和环境顺利交互的核心抽象。这对于DB-GPT来说非常重要,DB-GPT还提供了标准的、可扩展的接口。"
|
||||
|
||||
#: ../../index.rst:63 bcd0e8c88c7b4807a91dd442416bec19
|
||||
msgid ""
|
||||
"It's very important for DB-GPT, DB-GPT also provide standard, extendable "
|
||||
"interfaces."
|
||||
msgstr "DB-GPT还提供了标准的、可扩展的接口。"
|
||||
|
||||
#: ../../index.rst:65 1e785dc6925045e8ba106cf4a3b17cac
|
||||
#: ../../index.rst:74 1a5eb0b7cb884309be3431112c8f38e5
|
||||
msgid ""
|
||||
"The docs for each module contain quickstart examples, how to guides, "
|
||||
"reference docs, and conceptual guides."
|
||||
msgstr "每个模块的文档都包含快速入门的例子、操作指南、参考文档和相关概念等内容。"
|
||||
|
||||
#: ../../index.rst:67 9c9fddd14bfd40339889f5d1f0b04163
|
||||
#: ../../index.rst:76 24aa8c08d1dc460ab23d69a5bb9c8fc3
|
||||
msgid "The modules are as follows"
|
||||
msgstr "组成模块如下:"
|
||||
|
||||
#: ../../index.rst:69 4a19083cadd04b8e8b649a622e0ceccd
|
||||
#: ../../index.rst:78 9f4280cca1f743cb9b868cc67e3f3ce7
|
||||
msgid ""
|
||||
"`LLMs <./modules/llms.html>`_: Supported multi models management and "
|
||||
"integrations."
|
||||
msgstr "`LLMs <./modules/llms.html>`_:基于FastChat提供大模型的运行环境。支持多模型管理和集成。 "
|
||||
|
||||
#: ../../index.rst:71 436a139225574aa5b066a1835d38238d
|
||||
#: ../../index.rst:80 d357811f110f40e79f0c20ef9cb60d0c
|
||||
msgid ""
|
||||
"`Prompts <./modules/prompts.html>`_: Prompt management, optimization, and"
|
||||
" serialization for multi database."
|
||||
@ -171,41 +196,35 @@ msgstr ""
|
||||
"`Prompt自动生成与优化 <./modules/prompts.html>`_: 自动化生成高质量的Prompt "
|
||||
",并进行优化,提高系统的响应效率"
|
||||
|
||||
#: ../../index.rst:73 6c53edfb2e494c5fba6efb5ade48c310
|
||||
#: ../../index.rst:82 3cb9acc9f11a46638e6687f743d6b7f3
|
||||
msgid "`Plugins <./modules/plugins.html>`_: Plugins management, scheduler."
|
||||
msgstr "`Agent与插件: <./modules/plugins.html>`_:提供Agent和插件机制,使得用户可以自定义并增强系统的行为。"
|
||||
|
||||
#: ../../index.rst:75 6328760e8faf4e8296f3e1edd486316c
|
||||
#: ../../index.rst:84 b24c462cb5364890a6ca990f09f48cfc
|
||||
#, fuzzy
|
||||
msgid ""
|
||||
"`Knowledge <./modules/knowledge.html>`_: Knowledge management, embedding,"
|
||||
" and search."
|
||||
msgstr "`知识库能力: <./modules/knowledge.html>`_: 支持私域知识库问答能力, "
|
||||
|
||||
#: ../../index.rst:77 da272ccf56e3498d92009ac7101b0c45
|
||||
#: ../../index.rst:86 7448b231fe8745f1965a1f48ffc5444a
|
||||
msgid ""
|
||||
"`Connections <./modules/connections.html>`_: Supported multi databases "
|
||||
"connections. Manage connections and interact with them."
|
||||
msgstr "`连接模块 <./modules/connections.html>`_: 用于连接不同的模块和数据源,实现数据的流转和交互 "
|
||||
|
||||
#: ../../index.rst:79 1a0551f62d9d418a9dec267fbcb49af0
|
||||
#: ../../index.rst:88 c677fb24869347ff907f1529ef333b6b
|
||||
#, fuzzy
|
||||
msgid "`Vector <./modules/vector.html>`_: Supported multi vector database."
|
||||
msgstr "`Vector <./modules/vector.html>`_:支持多种向量数据库。"
|
||||
|
||||
#: ../../index.rst:97 9aceee0dbe1e4f7da499ac6aab23aea2
|
||||
msgid ""
|
||||
"Full documentation on all methods, classes, installation methods, and "
|
||||
"integration setups for DB-GPT."
|
||||
msgstr "关于DB-GPT的所有方法、类、安装方法和集成设置的完整文档。"
|
||||
|
||||
#: ../../index.rst:111 c9a729f4e1964894bae215793647ab75
|
||||
#: ../../index.rst:108 2e56f2cb1a8b40dda9465c0a1af94196
|
||||
msgid ""
|
||||
"Additional resources we think may be useful as you develop your "
|
||||
"application!"
|
||||
msgstr "“我们认为在您开发应用程序时可能有用的其他资源!”"
|
||||
msgstr "我们认为在您开发应用程序时可能有用的其他资源!"
|
||||
|
||||
#: ../../index.rst:113 06e6e4b7776c405fa94ae7b59253162d
|
||||
#: ../../index.rst:110 590362cb3b7442d49eafa58cb323e127
|
||||
msgid ""
|
||||
"`Discord <https://discord.gg/eZHE94MN>`_: if you have problems or "
|
||||
"ideas, you can discuss them on Discord."
|
||||
@ -278,3 +297,270 @@ msgstr "`Discord <https://discord.gg/eZHE94MN>`_:如果您有任何问题,可
|
||||
#~ "autonomoly."
|
||||
#~ msgstr "`插件工具 <./use_cases/tool_use_with_plugin>`_: 根据插件使用工具自主管理数据库。"
|
||||
|
||||
#~ msgid "Reference"
|
||||
#~ msgstr "参考"
|
||||
|
||||
#~ msgid "Welcome to DB-GPT!"
|
||||
#~ msgstr "欢迎来到DB-GPT中文文档"
|
||||
|
||||
#~ msgid ""
|
||||
#~ "As large models are released and "
|
||||
#~ "iterated upon, they are becoming "
|
||||
#~ "increasingly intelligent. However, in the "
|
||||
#~ "process of using large models, we "
|
||||
#~ "face significant challenges in data "
|
||||
#~ "security and privacy. We need to "
|
||||
#~ "ensure that our sensitive data and "
|
||||
#~ "environments remain completely controlled and"
|
||||
#~ " avoid any data privacy leaks or "
|
||||
#~ "security risks. Based on this, we "
|
||||
#~ "have launched the DB-GPT project "
|
||||
#~ "to build a complete private large "
|
||||
#~ "model solution for all database-based"
|
||||
#~ " scenarios. This solution supports local"
|
||||
#~ " deployment, allowing it to be "
|
||||
#~ "applied not only in independent private"
|
||||
#~ " environments but also to be "
|
||||
#~ "independently deployed and isolated according"
|
||||
#~ " to business modules, ensuring that "
|
||||
#~ "the ability of large models is "
|
||||
#~ "absolutely private, secure, and controllable."
|
||||
#~ msgstr ""
|
||||
#~ "随着大型模型的发布和迭代,它们变得越来越智能。然而,在使用大型模型的过程中,我们在数据安全和隐私方面面临着重大挑战。我们需要确保我们的敏感数据和环境得到完全控制,避免任何数据隐私泄露或安全风险。基于此"
|
||||
#~ ",我们启动了DB-"
|
||||
#~ "GPT项目,为所有基于数据库的场景构建一个完整的私有大模型解决方案。该方案支持本地部署,既可应用于“独立私有环境”,又可根据业务模块进行“独立部署”和“隔离”,确保“大模型”的能力绝对私有、安全、可控。"
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**DB-GPT** is an experimental open-"
|
||||
#~ "source project that uses localized GPT"
|
||||
#~ " large models to interact with your"
|
||||
#~ " data and environment. With this "
|
||||
#~ "solution, you can be assured that "
|
||||
#~ "there is no risk of data leakage,"
|
||||
#~ " and your data is 100% private "
|
||||
#~ "and secure."
|
||||
#~ msgstr ""
|
||||
#~ "DB-GPT "
|
||||
#~ "是一个开源的以数据库为基础的GPT实验项目,使用本地化的GPT大模型与您的数据和环境进行交互,无数据泄露风险100% "
|
||||
#~ "私密,100% 安全。"
|
||||
|
||||
#~ msgid ""
|
||||
#~ "Currently, we have released multiple key"
|
||||
#~ " features, which are listed below to"
|
||||
#~ " demonstrate our current capabilities:"
|
||||
#~ msgstr "目前我们已经发布了多种关键的特性,这里一一列举展示一下当前发布的能力。"
|
||||
|
||||
#~ msgid "SQL language capabilities - SQL generation - SQL diagnosis"
|
||||
#~ msgstr "SQL语言能力 - SQL生成 - SQL诊断"
|
||||
|
||||
#~ msgid ""
|
||||
#~ "Private domain Q&A and data processing"
|
||||
#~ " - Database knowledge Q&A - Data "
|
||||
#~ "processing"
|
||||
#~ msgstr "私有领域问答与数据处理 - 数据库知识问答 - 数据处理"
|
||||
|
||||
#~ msgid ""
|
||||
#~ "Plugins - Support custom plugin "
|
||||
#~ "execution tasks and natively support the"
|
||||
#~ " Auto-GPT plugin, such as:"
|
||||
#~ msgstr "插件模型 - 支持自定义插件执行任务,并原生支持Auto-GPT插件,例如:* SQL自动执行,获取查询结果 * 自动爬取学习知识"
|
||||
|
||||
#~ msgid ""
|
||||
#~ "Unified vector storage/indexing of knowledge"
|
||||
#~ " base - Support for unstructured data"
|
||||
#~ " such as PDF, Markdown, CSV, and "
|
||||
#~ "WebURL"
|
||||
#~ msgstr "知识库统一向量存储/索引 - 非结构化数据支持包括PDF、MarkDown、CSV、WebURL"
|
||||
|
||||
#~ msgid ""
|
||||
#~ "Multi LLMs Support - Supports multiple"
|
||||
#~ " large language models, currently "
|
||||
#~ "supporting Vicuna (7b, 13b), ChatGLM-6b"
|
||||
#~ " (int4, int8) - TODO: codegen2, "
|
||||
#~ "codet5p"
|
||||
#~ msgstr "多模型支持 - 支持多种大语言模型, 当前已支持Vicuna(7b,13b), ChatGLM-6b(int4, int8)"
|
||||
|
||||
#~ msgid ""
|
||||
#~ "Full documentation on all methods, "
|
||||
#~ "classes, installation methods, and integration"
|
||||
#~ " setups for DB-GPT."
|
||||
#~ msgstr "关于DB-GPT的所有方法、类、安装方法和集成设置的完整文档。"
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**DB-GPT** is an open-source "
|
||||
#~ "framework for large models in the "
|
||||
#~ "database field. Its purpose is to "
|
||||
#~ "build infrastructure for the domain of"
|
||||
#~ " large models, making it easier and"
|
||||
#~ " more convenient to develop applications"
|
||||
#~ " around databases."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid "By developing various technical capabilities such as"
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid "SMMF(Service-oriented Multi-model Management Framework)"
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid "Text2SQL Fine-tuning"
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid "RAG(Retrieval Augmented Generation) framework and optimization"
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid "Data-Driven Agents framework collaboration"
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "5. GBI(Generative Business intelligence) etc,"
|
||||
#~ " DB-GPT simplifies the construction "
|
||||
#~ "of large model applications based on "
|
||||
#~ "databases."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**1. Private Domain Q&A & Data "
|
||||
#~ "Processing** Supports custom construction of"
|
||||
#~ " knowledge bases through methods such "
|
||||
#~ "as built-in, multi-file format "
|
||||
#~ "uploads, and plugin-based web scraping."
|
||||
#~ " Enables unified vector storage and "
|
||||
#~ "retrieval of massive structured and "
|
||||
#~ "unstructured data."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**2.Multi-Data Source & GBI(Generative "
|
||||
#~ "Business intelligence)** Supports interaction "
|
||||
#~ "between natural language and various "
|
||||
#~ "data sources such as Excel, databases,"
|
||||
#~ " and data warehouses. Also supports "
|
||||
#~ "analysis reporting."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**3.SMMF(Service-oriented Multi-model "
|
||||
#~ "Management Framework)** Supports a wide "
|
||||
#~ "range of models, including dozens of "
|
||||
#~ "large language models such as open-"
|
||||
#~ "source models and API proxies. Examples"
|
||||
#~ " include LLaMA/LLaMA2, Baichuan, ChatGLM, "
|
||||
#~ "Wenxin, Tongyi, Zhipu, Xinghuo, etc."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**4.Automated Fine-tuning** A lightweight "
|
||||
#~ "framework for automated fine-tuning "
|
||||
#~ "built around large language models, "
|
||||
#~ "Text2SQL datasets, and methods like "
|
||||
#~ "LoRA/QLoRA/Pturning. Makes TextSQL fine-tuning"
|
||||
#~ " as convenient as a production line."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**5.Data-Driven Multi-Agents & Plugins**"
|
||||
#~ " Supports executing tasks through custom"
|
||||
#~ " plugins and natively supports the "
|
||||
#~ "Auto-GPT plugin model. Agents protocol "
|
||||
#~ "follows the Agent Protocol standard."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**6.Privacy and Security** Ensures data "
|
||||
#~ "privacy and security through techniques "
|
||||
#~ "such as privatizing large models and "
|
||||
#~ "proxy de-identification."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid "Coming soon..."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid "`Tutorials <.getting_started/tutorials.html>`_"
|
||||
#~ msgstr "`教程 <.getting_started/tutorials.html>`_"
|
||||
|
||||
#~ msgid ""
|
||||
#~ "DB-GPT is an open-source framework"
|
||||
#~ " for large models in the database "
|
||||
#~ "field. Its purpose is to build "
|
||||
#~ "infrastructure for the domain of large"
|
||||
#~ " models, making it easier and more"
|
||||
#~ " convenient to develop applications around"
|
||||
#~ " databases. By developing various technical"
|
||||
#~ " capabilities such as **1. SMMF(Service-"
|
||||
#~ "oriented Multi-model Management Framework)**"
|
||||
#~ " **2. Text2SQL Fine-tuning** **3. "
|
||||
#~ "RAG(Retrieval Augmented Generation) framework "
|
||||
#~ "and optimization** **4. Data-Driven "
|
||||
#~ "Agents framework collaboration** **5. "
|
||||
#~ "GBI(Generative Business intelligence)** etc, "
|
||||
#~ "DB-GPT simplifies the construction of "
|
||||
#~ "large model applications based on "
|
||||
#~ "databases."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**1. Private Domain Q&A & Data "
|
||||
#~ "Processing** ::Supports custom construction of"
|
||||
#~ " knowledge bases through methods such "
|
||||
#~ "as built-in, multi-file format "
|
||||
#~ "uploads, and plugin-based web scraping."
|
||||
#~ " Enables unified vector storage and "
|
||||
#~ "retrieval of massive structured and "
|
||||
#~ "unstructured data."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**2.Multi-Data Source & GBI(Generative "
|
||||
#~ "Business intelligence)** ::Supports interaction "
|
||||
#~ "between natural language and various "
|
||||
#~ "data sources such as Excel, databases,"
|
||||
#~ " and data warehouses. Also supports "
|
||||
#~ "analysis reporting."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**3.SMMF(Service-oriented Multi-model "
|
||||
#~ "Management Framework)** ::Supports a wide "
|
||||
#~ "range of models, including dozens of "
|
||||
#~ "large language models such as open-"
|
||||
#~ "source models and API proxies. Examples"
|
||||
#~ " include LLaMA/LLaMA2, Baichuan, ChatGLM, "
|
||||
#~ "Wenxin, Tongyi, Zhipu, Xinghuo, etc."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**4.Automated Fine-tuning** ::A lightweight"
|
||||
#~ " framework for automated fine-tuning "
|
||||
#~ "built around large language models, "
|
||||
#~ "Text2SQL datasets, and methods like "
|
||||
#~ "LoRA/QLoRA/Pturning. Makes TextSQL fine-tuning"
|
||||
#~ " as convenient as a production line."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**5.Data-Driven Multi-Agents & Plugins**"
|
||||
#~ " ::Supports executing tasks through custom"
|
||||
#~ " plugins and natively supports the "
|
||||
#~ "Auto-GPT plugin model. Agents protocol "
|
||||
#~ "follows the Agent Protocol standard."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "**6.Privacy and Security** ::Ensures data "
|
||||
#~ "privacy and security through techniques "
|
||||
#~ "such as privatizing large models and "
|
||||
#~ "proxy de-identification."
|
||||
#~ msgstr ""
|
||||
|
||||
#~ msgid ""
|
||||
#~ "How to get started using DB-GPT"
|
||||
#~ " to interact with your data and "
|
||||
#~ "environment."
|
||||
#~ msgstr "开始使用DB-GPT与您的数据环境进行交互。"
|
||||
|
||||
#~ msgid ""
|
||||
#~ "It's very important for DB-GPT, "
|
||||
#~ "DB-GPT also provide standard, extendable"
|
||||
#~ " interfaces."
|
||||
#~ msgstr "DB-GPT还提供了标准的、可扩展的接口。"
|
||||
|
||||
|
@ -1 +0,0 @@
|
||||
# Reference
|
@ -4,7 +4,7 @@
|
||||
import os
|
||||
from typing import List
|
||||
import logging
|
||||
|
||||
import importlib.metadata as metadata
|
||||
from pilot.model.proxy.llms.proxy_model import ProxyModel
|
||||
from pilot.model.parameter import ProxyModelParameters
|
||||
from pilot.scene.base_message import ModelMessage, ModelMessageRoleType
|
||||
@ -57,14 +57,48 @@ def _initialize_openai(params: ProxyModelParameters):
|
||||
return openai_params
|
||||
|
||||
|
||||
def _initialize_openai_v1(params: ProxyModelParameters):
|
||||
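# Resolve connection settings for the openai>=1.0 SDK: the API type, base URL, key and
# API version are taken from the proxy model parameters first, falling back to the
# OPENAI_* / AZURE_OPENAI_* environment variables, and the legacy proxy_server_url is
# adapted when no base URL is configured. Returns (openai_params, api_type, api_version).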
try:
|
||||
from openai import OpenAI
|
||||
except ImportError as exc:
|
||||
raise ValueError(
|
||||
"Could not import python package: openai. "
|
||||
"Please install it with `pip install openai`."
|
||||
)
|
||||
|
||||
api_type = params.proxy_api_type or os.getenv("OPENAI_API_TYPE", "open_ai")
|
||||
|
||||
base_url = params.proxy_api_base or os.getenv(
|
||||
"OPENAI_API_BASE",
|
||||
os.getenv("AZURE_OPENAI_ENDPOINT") if api_type == "azure" else None,
|
||||
)
|
||||
api_key = params.proxy_api_key or os.getenv(
|
||||
"OPENAI_API_KEY",
|
||||
os.getenv("AZURE_OPENAI_KEY") if api_type == "azure" else None,
|
||||
)
|
||||
api_version = params.proxy_api_version or os.getenv("OPENAI_API_VERSION")
|
||||
|
||||
if not base_url and params.proxy_server_url:
|
||||
# Adapt previous proxy_server_url configuration
|
||||
base_url = params.proxy_server_url.split("/chat/completions")[0]
|
||||
|
||||
if params.http_proxy:
|
||||
openai.proxies = params.http_proxy
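# NOTE: openai>=1.0 does not read a module-level ``proxies`` attribute; the proxy value
# is also carried in openai_params below and would need to be applied on the client
# (for example via an httpx client) to actually take effect.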
|
||||
openai_params = {
|
||||
"api_key": api_key,
|
||||
"base_url": base_url,
|
||||
"proxies": params.http_proxy,
|
||||
}
|
||||
|
||||
return openai_params, api_type, api_version
|
||||
|
||||
|
||||
def _build_request(model: ProxyModel, params):
|
||||
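# Build the (history, payloads) pair for a chat completion request: the incoming
# ModelMessage list from params is converted into the request history, and the target
# model is addressed via ``engine`` (an Azure deployment name) or ``model`` depending
# on the configured API type and the installed openai version.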
history = []
|
||||
|
||||
model_params = model.get_params()
|
||||
logger.info(f"Model: {model}, model_params: {model_params}")
|
||||
|
||||
openai_params = _initialize_openai(model_params)
|
||||
|
||||
messages: List[ModelMessage] = params["messages"]
|
||||
# Add history conversation
|
||||
for message in messages:
|
||||
@ -95,13 +129,19 @@ def _build_request(model: ProxyModel, params):
|
||||
}
|
||||
proxyllm_backend = model_params.proxyllm_backend
|
||||
|
||||
if openai_params["api_type"] == "azure":
|
||||
# engine = "deployment_name".
|
||||
proxyllm_backend = proxyllm_backend or "gpt-35-turbo"
|
||||
payloads["engine"] = proxyllm_backend
|
||||
else:
|
||||
if metadata.version("openai") >= "1.0.0":
|
||||
openai_params, api_type, api_version = _initialize_openai_v1(model_params)
|
||||
proxyllm_backend = proxyllm_backend or "gpt-3.5-turbo"
|
||||
payloads["model"] = proxyllm_backend
|
||||
else:
|
||||
openai_params = _initialize_openai(model_params)
|
||||
if openai_params["api_type"] == "azure":
|
||||
# engine = "deployment_name".
|
||||
proxyllm_backend = proxyllm_backend or "gpt-35-turbo"
|
||||
payloads["engine"] = proxyllm_backend
|
||||
else:
|
||||
proxyllm_backend = proxyllm_backend or "gpt-3.5-turbo"
|
||||
payloads["model"] = proxyllm_backend
|
||||
|
||||
logger.info(
|
||||
f"Send request to real model {proxyllm_backend}, openai_params: {openai_params}"
|
||||
@ -112,32 +152,87 @@ def _build_request(model: ProxyModel, params):
|
||||
def chatgpt_generate_stream(
|
||||
model: ProxyModel, tokenizer, params, device, context_len=2048
|
||||
):
|
||||
import openai
|
||||
if metadata.version("openai") >= "1.0.0":
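# NOTE: this compares version strings lexicographically, which can misorder releases
# (for example "10.0.0" < "2.0.0"); packaging.version.parse would give a proper
# numeric comparison.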
|
||||
model_params = model.get_params()
|
||||
openai_params, api_type, api_version = _initialize_openai_v1(model_params)
|
||||
history, payloads = _build_request(model, params)
|
||||
if api_type == "azure":
|
||||
from openai import AzureOpenAI
|
||||
|
||||
history, payloads = _build_request(model, params)
|
||||
client = AzureOpenAI(
|
||||
api_key=openai_params["api_key"],
|
||||
api_version=api_version,
|
||||
azure_endpoint=openai_params[
|
||||
"base_url"
|
||||
], # Your Azure OpenAI resource's endpoint value.
|
||||
)
|
||||
else:
|
||||
from openai import OpenAI
|
||||
|
||||
res = openai.ChatCompletion.create(messages=history, **payloads)
|
||||
client = OpenAI(**openai_params)
|
||||
res = client.chat.completions.create(messages=history, **payloads)
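# Consume the response as a stream of chunks (this assumes "stream": True is set in
# the payloads built above): each chunk's delta content is appended and the running
# text is yielded so callers see the reply grow incrementally.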
|
||||
text = ""
|
||||
for r in res:
|
||||
if r.choices[0].delta.content is not None:
|
||||
content = r.choices[0].delta.content
|
||||
text += content
|
||||
yield text
|
||||
|
||||
text = ""
|
||||
for r in res:
|
||||
if r["choices"][0]["delta"].get("content") is not None:
|
||||
content = r["choices"][0]["delta"]["content"]
|
||||
text += content
|
||||
yield text
|
||||
else:
|
||||
import openai
|
||||
|
||||
history, payloads = _build_request(model, params)
|
||||
|
||||
res = openai.ChatCompletion.create(messages=history, **payloads)
|
||||
|
||||
text = ""
|
||||
logger.debug(f"Response from openai: {res}")
|
||||
for r in res:
|
||||
if r["choices"][0]["delta"].get("content") is not None:
|
||||
content = r["choices"][0]["delta"]["content"]
|
||||
text += content
|
||||
yield text
|
||||
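# Illustrative only: a minimal sketch of how the streaming generator might be consumed,
# assuming a configured ProxyModel instance and a params dict whose "messages" entry is
# a list of ModelMessage objects:
#
#     for partial_text in chatgpt_generate_stream(model, tokenizer=None,
#                                                 params=params, device="cpu"):
#         print(partial_text, end="\r")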
|
||||
|
||||
async def async_chatgpt_generate_stream(
|
||||
model: ProxyModel, tokenizer, params, device, context_len=2048
|
||||
):
|
||||
import openai
|
||||
if metadata.version("openai") >= "1.0.0":
|
||||
model_params = model.get_params()
|
||||
openai_params, api_type, api_version = _initialize_openai_v1(model_params)
|
||||
history, payloads = _build_request(model, params)
|
||||
if api_type == "azure":
|
||||
from openai import AsyncAzureOpenAI
|
||||
|
||||
history, payloads = _build_request(model, params)
|
||||
client = AsyncAzureOpenAI(
|
||||
api_key=openai_params["api_key"],
|
||||
api_version=api_version,
|
||||
azure_endpoint=openai_params[
|
||||
"base_url"
|
||||
], # Your Azure OpenAI resource's endpoint value.
|
||||
)
|
||||
else:
|
||||
from openai import AsyncOpenAI
|
||||
|
||||
res = await openai.ChatCompletion.acreate(messages=history, **payloads)
|
||||
client = AsyncOpenAI(**openai_params)
|
||||
|
||||
text = ""
|
||||
async for r in res:
|
||||
if r["choices"][0]["delta"].get("content") is not None:
|
||||
content = r["choices"][0]["delta"]["content"]
|
||||
text += content
|
||||
yield text
|
||||
res = await client.chat.completions.create(messages=history, **payloads)
|
||||
text = ""
|
||||
async for r in res:
|
||||
if r.choices[0].delta.content is not None:
|
||||
content = r.choices[0].delta.content
|
||||
text += content
|
||||
yield text
|
||||
else:
|
||||
import openai
|
||||
|
||||
history, payloads = _build_request(model, params)
|
||||
|
||||
res = await openai.ChatCompletion.acreate(messages=history, **payloads)
|
||||
|
||||
text = ""
|
||||
async for r in res:
|
||||
if r["choices"][0]["delta"].get("content") is not None:
|
||||
content = r["choices"][0]["delta"]["content"]
|
||||
text += content
|
||||
yield text
|
||||
|