mirror of https://github.com/csunny/DB-GPT.git
synced 2025-08-08 11:47:44 +00:00

doc: readme and tutorial for DB-GPT

This commit is contained in:
parent f8f5434c68
commit 06b8b3f5fd

README.md (31 changed lines)
@@ -21,6 +21,7 @@ As large models are released and iterated upon, they are becoming increasingly i

DB-GPT is an experimental open-source project that uses localized GPT large models to interact with your data and environment. With this solution, you can be assured that there is no risk of data leakage, and your data is 100% private and secure.

## News

- [2023/06/30]🔥 DB-GPT product. [documents](https://db-gpt.readthedocs.io/en/latest/modules/llms.html)
- [2023/06/25]🔥 Support for the chatglm2-6b model. [documents](https://db-gpt.readthedocs.io/en/latest/modules/llms.html)
- [2023/06/14] Support for the gpt4all model, which can run on M1/M2 chips or CPU-only machines. [documents](https://db-gpt.readthedocs.io/en/latest/modules/llms.html)
- [2023/06/01]🔥 On top of the Vicuna-13B base model, task chains are implemented through plugins, for example creating a database from a single sentence. [demo](./assets/auto_plugin.gif)
@@ -48,9 +49,6 @@ https://github.com/csunny/DB-GPT/assets/17919400/654b5a49-5ea4-4c02-b5b2-72d089d

<img src="./assets/auto_sql_en.gif" width="800px" />
</p>

<p align="left">
<img src="./assets/knownledge_qa_en.jpg" width="800px" />
</p>

## Features
@@ -60,8 +58,9 @@ Currently, we have released multiple key features, which are listed below to dem

- SQL generation
- SQL diagnosis
- Private domain Q&A and data processing
- Knowledge management (we currently support many document formats: txt, pdf, md, html, doc, ppt, and url)
  - Database knowledge Q&A
  - Data processing
  - Knowledge embedding
- Plugins
  - Support custom plugin execution tasks and natively support the Auto-GPT plugin, such as:
    - Automatic execution of SQL and retrieval of query results
@@ -69,13 +68,15 @@ Currently, we have released multiple key features, which are listed below to dem

- Unified vector storage/indexing of knowledge base
  - Support for unstructured data such as PDF, TXT, Markdown, CSV, DOC, PPT, and WebURL

-- Milti LLMs Support
+- Multi LLMs Support
  - Supports multiple large language models; currently supporting Vicuna (7b, 13b), ChatGLM-6b (int4, int8), Guanaco (7b, 13b, 33b), Gorilla (7b, 13b)
  - TODO: codegen2, codet5p
## Introduction

-DB-GPT creates a vast model operating system using [FastChat](https://github.com/lm-sys/FastChat) and offers a large language model powered by [Vicuna](https://huggingface.co/Tribbiani/vicuna-7b). In addition, we provide private domain knowledge base question-answering capability through LangChain. Furthermore, we also provide support for additional plugins, and our design natively supports the Auto-GPT plugin.
+DB-GPT creates a vast model operating system using [FastChat](https://github.com/lm-sys/FastChat) and offers a large language model powered by [Vicuna](https://huggingface.co/Tribbiani/vicuna-7b). In addition, we provide private domain knowledge base question-answering capability. Furthermore, we also provide support for additional plugins, and our design natively supports the Auto-GPT plugin. Our vision is to make it easier and more convenient to build applications around databases and LLMs.

The architecture of the entire DB-GPT is shown in the following figure:
@@ -102,24 +103,6 @@ The core capabilities mainly consist of the following parts:

- [Multi LLMs Usage](https://db-gpt.readthedocs.io/en/latest/modules/llms.html)
- [Create your own knowledge repository](https://db-gpt.readthedocs.io/en/latest/modules/knowledge.html)

We currently support many document formats: txt, pdf, md, html, doc, ppt, and url.

Before execution, run:

```
python -m spacy download zh_core_web_sm
```

2. Set your vector store type in the .env configuration, e.g. VECTOR_STORE_TYPE=Chroma; we currently support Chroma and Milvus (version > 2.1).
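For illustration, the relevant .env entries might look like the sketch below. VECTOR_STORE_TYPE is the setting named above; the Milvus host variables (MILVUS_URL, MILVUS_PORT) are mentioned in the README.zh.md hunk later in this commit, so their exact names in your .env.template should be double-checked:

```
# Vector store backend: Chroma (default) or Milvus (requires Milvus > 2.1)
VECTOR_STORE_TYPE=Chroma

# When switching to Milvus, also point at your Milvus instance:
# VECTOR_STORE_TYPE=Milvus
# MILVUS_URL=127.0.0.1
# MILVUS_PORT=19530
```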
3. Run the knowledge repository script in the tools directory.

```bash
$ python tools/knowledge_init.py

--vector_name : your vector store name, default value: default
--append      : append mode; True: append, False: do not append, default value: False
```
If nltk-related errors occur while using the knowledge base, you need to install the nltk toolkit. For more details, please refer to: [nltk documents](https://www.nltk.org/data.html)

Run the Python interpreter and type the commands:
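The commands themselves fall outside this hunk; judging from the matching block in README.zh.md later in this commit, they are the standard nltk data download:

```
>>> import nltk
>>> nltk.download()
```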
README.zh.md (37 changed lines)
@@ -15,7 +15,7 @@

## DB-GPT 是什么?

-随着大模型的发布迭代,大模型变得越来越智能,在使用大模型的过程当中,遇到极大的数据安全与隐私挑战。在利用大模型能力的过程中我们的私密数据跟环境需要掌握自己的手里,完全可控,避免任何的数据隐私泄露以及安全风险。基于此,我们发起了DB-GPT项目,为所有以数据库为基础的场景,构建一套完整的私有大模型解决方案。 此方案因为支持本地部署,所以不仅仅可以应用于独立私有环境,而且还可以根据业务模块独立部署隔离,让大模型的能力绝对私有、安全、可控。
+随着大模型的发布迭代,大模型变得越来越智能,在使用大模型的过程当中,遇到极大的数据安全与隐私挑战。在利用大模型能力的过程中我们的私密数据跟环境需要掌握自己的手里,完全可控,避免任何的数据隐私泄露以及安全风险。基于此,我们发起了DB-GPT项目,为所有以数据库为基础的场景,构建一套完整的私有大模型解决方案。 此方案因为支持本地部署,所以不仅仅可以应用于独立私有环境,而且还可以根据业务模块独立部署隔离,让大模型的能力绝对私有、安全、可控。我们的愿景是让围绕数据库构建大模型应用更简单,更方便。

DB-GPT 是一个开源的以数据库为基础的GPT实验项目,使用本地化的GPT大模型与您的数据和环境进行交互,无数据泄露风险,100% 私密
@@ -23,6 +23,7 @@ DB-GPT 是一个开源的以数据库为基础的GPT实验项目,使用本地

## 最新发布

- [2023/06/30]🔥 DB-GPT产品。 [使用文档](https://db-gpt.readthedocs.io/projects/db-gpt-docs-zh-cn/zh_CN/latest/modules/llms.html)
- [2023/06/25]🔥 支持ChatGLM2-6B模型。 [使用文档](https://db-gpt.readthedocs.io/projects/db-gpt-docs-zh-cn/zh_CN/latest/modules/llms.html)
- [2023/06/14]🔥 支持gpt4all模型,可以在M1/M2 或者CPU机器上运行。 [使用文档](https://db-gpt.readthedocs.io/projects/db-gpt-docs-zh-cn/zh_CN/latest/modules/llms.html)
- [2023/06/01]🔥 在Vicuna-13B基础模型的基础上,通过插件实现任务链调用。例如单句创建数据库的实现。
@@ -39,6 +40,7 @@ DB-GPT 是一个开源的以数据库为基础的GPT实验项目,使用本地

- SQL生成
- SQL诊断
- 私域问答与数据处理
- 知识库管理(目前支持 txt, pdf, md, html, doc, ppt, and url)
  - 数据库知识问答
  - 数据处理
- 插件模型
@@ -108,37 +110,6 @@ DB-GPT基于 [FastChat](https://github.com/lm-sys/FastChat) 构建大模型运

### 打造属于你的知识库

- [参考手册](https://db-gpt.readthedocs.io/projects/db-gpt-docs-zh-cn/zh_CN/latest/modules/knowledge.html)

1.将个人知识文件或者文件夹放入pilot/datasets目录中

当前支持的文档格式: txt, pdf, md, html, doc, ppt, and url.

在操作之前先执行

```
python -m spacy download zh_core_web_sm
```

2.在.env文件指定你的向量数据库类型,VECTOR_STORE_TYPE(默认Chroma),目前支持Chroma,Milvus(需要设置MILVUS_URL和MILVUS_PORT)

**注意Milvus版本需要>2.1**

3.在tools目录执行知识入库脚本, 如果是选择默认知识库,不需要指定 --vector_name, 默认default

```
python tools/knowledge_init.py
```

如果选择新增知识库,在界面上新增知识库输入你的知识库名,

```
python tools/knowledge_init.py --vector_name=yourname

--vector_name: your vector store name, default value: default
```

就可以根据你的知识库进行问答

注意,这里默认向量模型是text2vec-large-chinese(模型比较大,如果个人电脑配置不够建议采用text2vec-base-chinese),因此确保需要将模型download下来放到models目录中。

如果在使用知识库时遇到与nltk相关的错误,您需要安装nltk工具包。更多详情,请参见:[nltk文档](https://www.nltk.org/data.html)

Run the Python interpreter and type the commands:
@@ -147,7 +118,7 @@ Run the Python interpreter and type the commands:

>>> nltk.download()
```

-我们提供了Gradio的用户界面,可以通过我们的用户界面使用DB-GPT, 同时关于我们项目相关的一些代码跟原理介绍,我们也准备了以下几篇参考文章。
+我们提供了全新的用户界面,可以通过我们的用户界面使用DB-GPT, 同时关于我们项目相关的一些代码跟原理介绍,我们也准备了以下几篇参考文章。

1. [大模型实战系列(1) —— 强强联合Langchain-Vicuna应用实战](https://zhuanlan.zhihu.com/p/628750042)
2. [大模型实战系列(2) —— DB-GPT 阿里云部署指南](https://zhuanlan.zhihu.com/p/629467580)
3. [大模型实战系列(3) —— DB-GPT插件模型原理与使用](https://zhuanlan.zhihu.com/p/629623125)
@@ -30,10 +30,16 @@ python>=3.10

conda create -n dbgpt_env python=3.10
conda activate dbgpt_env
pip install -r requirements.txt
```

Before using DB-GPT knowledge management, run:
```
python -m spacy download zh_core_web_sm
```

Once the environment is installed, we have to create a new "models" folder in the DB-GPT project, where we can put all the models downloaded from huggingface.

Make sure you have installed git-lfs:
```
git clone https://huggingface.co/Tribbiani/vicuna-13b
git clone https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
@@ -54,20 +60,30 @@ You can refer to this document to obtain the Vicuna weights: [Vicuna](https://gi

If you have difficulty with this step, you can also directly use the model from [this link](https://huggingface.co/Tribbiani/vicuna-7b) as a replacement.

-1. Run server
+1. Prepare the server SQL script

```bash
-$ python pilot/server/llmserver.py
+mysql> CREATE DATABASE knowledge_management;
+mysql> use knowledge_management;
+mysql> source ./assets/schema/knowledge_management.sql
```

Set your vector store type in the .env configuration, e.g. VECTOR_STORE_TYPE=Chroma; we currently support Chroma and Milvus (version > 2.1).

-Starting `llmserver.py` with the following command will result in a relatively stable Python service with multiple processes.
-```bash
-$ gunicorn llmserver:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 &
-```

-Run gradio webui
+2. Run the db-gpt server

```bash
-$ python pilot/server/webserver.py
+$ python pilot/server/dbgpt_server.py
```

Notice: the webserver needs to connect to the llmserver, so you need to change the .env file: set MODEL_SERVER = "http://127.0.0.1:8000" to your address. It's very important.
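As a sketch, the line to edit in .env (created from .env.template) would look like this; the localhost address below is just the default placeholder, not your real server address:

```
# Address of the running llmserver; the webserver connects here
MODEL_SERVER=http://127.0.0.1:8000
```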
3. Run the new webui

```bash
$ cd datacenter
$ npm i
$ npm run dev
```

Notice: make sure node.js is the latest version. To learn more about the db-gpt webui, read https://github.com/csunny/DB-GPT/tree/new-page-framework/datacenter

Open http://localhost:3000 with your browser to see the result.
@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.1.0\n"
"Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2023-06-14 21:47+0800\n"
+"POT-Creation-Date: 2023-06-30 17:16+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -19,29 +19,29 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"

-#: ../../getting_started/getting_started.md:1 d1c1cb0cdf374e60924001460f369485
+#: ../../getting_started/getting_started.md:1 2e1519d628044c07b384e8bbe441863a
msgid "Quickstart Guide"
msgstr "使用指南"

-#: ../../getting_started/getting_started.md:3 5c76cdb6530644ed872329ecc1bd51ec
+#: ../../getting_started/getting_started.md:3 00e8dc6e242d4f3b8b2fbc5e06f1f14e
msgid ""
"This tutorial gives you a quick walkthrough about use DB-GPT with you "
"environment and data."
msgstr "本教程为您提供了关于如何使用DB-GPT的使用指南。"

-#: ../../getting_started/getting_started.md:5 8c442e4870e549359920ec83d4a77083
+#: ../../getting_started/getting_started.md:5 4b4473a5fbd64cef996d82fa36abe136
msgid "Installation"
msgstr "安装"

-#: ../../getting_started/getting_started.md:7 d302009cb3f64959872d278c4aad7cfa
+#: ../../getting_started/getting_started.md:7 5ab3187dd2134afe958d83a431c98f43
msgid "To get started, install DB-GPT with the following steps."
msgstr "请按照以下步骤安装DB-GPT"

-#: ../../getting_started/getting_started.md:9 7deb38572ec74f5392ba09749a2b350b
+#: ../../getting_started/getting_started.md:9 7286e3a0da00450c9a6e9f29dbd27130
msgid "1. Hardware Requirements"
msgstr "1. 硬件要求"

-#: ../../getting_started/getting_started.md:10 7ee61b468637478cad173fa4685ef952
+#: ../../getting_started/getting_started.md:10 3f3d279ca8a54c8c8ed16af3e0ffb281
msgid ""
"As our project has the ability to achieve ChatGPT performance of over "
"85%, there are certain hardware requirements. However, overall, the "
@@ -49,62 +49,62 @@ msgid ""
"specific hardware requirements for deployment are as follows:"
msgstr "由于我们的项目有能力达到85%以上的ChatGPT性能,所以对硬件有一定的要求。但总体来说,我们在消费级的显卡上即可完成项目的部署使用,具体部署的硬件说明如下:"

-#: ../../getting_started/getting_started.md e9f0871662384ab9a24711856a27fdfb
+#: ../../getting_started/getting_started.md 6e1e882511254687bd46fe45447794d1
msgid "GPU"
msgstr "GPU"

-#: ../../getting_started/getting_started.md 0dca27585b9c4357bd4f7b52ff664673
+#: ../../getting_started/getting_started.md f0ee9919e1254bcdbe6e489a5fbf450f
msgid "VRAM Size"
msgstr "显存大小"

-#: ../../getting_started/getting_started.md fe84cb97a226490eb940dfcf6e581272
+#: ../../getting_started/getting_started.md eed88601ef0b49b58d95b89928a3810e
msgid "Performance"
msgstr "显存大小"

-#: ../../getting_started/getting_started.md bfdc630854674c5db6be46114a67542d
+#: ../../getting_started/getting_started.md 4f717383ef2d4e2da9ee2d1c148aa6c5
msgid "RTX 4090"
msgstr "RTX 4090"

-#: ../../getting_started/getting_started.md 5e38a7184e024b09be3a858084b60344
-#: 629db58b707c48a2ae93d7396bcd0d67
+#: ../../getting_started/getting_started.md d2d9bd1b57694404b39cdef49fd5b570
+#: d7d914b8d5e34ac192b94d48f0ee1781
msgid "24 GB"
msgstr "24 GB"

-#: ../../getting_started/getting_started.md 625714d5b7cb4550a81305b5c2410980
+#: ../../getting_started/getting_started.md cb86730ab05e4172941c3e771384c4ba
msgid "Smooth conversation inference"
msgstr "可以流畅的进行对话推理,无卡顿"

-#: ../../getting_started/getting_started.md 8334aa5646b84a4ba6c0df7a55e52f6e
+#: ../../getting_started/getting_started.md 3e32d5c38bf6499cbfedb80944549114
msgid "RTX 3090"
msgstr "RTX 3090"

-#: ../../getting_started/getting_started.md 84771e5190084d6fab19fa8f3b5e2a30
+#: ../../getting_started/getting_started.md 1d3caa2a06844997ad55d20863559e9f
msgid "Smooth conversation inference, better than V100"
msgstr "可以流畅进行对话推理,有卡顿感,但好于V100"

-#: ../../getting_started/getting_started.md 0a539471bf6648e5827d9e10549b81e3
+#: ../../getting_started/getting_started.md b80ec359bd004d5f801ec09ca3b2d0ff
msgid "V100"
msgstr "V100"

-#: ../../getting_started/getting_started.md 8bed4645111a4001b5967678a54c6037
+#: ../../getting_started/getting_started.md aed55a6b8c8d49d9b9c02bfd5c10b062
msgid "16 GB"
msgstr "16 GB"

-#: ../../getting_started/getting_started.md f3ec49d1591d4cdc9f967c9df5bb8245
+#: ../../getting_started/getting_started.md dcd6daab75fe4bf8b8dd19ea785f0bd6
msgid "Conversation inference possible, noticeable stutter"
msgstr "可以进行对话推理,有明显卡顿"

-#: ../../getting_started/getting_started.md:18 6006a3d8744746dbab615b438eb6234b
+#: ../../getting_started/getting_started.md:18 e39a4b763ed74cea88d54d163ea72ce0
msgid "2. Install"
msgstr "2. 安装"

-#: ../../getting_started/getting_started.md:20 eea3037d218843b78e56412490ae6a62
+#: ../../getting_started/getting_started.md:20 9beba274b78a46c6aafb30173372b334
msgid ""
"This project relies on a local MySQL database service, which you need to "
"install locally. We recommend using Docker for installation."
msgstr "本项目依赖一个本地的 MySQL 数据库服务,你需要本地安装,推荐直接使用 Docker 安装。"

-#: ../../getting_started/getting_started.md:25 4a220bf247c549eaa0d059f29e1c3a7d
+#: ../../getting_started/getting_started.md:25 3bce689bb49043eca5b9aa3c5525eaac
msgid ""
"We use [Chroma embedding database](https://github.com/chroma-core/chroma)"
" as the default for our vector database, so there is no need for special "
@@ -117,7 +117,11 @@ msgstr ""
"向量数据库我们默认使用的是Chroma内存数据库,所以无需特殊安装,如果有需要连接其他的同学,可以按照我们的教程进行安装配置。整个DB-"
"GPT的安装过程,我们使用的是miniconda3的虚拟环境。创建虚拟环境,并安装python依赖包"

-#: ../../getting_started/getting_started.md:35 b51c85b6ec0f4c45afc648d98424a79f
+#: ../../getting_started/getting_started.md:34 61ad49740d0b49afa254cb2d10a0d2ae
msgid "Before use DB-GPT Knowledge Management"
msgstr "使用知识库管理功能之前"

#: ../../getting_started/getting_started.md:40 656041e456f248a0a472be06357d7f89
msgid ""
"Once the environment is installed, we have to create a new folder "
"\"models\" in the DB-GPT project, and then we can put all the models "
@@ -126,24 +130,28 @@ msgstr ""
"环境安装完成后,我们必须在DB-"
"GPT项目中创建一个新文件夹\"models\",然后我们可以把从huggingface下载的所有模型放到这个目录下。"

-#: ../../getting_started/getting_started.md:43 617f2b53e33e4e7a96b3aef879a1ebe7
+#: ../../getting_started/getting_started.md:42 4dfb7d63fdf544f2bf9dd8663efa8d31
msgid "make sure you have install git-lfs"
msgstr "确保你已经安装了git-lfs"

#: ../../getting_started/getting_started.md:50 a52c137b8ef54b7ead41a2d8ff81d457
msgid ""
"The model files are large and will take a long time to download. During "
"the download, let's configure the .env file, which needs to be copied and"
" created from the .env.template"
msgstr "模型文件很大,需要很长时间才能下载。在下载过程中,让我们配置.env文件,它需要从。env.template中复制和创建。"

-#: ../../getting_started/getting_started.md:49 0f403de4e9574c3a9058495df8c21961
+#: ../../getting_started/getting_started.md:56 db87d872a47047dc8cd1de390d068ed4
msgid ""
"You can configure basic parameters in the .env file, for example setting "
"LLM_MODEL to the model to be used"
msgstr "您可以在.env文件中配置基本参数,例如将LLM_MODEL设置为要使用的模型。"

-#: ../../getting_started/getting_started.md:51 e3dce051a2e64478872433890b06cb5d
+#: ../../getting_started/getting_started.md:58 c8865a327b4b44daa55813479c743e3c
msgid "3. Run"
msgstr "3. 运行"

-#: ../../getting_started/getting_started.md:52 b7ec5fab25b249b5bc811d08049307c3
+#: ../../getting_started/getting_started.md:59 e81dabe730134753a4daa05a7bdd44af
msgid ""
"You can refer to this document to obtain the Vicuna weights: "
"[Vicuna](https://github.com/lm-sys/FastChat/blob/main/README.md#model-"
@@ -152,7 +160,7 @@ msgstr ""
"关于基础模型, 可以根据[Vicuna](https://github.com/lm-"
"sys/FastChat/blob/main/README.md#model-weights) 合成教程进行合成。"

-#: ../../getting_started/getting_started.md:54 6cab130775394122bfb040ea9797c694
+#: ../../getting_started/getting_started.md:61 714cbc9485ea47d0a06aa1a31b9af3e3
msgid ""
"If you have difficulty with this step, you can also directly use the "
"model from [this link](https://huggingface.co/Tribbiani/vicuna-7b) as a "
@@ -161,33 +169,52 @@ msgstr ""
"如果此步有困难的同学,也可以直接使用[此链接](https://huggingface.co/Tribbiani/vicuna-"
"7b)上的模型进行替代。"

-#: ../../getting_started/getting_started.md:56 381fc71965dd44adac2141677e0dd085
-msgid "Run server"
+#: ../../getting_started/getting_started.md:63 2b8f6985fe1a414e95d334d3ee9d0878
+msgid "prepare server sql script"
msgstr "准备db-gpt server sql脚本"

#: ../../getting_started/getting_started.md:69 7cb9beb0e15a46759dbcb4606dcb6867
msgid ""
"set .env configuration set your vector store type, "
"eg:VECTOR_STORE_TYPE=Chroma, now we support Chroma and Milvus(version > "
"2.1)"
msgstr "在.env文件设置向量数据库环境变量,eg:VECTOR_STORE_TYPE=Chroma, 目前我们支持了 Chroma and Milvus(version >2.1) "

#: ../../getting_started/getting_started.md:72 cdb7ef30e8c9441293e8b3fd95d621ed
#, fuzzy
msgid "Run db-gpt server"
msgstr "运行模型服务"

-#: ../../getting_started/getting_started.md:61 47001c4d9f9449fab0dadefd02b76f6d
-msgid ""
-"Starting `llmserver.py` with the following command will result in a "
-"relatively stable Python service with multiple processes."
-msgstr ""

-#: ../../getting_started/getting_started.md:66 ae3b8aa1e8694715a721717ab4bc182e
-msgid "Run gradio webui"
+#: ../../getting_started/getting_started.md:78 10fdefed00d34863819fffca48ca5bea
+#, fuzzy
+msgid "Run new webui"
msgstr "运行模型服务"

-#: ../../getting_started/getting_started.md:72 0ad4bbedf5ed498686dafb8b148bf63c
+#: ../../getting_started/getting_started.md:86 60b48f6f0a7f43efa30c636a127860b6
msgid ""
-"Notice: the webserver need to connect llmserver, so you need change the"
-" .env file. change the MODEL_SERVER = \"http://127.0.0.1:8000\" to your "
-"address. It's very important."
-msgstr ""
-"注意: 在启动Webserver之前, 需要修改.env 文件中的MODEL_SERVER = "
-"\"http://127.0.0.1:8000\", 将地址设置为你的服务器地址。"
+"Notice: make sure node.js is the latest version, learn more about db-gpt "
+"webui, read https://github.com/csunny/DB-GPT/tree/new-page-"
+"framework/datacenter"
+msgstr "确保node.js是最新的版本,想知道更多请访问https://github.com/csunny/DB-GPT/tree/new-page-framework/datacenter,"

#: ../../getting_started/getting_started.md:89 e7bb3001d46b458aa0c522c4a7a8d45b
msgid "Open http://localhost:3000 with your browser to see the result."
msgstr "打开浏览器访问http://localhost:3000"

#~ msgid ""
#~ "Starting `llmserver.py` with the following "
#~ "command will result in a relatively "
#~ "stable Python service with multiple "
#~ "processes."
#~ msgstr "使用以下命令启动llmserver.py将会得到一个相对稳定的Python服务,并且具有多个进程。"

#~ msgid ""
#~ "Notice: the webserver need to connect"
#~ " llmserver, so you need change the"
#~ " .env file. change the MODEL_SERVER ="
#~ " \"http://127.0.0.1:8000\" to your address. "
#~ "It's very important."
#~ msgstr ""
#~ "注意: 在启动Webserver之前, 需要修改.env 文件中的MODEL_SERVER "
#~ "= \"http://127.0.0.1:8000\", 将地址设置为你的服务器地址。"
@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.1.0\n"
"Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2023-06-19 19:10+0800\n"
+"POT-Creation-Date: 2023-06-30 17:16+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -19,25 +19,25 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"

-#: ../../getting_started/tutorials.md:1 23db0155c8ae4a2288cbf9137599c973
+#: ../../getting_started/tutorials.md:1 e494f27e68fd40efa2864a532087cfef
msgid "Tutorials"
msgstr "教程"

-#: ../../getting_started/tutorials.md:4 8f2b93792b1947b7a544623ec637dd48
+#: ../../getting_started/tutorials.md:4 8eecfbf3240b44fcb425034600316cea
msgid "This is a collection of DB-GPT tutorials on Medium."
msgstr "这是知乎上DB-GPT教程的集合。"

-#: ../../getting_started/tutorials.md:6 7216c2d145674002bd82b1134aae9377
+#: ../../getting_started/tutorials.md:6 a40601867a3d4ce886a197f2f337ec0f
msgid ""
"DB-GPT is divided into several functions, including chat with knowledge "
"base, execute SQL, chat with database, and execute plugins."
msgstr "DB-GPT包含以下功能,和知识库聊天,执行SQL,和数据库聊天以及执行插件。"

-#: ../../getting_started/tutorials.md:8 726f4394d6214c45979995ce521f8964
+#: ../../getting_started/tutorials.md:8 493e6f56a75d45ef8bb15d3049a24994
msgid "Introduction"
msgstr "介绍"

-#: ../../getting_started/tutorials.md:9 16c9deecc5b848a2a17eccb0f2cbdafd
+#: ../../getting_started/tutorials.md:9 4526a793cdb94b8f99f41c48cd5ee453
#, fuzzy
msgid "[What is DB-GPT](https://www.youtube.com/watch?v=QszhVJerc0I)"
msgstr ""
@@ -45,12 +45,12 @@ msgstr ""
"GPT](https://www.bilibili.com/video/BV1SM4y1a7Nj/?buvid=551b023900b290f9497610b2155a2668&is_story_h5=false&mid=%2BVyE%2Fwau5woPcUKieCWS0A%3D%3D&p=1&plat_id=116&share_from=ugc&share_medium=iphone&share_plat=ios&share_session_id=5D08B533-82A4-4D40-9615-7826065B4574&share_source=GENERIC&share_tag=s_i×tamp=1686307943&unique_k=bhO3lgQ&up_id=31375446)"
" by csunny (https://github.com/csunny/DB-GPT)"

-#: ../../getting_started/tutorials.md:11 461a585616ae49518a8e314d13bb886c
+#: ../../getting_started/tutorials.md:11 95313384e5da4f5db96ac990596b2e73
#, fuzzy
msgid "Knowledge"
msgstr "知识库"

-#: ../../getting_started/tutorials.md:13 4fa6c8d8ec5e43fcb4e79443c83a68ae
+#: ../../getting_started/tutorials.md:13 e7a141f4df8d4974b0797dd7723c4658
#, fuzzy
msgid ""
"[How to Create your own knowledge repository](https://db-"
@@ -59,54 +59,55 @@ msgstr ""
"[怎么创建自己的知识库](https://db-"
"gpt.readthedocs.io/en/latest/modules/knowledge.html)"

-#: ../../getting_started/tutorials.md:15 c03c45687f6b46f1a8da8085b45cee98
+#: ../../getting_started/tutorials.md:15 f7db5b05a2db44e6a98b7d0df0a6f4ee
#, fuzzy
msgid ""
msgstr "[新增知识库演示](../../assets/new_knownledge_en.gif)"

-#: ../../getting_started/tutorials.md:15 efacc5af342248f4935e401ed044ec9e
+#: ../../getting_started/tutorials.md:15 1a1647a7ca23423294823529301dd75f
#, fuzzy
msgid "Add new Knowledge demonstration"
msgstr "[新增知识库演示](../../assets/new_knownledge_en.gif)"

-#: ../../getting_started/tutorials.md:17 dcdb8d1e73f241649756a126e1ddc185
+#: ../../getting_started/tutorials.md:17 de26224a814e4c6798d3a342b0f0fe3a
msgid "SQL Generation"
msgstr "SQL生成"

-#: ../../getting_started/tutorials.md:18 517960225da64780afc858958ab34446
+#: ../../getting_started/tutorials.md:18 f8fe82c554424239beb522f94d285c52
#, fuzzy
msgid ""
msgstr "[sql生成演示](../../assets/demo_en.gif)"

-#: ../../getting_started/tutorials.md:18 1e1a38abf60241058ee7a50759f9c426
+#: ../../getting_started/tutorials.md:18 41e932b692074fccb8059cadb0ed320e
#, fuzzy
msgid "sql generation demonstration"
msgstr "[sql生成演示](../../assets/demo_en.gif)"

-#: ../../getting_started/tutorials.md:20 7bec46758a5e4581a0d1636fb32ac2b8
+#: ../../getting_started/tutorials.md:20 78bda916272f4cf99e9b26b4d9ba09ab
msgid "SQL Execute"
msgstr "SQL执行"

-#: ../../getting_started/tutorials.md:21 64cd323f38694ae4aa4cb49303041742
+#: ../../getting_started/tutorials.md:21 53cc83de34784c3c8d4d8204eacccbe9
#, fuzzy
msgid ""
msgstr "[sql execute 演示](../../assets/auto_sql_en.gif)"

-#: ../../getting_started/tutorials.md:21 2c6d6c7428c3487eb969f87ccbc00961
+#: ../../getting_started/tutorials.md:21 535c06f487ed4d15a6cdd17a0154d798
#, fuzzy
msgid "sql execute demonstration"
msgstr "SQL执行演示"

-#: ../../getting_started/tutorials.md:23 4ff018bac02f45fe8da5b649c501bf6b
+#: ../../getting_started/tutorials.md:23 0482e6155dc44843adc3a3aa77528f03
#, fuzzy
msgid "Plugins"
msgstr "DB插件"

-#: ../../getting_started/tutorials.md:24 0ea9c7c5fd5e4457a41602467758fd47
+#: ../../getting_started/tutorials.md:24 632617dd88fe4688b789fbb941686c0f
#, fuzzy
msgid ""
msgstr "[db plugins 演示](../../assets/dbgpt_bytebase_plugin.gif)"

-#: ../../getting_started/tutorials.md:24 98a384eb772e44c2954ccd7989c5905f
+#: ../../getting_started/tutorials.md:24 020ff499469145f0a34ac468fff91948
msgid "db plugins demonstration"
msgstr "DB插件演示"
@ -8,7 +8,7 @@ msgid ""
|
||||
msgstr ""
|
||||
"Project-Id-Version: DB-GPT 0.2.3\n"
|
||||
"Report-Msgid-Bugs-To: \n"
|
||||
"POT-Creation-Date: 2023-06-14 22:33+0800\n"
|
||||
"POT-Creation-Date: 2023-06-30 17:16+0800\n"
|
||||
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
|
||||
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
|
||||
"Language: zh_CN\n"
|
||||
@ -19,11 +19,11 @@ msgstr ""
|
||||
"Content-Transfer-Encoding: 8bit\n"
|
||||
"Generated-By: Babel 2.12.1\n"
|
||||
|
||||
#: ../../modules/llms.md:1 7737e31f5fcc4dc4be573a0bb73ca419
|
||||
#: ../../modules/llms.md:1 7fa87bcbc5de40b2902c41545027d8b8
|
||||
msgid "LLMs"
|
||||
msgstr "大语言模型"
|
||||
|
||||
#: ../../modules/llms.md:3 8a8422f18e5d4c7aa1c1abf3a89f5d27
|
||||
#: ../../modules/llms.md:3 48a7c98128114cf58c57c41575ae53a6
|
||||
#, python-format
|
||||
msgid ""
|
||||
"In the underlying large model integration, we have designed an open "
|
||||
@ -36,23 +36,23 @@ msgid ""
|
||||
"of use."
|
||||
msgstr "在底层大模型接入中,我们设计了开放的接口,支持对接多种大模型。同时对于接入模型的效果,我们有非常严格的把控与评审机制。对大模型能力上与ChatGPT对比,在准确率上需要满足85%以上的能力对齐。我们用更高的标准筛选模型,是期望在用户使用过程中,可以省去前面繁琐的测试评估环节。"
|
||||
|
||||
#: ../../modules/llms.md:5 c48ee3f4c51c49ae9dba40b1854dd483
|
||||
#: ../../modules/llms.md:5 8d162757e5934dca9395605a5809193a
|
||||
msgid "Multi LLMs Usage"
|
||||
msgstr "多模型使用"
|
||||
|
||||
#: ../../modules/llms.md:6 ad44cef629f64bf4a4d72431568149fe
|
||||
#: ../../modules/llms.md:6 b1a5b940bee64aaa8945c078f9788459
|
||||
msgid ""
|
||||
"To use multiple models, modify the LLM_MODEL parameter in the .env "
|
||||
"configuration file to switch between the models."
|
||||
msgstr "如果要使用不同的模型,请修改.env配置文件中的LLM MODEL参数以在模型之间切换。"
|
||||
|
||||
#: ../../modules/llms.md:8 b08325fd36af4ef582c8a46685986aaf
|
||||
#: ../../modules/llms.md:8 77c89d2a77d64b8aa6796c59aad07ed2
|
||||
msgid ""
|
||||
"Notice: you can create .env file from .env.template, just use command "
|
||||
"like this:"
|
||||
msgstr "注意:你可以从 .env.template 创建 .env 文件。只需使用如下命令:"
|
||||
|
||||
#: ../../modules/llms.md:14 75ec45409dc84fd9bf3dfa98835d4645
|
||||
#: ../../modules/llms.md:14 4fc870de8fc14861a97a5417a9e886f4
|
||||
msgid ""
|
||||
"now we support models vicuna-13b, vicuna-7b, chatglm-6b, flan-t5-base, "
|
||||
"guanaco-33b-merged, falcon-40b, gorilla-7b."
|
||||
@ -60,29 +60,36 @@ msgstr ""
|
||||
"现在我们支持的模型有vicuna-13b, vicuna-7b, chatglm-6b, flan-t5-base, guanaco-33b-"
|
||||
"merged, falcon-40b, gorilla-7b."
|
||||
|
||||
#: ../../modules/llms.md:16 a6e99cf291e049c99e2d6813d03b0427
|
||||
#: ../../modules/llms.md:16 76cd62716d864bc48c362a5f095dbe94
|
||||
msgid ""
|
||||
"if you want use other model, such as chatglm-6b, you just need update "
|
||||
".env config file."
|
||||
msgstr "如果你想使用其他模型,比如chatglm-6b, 仅仅需要修改.env 配置文件"
|
||||
|
||||
#: ../../modules/llms.md:21 1b9a9aa83dc7420fb5c6c17c982abb20
|
||||
#: ../../modules/llms.md:20 be990b58d30c469486c21e4a11739274
|
||||
msgid ""
|
||||
"or chatglm2-6b, which is the second-generation version of the open-"
|
||||
"source bilingual (Chinese-English) chat model ChatGLM-6B."
|
||||
msgstr ""
|
||||
|
||||
#: ../../modules/llms.md:27 52b37848bd8345aea6a5a331019a01cd
|
||||
msgid "Run Model with cpu."
|
||||
msgstr "用CPU运行模型"
|
||||
|
||||
#: ../../modules/llms.md:22 e6bdeeca06764ed583154ddc78f0f26e
#: ../../modules/llms.md:28 e3090ffbb05646e6af1b0eebedabc9e6
msgid ""
"we alse support smaller models, like gpt4all. you can use it with "
"cpu/mps(M1/M2), Download from [gpt4all model](https://gpt4all.io/models"
"/ggml-gpt4all-j-v1.3-groovy.bin)"
msgstr ""
"我们也支持一些小模型,你可以通过CPU/MPS(M1、M2)运行, 模型下载[gpt4all](https://gpt4all.io/models"
"/ggml-gpt4all-j-v1.3-groovy.bin)"

#: ../../modules/llms.md:24 da5b575421ef45e7ad0f56fac151948d
#: ../../modules/llms.md:30 67b0f633acee493d86033806a5f0db9a
msgid "put it in the models path, then change .env config."
msgstr "将模型放在models路径, 修改.env 配置文件"

#: ../../modules/llms.md:29 03453d147e64404ab2c116faf0147b70
#: ../../modules/llms.md:35 f1103e6c60b44657ba47129f97156251
msgid ""
"DB-GPT provides a model load adapter and chat adapter. load adapter which"
" allows you to easily adapt load different LLM models by inheriting the "
@@ -91,15 +98,15 @@ msgstr ""
"DB-GPT提供了多模型适配器load adapter和chat adapter.load adapter通过继承BaseLLMAdapter类,"
" 实现match和loader方法允许你适配不同的LLM."

#: ../../modules/llms.md:31 06b14da2931349859182473cd79abd68
#: ../../modules/llms.md:37 768ae456f55b4b1894d7dff43c7a7f01
msgid "vicuna llm load adapter"
msgstr "vicuna llm load adapter"

#: ../../modules/llms.md:48 fe7be51e9e2240c1882ef05f94a39d90
#: ../../modules/llms.md:54 5364ea2d1b2e4759a98045b4ff7be4fe
msgid "chatglm load adapter"
msgstr "chatglm load adapter"

#: ../../modules/llms.md:75 3535d4e0a0b946a49ed13710ae0ae5f3
#: ../../modules/llms.md:81 eb7e5dc538a14a67b030b7a8b89a8e8a
msgid ""
"chat adapter which allows you to easily adapt chat different LLM models "
"by inheriting the BaseChatAdpter.you just implement match() and "
@@ -108,43 +115,43 @@ msgstr ""
"chat "
"adapter通过继承BaseChatAdpter允许你通过实现match和get_generate_stream_func方法允许你适配不同的LLM."

#: ../../modules/llms.md:77 b8dba90d769d45f090086fa044f22a96
#: ../../modules/llms.md:83 f8e2cbbbf1e64b07af1a18bc089d268f
msgid "vicuna llm chat adapter"
msgstr "vicuna llm chat adapter"

#: ../../modules/llms.md:89 f1d6e8145f704b5bbd1c49224c1e30f9
#: ../../modules/llms.md:95 dba52e8a07354536bbfa38319129b98f
msgid "chatglm llm chat adapter"
msgstr "chatglm llm chat adapter"

#: ../../modules/llms.md:102 295e498cec384d9589431b1d5942f590
#: ../../modules/llms.md:108 3b9b667a154f4123ac0c2618ce9a8626
msgid ""
"if you want to integrate your own model, just need to inheriting "
"BaseLLMAdaper and BaseChatAdpter and implement the methods"
msgstr "如果你想集成自己的模型,只需要继承BaseLLMAdaper和BaseChatAdpter类,然后实现里面的方法即可"

#: ../../modules/llms.md:104 07dd4757b06440fe8e9959446ff05892
#: ../../modules/llms.md:110 713bb7cbd7644cc3b3734901b46d3ee0
msgid "Multi Proxy LLMs"
msgstr "多代理LLM"

#: ../../modules/llms.md:105 1b9fc9ce08b94f6493f4b6ce51878fe2
#: ../../modules/llms.md:111 125f310663e54a5b9a78f036df2c7062
msgid "1. Openai proxy"
msgstr "1. Openai 代理"

#: ../../modules/llms.md:106 64e44b3c7c254034a1f72d2e362f4c4d
#: ../../modules/llms.md:112 56a578a5e1234fe38a61eacd74d52b93
msgid ""
"If you haven't deployed a private infrastructure for a large model, or if"
" you want to use DB-GPT in a low-cost and high-efficiency way, you can "
"also use OpenAI's large model as your underlying model."
msgstr "如果你还没有为大模型部署私有化基础设施,或者希望以低成本、高效率的方式使用DB-GPT,也可以使用OpenAI的大模型作为底层模型。"

#: ../../modules/llms.md:108 bca7d8118cd546ca8160400fe729be89
#: ../../modules/llms.md:114 ce9d8db45c0441a59dc67babedfa1829
msgid ""
"If your environment deploying DB-GPT has access to OpenAI, then modify "
"the .env configuration file as below will work."
msgstr "如果部署DB-GPT的环境可以访问OpenAI,那么按如下方式修改.env配置文件即可。"

#: ../../modules/llms.md:116 7b84ddb937954787b4c8422f743afeda
#: ../../modules/llms.md:122 cc81057c5bb6453e90bd5ab6c5d29c19
msgid ""
"If you can't access OpenAI locally but have an OpenAI proxy service, you "
"can configure as follows."