docs: update readme and llms (#138)

csunny 2023-06-14 21:40:48 +08:00
parent b4c01cc299
commit 9a1424fc86
3 changed files with 15 additions and 3 deletions


@ -20,7 +20,7 @@ As large models are released and iterated upon, they are becoming increasingly i
DB-GPT is an experimental open-source project that uses locally deployed GPT large language models to interact with your data and environment. With this solution, there is no risk of data leakage, and your data stays 100% private and secure.
## News
- [2023/06/14] Support for the gpt4all model, which can run on M1/M2 or CPU-only machines. [documents](https://db-gpt.readthedocs.io/en/latest/modules/llms.html)
- [2023/06/01]🔥 Task-chain calls implemented through plugins on top of the Vicuna-13B base model, for example creating a database from a single sentence. [demo](./assets/auto_plugin.gif)
- [2023/06/01]🔥 QLoRA Guanaco (7B, 13B, 33B) support.
- [2023/05/28]🔥 Chat with content crawled from a URL. [demo](./assets/chaturl_en.gif)


@ -21,7 +21,7 @@ DB-GPT is an open-source, database-centered experimental GPT project that uses local
[DB-GPT video introduction](https://www.bilibili.com/video/BV1SM4y1a7Nj/?buvid=551b023900b290f9497610b2155a2668&is_story_h5=false&mid=%2BVyE%2Fwau5woPcUKieCWS0A%3D%3D&p=1&plat_id=116&share_from=ugc&share_medium=iphone&share_plat=ios&share_session_id=5D08B533-82A4-4D40-9615-7826065B4574&share_source=GENERIC&share_tag=s_i&timestamp=1686307943&unique_k=bhO3lgQ&up_id=31375446)
## News
- [2023/06/14]🔥 Support for the gpt4all model, which can run on M1/M2 or CPU-only machines. [documentation](https://db-gpt.readthedocs.io/projects/db-gpt-docs-zh-cn/zh_CN/latest/modules/llms.html)
- [2023/06/01]🔥 Task-chain calls implemented through plugins on top of the Vicuna-13B base model, for example creating a database from a single sentence. [demo](./assets/dbgpt_bytebase_plugin.gif)
- [2023/06/01]🔥 QLoRA Guanaco support; the 33B model can run on a 4090.
- [2023/05/28]🔥 Chat based on a URL. [demo](./assets/chat_url_zh.gif)


@ -13,6 +13,19 @@ MODEL_SERVER=http://127.0.0.1:8000
```
We currently support the following models: vicuna-13b, vicuna-7b, chatglm-6b, flan-t5-base, guanaco-33b-merged, falcon-40b, and gorilla-7b.
If you want to use another model, such as chatglm-6b, just update the .env config file:
```
LLM_MODEL=chatglm-6b
```
## Run Model with CPU
We also support smaller models, like gpt4all, which can run on CPU or MPS (M1/M2). Download the [gpt4all model](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin), put it in the models path, then change the .env config:
```
LLM_MODEL=gptj-6b
```
DB-GPT provides a model load adapter and a chat adapter. The load adapter allows you to load different LLM models by inheriting from BaseLLMAdapter; you only need to implement the match() and loader() methods.
Vicuna LLM load adapter:
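The Vicuna adapter's code is not shown in this diff hunk. Purely as an illustrative sketch of the pattern described above (the class names, the path check, and the Hugging Face loading calls are assumptions, not the project's actual implementation), a load adapter could look like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in for DB-GPT's load-adapter base class so this sketch is
# self-contained; in the project you would inherit the real base class.
class BaseLLMAdapter:
    def match(self, model_path: str) -> bool:
        return True

    def loader(self, model_path: str, from_pretrained_kwargs: dict):
        raise NotImplementedError


class VicunaLikeAdapter(BaseLLMAdapter):
    """Hypothetical load adapter for Vicuna-style checkpoints."""

    def match(self, model_path: str) -> bool:
        # Claim this adapter when the model path looks like a Vicuna checkpoint.
        return "vicuna" in model_path.lower()

    def loader(self, model_path: str, from_pretrained_kwargs: dict):
        # Load tokenizer and model with Hugging Face Transformers and
        # hand them back to the caller.
        tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
        model = AutoModelForCausalLM.from_pretrained(model_path, **from_pretrained_kwargs)
        return model, tokenizer
```
In this pattern, match() decides whether the adapter should handle a given model path, and loader() returns the loaded model and tokenizer.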
@ -87,7 +100,6 @@ class ChatGLMChatAdapter(BaseChatAdpter):
return chatglm_generate_stream
```
If you want to integrate your own model, you just need to inherit from BaseLLMAdaper and BaseChatAdpter and implement their methods.
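As a sketch of the chat-adapter side (again an illustration under assumptions, not the project's code: the base-class stand-in, the method name get_generate_stream_func, and the generate function's signature are inferred from the excerpt above, not verified):
```python
# Stand-in for DB-GPT's BaseChatAdpter so this sketch is self-contained;
# in the project you would inherit the real base class instead.
class BaseChatAdpter:
    def match(self, model_path: str) -> bool:
        return True

    def get_generate_stream_func(self):
        raise NotImplementedError


def my_model_generate_stream(model, tokenizer, params, device, context_len=2048):
    # Hypothetical streaming generation function: yield partial outputs as
    # they are produced by the model.
    prompt = params.get("prompt", "")
    yield prompt  # placeholder; a real implementation would run the model here


class MyModelChatAdapter(BaseChatAdpter):
    """Hypothetical chat adapter for a custom model."""

    def match(self, model_path: str) -> bool:
        return "my-model" in model_path.lower()

    def get_generate_stream_func(self):
        # Return the streaming generation function used at chat time,
        # mirroring the chatglm_generate_stream pattern shown above.
        return my_model_generate_stream
```
Once both adapters match your model path, the load adapter supplies the model and tokenizer and the chat adapter supplies the streaming generation function.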
## Multi Proxy LLMs
### 1. OpenAI proxy