mirror of https://github.com/csunny/DB-GPT.git
synced 2025-07-30 23:28:35 +00:00

doc:faq and llama.cpp llm usage

This commit is contained in:
parent 013f363432
commit 281ea7cee6
README.zh.md
@@ -148,25 +148,29 @@ DB-GPT基于 [FastChat](https://github.com/lm-sys/FastChat) 构建大模型运

- [DB-GPT-Web](https://github.com/csunny/DB-GPT-Web) 多端交互前端界面

## Image

🌐 [AutoDL镜像](https://www.codewithgpu.com/i/csunny/DB-GPT/dbgpt-0.3.1-v2)

🌐 [阿里云镜像](http://dbgpt.site/web/#/p/dc4bb97e0bc15302dbf3a5d5571142dd)

## 安装

[快速开始](https://db-gpt.readthedocs.io/projects/db-gpt-docs-zh-cn/zh_CN/latest/getting_started/getting_started.html)





[**快速开始**](https://db-gpt.readthedocs.io/projects/db-gpt-docs-zh-cn/zh_CN/latest/getting_started/getting_started.html)

### 多语言切换

在.env 配置文件当中,修改LANGUAGE参数来切换使用不同的语言,默认是英文(中文zh, 英文en, 其他语言待补充)
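
例如,一个最小的 .env 配置示意(仅作演示,具体键名以 .env.template 为准):

```bash
# 在 .env 中切换界面语言:zh 为中文,en 为英文
LANGUAGE=zh
```
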
### 平台部署

- autodl

[autodl镜像](https://www.codewithgpu.com/i/csunny/DB-GPT/csunny-db-gpt),从头搭建可参考镜像说明,或通过`docker pull`获取共享镜像,按照文档中的说明操作即可,若有问题,欢迎评论。

## 使用说明

### 多模型使用

[使用指南](https://db-gpt.readthedocs.io/projects/db-gpt-docs-zh-cn/zh_CN/latest/modules/llms.html)

如果在使用知识库时遇到与nltk相关的错误,您需要安装nltk工具包。更多详情,请参见:[nltk文档](https://www.nltk.org/data.html)
Run the Python interpreter and type the commands:
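
例如,可以用 nltk 自带的下载器补齐缺失的数据包(示意命令,具体包名以实际报错为准,此处以 punkt 为例):

```bash
pip install nltk
python -m nltk.downloader punkt
```
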
@@ -61,8 +61,11 @@ Once the environment is installed, we have to create a new folder "models" in th

```{tip}
Notice: make sure you have installed git-lfs

centos: yum install git-lfs

ubuntu: apt-get install git-lfs

macos: brew install git-lfs
```
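
For example, a typical sequence for fetching one of the models mentioned in this document into the `models` folder (a sketch; substitute the model repo you actually use):

```bash
cd DB-GPT
mkdir -p models && cd models
git lfs install
git clone https://huggingface.co/lmsys/vicuna-13b-v1.5
```
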
@@ -99,10 +102,16 @@ You can configure basic parameters in the .env file, for example setting LLM_MOD
```bash
$ python pilot/server/dbgpt_server.py
```

Open http://localhost:5000 with your browser to see the product.

If you want to access an external LLM service, you need to 1.set the variables LLM_MODEL=YOUR_MODEL_NAME MODEL_SERVER=YOUR_MODEL_SERVER(eg:http://localhost:5000) in the .env file.

```{tip}
If you want to access an external LLM service, you need to

1. set the variables LLM_MODEL=YOUR_MODEL_NAME, MODEL_SERVER=YOUR_MODEL_SERVER (e.g. http://localhost:5000) in the .env file.

2. execute dbgpt_server.py in light mode
```
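
A minimal `.env` sketch for step 1, using the placeholder values from the tip above (substitute your own model name and server address):

```bash
# .env — point the webserver at an external model service
LLM_MODEL=YOUR_MODEL_NAME
MODEL_SERVER=http://localhost:5000
```
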
If you want to learn about dbgpt-webui, read https://github.com/csunny/DB-GPT/tree/new-page-framework/datacenter

@@ -110,8 +119,7 @@ If you want to learn about dbgpt-webui, read https://github.com/csunny/DB-GPT/tree/
```bash
$ python pilot/server/dbgpt_server.py --light
```
### 4. Multiple GPUs
### Multiple GPUs

DB-GPT will use all available GPUs by default. You can modify the setting `CUDA_VISIBLE_DEVICES=0,1` in the `.env` file to use specific GPU IDs.

@@ -127,7 +135,7 @@ CUDA_VISIBLE_DEVICES=3,4,5,6 python3 pilot/server/dbgpt_server.py

You can modify the setting `MAX_GPU_MEMORY=xxGib` in the `.env` file to configure the maximum memory used by each GPU.
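
For example, both GPU settings together in `.env` (illustrative values):

```bash
# .env — restrict DB-GPT to two GPUs and cap per-GPU memory
CUDA_VISIBLE_DEVICES=0,1
MAX_GPU_MEMORY=16Gib
```
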
### 5. Not Enough Memory
### Not Enough Memory

DB-GPT supports 8-bit quantization and 4-bit quantization.
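
For example, to enable 4-bit quantization in `.env` (a sketch; depending on your version you may only need to set one of the two flags):

```bash
# .env — trade some model quality for a smaller VRAM footprint
QUANTIZE_8bit=False
QUANTIZE_4bit=True
```
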
@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 18:31+0800\n"
"POT-Creation-Date: 2023-08-17 21:23+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -19,22 +19,129 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"

#: ../../faq.md:8 ded9afcc91594bce8950aa688058a5b6
#: ../../faq.md:1 a39cbc25271841d79095c1557a817a76
msgid "FAQ"
msgstr ""

#: ../../faq.md:2 b08ce199a11b4d309142866a637bc3d0
msgid "Q1: text2vec-large-chinese not found"
msgstr ""

#: ../../faq.md:4 754a61fa05a846f4847bd988c4049ceb
msgid ""
"A1: make sure you have download text2vec-large-chinese embedding model in"
" right way"
msgstr ""

#: ../../faq.md:16 5a3d32eacdd94f59bb4039c3d6380fc9
msgid ""
"Q2: execute `pip install -r requirements.txt` error, found some package "
"cannot find correct version."
msgstr ""

#: ../../faq.md:19 54b322726a074d6d9c1a957310774aba
msgid "A2: change the pip source."
msgstr ""

#: ../../faq.md:26 ../../faq.md:33 0c238f86900243e5b5e9a49e4ef37063
#: 245e48f636524172b1b9ba4144946007
msgid "or"
msgstr ""

#: ../../faq.md:41 f2f7025f324c4065abf244a3adb4e4f6
msgid "Q3:Access denied for user 'root@localhost'(using password :NO)"
msgstr ""

#: ../../faq.md:43 f14fd1c2d2ed454491e0a876fd2971a4
msgid "A3: make sure you have installed mysql instance in right way"
msgstr ""

#: ../../faq.md:45 6d499bfb6c0142ec838f68696f793c3d
msgid "Docker:"
msgstr ""

#: ../../faq.md:49 71137fe8a30d42e7943dd2a4402b2094
msgid "Normal: [download mysql instance](https://dev.mysql.com/downloads/mysql/)"
msgstr ""

#: ../../faq.md:52 ec5d5f79cbe54328902e6e9b820276e7
msgid "Q4:When I use openai(MODEL_SERVER=proxyllm) to chat"
msgstr ""

#: ../../faq.md:58 d2cbee8bbfd54b4b853ccbdbf1c30c97
msgid "A4: make sure your openapi API_KEY is available"
msgstr ""

#: ../../faq.md:60 c506819975c841468af1899730df3ed1
msgid "Q5:When I Chat Data and Chat Meta Data, I found the error"
msgstr "Chat Data and Chat Meta Data报如下错"

#: ../../faq.md:13 25237221f65c47a2b62f5afbe637d6e7
#: ../../faq.md:67 af52123acad74c28a50f93d53da6afa9
msgid "A5: you have not create your database and table"
msgstr "需要创建自己的数据库"

#: ../../faq.md:14 8c9024f1f4d7414499587e3bdf7d56d1
#: ../../faq.md:68 05bf6d858df44157bfb5480f9e8759fb
msgid "1.create your database."
msgstr "1.先创建数据库"

#: ../../faq.md:20 afc7299d3b4e4d98b17fd6157d440970
#: ../../faq.md:74 363d4fbb2a474c64a54c2659844596b5
msgid "2.create table {$your_table} and insert your data. eg:"
msgstr "然后创建数据表,模拟数据"

#: ../../faq.md:88 5f3a9b9d7e6f444a87deb17b5a1a45af
msgid "Q6:How to change Vector DB Type in DB-GPT."
msgstr ""

#: ../../faq.md:90 ee1d4dfa813942e1a3d1219f21bc041f
msgid "A6: Update .env file and set VECTOR_STORE_TYPE."
msgstr ""

#: ../../faq.md:91 71e9e9905bdb46e1925f66a6c12a6afd
msgid ""
"DB-GPT currently support Chroma(Default), Milvus(>2.1), Weaviate vector "
"database. If you want to change vector db, Update your .env, set your "
"vector store type, VECTOR_STORE_TYPE=Chroma (now only support Chroma and "
"Milvus(>2.1), if you set Milvus, please set MILVUS_URL and MILVUS_PORT) "
"If you want to support more vector db, you can integrate yourself.[how to"
" integrate](https://db-gpt.readthedocs.io/en/latest/modules/vector.html)"
msgstr ""

#: ../../faq.md:107 d6c4ed8ff8244aa8aef6ea8d8f0a5555
msgid "Q7:When I use vicuna-13b, found some illegal character like this."
msgstr ""

#: ../../faq.md:112 911a5051c37244e1b6ea9d3b1bd1fd97
msgid ""
"A7: set KNOWLEDGE_SEARCH_TOP_SIZE smaller or set KNOWLEDGE_CHUNK_SIZE "
"smaller, and reboot server."
msgstr ""

#: ../../faq.md:114 0566430bbc0541709ed60b81c7372175
msgid ""
"Q8:space add error (pymysql.err.OperationalError) (1054, \"Unknown column"
" 'knowledge_space.context' in 'field list'\")"
msgstr ""

#: ../../faq.md:117 37419da934b44575bd39bcffffa81482
msgid "A8:"
msgstr ""

#: ../../faq.md:118 1c5f46dbccc342329b544ac174a79994
msgid "1.shutdown dbgpt_server(ctrl c)"
msgstr ""

#: ../../faq.md:120 35bb76e9d9ec4230a8fab9aed475a4d7
msgid "2.add column context for table knowledge_space"
msgstr ""

#: ../../faq.md:124 e198605e42d4452680359487abc349a3
msgid "3.execute sql ddl"
msgstr ""

#: ../../faq.md:129 88495f3e66c448faab9f06c4c5cd27ef
msgid "4.restart dbgpt server"
msgstr ""

#~ msgid "FAQ"
#~ msgstr "FAQ"
@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-16 23:15+0800\n"
"POT-Creation-Date: 2023-08-17 21:23+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -20,34 +20,34 @@ msgstr ""
"Generated-By: Babel 2.12.1\n"

#: ../../getting_started/install/deploy/deploy.md:1
#: de443fce549545518824a89604028a2e
#: de0b03c3b94a4e2aad7d380f532b85c0
msgid "Installation From Source"
msgstr "源码安装"

#: ../../getting_started/install/deploy/deploy.md:3
#: d7b1a80599004c589c9045eba98cc5c9
#: 65a034e1a90f40bab24899be901cc97f
msgid ""
"This tutorial gives you a quick walkthrough about use DB-GPT with you "
"environment and data."
msgstr "本教程为您提供了关于如何使用DB-GPT的使用指南。"

#: ../../getting_started/install/deploy/deploy.md:5
#: 0ba98573194c4108aedaa2669915e949
#: 33b15956f7ef446a9aa4cac014163884
msgid "Installation"
msgstr "安装"

#: ../../getting_started/install/deploy/deploy.md:7
#: b8f465fcee2b45009bb1c6356df06b20
#: ad64dc334e8e43bebc8873afb27f7b15
msgid "To get started, install DB-GPT with the following steps."
msgstr "请按照以下步骤安装DB-GPT"

#: ../../getting_started/install/deploy/deploy.md:9
#: fd5031c97e304023bd6880cd10d58413
#: 33e12a5bef6c45dbb30fbffae556b664
msgid "1. Hardware Requirements"
msgstr "1. 硬件要求"

#: ../../getting_started/install/deploy/deploy.md:10
#: 05f570b3999f465982c2648f658aed82
#: dfd3b7c074124de78169f34168b7c757
msgid ""
"As our project has the ability to achieve ChatGPT performance of over "
"85%, there are certain hardware requirements. However, overall, the "
@@ -56,176 +56,176 @@ msgid ""
msgstr "由于我们的项目有能力达到85%以上的ChatGPT性能,所以对硬件有一定的要求。但总体来说,我们在消费级的显卡上即可完成项目的部署使用,具体部署的硬件说明如下:"

#: ../../getting_started/install/deploy/deploy.md
#: 5c5ee902c51d4e44aeeac3fa99910098
#: 3d4530d981bf4dbab815a11c74bfd897
msgid "GPU"
msgstr "GPU"

#: ../../getting_started/install/deploy/deploy.md
#: a3199d1f11474451a06a11503c4e8c74 e3d7c2003b444cb886aec34aaba4acfe
#: 348c8f9b734244258416ea2e11b76caa f2f5e55b8c9b4c7da0ac090e763a9f47
msgid "VRAM Size"
msgstr "显存"

#: ../../getting_started/install/deploy/deploy.md
#: 3bd4ce6f9201483fa579d42ebf8cf556
#: 9e099897409f42339bc284c378318a72
msgid "Performance"
msgstr "Performance"

#: ../../getting_started/install/deploy/deploy.md
#: 8256a27b6a534edea5646589d65eb34e
#: 65bd67a198a84a5399f4799b505e062c
msgid "RTX 4090"
msgstr "RTX 4090"

#: ../../getting_started/install/deploy/deploy.md
#: 25c1f69adc5d4a058dbd28ea4414c3f8 ed85dab6725b4f0baf13ff67a7032777
#: 75e7f58f8f5d42f081a3e4d2e51ccc18 d897e949b37344d084f6917b977bcceb
msgid "24 GB"
msgstr "24 GB"

#: ../../getting_started/install/deploy/deploy.md
#: f57d2a02d8344a3d9870c1c21728249d
#: 9cba72a2be3c41bda797ff447b63e448
msgid "Smooth conversation inference"
msgstr "Smooth conversation inference"

#: ../../getting_started/install/deploy/deploy.md
#: aa1e607b65964d43ad93fc9b3cff7712
#: 90ea71d2099c47acac027773e69d2b23
msgid "RTX 3090"
msgstr "RTX 3090"

#: ../../getting_started/install/deploy/deploy.md
#: c0220f95d58543b498bdf896b2c1a2a1
#: 7dd339251a1a45be9a45e0bb30bd09f7
msgid "Smooth conversation inference, better than V100"
msgstr "Smooth conversation inference, better than V100"

#: ../../getting_started/install/deploy/deploy.md
#: acf0daf6aa764953b43464c8d6688dd8
#: 8abe188052464e6aa395392db834c842
msgid "V100"
msgstr "V100"

#: ../../getting_started/install/deploy/deploy.md
#: 902f8c48bdad47d587acb1990b4d45b7 e53c23b23b414025be52191beb6d33da
#: 0b5263806a19446991cbd59c0fec6ba7 d645833d7e854eab81102374bf3fb7d8
msgid "16 GB"
msgstr "16 GB"

#: ../../getting_started/install/deploy/deploy.md
#: 68f4b835131c4753b1ba690f3b34daea fac3351a3901481c9e0c5204d6790c75
#: 83bb5f40fa7f4e9389ac6abdb6bbb285 f9e58862a0cb488194d7ad536f359f0d
msgid "Conversation inference possible, noticeable stutter"
msgstr "Conversation inference possible, noticeable stutter"

#: ../../getting_started/install/deploy/deploy.md
#: d4b9ff72353b4a10bff0647bf50bfe5c
#: 281f7b3c9a9f450798e5cc7612ea3890
msgid "T4"
msgstr "T4"

#: ../../getting_started/install/deploy/deploy.md:19
#: ddc9544667654f539ca91ac7e8af1268
#: 7276d432615040b3a9eea3f2c5764319
msgid ""
"if your VRAM Size is not enough, DB-GPT supported 8-bit quantization and "
"4-bit quantization."
msgstr "如果你的显存不够,DB-GPT支持8-bit和4-bit量化版本"

#: ../../getting_started/install/deploy/deploy.md:21
#: a6ec9822bc754670bbfc1a8a75e71eb2
#: f68d085e03d244ed9b5ccec347466889
msgid ""
"Here are some of the VRAM size usage of the models we tested in some "
"common scenarios."
msgstr "这里是量化版本的相关说明"

#: ../../getting_started/install/deploy/deploy.md
#: b307fe62a5564cadbf3f2d1387165c6b
#: 8d397d7ee603448b97153192e6d3e372
msgid "Model"
msgstr "Model"

#: ../../getting_started/install/deploy/deploy.md
#: 718fb2ff4fcc488aba8963fc6ad5ea8c
#: 4c789597627447278e89462690563faa
msgid "Quantize"
msgstr "Quantize"

#: ../../getting_started/install/deploy/deploy.md
#: 6079a14fca3d43bfbf14021fcd1534c7 785489b458ca4578bfd586c495b5abb9
#: a8d7c76224544ce69f376ac7cb2f5a3b dbf2beb53d7e491393b00848d63d7ffa
msgid "vicuna-7b-v1.5"
msgstr "vicuna-7b-v1.5"

#: ../../getting_started/install/deploy/deploy.md
#: 1d6a5c19584247d89fb2eb98bcaecc83 278d03ee54e749e1b5f20204ddc36149
#: 69c01cb441894f059e91400502cd33ae 7fa7d4922bfb4b3bb44b98ea02ff7e78
#: b8b4566e3a994919b9821cd536504936 d6f4afc865cb40b085b5fc79a09bc7f9
#: ef05aa05a2d2411a91449ccc18a76211
#: 1bd5afee2af84597b5a423308e92362c 46d030b6ff1849ebb22b590e6978d914
#: 4cf8f3ebb0b743ffb2f1c123a25b75d0 6d96be424e6043daa2c02649894aa796
#: 83bb935f520c4c818bfe37e13034b2a7 92ba161085374917b7f82810b1a2bf00
#: ca5ccc49ba1046d2b4b13aaa7ceb62f5
msgid "4-bit"
msgstr "4-bit"

#: ../../getting_started/install/deploy/deploy.md
#: 1266b6e1dde64dab9e6d8bba2f3f6d09 8ab98ed2c80c48ab9e9694131ffcac67
#: b94deb7b80c24ce8a694984511e5a02a
#: 1173b5ee04cb4686ba34a527bc618bdb 558136997f4d49998f2f4e6a9bb656b0
#: 8d203a9a70684cbaa9d937af8450847f
msgid "8 GB"
msgstr "8 GB"

#: ../../getting_started/install/deploy/deploy.md
#: 065f1cf1a1b94ad5803f95f8f019d882 0689708416e14942a76c2808a26bc26e
#: 29dc55e7659a4d6a999a347c346e1327 5f0fa6c729db4cd7ab42dbdc73ca4e40
#: 6401e59dc85541a0b20cb2d2c26e4fd0 9071acd973b24d5582f8d879d5e55931
#: 96f12483ac7447baab6592538cfd567c
#: 033458b772e3493b80041122e067e194 0f3eda083eac4739b2cf7d21337b145e
#: 6404eaa486cf45a69a27b0f87a7f6302 8e850aa3acf14ab2b231be74ddb34e86
#: ba258645f41f47a693aacbbc0f38e981 df67cac599234c4ba667a9b40eb6d9bc
#: fc07aeb321434722a320fed0afe3ffb8
msgid "8-bit"
msgstr "8-bit"

#: ../../getting_started/install/deploy/deploy.md
#: 2d56e3dc1f6a4035a770f7b94c8e0f96 5eebdf37bc544624be5d1b6dabda4716
#: b9fd2505b4644257b91777bc68d5f41e e7056c195656413f92a0c78b5d14219c
#: e7b87586700e4da0aaccff0b4c7c54f7 eb5ad729ae784c7cb8dd52fbb12699ae
#: 3ed2cb5787c14f268c446b03d5531233 68e7f0b0e8ad44ee86a86189bb3b553d
#: 8b4ea703d1df45c5be90d83c4723f16f cb606e0a458746fd86307c1e8aea08f1
#: d5da58dbde3c4bb4ac8b464a0a507c62 e8e140c610ec4971afe1b7ec2690382a
msgid "12 GB"
msgstr "12 GB"

#: ../../getting_started/install/deploy/deploy.md
#: 529ead731c98461b8cb5452c4e72ab23 7cce32961a654ed2a31edc82724e6a1f
#: 512cd29c308c4d3ab66dbe63e7ea8f48 78f8307ab96c4245a1f09abcd714034c
msgid "vicuna-13b-v1.5"
msgstr "vicuna-13b-v1.5"

#: ../../getting_started/install/deploy/deploy.md
#: 0085b850f3574ba6bf3b3654123882dd 69b2df6df91c49b2b26f6749bf6dc657
#: 714e9441566e4c8bbdeaad944e64c699
#: 48849903b3cb4888b6dd3d5efcbb24fb 83178c05f6cf431f82bb1d6d25b2645e
#: 979e3ab64df14753b0987bdd49bd5cc6
msgid "20 GB"
msgstr "20 GB"

#: ../../getting_started/install/deploy/deploy.md
#: 133b65fb88f74645ae5db5cd0009bb35 1e7dedf510e94a47b23eaef61f9687b1
#: 0496046d1fb644c28361959b395231d7 3871c7c4432b4a15a9888586cdc70eda
msgid "llama-2-7b"
msgstr "llama-2-7b"

#: ../../getting_started/install/deploy/deploy.md
#: 0951d03bb6544a2391dcd72eea47c1a7 89f93c8aadc84a0d97d3d89ee55d06bf
#: 0c7c632d7c4d44fabfeed58fcc13db8f 78ee1adc6a7e4fa188706a1d5356059f
msgid "llama-2-13b"
msgstr "llama-2-13b"

#: ../../getting_started/install/deploy/deploy.md
#: 6e5a32858b20441daa4b2584faa46ec4 8bcd62d8cf4f49aebb7d97cd9e015252
#: 15a9341d4a6649908ef88045edd0cb93 d385381b0b4d4eff96a93a9d299cf516
msgid "llama-2-70b"
msgstr "llama-2-70b"

#: ../../getting_started/install/deploy/deploy.md
#: 7f7333221b014cc6857fd9a9e358d85c
#: de37a5c2b02a498b9344b24f626db9dc
msgid "48 GB"
msgstr "48 GB"

#: ../../getting_started/install/deploy/deploy.md
#: 77c24e304e9e4de7b62f99ce29a66a70
#: e897098f81314ce4bf729aee1de7354c
msgid "80 GB"
msgstr "80 GB"

#: ../../getting_started/install/deploy/deploy.md
#: 32c04dc45efb45bcb516640a6d15cce1 e04ad78be6774c32bc53ddd7951cedae
#: 0782883260f840db8e8bf7c10b5ddf62 b03b5c9343454119ae11fcb2dedf9f90
msgid "baichuan-7b"
msgstr "baichuan-7b"

#: ../../getting_started/install/deploy/deploy.md
#: 0fe379939b164e56b0d93113e85fbd98 3400143cf1b94edfbf5da63ed388b08c
#: 008a7d56e8dc4242ae3503bbbf4db153 65ea9ba20adb45519d65da7b16069fa8
msgid "baichuan-13b"
msgstr "baichuan-13b"

#: ../../getting_started/install/deploy/deploy.md:40
#: 7a05f116e0904d0d84d9fc98e5465494
#: 1e434048c4844cc1906d83dd68af6d8c
msgid "2. Install"
msgstr "2. Install"

#: ../../getting_started/install/deploy/deploy.md:45
#: 8f4d6c2b69cb46288f593b6c2aa7701e
#: c6190ea13c024ddcb8e45ea22a235c3b
msgid ""
"We use Sqlite as default database, so there is no need for database "
"installation. If you choose to connect to other databases, you can "
@@ -240,12 +240,12 @@ msgstr ""
" Miniconda](https://docs.conda.io/en/latest/miniconda.html)"

#: ../../getting_started/install/deploy/deploy.md:54
#: 3ffaf7fed0c8422b9ceb2ab82d6ddd4d
#: ee1e44044b73460ea8cd2f6c2eb6100d
msgid "Before use DB-GPT Knowledge"
msgstr "在使用知识库之前"

#: ../../getting_started/install/deploy/deploy.md:60
#: 2c2ef86e379d4db18bdfdba6133a0b2f
#: 6a9ff4138c69429bb159bf452fa7ee55
msgid ""
"Once the environment is installed, we have to create a new folder "
"\"models\" in the DB-GPT project, and then we can put all the models "
@@ -253,44 +253,56 @@ msgid ""
msgstr "如果你已经安装好了环境需要创建models, 然后到huggingface官网下载模型"

#: ../../getting_started/install/deploy/deploy.md:63
#: 73a766538b3d4cfaa8d7a68b3c9915b8
msgid ""
"Notice make sure you have install git-lfs centos:yum install git-lfs "
"ubuntu:app-get install git-lfs macos:brew install git-lfs"
#: 1299b19bd0f24cc896c59e2c8e7e656c
msgid "Notice make sure you have install git-lfs"
msgstr ""
"注意下载模型之前确保git-lfs已经安ubuntu:app-get install git-lfs macos:brew install "
"git-lfs"

#: ../../getting_started/install/deploy/deploy.md:83
#: 3c26909ece094ecb9f6343d15cca394a
#: ../../getting_started/install/deploy/deploy.md:65
#: 69b3433c8e5c4cbb960e0178bdd6ac97
msgid "centos:yum install git-lfs"
msgstr ""

#: ../../getting_started/install/deploy/deploy.md:67
#: 50e3cfee5fd5484bb063d41693ac75f0
msgid "ubuntu:app-get install git-lfs"
msgstr ""

#: ../../getting_started/install/deploy/deploy.md:69
#: 81c85ca1188b4ef5b94e0431c6309f9b
msgid "macos:brew install git-lfs"
msgstr ""

#: ../../getting_started/install/deploy/deploy.md:86
#: 9b503ea553a24d488e1c180bf30055ff
msgid ""
"The model files are large and will take a long time to download. During "
"the download, let's configure the .env file, which needs to be copied and"
" created from the .env.template"
msgstr "模型文件很大,需要很长时间才能下载。在下载过程中,让我们配置.env文件,它需要从。env.template中复制和创建。"

#: ../../getting_started/install/deploy/deploy.md:85
#: efab7120927d4b3f90e591d736b927a3
#: ../../getting_started/install/deploy/deploy.md:88
#: 643b6a27bc0f43ee9451d18d52a9a2eb
msgid ""
"if you want to use openai llm service, see [LLM Use FAQ](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)"
msgstr "如果想使用openai大模型服务, 可以参考[LLM Use FAQ](https://db-"
msgstr ""
"如果想使用openai大模型服务, 可以参考[LLM Use FAQ](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/faq/llm/llm_faq.html)"

#: ../../getting_started/install/deploy/deploy.md:88
#: 2009fcaad7c34ebfaa900215650256fc
#: ../../getting_started/install/deploy/deploy.md:91
#: cc869640e66949e99faa17b1098b1306
msgid "cp .env.template .env"
msgstr "cp .env.template .env"

#: ../../getting_started/install/deploy/deploy.md:91
#: ee97ddf25daf45e3bc32b33693af447a
#: ../../getting_started/install/deploy/deploy.md:94
#: 1b94ed0e469f413b8e9d0ff3cdabca33
msgid ""
"You can configure basic parameters in the .env file, for example setting "
"LLM_MODEL to the model to be used"
msgstr "您可以在.env文件中配置基本参数,例如将LLM_MODEL设置为要使用的模型。"

#: ../../getting_started/install/deploy/deploy.md:93
#: a86fd88e1d0f4925b8d0dbc27535663b
#: ../../getting_started/install/deploy/deploy.md:96
#: 52cfa3636f2b4f949035d2d54b39a123
msgid ""
"([Vicuna-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) based on "
"llama-2 has been released, we recommend you set `LLM_MODEL=vicuna-"
@@ -300,52 +312,23 @@ msgstr ""
"/vicuna-13b-v1.5), "
"目前Vicuna-v1.5模型(基于llama2)已经开源了,我们推荐你使用这个模型通过设置LLM_MODEL=vicuna-13b-v1.5"

#: ../../getting_started/install/deploy/deploy.md:95
#: 5395445ea6324e7c9e15485fad084937
#: ../../getting_started/install/deploy/deploy.md:98
#: 491fd44ede1645a3a2db10097c10dbe8
msgid "3. Run"
msgstr "3. Run"

#: ../../getting_started/install/deploy/deploy.md:96
#: cbbc83183f0d49bdb16a3df18adbe8b2
msgid ""
"You can refer to this document to obtain the Vicuna weights: "
"[Vicuna](https://github.com/lm-sys/FastChat/blob/main/README.md#model-"
"weights) ."
msgstr ""
"你可以参考如何获取Vicuna weights文档[Vicuna](https://github.com/lm-sys/FastChat/blob/main/README.md#model-"
"weights) ."

#: ../../getting_started/install/deploy/deploy.md:98
#: e0ffb578c7894520bbb850b257e7773c
msgid ""
"If you have difficulty with this step, you can also directly use the "
"model from [this link](https://huggingface.co/Tribbiani/vicuna-7b) as a "
"replacement."
msgstr "如果觉得模型太大你也可以下载vicuna-7b [this link](https://huggingface.co/Tribbiani/vicuna-7b) "

#: ../../getting_started/install/deploy/deploy.md:103
#: 590c7c07cf5347b4aeee0809185c7f45
#: ../../getting_started/install/deploy/deploy.md:100
#: f66b8a2b18b34df5b3e74674b4a9d7a9
msgid "1.Run db-gpt server"
msgstr "1.Run db-gpt server"

#: ../../getting_started/install/deploy/deploy.md:108
#: cc1f6d2e37464a4291ee7d33d9ebd75f
#: ../../getting_started/install/deploy/deploy.md:105
#: b72283f0ffdc4ecbb4da5239be5fd126
msgid "Open http://localhost:5000 with your browser to see the product."
msgstr "打开浏览器访问http://localhost:5000"

#: ../../getting_started/install/deploy/deploy.md:110
#: 7eef6b17573e4300aa6b693200461f58
msgid ""
"If you want to access an external LLM service, you need to 1.set the "
"variables LLM_MODEL=YOUR_MODEL_NAME "
"MODEL_SERVER=YOUR_MODEL_SERVER(eg:http://localhost:5000) in the .env "
"file. 2.execute dbgpt_server.py in light mode"
msgstr ""
"如果你想访问外部的大模型服务(是通过DB-"
"GPT/pilot/server/llmserver.py启动的模型服务),1.需要在.env文件设置模型名和外部模型服务地址。2.使用light模式启动服务"

#: ../../getting_started/install/deploy/deploy.md:113
#: 2fa89081574d4d3a92a4c7d33b090d02
#: ../../getting_started/install/deploy/deploy.md:116
#: 1fae8a8ce4184feba2d74f877a25d8d2
msgid ""
"If you want to learn about dbgpt-webui, read https://github./csunny/DB-"
"GPT/tree/new-page-framework/datacenter"
@@ -353,53 +336,55 @@ msgstr ""
"如果你想了解web-ui, 请访问https://github./csunny/DB-GPT/tree/new-page-"
"framework/datacenter"

#: ../../getting_started/install/deploy/deploy.md:120
#: 3b825bc956a0406fb8464e51cfeb769e
msgid "4. Multiple GPUs"
#: ../../getting_started/install/deploy/deploy.md:123
#: 573c0349bd2140e9bb356b53f1da6ee3
#, fuzzy
msgid "Multiple GPUs"
msgstr "4. Multiple GPUs"

#: ../../getting_started/install/deploy/deploy.md:122
#: 568ea5e67ad745858870e66c42ba6833
#: ../../getting_started/install/deploy/deploy.md:125
#: af5d6a12ec954da19576decdf434df5d
msgid ""
"DB-GPT will use all available gpu by default. And you can modify the "
"setting `CUDA_VISIBLE_DEVICES=0,1` in `.env` file to use the specific gpu"
" IDs."
msgstr "DB-GPT默认加载可利用的gpu,你也可以通过修改 在`.env`文件 `CUDA_VISIBLE_DEVICES=0,1`来指定gpu IDs"

#: ../../getting_started/install/deploy/deploy.md:124
#: c5b980733d7a4c8d997123ff5524a055
#: ../../getting_started/install/deploy/deploy.md:127
#: de96662007194418a2877cece51dc5cb
msgid ""
"Optionally, you can also specify the gpu ID to use before the starting "
"command, as shown below:"
msgstr "你也可以指定gpu ID启动"

#: ../../getting_started/install/deploy/deploy.md:134
#: 2a5d283a614644d1bb98bbe721aee8e1
#: ../../getting_started/install/deploy/deploy.md:137
#: 9cb0ff253fb2428dbaec97570e5c4fa4
msgid ""
"You can modify the setting `MAX_GPU_MEMORY=xxGib` in `.env` file to "
"configure the maximum memory used by each GPU."
msgstr "同时你可以通过在.env文件设置`MAX_GPU_MEMORY=xxGib`修改每个GPU的最大使用内存"

#: ../../getting_started/install/deploy/deploy.md:136
#: c29c956d3071455bb11694df721e6612
msgid "5. Not Enough Memory"
#: ../../getting_started/install/deploy/deploy.md:139
#: c708ee0a321444dd91be00cda469976c
#, fuzzy
msgid "Not Enough Memory"
msgstr "5. Not Enough Memory"

#: ../../getting_started/install/deploy/deploy.md:138
#: 0174e92fdbfa4af08063c89f6bbe3957
#: ../../getting_started/install/deploy/deploy.md:141
#: 760347ecf9a44d03a8e17cba153a2cc6
msgid "DB-GPT supported 8-bit quantization and 4-bit quantization."
msgstr "DB-GPT 支持 8-bit quantization 和 4-bit quantization."

#: ../../getting_started/install/deploy/deploy.md:140
#: 277f67fa08a541b3bd1fe77cdab39757
#: ../../getting_started/install/deploy/deploy.md:143
#: 32e3dc941bfe4d6587e8be262f8fb4d3
msgid ""
"You can modify the setting `QUANTIZE_8bit=True` or `QUANTIZE_4bit=True` "
"in `.env` file to use quantization(8-bit quantization is enabled by "
"default)."
msgstr "你可以通过在.env文件设置`QUANTIZE_8bit=True` or `QUANTIZE_4bit=True`"

#: ../../getting_started/install/deploy/deploy.md:142
#: 00884fdf7c9a4f8c983ee52bfbb820aa
#: ../../getting_started/install/deploy/deploy.md:145
#: bdc9a3788149427bac9f3cf35578e206
msgid ""
"Llama-2-70b with 8-bit quantization can run with 80 GB of VRAM, and 4-bit"
" quantization can run with 48 GB of VRAM."
@@ -407,8 +392,8 @@ msgstr ""
"Llama-2-70b with 8-bit quantization 可以运行在 80 GB VRAM机器, 4-bit "
"quantization 可以运行在 48 GB VRAM"

#: ../../getting_started/install/deploy/deploy.md:144
#: a73698444bb4426ca779cc126497a2e0
#: ../../getting_started/install/deploy/deploy.md:147
#: 9b6085c41b5c4b96ac3e917dc5002fc2
msgid ""
"Note: you need to install the latest dependencies according to "
"[requirements.txt](https://github.com/eosphoros-ai/DB-"
@@ -417,3 +402,42 @@ msgstr ""
"注意,需要安装[requirements.txt](https://github.com/eosphoros-ai/DB-"
"GPT/blob/main/requirements.txt)涉及的所有的依赖"

#~ msgid ""
#~ "Notice make sure you have install "
#~ "git-lfs centos:yum install git-lfs "
#~ "ubuntu:app-get install git-lfs "
#~ "macos:brew install git-lfs"
#~ msgstr ""
#~ "注意下载模型之前确保git-lfs已经安ubuntu:app-get install "
#~ "git-lfs macos:brew install git-lfs"

#~ msgid ""
#~ "You can refer to this document to"
#~ " obtain the Vicuna weights: "
#~ "[Vicuna](https://github.com/lm-sys/FastChat/blob/main/README.md"
#~ "#model-weights) ."
#~ msgstr ""
#~ "你可以参考如何获取Vicuna weights文档[Vicuna](https://github.com/lm-"
#~ "sys/FastChat/blob/main/README.md#model-weights) ."

#~ msgid ""
#~ "If you have difficulty with this "
#~ "step, you can also directly use "
#~ "the model from [this "
#~ "link](https://huggingface.co/Tribbiani/vicuna-7b) as "
#~ "a replacement."
#~ msgstr ""
#~ "如果觉得模型太大你也可以下载vicuna-7b [this "
#~ "link](https://huggingface.co/Tribbiani/vicuna-7b) "

#~ msgid ""
#~ "If you want to access an external"
#~ " LLM service, you need to 1.set "
#~ "the variables LLM_MODEL=YOUR_MODEL_NAME "
#~ "MODEL_SERVER=YOUR_MODEL_SERVER(eg:http://localhost:5000) in "
#~ "the .env file. 2.execute dbgpt_server.py "
#~ "in light mode"
#~ msgstr ""
#~ "如果你想访问外部的大模型服务(是通过DB-"
#~ "GPT/pilot/server/llmserver.py启动的模型服务),1.需要在.env文件设置模型名和外部模型服务地址。2.使用light模式启动服务"
@@ -0,0 +1,322 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, csunny
# This file is distributed under the same license as the DB-GPT package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-17 21:23+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"

#: ../../getting_started/install/llm/llama/llama_cpp.md:1
#: 911085eb102a47c1832411ada8b8b906
msgid "llama.cpp"
msgstr "llama.cpp"

#: ../../getting_started/install/llm/llama/llama_cpp.md:3
#: 8099fe03f2204c7f90968ae5c0cae117
msgid ""
"DB-GPT is now supported by [llama-cpp-python](https://github.com/abetlen"
"/llama-cpp-python) through "
"[llama.cpp](https://github.com/ggerganov/llama.cpp)."
msgstr "DB-GPT is now supported by [llama-cpp-python](https://github.com/abetlen"
"/llama-cpp-python) through "
"[llama.cpp](https://github.com/ggerganov/llama.cpp)."

#: ../../getting_started/install/llm/llama/llama_cpp.md:5
#: 79c33615ad6d44859b33ed0d05fdb1a5
msgid "Running llama.cpp"
msgstr "运行 llama.cpp"

#: ../../getting_started/install/llm/llama/llama_cpp.md:7
#: 3111bf827639484fb5e5f72a42b1b4e7
msgid "Preparing Model Files"
msgstr "准备模型文件"

#: ../../getting_started/install/llm/llama/llama_cpp.md:9
#: 823f04ec193946d080b203a19c4ed96f
msgid ""
"To use llama.cpp, you need to prepare a ggml format model file, and there"
" are two common ways to obtain it, you can choose either:"
msgstr "使用llama.cpp, 你需要准备ggml格式的文件,你可以通过以下两种方法获取"

#: ../../getting_started/install/llm/llama/llama_cpp.md:11
#: c631f3c1a1db429c801c24fa7799b2e1
msgid "Download a pre-converted model file."
msgstr "Download a pre-converted model file."

#: ../../getting_started/install/llm/llama/llama_cpp.md:13
#: 1ac7d5845ca241519ec15236e9802af6
msgid ""
"Suppose you want to use [Vicuna 7B v1.5](https://huggingface.co/lmsys"
"/vicuna-7b-v1.5), you can download the file already converted from "
"[TheBloke/vicuna-7B-v1.5-GGML](https://huggingface.co/TheBloke/vicuna-"
"7B-v1.5-GGML), only one file is needed. Download it to the `models` "
"directory and rename it to `ggml-model-q4_0.bin`."
msgstr "假设您想使用[Vicuna 7B v1.5](https://huggingface.co/lmsys"
"/vicuna-7b-v1.5)您可以从[TheBloke/vicuna-7B-v1.5-GGML](https://huggingface.co/TheBloke/vicuna-"
"7B-v1.5-GGML)下载已转换的文件,只需要一个文件。将其下载到models目录并将其重命名为ggml-model-q4_0.bin。"

#: ../../getting_started/install/llm/llama/llama_cpp.md:19
#: 65344fafdaa1469797592e454ebee7b5
msgid "Convert It Yourself"
msgstr "Convert It Yourself"

#: ../../getting_started/install/llm/llama/llama_cpp.md:21
#: 8da2bda172884c9fb2d64901d8b9178c
msgid ""
"You can convert the model file yourself according to the instructions in "
"[llama.cpp#prepare-data--run](https://github.com/ggerganov/llama.cpp"
"#prepare-data--run), and put the converted file in the models directory "
"and rename it to `ggml-model-q4_0.bin`."
msgstr "您可以根据[llama.cpp#prepare-data--run](https://github.com/ggerganov/llama.cpp"
"#prepare-data--run)中的说明自己转换模型文件,然后将转换后的文件放入models目录中,并将其重命名为ggml-model-q4_0.bin。"

#: ../../getting_started/install/llm/llama/llama_cpp.md:23
#: d30986c0a84448ff89bc4bb84e3d0deb
msgid "Installing Dependencies"
msgstr "安装依赖"

#: ../../getting_started/install/llm/llama/llama_cpp.md:25
#: b91ca009587b45679c54f4dce07c2eb3
msgid ""
"llama.cpp is an optional dependency in DB-GPT, and you can manually "
"install it using the following command:"
msgstr "llama.cpp在DB-GPT中是可选安装项, 你可以通过以下命令进行安装"
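
# A plausible form of the command referred to above (an assumption based on
# the package name; check the project docs for the exact extras syntax):
#   pip install llama-cpp-python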
#: ../../getting_started/install/llm/llama/llama_cpp.md:31
#: 2c89087ba2214d97bc01a286826042bc
msgid "Modifying the Configuration File"
msgstr "修改配置文件"

#: ../../getting_started/install/llm/llama/llama_cpp.md:33
#: e4ebd4dac0cd4fb4a8e3c1f6edde7ea8
msgid "Next, you can directly modify your `.env` file to enable llama.cpp."
msgstr "修改`.env`文件使用llama.cpp"
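
# An illustrative .env snippet for this step (the LLM_MODEL value is an
# assumption; the llama_cpp_* keys come from the table documented below):
#   LLM_MODEL=llama-cpp
#   llama_cpp_model_path=./models/ggml-model-q4_0.bin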
#: ../../getting_started/install/llm/llama/llama_cpp.md:40
#: 2fce7ec613784c8e96f19e9f4c4fb818
msgid ""
"Then you can run it according to [Run](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/install/deploy/deploy.html#run)."
msgstr "然后你可以通过[Run](https://db-"
"gpt.readthedocs.io/en/latest/getting_started/install/deploy/deploy.html#run).来运行"

#: ../../getting_started/install/llm/llama/llama_cpp.md:43
#: 9bbaa16512d2420aa368ba34825cc024
msgid "More Configurations"
msgstr "更多配置文件"

#: ../../getting_started/install/llm/llama/llama_cpp.md:45
#: 5ce014aa175d4a119150cf184098a0c3
msgid ""
"In DB-GPT, the model configuration can be done through `{model "
"name}_{config key}`."
msgstr "In DB-GPT, the model configuration can be done through `{model "
"name}_{config key}`."

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 09d1f9eaf6cc4267b1eb94e4a8e78ba9
msgid "Environment Variable Key"
msgstr "Environment Variable Key"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 6650b3f2495e41588e23d8a2647e7ce3
msgid "default"
msgstr "default"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 8f33e692a7fc41e1a42663535b95a08c
msgid "Prompt Template Name"
msgstr "Prompt Template Name"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 2f8ca1f267694949a2b251d0ef576fd8
msgid "llama_cpp_prompt_template"
msgstr "llama_cpp_prompt_template"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 9b6d310d3f3c454488f76062c8bcda67 c80abec11c3240cf9d0122543e9401c3
#: ed439c9374d74543a8d2a4f88f4db958 f49b3e4281b14f1b8909cd13159d406a
#: ffa824cc22a946ab851124b58cf7441a
msgid "None"
msgstr "None"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: f41bd292bcb9491096c39b36bceb3816
msgid ""
"Prompt template name, now support: `zero_shot, vicuna_v1.1, llama-2"
",baichuan-chat`, If None, the prompt template is automatically determined"
" from model path。"
msgstr "Prompt template 现在可以支持`zero_shot, vicuna_v1.1, llama-2"
",baichuan-chat`, 如果是None, the prompt template可以自动选择模型路径"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 8125df74f1a7429eb0c5ce350edc9315
msgid "llama_cpp_model_path"
msgstr "llama_cpp_model_path"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 28ad71d410fb4757821bf8cc1c232357
msgid "Model path"
msgstr "Model path"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: b7133f115d79477ab416dc63c64d8aa7
msgid "llama_cpp_n_gpu_layers"
msgstr "llama_cpp_n_gpu_layers"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 7cdd27333a464ebeab6bf41bca709816
msgid "1000000000"
msgstr "1000000000"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 667ec37bc6824eba999f39f4e6072999
msgid "Number of layers to offload to the GPU, Set this to 1000000000 to offload"
" all layers to the GPU. If your GPU VRAM is not enough, you can set a low"
" number, eg: `10`"
msgstr "要将层数转移到GPU上,将其设置为1000000000以将所有层转移到GPU上。如果您的GPU VRAM不足,可以设置较低的数字,例如:10。"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: edae6c77475d4958a96d59b4bd165916
msgid "llama_cpp_n_threads"
msgstr "llama_cpp_n_threads"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 8fef1cdd4b0b42faadc717a58b9434a4
msgid ""
"Number of threads to use. If None, the number of threads is automatically"
" determined"
msgstr "要使用的线程数量。如果为None,则线程数量将自动确定。"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: ef5f27f600aa4eaf8080d6dca46ad434
msgid "llama_cpp_n_batch"
msgstr "llama_cpp_n_batch"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: be999101e5f84b83bde0d8e801083c52
msgid "512"
msgstr "512"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: cdc7a51c720a4fe38323eb7a1cfa6bd1
msgid "Maximum number of prompt tokens to batch together when calling llama_eval"
msgstr "在调用llama_eval时,批处理在一起的prompt tokens的最大数量。"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 26d08360c2df4d018b2416cf8e4b3f48
msgid "llama_cpp_n_gqa"
msgstr "llama_cpp_n_gqa"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 4d22c4f8e285445f9c26d181cf350cb7
msgid "Grouped-query attention. Must be 8 for llama-2 70b."
msgstr "对于llama-2 70b模型,Grouped-query attention必须为8。"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 18e5e5ba6ac64e818546742458fb4c84
msgid "llama_cpp_rms_norm_eps"
msgstr "llama_cpp_rms_norm_eps"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: fc85743d71e1428fa2aa0423ff9d9170
msgid "5e-06"
msgstr "5e-06"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: bfc53bec0df94d9e8e9cc7d3333a69c1
msgid "5e-6 is a good value for llama-2 models."
msgstr "对于llama-2模型来说,5e-6是一个不错的值。"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 81fc8f37787d48b0b1da826a2f887886
msgid "llama_cpp_cache_capacity"
msgstr "llama_cpp_cache_capacity"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 125e3285249449cb90bbff063868d4d4
msgid "Maximum cache capacity. Examples: 2000MiB, 2GiB"
msgstr "cache capacity最大值. Examples: 2000MiB, 2GiB"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: 31ed49d49f574d4983738d91bed95fc9
msgid "llama_cpp_prefer_cpu"
msgstr "llama_cpp_prefer_cpu"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: b6ed0d5394874bb38c7468216b8bca88
msgid "False"
msgstr "False"

#: ../../getting_started/install/llm/llama/llama_cpp.md
#: d148d6b130454666a601b65b444c7e51
msgid ""
"If a GPU is available, it will be preferred by default, unless "
"prefer_cpu=False is configured."
msgstr "如果有可用的GPU,默认情况下会优先使用GPU,除非配置了prefer_cpu=False。"

#: ../../getting_started/install/llm/llama/llama_cpp.md:59
#: 75bd31d245b148569bcf9eca6c8bec9c
msgid "GPU Acceleration"
msgstr "GPU 加速"

#: ../../getting_started/install/llm/llama/llama_cpp.md:61
#: 1cf5538f9d39457e901f4120f76d54c1
msgid ""
"GPU acceleration is supported by default. If you encounter any issues, "
"you can uninstall the dependent packages with the following command:"
msgstr "默认情况下支持GPU加速。如果遇到任何问题,您可以使用以下命令卸载相关的依赖包"
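
# The uninstall referred to above, plus the GPU-enabled reinstall documented
# in the llama-cpp-python README (the CMAKE_ARGS switch is that project's
# documented cuBLAS flag at the time of writing):
#   pip uninstall -y llama-cpp-python
#   CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python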
#: ../../getting_started/install/llm/llama/llama_cpp.md:66
#: 08f8eea38e5e44fa80b528b75a379acf
msgid ""
"Then install `llama-cpp-python` according to the instructions in [llama-"
"cpp-python](https://github.com/abetlen/llama-cpp-"
"python/blob/main/README.md)."
msgstr "然后通过指令[llama-"
"cpp-python](https://github.com/abetlen/llama-cpp-"
"python/blob/main/README.md).安装`llama-cpp-python`"

#: ../../getting_started/install/llm/llama/llama_cpp.md:69
#: abea5d4418c54657981640d6227b7be2
msgid "Mac Usage"
msgstr "Mac Usage"

#: ../../getting_started/install/llm/llama/llama_cpp.md:71
#: be9fb5ecbdd5495b98007decccbd0372
msgid ""
"Special attention, if you are using Apple Silicon (M1) Mac, it is highly "
"recommended to install arm64 architecture python support, for example:"
msgstr "特别注意:如果您正在使用苹果芯片(M1)的Mac电脑,强烈建议安装arm64架构的Python支持,例如:"

#: ../../getting_started/install/llm/llama/llama_cpp.md:78
#: efd6dfd4e9e24bf884803143e2b123f2
msgid "Windows Usage"
msgstr "Windows使用"

#: ../../getting_started/install/llm/llama/llama_cpp.md:80
#: 27ecf054aa294a2eaed701c46edf27a7
msgid ""
"The use under the Windows platform has not been rigorously tested and "
"verified, and you are welcome to use it. If you have any problems, you "
"can create an [issue](https://github.com/eosphoros-ai/DB-GPT/issues) or "
"[contact us](https://github.com/eosphoros-ai/DB-GPT/tree/main#contact-"
"information) directly."
msgstr "在Windows平台上的使用尚未经过严格的测试和验证,欢迎您使用。如果您有任何问题,可以创建一个[issue](https://github.com/eosphoros-ai/DB-GPT/issues)或者[contact us](https://github.com/eosphoros-ai/DB-GPT/tree/main#contact-"
"information) directly."
@@ -37,7 +37,7 @@ LLM_MODEL_CONFIG = {
    "vicuna-13b-v1.5": os.path.join(MODEL_PATH, "vicuna-13b-v1.5"),
    "vicuna-7b-v1.5": os.path.join(MODEL_PATH, "vicuna-7b-v1.5"),
    "text2vec": os.path.join(MODEL_PATH, "text2vec-large-chinese"),
    #https://huggingface.co/moka-ai/m3e-large
    # https://huggingface.co/moka-ai/m3e-large
    "m3e-base": os.path.join(MODEL_PATH, "m3e-base"),
    # https://huggingface.co/moka-ai/m3e-base
    "m3e-large": os.path.join(MODEL_PATH, "m3e-large"),