commit f6ad6f3943 (parent 82bdc6fe94)

doc: more integration documents
@@ -0,0 +1 @@
# Claude
@@ -0,0 +1 @@
# ClickHouse
@@ -0,0 +1 @@
# DeepSeek
docs/docs/installation/integrations/gitee_llm_install.md (new file)
@@ -0,0 +1 @@
# Gitee
@@ -1,4 +1,4 @@
-# Graph RAG Installation
+# Graph RAG

In this example, we will show how to use the Graph RAG framework in DB-GPT. Implementing RAG with a graph database can, to some extent, alleviate the uncertainty and interpretability issues that come with vector-database retrieval.
@@ -68,41 +68,3 @@ uv run python packages/dbgpt-app/src/dbgpt_app/dbgpt_server.py --config configs/
### Load into Knowledge Graph

When using a graph database as the underlying knowledge-storage platform, a knowledge graph must be built to support archiving and retrieving documents. DB-GPT leverages large language models to build an integrated knowledge graph, while retaining the flexibility to connect to other knowledge-graph and graph-database systems.

We create a knowledge graph with graph community summaries based on `CommunitySummaryKnowledgeGraph`, as sketched below.
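The snippet below is a minimal sketch of that flow, assuming an OpenAI-compatible LLM client and the test file shipped with the repository. The import paths, constructor arguments (`name`, `llm_client`, `model_name`), and the chunk-strategy string follow typical DB-GPT examples and may differ between versions; treat them as assumptions to check against the repository's own examples.

```python
# Unverified sketch: build a community-summary knowledge graph and load a
# markdown document into it. All names below are assumptions based on
# DB-GPT examples, not a pinned API.
from dbgpt.model.proxy import OpenAILLMClient
from dbgpt.rag import ChunkParameters
from dbgpt.rag.assembler import EmbeddingAssembler
from dbgpt.rag.knowledge import KnowledgeFactory
from dbgpt.storage.knowledge_graph.community_summary import (
    CommunitySummaryKnowledgeGraph,
    CommunitySummaryKnowledgeGraphConfig,
)

llm_client = OpenAILLMClient()  # assumes OPENAI_API_KEY is set
graph_store = CommunitySummaryKnowledgeGraph(
    config=CommunitySummaryKnowledgeGraphConfig(
        name="graph_rag_test",   # knowledge-space name (assumed)
        llm_client=llm_client,   # LLM used for triplet extraction and summaries
        model_name="gpt-4o",     # extraction model (assumed)
    )
)

# Chunk the document by markdown header and persist it into the graph.
knowledge = KnowledgeFactory.from_file_path("examples/test_files/graphrag-test.md")
assembler = EmbeddingAssembler.load_from_knowledge(
    knowledge=knowledge,
    chunk_parameters=ChunkParameters(chunk_strategy="CHUNK_BY_MARKDOWN_HEADER"),
    index_store=graph_store,
)
assembler.persist()
```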
### Chat Knowledge via GraphRAG

> Note: The current test data is in Chinese.

Here we demonstrate how to chat with your knowledge through Graph RAG on the web page.
First, create a knowledge base using the `Knowledge Graph` type.

<p align="left">
  <img src={'/img/chat_knowledge/graph_rag/create_knowledge_graph.png'} width="1000px"/>
</p>

Then, upload the documents ([graphrag-test.md](https://github.com/eosphoros-ai/DB-GPT/blob/main/examples/test_files/graphrag-test.md)) and let them be processed automatically (split by markdown header by default).
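To make "split by markdown header" concrete, here is an illustrative, self-contained splitter. It is not DB-GPT's actual implementation, only the idea behind the default chunking: each header starts a new chunk that carries its section body.

```python
# Illustrative only: naive markdown-header chunking.
import re

def split_by_markdown_header(text: str) -> list[str]:
    """Return one chunk per header-delimited section of a markdown document."""
    chunks: list[str] = []
    current: list[str] = []
    for line in text.splitlines():
        # An ATX header (#, ##, ... up to six #) starts a new chunk.
        if re.match(r"^#{1,6}\s", line) and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

doc = "# A\nalpha\n## B\nbeta"
print(split_by_markdown_header(doc))  # ['# A\nalpha', '## B\nbeta']
```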
<p align="left">
  <img src={'/img/chat_knowledge/graph_rag/upload_file.png'} width="1000px"/>
</p>

After indexing, the graph data may look like this.

<p align="left">
  <img src={'/img/chat_knowledge/graph_rag/graph_data.png'} width="1000px"/>
</p>

Now start chatting over the knowledge graph.
<p align="left">
  <img src={'/img/chat_knowledge/graph_rag/graph_rag_chat.png'} width="1000px"/>
</p>
@@ -1,4 +1,4 @@
-# Milvus RAG Installation
+# Milvus RAG

In this example, we will show how to use Milvus as the RAG storage backend in DB-GPT.
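Before wiring Milvus into DB-GPT, it can be worth confirming that the service is reachable. This optional check is not part of DB-GPT; it uses the standalone `pymilvus` client (`pip install pymilvus`), and the URI assumes Milvus's default standalone port.

```python
# Optional connectivity check against a local Milvus instance.
from pymilvus import MilvusClient

client = MilvusClient(uri="http://127.0.0.1:19530")  # default standalone port
print(client.list_collections())  # expect [] on a fresh instance
```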
docs/docs/installation/integrations/oceanbase_rag_install.md (new file)

@@ -0,0 +1,46 @@
# OceanBase Vector RAG

In this example, we will show how to use OceanBase Vector as the RAG storage backend in DB-GPT.
### Install Dependencies

First, install the dependencies required for OceanBase Vector storage (the `storage_obvector` extra):

```bash
uv sync --all-packages --frozen \
--extra "proxy_openai" \
--extra "rag" \
--extra "storage_obvector" \
--extra "dbgpts"
```
### Prepare OceanBase Vector

Prepare an OceanBase Vector database service; see [OceanBase Vector](https://open.oceanbase.com/) for reference.

### OceanBase Vector Configuration

Set the RAG storage variables below in the `configs/dbgpt-proxy-openai.toml` file so DB-GPT knows how to connect to OceanBase Vector.
```toml
[rag.storage]
[rag.storage.vector]
type = "Oceanbase"
uri = "127.0.0.1"
port = "19530"
#username="dbgpt"
#password=19530
```
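As a quick sanity check that the file parses and carries the keys shown above, you can read it back with Python's standard-library TOML parser (3.11+). This validates only the file's shape, not the database connection itself.

```python
# Parse the config and echo the vector-store settings.
import tomllib

with open("configs/dbgpt-proxy-openai.toml", "rb") as f:
    cfg = tomllib.load(f)

vector = cfg["rag"]["storage"]["vector"]
assert vector["type"] == "Oceanbase"
print(f"OceanBase vector store at {vector['uri']}:{vector['port']}")
```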
Then run the following command to start the webserver:

```bash
uv run dbgpt start webserver --config configs/dbgpt-proxy-openai.toml
```

Optionally, you can also use the following command to start the webserver:

```bash
uv run python packages/dbgpt-app/src/dbgpt_app/dbgpt_server.py --config configs/dbgpt-proxy-openai.toml
```
@@ -0,0 +1 @@
# Ollama
@@ -0,0 +1 @@
# SiliconFlow
@@ -81,6 +81,7 @@ uv --version
 defaultValue="openai"
 values={[
   {label: 'OpenAI (proxy)', value: 'openai'},
+  {label: 'DeepSeek (proxy)', value: 'deepseek'},
   {label: 'GLM4 (local)', value: 'glm-4'},
 ]}>
@@ -120,6 +121,53 @@ In the above command, `--config` specifies the configuration file, and `configs/

Optionally, you can also use the following command to start the webserver:

```bash
uv run python packages/dbgpt-app/src/dbgpt_app/dbgpt_server.py --config configs/dbgpt-proxy-openai.toml
```

</TabItem>

<TabItem value="deepseek" label="DeepSeek(proxy)">
```bash
# Use uv to install the dependencies needed for the DeepSeek proxy
# (DeepSeek is served through the OpenAI-compatible proxy extra)
uv sync --all-packages --frozen \
--extra "proxy_openai" \
--extra "rag" \
--extra "storage_chromadb" \
--extra "dbgpts"
```
### Run Webserver

To run DB-GPT with the DeepSeek proxy, you must provide the DeepSeek API key in `configs/dbgpt-proxy-deepseek.toml`.

You can also specify your embedding model in the same configuration file; the default is `BAAI/bge-large-zh-v1.5`. To use a different embedding model, set its `name` and `provider` in the `[[models.embeddings]]` section (the provider can be `hf`).
```toml
# Model Configurations
[models]
[[models.llms]]
# name = "deepseek-chat"
name = "deepseek-reasoner"
provider = "proxy/deepseek"
api_key = "your-deepseek-api-key"

[[models.embeddings]]
name = "BAAI/bge-large-zh-v1.5"
provider = "hf"
# If not provided, the model will be downloaded from the Hugging Face model hub
# Uncomment the following line to specify the model path in the local file system
# path = "the-model-path-in-the-local-file-system"
path = "/data/models/bge-large-zh-v1.5"
```
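If the webserver fails to start, it can help to verify the API key outside DB-GPT first. DeepSeek exposes an OpenAI-compatible endpoint, so the sketch below uses the `openai` SDK directly (`pip install openai`); the base URL and model name follow DeepSeek's public documentation.

```python
# Minimal key check against DeepSeek's OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    api_key="your-deepseek-api-key",      # same key as in the TOML above
    base_url="https://api.deepseek.com",  # DeepSeek's documented endpoint
)
resp = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```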
Then run the following command to start the webserver:

```bash
uv run dbgpt start webserver --config configs/dbgpt-proxy-deepseek.toml
```

In the above command, `--config` specifies the configuration file, and `configs/dbgpt-proxy-deepseek.toml` is the configuration file for the DeepSeek proxy model. You can also use other configuration files or create your own according to your needs.

Optionally, you can also use the following command to start the webserver:

```bash
uv run python packages/dbgpt-app/src/dbgpt_app/dbgpt_server.py --config configs/dbgpt-proxy-deepseek.toml
```

</TabItem>
docs/sidebars.js

@@ -32,7 +32,140 @@ const sidebars = {
      label: "Quickstart",

    },
    {
      type: "category",
      label: "Installation",
      collapsed: false,
      collapsible: false,
      items: [
        // {
        //   type: 'doc',
        //   id: 'installation/sourcecode',
        // },
        {
          // type: 'doc',
          // id: 'installation/integrations',
          type: "category",
          label: "Integrations",
          collapsed: false,
          collapsible: false,
          items: [
            {
              type: "doc",
              id: "installation/integrations"
            },
            {
              type: "category",
              label: "LLM Integrations",
              items: [
                {
                  type: "doc",
                  id: "installation/integrations/deepseek_llm_install"
                },{
                  type: "doc",
                  id: "installation/integrations/ollama_llm_install"
                },{
                  type: "doc",
                  id: "installation/integrations/claude_llm_install"
                },{
                  type: "doc",
                  id: "installation/integrations/siliconflow_llm_install"
                },{
                  type: "doc",
                  id: "installation/integrations/gitee_llm_install"
                },
              ]
            },
            {
              type: "category",
              label: "Datasource Integrations",
              items: [
                {
                  type: "doc",
                  id: "installation/integrations/clickhouse_install"
                },
              ]
            },
            {
              type: "category",
              label: "RAG Integrations",
              items: [
                {
                  type: "doc",
                  id: "installation/integrations/graph_rag_install"
                },
                {
                  type: "doc",
                  id: "installation/integrations/oceanbase_rag_install"
                },
                {
                  type: "doc",
                  id: "installation/integrations/milvus_rag_install"
                }
              ]
            },

          ]
        },
        {
          type: 'doc',
          id: 'installation/docker',
        },
        {
          type: 'doc',
          id: 'installation/docker_compose',
        },
        {
          type: 'category',
          label: 'Model Service Deployment',
          items: [
            {
              type: 'doc',
              id: 'installation/model_service/stand_alone',
            },
            {
              type: 'doc',
              id: 'installation/model_service/cluster',
            },
            {
              type: 'doc',
              id: 'installation/model_service/cluster_ha',
            },
          ],
        },
        {
          type: 'category',
          label: 'Advanced Usage',
          items: [
            {
              type: 'doc',
              id: 'installation/advanced_usage/More_proxyllms',
            },
            {
              type: 'doc',
              id: 'installation/advanced_usage/ollama',
            },
            {
              type: 'doc',
              id: 'installation/advanced_usage/vLLM_inference',
            },
            {
              type: 'doc',
              id: 'installation/advanced_usage/Llamacpp_server',
            },
            {
              type: 'doc',
              id: 'installation/advanced_usage/OpenAI_SDK_call',
            },
          ],
        },
      ],
      link: {
        type: 'generated-index',
        description: 'DB-GPT provides a wealth of installation and deployment options, supporting source code deployment, Docker deployment, cluster deployment and other modes. At the same time, it can also be deployed and installed based on the AutoDL image.',
        slug: "installation",
      },
    },
    {
      type: "category",
      label: "AWEL(Agentic Workflow Expression Language)",
@@ -217,99 +350,6 @@ const sidebars = {
      description: "AWEL (Agentic Workflow Expression Language) is an intelligent agent workflow expression language specifically designed for the development of large-model applications",
    },
  },

  {
    type: "category",
    label: "Installation",
    collapsed: false,
    collapsible: false,
    items: [
      {
        type: 'doc',
        id: 'installation/sourcecode',
      },
      {
        // type: 'doc',
        // id: 'installation/integrations',
        type: "category",
        label: "Integrations",
        collapsed: false,
        collapsible: false,
        items: [
          {
            type: "doc",
            id: "installation/integrations"
          },
          {
            type: "doc",
            id: "installation/graph_rag_install"
          },
          {
            type: "doc",
            id: "installation/milvus_rag_install"
          }
        ]
      },
      {
        type: 'doc',
        id: 'installation/docker',
      },
      {
        type: 'doc',
        id: 'installation/docker_compose',
      },
      {
        type: 'category',
        label: 'Model Service Deployment',
        items: [
          {
            type: 'doc',
            id: 'installation/model_service/stand_alone',
          },
          {
            type: 'doc',
            id: 'installation/model_service/cluster',
          },
          {
            type: 'doc',
            id: 'installation/model_service/cluster_ha',
          },
        ],
      },
      {
        type: 'category',
        label: 'Advanced Usage',
        items: [
          {
            type: 'doc',
            id: 'installation/advanced_usage/More_proxyllms',
          },
          {
            type: 'doc',
            id: 'installation/advanced_usage/ollama',
          },
          {
            type: 'doc',
            id: 'installation/advanced_usage/vLLM_inference',
          },
          {
            type: 'doc',
            id: 'installation/advanced_usage/Llamacpp_server',
          },
          {
            type: 'doc',
            id: 'installation/advanced_usage/OpenAI_SDK_call',
          },
        ],
      },
    ],
    link: {
      type: 'generated-index',
      description: 'DB-GPT provides a wealth of installation and deployment options, supporting source code deployment, Docker deployment, cluster deployment and other modes. At the same time, it can also be deployed and installed based on the AutoDL image.',
      slug: "installation",
    },
  },

  {
    type: "category",
    label: "Application",
@@ -159,8 +159,8 @@ class DBSchemaAssembler(BaseAssembler):
         table_chunks.append(chunk)

         if self._field_vector_store_connector and field_chunks:
-            self._field_vector_store_connector.load_document(field_chunks)
-        return self._table_vector_store_connector.load_document(table_chunks)
+            self._field_vector_store_connector.load_document_with_limit(field_chunks)
+        return self._table_vector_store_connector.load_document_with_limit(table_chunks)

     def _extract_info(self, chunks) -> List[Chunk]:
         """Extract info from chunks."""