Knowledge
=========

As the knowledge base is currently the most significant user demand scenario, we natively support the construction and processing of knowledge bases. We also provide several knowledge base management strategies in this project, such as pdf knowledge, md knowledge, txt knowledge, word knowledge, and ppt knowledge.

We currently support many document formats: raw text, txt, pdf, md, html, doc, ppt, and url.

**Create your own knowledge repository**

1.prepare

We currently support many document formats: raw text, txt, pdf, md, html, doc, ppt, and url.

Before execution, download the required spaCy model:

::

    python -m spacy download zh_core_web_sm

2.Update your .env file and set your vector store type, for example VECTOR_STORE_TYPE=Chroma
(currently only Chroma and Milvus are supported; if you use Milvus, please also set MILVUS_URL and MILVUS_PORT)
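
For example, a minimal .env sketch (the Milvus host and port below are placeholder values for your own deployment):

::

    # Chroma is the default local vector store
    VECTOR_STORE_TYPE=Chroma

    # or, when using Milvus as the vector store:
    # VECTOR_STORE_TYPE=Milvus
    # MILVUS_URL=127.0.0.1
    # MILVUS_PORT=19530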

3.Init a URL type EmbeddingEngine api and embed your document into the vector store in your code.

::

    # Assumed import path; adjust it if your DB-GPT version differs.
    from pilot.embedding_engine import EmbeddingEngine, KnowledgeType

    url = "https://db-gpt.readthedocs.io/en/latest/getting_started/getting_started.html"
    embedding_model = "text2vec"
    vector_store_config = {
        "vector_store_name": "your_name",  # the name of your knowledge repository
    }
    # fetch the url, split the page into chunks and embed them into the vector store
    embedding_engine = EmbeddingEngine(
        knowledge_source=url,
        knowledge_type=KnowledgeType.URL.value,
        model_name=embedding_model,
        vector_store_config=vector_store_config)
    embedding_engine.knowledge_embedding()

4.Init a Document type EmbeddingEngine api and embed your document into the vector store in your code.
Document type can be .txt, .pdf, .md, .doc, or .ppt.

::

    document_path = "your_path/test.md"  # path to your local document
    embedding_model = "text2vec"
    vector_store_config = {
        "vector_store_name": "your_name",  # the name of your knowledge repository
    }
    # load the document, split it into chunks and embed them into the vector store
    embedding_engine = EmbeddingEngine(
        knowledge_source=document_path,
        knowledge_type=KnowledgeType.DOCUMENT.value,
        model_name=embedding_model,
        vector_store_config=vector_store_config)
    embedding_engine.knowledge_embedding()

5.Init a TEXT type EmbeddingEngine api and embed your document into the vector store in your code.

::

    raw_text = "a long passage"  # any raw text you want to make queryable
    embedding_model = "text2vec"
    vector_store_config = {
        "vector_store_name": "your_name",  # the name of your knowledge repository
    }
    # split the raw text into chunks and embed them into the vector store
    embedding_engine = EmbeddingEngine(
        knowledge_source=raw_text,
        knowledge_type=KnowledgeType.TEXT.value,
        model_name=embedding_model,
        vector_store_config=vector_store_config)
    embedding_engine.knowledge_embedding()

6.Similar search based on your knowledge base.

::

    query = "please introduce the oceanbase"
    topk = 5
    # retrieve the topk chunks most similar to the query from the vector store
    docs = embedding_engine.similar_search(query, topk)
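
A minimal sketch of consuming the results, assuming each returned item follows the LangChain Document interface (a page_content attribute holding the chunk text):

::

    for doc in docs:
        # each hit is a chunk retrieved from your knowledge repository
        print(doc.page_content)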

Note that the default vector model used is text2vec-large-chinese (which is a large model, so if your personal computer's resources are limited, it is recommended to use text2vec-base-chinese). Ensure that you download the model and place it in the models directory.
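
For reference, a models directory laid out for both options might look like this (the directory names are assumed to match the model names):

::

    models/
        text2vec-large-chinese/    # default embedding model
        text2vec-base-chinese/     # smaller alternative for low-resource machines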