doc:update deploy doc

This commit is contained in:
aries_ckt
2023-11-03 10:24:49 +08:00
parent 606d384a55
commit 9cc6386301
6 changed files with 682 additions and 178 deletions


@@ -30,10 +30,24 @@ extensions = [
"myst_nb",
"sphinx_copybutton",
"sphinx_panels",
"sphinx_tabs.tabs",
"IPython.sphinxext.ipython_console_highlighting",
'sphinx.ext.autosectionlabel'
]
source_suffix = [".ipynb", ".html", ".md", ".rst"]
myst_enable_extensions = [
"dollarmath",
"amsmath",
"deflist",
"html_admonition",
"html_image",
"colon_fence",
"smartquotes",
"replacements",
]
# autodoc_pydantic_model_show_json = False
# autodoc_pydantic_field_list_validators = False
# autodoc_pydantic_config_members = False
@@ -56,5 +70,7 @@ gettext_uuid = True
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = "furo"
html_theme = "sphinx_book_theme"
html_static_path = ["_static"]


@@ -18,7 +18,7 @@ DB-GPT product is a Web application that you can chat database, chat knowledge,
:name: deploy
:hidden:
./install/deploy/deploy.md
./install/deploy.rst
./install/docker/docker.md
./install/docker_compose/docker_compose.md
./install/cluster/cluster.rst


@@ -0,0 +1,438 @@
.. _installation:

Installation From Source
========================
To get started, install DB-GPT with the following steps.
DB-GPT can be deployed both on servers with low hardware requirements and on high-end servers.
You can install DB-GPT using a third-party LLM REST API service such as OpenAI or Azure, or you can deploy a local LLM service by downloading an LLM model.
1. Preparation
-----------------
**Download DB-GPT**
.. code-block:: shell
git clone https://github.com/eosphoros-ai/DB-GPT.git
**Install Miniconda**
We use SQLite as the default database, so no database installation is needed. If you choose to connect to another database, you can follow our tutorial for installation and configuration.
For the entire installation process of DB-GPT, we use a miniconda3 virtual environment. Create a virtual environment and install the Python dependencies.
See `How to install Miniconda <https://docs.conda.io/en/latest/miniconda.html>`_.
.. code-block:: shell
# Requires Python >= 3.10
conda create -n dbgpt_env python=3.10
conda activate dbgpt_env
# this will take a few minutes
pip install -e ".[default]"
Then create your ``.env`` configuration file from the template:
.. code-block:: shell
cp .env.template .env
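The ``.env`` file holds all runtime configuration. As an illustration, if you later switch from the default SQLite to MySQL, a sketch of the relevant ``.env`` keys (key names taken from the environment reference later in this commit; the values here are placeholders, not recommendations) might look like:

```shell
# Switch the local metadata database from the default SQLite to MySQL
LOCAL_DB_TYPE=mysql
LOCAL_DB_USER=root
LOCAL_DB_PASSWORD={your-mysql-password}
LOCAL_DB_HOST=127.0.0.1
LOCAL_DB_PORT=3306
```

For the default SQLite setup, no database-related changes are needed at all.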
2. Deploy LLM Service
---------------------
DB-GPT can be deployed both on servers with low hardware requirements and on high-end servers.
If your hardware is limited, you can install DB-GPT using a third-party LLM REST API service such as OpenAI, Azure, or Tongyi.
.. tabs::
.. tab:: OpenAI
Download the embedding model
.. code-block:: shell
cd DB-GPT
mkdir models && cd models
# embedding model
git clone https://huggingface.co/GanymedeNil/text2vec-large-chinese
# or
git clone https://huggingface.co/moka-ai/m3e-large
Configure ``LLM_MODEL``, ``PROXY_API_URL``, and ``API_KEY`` in the ``.env`` file
.. code-block:: shell
LLM_MODEL=chatgpt_proxyllm
PROXY_API_KEY={your-openai-sk}
PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions
.. tip::
Make sure your .env configuration is not overwritten
.. tab:: Vicuna
`Vicuna-v1.5 <https://huggingface.co/lmsys/vicuna-13b-v1.5>`_, based on llama-2, has been released; we recommend you set ``LLM_MODEL=vicuna-13b-v1.5`` to try this model.
.. list-table:: vicuna-v1.5 hardware requirements
:widths: 50 50 50
:header-rows: 1
* - Model
- Quantize
- VRAM Size
* - vicuna-7b-v1.5
- 4-bit
- 8 GB
* - vicuna-7b-v1.5
- 8-bit
- 12 GB
* - vicuna-13b-v1.5
- 4-bit
- 12 GB
* - vicuna-13b-v1.5
- 8-bit
- 20 GB
.. note::
Make sure you have installed git-lfs, and run ``git lfs install`` once after installing the package.
CentOS: ``yum install git-lfs``
Ubuntu: ``apt-get install git-lfs``
macOS: ``brew install git-lfs``
.. code-block:: shell
cd DB-GPT
mkdir models && cd models
# embedding model
git clone https://huggingface.co/GanymedeNil/text2vec-large-chinese
# or
git clone https://huggingface.co/moka-ai/m3e-large
# llm model; if you use an OpenAI, Azure, or Tongyi LLM API service, you don't need to download an LLM model
git clone https://huggingface.co/lmsys/vicuna-13b-v1.5
The model files are large and will take a long time to download.
**Configure ``LLM_MODEL`` in the ``.env`` file**
.. code-block:: shell
LLM_MODEL=vicuna-13b-v1.5
.. tab:: Baichuan
.. list-table:: Baichuan hardware requirements
:widths: 50 50 50
:header-rows: 1
* - Model
- Quantize
- VRAM Size
* - baichuan-7b
- 4-bit
- 8 GB
* - baichuan-7b
- 8-bit
- 12 GB
* - baichuan-13b
- 4-bit
- 12 GB
* - baichuan-13b
- 8-bit
- 20 GB
.. note::
Make sure you have installed git-lfs, and run ``git lfs install`` once after installing the package.
CentOS: ``yum install git-lfs``
Ubuntu: ``apt-get install git-lfs``
macOS: ``brew install git-lfs``
.. code-block:: shell
cd DB-GPT
mkdir models && cd models
# embedding model
git clone https://huggingface.co/GanymedeNil/text2vec-large-chinese
# or
git clone https://huggingface.co/moka-ai/m3e-large
#### llm model
git clone https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
# or
git clone https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
The model files are large and will take a long time to download.
**Configure ``LLM_MODEL`` in the ``.env`` file**
Please rename the Baichuan model directory to "baichuan2-13b" or "baichuan2-7b".
.. code-block:: shell
LLM_MODEL=baichuan2-13b
.. tab:: ChatGLM
.. note::
Make sure you have installed git-lfs, and run ``git lfs install`` once after installing the package.
CentOS: ``yum install git-lfs``
Ubuntu: ``apt-get install git-lfs``
macOS: ``brew install git-lfs``
.. code-block:: shell
cd DB-GPT
mkdir models && cd models
# embedding model
git clone https://huggingface.co/GanymedeNil/text2vec-large-chinese
# or
git clone https://huggingface.co/moka-ai/m3e-large
#### llm model
git clone https://huggingface.co/THUDM/chatglm2-6b
The model files are large and will take a long time to download.
**Configure ``LLM_MODEL`` in the ``.env`` file**
Please rename the ChatGLM model directory to "chatglm2-6b".
.. code-block:: shell
LLM_MODEL=chatglm2-6b
.. tab:: Other LLM API
Download the embedding model
.. code-block:: shell
cd DB-GPT
mkdir models && cd models
# embedding model
git clone https://huggingface.co/GanymedeNil/text2vec-large-chinese
# or
git clone https://huggingface.co/moka-ai/m3e-large
.. note::
* OpenAI
* Azure
* Aliyun tongyi
* Baidu wenxin
* Zhipu
* Baichuan
* Bard
Configure ``LLM_MODEL``, ``PROXY_API_URL``, and ``API_KEY`` in the ``.env`` file
.. code-block:: shell
#OpenAI
LLM_MODEL=chatgpt_proxyllm
PROXY_API_KEY={your-openai-sk}
PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions
#Azure
LLM_MODEL=chatgpt_proxyllm
PROXY_API_KEY={your-azure-sk}
PROXY_API_BASE=https://{your domain}.openai.azure.com/
PROXY_API_TYPE=azure
PROXY_SERVER_URL=xxxx
PROXY_API_VERSION=2023-05-15
PROXYLLM_BACKEND=gpt-35-turbo
#Aliyun tongyi
LLM_MODEL=tongyi_proxyllm
TONGYI_PROXY_API_KEY={your-tongyi-sk}
PROXY_SERVER_URL={your_service_url}
## Baidu wenxin
LLM_MODEL=wenxin_proxyllm
PROXY_SERVER_URL={your_service_url}
WEN_XIN_MODEL_VERSION={version}
WEN_XIN_API_KEY={your-wenxin-sk}
WEN_XIN_SECRET_KEY={your-wenxin-sct}
## Zhipu
LLM_MODEL=zhipu_proxyllm
PROXY_SERVER_URL={your_service_url}
ZHIPU_MODEL_VERSION={version}
ZHIPU_PROXY_API_KEY={your-zhipu-sk}
## Baichuan
LLM_MODEL=bc_proxyllm
PROXY_SERVER_URL={your_service_url}
BAICHUN_MODEL_NAME={version}
BAICHUAN_PROXY_API_KEY={your-baichuan-sk}
BAICHUAN_PROXY_API_SECRET={your-baichuan-sct}
## bard
LLM_MODEL=bard_proxyllm
PROXY_SERVER_URL={your_service_url}
# Get the token from https://bard.google.com/ (F12 -> Application -> __Secure-1PSID)
BARD_PROXY_API_KEY={your-bard-token}
.. tip::
Make sure your .env configuration is not overwritten
.. tab:: llama.cpp
DB-GPT already supports `llama.cpp <https://github.com/ggerganov/llama.cpp>`_ via `llama-cpp-python <https://github.com/abetlen/llama-cpp-python>`_.
**Preparing Model Files**
To use llama.cpp, you need to prepare a model file in gguf format. There are two common ways to obtain one; you can choose either:
**1. Download a pre-converted model file.**
Suppose you want to use `Vicuna 13B v1.5 <https://huggingface.co/lmsys/vicuna-13b-v1.5>`_. You can download the already-converted file from `TheBloke/vicuna-13B-v1.5-GGUF <https://huggingface.co/TheBloke/vicuna-13B-v1.5-GGUF>`_; only one file is needed. Download it to the ``models`` directory and rename it to ``ggml-model-q4_0.gguf``.
.. code-block::
wget https://huggingface.co/TheBloke/vicuna-13B-v1.5-GGUF/resolve/main/vicuna-13b-v1.5.Q4_K_M.gguf -O models/ggml-model-q4_0.gguf
**2. Convert It Yourself**
You can convert the model file yourself following the instructions in `llama.cpp#prepare-data--run <https://github.com/ggerganov/llama.cpp#prepare-data--run>`_, then put the converted file in the ``models`` directory and rename it to ``ggml-model-q4_0.gguf``.
**Installing Dependencies**
llama.cpp is an optional dependency in DB-GPT, and you can manually install it using the following command:
.. code-block::
pip install -e ".[llama_cpp]"
**Modifying the Configuration File**
Next, you can directly modify your ``.env`` file to enable llama.cpp.
.. code-block::
LLM_MODEL=llama-cpp
llama_cpp_prompt_template=vicuna_v1.1
Then you can run it according to `Run <https://db-gpt.readthedocs.io/en/latest/getting_started/install/deploy/deploy.html#run>`_.
**More Configurations**
In DB-GPT, model configuration is done through keys of the form ``{model name}_{config key}``.
.. list-table:: More Configurations
:widths: 50 50 50
:header-rows: 1
* - Environment Variable Key
- Default
- Description
* - llama_cpp_prompt_template
- None
- Prompt template name. Now supports: zero_shot, vicuna_v1.1, alpaca, llama-2, baichuan-chat, internlm-chat. If None, the prompt template is automatically determined from the model path.
* - llama_cpp_model_path
- None
- Model path
* - llama_cpp_n_gpu_layers
- 1000000000
- Number of layers to offload to the GPU. Set this to 1000000000 to offload all layers. If your GPU VRAM is not enough, set a lower number, e.g. 10.
* - llama_cpp_n_threads
- None
- Number of threads to use. If None, the number of threads is automatically determined
* - llama_cpp_n_batch
- 512
- Maximum number of prompt tokens to batch together when calling llama_eval
* - llama_cpp_n_gqa
- None
- Grouped-query attention. Must be 8 for llama-2 70b.
* - llama_cpp_rms_norm_eps
- 5e-06
- 5e-6 is a good value for llama-2 models.
* - llama_cpp_cache_capacity
- None
- Maximum cache capacity. Examples: 2000MiB, 2GiB
* - llama_cpp_prefer_cpu
- False
- If a GPU is available, it will be preferred by default unless prefer_cpu=True is configured.
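Putting a few of the keys above together, a sketch of a ``.env`` fragment for llama.cpp (the path and numbers below are illustrative placeholders, not recommendations; tune them for your hardware):

```shell
LLM_MODEL=llama-cpp
llama_cpp_prompt_template=vicuna_v1.1
# Illustrative values only
llama_cpp_model_path=models/ggml-model-q4_0.gguf
llama_cpp_n_gpu_layers=10
llama_cpp_cache_capacity=2GiB
```

Any key you omit keeps the default listed in the table above.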
.. tab:: vllm
vLLM is a fast and easy-to-use library for LLM inference and serving.
**Running vLLM**
**1. Installing Dependencies**
vLLM is an optional dependency in DB-GPT, and you can manually install it using the following command:
.. code-block::
pip install -e ".[vllm]"
**2. Modifying the Configuration File**
Next, you can directly modify your .env file to enable vllm.
.. code-block::
LLM_MODEL=vicuna-13b-v1.5
MODEL_TYPE=vllm
3. Prepare SQL example (Optional)
---------------------------------
**(Optional) Load examples into SQLite**
.. code-block:: shell
bash ./scripts/examples/load_examples.sh
On the Windows platform:
.. code-block:: shell
.\scripts\examples\load_examples.bat
4. Run the DB-GPT server
------------------------
.. code-block:: shell
python pilot/server/dbgpt_server.py
**Open http://localhost:5000 with your browser to see the product.**


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 👏👏 0.3.5\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-08-17 13:07+0800\n"
"POT-Creation-Date: 2023-11-02 10:10+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -20,290 +20,292 @@ msgstr ""
"Generated-By: Babel 2.12.1\n"
#: ../../getting_started/install/environment/environment.md:1
#: be341d16f7b24bf4ad123ab78a6d855a
#: 28d2f84fc8884e78afad8118cd59c654
#, fuzzy
msgid "Environment Parameter"
msgstr "环境变量说明"
#: ../../getting_started/install/environment/environment.md:4
#: 46eddb27c90f41548ea9a724bbcebd37
#: c83fbb5e1aa643cdb09fffe7f3d1a3c5
msgid "LLM MODEL Config"
msgstr "模型配置"
#: ../../getting_started/install/environment/environment.md:5
#: 7deaa85df4a04fb098f5994547a8724f
#: eb675965ae57407e8d8bf90fed8e9e2a
msgid "LLM Model Name, see /pilot/configs/model_config.LLM_MODEL_CONFIG"
msgstr "LLM Model Name, see /pilot/configs/model_config.LLM_MODEL_CONFIG"
#: ../../getting_started/install/environment/environment.md:6
#: 3902801c546547b3a4009df681ef7d52
#: 5d28d35126d849ea9b0d963fd1ba8699
msgid "LLM_MODEL=vicuna-13b"
msgstr "LLM_MODEL=vicuna-13b"
#: ../../getting_started/install/environment/environment.md:8
#: 84b0fdbfa1544ec28751e9b69b00cc02
#: 01955b2d0fbe4d94939ebf2cbb380bdd
msgid "MODEL_SERVER_ADDRESS"
msgstr "MODEL_SERVER_ADDRESS"
#: ../../getting_started/install/environment/environment.md:9
#: 0b430bfab77d405989470d00ca3f6fe0
#: 4eaaa9ab59854c0386b28b3111c82784
msgid "MODEL_SERVER=http://127.0.0.1:8000 LIMIT_MODEL_CONCURRENCY"
msgstr "MODEL_SERVER=http://127.0.0.1:8000 LIMIT_MODEL_CONCURRENCY"
#: ../../getting_started/install/environment/environment.md:12
#: b477a25586c546729a93fb6785b7b2ec
#: 5c2dd05e16834443b7451c2541b59757
msgid "LIMIT_MODEL_CONCURRENCY=5"
msgstr "LIMIT_MODEL_CONCURRENCY=5"
#: ../../getting_started/install/environment/environment.md:14
#: 1d6ea800af384fff9c265610f71cc94e
#: 7707836c2fb04e7da13d2d59b5f9566f
msgid "MAX_POSITION_EMBEDDINGS"
msgstr "MAX_POSITION_EMBEDDINGS"
#: ../../getting_started/install/environment/environment.md:16
#: 388e758ce4ea4692a4c34294cebce7f2
#: ee24a7d3d8384e61b715ef3bd362b965
msgid "MAX_POSITION_EMBEDDINGS=4096"
msgstr "MAX_POSITION_EMBEDDINGS=4096"
#: ../../getting_started/install/environment/environment.md:18
#: 16a307dce1294ceba892ff93ae4e81c0
#: 90b51aa4e46b4d1298c672e0052c2f68
msgid "QUANTIZE_QLORA"
msgstr "QUANTIZE_QLORA"
#: ../../getting_started/install/environment/environment.md:20
#: 93ceb2b2fcd5454b82eefb0ae8c7ae77
#: 7de7a8eb431e4973ae00f68ca0686281
msgid "QUANTIZE_QLORA=True"
msgstr "QUANTIZE_QLORA=True"
#: ../../getting_started/install/environment/environment.md:22
#: 15ffa35d023a4530b02a85ee6168dd4b
#: e331ca016a474f4aa4e9182165a2693a
msgid "QUANTIZE_8bit"
msgstr "QUANTIZE_8bit"
#: ../../getting_started/install/environment/environment.md:24
#: 81df248ac5cb4ab0b13a711505f6a177
#: 519ccce5a0884778be2719c437a17bd4
msgid "QUANTIZE_8bit=True"
msgstr "QUANTIZE_8bit=True"
#: ../../getting_started/install/environment/environment.md:27
#: 15cc7b7d41ad44f0891c1189709f00f1
#: 1c0586d070f046de8d0f9f94a6b508b4
msgid "LLM PROXY Settings"
msgstr "LLM PROXY Settings"
#: ../../getting_started/install/environment/environment.md:28
#: e6c1115a39404f11b193a1593bc51a22
#: c208c3f4b13f4b39962de814e5be6ab9
msgid "OPENAI Key"
msgstr "OPENAI Key"
#: ../../getting_started/install/environment/environment.md:30
#: 8157e0a831fe4506a426822b7565e4f6
#: 9228bbee2faa4467b1d24f1125faaac8
msgid "PROXY_API_KEY={your-openai-sk}"
msgstr "PROXY_API_KEY={your-openai-sk}"
#: ../../getting_started/install/environment/environment.md:31
#: 89b34d00bdb64e738bd9bc8c086b1f02
#: 759ae581883348019c1ba79e8954728a
msgid "PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions"
msgstr "PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions"
#: ../../getting_started/install/environment/environment.md:33
#: 7a97df730aeb484daf19c8172e61a290
#: 83f3952917d34aab80bd34119f7d1e20
msgid "from https://bard.google.com/ f12-> application-> __Secure-1PSID"
msgstr "from https://bard.google.com/ f12-> application-> __Secure-1PSID"
#: ../../getting_started/install/environment/environment.md:35
#: d430ddf726a049c0a9e0a9bfd5a6fe0e
#: 1d70707ca82749bb90b2bed1aee44d62
msgid "BARD_PROXY_API_KEY={your-bard-token}"
msgstr "BARD_PROXY_API_KEY={your-bard-token}"
#: ../../getting_started/install/environment/environment.md:38
#: 23d6b0da3e7042abb55f6181c4a382d2
#: 38a2091fa223493ea23cb9bbb33cf58e
msgid "DATABASE SETTINGS"
msgstr "DATABASE SETTINGS"
#: ../../getting_started/install/environment/environment.md:39
#: dbae0a2d847f41f5be9396a160ef88d0
#: 5134180d7a5945b48b072a1eb92b27ba
msgid "SQLite database (Current default database)"
msgstr "SQLite database (Current default database)"
#: ../../getting_started/install/environment/environment.md:40
#: bdb55b7280c341a981e9d338cce53345
#: 6875e2300e094668a45fa4f2551e0d30
msgid "LOCAL_DB_PATH=data/default_sqlite.db"
msgstr "LOCAL_DB_PATH=data/default_sqlite.db"
#: ../../getting_started/install/environment/environment.md:41
#: 739d67927a9d46b28500deba1917916b
#: 034e8f06f24f44af9d8184563f99b4b3
msgid "LOCAL_DB_TYPE=sqlite # Database Type default:sqlite"
msgstr "LOCAL_DB_TYPE=sqlite # Database Type default:sqlite"
#: ../../getting_started/install/environment/environment.md:43
#: eb4717bce6a6483b86d9780d924c5ff1
#: f688149a97f740269f80b79775236ce9
msgid "MYSQL database"
msgstr "MYSQL database"
#: ../../getting_started/install/environment/environment.md:44
#: 0f4cdf0ff5dd4ff0b397dfa88541a2e1
#: 6db0b305137d45a3aa036e4f2262f460
msgid "LOCAL_DB_TYPE=mysql"
msgstr "LOCAL_DB_TYPE=mysql"
#: ../../getting_started/install/environment/environment.md:45
#: c971ead492c34487bd766300730a9cba
#: b6d662ce8d5f44f0b54a7f6e7c66f5a5
msgid "LOCAL_DB_USER=root"
msgstr "LOCAL_DB_USER=root"
#: ../../getting_started/install/environment/environment.md:46
#: 02828b29ad044eeab890a2f8af0e5907
#: cd7493d61ac9415283640dc6c018d2f4
msgid "LOCAL_DB_PASSWORD=aa12345678"
msgstr "LOCAL_DB_PASSWORD=aa12345678"
#: ../../getting_started/install/environment/environment.md:47
#: 53dc7f15b3934987b1f4c2e2d0b11299
#: 4ea2a622b23f4342a4c2ab7f8d9c4e8d
msgid "LOCAL_DB_HOST=127.0.0.1"
msgstr "LOCAL_DB_HOST=127.0.0.1"
#: ../../getting_started/install/environment/environment.md:48
#: 1ac95fc482934247a118bab8dcebeb57
#: 936db95a0ab246098028f4dbb596cd17
msgid "LOCAL_DB_PORT=3306"
msgstr "LOCAL_DB_PORT=3306"
#: ../../getting_started/install/environment/environment.md:51
#: 34e46aa926844be19c7196759b03af63
#: d9255f25989840ea9c9e7b34f3947c87
msgid "EMBEDDING SETTINGS"
msgstr "EMBEDDING SETTINGS"
#: ../../getting_started/install/environment/environment.md:52
#: 2b5aa08cc995495e85a1f7dc4f97b5d7
#: b09291d32aca43928a981e873476a985
msgid "EMBEDDING MODEL Name, see /pilot/configs/model_config.LLM_MODEL_CONFIG"
msgstr "EMBEDDING模型, 参考see /pilot/configs/model_config.LLM_MODEL_CONFIG"
#: ../../getting_started/install/environment/environment.md:53
#: 0de0ca551ed040248406f848feca541d
#: 63de573b03a54413b997f18a1ccee279
msgid "EMBEDDING_MODEL=text2vec"
msgstr "EMBEDDING_MODEL=text2vec"
#: ../../getting_started/install/environment/environment.md:55
#: 43019fb570904c9981eb68f33e64569c
#: 0ef8defbab544bd0b9475a036f278489
msgid "Embedding Chunk size, default 500"
msgstr "Embedding 切片大小, 默认500"
#: ../../getting_started/install/environment/environment.md:57
#: 7e3f93854873461286e96887e04167aa
#: 33dbc7941d054baa8c6ecfc0bf1ce271
msgid "KNOWLEDGE_CHUNK_SIZE=500"
msgstr "KNOWLEDGE_CHUNK_SIZE=500"
#: ../../getting_started/install/environment/environment.md:59
#: 9504f4a59ae74352a524b7741113e2d6
#: e6ee9f2620ab45ecbc8e9c0642f5ca42
msgid "Embedding Chunk Overlap, default 100"
msgstr "Embedding chunk Overlap, 文本块之间的最大重叠量。保留一些重叠可以保持文本块之间的连续性(例如使用滑动窗口),默认100"
#: ../../getting_started/install/environment/environment.md:60
#: 24e6119c2051479bbd9dba71a9c23dbe
#: fcddf64340a04df4ab95176fc2fc67a6
msgid "KNOWLEDGE_CHUNK_OVERLAP=100"
msgstr "KNOWLEDGE_CHUNK_OVERLAP=100"
#: ../../getting_started/install/environment/environment.md:62
#: 0d180d7f2230442abee901c19526e442
msgid "embeding recall top k,5"
#: 61272200194b4461a921581feb1273da
#, fuzzy
msgid "embedding recall top k,5"
msgstr "embedding 召回topk, 默认5"
#: ../../getting_started/install/environment/environment.md:64
#: a5bb9ab2ba50411cbbe87f7836bfbb6d
#: b433091f055542b1b89ff2d525ac99e4
msgid "KNOWLEDGE_SEARCH_TOP_SIZE=5"
msgstr "KNOWLEDGE_SEARCH_TOP_SIZE=5"
#: ../../getting_started/install/environment/environment.md:66
#: 183b8dd78cba4ae19bd2e08d69d21e0b
msgid "embeding recall max token ,2000"
#: 1db0de41aebd4caa8cc2eaecb4cacd6a
#, fuzzy
msgid "embedding recall max token ,2000"
msgstr "embedding向量召回最大token, 默认2000"
#: ../../getting_started/install/environment/environment.md:68
#: ce0c711febcb44c18ae0fc858c3718d1
#: 81b9d862e58941a4b09680a7520cdabe
msgid "KNOWLEDGE_SEARCH_MAX_TOKEN=5"
msgstr "KNOWLEDGE_SEARCH_MAX_TOKEN=5"
#: ../../getting_started/install/environment/environment.md:71
#: ../../getting_started/install/environment/environment.md:87
#: 4cab1f399cc245b4a1a1976d2c4fc926 ec9cec667a1c4473bf9a796a26e1ce20
#: cac73575d54544778bdee09b18532fd9 f78a509949a64f03aa330f31901e2e7a
msgid "Vector Store SETTINGS"
msgstr "Vector Store SETTINGS"
#: ../../getting_started/install/environment/environment.md:72
#: ../../getting_started/install/environment/environment.md:88
#: 4dd04aadd46948a5b1dcf01fdb0ef074 bab7d512f33e40cf9e10f0da67e699c8
#: 5ebba1cb047b4b09849000244237dfbb 7e9285e91bcb4b2d9413909c0d0a06a7
msgid "Chroma"
msgstr "Chroma"
#: ../../getting_started/install/environment/environment.md:73
#: ../../getting_started/install/environment/environment.md:89
#: 13eec36741b14e028e2d3859a320826e ab3ffbcf9358401993af636ba9ab2e2d
#: 05625cfcc23c4745ae1fa0d94ce5450c 3a8615f1507f4fc49d1adda5100a4edf
msgid "VECTOR_STORE_TYPE=Chroma"
msgstr "VECTOR_STORE_TYPE=Chroma"
#: ../../getting_started/install/environment/environment.md:74
#: ../../getting_started/install/environment/environment.md:90
#: d15b91e2a2884f23a1dd2d54783b0638 d1f856d571b547098bb0c2a18f9f1979
#: 5b559376aea44f159262e6d4b75c7ec1 e954782861404b10b4e893e01cf74452
msgid "MILVUS"
msgstr "MILVUS"
#: ../../getting_started/install/environment/environment.md:75
#: ../../getting_started/install/environment/environment.md:91
#: 1e165f6c934343c7808459cc7a65bc70 985dd60c2b7d4baaa6601a810a6522d7
#: 55ee8199c97a4929aeefd32370f2b92d 8f40c02543ea4a2ca9632dd9e8a08c2e
msgid "VECTOR_STORE_TYPE=Milvus"
msgstr "VECTOR_STORE_TYPE=Milvus"
#: ../../getting_started/install/environment/environment.md:76
#: ../../getting_started/install/environment/environment.md:92
#: a1a53f051cee40ed886346a94babd75a d263e8eaee684935a58f0a4fe61c6f0e
#: 528a01d25720491c8e086bf43a62ad92 ba1386d551d7494a85681a2803081a6f
msgid "MILVUS_URL=127.0.0.1"
msgstr "MILVUS_URL=127.0.0.1"
#: ../../getting_started/install/environment/environment.md:77
#: ../../getting_started/install/environment/environment.md:93
#: 2741a312db1a4c6a8a1c1d62415c5fba d03bbf921ddd4f4bb715fe5610c3d0aa
#: b031950dafcd4d4783c120dc933c4178 c2e9c8cdd41741e3aba01e59a6ef245d
msgid "MILVUS_PORT=19530"
msgstr "MILVUS_PORT=19530"
#: ../../getting_started/install/environment/environment.md:78
#: ../../getting_started/install/environment/environment.md:94
#: d0786490d38c4e4f971cc14f62fe1fc8 e9e0854873dc4c209861ee4eb77d25cd
#: 27b0a64af6434cb2840373e2b38c9bd5 d0e4d79af7954b129ffff7303a1ec3ce
msgid "MILVUS_USERNAME"
msgstr "MILVUS_USERNAME"
#: ../../getting_started/install/environment/environment.md:79
#: ../../getting_started/install/environment/environment.md:95
#: 9a82d07153cc432ebe754b5bc02fde0d a6485c1cfa7d4069a6894c43674c8c2b
#: 27aa1a5b61e64dd6bfe29124e274809e 5c58892498ce4f46a59f54b2887822d4
msgid "MILVUS_PASSWORD"
msgstr "MILVUS_PASSWORD"
#: ../../getting_started/install/environment/environment.md:80
#: ../../getting_started/install/environment/environment.md:96
#: 2f233f32b8ba408a9fbadb21fabb99ec 809b3219dd824485bc2cfc898530d708
#: 009e57d4acc5434da2146f0545911c85 bac8888dcbff47fbb0ea8ae685445aac
msgid "MILVUS_SECURE="
msgstr "MILVUS_SECURE="
#: ../../getting_started/install/environment/environment.md:82
#: ../../getting_started/install/environment/environment.md:98
#: f00603661f2b42e1bd2bca74ad1e3c31 f378e16fdec44c559e34c6929de812e8
#: a6eeb16ab5274045bee88ecc3d93e09e eb341774403d47658b9b7e94c4c16d5c
msgid "WEAVIATE"
msgstr "WEAVIATE"
#: ../../getting_started/install/environment/environment.md:83
#: da2049ebc6874cf0a6b562e0e2fd9ec7
#: fbd97522d8da4824b41b99298fd41069
msgid "VECTOR_STORE_TYPE=Weaviate"
msgstr "VECTOR_STORE_TYPE=Weaviate"
#: ../../getting_started/install/environment/environment.md:84
#: ../../getting_started/install/environment/environment.md:99
#: 25f1246629934289aad7ef01c7304097 c9fe0e413d9a4fc8abf86b3ed99e0581
#: 341785b4abfe42b5af1c2e04497261f4 a81cc2aabc8240f3ac1f674d9350bff4
msgid "WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.network"
msgstr "WEAVIATE_URL=https://kt-region-m8hcy0wc.weaviate.network"
#: ../../getting_started/install/environment/environment.md:102
#: ba7c9e707f6a4cd6b99e52b58da3ab2d
#: 5bb9e5daa36241d499089c1b1910f729
msgid "Multi-GPU Setting"
msgstr "Multi-GPU Setting"
#: ../../getting_started/install/environment/environment.md:103
#: 5ca75fdf2c264b2c844d77f659b4f0b3
#: 30df45b7f1f7423c9f18c6360f0b7600
msgid ""
"See https://developer.nvidia.com/blog/cuda-pro-tip-control-gpu-"
"visibility-cuda_visible_devices/ If CUDA_VISIBLE_DEVICES is not "
@@ -313,49 +315,49 @@ msgstr ""
"cuda_visible_devices/ 如果 CUDA_VISIBLE_DEVICES没有设置, 会使用所有可用的gpu"
#: ../../getting_started/install/environment/environment.md:106
#: de92eb310aff43fbbbf3c5a116c3b2c6
#: 8631ea968dfb4d90a7ae6bdb2acdfdce
msgid "CUDA_VISIBLE_DEVICES=0"
msgstr "CUDA_VISIBLE_DEVICES=0"
#: ../../getting_started/install/environment/environment.md:108
#: d2641df6123a442b8e4444ad5f01a9aa
#: 0010422280dd4fe79326ebceb2a66f0e
msgid ""
"Optionally, you can also specify the gpu ID to use before the starting "
"command"
msgstr "你也可以通过启动命令设置gpu ID"
#: ../../getting_started/install/environment/environment.md:110
#: 76c66179d11a4e5fa369421378609aae
#: 00106f7341304fbd9425721ea8e6a261
msgid "CUDA_VISIBLE_DEVICES=3,4,5,6"
msgstr "CUDA_VISIBLE_DEVICES=3,4,5,6"
#: ../../getting_started/install/environment/environment.md:112
#: 29bd0f01fdf540ad98385ea8473f7647
#: 720aa8b3478744d78e4b10dfeccb50b4
msgid "You can configure the maximum memory used by each GPU."
msgstr "可以设置GPU的最大内存"
#: ../../getting_started/install/environment/environment.md:114
#: 31e5e23838734ba7a2810e2387e6d6a0
#: f9639ac96a244296832c75bbcbdae2af
msgid "MAX_GPU_MEMORY=16Gib"
msgstr "MAX_GPU_MEMORY=16Gib"
#: ../../getting_started/install/environment/environment.md:117
#: 99aa63ab1ae049d9b94536d6a96f3443
#: fc4d955fdb3e4256af5c8f29b042dcd6
msgid "Other Setting"
msgstr "Other Setting"
#: ../../getting_started/install/environment/environment.md:118
#: 3168732183874bffb59a3575d3473d62
#: 66b14a834e884339be2d48392e884933
msgid "Language Settings(influence prompt language)"
msgstr "Language Settings(涉及prompt语言以及知识切片方式)"
#: ../../getting_started/install/environment/environment.md:119
#: 73eb0a96f29b4739bd456faa9cb5033d
#: 5c9f05174eb84edd9e1316cc0721a840
msgid "LANGUAGE=en"
msgstr "LANGUAGE=en"
#: ../../getting_started/install/environment/environment.md:120
#: c6646b78c6cf4d25a13108232f5b2046
#: 7f6d62117d024c51bba9255fa4fcf151
msgid "LANGUAGE=zh"
msgstr "LANGUAGE=zh"


@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.3.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-07-13 15:39+0800\n"
"POT-Creation-Date: 2023-11-02 10:10+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -19,103 +19,84 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"
#: ../../modules/knowledge.rst:2 ../../modules/knowledge.rst:136
#: 3cc8fa6e9fbd4d889603d99424e9529a
#: ../../modules/knowledge.md:1 436b94d3a8374ed18feb5c14893a84e6
msgid "Knowledge"
msgstr "知识"
#: ../../modules/knowledge.rst:4 0465a393d9d541958c39c1d07c885d1f
#: ../../modules/knowledge.md:3 918a3747cbed42d18b8c9c4547e67b14
#, fuzzy
msgid ""
"As the knowledge base is currently the most significant user demand "
"scenario, we natively support the construction and processing of "
"knowledge bases. At the same time, we also provide multiple knowledge "
"base management strategies in this project, such as pdf knowledge,md "
"knowledge, txt knowledge, word knowledge, ppt knowledge:"
"base management strategies in this project, such as:"
msgstr ""
"由于知识库是当前用户需求最显著的场景,我们原生支持知识库的构建和处理。同时,我们还在本项目中提供了多种知识库管理策略,如:pdf,md , "
"txt, word, ppt"
#: ../../modules/knowledge.rst:6 e670cbe14d8e4da88ba935e4120c31e0
msgid ""
"We currently support many document formats: raw text, txt, pdf, md, html,"
" doc, ppt, and url. In the future, we will continue to support more types"
" of knowledge, including audio, video, various databases, and big data "
"sources. Of course, we look forward to your active participation in "
"contributing code."
#: ../../modules/knowledge.md:4 d4d4b5d57918485aafa457bb9fdcf626
msgid "Default built-in knowledge base"
msgstr ""
#: ../../modules/knowledge.rst:9 e0bf601a1a0c458297306db6ff79f931
msgid "**Create your own knowledge repository**"
#: ../../modules/knowledge.md:5 d4d4b5d57918485aafa457bb9fdcf626
msgid "Custom addition of knowledge bases"
msgstr ""
#: ../../modules/knowledge.md:6 984361ce835c4c3492e29e1fb897348a
msgid ""
"Various usage scenarios such as constructing knowledge bases through "
"plugin capabilities and web crawling. Users only need to organize the "
"knowledge documents, and they can use our existing capabilities to build "
"the knowledge base required for the large model."
msgstr ""
#: ../../modules/knowledge.md:9 746e4fbd3212460198be51b90caee2c8
#, fuzzy
msgid "Create your own knowledge repository"
msgstr "创建你自己的知识库"
#: ../../modules/knowledge.rst:11 bb26708135d44615be3c1824668010f6
msgid "1.prepare"
msgstr "准备"
#: ../../modules/knowledge.md:11 1c46b33b0532417c824efbaa3e687c3f
msgid ""
"1.Place personal knowledge files or folders in the pilot/datasets "
"directory."
msgstr ""
#: ../../modules/knowledge.rst:13 c150a0378f3e4625908fa0d8a25860e9
#: ../../modules/knowledge.md:13 3b16f387b5354947a89d6df77bd65bdb
#, fuzzy
msgid ""
"We currently support many document formats: TEXT(raw text), "
"DOCUMENT(.txt, .pdf, .md, .doc, .ppt, .html), and URL."
"We currently support many document formats: txt, pdf, md, html, doc, ppt,"
" and url."
msgstr "当前支持txt, pdf, md, html, doc, ppt, url文档格式"
#: ../../modules/knowledge.rst:15 7f9f02a93d5d4325b3d2d976f4bb28a0
#: ../../modules/knowledge.md:15 09ec337d7da4418db854e58afb6c0980
msgid "before execution:"
msgstr "开始前"
#: ../../modules/knowledge.rst:24 59699a8385e04982a992cf0d71f6dcd5
#, fuzzy
#: ../../modules/knowledge.md:22 c09b3decb018485f8e56830ddc156194
msgid ""
"2.prepare embedding model, you can download from https://huggingface.co/."
" Notice you have installed git-lfs."
"2.Update your .env, set your vector store type, VECTOR_STORE_TYPE=Chroma "
"(now only support Chroma and Milvus, if you set Milvus, please set "
"MILVUS_URL and MILVUS_PORT)"
msgstr ""
"提前准备Embedding Model, 你可以在https://huggingface.co/进行下载注意你需要先安装git-lfs.eg:"
" git clone https://huggingface.co/THUDM/chatglm2-6b"
#: ../../modules/knowledge.rst:27 2be1a17d0b54476b9dea080d244fd747
msgid ""
"eg: git clone https://huggingface.co/sentence-transformers/all-"
"MiniLM-L6-v2"
msgstr "eg: git clone https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2"
#: ../../modules/knowledge.rst:33 d328f6e243624c9488ebd27c9324621b
msgid ""
"3.prepare vector_store instance and vector store config, now we support "
"Chroma, Milvus and Weaviate."
msgstr "提前准备向量数据库环境目前支持Chroma, Milvus and Weaviate向量数据库"
#: ../../modules/knowledge.rst:63 44f97154eff647d399fd30b6f9e3b867
msgid ""
"3.init Url Type EmbeddingEngine api and embedding your document into "
"vector store in your code."
msgstr "初始化 Url类型 EmbeddingEngine api 将url文档embedding向量化到向量数据库 "
#: ../../modules/knowledge.rst:75 e2581b414f0148bca88253c7af9cd591
msgid "If you want to add your source_reader or text_splitter, do this:"
msgstr "如果你想手动添加你自定义的source_reader和text_splitter, 请参考:"
#: ../../modules/knowledge.rst:95 74c110414f924bbfa3d512e45ba2f30f
#, fuzzy
msgid ""
"4.init Document Type EmbeddingEngine api and embedding your document into"
" vector store in your code. Document type can be .txt, .pdf, .md, .doc, "
".ppt."
#: ../../modules/knowledge.md:25 74460ec7709441d5945ce9f745a26d20
msgid "2.Run the knowledge repository initialization command"
msgstr ""
"初始化 文档型类型 EmbeddingEngine api 将文档embedding向量化到向量数据库(文档可以是.txt, .pdf, "
".md, .html, .doc, .ppt)"
#: ../../modules/knowledge.rst:108 0afd40098d5f4dfd9e44fe1d8004da25
msgid ""
"5.init TEXT Type EmbeddingEngine api and embedding your document into "
"vector store in your code."
msgstr "初始化TEXT类型 EmbeddingEngine api 将文档embedding向量化到向量数据库"
#: ../../modules/knowledge.md:31 4498ec4e46ff4e24b45dd855e829bd32
msgid ""
"Optionally, you can run `dbgpt knowledge load --help` command to see more"
" usage."
msgstr ""
#: ../../modules/knowledge.rst:120 a66961bf3efd41fa8ea938129446f5a5
msgid "4.similar search based on your knowledge base. ::"
msgstr "在知识库进行相似性搜索"
#: ../../modules/knowledge.md:33 5048ac3289e540f2a2b5fd0e5ed043f5
msgid ""
"3.Add the knowledge repository in the interface by entering the name of "
"your knowledge repository (if not specified, enter \"default\") so you "
"can use it for Q&A based on your knowledge base."
msgstr ""
#: ../../modules/knowledge.rst:126 b7066f408378450db26770f83fbd2716
#: ../../modules/knowledge.md:35 deeccff20f7f453dad0881b63dae2a18
msgid ""
"Note that the default vector model used is text2vec-large-chinese (which "
"is a large model, so if your personal computer configuration is not "
"enough, it is recommended to use text2vec-base-chinese), so make sure to "
"download the model and place it in the models directory."
msgstr ""
"注意这里默认向量模型是text2vec-large-chinese(模型比较大如果个人电脑配置不够建议采用text2vec-base-"
"chinese),因此确保需要将模型download下来放到models目录中。"
#: ../../modules/knowledge.rst:128 58481d55cab74936b6e84b24c39b1674
#, fuzzy
msgid ""
"`pdf_embedding <./knowledge/pdf/pdf_embedding.html>`_: supported pdf "
"embedding."
msgstr "pdf_embedding <./knowledge/pdf_embedding.html>`_: supported pdf embedding."
#: ../../modules/knowledge.rst:129 fbb013c4f1bc46af910c91292f6690cf
#, fuzzy
msgid ""
"`markdown_embedding <./knowledge/markdown/markdown_embedding.html>`_: "
"supported markdown embedding."
msgstr "pdf_embedding <./knowledge/pdf_embedding.html>`_: supported pdf embedding."
#: ../../modules/knowledge.rst:130 59d45732f4914d16b4e01aee0992edf7
#, fuzzy
msgid ""
"`word_embedding <./knowledge/word/word_embedding.html>`_: supported word "
"embedding."
msgstr "pdf_embedding <./knowledge/pdf_embedding.html>`_: supported pdf embedding."
#: ../../modules/knowledge.rst:131 df0e6f311861423e885b38e020a7c0f0
#, fuzzy
msgid ""
"`url_embedding <./knowledge/url/url_embedding.html>`_: supported url "
"embedding."
msgstr "pdf_embedding <./knowledge/pdf_embedding.html>`_: supported pdf embedding."
#: ../../modules/knowledge.rst:132 7c550c1f5bc34fe9986731fb465e12cd
#, fuzzy
msgid ""
"`ppt_embedding <./knowledge/ppt/ppt_embedding.html>`_: supported ppt "
"embedding."
msgstr "pdf_embedding <./knowledge/pdf_embedding.html>`_: supported pdf embedding."
#: ../../modules/knowledge.rst:133 8648684cb191476faeeb548389f79050
#, fuzzy
msgid ""
"`string_embedding <./knowledge/string/string_embedding.html>`_: supported"
" raw text embedding."
msgstr "pdf_embedding <./knowledge/pdf_embedding.html>`_: supported pdf embedding."
#~ msgid "before execution: python -m spacy download zh_core_web_sm"
#~ msgstr "在执行之前请先执行python -m spacy download zh_core_web_sm"
#~ msgid ""
#~ "2.Update your .env, set your vector "
#~ "store type, VECTOR_STORE_TYPE=Chroma (now "
#~ "only support Chroma and Milvus, if "
#~ "you set Milvus, please set MILVUS_URL "
#~ "and MILVUS_PORT)"
#~ msgstr "2.更新你的.env设置你的向量存储类型VECTOR_STORE_TYPE=Chroma(现在只支持Chroma和Milvus如果你设置了Milvus请设置MILVUS_URL和MILVUS_PORT)"
#~ msgid ""
#~ "We currently support many document "
#~ "formats: raw text, txt, pdf, md, "
#~ "html, doc, ppt, and url. In the"
#~ " future, we will continue to support"
#~ " more types of knowledge, including "
#~ "audio, video, various databases, and big"
#~ " data sources. Of course, we look "
#~ "forward to your active participation in"
#~ " contributing code."
#~ msgstr ""
#~ msgid "1.prepare"
#~ msgstr "准备"
#~ msgid ""
#~ "2.prepare embedding model, you can "
#~ "download from https://huggingface.co/. Notice "
#~ "you have installed git-lfs."
#~ msgstr ""
#~ "提前准备Embedding Model, 你可以在https://huggingface.co/进行下载,注意"
#~ "你需要先安装git-lfs.eg: git clone "
#~ "https://huggingface.co/THUDM/chatglm2-6b"
#~ msgid ""
#~ "eg: git clone https://huggingface.co/sentence-"
#~ "transformers/all-MiniLM-L6-v2"
#~ msgstr ""
#~ "eg: git clone https://huggingface.co/sentence-"
#~ "transformers/all-MiniLM-L6-v2"
#~ msgid ""
#~ "3.prepare vector_store instance and vector "
#~ "store config, now we support Chroma, "
#~ "Milvus and Weaviate."
#~ msgstr "提前准备向量数据库环境目前支持Chroma, Milvus and Weaviate向量数据库"
#~ msgid ""
#~ "3.init Url Type EmbeddingEngine api and"
#~ " embedding your document into vector "
#~ "store in your code."
#~ msgstr "初始化 Url类型 EmbeddingEngine api 将url文档embedding向量化到向量数据库 "
#~ msgid "If you want to add your source_reader or text_splitter, do this:"
#~ msgstr "如果你想手动添加你自定义的source_reader和text_splitter, 请参考:"
#~ msgid ""
#~ "4.init Document Type EmbeddingEngine api "
#~ "and embedding your document into vector"
#~ " store in your code. Document type"
#~ " can be .txt, .pdf, .md, .doc, "
#~ ".ppt."
#~ msgstr ""
#~ "初始化 文档型类型 EmbeddingEngine api "
#~ "将文档embedding向量化到向量数据库(文档可以是.txt, .pdf, .md, .html,"
#~ " .doc, .ppt)"
#~ msgid ""
#~ "5.init TEXT Type EmbeddingEngine api and"
#~ " embedding your document into vector "
#~ "store in your code."
#~ msgstr "初始化TEXT类型 EmbeddingEngine api 将文档embedding向量化到向量数据库"
#~ msgid "4.similar search based on your knowledge base. ::"
#~ msgstr "在知识库进行相似性搜索"
#~ msgid ""
#~ "`pdf_embedding <./knowledge/pdf/pdf_embedding.html>`_: "
#~ "supported pdf embedding."
#~ msgstr ""
#~ "pdf_embedding <./knowledge/pdf_embedding.html>`_: "
#~ "supported pdf embedding."
#~ msgid ""
#~ "`markdown_embedding "
#~ "<./knowledge/markdown/markdown_embedding.html>`_: supported "
#~ "markdown embedding."
#~ msgstr ""
#~ "pdf_embedding <./knowledge/pdf_embedding.html>`_: "
#~ "supported pdf embedding."
#~ msgid ""
#~ "`word_embedding <./knowledge/word/word_embedding.html>`_: "
#~ "supported word embedding."
#~ msgstr ""
#~ "pdf_embedding <./knowledge/pdf_embedding.html>`_: "
#~ "supported pdf embedding."
#~ msgid ""
#~ "`url_embedding <./knowledge/url/url_embedding.html>`_: "
#~ "supported url embedding."
#~ msgstr ""
#~ "pdf_embedding <./knowledge/pdf_embedding.html>`_: "
#~ "supported pdf embedding."
#~ msgid ""
#~ "`ppt_embedding <./knowledge/ppt/ppt_embedding.html>`_: "
#~ "supported ppt embedding."
#~ msgstr ""
#~ "pdf_embedding <./knowledge/pdf_embedding.html>`_: "
#~ "supported pdf embedding."
#~ msgid ""
#~ "`string_embedding <./knowledge/string/string_embedding.html>`_:"
#~ " supported raw text embedding."
#~ msgstr ""
#~ "pdf_embedding <./knowledge/pdf_embedding.html>`_: "
#~ "supported pdf embedding."

View File

@@ -9,9 +9,9 @@ sphinx_book_theme
sphinx_rtd_theme==1.0.0
sphinx-typlog-theme==0.8.0
sphinx-panels
sphinx-tabs==3.4.0
toml
myst_nb
sphinx_copybutton
pydata-sphinx-theme==0.13.1
pydantic-settings
furo