diff --git a/docs/getting_started/install/deploy/deploy.md b/docs/getting_started/install/deploy/deploy.md
index 8f5192697..49f31d732 100644
--- a/docs/getting_started/install/deploy/deploy.md
+++ b/docs/getting_started/install/deploy/deploy.md
@@ -93,10 +93,6 @@ You can configure basic parameters in the .env file, for example setting LLM_MOD
 ([Vicuna-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) based on llama-2 has been released, we recommend you set `LLM_MODEL=vicuna-13b-v1.5` to try this model)
 
 ### 3. Run
-You can refer to this document to obtain the Vicuna weights: [Vicuna](https://github.com/lm-sys/FastChat/blob/main/README.md#model-weights) .
-
-If you have difficulty with this step, you can also directly use the model from [this link](https://huggingface.co/Tribbiani/vicuna-7b) as a replacement.
-
 1.Run db-gpt server