From f0de20662b63d49dd7e889b33e5fcf80da0692d5 Mon Sep 17 00:00:00 2001
From: "alan.cl" <1165243776@qq.com>
Date: Wed, 22 Oct 2025 16:48:26 +0800
Subject: [PATCH] docs: update benchmark doc

---
 docs/docs/modules/benchmark.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/modules/benchmark.md b/docs/docs/modules/benchmark.md
index 2aec7d513..a367d3062 100644
--- a/docs/docs/modules/benchmark.md
+++ b/docs/docs/modules/benchmark.md
@@ -2,7 +2,7 @@
 
 For Text2SQL tasks, we provide a dataset benchmarking capability. It evaluates different large language models (LLMs) and agents on Text2SQL, covering syntax correctness, semantic accuracy, and execution validity. It outputs metrics such as executability rate and accuracy rate, and provides an evaluation report.
 
-1. DB-GPT open-source Text2SQL benchmark dataset: [Falcon](https://github.com/eosphoros-ai/Falcon)
+1. DB-GPT open-source Text2SQL benchmark dataset repository: [Falcon](https://github.com/eosphoros-ai/Falcon)
 2. DB-GPT supports LLM evaluation based on the Falcon benchmark dataset
 
 # Introduction