OpenSearch-SQL: Enhancing Text-to-SQL with Dynamic Few-shot and Consistency Alignment

📅 2025-02-19
🤖 AI Summary
To address critical bottlenecks in multi-agent Text-to-SQL—namely instruction-following failures, model hallucination, and incomplete query frameworks—this paper proposes a four-stage pipeline (preprocessing, extraction, generation, refinement) coupled with a consistency alignment mechanism. The authors design an SQL-Like intermediate language and structured Chain-of-Thought (CoT) representations, and introduce a self-taught Query-CoT-SQL dynamic few-shot strategy that allows base LLMs to be invoked directly without post-training, significantly enhancing portability and robustness. Evaluated on the BIRD benchmark, the method achieves execution accuracy (EX) of 69.3% on the development set and 72.28% on the test set, along with an R-VES score of 69.36%—all representing state-of-the-art performance among submissions at the time of evaluation.

📝 Abstract
Although multi-agent collaborative Large Language Models (LLMs) have achieved significant breakthroughs in the Text-to-SQL task, their performance is still constrained by various factors. These factors include the incompleteness of the framework, failure to follow instructions, and model hallucination problems. To address these problems, we propose OpenSearch-SQL, which divides the Text-to-SQL task into four main modules: Preprocessing, Extraction, Generation, and Refinement, along with an Alignment module based on a consistency alignment mechanism. This architecture aligns the inputs and outputs of agents through the Alignment module, reducing failures in instruction following and hallucination. Additionally, we designed an intermediate language called SQL-Like and optimized the structured CoT based on SQL-Like. Meanwhile, we developed a dynamic few-shot strategy in the form of self-taught Query-CoT-SQL. These methods have significantly improved the performance of LLMs in the Text-to-SQL task. In terms of model selection, we directly applied the base LLMs without any post-training, thereby simplifying the task chain and enhancing the framework's portability. Experimental results show that OpenSearch-SQL achieves an execution accuracy (EX) of 69.3% on the BIRD development set, 72.28% on the test set, and a reward-based validity efficiency score (R-VES) of 69.36%, with all three metrics ranking first at the time of submission. These results demonstrate the comprehensive advantages of the proposed method in both effectiveness and efficiency.
Problem

Research questions and friction points this paper is trying to address.

How to improve Text-to-SQL accuracy with a dynamic few-shot strategy.
How to reduce instruction-following failures and model hallucination.
How to enhance LLM performance via a consistency alignment mechanism.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic few-shot strategy
Consistency alignment mechanism
SQL-Like intermediate language
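A dynamic few-shot strategy of the kind described above retrieves, for each new question, the most similar (Query, CoT, SQL) exemplars from a pool and splices them into the prompt. The paper builds its pool via self-taught Query-CoT-SQL generation; in the sketch below a simple bag-of-words cosine similarity stands in for whatever embedding model the framework actually uses, and the exemplar pool is invented for illustration.

```python
# Dynamic few-shot selection: rank a pool of Query-CoT-SQL exemplars by
# similarity to the incoming question and keep the top k. Bag-of-words cosine
# similarity is an assumption standing in for a real embedding model.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_few_shot(question: str, pool: list[dict], k: int = 2) -> list[dict]:
    """Return the k exemplars whose stored query is most similar to the question."""
    q = Counter(question.lower().split())
    ranked = sorted(
        pool,
        key=lambda ex: cosine(q, Counter(ex["query"].lower().split())),
        reverse=True,
    )
    return ranked[:k]

# Invented exemplar pool in Query-CoT-SQL form.
pool = [
    {"query": "how many singers are there",
     "cot": "count rows in singer", "sql": "SELECT COUNT(*) FROM singer"},
    {"query": "average age of all singers",
     "cot": "average the age column", "sql": "SELECT AVG(age) FROM singer"},
    {"query": "list all stadium names",
     "cot": "project name from stadium", "sql": "SELECT name FROM stadium"},
]
shots = select_few_shot("how many stadiums are there", pool, k=2)
print(shots[0]["sql"])  # -> SELECT COUNT(*) FROM singer
```

The retrieved exemplars would then be formatted as Query-CoT-SQL triples in the prompt, so the model sees reasoning chains for questions structurally close to the one it must answer.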
👥 Authors
Xiangjin Xie (Alibaba Cloud, China)
Guangwei Xu (Alibaba Group)
Lingyan Zhao (Alibaba Cloud, China)
Ruijie Guo (Alibaba Cloud, China)