🤖 AI Summary
This work investigates the capability of large language models (LLMs) to determine semantic equivalence between SQL queries, focusing on two definitions, semantic equivalence and relaxed equivalence, to enhance the reliability of semantic-level evaluation in text-to-SQL and related generation tasks. We propose a dual-path prompting framework: (1) *Miniature&Mull*, which performs lightweight execution-based verification via counterexample construction; and (2) *Explain&Compare*, which generates natural-language explanations of logical discrepancies and conducts structured syntactic-semantic comparison. To our knowledge, this is the first systematic evaluation of LLMs' effectiveness and limitations in SQL equivalence judgment without requiring large-scale query execution or human annotations. Experimental results demonstrate that our approach significantly outperforms the conventional execution accuracy metric and achieves reasonable discrimination performance on semantic equivalence tasks. This establishes a novel, interpretable, lightweight, and semantics-aware paradigm for evaluating SQL generation quality.
📝 Abstract
Judging the equivalence of two SQL queries is a fundamental problem with many practical applications in data management and SQL generation (e.g., evaluating the quality of generated SQL queries in the text-to-SQL task). Although the research community has reasoned about SQL equivalence for decades, the problem remains difficult and no complete solution exists. Recently, Large Language Models (LLMs) have shown strong reasoning capability in conversation, question answering, and solving mathematical problems. In this paper, we study whether LLMs can determine the equivalence between SQL queries under two notions of equivalence: semantic equivalence and relaxed equivalence. To assist LLMs in generating high-quality responses, we present two prompting techniques: Miniature&Mull and Explain&Compare. The former evaluates semantic equivalence: it asks the LLM to execute the queries on a simple database instance and then to explore whether a counterexample exists by modifying the database. The latter evaluates relaxed equivalence: it asks the LLM to explain the queries and then to compare whether they contain significant logical differences. Our experiments demonstrate that, with our techniques, LLMs are a promising tool for helping data engineers write semantically equivalent SQL queries, although challenges persist, and that LLM-based equivalence judgment is a better metric for evaluating SQL generation than the popular execution accuracy.
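To make the Miniature&Mull idea concrete, the sketch below (using a hypothetical query pair and schema, not taken from the paper) shows the two steps the prompt asks the LLM to reason through: execute both queries on a tiny database instance, then mutate the instance toward a boundary case to look for a counterexample. In the actual technique the LLM carries out this reasoning in natural language; real execution with `sqlite3` is used here only to illustrate the logic.

```python
import sqlite3

# Hypothetical query pair: they agree on most instances but differ
# exactly when some salary equals 100.
Q1 = "SELECT name FROM emp WHERE salary > 100"
Q2 = "SELECT name FROM emp WHERE salary >= 100"

def run(conn, query):
    """Execute a query and return its result set in a canonical order."""
    return sorted(conn.execute(query).fetchall())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)", [("ann", 50), ("bob", 200)])

# Step 1 (miniature): on this simple instance the queries agree,
# so equivalence cannot be refuted yet.
assert run(conn, Q1) == run(conn, Q2)

# Step 2 (mull): modify the database toward the predicates' boundary.
conn.execute("INSERT INTO emp VALUES ('eve', 100)")

# The modified instance is a counterexample: the queries differ,
# so they are not semantically equivalent.
assert run(conn, Q1) != run(conn, Q2)
```

Note the design choice: results are sorted before comparison, since semantic equivalence is typically judged on result sets rather than row order unless the queries impose an ORDER BY.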