Semantic Captioning: Benchmark Dataset and Graph-Aware Few-Shot In-Context Learning for SQL2Text

📅 2025-01-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses SQL-to-natural-language translation (SQL2Text), also termed semantic captioning, to improve the interpretability and security of database queries. To address the scarcity of high-quality benchmarks, the authors introduce the first dedicated SQL2Text dataset, constructed by repurposing existing Text-to-SQL data. They propose a graph-aware few-shot in-context learning (ICL) method that treats the abstract syntax tree (AST) of a SQL query as a structural graph to guide example selection, improving generalization, especially for lightweight LLMs. The approach combines AST-based representation, iterative prompt generation with GPT-4o to augment utterances, and BLEU-based evaluation. Experiments show that the AST-guided ICL strategy improves BLEU by up to 39% over random example selection, enhancing both the accuracy and efficiency of natural-language explanations produced by resource-constrained LLMs.
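The core idea of selecting ICL examples by structural similarity can be illustrated with a toy sketch. Note the assumptions: the paper builds a real AST graph from the SQL, whereas this sketch approximates structure with a set of SQL keywords and a Jaccard score; the names `structure_signature` and `select_icl_examples` are hypothetical and not from the paper.

```python
import re

# Structural keywords used as a crude stand-in for SQL AST node types.
# (Assumption for illustration only; the paper derives a proper AST graph.)
NODE_KEYWORDS = {"select", "from", "where", "join", "group", "order", "having",
                 "limit", "union", "count", "avg", "sum", "max", "min", "distinct"}

def structure_signature(sql: str) -> frozenset:
    """Reduce a query to the set of structural keywords it contains."""
    tokens = re.findall(r"[A-Za-z_]+", sql.lower())
    return frozenset(t for t in tokens if t in NODE_KEYWORDS)

def jaccard(a: frozenset, b: frozenset) -> float:
    """Set-overlap similarity between two structure signatures."""
    return len(a & b) / len(a | b) if a | b else 0.0

def select_icl_examples(target_sql: str, pool: list, k: int = 2) -> list:
    """Pick the k pool queries structurally closest to the target query."""
    sig = structure_signature(target_sql)
    ranked = sorted(pool,
                    key=lambda ex: jaccard(sig, structure_signature(ex)),
                    reverse=True)
    return ranked[:k]

pool = [
    "SELECT name FROM users WHERE age > 30",
    "SELECT COUNT(*) FROM orders GROUP BY customer_id",
    "SELECT a.id FROM a JOIN b ON a.id = b.id ORDER BY a.id",
]
target = "SELECT COUNT(*) FROM logins GROUP BY user_id"
print(select_icl_examples(target, pool, k=1))
# The aggregate GROUP BY query is chosen over the filter and join queries.
```

In the paper's setting, the selected examples (with their natural-language captions) are placed in the prompt so that the LLM sees demonstrations structurally similar to the query it must caption; a real implementation would replace the keyword signature with AST-derived graph features.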

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable performance in various NLP tasks, including semantic parsing, which translates natural language into formal code representations. However, the reverse process, translating code into natural language, termed semantic captioning, has received less attention. This task is becoming increasingly important as LLMs are integrated into platforms for code generation, security analysis, and educational purposes. In this paper, we focus on captioning SQL queries (SQL2Text) to address the critical need for understanding and explaining SQL queries in an era where LLM-generated code poses potential security risks. We repurpose Text2SQL datasets for SQL2Text by introducing an iterative ICL prompt using GPT-4o to generate multiple additional utterances, which enhances the robustness of the datasets for the reverse task. We conduct our experiments using in-context learning (ICL) based on different sample selection methods, emphasizing smaller, more computationally efficient LLMs. Our findings demonstrate that leveraging the inherent graph properties of SQL for ICL sample selection significantly outperforms random selection by up to 39% on BLEU score and provides better results than alternative methods. The dataset and code are published at https://github.com/aliwister/ast-icl.
Problem

Research questions and friction points this paper is trying to address.

SQL
Natural Language
Code Safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPT-4o Technique
SQL to Natural Language
Efficient Model Optimization