🤖 AI Summary
This work addresses the challenges of automatically generating high-quality SQL comments, which stem from the scarcity of complex, high-quality query-comment pairs and from large language models' insufficient understanding of SQL semantics. To overcome these limitations, the authors propose SQL-Commenter, built upon LLaMA-3.1-8B, which introduces Direct Preference Optimization (DPO) to this task for the first time. Leveraging an expert-validated dataset of complex SQL queries with reference comments, the model undergoes a multi-stage training pipeline comprising continual pre-training, supervised fine-tuning, and DPO to achieve fine-grained modeling of SQL semantics. Evaluated on the Spider and Bird benchmarks, SQL-Commenter outperforms the strongest baseline by 9.29, 4.99, and 13.23 percentage points in BLEU-4, METEOR, and ROUGE-L, respectively. Human evaluations further confirm that the generated comments surpass those of existing methods in correctness, completeness, and naturalness.
📝 Abstract
SQL query comprehension is a significant challenge due to complex syntax, diverse join types, and deep nesting. Many queries lack adequate comments, severely hindering code readability, maintainability, and knowledge transfer. Automated SQL comment generation faces two main challenges: limited datasets that inadequately represent complex real-world queries, and the insufficient understanding of SQL-specific semantics by large language models (LLMs). Our empirical analysis shows that even after continual pre-training and supervised fine-tuning, LLMs struggle with complex SQL semantics, yielding inaccurate comments. To address this, we propose SQL-Commenter, an advanced method based on LLaMA-3.1-8B. We first construct a comprehensive dataset of complex SQL queries with expert-verified comments. Next, we perform continual pre-training on a large SQL corpus to enhance the LLM's syntactic and semantic understanding, followed by supervised fine-tuning. Finally, we apply Direct Preference Optimization (DPO) using human feedback. SQL-Commenter utilizes a preference-based loss function that favors preferred outputs, enhancing fine-grained semantic learning and context-dependent quality assessment. Evaluated on the Spider and Bird benchmarks, SQL-Commenter significantly outperforms state-of-the-art baselines. On average, it surpasses the strongest baseline (Qwen3-14B) by 9.29, 4.99, and 13.23 percentage points on BLEU-4, METEOR, and ROUGE-L, respectively. Moreover, human evaluation demonstrates the superior quality of comments generated by SQL-Commenter in terms of correctness, completeness, and naturalness.
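The preference-based loss the abstract refers to is the standard DPO objective: for each preference pair, the model is pushed to increase the policy's log-probability margin on the preferred comment relative to a frozen reference model. A minimal sketch of that per-pair loss is below; the function name, the sequence-level log-probability inputs, and the `beta=0.1` default are illustrative conventions from the DPO literature, not details taken from this paper.

```python
import math

def dpo_loss(policy_logp_chosen: float, policy_logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair of generated SQL comments.

    Each argument is the total log-probability of a comment sequence
    under the trainable policy or the frozen reference model.
    Loss = -log sigmoid(beta * (chosen margin - rejected margin)).
    """
    chosen_ratio = policy_logp_chosen - ref_logp_chosen      # how much the policy upweights the preferred comment
    rejected_ratio = policy_logp_rejected - ref_logp_rejected  # how much it upweights the dispreferred one
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(margin)) computed stably as log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))
```

When policy and reference agree (zero margin) the loss is log 2; as the policy learns to prefer the expert-chosen comment, the margin grows and the loss decays toward zero. In practice this would be averaged over a batch and computed with autograd tensors rather than floats.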