SQL-Commenter: Aligning Large Language Models for SQL Comment Generation with Direct Preference Optimization

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of automatically generating high-quality SQL comments, which stems from the scarcity of complex, high-quality query-comment pairs and from large language models' insufficient understanding of SQL semantics. To overcome these limitations, the authors propose SQL-Commenter, built upon LLaMA-3.1-8B, which introduces Direct Preference Optimization (DPO) to this task for the first time. Leveraging an expert-verified dataset of complex SQL queries with reference comments, the model undergoes a multi-stage training pipeline comprising continual pre-training, supervised fine-tuning, and DPO to achieve fine-grained modeling of SQL semantics. Evaluated on the Spider and Bird benchmarks, SQL-Commenter outperforms the strongest baseline (Qwen3-14B) by 9.29, 4.99, and 13.23 percentage points in BLEU-4, METEOR, and ROUGE-L, respectively. Human evaluations further confirm that the generated comments significantly surpass existing methods in correctness, completeness, and naturalness.
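The summary does not spell out the training objective; assuming the standard DPO formulation (Rafailov et al., 2023), the preference loss referred to here is:

$$
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

where $x$ is the SQL query, $y_w$ and $y_l$ are the preferred and dispreferred comments, $\pi_{\mathrm{ref}}$ is the frozen model from the supervised fine-tuning stage, $\beta$ is a temperature, and $\sigma$ is the logistic function.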

📝 Abstract
SQL query comprehension is a significant challenge due to complex syntax, diverse join types, and deep nesting. Many queries lack adequate comments, severely hindering code readability, maintainability, and knowledge transfer. Automated SQL comment generation faces two main challenges: limited datasets that inadequately represent complex real-world queries, and Large Language Models' (LLMs) insufficient understanding of SQL-specific semantics. Our empirical analysis shows that even after continual pre-training and supervised fine-tuning, LLMs struggle with complex SQL semantics, yielding inaccurate comments. To address this, we propose SQL-Commenter, an advanced method based on LLaMA-3.1-8B. We first construct a comprehensive dataset of complex SQL queries with expert-verified comments. Next, we perform continual pre-training on a large SQL corpus to enhance the LLM's syntax and semantic understanding, followed by supervised fine-tuning. Finally, we introduce Direct Preference Optimization (DPO) using human feedback. SQL-Commenter utilizes a preference-based loss function to favor preferred outputs, enhancing fine-grained semantic learning and context-dependent quality assessment. Evaluated on the Spider and Bird benchmarks, SQL-Commenter significantly outperforms state-of-the-art baselines. On average, it surpasses the strongest baseline (Qwen3-14B) by 9.29, 4.99, and 13.23 percentage points on BLEU-4, METEOR, and ROUGE-L, respectively. Moreover, human evaluation demonstrates the superior quality of comments generated by SQL-Commenter in terms of correctness, completeness, and naturalness.
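As a concrete illustration of the preference-based loss the abstract describes, below is a minimal PyTorch sketch of the standard DPO objective shown above. It assumes per-sequence log-probabilities have already been computed; the function and variable names are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss (Rafailov et al., 2023), sketched for this task.

    Each tensor holds the summed token log-probabilities of a full SQL
    comment (preferred or dispreferred) under either the trainable policy
    or the frozen reference model from the SFT stage.
    """
    # Implicit reward: how much more likely the policy makes each comment
    # relative to the reference model, scaled by beta.
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss on the reward margin: pushes the policy to rank the
    # preferred comment above the dispreferred one.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```

The frozen reference model keeps the policy from drifting far from its SFT checkpoint, while the logistic margin favors preferred outputs, matching the paper's description of fine-grained preference learning.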
Problem

Research questions and friction points this paper is trying to address.

SQL comment generation
Large Language Models
SQL semantics
code readability
automated code documentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

SQL comment generation
Direct Preference Optimization
Large Language Models
continual pre-training
expert-verified dataset
Lei Yu
Institute of Software, Chinese Academy of Sciences (ISCAS)
Large Language Models · Code Comprehension · Code Generation · Graph Neural Network
Peng Wang
Institute of Software, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China
Jingyuan Zhang
Institute of Software, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China
Xin Wang
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
Biomedical Engineering
Jia Xu
PhD candidate at Northeastern University, Shenyang, China
Data management
Li Yang
Institute of Software, Chinese Academy of Sciences
Software Engineering · Artificial Intelligence
Changzhi Deng
Institute of Software, Chinese Academy of Sciences, Beijing, China
Jiajia Ma
Institute of Software, Chinese Academy of Sciences, Beijing, China
Fengjun Zhang
Institute of Software, Chinese Academy of Sciences, Beijing, China