Evaluating Generated Commit Messages with Large Language Models

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional reference-based automatic metrics (e.g., BLEU, ROUGE-L) suffer from weak semantic modeling capability and low correlation with human judgments in commit message quality assessment. To address this, we propose the first large language model (LLM)-based automated evaluation framework for commit messages. Our method integrates chain-of-thought reasoning, few-shot prompting, and multi-strategy prompt engineering—requiring no fine-tuning—to achieve fine-grained semantic understanding. Experiments demonstrate that our approach significantly outperforms conventional metrics in accuracy, consistency, and robustness. It achieves strong agreement with human evaluations across key dimensions—including functional completeness, conciseness, and readability—with Pearson correlation coefficients exceeding 0.85. Moreover, the framework exhibits high reproducibility and fairness. By eliminating the need for training and leveraging LLMs’ inherent linguistic capabilities, our method establishes an efficient, scalable, and semantically grounded paradigm for commit message assessment.

📝 Abstract
Commit messages are essential in software development as they serve to document and explain code changes. Yet, their quality often falls short in practice, with studies showing significant proportions of empty or inadequate messages. While automated commit message generation has advanced significantly, particularly with Large Language Models (LLMs), the evaluation of generated messages remains challenging. Traditional reference-based automatic metrics like BLEU, ROUGE-L, and METEOR have notable limitations in assessing commit message quality, as they assume a one-to-one mapping between code changes and commit messages, leading researchers to rely on resource-intensive human evaluation. This study investigates the potential of LLMs as automated evaluators for commit message quality. Through systematic experimentation with various prompt strategies and state-of-the-art LLMs, we demonstrate that LLMs combining Chain-of-Thought reasoning with few-shot demonstrations achieve near human-level evaluation proficiency. Our LLM-based evaluator significantly outperforms traditional metrics while maintaining acceptable reproducibility, robustness, and fairness levels despite some inherent variability. This work conducts a comprehensive preliminary study on using LLMs for commit message evaluation, offering a scalable alternative to human assessment while maintaining high-quality evaluation.
Problem

Research questions and friction points this paper is trying to address.

Evaluating quality of auto-generated commit messages
Limitations of traditional metrics like BLEU, ROUGE
LLMs as scalable alternative to human evaluation
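The weakness of reference-based metrics can be illustrated with a minimal sketch. The function below computes plain clipped unigram precision (the core ingredient of BLEU, without brevity penalty or higher-order n-grams), and the two commit messages are hypothetical examples, not from the paper:

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision: the overlap measure at the heart of BLEU."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    total = sum(cand_ngrams.values())
    if total == 0:
        return 0.0
    # Each candidate n-gram is credited at most as often as it appears in the reference.
    overlap = sum(min(count, ref_ngrams[g]) for g, count in cand_ngrams.items())
    return overlap / total

reference = "fix null pointer dereference in user login handler"
candidate = "prevent crash when session object is missing during sign-in"

# The two messages describe the same fix, yet share no tokens at all:
print(ngram_precision(candidate, reference, 1))  # → 0.0
```

A semantically faithful paraphrase scores zero, which is exactly the one-to-one-mapping assumption the paper criticizes.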
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses LLMs for commit message evaluation
Combines Chain-of-Thought with few-shot learning
Outperforms traditional metrics like BLEU
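A minimal sketch of how a Chain-of-Thought, few-shot evaluation prompt might be assembled. The rubric dimensions follow the summary above; the function name, example format, and scoring scale are hypothetical illustrations, not the paper's actual prompt:

```python
def build_eval_prompt(diff, message, examples):
    """Assemble a CoT + few-shot prompt asking an LLM to grade a commit message."""
    dims = "functional completeness, conciseness, readability"
    # Few-shot demonstrations: each example shows a diff, a message,
    # worked reasoning, and the resulting scores.
    shots = "\n\n".join(
        f"Diff:\n{e['diff']}\nMessage: {e['message']}\n"
        f"Reasoning: {e['reasoning']}\nScores: {e['scores']}"
        for e in examples
    )
    return (
        "You are an expert reviewer of commit messages.\n"
        f"Rate the message on each dimension ({dims}) from 1 to 5.\n"
        "Think step by step: explain your reasoning before scoring.\n\n"
        f"{shots}\n\n"
        f"Diff:\n{diff}\nMessage: {message}\nReasoning:"
    )

example = {
    "diff": "- if user: ...\n+ if user is not None: ...",
    "message": "fix truthiness check on user object",
    "reasoning": "Names the change and its intent; short and readable.",
    "scores": "completeness=5, conciseness=5, readability=5",
}
prompt = build_eval_prompt("+ retry on HTTP 503", "add retry logic", [example])
```

Because the strategy is prompt-only, no fine-tuning step is needed; the demonstrations and the "think step by step" instruction carry the evaluation criteria.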
Authors

Qunhong Zeng
Beijing Institute of Technology

Yuxia Zhang
Beijing Institute of Technology

Zexiong Ma
Peking University
AI4SE, LLMs, Code Agent

Bo Jiang
ByteDance

Ningyuan Sun
ByteDance

Klaas-Jan Stol
School of Computer Science and IT, University College Cork, Lero, SINTEF
Open Source, Inner Source, Behavioral Software Engineering, Research methodology

Xingyu Mou
Beijing Institute of Technology

Hui Liu
Beijing Institute of Technology