🤖 AI Summary
Existing automated commit message generation methods suffer from high training costs and poor generalization. This paper presents a systematic investigation of large language models (LLMs) for commit message generation via in-context learning (ICL), which exploits pre-trained knowledge without any fine-tuning. The study evaluates ICL-based generation with both objective metrics and developer-centric subjective assessment, on a popular multilingual benchmark and a newly constructed test set designed to mitigate data leakage. The analysis identifies key LLM limitations, such as weak semantic abstraction of code changes, and distills transferable insights for prompt design and exemplar selection. Experiments show that ICL-based generation significantly outperforms state-of-the-art approaches on subjective developer evaluation and exhibits better cross-lingual generalization.
📝 Abstract
Commit messages concisely describe code changes in natural language and are important for software maintenance. Several approaches have been proposed to automatically generate commit messages, but they still suffer from critical limitations, such as time-consuming training and poor generalization ability. To tackle these limitations, we propose to leverage large language models (LLMs) and in-context learning (ICL). Our intuition is that the training corpora of LLMs contain extensive code changes and their paired commit messages, so LLMs have already acquired knowledge about commits, while ICL can exploit this knowledge and enable LLMs to perform downstream tasks without model tuning. However, it remains unclear how well LLMs perform on commit message generation via ICL. In this paper, we conduct an empirical study to investigate the capability of LLMs to generate commit messages via ICL. Specifically, we first explore the impact of different settings on the performance of ICL-based commit message generation. We then compare ICL-based commit message generation with state-of-the-art approaches on a popular multilingual dataset and a new dataset we created to mitigate potential data leakage. The results show that ICL-based commit message generation significantly outperforms state-of-the-art approaches on subjective evaluation and achieves better generalization ability. We further analyze the root causes of LLMs' underperformance and propose several implications, which shed light on future research directions for using LLMs to generate commit messages.
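To make the ICL setup concrete, the sketch below shows one plausible way to assemble a few-shot prompt from (code change, commit message) exemplar pairs plus a query diff. This is an illustrative assumption, not the paper's actual prompt template: the exemplar diffs, the instruction wording, and the `build_icl_prompt` helper are all hypothetical, and the LLM call itself is left abstract.

```python
# Hypothetical sketch of ICL prompt construction for commit message
# generation; the template and exemplars are illustrative, not taken
# from the paper's dataset or prompt design.

def build_icl_prompt(exemplars, query_diff):
    """Assemble a few-shot prompt from (diff, commit message) pairs.

    The model is expected to complete the final "Commit message:" line,
    imitating the exemplar format without any fine-tuning.
    """
    parts = ["Generate a concise commit message for the given code change.\n"]
    for diff, message in exemplars:
        parts.append(f"Code change:\n{diff}\nCommit message: {message}\n")
    # The query diff ends with an open "Commit message:" for the LLM to fill.
    parts.append(f"Code change:\n{query_diff}\nCommit message:")
    return "\n".join(parts)

exemplars = [
    ("- return a + b\n+ return a - b",
     "Fix subtraction bug in calculator"),
]
prompt = build_icl_prompt(exemplars, "- timeout = 10\n+ timeout = 30")
print(prompt)
```

The resulting string would then be sent to an LLM; exemplar selection (e.g., retrieving diffs similar to the query) is one of the ICL settings the study varies.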