Retrieval-Augmented Code Review Comment Generation

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing automated code review comment generation approaches face a fundamental trade-off: generative models struggle to accurately produce infrequent yet critical technical terms, while retrieval-based methods lack contextual adaptability. To address this, the paper introduces a retrieval-augmented generation (RAG) framework for code review comment generation. The method conditions pre-trained code language models (e.g., CodeT5, CodeLlama) on semantically similar code-comment exemplars dynamically retrieved from a historical review repository by dense retrievers (e.g., DPR, ColBERT), combining the semantic generalization of generation with the lexical precision of retrieval. Evaluated on the Tufano et al. benchmark, the approach outperforms both purely generative and purely retrieval-based baselines: exact match improves by up to 1.67%, BLEU by up to 4.25%, and recall of infrequent ground-truth tokens by up to 24.01%. These results support the effectiveness of RAG in the code review domain.

📝 Abstract
Automated code review comment generation (RCG) aims to assist developers by automatically producing natural language feedback for code changes. Existing approaches are primarily either generation-based, using pretrained language models, or information retrieval-based (IR), reusing comments from similar past examples. While generation-based methods leverage code-specific pretraining on large code-natural language corpora to learn semantic relationships between code and natural language, they often struggle to generate low-frequency but semantically important tokens due to their probabilistic nature. In contrast, IR-based methods excel at recovering such rare tokens by copying from existing examples but lack flexibility in adapting to new code contexts, for example when input code contains identifiers or structures not found in the retrieval database. To bridge the gap between generation-based and IR-based methods, this work proposes to leverage retrieval-augmented generation (RAG) for RCG by conditioning pretrained language models on retrieved code-review exemplars. By providing relevant examples that illustrate how similar code has been previously reviewed, the model is better guided to generate accurate review comments. Our evaluation on the Tufano et al. benchmark shows that RAG-based RCG outperforms both generation-based and IR-based RCG. It achieves up to +1.67% higher exact match and +4.25% higher BLEU scores compared to generation-based RCG. It also improves the generation of low-frequency ground-truth tokens by up to 24.01%. We additionally find that performance improves as the number of retrieved exemplars increases.
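The pipeline the abstract describes, retrieving similar code-comment exemplars and prepending them to the generator's input, can be sketched as below. This is a minimal illustration, not the paper's implementation: the bag-of-words similarity stands in for a dense retriever such as DPR or ColBERT, the `repo` pairs are hypothetical, and the assembled prompt would be fed to a pretrained code model such as CodeT5 rather than returned as a string.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a dense
    # retriever (e.g., DPR or ColBERT, as the paper mentions).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_code, repo, k=2):
    # Return the k most similar (code, review_comment) exemplars.
    q = embed(query_code)
    ranked = sorted(repo, key=lambda pair: cosine(q, embed(pair[0])), reverse=True)
    return ranked[:k]

def build_prompt(query_code, exemplars):
    # Condition the generator on retrieved exemplars by prepending them
    # to the query code in a fixed code/review template.
    parts = [f"Code:\n{code}\nReview:\n{comment}\n" for code, comment in exemplars]
    parts.append(f"Code:\n{query_code}\nReview:\n")
    return "\n".join(parts)

# Hypothetical historical review repository of (code, comment) pairs.
repo = [
    ("def add(a, b): return a+b", "Add type hints to the parameters."),
    ("for i in range(len(xs)): print(xs[i])", "Iterate directly over xs."),
    ("def div(a, b): return a/b", "Guard against division by zero."),
]

query = "def sub(a, b): return a-b"
exemplars = retrieve(query, repo, k=2)
prompt = build_prompt(query, exemplars)
# `prompt` would then be passed to a pretrained code LM to generate the review.
```

The paper's finding that performance improves with more retrieved exemplars corresponds here to increasing `k`, subject to the generator's context-length budget.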
Problem

Research questions and friction points this paper is trying to address.

Bridging generation and retrieval methods for code review comments
Improving accuracy of low-frequency token generation in feedback
Enhancing adaptability to new code contexts via retrieval-augmented models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-augmented generation for code review
Combines pretrained models with retrieved examples
Improves rare token generation accuracy
Hyunsun Hong
School of Computing, KAIST, Daejeon, Republic of Korea
Jongmoon Baik
Professor at School of Computing, KAIST
Software Engineering