AACR-Bench: Evaluating Automatic Code Review with Holistic Repository-Level Context

πŸ“… 2026-01-27
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing benchmarks for automated code review evaluation are limited by their monolingual scope, lack of repository-level context, and reliance on noisy, incomplete pull request comments as ground truth, which hinders accurate assessment of large language models (LLMs). This work proposes AACR-Bench, the first multilingual evaluation benchmark that provides full cross-file contextual information and employs an innovative β€œAI-assisted + expert-validated” annotation paradigm, achieving a 285% improvement in defect coverage. Through systematic evaluation of prominent LLMs, the study reveals the critical impact of context granularity, retrieval strategies, and model architecture on review performance, correcting prior misjudgments caused by data limitations. The authors release all data and tools to establish a more rigorous standard for future research in this domain.

πŸ“ Abstract
High-quality evaluation benchmarks are pivotal for deploying Large Language Models (LLMs) in Automated Code Review (ACR). However, existing benchmarks suffer from two critical limitations: first, the lack of multi-language support in repository-level contexts, which restricts the generalizability of evaluation results; second, the reliance on noisy, incomplete ground truth derived from raw Pull Request (PR) comments, which constrains the scope of issue detection. To address these challenges, we introduce AACR-Bench, a comprehensive benchmark that provides full cross-file context across multiple programming languages. Unlike traditional datasets, AACR-Bench employs an "AI-assisted, Expert-verified" annotation pipeline to uncover latent defects often overlooked in original PRs, resulting in a 285% increase in defect coverage. Extensive evaluations of mainstream LLMs on AACR-Bench reveal that previous assessments may have either misjudged or only partially captured model capabilities due to data limitations. Our work establishes a more rigorous standard for ACR evaluation and offers new insights into LLM-based ACR: the granularity/level of context and the choice of retrieval methods significantly impact ACR performance, and this influence varies depending on the LLM, the programming language, and the LLM usage paradigm, e.g., whether an Agent architecture is employed. The code, data, and other artifacts of our evaluation set are available at https://github.com/alibaba/aacr-bench .
Problem

Research questions and friction points this paper is trying to address.

Automated Code Review
Evaluation Benchmark
Repository-Level Context
Multi-Language Support
Ground Truth Quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatic Code Review
Repository-Level Context
AI-Assisted Annotation
Multi-Language Benchmark
LLM Evaluation
Lei Zhang
Nanjing University
Automated Planning, Artificial Intelligence, Multi-agent System

Yongda Yu
Software Institute, Nanjing University, Nanjing, China

Minghui Yu
Software Institute, Nanjing University, Nanjing, China

Xinxin Guo
Software Institute, Nanjing University, Nanjing, China

Zhengqi Zhuang
Software Institute, Nanjing University, Nanjing, China

Guoping Rong
Software Institute, Nanjing University, Nanjing, China

Dong Shao
Software Institute, Nanjing University, Nanjing, China

Haifeng Shen
Southern Cross University
Software Engineering, Human Centred Computing, Artificial Intelligence, Collaborative Computing

Hongyu Kuang
Software Institute, Nanjing University, Nanjing, China

Zhengfeng Li
TRE, Alibaba Inc., Hangzhou, China

Boge Wang
TRE, Alibaba Inc., Hangzhou, China

Guoan Zhang
TRE, Alibaba Inc., Hangzhou, China

Bangyu Xiang
TRE, Alibaba Inc., Hangzhou, China

Xiaobin Xu
TRE, Alibaba Inc., Hangzhou, China