🤖 AI Summary
Existing benchmarks for Automated Code Review (ACR) suffer from insufficient real-world project context, overreliance on fine-grained unit-level tasks, and narrow evaluation metrics, limiting their ability to assess the practical review capabilities of LLMs. This paper introduces SWRBench, the first PR-centric, full-project-context ACR benchmark, comprising 1,000 manually validated GitHub pull requests. We propose an LLM-based objective evaluation method achieving high agreement with human judgments (Cohen’s κ = 0.82). Furthermore, we empirically demonstrate for the first time that multi-review aggregation significantly improves performance, boosting F1 scores by up to 43.67%. Experiments reveal that current LLMs excel at detecting functional bugs but underperform on stylistic and compliance-related issues. Our structured ground-truth construction and semantic coverage assessment enable reproducible, scalable ACR research.
📝 Abstract
Automated Code Review (ACR) is crucial for software quality, yet existing benchmarks often fail to reflect real-world complexities, hindering the evaluation of modern Large Language Models (LLMs). Current benchmarks frequently focus on fine-grained code units, lack complete project context, and use inadequate evaluation metrics. To address these limitations, we introduce SWRBench, a new benchmark comprising 1,000 manually verified Pull Requests (PRs) from GitHub, offering PR-centric review with full project context. SWRBench employs an objective LLM-based evaluation method that aligns strongly with human judgment (~90% agreement) by verifying whether issues from a structured ground truth are covered in generated reviews. Our systematic evaluation of mainstream ACR tools and LLMs on SWRBench reveals that current systems underperform and are more adept at detecting functional errors than stylistic or compliance issues. Subsequently, we propose and validate a simple multi-review aggregation strategy that significantly boosts ACR performance, increasing F1 scores by up to 43.67%. Our contributions include the SWRBench benchmark, its objective evaluation method, a comprehensive study of current ACR capabilities, and an effective enhancement approach, offering valuable insights for advancing ACR research.
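To make the evaluation and aggregation ideas concrete, here is a minimal sketch of coverage-based F1 scoring plus multi-review aggregation. The paper uses an LLM judge to decide whether a generated review covers each ground-truth issue; `matches` below is a crude keyword-overlap stand-in for that judge, and every name and threshold here is illustrative rather than the paper's actual method.

```python
# Sketch of SWRBench-style coverage scoring and multi-review aggregation.
# `matches` is a keyword-overlap stand-in for the paper's LLM judge;
# all function names and thresholds are hypothetical.

def matches(pred: str, gold: str, threshold: float = 0.5) -> bool:
    """Approximate coverage: fraction of the gold issue's words present."""
    gold_words = set(gold.lower().split())
    pred_words = set(pred.lower().split())
    return len(gold_words & pred_words) / len(gold_words) >= threshold

def f1_score(predicted: list[str], gold: list[str]) -> float:
    """Issue-level precision/recall/F1 against the structured ground truth."""
    covered = [g for g in gold if any(matches(p, g) for p in predicted)]
    correct = [p for p in predicted if any(matches(p, g) for g in gold)]
    precision = len(correct) / len(predicted) if predicted else 0.0
    recall = len(covered) / len(gold) if gold else 0.0
    total = precision + recall
    return 2 * precision * recall / total if total else 0.0

def aggregate(reviews: list[list[str]]) -> list[str]:
    """Multi-review aggregation: union issues from several sampled
    reviews, dropping near-duplicate mentions."""
    merged: list[str] = []
    for review in reviews:
        for issue in review:
            if not any(matches(issue, kept, 0.8) for kept in merged):
                merged.append(issue)
    return merged

# A single review misses one ground-truth issue; aggregating two reviews
# recovers it and lifts F1, mirroring the paper's aggregation finding.
gold = ["null pointer dereference in parser",
        "missing unit test for edge case"]
review_a = ["possible null pointer dereference in parser"]
review_b = ["missing unit test for edge case", "style nit"]

print(round(f1_score(review_a, gold), 3))                         # single review
print(round(f1_score(aggregate([review_a, review_b]), gold), 3))  # aggregated
```

The toy example shows the mechanism only: aggregation raises recall (more ground-truth issues covered) at a modest precision cost, which is why the union of several sampled reviews can outperform any single review.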