Understanding Dominant Themes in Reviewing Agentic AI-authored Code

πŸ“… 2026-01-27
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study addresses the empirical gap in understanding how human reviewers assess AI-generated code, particularly regarding their focal concerns and thematic patterns. By analyzing 19,450 comments across 3,177 AI-generated pull requests on GitHub, the authors present the first 12-category taxonomy of code review themes specific to AI-generated code. They employ a combination of topic modeling, LLM-assisted semantic clustering, and zero-shot prompting to enable automated annotation. Experimental results demonstrate that open-source large language models can accurately identify review themes in a zero-shot setting, achieving 78.63% exact match accuracy and a macro F1 score of 0.78 at the comment level, and 78% top-1 accuracy for dominant themes at the pull request level. The findings reveal that reviewers primarily focus on issues such as missing documentation, refactoring needs, formatting and style inconsistencies, and testing and security concerns.
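The comment-level scores reported above (exact match accuracy and macro F1) follow standard definitions. A minimal sketch, using hypothetical abbreviated theme labels rather than the paper's actual 12-category taxonomy or data:

```python
# Hypothetical comment-level annotations: human gold labels vs. LLM predictions.
# Theme names here are illustrative abbreviations, not the paper's taxonomy.
gold = ["docs", "refactor", "style", "docs", "testing", "security"]
pred = ["docs", "refactor", "docs",  "docs", "testing", "style"]

# Exact match accuracy: fraction of comments where the predicted theme
# equals the human-annotated theme.
exact_match = sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_f1(gold, pred):
    """Unweighted mean of per-theme F1 scores (treats rare themes
    the same as frequent ones)."""
    themes = set(gold) | set(pred)
    f1s = []
    for t in themes:
        tp = sum(g == t and p == t for g, p in zip(gold, pred))
        fp = sum(g != t and p == t for g, p in zip(gold, pred))
        fn = sum(g == t and p != t for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

print(f"exact match: {exact_match:.2f}")          # exact match: 0.67
print(f"macro F1:    {macro_f1(gold, pred):.2f}")  # macro F1:    0.56
```

Macro averaging matters here because review themes are imbalanced: a model that only ever predicts the most common theme can score well on accuracy but poorly on macro F1.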

πŸ“ Abstract
While prior work has examined the generation capabilities of Agentic AI systems, little is known about how reviewers respond to AI-authored code in practice. In this paper, we present a large-scale empirical study of code review dynamics in agent-generated PRs. Using a curated subset of the AIDev dataset, we analyze 19,450 inline review comments spanning 3,177 agent-authored PRs from real-world GitHub repositories. We first derive a taxonomy of 12 review comment themes using topic modeling combined with large language model (LLM)-assisted semantic clustering and consolidation. Using this taxonomy, we then investigate whether zero-shot prompting of an LLM can reliably annotate review comments. Our evaluation against human annotations shows that an open-source LLM achieves reasonably high exact match accuracy (78.63%) and macro F1 score (0.78), with substantial agreement with human annotators at the review comment level. At the PR level, the LLM also correctly identifies the dominant review theme with 78% Top-1 accuracy and achieves an average Jaccard similarity of 0.76, indicating strong alignment with human judgments. Applying this annotation pipeline at scale, we find that apart from functional correctness and logical changes, reviews of agent-authored PRs predominantly focus on documentation gaps, refactoring needs, and styling and formatting issues, along with testing- and security-related concerns. These findings suggest that while AI agents can accelerate code production, gaps remain that require targeted human review oversight.
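The two PR-level measures in the abstract, dominant theme (for Top-1 accuracy) and Jaccard similarity between theme sets, can be sketched as follows. The comment labels and theme sets below are hypothetical illustrations, not the paper's data:

```python
from collections import Counter

# Hypothetical inline review comments for one PR, each tagged with a theme.
pr_comment_themes = ["docs", "docs", "refactor", "style", "docs"]

# Dominant theme of a PR: the most frequent theme among its comments.
# Top-1 accuracy checks whether the LLM's dominant theme matches the human one.
dominant = Counter(pr_comment_themes).most_common(1)[0][0]

# Hypothetical per-PR theme sets: human-annotated vs. LLM-predicted.
human_themes = {"docs", "refactor", "style"}
llm_themes = {"docs", "refactor", "testing"}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B|; defined as 1.0 for two empty sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

print(dominant)                                      # docs
print(round(jaccard(human_themes, llm_themes), 2))   # 0.5
```

Averaging this per-PR Jaccard score across all PRs yields the aggregate set-overlap figure the abstract reports (0.76).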
Problem

Research questions and friction points this paper is trying to address.

Agentic AI
code review
AI-authored code
review themes
GitHub
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic AI
code review
LLM-assisted annotation
topic modeling
empirical study
πŸ”Ž Similar Papers
No similar papers found.