🤖 AI Summary
This study addresses the notably lower merge rate of AI-generated pull requests (PRs) compared to those authored by human developers, a phenomenon whose underlying causes remain unclear. Leveraging a large-scale empirical dataset of 40,214 PRs, the authors extract 64-dimensional features and construct statistical regression models to systematically compare the merge outcomes and influencing mechanisms of PRs submitted by humans versus AI agents. The analysis reveals, for the first time, that contributor attributes are pivotal determinants of PR acceptance, and that code review-related features exert opposing effects on the merge likelihood of human- versus AI-submitted PRs. These findings offer critical empirical insights for optimizing human-AI collaborative software development workflows and enhancing the quality of AI-assisted code contributions.
📄 Abstract
The automatic generation of pull requests (PRs) by AI agents has become increasingly common. Although AI-generated PRs are fast and easy to create, their merge rates have been reported to be lower than those of human-authored PRs. In this study, we conduct a large-scale empirical analysis of 40,214 PRs collected from the AIDev dataset. We extract 64 features across six families and fit statistical regression models to compare merge outcomes between human and agentic PRs, as well as across three AI agents. Our results show that submitter attributes dominate merge outcomes for both groups, while review-related features exhibit contrasting effects between human and agentic PRs. The findings of this study provide insights into improving PR quality through human-AI collaboration.
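The kind of analysis described above can be illustrated with a minimal sketch: fit a logistic regression of a binary merge outcome on PR features and compare coefficient magnitudes. The data, feature names, and effect sizes below are entirely hypothetical (the paper's actual 64 features and model specification are not reproduced here); the point is only the mechanics of relating features to merge likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical standardized PR features: a submitter attribute (e.g. track
# record), a PR-size feature, and a review-activity feature.
X = rng.normal(size=(n, 3))

# Simulate merge outcomes where the submitter attribute dominates and the
# review feature has a negative effect -- assumed effect sizes, not the paper's.
true_w = np.array([2.0, 0.3, -0.5])
p_merge = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p_merge).astype(float)

# Fit logistic regression by gradient descent on the average log-loss.
w = np.zeros(3)
for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.5 * (X.T @ (pred - y)) / n

coefs = dict(zip(["submitter", "size", "review"], w))
```

Comparing the fitted coefficients (here, `submitter` having the largest absolute value) mirrors how such models attribute merge outcomes to feature families.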