🤖 AI Summary
This study addresses an inefficiency in code review: AI-generated code is frequently removed during pull request review, forcing reviewers to expend effort on content that is ultimately discarded. It presents the first systematic analysis of the characteristics of deleted AI-generated code and proposes a function-level deletion prediction model that integrates code semantics and contextual features through machine learning. The model effectively identifies functions with a high likelihood of being removed, enabling reviewers to prioritize critical code segments. Evaluated on real-world data, the approach achieves an AUC of 87.1% in predicting deletions, demonstrating its potential to significantly enhance code review efficiency.
📝 Abstract
Agentic Coding, powered by autonomous agents such as GitHub Copilot and Cursor, enables developers to generate code, tests, and pull requests from natural language instructions alone. While this accelerates implementation, it produces larger volumes of code per pull request, shifting the burden from implementers to reviewers. In practice, a notable portion of AI-generated code is eventually deleted during review, yet reviewers must still examine such code before deciding to remove it. No prior work has explored methods to help reviewers efficiently identify code that will be removed. In this paper, we propose a prediction model that identifies functions likely to be deleted during PR review. Our results show that functions deleted for different reasons exhibit distinct characteristics, and our model achieves an AUC of 87.1%. These findings suggest that predictive approaches can help reviewers prioritize their efforts on essential code.
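To make the setup concrete, the kind of function-level deletion prediction described above can be sketched as a binary classifier over per-function features evaluated with AUC. The following is a minimal illustration on synthetic data; the feature names (lines of code, callers, test coverage), the label-generating process, and the gradient-boosting model are all assumptions for illustration, not the paper's actual features or pipeline.

```python
# Illustrative sketch (not the paper's pipeline): predict whether a
# function introduced in a PR will be deleted during review, using a
# binary classifier over simple hand-crafted features, scored by AUC.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical per-function features: lines of code, number of call sites
# elsewhere in the PR, and whether any test references the function.
loc = rng.integers(5, 200, n)
callers = rng.integers(0, 10, n)
has_tests = rng.integers(0, 2, n)
X = np.column_stack([loc, callers, has_tests]).astype(float)

# Synthetic labels: long, unreferenced, untested functions are made more
# likely to be deleted (an assumed pattern, purely for the demo).
logits = 0.02 * loc - 0.8 * callers - 1.5 * has_tests
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```

In practice the feature vector would be replaced by learned code-semantic representations plus contextual signals from the PR, but the training and AUC-based evaluation loop has this same shape.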