🤖 AI Summary
This study addresses a critical issue: patches generated by automated program repair (APR) tools often overfit to their test suites, passing the given tests while remaining incorrect, and the real-world effectiveness of existing overfitting detection methods is unclear. To bridge this gap, the authors construct a patch dataset that closely mirrors the output distribution of realistic APR tools and present the first systematic evaluation of six state-of-the-art overfitting detection techniques, spanning static analysis, dynamic testing, and learning-based approaches, on data reflecting actual APR usage. They also introduce random patch selection as a new baseline. Experimental results reveal that this simple random strategy outperforms each existing detection method in 71% to 96% of cases, depending on the method, underscoring the limited practical utility of current techniques and highlighting the need for future research to benchmark against random baselines to validate effectiveness.
📝 Abstract
Automated Program Repair (APR) can reduce the time developers spend debugging, allowing them to focus on other aspects of software development. Automatically generated bug patches are typically validated through software testing. However, test-based validation can lead to patch overfitting, i.e., patches that pass the given tests but are still incorrect.
Patch correctness assessment (also known as overfitting detection) techniques have been proposed to identify patches that overfit. However, prior work often assessed the effectiveness of these techniques in isolation and on datasets that do not reflect the ratio of correct to overfitting patches that APR tools would generate in typical use; thus, we still do not know their effectiveness in practice.
This work presents the first comprehensive benchmarking study of several patch overfitting detection (POD) methods in a practical scenario. To this end, we curate datasets that reflect realistic assumptions (i.e., patches produced by tools run under the same experimental conditions). Next, we use these data to benchmark six state-of-the-art POD approaches, spanning static analysis, dynamic testing, and learning-based techniques, against two baselines based on random sampling (one from prior work and one proposed herein).
Our results are striking: simple random selection outperforms each POD tool in 71% to 96% of cases, depending on the tool. This suggests two main takeaways: (1) current POD tools offer limited practical benefit, highlighting the need for novel techniques; (2) any POD tool must be benchmarked on realistic data and against random sampling to demonstrate its practical effectiveness. We therefore encourage the APR community to continue improving POD techniques and to adopt our proposed methodology for practical benchmarking; we make our data and code available to facilitate such adoption.
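To make the random-sampling baseline concrete, here is a minimal sketch in Python; the patch data, function names, and pool structure are hypothetical illustrations, not the paper's actual artifact. The key idea is that a uniformly random pick from a pool of plausible (test-passing) patches has an expected precision equal to the fraction of genuinely correct patches in that pool, which is the bar a POD tool must beat.

```python
import random

def random_baseline_pick(patches):
    """Random-sampling baseline: choose one plausible patch uniformly at random."""
    return random.choice(patches)

def expected_precision_of_random(patches):
    """Probability that a uniformly random pick is correct, i.e., the
    fraction of correct (non-overfitting) patches in the pool."""
    return sum(p["correct"] for p in patches) / len(patches)

# Hypothetical pool of plausible patches for one bug: all pass the test suite,
# but only some are actually correct (the rest overfit).
pool = [
    {"id": "patch-1", "correct": False},
    {"id": "patch-2", "correct": True},
    {"id": "patch-3", "correct": False},
]

print("Random pick:", random_baseline_pick(pool)["id"])
print(f"Expected precision of random selection: {expected_precision_of_random(pool):.2f}")
```

In the study's setup, each POD tool's verdicts are compared against this kind of random baseline on the same, realistically distributed patch pools, rather than on curated datasets with artificially balanced proportions of correct and overfitting patches.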