Are Benchmark Tests Strong Enough? Mutation-Guided Diagnosis and Augmentation of Regression Suites

📅 2026-04-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical limitation of test-driven benchmark suites such as SWE-bench: insufficiently strong regression tests can accept semantically incorrect yet superficially passing patches as valid, overestimating the performance of automated program-repair systems. To mitigate this, the authors propose STING, a framework that combines semantic mutation with behavior-preserving transformations into a closed-loop test augmentation pipeline. Using semantically mutated programs as diagnostic stressors, STING identifies inadequacies in existing test suites and generates targeted, high-fidelity test cases. Evaluated on SWE-bench Verified, STING uncovers test-suite weaknesses in 77% of instances, contributes 1,014 new validated tests, and improves patch-region line and branch coverage by 10.8% and 9.5%, respectively. Re-evaluation with the strengthened suites lowers the resolved rates of top repair agents by 4.2%–9.0%, distinguishing genuinely correct patches from those that merely pass the original tests.
📝 Abstract
Benchmarks driven by test suites, notably SWE-bench, have become the de facto standard for measuring the effectiveness of automated issue-resolution agents: a generated patch is accepted whenever it passes the accompanying regression tests. In practice, however, insufficiently strong test suites can admit plausible yet semantically incorrect patches, inflating reported success rates. We introduce STING, a framework for targeted test augmentation that uses semantically altered program variants as diagnostic stressors to uncover and repair weaknesses in benchmark regression suites. Variants of the ground-truth patch that still pass the existing tests reveal under-constrained behaviors; these gaps then guide the generation of focused regression tests. A generated test is retained only if it (i) passes on the ground-truth patch, (ii) fails on at least one variant that survived the original suite, and (iii) remains valid under behavior-preserving transformations designed to guard against overfitting. Applied to SWE-bench Verified, STING finds that 77% of instances contain at least one surviving variant. STING produces 1,014 validated tests spanning 211 instances and increases patch-region line and branch coverage by 10.8% and 9.5%, respectively. Re-assessing the top-10 repair agents with the strengthened suites lowers their resolved rates by 4.2%-9.0%, revealing that a substantial share of previously passing patches exploit weaknesses in the benchmark tests rather than faithfully implementing the intended fix. These results underscore that reliable benchmark evaluation depends not only on patch generation, but equally on test adequacy.
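The abstract's three retention criteria form a simple filter over candidate tests. The sketch below is an illustrative reconstruction, not the paper's implementation; the function and variable names (`retain_test`, `run`, `surviving_variants`, `transformed_truths`) are hypothetical, and `run(test, patch)` stands in for executing a test against a patched program and reporting pass/fail.

```python
# Hypothetical sketch of the three-way retention filter described in the
# abstract. All names are illustrative, not from the paper.

def retain_test(test, run, ground_truth, surviving_variants, transformed_truths):
    """Keep a generated test only if it satisfies all three criteria:
    (i) passes on the ground-truth patch, (ii) fails on at least one
    variant that survived the original suite, and (iii) still passes
    under behavior-preserving transformations (overfitting guard)."""
    # (i) Must pass on the ground-truth patch.
    if not run(test, ground_truth):
        return False
    # (ii) Must fail on at least one surviving variant, proving it
    #      constrains behavior the original suite missed.
    if not any(not run(test, v) for v in surviving_variants):
        return False
    # (iii) Must remain valid under behavior-preserving transformations
    #       of the ground truth, so it tests semantics, not surface form.
    if not all(run(test, t) for t in transformed_truths):
        return False
    return True


# Toy usage: model `run` as membership in a set of (test, patch)
# combinations that pass.
passing = {("t1", "gt"), ("t1", "gt_renamed"), ("t2", "gt"), ("t2", "variant_a")}
run = lambda test, patch: (test, patch) in passing

retain_test("t1", run, "gt", ["variant_a"], ["gt_renamed"])  # kept: True
retain_test("t2", run, "gt", ["variant_a"], ["gt_renamed"])  # rejected: passes on the variant
```

Criterion (ii) is what makes the augmentation targeted: a test that every surviving variant also passes adds coverage but no discriminating power, so it is discarded.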
Problem

Research questions and friction points this paper is trying to address.

benchmark tests
regression suites
test adequacy
automated program repair
patch validation
Innovation

Methods, ideas, or system contributions that make the work stand out.

test augmentation
mutation-guided diagnosis
regression testing
automated program repair
benchmark evaluation