🤖 AI Summary
This study challenges the common assumption that high code coverage implies accurate fault localization in spectrum-based fault localization (SBFL). We systematically evaluate the effectiveness of automatically generated test cases, produced by tools such as EvoSuite and Randoop, against manually written tests, measuring SBFL score (the primary quality metric), mutation kill rate, and branch coverage across 42 real-world defects from Defects4J. Contrary to expectations, although automatically generated tests achieve 18% higher average branch coverage, they yield 23% lower SBFL scores; in deeply nested code regions, their fault localization precision degrades by up to 41%. These findings reveal a fundamental mismatch between coverage metrics and actual fault localization capability. The work provides rare empirical evidence centered on the SBFL score as the key evaluation criterion, offering support for hybrid testing strategies and advocating a shift in test quality assessment from coverage-oriented to localization-oriented evaluation.
📝 Abstract
The testing phase is an essential part of software development, but manually creating test cases is time-consuming. Consequently, there is a growing need for more efficient testing methods. To reduce the burden on developers, various automated test generation tools have been developed, and several studies have evaluated the effectiveness of the tests they produce. However, most of these studies focus primarily on coverage metrics, and only a few examine how well the tests support fault localization, particularly using artificial faults introduced through mutation testing. In this study, we compare the SBFL (Spectrum-Based Fault Localization) score and code coverage of automatically generated tests with those of manually created tests. The SBFL score indicates how accurately faults can be localized using SBFL techniques. By employing the SBFL score as an evaluation metric, an approach rarely used in prior studies on test generation, we aim to provide new insights into the respective strengths and weaknesses of manually created and automatically generated tests. Our experimental results show that automatically generated tests achieve higher branch coverage than manually created tests but lower SBFL scores, especially for code with deeply nested structures. These findings offer guidance on how to effectively combine automatically generated and manually created testing approaches.
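To make the SBFL idea concrete: SBFL techniques rank program elements by a suspiciousness score computed from which tests execute each element and which tests pass or fail. The abstract does not specify which SBFL formula the study uses, so the sketch below assumes the widely used Ochiai formula purely for illustration; the function names (`ochiai`, `rank_statements`) and the coverage-matrix representation are hypothetical, not taken from the paper.

```python
import math

def ochiai(e_f: int, n_f: int, e_p: int) -> float:
    """Ochiai suspiciousness (a common SBFL formula, assumed here).

    e_f: failing tests that execute the element
    n_f: failing tests that do NOT execute the element
    e_p: passing tests that execute the element
    Higher values mean the element is more likely faulty.
    """
    denom = math.sqrt((e_f + n_f) * (e_f + e_p))
    return e_f / denom if denom else 0.0

def rank_statements(coverage: dict, outcomes: list) -> list:
    """Rank statements by suspiciousness, most suspicious first.

    coverage: {statement_id: [bool per test, True if executed]}
    outcomes: [bool per test, True = pass, False = fail]
    """
    total_fail = outcomes.count(False)
    scores = {}
    for stmt, hits in coverage.items():
        e_f = sum(1 for hit, ok in zip(hits, outcomes) if hit and not ok)
        e_p = sum(1 for hit, ok in zip(hits, outcomes) if hit and ok)
        scores[stmt] = ochiai(e_f, total_fail - e_f, e_p)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: three tests, the third fails; "s2" is covered only
# by the failing test, so it is ranked most suspicious.
ranking = rank_statements(
    {"s1": [True, True, True], "s2": [False, False, True]},
    [True, True, False],
)
```

In this toy example, `s2` receives an Ochiai score of 1.0 (executed by every failing test and no passing test), while `s1` scores lower because passing tests also execute it. A test suite that exercises faulty and correct statements with distinct tests produces sharper rankings, which is the kind of discriminating power that coverage alone does not capture.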