🤖 AI Summary
Test suites maintained over long periods commonly develop coverage gaps; manually identifying untested production execution paths and writing test cases to cover them is prohibitively expensive.
Method: The authors propose E-Test, a test suite augmentation framework that integrates large language models (LLMs) with dynamic execution analysis: it automatically identifies not-yet-tested execution scenarios from production monitoring data and leverages LLMs to understand their semantic context and generate test cases that cover them.
Contribution/Results: The approach goes beyond conventional regression and field testing by automating both gap detection and test generation end to end. Evaluated on 1,975 real-world scenarios from highly-starred open-source Java projects and Defects4J, it achieves an F1-score of 0.55, over 60% higher than existing regression and field testing approaches (0.34) and well above vanilla LLMs (0.39), reducing the manual effort of test suite maintenance.
📝 Abstract
Test suites are inherently imperfect, and testers can always enrich a suite with new test cases that improve its quality and, consequently, the reliability of the target software system. However, finding test cases that explore execution scenarios beyond the scope of an existing suite can be extremely challenging and labor-intensive, particularly when managing large test suites over extended periods.
In this paper, we propose E-Test, an approach that reduces the gap between the execution space explored with a test suite and the executions experienced after testing by augmenting the test suite with test cases that explore execution scenarios that emerge in production. E-Test (i) identifies executions that have not yet been tested from large sets of scenarios, such as those monitored during intensive production usage, and (ii) generates new test cases that enhance the test suite. E-Test leverages Large Language Models (LLMs) to pinpoint scenarios that the current test suite does not adequately cover, and augments the suite with test cases that execute these scenarios.
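The two-step idea, identify not-yet-tested scenarios and then generate tests for them, can be illustrated with a minimal sketch. This is not E-Test's actual implementation: the data structures, the method-set abstraction of an execution scenario, and the prompt-building helper are all simplifying assumptions made here for illustration; in practice, sending the prompt to an LLM and compiling the returned test would follow.

```java
import java.util.*;
import java.util.stream.*;

/**
 * Hypothetical sketch of the two-step pipeline described above:
 * (i) flag production scenarios whose execution is not covered by the
 * existing test suite, (ii) build an LLM prompt asking for a test case
 * that covers the gap. All names here are illustrative, not E-Test's API.
 */
public class TestGapSketch {

    // Simplification: a scenario is abstracted as the set of method
    // signatures it executed (real path coverage would be richer).
    record Scenario(String id, Set<String> executedMethods) {}

    // Step (i): a scenario is "not yet tested" if it executes at least
    // one method the suite never exercises.
    static List<Scenario> findUncovered(List<Scenario> production,
                                        Set<String> coveredBySuite) {
        return production.stream()
                .filter(s -> !coveredBySuite.containsAll(s.executedMethods()))
                .collect(Collectors.toList());
    }

    // Step (ii): assemble a prompt for an LLM; actually querying the
    // model and validating its output is out of scope for this sketch.
    static String buildPrompt(Scenario s, Set<String> coveredBySuite) {
        Set<String> gap = new TreeSet<>(s.executedMethods());
        gap.removeAll(coveredBySuite);
        return "Write a JUnit test that exercises these uncovered methods: " + gap;
    }

    public static void main(String[] args) {
        Set<String> covered = Set.of("Cache.get", "Cache.put");
        List<Scenario> prod = List.of(
                new Scenario("s1", Set.of("Cache.get")),               // already tested
                new Scenario("s2", Set.of("Cache.put", "Cache.evict")) // evict is a gap
        );
        List<Scenario> gaps = findUncovered(prod, covered);
        System.out.println(gaps.size());                  // 1
        System.out.println(buildPrompt(gaps.get(0), covered));
    }
}
```

The set-containment check is the crudest possible coverage criterion; the point of the sketch is only the division of labor: a cheap dynamic-analysis filter narrows the scenario stream before the (expensive) LLM is asked to understand context and produce a test.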
Our evaluation on a dataset of 1,975 scenarios, collected from highly-starred open-source Java projects already in production and Defects4J, demonstrates that E-Test retrieves not-yet-tested execution scenarios significantly better than state-of-the-art approaches. While existing regression testing and field testing approaches for this task achieve a maximum F1-score of 0.34, and vanilla LLMs achieve a maximum F1-score of 0.39, E-Test reaches 0.55. These results highlight the impact of E-Test in enhancing test suites by effectively targeting not-yet-tested execution scenarios and reducing the manual effort required to maintain test suites.