🤖 AI Summary
To address the challenges of unrealistic input generation and hard-to-verify assertions in Java automated testing, this paper proposes a semantics-driven test generation approach grounded in code annotations. We systematically define seven categories of business-semantic annotations and four composition rules, mapping them to object construction constraints and verifiable assertion logic. Our method integrates Java annotation parsing, symbolic execution with heuristic solving, and an annotation-aware joint input/assertion generation framework, enabling high-fidelity test data synthesis both within and beyond requirement boundaries. Evaluation across multiple open-source Java projects demonstrates that our annotation design is compatible with mainstream frameworks (e.g., Spring, JSR-303); boundary coverage improves by 23.6% on average; and 11 real-world defects are successfully identified.
📝 Abstract
Automated test case generation is important for software quality assurance. However, automatically generated test inputs do not always make sense in the target business domain, and automatically generated assertions are difficult to validate against the program under test. In this paper, we propose JustinANN, a flexible and scalable tool that generates test cases with realistic inputs and verifiable assertions for Java programs. We observe that, in practice, Java programs contain a large number of annotations written by programmers, which can be regarded as part of the user specification. We design a systematic annotation set with 7 kinds of annotations and 4 combination rules over them to annotate complex Java objects. Annotations on the fields or return values of methods can be used to generate assertions that capture the true intent of the program, while those on input parameters can be used to generate test inputs that match real business requirements. We have conducted experiments to evaluate the approach on open-source Java programs. The results show that the annotations and their combinations designed in this paper are compatible with existing annotations; our approach readily generates test data inside, on, and outside the boundaries of the requirement domain; and it also helps to find program defects.
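To make the idea concrete, the sketch below shows how a business-semantic annotation can drive both boundary-aware input generation and assertion checking. The `@Range` annotation and the `Order` class are purely illustrative assumptions, not the paper's actual annotation set or API; the reflective loop mimics, in miniature, generating candidate inputs inside, on, and outside the requirement boundary.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class AnnotationDrivenSketch {

    // Hypothetical business-semantic annotation, in the spirit of the
    // paper's annotation set (name and shape are assumptions).
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
    @interface Range {
        int min();
        int max();
    }

    // Example domain object whose field carries a requirement boundary.
    static class Order {
        @Range(min = 1, max = 100)
        int quantity;
    }

    // Checks whether a candidate value satisfies the annotated constraint;
    // in a real tool this would become a generated assertion.
    static boolean withinRange(int value, int min, int max) {
        return value >= min && value <= max;
    }

    public static void main(String[] args) throws NoSuchFieldException {
        Field field = Order.class.getDeclaredField("quantity");
        Range range = field.getAnnotation(Range.class);

        // Candidate inputs just outside, on, and inside the boundary.
        int[] candidates = {
            range.min() - 1,                    // outside (below)
            range.min(),                        // on the lower boundary
            (range.min() + range.max()) / 2,    // inside
            range.max(),                        // on the upper boundary
            range.max() + 1                     // outside (above)
        };

        for (int v : candidates) {
            System.out.println(v + " valid=" + withinRange(v, range.min(), range.max()));
        }
    }
}
```

Running the sketch prints five candidate values with their validity flags; only the three values on or inside the `[1, 100]` boundary are reported as valid.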