🤖 AI Summary
This paper addresses the challenge of writing effective assertions in unit tests, a task developers otherwise perform manually. The authors propose AsserT5, an automated assertion generation approach based on fine-tuning the pre-trained CodeT5 code language model, employing identifier abstraction and focal-method context during supervised fine-tuning. The contributions are threefold: (1) empirical evidence that source-code abstraction and context augmentation also improve a fine-tuned pre-trained model for assertion generation; (2) a demonstration that high prediction accuracy of generated assertions does not guarantee fault-detection capability; and (3) an exact-match accuracy of up to 59.5% on standard benchmarks, more than twice that of prior models. However, on the Defects4J dataset the approach suggests fault-finding assertions for only 33 of the 138 defects detectable with assertions (about 24%), indicating room for further improvement.
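The identifier abstraction mentioned above can be illustrated with a minimal sketch. This is not the paper's actual tooling; it is an assumed, simplified version of src2abs-style abstraction, where each user-defined identifier in a code snippet is replaced by a placeholder token (here `ID_<n>`, a hypothetical naming scheme) while a mapping is kept so concrete names can be restored in the generated assertion:

```python
import re

# Small, illustrative keyword/API allowlist; a real pipeline would use a
# proper Java lexer and a complete keyword and idiom vocabulary.
JAVA_KEYWORDS = {
    "public", "private", "static", "void", "int", "new", "return",
    "assertEquals", "assertTrue", "org", "junit", "Assert",
}

def abstract_identifiers(code: str) -> tuple[str, dict]:
    """Replace each distinct identifier with ID_<n>, keeping a mapping
    so concrete names can be restored afterwards."""
    mapping: dict = {}

    def repl(match: re.Match) -> str:
        name = match.group(0)
        if name in JAVA_KEYWORDS:  # keep keywords and known API names
            return name
        if name not in mapping:    # assign placeholders in order of appearance
            mapping[name] = f"ID_{len(mapping)}"
        return mapping[name]

    abstracted = re.sub(r"[A-Za-z_][A-Za-z0-9_]*", repl, code)
    return abstracted, mapping

abstracted, mapping = abstract_identifiers("int total = counter.add(5);")
print(abstracted)  # int ID_0 = ID_1.ID_2(5);
print(mapping)     # {'total': 'ID_0', 'counter': 'ID_1', 'add': 'ID_2'}
```

The payoff of such abstraction is a smaller effective vocabulary: the model learns assertion structure independent of project-specific naming.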
📝 Abstract
Writing good software tests can be challenging; therefore, approaches that support developers are desirable. While generating complete tests automatically is one such approach commonly proposed in research, developers may already have specific test scenarios in mind and thus just require help in selecting the most suitable test assertions for these scenarios. This can be done using deep learning models to predict assertions for given test code. Prior research on assertion generation trained these models specifically for the task, raising the question of how much the larger models pre-trained on code that have emerged since then can improve their performance. In particular, while abstracting identifiers has been shown to improve specifically trained models, it remains unclear whether this also generalises to models pre-trained on non-abstracted code. Finally, even though prior work demonstrated high accuracy, it remains unclear how this translates into the effectiveness of the assertions at their intended application -- finding faults. To shed light on these open questions, in this paper we propose AsserT5, a new model based on the pre-trained CodeT5 model, and use it to empirically study assertion generation. We find that the abstraction and the inclusion of the focal method are useful also for a fine-tuned pre-trained model, resulting in test assertions that match the ground-truth assertions precisely in up to 59.5% of cases, more than twice as precise as prior models. However, evaluation on real bugs from the Defects4J dataset shows that, out of 138 bugs detectable with assertions in real-world projects, AsserT5 was only able to suggest fault-finding assertions for 33, indicating the need for further improvements.