AI Summary
To address the low coverage and poor quality of exceptional behavior tests (EBTs) generated by existing tools, this paper proposes the first LLM-based framework for EBT generation. The authors perform fine-grained instruction tuning on CodeLlama, integrating analysis of program execution traces, identification of guard conditions, and joint reasoning with non-exceptional test cases. The central contribution is a co-modeling mechanism that jointly captures exception-triggering paths and their associated guard conditions, enabling end-to-end, interpretable EBT generation. Evaluated on multiple benchmarks, the approach significantly outperforms CAT-LM, GPT-4o, Randoop, and EvoSuite. Moreover, 23 of the generated EBTs have been accepted and merged into open-source projects, with the corresponding pull requests publicly available. This work bridges a critical research gap in LLM-augmented exception testing and establishes a new paradigm for robustness verification.
Abstract
Many popular programming languages, including C#, Java, and Python, support exceptions. Exceptions are thrown during program execution if an unwanted event happens, e.g., a method is invoked with an illegal argument value. Software developers write exceptional behavior tests (EBTs) to check that their code detects unwanted events and throws appropriate exceptions. Prior research studies have shown the importance of EBTs, but those studies also highlighted that developers put most of their effort into "happy paths", i.e., paths without unwanted events. To help developers fill the gap, we present the first framework, dubbed exLong, that automatically generates EBTs. exLong is a large language model instruction fine-tuned from CodeLlama; it embeds reasoning about traces that lead to throw statements, conditional expressions that guard throw statements, and non-exceptional behavior tests that execute similar traces. We compare exLong with a state-of-the-art model for test generation (CAT-LM) and one of the strongest foundation models (GPT-4o), as well as with analysis-based tools for test generation (Randoop and EvoSuite). Our results show that exLong outperforms existing models and tools. Furthermore, we contributed several pull requests to open-source projects, and 23 EBTs generated by exLong have already been accepted.
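To make the abstract's terminology concrete, the sketch below shows a method whose throw statement is protected by a guard condition, together with a hand-written EBT and a non-exceptional ("happy path") test for comparison. The class and method names (`EbtExample`, `withdraw`) are illustrative assumptions, not taken from the paper, and the try/catch pattern emulates what JUnit's `assertThrows` expresses more compactly.

```java
// Illustrative sketch (names are hypothetical, not from the exLong paper):
// a method guarded by a throw statement, plus an EBT for it.
public class EbtExample {

    static double withdraw(double balance, double amount) {
        // Guard condition: the conditional expression that guards the throw.
        if (amount < 0 || amount > balance) {
            throw new IllegalArgumentException("invalid amount: " + amount);
        }
        return balance - amount;
    }

    public static void main(String[] args) {
        // Non-exceptional behavior test ("happy path"): no exception expected.
        if (withdraw(100.0, 30.0) != 70.0) {
            throw new AssertionError("happy-path test failed");
        }

        // Exceptional behavior test (EBT): invoke the method with an illegal
        // argument and check that the expected exception type is thrown.
        boolean thrown = false;
        try {
            withdraw(100.0, 150.0); // exceeds balance, triggers the guard
        } catch (IllegalArgumentException e) {
            thrown = true;
        }
        if (!thrown) {
            throw new AssertionError("expected IllegalArgumentException");
        }
        System.out.println("EBT passed");
    }
}
```

Note that both tests execute similar traces up to the guard; exLong's reasoning exploits exactly this similarity between non-exceptional tests and the exception-triggering path.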