A Tool for Generating Exceptional Behavior Tests With Large Language Models

📅 2025-05-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In software testing, developers predominantly focus on "happy-path" scenarios, leaving exceptional behavior tests (EBTs) severely under-represented. To address this, we propose the first LLM-driven EBT generation framework that jointly leverages exception-throwing trace inference, guard-condition analysis of `throw` statements, and guidance from similar non-exceptional tests. Built upon a fine-tuned CodeLlama model, the approach integrates static program analysis with multi-stage, context-aware prompting to generate semantically meaningful and highly executable exception-path tests. Evaluated across multiple open-source Java and Python projects, the tool—exLong—improves real-world exception coverage by 3.2× over baseline methods, achieves an average test pass rate exceeding 78%, and is positively assessed by developers as a practical, production-ready testing aid.

📝 Abstract
Exceptional behavior tests (EBTs) are crucial in software development for verifying that code correctly handles unwanted events and throws appropriate exceptions. However, prior research has shown that developers often prioritize testing "happy paths", e.g., paths without unwanted events, over exceptional scenarios. We present exLong, a framework that automatically generates EBTs to address this gap. exLong leverages a large language model (LLM) fine-tuned from CodeLlama and incorporates reasoning about exception-throwing traces, conditional expressions that guard throw statements, and non-exceptional behavior tests that execute similar traces. Our demonstration video illustrates how exLong can effectively assist developers in creating comprehensive EBTs for their projects (available at https://youtu.be/Jro8kMgplZk).
Problem

Research questions and friction points this paper is trying to address.

Generates exceptional behavior tests for software
Addresses lack of testing for unwanted events
Uses LLM to automate EBT creation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM fine-tuned from CodeLlama
Generates tests via exception-throwing traces
Combines conditional and non-exceptional test analysis
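To make the concepts above concrete, here is a minimal, hand-written sketch of the kind of EBT this approach targets: a `throw` statement behind a guard condition, a non-exceptional trace that stays below the guard, and a test that drives execution down the exception path. The `StackDemo` class and its `push` method are invented for illustration; this is not code produced by exLong itself.

```java
// Illustrative exceptional behavior test (EBT), written by hand.
// The guard condition (size == data.length) protects the throw statement;
// the EBT must construct a state that makes the guard true.
public class StackDemo {
    private final int[] data = new int[4];
    private int size = 0;

    public void push(int x) {
        if (size == data.length) {              // guard condition for the throw
            throw new IllegalStateException("stack is full");
        }
        data[size++] = x;
    }

    public static void main(String[] args) {
        StackDemo s = new StackDemo();
        for (int i = 0; i < 4; i++) {
            s.push(i);                           // non-exceptional trace: fills the stack
        }
        boolean threw = false;
        try {
            s.push(5);                           // exception-throwing trace: guard is now true
        } catch (IllegalStateException e) {
            threw = true;                        // expected exception observed
        }
        if (!threw) {
            throw new AssertionError("expected IllegalStateException");
        }
        System.out.println("EBT passed");
    }
}
```

In a real project this check would typically be phrased with a test framework (e.g., JUnit's `assertThrows`); the plain `main` method is used here only to keep the sketch self-contained.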
Linghan Zhong
UT Austin (USA)
Samuel Yuan
UT Austin (USA)
Jiyang Zhang
UT Austin (USA)
Yu Liu
UT Austin (USA)
Pengyu Nie
University of Waterloo
Software Engineering · Natural Language Processing · Programming Languages
Junyi Jessy Li
Associate Professor, The University of Texas at Austin
Computational Linguistics · Natural Language Processing
Miloš Gligorić
UT Austin (USA)