Towards More Effective Fault Detection in LLM-Based Unit Test Generation

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
LLM-based unit test generation is typically evaluated with code coverage, a weak indicator of fault-detection capability: in the authors' experiments, some test suites reach 100% coverage yet achieve only a 4% mutation score. Mutation score is a more reliable and stringent measure, but LLMs' effectiveness at killing mutants remains underexplored, partly because no mechanism feeds mutation results back into generation. The paper proposes MUTGEN, a mutation-guided test generation approach that incorporates mutation feedback directly into the LLM prompt and iterates to kill additional mutants. Evaluated on 204 subjects from two benchmarks, MUTGEN significantly outperforms both EvoSuite and vanilla prompt-based strategies in mutation score. The study also analyzes why mutants remain live or uncovered and how different mutation operators affect generation effectiveness.

📝 Abstract
Unit tests play a vital role in uncovering potential faults in software. While tools like EvoSuite focus on maximizing code coverage, recent advances in large language models (LLMs) have shifted attention toward LLM-based test generation. However, code coverage metrics -- such as line and branch coverage -- remain overly emphasized in reported research, despite being weak indicators of a test suite's fault-detection capability. In contrast, mutation score offers a more reliable and stringent measure, as demonstrated in our findings where some test suites achieve 100% coverage but only 4% mutation score. Although a few studies consider mutation score, the effectiveness of LLMs in killing mutants remains underexplored. In this paper, we propose MUTGEN, a mutation-guided, LLM-based test generation approach that incorporates mutation feedback directly into the prompt. Evaluated on 204 subjects from two benchmarks, MUTGEN significantly outperforms both EvoSuite and vanilla prompt-based strategies in terms of mutation score. Furthermore, MUTGEN introduces an iterative generation mechanism that pushes the limits of LLMs in killing additional mutants. Our study also provides insights into the limitations of LLM-based generation, analyzing the reasons for live and uncovered mutants, and the impact of different mutation operators on generation effectiveness.
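The coverage-versus-mutation-score contrast in the abstract can be made concrete. A minimal sketch (not from the paper; the representation of mutants and tests is a toy assumption): mutation score is the fraction of seeded mutants that at least one test "kills", i.e. detects by failing.

```python
# Toy sketch of mutation score: killed mutants / total mutants.
# Mutants and tests are stand-ins, not the paper's actual tooling.
def mutation_score(mutants, test_suite):
    """A mutant is killed if any test in the suite detects it."""
    killed = sum(1 for m in mutants if any(test(m) for test in test_suite))
    return killed / len(mutants)

# Four seeded mutants; the single "test" only detects the even ones,
# so half the mutants survive despite the test running on every mutant.
mutants = [1, 2, 3, 4]
suite = [lambda m: m % 2 == 0]
print(mutation_score(mutants, suite))  # 0.5
```

This is how a suite can execute every line (full coverage) while still letting most mutants survive: running code is not the same as checking its behavior.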
Problem

Research questions and friction points this paper is trying to address.

Improving fault detection in LLM-based unit test generation
Addressing weak correlation between code coverage and fault detection
Exploring LLM effectiveness in mutation score improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mutation-guided LLM test generation
Incorporates mutation feedback in prompt
Iterative generation kills more mutants
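The three ideas above combine into one loop. A hypothetical sketch of such a mutation-guided loop (the function names, prompt format, and toy "LLM" are illustrative stand-ins, not MUTGEN's actual implementation): generate tests, run them against the mutants, then re-prompt with the survivors until none remain or an iteration budget is exhausted.

```python
import re

def build_prompt(source, surviving):
    # Mutation feedback in the prompt: surviving mutants are listed so the
    # model can target the faults the current suite misses (illustrative).
    return f"Write tests for {source}; kill these mutants: {surviving}"

def mutation_guided_generation(source, mutants, llm, run_against, max_iters=3):
    tests = llm(build_prompt(source, mutants))
    for _ in range(max_iters):
        surviving = [m for m in mutants if not run_against(tests, m)]
        if not surviving:
            break  # every mutant is killed; stop iterating
        tests += llm(build_prompt(source, surviving))
    return tests

# Toy demo: "mutants" are integers and a "test" kills the mutant equal to it.
# The fake LLM writes one test per round, for the first surviving mutant.
def toy_llm(prompt):
    ids = [int(x) for x in re.findall(r"\d+", prompt)]
    return ids[:1]

def kills(tests, mutant):
    return mutant in tests

print(mutation_guided_generation("demo", [1, 2, 3], toy_llm, kills))  # [1, 2, 3]
```

The design point the sketch illustrates: because each round's prompt is built from the mutants the suite failed to kill, every iteration directs generation at a strictly harder residue, rather than re-sampling tests blindly.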
Guancheng Wang
Research Ireland Centre for Software, University of Limerick
Software Testing and Debugging
Qinghua Xu
Lero Research Centre
Cyber-physical Systems · Testing · Large Language Model · Digital Twin
Lionel C. Briand
Research Ireland Lero Centre, University of Limerick, Ireland, University of Ottawa, Canada
Kui Liu
Software Engineering Application Technology Lab, Huawei, Hangzhou, China