Mutation Testing via Iterative Large Language Model-Driven Scientific Debugging

📅 2025-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle to generate semantically precise "killing" tests, i.e., tests that reliably distinguish non-equivalent mutants from the original program, in mutation testing. Method: This paper proposes an iterative LLM-based test generation framework grounded in the Scientific Debugging paradigm (hypothesis–experiment–reflection): the model formulates hypotheses about how a specific mutant changes program behavior, generates targeted tests, analyzes execution results, and refines its strategy until the mutant is killed, explaining each step along the way. Contribution/Results: Compared against three baselines, including search-based test generation with Pynguin, the method achieves higher mutation scores, code coverage, and fault detection, albeit at higher computation cost, and the results show that iterative test refinement is critical for test quality. The step-by-step explanations also make the process interpretable, for example when flagging potentially equivalent mutants.


📝 Abstract
Large Language Models (LLMs) can generate plausible test code. Intuitively, they generate such code by imitating tests seen in their training data rather than by reasoning about execution semantics. However, such reasoning is important when applying mutation testing, where individual tests need to demonstrate differences in program behavior between a program and specific artificial defects (mutants). In this paper, we evaluate whether Scientific Debugging, which has been shown to help LLMs when debugging, can also help them to generate tests for mutants. In the resulting approach, LLMs form hypotheses about how to kill specific mutants, and then iteratively generate and refine tests until they succeed, all with detailed explanations for each step. We compare this method to three baselines: (1) directly asking the LLM to generate tests, (2) repeatedly querying the LLM when tests fail, and (3) search-based test generation with Pynguin. Our experiments evaluate these methods based on several factors, including mutation score, code coverage, success rate, and the ability to identify equivalent mutants. The results demonstrate that LLMs, although requiring higher computation cost, consistently outperform Pynguin in generating tests with better fault detection and coverage. Notably, we observe that iterative refinement of test cases is essential for achieving high-quality test suites.
Problem

Research questions and friction points this paper is trying to address.

Evaluating whether Scientific Debugging improves LLM-generated mutant-killing tests
Comparing iterative LLM-driven test refinement with baseline methods
Assessing test quality via mutation score, coverage, and fault detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven iterative test refinement
Scientific Debugging applied to mutation testing
Hypothesis-based test generation
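The hypothesis–experiment–reflection loop can be sketched as follows. This is a minimal sketch, not the paper's implementation: the LLM is replaced by a stub that proposes candidate inputs, and all names here are hypothetical.

```python
# Hedged sketch of an iterative loop for killing a mutant. In the
# paper's approach, an LLM would also state a natural-language
# hypothesis and reflect on failed experiments; here a stub simply
# proposes the next candidate input.

def original(x):
    return abs(x)

def mutant(x):
    return x  # artificial defect: negation of negative inputs dropped

def propose_input(history):
    # Stand-in for the LLM: propose the next test input, given the
    # inputs that have already failed to distinguish the mutant.
    candidates = [0, 1, -1]
    return candidates[len(history)]

def try_to_kill(orig, mut, budget=3):
    history = []
    for _ in range(budget):
        x = propose_input(history)
        if orig(x) != mut(x):
            return x            # experiment succeeded: mutant killed
        history.append(x)       # reflection: record the failed attempt
    return None                 # budget exhausted; mutant may be equivalent
```

Here `try_to_kill(original, mutant)` returns `-1`, the first proposed input on which the two implementations disagree; exhausting the budget without success is the signal for a potentially equivalent mutant.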