🤖 AI Summary
This work addresses the limitations of existing large language model (LLM)–based mutation testing approaches, which often produce low-quality, redundant, or uncompilable mutants because they rely on fixed or absent few-shot examples and struggle to align with real-world fault semantics. To overcome these challenges, the authors propose SMART, a framework that integrates adaptive retrieval-augmented generation with supervised fine-tuning. By constructing a vector database of real defects, applying focused code chunking, and performing targeted fine-tuning, SMART substantially improves both the validity and diversity of generated mutants. Experiments show that SMART achieves a 65.6% mutant validity rate, a 95.62% non-redundancy rate, and a 92.61% real-fault detection rate with small-scale (7B) models, while raising the average Ochiai coefficient to 38.44%. It also significantly boosts Top-1 accuracy in fault localization, matching or even surpassing GPT-4o.
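The retrieval step described above can be pictured as a nearest-neighbor lookup: embed the code chunk under mutation, find the most similar real defects in the vector database, and use their fixes as few-shot examples. The sketch below is a minimal, hypothetical illustration of that idea; the record fields, embeddings, and `retrieve_similar_bugs` helper are invented for illustration and are not SMART's actual implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_similar_bugs(query_vec, bug_db, k=2):
    """Return the k defect records whose embeddings are closest to the query chunk."""
    ranked = sorted(bug_db, key=lambda rec: cosine(query_vec, rec["embedding"]),
                    reverse=True)
    return ranked[:k]

# Toy 3-d embeddings standing in for real code embeddings (hypothetical data).
bug_db = [
    {"id": "bug-1", "fix": "replace '<' with '<=' in loop guard", "embedding": [0.9, 0.1, 0.0]},
    {"id": "bug-2", "fix": "add null check before method call",   "embedding": [0.1, 0.9, 0.2]},
    {"id": "bug-3", "fix": "off-by-one in array index",           "embedding": [0.8, 0.2, 0.1]},
]

examples = retrieve_similar_bugs([1.0, 0.0, 0.0], bug_db, k=2)
prompt = ("Mutate the following code the way these real bugs changed theirs:\n"
          + "\n".join(e["fix"] for e in examples))
# → retrieves bug-1 and bug-3, whose embeddings lie closest to the query
```

In a real pipeline the embeddings would come from a code-embedding model and the prompt would also carry the focused code chunk itself; the point here is only the adaptive selection of examples per input, as opposed to a fixed few-shot set.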
📝 Abstract
LLM-based mutation testing is a promising testing technique, but existing approaches typically rely on a fixed set of mutations as few-shot examples, or on none at all. This can result in generic, low-quality mutations, missed context-specific mutation patterns, substantial numbers of redundant and uncompilable mutants, and limited semantic similarity to real bugs. To overcome these limitations, we introduce SMART (Semantic Mutation with Adaptive Retrieval and Tuning). SMART integrates retrieval-augmented generation (RAG) over a vectorized dataset of real-world bugs, focused code chunking, and supervised fine-tuning on mutations paired with real-world bugs. We conducted an extensive empirical study of SMART using 1,991 real-world Java bugs from the Defects4J and ConDefects datasets, comparing SMART to the state-of-the-art LLM-based approaches LLMut and LLMorpheus. The results reveal that SMART substantially improves mutation validity, effectiveness, and efficiency, even enabling small-scale (7B) models to match or surpass large models like GPT-4o. We also demonstrate that SMART significantly improves downstream software engineering applications, including test case prioritization and fault localization. More specifically, SMART improves validity (the weighted average generation rate) from 42.89% to 65.6%, raises the non-duplicate rate from 87.38% to 95.62%, and raises the compilable rate from 88.85% to 90.21%. In terms of effectiveness, it achieves a real-bug detection rate of 92.61% (vs. 57.86% for LLMut) and improves the average Ochiai coefficient from 25.61% to 38.44%. For fault localization, SMART ranks 64 more bugs at Top-1 under MUSE and 57 more under Metallaxis.
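The Ochiai coefficient cited in the results is the standard spectrum-based suspiciousness measure: for a code element, it is the number of failing tests that execute it divided by the geometric mean of the total failing tests and all tests that execute it. The sketch below computes it over hypothetical coverage data (the statement names and counts are invented for illustration, not taken from the paper's experiments).

```python
import math

def ochiai(failed_cover: int, passed_cover: int, total_failed: int) -> float:
    """Ochiai suspiciousness of one code element.

    failed_cover: failing tests that execute the element
    passed_cover: passing tests that execute the element
    total_failed: failing tests in the whole suite
    """
    denom = math.sqrt(total_failed * (failed_cover + passed_cover))
    return failed_cover / denom if denom else 0.0

# Hypothetical coverage: statement -> (failing tests covering it, passing tests covering it)
coverage = {"stmt_12": (3, 1), "stmt_7": (1, 5), "stmt_3": (0, 8)}
total_failed = 3

# Rank statements by suspiciousness; the faulty statement should rise to Top-1.
ranking = sorted(coverage, key=lambda s: ochiai(*coverage[s], total_failed),
                 reverse=True)
# → ochiai for stmt_12 is 3/sqrt(3*4) ≈ 0.866, so it ranks first
```

Mutation-based fault localizers such as MUSE and Metallaxis refine this kind of ranking using how mutants interact with the test suite, which is where higher-quality mutants (the paper's claim) translate into more Top-1 localizations.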