🤖 AI Summary
This work addresses the challenge that existing unit test generation methods struggle to simultaneously ensure assertion correctness and interpretability, largely due to the scarcity of high-quality training data with chain-of-thought reasoning. To overcome this limitation, the authors propose a self-debugging-based data distillation approach that integrates error diagnosis, iterative repair guided by errors, failures, or coverage, and chain-of-thought compression to synthesize a high-quality dataset of 74,518 samples. This dataset is then used for supervised fine-tuning of large language models. The resulting model outperforms state-of-the-art commercial counterparts such as o4-mini, achieving superior assertion pass rate (36.17%), branch coverage (43.90%), and mutation score (88.66%), demonstrating the effectiveness and novelty of the proposed methodology.
📄 Abstract
Automatic unit test (UT) generation is essential for software quality assurance, but existing approaches--including symbolic execution, search-based techniques, and recent LLM-based generators--struggle to produce human-quality tests with correct, meaningful assertions and reliable chain-of-thought (CoT) explanations. We identify a gap in UT training data: repository-mined tests lack developer CoTs, while LLM-distilled CoTs are often incorrect or incomplete. To address this issue, we propose a novel data-distillation approach that uses self-debugging to produce high-quality UT training examples paired with faithful CoTs. Our approach combines (1) guided test repair, a heuristic loop of error-, failure-, and coverage-focused steps that asks the generating model to diagnose and iteratively fix its tests, and (2) CoT compression, which compacts the original and debugging CoTs into concise explanations that directly justify the correct tests. We apply this pipeline to a large corpus of open-source projects to construct a dataset of 74,518 high-quality examples, which we then use for supervised fine-tuning of a base model. An empirical evaluation shows that the fine-tuned model achieves high UT generation effectiveness: it attains a pass rate of 36.17% on test assertions, a branch coverage of 43.90%, and a mutation score of 88.66%, substantially exceeding state-of-the-art commercial models such as o4-mini.
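The guided-repair loop described above can be sketched as follows. This is a minimal, hypothetical illustration of the control flow only: the function names (`run_tests`, `repair`, `guided_repair`), the toy execution and repair stand-ins, and the thresholds are all assumptions for exposition, not the authors' implementation, which prompts an LLM with real diagnostic feedback.

```python
# Hypothetical sketch of an error-, failure-, then coverage-focused repair
# loop. All names and the toy "model" here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class RunResult:
    error: bool       # compilation/runtime error in the generated test
    failed: bool      # an assertion failed
    coverage: float   # branch coverage of the focal code


def run_tests(test_code: str) -> RunResult:
    # Toy stand-in for executing the generated test suite; keyword checks
    # simulate the three diagnostic outcomes.
    if "import" not in test_code:
        return RunResult(error=True, failed=False, coverage=0.0)
    if "assert" not in test_code:
        return RunResult(error=False, failed=True, coverage=0.3)
    return RunResult(error=False, failed=False,
                     coverage=0.9 if "branch" in test_code else 0.5)


def repair(test_code: str, feedback: str) -> str:
    # Toy stand-in for the LLM repair step; a real pipeline would prompt
    # the model with the concrete error message, failure, or coverage gap.
    fixes = {"error": "import ", "failure": "assert ", "coverage": "branch "}
    return fixes[feedback] + test_code


def guided_repair(test_code: str, max_rounds: int = 4,
                  target_coverage: float = 0.8) -> tuple[str, RunResult]:
    """Iteratively diagnose and fix a generated test, most severe issue first."""
    result = run_tests(test_code)
    for _ in range(max_rounds):
        if result.error:
            test_code = repair(test_code, "error")
        elif result.failed:
            test_code = repair(test_code, "failure")
        elif result.coverage < target_coverage:
            test_code = repair(test_code, "coverage")
        else:
            break  # test compiles, passes, and covers enough branches
        result = run_tests(test_code)
    return test_code, result
```

The ordering reflects the severity hierarchy implied by the abstract: a test that does not run cannot be judged on assertions, and a failing test cannot be judged on coverage, so each repair round targets the first unmet criterion.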