🤖 AI Summary
This paper addresses the problem that classic automated unit test generation techniques produce unrealistic test inputs and unreliable assertions because they lack semantic understanding of the code under test, a gap that large language models (LLMs) can fill. To analyze this landscape, the authors systematically survey 115 studies and propose the first unified, lifecycle-spanning taxonomy for LLM-based unit test generation. Methodologically, they model LLMs as stochastic generators requiring engineering constraints, categorizing techniques into five dimensions: context enhancement, prompt engineering, assertion synthesis, iterative validation and repair, and pre-/post-processing quality assurance. Results show that 89% of existing works rely on prompt engineering; iterative validation significantly improves compilation and execution pass rates; yet defect detection capability remains weak, and standardized evaluation benchmarks are lacking. The study further identifies key research trajectories toward autonomous testing agents and hybrid testing systems.
📝 Abstract
Unit testing is an essential yet laborious technique for verifying software and mitigating regression risks. Although classic automated methods effectively explore program structures, they often lack the semantic information required to produce realistic inputs and assertions. Large Language Models (LLMs) address this limitation by leveraging their data-driven knowledge of code semantics and programming patterns. To analyze the state of the art in this domain, we conducted a systematic literature review of 115 publications published between May 2021 and August 2025. We propose a unified taxonomy based on the unit test generation lifecycle that treats LLMs as stochastic generators requiring systematic engineering constraints. This framework analyzes the literature in terms of core generative strategies and a set of enhancement techniques ranging from pre-generation context enrichment to post-generation quality assurance. Our analysis reveals that prompt engineering has emerged as the dominant utilization strategy, accounting for 89% of the studies due to its flexibility. We find that iterative validation and repair loops have become the standard mechanism for ensuring robust usability, yielding significant improvements in compilation and execution pass rates. However, critical challenges remain: generated tests exhibit weak fault detection capabilities, and standardized evaluation benchmarks are lacking. We conclude with a roadmap for future research that emphasizes the progression towards autonomous testing agents and hybrid systems combining LLMs with traditional software engineering tools. This survey provides researchers and practitioners with a comprehensive perspective on converting the potential of LLMs into industrial-grade testing solutions.