🤖 AI Summary
Background: Rust unit test coverage remains low; existing search-based software testing (SBST) and symbolic execution techniques struggle with complex branching and external dependencies, and large language model (LLM)-generated tests suffer from high compilation failure rates and limited coverage due to static prompting. Method: This paper proposes an automated test generation approach integrating program analysis and LLMs, featuring a lightweight path constraint extraction mechanism and dynamic context-aware prompt construction. It combines static-dynamic hybrid analysis, Rust AST parsing, coverage-guided feedback loops, and iterative test generation using fine-tuned CodeLlama and DeepSeek-Coder models. Contribution/Results: Evaluated on 10 open-source Rust crates, the approach achieves a mean line coverage of 75.77%, surpassing the human-written baseline (71.30%), with per-project improvements exceeding 50% in the best case. The authors contributed 91 generated test cases, 80 of which were merged into upstream repositories, demonstrating strong practical applicability and high-quality output.
📝 Abstract
Unit testing is essential for ensuring software reliability and correctness. Classic Search-Based Software Testing (SBST) methods and concolic execution-based approaches for generating unit tests often fail to achieve high coverage due to difficulties in handling complex program units, such as branching conditions and external dependencies. Recent work has increasingly utilized large language models (LLMs) to generate test cases, improving the quality of test generation by providing better context and correcting errors in the model's output. However, these methods rely on fixed prompts, resulting in relatively low compilation success rates and coverage. This paper presents PALM, an approach that leverages LLMs to generate high-coverage unit tests. PALM performs program analysis to identify branching conditions within functions, which are then combined into path constraints. These constraints and relevant contextual information are used to construct prompts that guide the LLMs in generating unit tests. We implement the approach and evaluate it on 10 open-source Rust crates. Experimental results show that within just two or three hours, PALM significantly improves test coverage compared to classic methods, with overall project coverage increasing by more than 50% in some instances. Its generated tests achieve an average coverage of 75.77%, comparable to human effort (71.30%), highlighting the potential of LLMs in automated test generation. We submitted 91 PALM-generated unit tests targeting new code. Of these submissions, 80 were accepted, 5 were rejected, and 6 remain pending review. The results demonstrate the effectiveness of integrating program analysis with AI and open new avenues for future research in automated software testing.
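The pipeline described in the abstract (identify branching conditions, combine them into path constraints, embed the constraints in a prompt) can be illustrated with a minimal Rust sketch. This is an assumption-laden toy, not PALM's actual implementation: real PALM parses the Rust AST, whereas this sketch pattern-matches `if` lines textually, and the function names `extract_branch_conditions`, `path_constraints`, and `build_prompt` are hypothetical.

```rust
// Toy illustration of a PALM-style flow: branch conditions -> path
// constraints -> LLM prompt. (Hypothetical names; real PALM uses AST parsing.)

/// Collect the condition of every `if` statement, matched line-by-line.
fn extract_branch_conditions(source: &str) -> Vec<String> {
    source
        .lines()
        .filter_map(|line| {
            line.trim()
                .strip_prefix("if ")
                .map(|rest| rest.trim_end_matches('{').trim().to_string())
        })
        .collect()
}

/// Combine each condition with its negation, yielding 2^n path constraints
/// for n (assumed independent) branches.
fn path_constraints(conds: &[String]) -> Vec<Vec<String>> {
    let mut paths: Vec<Vec<String>> = vec![vec![]];
    for c in conds {
        let mut next = Vec::new();
        for p in &paths {
            let mut taken = p.clone();
            taken.push(c.clone());
            let mut not_taken = p.clone();
            not_taken.push(format!("!({})", c));
            next.push(taken);
            next.push(not_taken);
        }
        paths = next;
    }
    paths
}

/// Build a prompt asking the LLM to cover one specific path.
fn build_prompt(func_name: &str, path: &[String]) -> String {
    format!(
        "Write a Rust unit test for `{}` exercising the path where {}.",
        func_name,
        path.join(" && ")
    )
}

fn main() {
    let src = "fn classify(x: i32) -> &'static str {\n    if x > 10 {\n        \"big\"\n    } else {\n        \"small\"\n    }\n}";
    let conds = extract_branch_conditions(src);
    for p in path_constraints(&conds) {
        println!("{}", build_prompt("classify", &p));
    }
}
```

In this sketch, a single branch yields two prompts (one for `x > 10`, one for `!(x > 10)`), each nudging the model toward a distinct execution path rather than a generic "test this function" request.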