🤖 AI Summary
To address the low correctness and poor maintainability of unit tests generated by large language models (LLMs), this paper proposes a collaborative generation method that integrates project context with domain-specific testing knowledge. Our approach features: (1) decoupling test case design from test implementation via a structured, template-driven two-stage generation framework; and (2) leveraging static analysis to extract project structural knowledge, coupled with a multi-perspective prompting strategy that guides LLMs to accurately model testing intent and boundary conditions. Experiments on multiple mainstream open-source projects demonstrate that our method improves the test execution pass rate by 5.69% and line coverage by 8.83% over the strongest baseline. The generated tests are more concise and faster to produce, and a human evaluation confirms significant improvements over baseline methods in readability, maintainability, and functional correctness.
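The static-analysis step above can be illustrated with a minimal sketch. This is not the paper's actual extractor (the paper does not specify its implementation or target language here); it is a hypothetical Python example using the standard-library `ast` module to mine class and function signatures that could serve as prompt context for an LLM.

```python
import ast
import textwrap

def extract_structure(source: str) -> dict:
    """Collect top-level class/function names from source via static analysis.

    Illustrative only: a real extractor would also gather signatures,
    docstrings, usage sites, and dependency information as context.
    """
    tree = ast.parse(source)
    info = {"classes": {}, "functions": []}
    for node in tree.body:  # top-level declarations only
        if isinstance(node, ast.ClassDef):
            methods = [n.name for n in node.body if isinstance(n, ast.FunctionDef)]
            info["classes"][node.name] = methods
        elif isinstance(node, ast.FunctionDef):
            info["functions"].append(node.name)
    return info

sample = textwrap.dedent("""
    class Stack:
        def push(self, x): ...
        def pop(self): ...

    def helper(): ...
""")
print(extract_structure(sample))
# → {'classes': {'Stack': ['push', 'pop']}, 'functions': ['helper']}
```

The extracted structure would then be serialized into the prompt so the model sees the project's real API surface rather than guessing at it.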
📝 Abstract
Automated unit test generation with large language models (LLMs) holds great promise, but LLMs often struggle to produce tests that are both correct and maintainable in real-world projects. This paper presents KTester, a novel framework that integrates project-specific knowledge and testing domain knowledge to enhance LLM-based test generation. Our approach first extracts project structure and usage knowledge through static analysis, providing rich context for the model. Guided by testing domain knowledge, it then separates test case design from test method generation and applies a multi-perspective prompting strategy that leads the LLM to consider diverse testing heuristics. The generated tests follow structured templates, improving clarity and maintainability. We evaluate KTester on multiple open-source projects, comparing it against state-of-the-art LLM-based baselines using automatic correctness and coverage metrics, as well as a human study assessing readability and maintainability. Results demonstrate that KTester significantly outperforms existing methods across six key metrics, improving the execution pass rate by 5.69% and line coverage by 8.83% over the strongest baseline, while requiring less time and generating fewer test cases. Human evaluators also rate KTester's tests significantly higher in correctness, readability, and maintainability, confirming the practical advantages of our knowledge-driven framework.