KTester: Leveraging Domain and Testing Knowledge for More Effective LLM-based Test Generation

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low correctness and poor maintainability of unit tests generated by large language models (LLMs), this paper proposes a collaborative generation method that integrates project context with domain-specific testing knowledge. The approach features: (1) decoupling test-case design from implementation via a structured, template-driven two-stage generation framework; and (2) leveraging static analysis to extract project structural knowledge, coupled with a multi-perspective prompting strategy that guides LLMs to accurately model testing intent and boundary conditions. Experiments on multiple mainstream open-source projects show that the method improves test execution pass rate by 5.69% and line coverage by 8.83%. Generated tests are more concise and faster to produce, and a human evaluation confirms significant improvements over baseline methods in readability, maintainability, and functional correctness.

📝 Abstract
Automated unit test generation using large language models (LLMs) holds great promise but often struggles with generating tests that are both correct and maintainable in real-world projects. This paper presents KTester, a novel framework that integrates project-specific knowledge and testing domain knowledge to enhance LLM-based test generation. Our approach first extracts project structure and usage knowledge through static analysis, which provides rich context for the model. It then employs a testing-domain-knowledge-guided separation of test case design and test method generation, combined with a multi-perspective prompting strategy that guides the LLM to consider diverse testing heuristics. The generated tests follow structured templates, improving clarity and maintainability. We evaluate KTester on multiple open-source projects, comparing it against state-of-the-art LLM-based baselines using automatic correctness and coverage metrics, as well as a human study assessing readability and maintainability. Results demonstrate that KTester significantly outperforms existing methods across six key metrics, improving execution pass rate by 5.69% and line coverage by 8.83% over the strongest baseline, while requiring less time and generating fewer test cases. Human evaluators also rate the tests produced by KTester significantly higher in terms of correctness, readability, and maintainability, confirming the practical advantages of our knowledge-driven framework.
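The abstract's first step, extracting project structure through static analysis to give the model context, can be sketched minimally. This is our illustration under assumptions (Python's `ast` module standing in for whatever analyzer KTester uses; `extract_structure` is a hypothetical helper):

```python
# Minimal static-analysis sketch: walk a module's AST and collect
# "Class.method(args)" entries to serve as structural prompt context.
import ast

SOURCE = '''
class Account:
    def deposit(self, amount: int) -> int: ...
    def withdraw(self, amount: int) -> int: ...
'''


def extract_structure(source: str) -> list[str]:
    """Return one "Class.method(args)" entry per method in the module."""
    tree = ast.parse(source)
    entries = []
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            for item in node.body:
                if isinstance(item, ast.FunctionDef):
                    args = ", ".join(a.arg for a in item.args.args)
                    entries.append(f"{node.name}.{item.name}({args})")
    return entries


context = extract_structure(SOURCE)
print(context)  # → ['Account.deposit(self, amount)', 'Account.withdraw(self, amount)']
```

A real system would also gather usage knowledge (call sites, typical argument values), but even signatures alone give the LLM concrete names to target instead of hallucinated APIs.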
Problem

Research questions and friction points this paper is trying to address.

Improving correctness and maintainability of LLM-generated unit tests
Integrating project-specific knowledge with testing domain expertise
Addressing limitations of automated test generation in real-world projects
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates project-specific and testing domain knowledge
Uses static analysis to extract project structure context
Employs multi-perspective prompting with testing heuristics
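The multi-perspective prompting idea above can be sketched as one focused prompt per testing heuristic. The perspective names and prompt wording here are assumptions, not the paper's actual prompts:

```python
# Illustrative multi-perspective prompting: rather than one generic request,
# issue one prompt per testing heuristic so the model covers distinct
# behavior categories instead of only the happy path.
PERSPECTIVES = {
    "normal": "Exercise typical, valid inputs.",
    "boundary": "Probe limits: empty, zero, maximum size, off-by-one.",
    "exception": "Trigger and assert on documented error paths.",
}


def build_prompts(method_signature: str, project_context: str) -> list[str]:
    """One prompt per perspective, each carrying the same project context."""
    return [
        f"Context:\n{project_context}\n\n"
        f"Target: {method_signature}\n"
        f"Perspective ({name}): {hint}\n"
        "Design test cases for this perspective only."
        for name, hint in PERSPECTIVES.items()
    ]


prompts = build_prompts("int parsePort(String s)", "class Config { ... }")
print(len(prompts))
```

Splitting by perspective trades more LLM calls for systematic coverage of boundary and error behavior, which is consistent with the coverage gains the paper reports.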
Anji Li
Sun Yat-sen University
AI4SE, software testing
Mingwei Liu
Rutgers University
China labor, high performance work systems
Zhenxi Chen
Sun Yat-sen University
Zheng Pei
Sun Yat-sen University
Zike Li
Sun Yat-sen University
Dekun Dai
Sun Yat-sen University
Yanlin Wang
Sun Yat-sen University
Zibin Zheng
IEEE Fellow, Highly Cited Researcher, Sun Yat-sen University, China
Blockchain, Smart Contract, Services Computing, Software Reliability