CoverUp: Coverage-Guided LLM-Based Test Generation

📅 2024-03-24
🏛️ arXiv.org
📈 Citations: 13
Influential: 2
🤖 AI Summary
This work addresses the low line and branch coverage commonly observed in automated regression test generation for Python programs. The authors propose a coverage-guided large language model (LLM) test generation paradigm: dynamic coverage feedback is integrated into prompt construction via a closed-loop framework that combines static code analysis, runtime coverage instrumentation, context-aware prompt construction, and iterative LLM-based test generation and refinement, using models such as CodeLlama. The core contribution is using coverage signals to drive iterative test synthesis, enabling targeted improvement toward specified coverage objectives. Evaluated on a benchmark derived from open-source Python projects, the approach achieves a per-module median line-and-branch coverage of 80% versus CodaMosa's 47%, and an overall line-and-branch coverage of 90% versus MuTAP's 77%. These results support the effectiveness of coupling coverage feedback with LLM-driven test generation.
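The "runtime coverage instrumentation" step can be illustrated with a minimal line tracer built on Python's standard `sys.settrace` hook. This is a toy stand-in, not CoverUp's actual tooling (which also tracks branch coverage); the `clamp` function below is an invented example:

```python
import sys

def trace_lines(func, *args):
    """Record which line numbers of `func` execute during one call.

    A minimal stand-in for runtime coverage instrumentation; real
    coverage tools also record branch arcs, not just lines.
    """
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        # Only record 'line' events inside the function under test.
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# A test that only exercises the "too low" path executes 2 of clamp's
# 5 lines; the uncovered lines are the signal fed back into the prompt.
covered = trace_lines(clamp, -5, 0, 10)
```

The set of lines (and, in the real system, branches) still uncovered after running the generated tests is exactly what gets folded into the next prompt iteration.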

📝 Abstract
Testing is an essential part of software development. Test generation tools attempt to automate the otherwise labor-intensive task of test creation, but generating high-coverage tests remains challenging. This paper proposes CoverUp, a novel approach to driving the generation of high-coverage Python regression tests. CoverUp combines coverage analysis, code context, and feedback in prompts that iteratively guide the LLM to generate tests that improve line and branch coverage. We evaluate our prototype CoverUp implementation across a benchmark of challenging code derived from open-source Python projects and show that CoverUp substantially improves on the state of the art. Compared to CodaMosa, a hybrid search/LLM-based test generator, CoverUp achieves a per-module median line+branch coverage of 80% (vs. 47%). Compared to MuTAP, a mutation- and LLM-based test generator, CoverUp achieves an overall line+branch coverage of 90% (vs. 77%). We also demonstrate that CoverUp's performance stems not only from the LLM used but from the combined effectiveness of its components.
Problem

Research questions and friction points this paper is trying to address.

Automate high-coverage Python test generation
Combine coverage analysis with LLM guidance
Improve line and branch coverage benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Coverage-guided test generation
LLM with iterative feedback
Combined coverage and code context