Consistency Meets Verification: Enhancing Test Generation Quality in Large Language Models Without Ground-Truth Solutions

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of current large language models in automated test case generation: existing approaches typically rely on ground-truth code for validation, so they struggle in test-driven development scenarios where no implementation exists yet, and they remain prone to hallucination. To overcome these challenges, we propose ConVerTest, a two-stage framework that generates reliable test cases without access to real code. ConVerTest uniquely integrates self-consistent generation, chain-of-verification reasoning, and a dual consensus mechanism based on code-test co-execution, enabling unsupervised synthesis of high-fidelity tests. Experimental results on the BIGCODEBENCH and LBPP benchmarks demonstrate substantial improvements over existing methods, with gains of 39% in test effectiveness, 28% in line coverage, and 18% in mutation score, while significantly mitigating model hallucination.

📝 Abstract
Large Language Models (LLMs) have significantly advanced automated test generation, yet existing methods often rely on ground-truth code for verification, risking bug propagation and limiting applicability in test-driven development. We present ConVerTest, a novel two-stage pipeline for synthesizing reliable tests without requiring prior code implementations. ConVerTest integrates three core strategies: (i) Self-Consistency (SC) to generate convergent test cases via majority voting; (ii) Chain-of-Verification (CoVe) for iterative, reasoning-guided code refinement; and (iii) a Dual Execution Agreement to cross-validate code and tests through consensus. Experiments on BIGCODEBENCH and LESS BASIC PYTHON PROBLEMS (LBPP) benchmarks demonstrate that ConVerTest improves test validity, line coverage, and mutation scores by up to 39%, 28%, and 18% respectively over baselines. Our findings highlight ConVerTest as a robust solution for mitigating hallucinations and enhancing the reliability of autonomous software testing agents.
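The core consensus idea in the abstract can be sketched in a few lines: sample several candidate implementations and candidate tests, run every test against every implementation, and keep only the tests the majority of implementations agree on. This is a minimal illustrative sketch, not ConVerTest's actual pipeline; the LLM sampling step is stubbed out with hard-coded strings, and the function names (`passes`, `dual_execution_agreement`, `threshold`) are hypothetical.

```python
def passes(code_src: str, test_src: str) -> bool:
    """Run one candidate test against one candidate implementation."""
    namespace = {}
    try:
        exec(code_src, namespace)  # load the candidate implementation
        exec(test_src, namespace)  # the test's assertions raise on failure
        return True
    except Exception:
        return False

def dual_execution_agreement(codes, tests, threshold=0.5):
    """Keep only tests that pass on at least `threshold` of the candidate codes."""
    kept = []
    for t in tests:
        agreement = sum(passes(c, t) for c in codes) / len(codes)
        if agreement >= threshold:
            kept.append(t)
    return kept

# Toy example: three sampled implementations of add(), one hallucinated.
codes = [
    "def add(a, b): return a + b",
    "def add(a, b): return a + b",
    "def add(a, b): return a - b",  # buggy sample
]
tests = [
    "assert add(2, 3) == 5",   # agrees with the majority -> kept
    "assert add(2, 3) == -1",  # only the buggy sample agrees -> dropped
]
print(dual_execution_agreement(codes, tests))  # → ['assert add(2, 3) == 5']
```

In this toy run, the first test passes on two of three implementations (agreement 0.67) and survives, while the second passes only on the buggy sample (0.33) and is filtered out, mirroring how majority co-execution can suppress hallucinated tests without any ground-truth code.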
Problem

Research questions and friction points this paper is trying to address.

test generation
large language models
ground-truth code
bug propagation
test-driven development
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Consistency
Chain-of-Verification
Dual Execution Agreement
Test Generation
LLM Hallucination Mitigation