Rethinking Verification for LLM Code Generation: From Generation to Testing

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing code-generation benchmarks (e.g., HumanEval, LiveCodeBench) suffer from homogeneous and insufficient test cases, leading to overestimation of LLM performance and unreliable reward signals in reinforcement learning with verifiable rewards (RLVR). This work identifies inadequate test sufficiency as the root cause of reward-signal distortion and proposes a "generate-verify" collaborative evaluation paradigm. The authors introduce multi-dimensional test-sufficiency metrics and design SAGA, a human-LLM collaborative test-generation framework that integrates programming expertise, adversarial sample construction, and dynamic coverage feedback. Evaluated on the newly constructed benchmark TCGBench, SAGA achieves a 90.62% defect detection rate and a 32.58% verifier accuracy. Moreover, the verifier accuracy of the evaluation benchmark synthesized by SAGA is 10.78% higher than that of LiveCodeBench-v6.
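The defect detection rate reported above can be made concrete with a small sketch. The following is an illustrative reading, not the paper's exact definition: a test suite is a set of inputs, a buggy solution is "detected" if it disagrees with the reference solution on at least one input, and the detection rate is the fraction of buggy solutions detected.

```python
def detection_rate(test_inputs, buggy_solutions, reference):
    """Fraction of buggy solutions that fail at least one test.

    test_inputs:     iterable of inputs forming the test suite
    buggy_solutions: candidate functions known to contain defects
    reference:       ground-truth solution used as the oracle
    """
    detected = sum(
        any(buggy(x) != reference(x) for x in test_inputs)
        for buggy in buggy_solutions
    )
    return detected / len(buggy_solutions)


# Toy example (hypothetical problem: compute abs(x)).
reference = abs
buggy = [lambda x: x, lambda x: x * x]  # both wrong on some inputs

weak_suite = [0, 1, 2]     # homogeneous, non-negative inputs
strong_suite = [-1, 2]     # includes an adversarial negative input

print(detection_rate(weak_suite, buggy, reference))    # misses lambda x: x
print(detection_rate(strong_suite, buggy, reference))  # catches both bugs
```

The toy example shows the benchmark pathology the paper targets: a homogeneous suite of non-negative inputs lets the `lambda x: x` defect pass silently, while a single adversarial input exposes it.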

📝 Abstract
Large language models (LLMs) have recently achieved notable success in code-generation benchmarks such as HumanEval and LiveCodeBench. However, a detailed examination reveals that these evaluation suites often comprise only a limited number of homogeneous test cases, resulting in subtle faults going undetected. This not only artificially inflates measured performance but also compromises accurate reward estimation in reinforcement learning frameworks utilizing verifiable rewards (RLVR). To address these critical shortcomings, we systematically investigate the test-case generation (TCG) task by proposing multi-dimensional metrics designed to rigorously quantify test-suite thoroughness. Furthermore, we introduce a human-LLM collaborative method (SAGA), leveraging human programming expertise with LLM reasoning capability, aimed at significantly enhancing both the coverage and the quality of generated test cases. In addition, we develop TCGBench to facilitate the study of the TCG task. Experiments show that SAGA achieves a detection rate of 90.62% and a verifier accuracy of 32.58% on TCGBench. The Verifier Accuracy (Verifier Acc) of the code-generation evaluation benchmark synthesized by SAGA is 10.78% higher than that of LiveCodeBench-v6. These results demonstrate the effectiveness of our proposed method. We hope this work contributes to building a scalable foundation for reliable LLM code evaluation, further advancing RLVR in code generation, and paving the way for automated adversarial test synthesis and adaptive benchmark integration.
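The "verifier accuracy" metric in the abstract treats a test suite as a binary verifier for candidate solutions. The sketch below is an assumed reading (the paper's precise definition is not given here): a candidate passes if it matches the reference on every test input, and the verifier is accurate on a candidate when that pass/fail verdict agrees with the candidate's ground-truth correctness label.

```python
def verifier_accuracy(test_inputs, labeled_candidates, reference):
    """Fraction of candidates the test suite classifies correctly.

    test_inputs:        iterable of inputs forming the test suite
    labeled_candidates: list of (function, is_correct) pairs, where
                        is_correct is the ground-truth label
    reference:          ground-truth solution used as the oracle
    """
    agree = 0
    for candidate, is_correct in labeled_candidates:
        passes = all(candidate(x) == reference(x) for x in test_inputs)
        agree += (passes == is_correct)
    return agree / len(labeled_candidates)


# Toy example (hypothetical problem: compute abs(x)).
reference = abs
candidates = [
    (abs, True),             # correct solution
    (lambda x: x, False),    # wrong for negative inputs
    (lambda x: -x, False),   # wrong for positive inputs
]

print(verifier_accuracy([1], candidates, reference))      # weak suite
print(verifier_accuracy([-1, 1], candidates, reference))  # thorough suite
```

Under this reading, the weak single-input suite wrongly accepts `lambda x: x`, while adding a negative input yields a fully accurate verifier, which is the sense in which more thorough suites give more reliable RLVR reward signals.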
Problem

Research questions and friction points this paper is trying to address.

Improving test-case generation for reliable LLM code evaluation
Addressing limited test cases in current code-generation benchmarks
Enhancing verifier accuracy and fault detection in LLM-generated code
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-dimensional metrics for test-suite thoroughness
Human-LLM collaboration method (SAGA)
TCGBench for test-case generation study