VeRPO: Verifiable Dense Reward Policy Optimization for Code Generation

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of sparse reward signals in reinforcement learning for code generation, where traditional unit-test-based rewards offer only binary pass/fail feedback and existing external reward models suffer from alignment bias and high computational overhead. The authors propose a novel reinforcement learning framework that introduces, for the first time, a dense reward mechanism based entirely on verifiable execution feedback. By dynamically weighting the difficulty of passed tests and integrating global execution outcomes, the method constructs a reward signal that is dense, well-aligned with correctness, and computationally efficient. Notably, it eliminates the need for external reward models, achieving significant performance gains across multiple benchmarks—up to an 8.83% improvement in pass@1—while incurring less than 0.02% additional runtime overhead and no extra GPU memory usage.

📝 Abstract
Effective reward design is a central challenge in Reinforcement Learning (RL) for code generation. Mainstream pass/fail outcome rewards enforce functional correctness via executing unit tests, but the resulting sparsity limits potential performance gains. While recent work has explored external Reward Models (RM) to generate richer, continuous rewards, the learned RMs suffer from reward misalignment and prohibitive computational cost. In this paper, we introduce VeRPO (Verifiable Dense Reward Policy Optimization), a novel RL framework for code generation that synthesizes robust and dense rewards fully grounded in verifiable execution feedback. The core idea of VeRPO is constructing dense rewards from weighted partial success: by dynamically estimating the difficulty weight of each unit test based on the execution statistics during training, a dense reward is derived from the sum of weights of the passed unit tests. To solidify the consistency between partial success and end-to-end functional correctness, VeRPO further integrates the dense signal with global execution outcomes, establishing a robust and dense reward paradigm relying solely on verifiable execution feedback. Extensive experiments across diverse benchmarks and settings demonstrate that VeRPO consistently outperforms outcome-driven and RM-based baselines, achieving up to +8.83% gain in pass@1 with negligible time cost (<0.02%) and zero GPU memory overhead.
Problem

Research questions and friction points this paper is trying to address.

reward design
code generation
reinforcement learning
sparse reward
reward misalignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

dense reward
verifiable execution feedback
policy optimization
code generation
reinforcement learning
Longwen Wang
Institute of Artificial Intelligence, China Telecom (TeleAI)
Xuan'er Wu
Institute of Artificial Intelligence, China Telecom (TeleAI)
Xiaohui Hu
Institute of Artificial Intelligence, China Telecom (TeleAI)
Yirui Liu
Institute of Artificial Intelligence, China Telecom (TeleAI)
Yuankai Fan
Institute of Artificial Intelligence, China Telecom (TeleAI)
Kaidong Yu
Institute of Artificial Intelligence, China Telecom (TeleAI)
Qizhen Weng
Hong Kong University of Science and Technology
Machine Learning Systems · AI Infrastructure · Cloud Computing
Wei Xi
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Xi'an Jiaotong University
Xuelong Li
Institute of Artificial Intelligence, China Telecom (TeleAI)