🤖 AI Summary
This work addresses the challenge of sparse reward signals in reinforcement learning for code generation, where traditional unit-test-based rewards offer only binary pass/fail feedback and existing external reward models suffer from alignment bias and high computational overhead. The authors propose a novel reinforcement learning framework that introduces, for the first time, a dense reward mechanism grounded entirely in verifiable execution feedback. By dynamically weighting the difficulty of each unit test and integrating global execution outcomes, the method constructs a reward signal that is dense, well aligned with correctness, and computationally efficient. Notably, it eliminates the need for external reward models, achieving significant performance gains across multiple benchmarks (up to an 8.83% improvement in pass@1) while incurring less than 0.02% additional runtime overhead and no extra GPU memory usage.
📝 Abstract
Effective reward design is a central challenge in Reinforcement Learning (RL) for code generation. Mainstream pass/fail outcome rewards enforce functional correctness by executing unit tests, but the resulting sparsity limits potential performance gains. While recent work has explored external Reward Models (RM) to generate richer, continuous rewards, the learned RMs suffer from reward misalignment and prohibitive computational cost. In this paper, we introduce \textbf{VeRPO} (\textbf{V}erifiable D\textbf{e}nse \textbf{R}eward \textbf{P}olicy \textbf{O}ptimization), a novel RL framework for code generation that synthesizes \textit{robust and dense rewards fully grounded in verifiable execution feedback}. The core idea of VeRPO is to construct dense rewards from weighted partial success: by dynamically estimating the difficulty weight of each unit test from execution statistics gathered during training, a dense reward is derived as the sum of the weights of the passed unit tests. To solidify the consistency between partial success and end-to-end functional correctness, VeRPO further integrates this dense signal with global execution outcomes, establishing a robust and dense reward paradigm that relies solely on verifiable execution feedback. Extensive experiments across diverse benchmarks and settings demonstrate that VeRPO consistently outperforms outcome-driven and RM-based baselines, achieving up to +8.83\% gain in pass@1 with negligible time cost (<0.02\%) and zero GPU memory overhead.
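To make the reward construction concrete, the sketch below illustrates one plausible reading of the mechanism the abstract describes: each test's difficulty weight is estimated from running pass/fail statistics (here, a smoothed inverse pass rate, which is an assumption, not the paper's formula), the dense reward is the normalized weighted sum over passed tests, and a global all-tests-pass outcome is blended in with a hypothetical coefficient `alpha`. The class name `DenseTestReward` and all formulas are illustrative, not taken from VeRPO.

```python
from collections import defaultdict


class DenseTestReward:
    """Illustrative sketch of a difficulty-weighted dense reward.

    Assumptions (not from the paper): a test's weight is a
    Laplace-smoothed failure rate, and the global pass/fail
    outcome is mixed in with coefficient `alpha`.
    """

    def __init__(self, alpha: float = 0.5):
        self.alpha = alpha                    # weight of the global outcome term
        self.attempts = defaultdict(int)      # per-test execution counts
        self.passes = defaultdict(int)        # per-test pass counts

    def update_stats(self, results: dict) -> None:
        """Record one rollout's execution results: {test_id: passed?}."""
        for tid, ok in results.items():
            self.attempts[tid] += 1
            self.passes[tid] += int(ok)

    def weight(self, tid) -> float:
        """Harder tests (lower observed pass rate) get larger weight."""
        return (self.attempts[tid] - self.passes[tid] + 1) / (self.attempts[tid] + 2)

    def reward(self, results: dict) -> float:
        """Blend the normalized weighted partial success with the global outcome."""
        total = sum(self.weight(t) for t in results) or 1.0
        dense = sum(self.weight(t) for t, ok in results.items() if ok) / total
        outcome = float(all(results.values()))  # 1 only if every test passes
        return self.alpha * outcome + (1 - self.alpha) * dense
```

Under this reading, a fully passing solution always receives reward 1.0, a fully failing one receives 0.0, and partial passes earn more credit for clearing tests that most rollouts fail, which yields the dense, correctness-aligned signal the abstract describes without any external reward model.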