🤖 AI Summary
This work addresses the limitations of existing supervised fine-tuning approaches in code generation, which suffer from data scarcity, high failure rates, and low inference efficiency, as well as the inability of conventional reinforcement learning to adequately explore challenging code branches. To overcome these issues, the authors propose a difficulty-aware reinforcement learning framework that fuses branch coverage and sample difficulty into a unified reward signal. The framework integrates static analysis with constraints on syntactic and functional correctness, augmented by an exponential reward shaping mechanism. With only a 0.6B-parameter model, the method outperforms GPT-3.5 by up to 28.97% in pass rate and 15.08% in branch coverage, while accelerating inference by over 20×.
📝 Abstract
Code verifiers play a critical role in post-verification for LLM-based code generation, yet existing supervised fine-tuning methods suffer from data scarcity, high failure rates, and poor inference efficiency. While reinforcement learning (RL) offers a promising alternative by optimizing models through execution-driven rewards without labeled supervision, our preliminary results show that naive RL with only functionality rewards fails to generate effective unit tests for difficult branches and samples. We first present a theoretical analysis showing that branch coverage, sample difficulty, and syntactic and functional correctness can be jointly modeled as RL rewards, and that optimizing these signals improves the reliability of unit-test-based verification. Guided by this analysis, we design syntax- and functionality-aware rewards and further propose branch- and sample-difficulty-aware RL using exponential reward shaping and static analysis metrics. With this formulation, CVeDRL achieves state-of-the-art performance with only 0.6B parameters, yielding up to 28.97% higher pass rate and 15.08% higher branch coverage than GPT-3.5, while delivering over $20\times$ faster inference than competitive baselines. Code is available at https://github.com/LIGHTCHASER1/CVeDRL.git
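To make the reward formulation concrete, here is a minimal sketch of how syntactic and functional gating, branch coverage, sample difficulty, and exponential shaping might combine into one scalar reward. The paper's exact formula is not reproduced here; the function name `verifier_reward`, the shaping exponent `alpha`, and the difficulty weighting are all hypothetical illustrations.

```python
import math

def verifier_reward(syntax_ok: bool, functional_ok: bool,
                    covered_branches: int, total_branches: int,
                    difficulty: float, alpha: float = 2.0) -> float:
    """Hypothetical difficulty-aware RL reward for a generated unit test.

    - Syntactic and functional correctness act as hard gates (reward 0 on failure).
    - Branch coverage is shaped exponentially so that covering the last,
      hardest branches yields disproportionately larger gains.
    - Sample difficulty (e.g., derived from static-analysis metrics, normalized
      to [0, 1]) scales the reward so harder samples are worth more.
    """
    if not syntax_ok or not functional_ok:
        return 0.0
    coverage = covered_branches / total_branches
    # Exponential reward shaping, normalized to map coverage in [0, 1] to [0, 1].
    shaped = (math.exp(alpha * coverage) - 1.0) / (math.exp(alpha) - 1.0)
    # Weight by sample difficulty: a fully covered hard sample earns up to 2x.
    return shaped * (1.0 + difficulty)
```

One design consequence of the exponential shaping is that the marginal reward for covering an additional branch grows with coverage already achieved, which encourages the policy to keep exploring rare branches rather than plateauing on the easy ones.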