🤖 AI Summary
This work addresses the limitations of existing process reward models in code generation, which suffer from coarse-grained step decomposition and noisy intermediate rewards derived from partial solutions. To overcome these issues, the authors propose a modular generation paradigm that treats functions as fundamental reasoning units, framing function calls as structured reasoning steps. They further introduce a meta-learning-driven reward correction mechanism that leverages unit tests, which provide ground-truth signals on final program correctness, to denoise intermediate rewards. This approach significantly enhances the accuracy and practical utility of process reward models, outperforming current test-time scaling methods on LiveCodeBench and BigCodeBench. When integrated with o4-mini, it achieves state-of-the-art performance, generating code that is not only more correct but also more readable and reusable.
📝 Abstract
Code generation is a core application of large language models (LLMs), yet LLMs still frequently fail on complex programming tasks. Given their success in mathematical reasoning, test-time scaling approaches such as Process Reward Model (PRM)-based Best-of-N selection offer a promising way to improve performance. However, existing PRMs remain ineffective for code generation due to the lack of meaningful step decomposition in code and the noise of Monte Carlo-estimated partial-solution correctness scores (rewards). To address these challenges, we propose FunPRM. FunPRM prompts LLMs to generate modular code organized into functions, treating each function as a PRM reasoning step. Furthermore, FunPRM introduces a novel meta-learning-based reward correction mechanism that leverages clean final-solution rewards, obtained via a unit-test-based evaluation system, to purify noisy partial-solution rewards. Experiments on LiveCodeBench and BigCodeBench demonstrate that FunPRM consistently outperforms existing test-time scaling methods across five base LLMs, notably achieving state-of-the-art performance on LiveCodeBench when combined with o4-mini. In addition, FunPRM produces code that is more readable and reusable for developers.
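To make the selection scheme concrete, here is a minimal sketch of PRM-guided Best-of-N selection with functions treated as reasoning steps, as the abstract describes. The step scorer below (`toy_step_reward`) is a hypothetical stand-in for the learned reward model, not the paper's actual PRM; `split_into_functions` and `best_of_n` are illustrative names as well.

```python
import ast
from typing import List

def split_into_functions(code: str) -> List[str]:
    """Treat each top-level function as one reasoning step (FunPRM's modular view)."""
    tree = ast.parse(code)
    return [ast.get_source_segment(code, node)
            for node in tree.body
            if isinstance(node, ast.FunctionDef)]

def toy_step_reward(step: str) -> float:
    """Hypothetical stand-in for a learned PRM: favors documented, short functions."""
    fn = ast.parse(step).body[0]
    score = 0.5 if ast.get_docstring(fn) else 0.0
    score += max(0.0, 1.0 - len(step.splitlines()) / 20)  # mild length penalty
    return score

def best_of_n(candidates: List[str]) -> str:
    """Return the candidate program whose function-level steps score highest on average."""
    def program_score(code: str) -> float:
        steps = split_into_functions(code)
        if not steps:
            return float("-inf")  # non-modular candidates rank last
        return sum(toy_step_reward(s) for s in steps) / len(steps)
    return max(candidates, key=program_score)
```

In the paper's setting, `toy_step_reward` would be replaced by the trained FunPRM, whose intermediate rewards have been denoised against unit-test outcomes on the final program.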