FunPRM: Function-as-Step Process Reward Model with Meta Reward Correction for Code Generation

📅 2026-01-29
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing process reward models in code generation, which suffer from coarse-grained step decomposition and noisy intermediate rewards derived from partial solutions. To overcome these issues, the authors propose a modular generation paradigm that treats functions as fundamental reasoning units, framing function calls as structured reasoning steps. They further introduce a meta-learning-driven reward correction mechanism that leverages unit tests—providing ground-truth signals on final program correctness—to denoise intermediate rewards. This approach significantly enhances the accuracy and practical utility of process reward models, outperforming current test-time scaling methods on LiveCodeBench and BigCodeBench. When integrated with O4-mini, it achieves state-of-the-art performance, generating code that is not only more correct but also more readable and reusable.
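The function-as-step idea can be sketched in a few lines: split each candidate program into its function definitions, score each function with a process reward model, and keep the highest-scoring candidate (Best-of-N). This is a minimal illustration, not the paper's implementation; `function_steps`, `best_of_n`, and the `prm_score` callable are hypothetical names, and the PRM itself is stubbed out as an arbitrary scoring function.

```python
import ast

def function_steps(source: str) -> list[str]:
    """Split a candidate program into its top-level function definitions,
    treating each function as one PRM reasoning step (illustrative)."""
    tree = ast.parse(source)
    return [ast.get_source_segment(source, node)
            for node in tree.body
            if isinstance(node, ast.FunctionDef)]

def best_of_n(candidates: list[str], prm_score) -> str:
    """Best-of-N selection: score each candidate by the mean PRM reward
    over its function-level steps and return the best program.
    Candidates with no functions score -inf under this sketch."""
    def program_score(src: str) -> float:
        steps = function_steps(src)
        if not steps:
            return float("-inf")
        return sum(prm_score(s) for s in steps) / len(steps)
    return max(candidates, key=program_score)
```

In the paper's setting, `prm_score` would be the trained FunPRM model; here any callable mapping a function body to a float works, which is enough to see how modular decomposition turns a program into scoreable steps.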

📝 Abstract
Code generation is a core application of large language models (LLMs), yet LLMs still frequently fail on complex programming tasks. Given its success in mathematical reasoning, test-time scaling approaches such as Process Reward Model (PRM)-based Best-of-N selection offer a promising way to improve performance. However, existing PRMs remain ineffective for code generation due to the lack of meaningful step decomposition in code and the noise of Monte Carlo-estimated partial-solution correctness scores (rewards). To address these challenges, we propose FunPRM. FunPRM prompts LLMs to encourage modular code generation organized into functions, with functions treated as PRM reasoning steps. Furthermore, FunPRM introduces a novel meta-learning-based reward correction mechanism that leverages clean final-solution rewards obtained via a unit-test-based evaluation system to purify noisy partial-solution rewards. Experiments on LiveCodeBench and BigCodeBench demonstrate that FunPRM consistently outperforms existing test-time scaling methods across five base LLMs, notably achieving state-of-the-art performance on LiveCodeBench when combined with O4-mini. Furthermore, FunPRM produces code that is more readable and reusable for developers.
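The reward-correction step described above can be illustrated with a toy denoising rule: pull each noisy Monte Carlo step reward toward the clean, unit-test-derived final-solution reward. This is a deliberately simplified sketch under stated assumptions; the fixed blending weight `alpha` stands in for what the paper frames as a meta-learned correction, and `correct_step_rewards` is a hypothetical name, not the authors' API.

```python
def correct_step_rewards(noisy_rewards: list[float],
                         final_reward: float,
                         alpha: float = 0.5) -> list[float]:
    """Blend each noisy partial-solution (step) reward with the clean
    final-solution reward obtained from unit tests.

    alpha=0 keeps the noisy rewards untouched; alpha=1 replaces them
    entirely with the final reward. In the paper's framing this weight
    would be produced by a meta-learning mechanism rather than fixed."""
    return [(1 - alpha) * r + alpha * final_reward for r in noisy_rewards]
```

For example, if a solution passes all unit tests (`final_reward = 1.0`) but Monte Carlo estimation gave one of its functions a reward of 0.0, the corrected step rewards are pulled upward, reducing the label noise the PRM is trained on.
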
Problem

Research questions and friction points this paper is trying to address.

code generation
process reward model
step decomposition
reward noise
test-time scaling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Function-as-Step
Process Reward Model
Meta Reward Correction
Modular Code Generation
Test-Time Scaling
👥 Authors
Ruiyi Zhang (University of California, San Diego)
Peijia Qin (University of California, San Diego)
Qi Cao (PhD@UCSD ECE; Large Language Models, Machine Learning, Reinforcement Learning)
Eric Xue (University of California, San Diego)
Pengtao Xie (Associate Professor, UC San Diego; Adjunct Faculty, MBZUAI; Machine Learning)