🤖 AI Summary
Process-level reward modeling (PRM) for mathematical reasoning suffers from high annotation costs and inefficient data construction. To address this, we propose an uncertainty-driven framework for automated process reward data generation. Our method leverages model-output uncertainty to guide active sampling and enables lightweight human annotation. Key contributions include: (1) an uncertainty-aware active sampling strategy coupled with efficient annotation protocols; and (2) two novel multi-model output aggregation mechanisms—Hybrid Majority Reward Vote and Weighted Reward Frequency Vote—that jointly harness the robustness of majority voting and the discriminative power of PRMs. Evaluated on the ProcessBench, MATH, and GSMPlus benchmarks, our approach improves PRM training efficiency, reducing annotation cost by ~40%, and boosts inference accuracy by +3.2% on average, while demonstrating strong cross-task generalization.
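The two aggregation mechanisms are described only at a high level here; the paper has the exact formulations. As a rough illustration of the idea, the sketch below shows one plausible reading: a frequency-plus-reward score for each candidate answer, and a hybrid rule that trusts the majority vote when it is decisive and falls back on PRM rewards to break ties. Function names, signatures, and the tie-breaking rule are assumptions for illustration, not the paper's definitions.

```python
from collections import defaultdict

def weighted_reward_frequency_vote(answers, prm_scores):
    """Hypothetical sketch: each candidate final answer accumulates the
    PRM reward of every sampled solution that reaches it, so the winner
    reflects both answer frequency and PRM reward magnitude."""
    totals = defaultdict(float)
    for ans, score in zip(answers, prm_scores):
        totals[ans] += score
    return max(totals, key=totals.get)

def hybrid_majority_reward_vote(answers, prm_scores):
    """Hypothetical sketch: use plain majority vote when it is decisive,
    and summed PRM reward among the tied leaders otherwise."""
    counts = defaultdict(int)
    for ans in answers:
        counts[ans] += 1
    top = max(counts.values())
    leaders = [a for a, c in counts.items() if c == top]
    if len(leaders) == 1:  # clear majority: trust the vote
        return leaders[0]
    totals = defaultdict(float)  # tie: break with summed PRM reward
    for ans, score in zip(answers, prm_scores):
        if ans in leaders:
            totals[ans] += score
    return max(totals, key=totals.get)

# Five sampled solutions, their final answers and per-solution PRM scores.
answers = ["42", "42", "7", "7", "42"]
scores = [0.9, 0.8, 0.2, 0.3, 0.7]
print(weighted_reward_frequency_vote(answers, scores))  # "42"
```

In this toy run, "42" wins both by frequency (3 of 5 samples) and by accumulated reward (2.4 vs. 0.5), matching the stated goal of combining majority-vote robustness with PRM discrimination.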
📝 Abstract
Large language models have demonstrated remarkable capabilities in complex mathematical reasoning tasks, but they inevitably generate errors throughout multi-step solutions. Process-level Reward Models (PRMs) have shown great promise by providing supervision and evaluation at each intermediate step, thereby effectively improving models' reasoning abilities. However, training effective PRMs requires high-quality process reward data, yet existing methods for constructing such data are often labour-intensive or inefficient. In this paper, we propose an uncertainty-driven framework for automated process reward data construction, encompassing both the data generation and annotation processes for PRMs. Additionally, we identify the limitations of both majority vote and PRMs, and introduce two generic uncertainty-aware output aggregation methods, Hybrid Majority Reward Vote and Weighted Reward Frequency Vote, which combine the strengths of majority vote with those of PRMs. Extensive experiments on ProcessBench, MATH, and GSMPlus show the effectiveness and efficiency of the proposed PRM data construction framework, and demonstrate that the two output aggregation methods further improve mathematical reasoning performance across diverse PRMs. The code and data will be publicly available at https://github.com/Jiuzhouh/UnPRM.