SCAN: Self-Denoising Monte Carlo Annotation for Robust Process Reward Learning

📅 2025-09-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Process reward model (PRM) training is hindered by two key bottlenecks: high human annotation costs and substantial noise in synthetic data. To address these, the paper proposes Self-Denoising Monte Carlo Annotation (SCAN), a labeling framework that combines lightweight-model self-denoising with noise-tolerant learning to suppress the annotation noise inherent in Monte Carlo (MC) estimation. SCAN requires only about 6% of the inference cost of vanilla MC estimation yet outperforms standard MC-based labeling strategies, enabling efficient fine-grained reasoning evaluation under weak supervision. On the ProcessBench benchmark, the method lifts the F1 score from 19.9 to 59.1 (a 39.2-point improvement) while relying solely on a compact synthetic dataset, surpassing strong baselines trained on large-scale human-annotated datasets such as PRM800K. The work demonstrates the feasibility of low-cost, robust PRM training.

📝 Abstract
Process reward models (PRMs) offer fine-grained, step-level evaluations that facilitate deeper reasoning processes in large language models (LLMs), proving effective in complex tasks like mathematical reasoning. However, developing PRMs is challenging due to the high cost and limited scalability of human-annotated data. Synthetic data from Monte Carlo (MC) estimation is a promising alternative but suffers from a high noise ratio, which can cause overfitting and hinder large-scale training. In this work, we conduct a preliminary study on the noise distribution in synthetic data from MC estimation, identifying that annotation models tend to both underestimate and overestimate step correctness due to limitations in their annotation capabilities. Building on these insights, we propose Self-Denoising Monte Carlo Annotation (SCAN), an efficient data synthesis and noise-tolerant learning framework. Our key findings indicate that: (1) Even lightweight models (e.g., 1.5B parameters) can produce high-quality annotations through a self-denoising strategy, enabling PRMs to achieve superior performance with only 6% of the inference cost required by vanilla MC estimation. (2) With our robust learning strategy, PRMs can effectively learn from this weak supervision, achieving a 39.2 F1 score improvement (from 19.9 to 59.1) on ProcessBench. Despite using only a compact synthetic dataset, our models surpass strong baselines, including those trained on large-scale human-annotated datasets such as PRM800K. Furthermore, performance continues to improve as we scale up the synthetic data, highlighting the potential of SCAN for scalable, cost-efficient, and robust PRM training.
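For context on the noise the abstract describes: vanilla MC estimation labels a reasoning step by sampling completions from the partial solution and using the empirical success rate as the step's correctness label. The sketch below is a minimal illustration of that baseline, not the paper's SCAN pipeline; `completer` is a hypothetical stand-in for "sample a full solution from this prefix and check its final answer".

```python
def mc_step_label(prefix_steps, completer, n_rollouts=8):
    """Vanilla MC estimation of step correctness.

    Rolls out `n_rollouts` completions from the partial solution and
    returns the empirical success rate as a soft label for the last
    step in `prefix_steps`. Noise arises because a weak completer can
    fail after a correct step (underestimation) or recover from a
    wrong step (overestimation).
    """
    successes = sum(bool(completer(prefix_steps)) for _ in range(n_rollouts))
    return successes / n_rollouts  # in [0, 1]; often thresholded to 0/1
```

The label quality is bounded by the completer's own ability, which is exactly the capability limitation the paper identifies as the source of both under- and overestimation.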
Problem

Research questions and friction points this paper is trying to address.

Addressing high noise in synthetic data from Monte Carlo estimation
Reducing costly human annotation for process reward models
Preventing overfitting and enabling scalable PRM training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-denoising strategy for Monte Carlo annotation
Efficient data synthesis with noise-tolerant learning
Lightweight models achieving high-quality annotations
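One common form that noise-tolerant learning from MC labels can take is training the PRM against the soft success rate rather than a hard 0/1 label, so that uncertain annotations exert a weaker pull. The sketch below shows this generic soft-label cross-entropy; it is an illustrative assumption, not necessarily the exact objective used in SCAN.

```python
import math

def soft_label_bce(p_pred, q_soft, eps=1e-12):
    """Binary cross-entropy against a soft target.

    p_pred: the PRM's predicted probability that the step is correct.
    q_soft: the MC-estimated success rate in [0, 1], used directly as
            the target instead of being thresholded to 0/1, so noisy
            mid-range annotations contribute a softer training signal.
    """
    return -(q_soft * math.log(p_pred + eps)
             + (1.0 - q_soft) * math.log(1.0 - p_pred + eps))
```

With a hard label (q_soft = 1.0) a confident wrong prediction is punished severely; with a soft target near 0.5 the loss is flat around the middle, which limits overfitting to individual noisy annotations.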