From Outcomes to Processes: Guiding PRM Learning from ORM for Inference-Time Alignment

📅 2025-06-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing inference-time alignment methods rely on outcome reward models (ORMs), which output a single scalar reward for a complete response. This is insufficient for reward-guided search (RGS), which requires fine-grained, process-level guidance, and the mismatch leads to scores that diverge from human preferences. This work first formalizes the dual objectives of *score consistency* and *preference consistency*, then proposes SP-PRM, a fully automated framework that eliminates the need for human annotation. SP-PRM distills an ORM into a process reward model (PRM), introduces a partial-sequence consistency evaluation module, and incorporates preference-consistency regularization to bridge outcome and process rewards. Integrated into RGS, SP-PRM achieves consistent improvements across dialogue, summarization, and reasoning tasks, with GPT-4 evaluations showing a 3.6%-10.3% improvement in alignment scores and gains in both alignment fidelity and search efficiency.
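The summary states that SP-PRM distills an ORM into a PRM without human annotation, but the exact distillation objective is not reproduced on this page. Below is a minimal sketch, assuming one plausible instantiation: label randomly truncated response prefixes with ORM scores so that a PRM can be trained on partial sequences. The names `build_prm_targets` and `orm_score`, and the truncation scheme, are hypothetical placeholders, not SP-PRM's actual procedure.

```python
import random
from typing import Callable, List, Tuple

def build_prm_targets(
    examples: List[Tuple[str, str]],            # (prompt, full_response) pairs
    orm_score: Callable[[str, str], float],     # assumed ORM scoring function
    cuts_per_response: int = 4,
) -> List[Tuple[str, str, float]]:
    """Label randomly truncated prefixes of each response with an ORM score,
    yielding (prompt, prefix, target) triples for training a PRM on partial sequences."""
    data: List[Tuple[str, str, float]] = []
    for prompt, response in examples:
        tokens = response.split()
        if not tokens:
            continue
        for _ in range(cuts_per_response):
            cut = random.randint(1, len(tokens))   # random truncation point
            prefix = " ".join(tokens[:cut])
            target = orm_score(prompt, prefix)     # ORM score reused as the PRM's regression target
            data.append((prompt, prefix, target))
    return data
```

In practice these triples would feed a standard regression or ranking objective over the PRM's prefix scores; the paper's score-consistency objective would then constrain how prefix scores relate to full-response scores, which this sketch does not capture.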

📝 Abstract
Inference-time alignment methods have gained significant attention for their efficiency and effectiveness in aligning large language models (LLMs) with human preferences. However, existing dominant approaches using reward-guided search (RGS) primarily rely on outcome reward models (ORMs), which suffer from a critical granularity mismatch: ORMs are designed to provide outcome rewards for complete responses, while RGS methods rely on process rewards to guide the policy, leading to inconsistent scoring and suboptimal alignment. To address this challenge, we introduce process reward models (PRMs) into RGS and argue that an ideal PRM should satisfy two objectives: Score Consistency, ensuring coherent evaluation across partial and complete responses, and Preference Consistency, aligning partial sequence assessments with human preferences. Based on these, we propose SP-PRM, a novel dual-consistency framework integrating score consistency-based and preference consistency-based partial evaluation modules without relying on human annotation. Extensive experiments on dialogue, summarization, and reasoning tasks demonstrate that SP-PRM substantially enhances existing RGS methods, achieving a 3.6%-10.3% improvement in GPT-4 evaluation scores across all tasks.
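The granularity mismatch described above arises because RGS needs a score for every partial continuation, while an ORM only scores complete responses. As an illustration of where a PRM plugs in, the sketch below shows a generic step-level best-of-N decoding loop, not the paper's specific RGS method; `policy_generate_step` and `prm_score` are assumed placeholder functions.

```python
def reward_guided_search(prompt, policy_generate_step, prm_score,
                         n_candidates=8, max_steps=32, stop_token="<eos>"):
    """Greedy step-level search: at each step, keep the candidate segment that the
    process reward model (PRM) scores highest for the resulting partial response."""
    prefix = ""
    for _ in range(max_steps):
        # Assumed helper: returns n_candidates possible next segments from the policy.
        candidates = policy_generate_step(prompt, prefix, n_candidates)
        if not candidates:
            break
        # Score each partial response (prefix + candidate) with the PRM.
        scored = [(prm_score(prompt, prefix + seg), seg) for seg in candidates]
        _, best_seg = max(scored, key=lambda pair: pair[0])
        prefix += best_seg
        if stop_token in best_seg:
            break
    return prefix
```

Scoring `prefix + seg` with an ORM instead of a PRM is exactly the mismatch the abstract points out: the ORM was never trained to evaluate incomplete responses.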
Problem

Research questions and friction points this paper is trying to address.

Address granularity mismatch in reward-guided search methods
Develop process reward models for consistent partial response evaluation
Enhance alignment of language models with human preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces process reward models (PRMs) for alignment
Proposes SP-PRM dual-consistency framework (one consistency term is sketched after this list)
Enhances reward-guided search without human annotation
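As referenced in the list above, here is a minimal sketch of what a preference-consistency term could look like: a Bradley-Terry style loss that pushes the PRM to rank a preferred partial sequence above a rejected one, where the preference label could come from an ORM on the full responses rather than from human annotators. This is an assumption about the general form only, not SP-PRM's actual regularizer.

```python
import math

def preference_consistency_loss(prm_score_preferred: float,
                                prm_score_rejected: float) -> float:
    """Bradley-Terry style loss, -log(sigmoid(margin)): small when the PRM already
    scores the preferred prefix above the rejected one, large otherwise."""
    margin = prm_score_preferred - prm_score_rejected
    return math.log1p(math.exp(-margin))   # equal to -log(sigmoid(margin))
```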
Bin Xie
InfoBeyond Technology LLC
Mobile Computing, Security, Big Data Streaming
Bingbing Xu
Associate professor, Institute of Computing Technology, Chinese Academy of Sciences
Graph Neural Networks, Network Embedding
Yige Yuan
Ph.D. student, Institute of Computing Technology, Chinese Academy of Sciences
Machine Learning, Reinforcement Learning
Shengmao Zhu
State Key Laboratory of AI Safety, Institute of Computing Technology, CAS; University of Chinese Academy of Sciences
Huawei Shen
State Key Laboratory of AI Safety, Institute of Computing Technology, CAS; University of Chinese Academy of Sciences