R-Align: Enhancing Generative Reward Models through Rationale-Centric Meta-Judging

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical limitation in existing generative reward models (GenRMs), which optimize solely for preference label accuracy while neglecting the consistency between generated rationales and reference judgments, leading to unreliable reasoning and degradation in RLHF policy performance. To mitigate this, we propose R-Align, the first approach that explicitly treats reasoning fidelity as a core evaluation dimension. R-Align introduces supervision from gold-standard judgments and enhances model reliability by explicitly aligning generated rationales with reference rationales. We design a rationale alignment loss and introduce a novel metric, Spurious Correctness (S-Corr), to quantify instances of misleadingly correct predictions. Experiments demonstrate that R-Align significantly reduces S-Corr, improves reasoning consistency across multiple reward modeling benchmarks, and consistently boosts downstream policy performance in STEM, programming, instruction following, and general-purpose tasks.
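The summary above describes a rationale alignment loss that supplements the usual preference-label objective with a term tying the generated rationale to the reference rationale. A minimal sketch of that idea, assuming label cross-entropy plus a cosine-distance alignment term over rationale embeddings (the function name, embedding-based alignment term, and weight `lam` are illustrative assumptions, not the paper's exact formulation):

```python
import math

def rationale_alignment_loss(label_logits, gold_label,
                             rationale_emb, gold_emb, lam=0.5):
    """Sketch of a combined GenRM objective: label cross-entropy plus a
    rationale alignment penalty (illustrative, not the paper's exact loss)."""
    # Cross-entropy on the preference label (the standard label-only objective).
    m = max(label_logits)
    log_z = math.log(sum(math.exp(x - m) for x in label_logits))
    label_loss = -(label_logits[gold_label] - m - log_z)
    # Alignment term: 1 - cosine similarity between the generated and
    # reference (gold) rationale embeddings. One plausible instantiation;
    # the paper's alignment loss may be defined differently.
    dot = sum(a * b for a, b in zip(rationale_emb, gold_emb))
    na = math.sqrt(sum(a * a for a in rationale_emb))
    nb = math.sqrt(sum(b * b for b in gold_emb))
    align_loss = 1.0 - dot / (na * nb + 1e-12)
    return label_loss + lam * align_loss
```

With identical rationale embeddings the alignment term vanishes and the loss reduces to plain label cross-entropy; misaligned rationales add a penalty even when the label is correct, which is exactly the failure mode S-Corr is meant to expose.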

📝 Abstract
Reinforcement Learning from Human Feedback (RLHF) remains indispensable for aligning large language models (LLMs) in subjective domains. To enhance robustness, recent work shifts toward Generative Reward Models (GenRMs), which generate rationales before predicting preferences. Yet GenRM training and evaluation remain outcome-label-only in practice, leaving reasoning quality unchecked. We show that reasoning fidelity (the consistency between a GenRM's rationale and the reference decision rationale) is highly predictive of downstream RLHF outcomes, beyond standard label accuracy. Specifically, we repurpose existing reward-model benchmarks to compute Spurious Correctness (S-Corr): the fraction of label-correct decisions whose rationales are misaligned with gold judgments. Our empirical evaluation reveals substantial S-Corr even for competitive GenRMs, and higher S-Corr is associated with policy degeneration under optimization. To improve fidelity, we propose Rationale-Centric Alignment (R-Align), which augments training with gold judgments and explicitly supervises rationale alignment. R-Align reduces S-Corr on RM benchmarks and yields consistent gains in actor performance across STEM, coding, instruction following, and general tasks.
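The abstract defines S-Corr as the fraction of label-correct decisions whose rationales are misaligned with the gold judgment. A minimal sketch of that computation, assuming per-example records that already carry binary `label_correct` and `rationale_aligned` flags (the record format and field names are assumptions for illustration; the paper's alignment check on real rationales is more involved):

```python
def spurious_correctness(records):
    """S-Corr sketch: among label-correct judgments, the fraction whose
    rationale disagrees with the gold judgment (hypothetical record format)."""
    correct = [r for r in records if r["label_correct"]]
    if not correct:
        return 0.0  # no label-correct decisions, so no spurious ones
    spurious = sum(1 for r in correct if not r["rationale_aligned"])
    return spurious / len(correct)
```

Note that label-incorrect decisions are excluded from the denominator: S-Corr isolates the "right answer, wrong reasoning" cases that plain label accuracy cannot see.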
Problem

Research questions and friction points this paper is trying to address.

- Generative Reward Models
- reasoning fidelity
- Reinforcement Learning from Human Feedback
- rationale alignment
- Spurious Correctness
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Generative Reward Models
- Reasoning Fidelity
- Rationale-Centric Alignment
- Spurious Correctness
- RLHF