Robust Reward Modeling via Causal Rubrics

📅 2025-06-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reward models (RMs) are vulnerable to spurious correlations with superficial features, such as response length or formatting, leading to reward hacking: the model mistakes these artifacts for genuine causal signals of quality (e.g., factual accuracy, relevance), undermining out-of-distribution robustness. To address this, we propose Crome, a causally grounded framework for robust reward modeling. First, we introduce a causal augmentation paradigm that generates causal and neutral data pairs without requiring prior knowledge of spurious factors. Second, we query an oracle LLM to identify interpretable, intervenable causal rubrics for reward modeling. Third, we jointly train RMs on both causal and neutral augmentations to disentangle true quality factors from spurious correlates. On RewardBench, Crome improves average accuracy by up to 5.4% (with per-category gains of up to 13.2%), and demonstrates consistent robustness improvements across diverse benchmarks, including the safety-focused WildGuardTest and the reasoning-focused GSM8k.
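The three steps can be sketched end to end. The following is a hypothetical Python sketch of the augmentation pipeline: `oracle_llm` is an assumed text-completion callable, and the prompts paraphrase the idea rather than reproduce the paper's templates.

```python
def causal_rubrics(oracle_llm, question: str, answer: str) -> list[str]:
    """Step 2: ask an oracle LLM for the causal quality attributes (rubrics)."""
    prompt = (
        f"Question: {question}\nAnswer: {answer}\n"
        "List the attributes that causally determine this answer's quality "
        "(e.g., factual accuracy, relevance), one per line."
    )
    return oracle_llm(prompt).splitlines()

def causal_pair(oracle_llm, question: str, answer: str, rubric: str) -> dict:
    """Step 1a: degrade the answer along ONE causal rubric -> preference pair."""
    worse = oracle_llm(
        f"Rewrite the answer so it is worse only in '{rubric}', keeping "
        f"style, length, and formatting unchanged.\nAnswer: {answer}"
    )
    return {"q": question, "chosen": answer, "rejected": worse, "label": 1.0}

def neutral_pair(oracle_llm, question: str, answer: str) -> dict:
    """Step 1b: vary only surface form (length/format) -> tie-labeled pair."""
    restyled = oracle_llm(
        f"Rewrite the answer, changing only its length and formatting while "
        f"preserving all content.\nAnswer: {answer}"
    )
    return {"q": question, "chosen": answer, "rejected": restyled, "label": 0.5}
```

Step 3 (joint training on both pair types) then needs no knowledge of which spurious factors exist: interventions are applied only along causal rubrics.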

📝 Abstract
Reward models (RMs) are fundamental to aligning Large Language Models (LLMs) via human feedback, yet they often suffer from reward hacking. They tend to latch onto superficial or spurious attributes, such as response length or formatting, mistaking these cues, learned from correlations in the training data, for the true causal drivers of quality (e.g., factuality, relevance). This occurs because standard training objectives struggle to disentangle these factors, leading to brittle RMs and misaligned policies. We introduce Crome (Causally Robust Reward Modeling), a novel framework grounded in an explicit causal model and designed to mitigate reward hacking. Crome employs the following synthetic targeted augmentations during training: (1) Causal Augmentations, pairs that differ along specific causal attributes, to enforce sensitivity to each causal attribute individually, and (2) Neutral Augmentations, tie-labeled pairs that vary primarily in spurious attributes, to enforce invariance to spurious attributes. Notably, our augmentations are produced without any knowledge of spurious factors, via answer interventions only along causal rubrics, which are identified by querying an oracle LLM. Empirically, Crome significantly outperforms standard baselines on RewardBench, improving average accuracy by up to 5.4% and achieving gains of up to 13.2% and 7.2% in specific categories. Crome's robustness is further evidenced by consistent gains in a Best-of-N inference setting as N increases, across various benchmarks, including the popular RewardBench (covering chat, chat-hard, safety, and reasoning tasks), the safety-focused WildGuardTest, and the reasoning-specific GSM8k.
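The Best-of-N evaluation mentioned in the abstract is simple to state in code. Below is a minimal sketch, assuming a hypothetical `policy.generate(prompt)` sampler and a `reward_model(prompt, response)` scorer; neither name comes from the paper.

```python
def best_of_n(policy, reward_model, prompt: str, n: int) -> str:
    """Sample n candidate responses and return the highest-scoring one.

    A robust RM makes the selected answer track true quality as n grows;
    a hacked RM instead amplifies spurious cues (e.g., always picking the
    longest or best-formatted candidate).
    """
    candidates = [policy.generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward_model(prompt, c))
```

Consistent gains under increasing N are the robustness signal the abstract reports: a spuriously correlated RM would typically degrade as N grows, since more samples give it more chances to find a superficially attractive but wrong answer.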
Problem

Research questions and friction points this paper is trying to address.

Reward models often suffer from reward hacking, latching onto superficial attributes such as response length or formatting.
Standard training objectives struggle to disentangle causal quality drivers from spurious correlates.
The Crome framework mitigates reward hacking by training with paired causal and neutral augmentations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal Augmentations: pairs differing along a single causal attribute, enforcing sensitivity to each true quality driver (a minimal training-loss sketch follows this list)
Neutral Augmentations: tie-labeled pairs varying only in spurious attributes, enforcing invariance to them
Oracle-LLM-guided answer interventions along causal rubrics, requiring no knowledge of spurious factors
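The two augmentation types can share one pairwise training objective. Below is a minimal sketch, assuming a scalar-output reward model trained with a Bradley-Terry-style loss and soft targets; `crome_pairwise_loss` and the 0.5 tie target are illustrative choices, not necessarily the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def crome_pairwise_loss(r_chosen: torch.Tensor,
                        r_rejected: torch.Tensor,
                        target: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry-style pairwise loss with soft targets.

    target = 1.0 for causal pairs: the chosen response must outscore the
    degraded one (sensitivity to the intervened causal attribute).
    target = 0.5 for neutral, tie-labeled pairs: the margin is pushed to
    zero (invariance to spurious attributes such as length or format).
    """
    margin = r_chosen - r_rejected  # reward difference acts as a logit
    return F.binary_cross_entropy_with_logits(margin, target)

# Example: one causal pair (label 1.0) and one neutral pair (label 0.5).
r_c = torch.tensor([2.0, 1.1])    # scores for the "chosen" responses
r_r = torch.tensor([0.5, 1.0])    # scores for the degraded/restyled ones
labels = torch.tensor([1.0, 0.5])
loss = crome_pairwise_loss(r_c, r_r, labels)
```

With a 0.5 target the loss is minimized at zero margin, so the model is explicitly penalized for scoring a merely reformatted answer differently from the original.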