Flattery, Fluff, and Fog: Diagnosing and Mitigating Idiosyncratic Biases in Preference Models

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a systematic miscalibration bias in preference models used as proxies for human judgment, stemming from overreliance on five superficial features of model generations (length, structure, jargon, sycophancy, and vagueness) induced by artifacts in the training data, which undermines content-based judgment and leads to reward hacking and evaluation distortion. The authors present a quantitative analysis of the correlations between these features and the discrepancies between human and model preferences. To mitigate the bias, they propose a post-training debiasing method based on counterfactual data augmentation (CDA): synthesized contrastive pairs with controllably magnified bias features are used to fine-tune the preference model. CDA reduces the average miscalibration rate from 39.4% to 32.5%, halves the average absolute skew difference (20.5% → 10.0%), and preserves overall performance on RewardBench, establishing a scalable, targeted calibration approach for more reliable preference modeling.

📝 Abstract
Language models serve as proxies for human preference judgements in alignment and evaluation, yet they exhibit systematic miscalibration, prioritizing superficial patterns over substantive qualities. This bias manifests as overreliance on features like length, structure, and style, leading to issues like reward hacking and unreliable evaluations. Evidence suggests these biases originate in artifacts in human training data. In this work, we systematically investigate the relationship between training data biases and preference model miscalibration across five idiosyncratic features of language model generations: length, structure, jargon, sycophancy, and vagueness. Using controlled counterfactual pairs, we first quantify the extent to which preference models favor responses with magnified biases (skew), finding this preference occurs in >60% of instances, and model preferences show high miscalibration (~40%) compared to human preferences. Notably, bias features show only mild negative correlations with human preference labels (mean r_human = -0.12) but moderately strong positive correlations with labels from a strong reward model (mean r_model = +0.36), suggesting that models may overrely on spurious cues. To mitigate these issues, we propose a simple post-training method based on counterfactual data augmentation (CDA) using synthesized contrastive examples. Fine-tuning models with CDA reduces average miscalibration from 39.4% to 32.5% and average absolute skew difference from 20.5% to 10.0%, while maintaining overall RewardBench performance, showing that targeted debiasing is effective for building reliable preference models.
Problem

Research questions and friction points this paper is trying to address.

Diagnosing biases in preference models from human training data
Mitigating model miscalibration using counterfactual data augmentation
Reducing overreliance on superficial features in preference judgments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Counterfactual data augmentation reduces model miscalibration
Synthesized contrastive examples mitigate bias skew
Targeted debiasing maintains RewardBench performance
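The CDA idea above can be illustrated with a toy sketch. Everything here is an assumption for illustration, not the paper's implementation: a linear stand-in for the reward model, a single synthetic feature playing the role of "length", and a Bradley-Terry pairwise loss that teaches the model not to prefer the content-equivalent but bias-magnified response.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for a reward model: scores an 8-dim feature vector.
# (The paper fine-tunes a full preference model on tokenized responses.)
reward_model = nn.Linear(8, 1)

def cda_loss(r_orig, r_infl, margin=0.0):
    # Bradley-Terry pairwise loss: the original response should score
    # at least as high as its bias-magnified counterfactual twin.
    return -F.logsigmoid(r_orig - r_infl - margin).mean()

# Synthesized contrastive pair: identical "content", with one bias
# feature (index 0, standing in for length) artificially magnified.
original = torch.randn(16, 8)
inflated = original.clone()
inflated[:, 0] += 2.0

opt = torch.optim.SGD(reward_model.parameters(), lr=0.1)
for _ in range(200):
    loss = cda_loss(reward_model(original), reward_model(inflated))
    opt.zero_grad()
    loss.backward()
    opt.step()

# After debiasing, magnifying the spurious feature no longer raises
# the reward, so the original is preferred on every pair.
with torch.no_grad():
    win_rate = (reward_model(original) > reward_model(inflated)).float().mean()
```

Because the contrastive pairs differ only in the targeted feature, gradient updates can only reduce the model's reliance on that feature, which is what lets this style of debiasing preserve performance on held-out benchmarks like RewardBench.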