Judging with Confidence: Calibrating Autoraters to Preference Distributions

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-based automatic scorers rely on discrete preference labels and so fail to capture the continuous, subjective, and ambiguous nature of human preferences, which leads to positional bias and poor calibration. This paper introduces a general framework for modeling preference distributions: it reformulates scoring as a probabilistic distribution-learning task, designs a distribution-matching objective, and unifies supervised fine-tuning (for dense probabilistic labels) with reinforcement learning (for sparse binary comparisons). The method improves calibration and group fairness and significantly lowers positional bias, without compromising performance on objective tasks. Experiments show that the resulting probabilistic predictions more faithfully reflect real-world, population-level preference distributions. By enabling scalable, interpretable, and value-aligned automated evaluation, this work offers a new paradigm for preference-aware assessment of LLMs.

📝 Abstract
The alignment of large language models (LLMs) with human values increasingly relies on using other LLMs as automated judges, or "autoraters". However, their reliability is limited by a foundational issue: they are trained on discrete preference labels, forcing a single ground truth onto tasks that are often subjective, ambiguous, or nuanced. We argue that a reliable autorater must learn to model the full distribution of preferences defined by a target population. In this paper, we propose a general framework for calibrating probabilistic autoraters to any given preference distribution. We formalize the problem and present two learning methods tailored to different data conditions: 1) direct supervised fine-tuning for dense, probabilistic labels, and 2) a reinforcement learning approach for sparse, binary labels. Our empirical results show that fine-tuning autoraters with a distribution-matching objective leads to verbalized probability predictions that are better aligned with the target preference distribution, with improved calibration and significantly lower positional bias, all while preserving performance on objective tasks.
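The distribution-matching objective for dense labels can be pictured as minimizing a divergence between the population's preference distribution and the autorater's verbalized probabilities. Below is a minimal sketch for a two-outcome comparison (response A vs. response B); the function name and the choice of forward KL divergence are illustrative assumptions, not the paper's exact formulation:

```python
import math

def distribution_matching_loss(p_target, p_pred, eps=1e-8):
    """Forward KL divergence KL(p_target || p_pred) between a target
    preference distribution (e.g., [P(A preferred), P(B preferred)]
    estimated from many annotators) and the autorater's predicted
    distribution. Hypothetical sketch; the paper's objective may differ.
    """
    assert abs(sum(p_target) - 1.0) < 1e-6 and abs(sum(p_pred) - 1.0) < 1e-6
    return sum(
        t * math.log((t + eps) / (q + eps))
        for t, q in zip(p_target, p_pred)
        if t > 0
    )
```

The loss is zero when the prediction matches the dense label exactly (e.g., predicting [0.7, 0.3] for a population that prefers A 70% of the time) and grows as the prediction collapses toward an overconfident 0/1 verdict, which is the behavior discrete-label training encourages.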
Problem

Research questions and friction points this paper is trying to address.

Calibrating autoraters to model the full preference distribution of a target population
Addressing the limitations of discrete labels on subjective, ambiguous tasks
Improving alignment and calibration, and reducing bias, in automated judgments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Calibrating autoraters to preference distributions
Using supervised fine-tuning for dense, probabilistic labels
Applying reinforcement learning for sparse, binary labels
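For the sparse-label setting, one way to see why reinforcement learning can recover a distribution from individual binary comparisons is that the log-likelihood of each observed label is a proper scoring rule: maximizing its expectation over annotators is optimized by reporting the true population preference rate. A hedged sketch of such a per-comparison reward (names and reward shaping are assumptions, not the paper's exact RL formulation):

```python
import math

def binary_preference_reward(p_pred_a, observed_label, eps=1e-8):
    """Reward for one sparse binary comparison: the log-likelihood of the
    observed annotator label (1 = A preferred, 0 = B preferred) under the
    autorater's verbalized probability that A is preferred. Because the
    log score is a proper scoring rule, maximizing its expectation over
    many annotators drives p_pred_a toward the population preference rate
    rather than toward a hard 0/1 prediction.
    """
    p = p_pred_a if observed_label == 1 else 1.0 - p_pred_a
    return math.log(p + eps)
```

For a population preferring A 70% of the time, the expected reward 0.7 * log(p) + 0.3 * log(1 - p) peaks at p = 0.7, so an overconfident prediction of 0.99 is penalized in expectation even though it "wins" on most individual comparisons.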