🤖 AI Summary
Existing LLM-based automatic scorers rely on discrete preference labels, failing to capture the continuous, subjective, and ambiguous nature of human preferences; this leads to positional bias and poor calibration. This paper introduces a general framework for modeling preference distributions: it reformulates scoring as a probabilistic distribution-learning task, designs a distribution-matching objective, and unifies supervised fine-tuning (for dense probabilistic labels) with reinforcement learning (for sparse binary comparisons). The method significantly improves calibration and group fairness while mitigating positional bias, without compromising performance on objective tasks. Experiments demonstrate that the resulting probabilistic predictions more faithfully reflect real-world population-level preference distributions. By enabling scalable, interpretable, and value-aligned automated evaluation, this work establishes a new paradigm for preference-aware assessment of LLMs.
📝 Abstract
The alignment of large language models (LLMs) with human values increasingly relies on using other LLMs as automated judges, or "autoraters". However, their reliability is limited by a foundational issue: they are trained on discrete preference labels, forcing a single ground truth onto tasks that are often subjective, ambiguous, or nuanced. We argue that a reliable autorater must learn to model the full distribution of preferences defined by a target population. In this paper, we propose a general framework for calibrating probabilistic autoraters to any given preference distribution. We formalize the problem and present two learning methods tailored to different data conditions: 1) direct supervised fine-tuning for dense, probabilistic labels, and 2) a reinforcement learning approach for sparse, binary labels. Our empirical results show that fine-tuning autoraters with a distribution-matching objective leads to verbalized probability predictions that are better aligned with the target preference distribution, with improved calibration and significantly lower positional bias, all while preserving performance on objective tasks.
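To make the distribution-matching idea concrete, here is a minimal sketch of one plausible per-example objective: the KL divergence between a target preference distribution (e.g., the fraction `p` of raters who prefer response A over response B) and the autorater's verbalized probability `q`. This is an illustrative formulation under our own assumptions, not the paper's exact loss; the function name and interface are hypothetical.

```python
import math

def distribution_matching_loss(q: float, p: float, eps: float = 1e-9) -> float:
    """Illustrative per-example objective (hypothetical, not the paper's exact loss).

    Computes KL(p || q) for a binary preference, where:
      p   -- target probability that the population prefers response A
      q   -- the autorater's verbalized probability for response A
      eps -- clamp to avoid log(0) at the boundaries
    A perfectly calibrated prediction (q == p) incurs zero loss.
    """
    q = min(max(q, eps), 1.0 - eps)
    p = min(max(p, eps), 1.0 - eps)
    return p * math.log(p / q) + (1.0 - p) * math.log((1.0 - p) / (1.0 - q))
```

Unlike a hard cross-entropy loss on a single winner label, this objective rewards matching the population split itself: for a genuinely ambiguous pair where raters split 50/50, predicting `q = 0.5` is optimal, whereas an overconfident `q = 0.9` is penalized.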