🤖 AI Summary
This paper addresses the challenge of quantifying quality disparities between response pairs in preference optimization (PO). The authors propose the Distance Calibrated Reward Margin (DCRM), a metric that jointly models response embedding distance (e.g., L2) and reward model score differences, thereby characterizing a pair's learnability and training utility. Leveraging DCRM, they design a best-of-$N^2$ pairing strategy for selecting high-quality response pairs and establish a systematic framework for preference dataset analysis. Experiments show that the DCRM of a training set correlates with final LLM performance, and that models trained on DCRM-curated datasets significantly improve over those trained on existing datasets on AlpacaEval, MT-Bench, and Arena-Hard, validating its effectiveness for alignment. Core contributions: (1) coupled modeling of embedding-distance calibration and reward margin, and (2) a scalable, quality-driven paradigm for preference data construction.
📝 Abstract
Recent research has attempted to associate preference optimization (PO) performance with the underlying preference datasets. In this work, we observe that the differences between the preferred response $y^+$ and the dispreferred response $y^-$ determine what LLMs can learn from a pair, and these may not match the differences we actually want them to learn. We therefore use distance and reward margin to quantify these differences and combine them into the Distance Calibrated Reward Margin (DCRM), a metric that measures the quality of a response pair for PO. Intuitively, DCRM favors minimal noisy differences and maximal desired differences. With this, we study three types of commonly used preference datasets, classified along two axes: the source of the responses and the preference labeling function. We establish a general correlation between higher DCRM of the training set and better learning outcomes. Inspired by this, we propose a best-of-$N^2$ pairing method that selects response pairs with the highest DCRM. Empirically, across various settings, our method produces training datasets that further improve models' performance on AlpacaEval, MT-Bench, and Arena-Hard over existing training sets.
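To make the two ingredients concrete, the sketch below shows one *illustrative* way to couple them. The paper's exact DCRM formula is not reproduced here: we simply take the reward margin $r(y^+) - r(y^-)$ (desired difference, to be maximized) and penalize it by the L2 embedding distance (noisy difference, to be minimized) via an exponential factor with a hypothetical trade-off parameter `alpha`; the best-of-$N^2$ step then scans all ordered pairs among $N$ sampled responses and keeps the highest-scoring one.

```python
import numpy as np

def dcrm(emb_pos, emb_neg, reward_pos, reward_neg, alpha=1.0):
    """Illustrative DCRM score for a (preferred, dispreferred) pair.

    NOT the paper's exact formula: the reward margin (desired difference)
    is damped by an exponential penalty on the L2 embedding distance
    (noisy difference). `alpha` is a hypothetical trade-off parameter.
    """
    margin = reward_pos - reward_neg
    distance = np.linalg.norm(np.asarray(emb_pos) - np.asarray(emb_neg))
    return margin * np.exp(-alpha * distance)

def best_of_n2_pair(embs, rewards, alpha=1.0):
    """Best-of-N^2 pairing: score all N^2 ordered pairs of the N sampled
    responses and return the (preferred, dispreferred) index pair with
    the highest DCRM."""
    n = len(rewards)
    best_score, best_pair = -np.inf, None
    for i in range(n):          # candidate preferred response
        for j in range(n):      # candidate dispreferred response
            if i == j:
                continue
            score = dcrm(embs[i], embs[j], rewards[i], rewards[j], alpha)
            if score > best_score:
                best_score, best_pair = score, (i, j)
    return best_pair
```

For example, with three sampled responses where two are semantically close but differ sharply in reward, this scoring prefers that close pair over a distant pair with a similar margin, matching the intuition that small surface differences with large quality differences are the most learnable.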