DCRM: A Heuristic to Measure Response Pair Quality in Preference Optimization

📅 2025-06-17
🤖 AI Summary
This paper addresses the challenge of quantifying quality differences between response pairs in preference optimization (PO). It proposes the Distance Calibrated Reward Margin (DCRM), a metric that jointly models response embedding distance (e.g., L2) and reward-model score margin to characterize a pair's learnability and training utility. Leveraging DCRM, the authors design a best-of-N² pairing strategy for selecting high-quality response pairs and establish a systematic framework for preference-dataset analysis. Experiments show that higher training-set DCRM correlates with better final LLM performance: datasets curated under DCRM guidance outperform existing training sets on AlpacaEval, MT-Bench, and Arena-Hard, validating its effectiveness for model alignment. Core contributions: (1) coupled modeling of embedding distance and reward margin, and (2) a scalable, quality-driven paradigm for preference-data construction.

📝 Abstract
Recent research has attempted to associate preference optimization (PO) performance with the underlying preference datasets. In this work, our observation is that the differences between the preferred response $y^+$ and dispreferred response $y^-$ influence what LLMs can learn, which may not match the desirable differences to learn. Therefore, we use distance and reward margin to quantify these differences, and combine them to get Distance Calibrated Reward Margin (DCRM), a metric that measures the quality of a response pair for PO. Intuitively, DCRM encourages minimal noisy differences and maximal desired differences. With this, we study 3 types of commonly used preference datasets, classified along two axes: the source of the responses and the preference labeling function. We establish a general correlation between higher DCRM of the training set and better learning outcome. Inspired by this, we propose a best-of-$N^2$ pairing method that selects response pairs with the highest DCRM. Empirically, in various settings, our method produces training datasets that can further improve models' performance on AlpacaEval, MT-Bench, and Arena-Hard over the existing training sets.
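As a rough illustration of the metric described in the abstract, here is a minimal sketch in Python. The paper defines the exact way distance and reward margin are combined; the margin-over-distance form below, the `dcrm` function name, and the epsilon stabilizer are assumptions for illustration only.

```python
import math

def dcrm(emb_pos, emb_neg, reward_pos, reward_neg, eps=1e-8):
    """Assumed DCRM sketch: reward margin calibrated by embedding distance.

    emb_pos / emb_neg: embedding vectors (lists of floats) of the
    preferred response y+ and dispreferred response y-.
    reward_pos / reward_neg: reward-model scores for y+ and y-.
    """
    # L2 distance between the two response embeddings
    distance = math.dist(emb_pos, emb_neg)
    # Reward margin: preferred minus dispreferred score
    margin = reward_pos - reward_neg
    # Assumed combination: large desired difference (margin) per unit of
    # surface difference (distance) scores highly; eps avoids divide-by-zero.
    return margin / (distance + eps)
```

Under this sketch, a pair with a large reward margin but small embedding distance (few noisy differences, strong desired difference) receives a high DCRM, matching the abstract's intuition.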
Problem

Research questions and friction points this paper is trying to address.

Quantify differences between preferred and dispreferred responses
Measure response pair quality for preference optimization
Improve model performance using high-quality training datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses distance and reward margin metrics
Introduces DCRM for response pair quality
Proposes best-of-N² high-DCRM pairing
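The best-of-N² idea above can be sketched as follows: given N candidate responses, score all N² ordered pairs and keep the pair with the highest DCRM. This is a hedged reconstruction from the abstract, not the paper's implementation; the function names and the rule that y+ must carry the higher reward are assumptions.

```python
from itertools import product

def best_of_n2_pair(responses, reward_fn, embed_fn, dcrm_fn):
    """Select the (y+, y-) pair with the highest DCRM among N^2 candidates.

    reward_fn: response -> reward-model score
    embed_fn:  response -> embedding
    dcrm_fn:   (emb_pos, emb_neg, reward_pos, reward_neg) -> DCRM score
    """
    rewards = [reward_fn(r) for r in responses]
    embs = [embed_fn(r) for r in responses]
    best_pair, best_score = None, float("-inf")
    for i, j in product(range(len(responses)), repeat=2):
        # Assumed constraint: the preferred response must have the higher reward
        if rewards[i] <= rewards[j]:
            continue
        score = dcrm_fn(embs[i], embs[j], rewards[i], rewards[j])
        if score > best_score:
            best_pair, best_score = (responses[i], responses[j]), score
    return best_pair
```

With hypothetical toy scorers (e.g., `reward_fn=len`), this scans all ordered pairs and returns the one maximizing the supplied DCRM function, mirroring the paper's described selection-and-resampling step at the pair level.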