Difficulty-Based Preference Data Selection by DPO Implicit Reward Gap

๐Ÿ“… 2025-08-06
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the low utilization efficiency of preference data and the heavy reliance on large-scale human annotation in aligning large language models (LLMs), this paper proposes a difficulty-aware preference data selection method grounded in the implicit reward gap derived from Direct Preference Optimization (DPO). The core innovation is the first use of the reward difference implicitly learned during DPO training as a sample-difficulty metric: pairs with smaller implicit reward gaps are harder to distinguish, so selecting these high-difficulty preference pairs concentrates training on the most informative cases. Crucially, the method requires no additional annotation or model fine-tuning, enabling efficient, high-quality data curation. Experiments show that, using only 10% of the original preference data, the approach consistently outperforms five strong baselines across multiple benchmark datasets and alignment tasks, yielding substantial improvements in both data efficiency and alignment performance.
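The selection criterion described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes per-example log-probabilities under the policy and reference models have already been computed, and all function and field names (`lp_c`, `lr_c`, `lp_r`, `lr_r`, `beta`) are hypothetical. The DPO implicit reward is r(x, y) = β(log π_θ(y|x) − log π_ref(y|x)), and the gap is the reward difference between the chosen and rejected responses.

```python
# Sketch of difficulty-based preference selection via the DPO implicit reward gap.
# Assumes precomputed log-probabilities; example/field names are illustrative only.

def implicit_reward_gap(lp_c, lr_c, lp_r, lr_r, beta=0.1):
    """Gap = r(x, y_chosen) - r(x, y_rejected), where
    r(x, y) = beta * (log pi_theta(y|x) - log pi_ref(y|x))."""
    r_chosen = beta * (lp_c - lr_c)     # implicit reward of the chosen response
    r_rejected = beta * (lp_r - lr_r)   # implicit reward of the rejected response
    return r_chosen - r_rejected

def select_hard_examples(examples, keep_fraction=0.10, beta=0.1):
    """Keep the fraction of pairs with the smallest implicit reward gaps,
    i.e. the hardest-to-distinguish (most difficult) preference pairs."""
    scored = [
        (implicit_reward_gap(e["lp_c"], e["lr_c"], e["lp_r"], e["lr_r"], beta), e)
        for e in examples
    ]
    scored.sort(key=lambda pair: pair[0])  # smallest gap first
    k = max(1, int(len(scored) * keep_fraction))
    return [e for _, e in scored[:k]]
```

A pair whose chosen and rejected responses receive nearly equal implicit rewards contributes a near-zero gap and would be kept, matching the paper's intuition that small-gap pairs are the most challenging and informative.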

๐Ÿ“ Abstract
Aligning large language models (LLMs) with human preferences is a critical challenge in AI research. While methods like Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO) are widely used, they often rely on large, costly preference datasets. The current work lacks methods for high-quality data selection specifically for preference data. In this work, we introduce a novel difficulty-based data selection strategy for preference datasets, grounded in the DPO implicit reward mechanism. By selecting preference data examples with smaller DPO implicit reward gaps, which are indicative of more challenging cases, we improve data efficiency and model alignment. Our approach consistently outperforms five strong baselines across multiple datasets and alignment tasks, achieving superior performance with only 10% of the original data. This principled, efficient selection method offers a promising solution for scaling LLM alignment with limited resources.
Problem

Research questions and friction points this paper is trying to address.

Selecting high-quality preference data for LLM alignment
Reducing reliance on large, costly preference datasets
Improving model alignment efficiency with limited resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Difficulty-based data selection strategy
Uses DPO implicit reward gaps
Improves efficiency with 10% data