Small-Margin Preferences Still Matter-If You Train Them Right

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the instability and performance degradation that preference optimization methods such as Direct Preference Optimization (DPO) exhibit on small-margin (ambiguous) preference pairs, which are often mistakenly discarded as noise. To mitigate this issue, the authors propose MixDPO, a difficulty-aware hybrid training strategy that constructs a curriculum based on the margin-based difficulty of preference pairs. MixDPO employs a dynamic routing mechanism that applies the DPO loss to easy samples while converting hard samples into supervised fine-tuning (SFT) targets, integrating preference learning with SFT so that ambiguous preference signals are still exploited. Experiments show that MixDPO consistently outperforms DPO and its variants across three LLM-judge benchmarks, with notably higher win rates on AlpacaEval 2 under length-controlled settings.

📝 Abstract
Preference optimization methods such as DPO align large language models (LLMs) using paired comparisons, but their effectiveness can be highly sensitive to the quality and difficulty of preference pairs. A common heuristic treats small-margin (ambiguous) pairs as noisy and filters them out. In this paper, we revisit this assumption and show that pair difficulty interacts strongly with the optimization objective: when trained with preference-based losses, difficult pairs can destabilize training and harm alignment, yet these same pairs still contain useful supervision signals when optimized with supervised fine-tuning (SFT). Motivated by this observation, we propose MixDPO, a simple yet effective difficulty-aware training strategy that (i) orders preference data from easy to hard (a curriculum over margin-defined difficulty), and (ii) routes difficult pairs to an SFT objective while applying a preference loss to easy pairs. This hybrid design provides a practical mechanism to leverage ambiguous pairs without incurring the optimization failures often associated with preference losses on low-margin data. Across three LLM-judge benchmarks, MixDPO consistently improves alignment over DPO and a range of widely used variants, with particularly strong gains on AlpacaEval 2 length-controlled (LC) win rate.
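The curriculum-plus-routing idea described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the margin proxy (policy-vs-reference log-ratio gap), the fixed `margin_threshold`, and the function names are all assumptions introduced for exposition.

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Standard DPO objective: -log sigmoid(beta * (policy margin - reference margin)).
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

def sft_loss(logp_chosen):
    # SFT treats the chosen response as a supervised target (negative log-likelihood).
    return -logp_chosen

def mixdpo_batch(pairs, beta=0.1, margin_threshold=0.5):
    """Order pairs easy-to-hard, then route each by margin-based difficulty.

    Each pair is (logp_chosen, logp_rejected, ref_chosen, ref_rejected),
    i.e. policy and reference log-probabilities of the two responses.
    """
    def margin(p):
        return (p[0] - p[2]) - (p[1] - p[3])

    # Curriculum: large-margin (easy) pairs come first.
    ordered = sorted(pairs, key=margin, reverse=True)

    losses = []
    for p in ordered:
        if margin(p) >= margin_threshold:      # easy pair -> preference loss
            losses.append(("dpo", dpo_loss(*p, beta=beta)))
        else:                                  # hard/ambiguous pair -> SFT target
            losses.append(("sft", sft_loss(p[0])))
    return losses
```

In a real training loop the log-probabilities would come from the policy and frozen reference model per token-sequence, and the routed losses would be averaged and backpropagated; the point here is only the easy/hard split between the two objectives.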
Problem

Research questions and friction points this paper is trying to address.

preference optimization
small-margin preferences
large language models
alignment
preference pairs
Innovation

Methods, ideas, or system contributions that make the work stand out.

preference optimization
small-margin preferences
curriculum learning
supervised fine-tuning (SFT)
MixDPO