🤖 AI Summary
Existing open-source DPO datasets lack systematic comparative analysis, hindering understanding of their preference construction mechanisms, task coverage, and alignment with human judgments.
Method: We propose the first data-centric analytical framework for DPO datasets, featuring a fine-grained annotation schema. Leveraging Magpie, we automatically classify task types, assess input quality, and identify preference signals; reward modeling enables unsupervised preference validation.
Contribution/Results: We uncover structural disparities in reward margins across datasets—previously unreported—and design a quality-aware mixing strategy to construct UltraMix: a lightweight, high-efficiency dataset 30% smaller than the best-performing single dataset yet achieving statistically significant alignment improvements across multiple benchmarks. All annotations, metadata, and mixing recipes are publicly released to advance data-driven LLM alignment research.
📝 Abstract
Aligning large language models (LLMs) is a central objective of post-training, often achieved through reward modeling and reinforcement learning methods. Among these, direct preference optimization (DPO) has emerged as a widely adopted technique that fine-tunes LLMs to favor preferred completions over rejected ones. While most frontier LLMs do not disclose their curated preference pairs, the broader LLM community has released several open-source DPO datasets, including TuluDPO, ORPO, UltraFeedback, HelpSteer, and Code-Preference-Pairs. However, systematic comparisons of these datasets remain scarce, largely because of high computational cost and a lack of rich quality annotations; as a result, it is difficult to understand how preferences were selected, which task types they span, and how well they reflect human judgment on a per-sample level. In this work, we present the first comprehensive, data-centric analysis of popular open-source DPO corpora. We leverage the Magpie framework to annotate each sample for task category, input quality, and preference reward, a reward-model-based signal that validates the preference order without relying on human annotations. This enables a scalable, fine-grained inspection of preference quality across datasets, revealing structural and qualitative discrepancies in reward margins. Building on these insights, we systematically curate a new DPO mixture, UltraMix, that draws selectively from all five corpora while removing noisy or redundant samples. UltraMix is 30% smaller than the best-performing individual dataset yet exceeds its performance across key benchmarks. We publicly release all annotations, metadata, and our curated mixture to facilitate future research in data-centric preference optimization.
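The reward-margin validation described above can be sketched in a few lines. The idea is to score both completions of each (prompt, chosen, rejected) pair with a reward model and keep only pairs where the margin confirms the annotated preference order. The scoring function below is a toy stand-in, not the paper's actual reward model; all function names and the threshold are illustrative assumptions.

```python
# Hedged sketch of reward-margin-based validation for DPO pairs.
# `score_fn` stands in for a trained reward model; the toy scorer
# below is purely illustrative (it is NOT the method from the paper).

def reward_margin(score_fn, prompt, chosen, rejected):
    """Margin > 0 means the reward model agrees with the annotated order."""
    return score_fn(prompt, chosen) - score_fn(prompt, rejected)

def filter_pairs(pairs, score_fn, min_margin=0.0):
    """Keep only pairs whose reward margin meets a threshold,
    dropping noisy or contradictory preference annotations."""
    return [p for p in pairs if reward_margin(score_fn, *p) >= min_margin]

# Toy scorer: longer responses score higher (illustrative assumption only).
def toy_score(prompt, response):
    return len(response)

pairs = [
    ("q1", "a detailed answer", "short"),  # positive margin: kept
    ("q2", "ok", "a much longer answer"),  # negative margin: dropped
]
print(filter_pairs(pairs, toy_score))
```

A quality-aware mixture like UltraMix would apply such a filter per source dataset (with a real reward model) before combining the surviving samples, which is how noisy or redundant pairs are removed while keeping the mixture small.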