When Data is the Algorithm: A Systematic Study and Curation of Preference Optimization Datasets

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing open-source DPO datasets lack systematic comparative analysis, which obscures how their preference pairs were constructed, which task types they cover, and how well they align with human judgments. Method: We propose the first data-centric analytical framework for DPO datasets, built around a fine-grained annotation schema. Leveraging Magpie, we automatically classify task types, assess input quality, and identify preference signals; reward modeling enables unsupervised validation of the labeled preference order. Contribution/Results: We uncover previously unreported structural disparities in reward margins across datasets and design a quality-aware mixing strategy to construct UltraMix: a lightweight, high-efficiency dataset 30% smaller than the best-performing single dataset that nevertheless achieves statistically significant alignment improvements across multiple benchmarks. All annotations, metadata, and mixing recipes are publicly released to advance data-centric LLM alignment research.

📝 Abstract
Aligning large language models (LLMs) is a central objective of post-training, often achieved through reward modeling and reinforcement learning methods. Among these, direct preference optimization (DPO) has emerged as a widely adopted technique that fine-tunes LLMs on preferred completions over less favorable ones. While most frontier LLMs do not disclose their curated preference pairs, the broader LLM community has released several open-source DPO datasets, including TuluDPO, ORPO, UltraFeedback, HelpSteer, and Code-Preference-Pairs. However, systematic comparisons remain scarce, largely due to the high computational cost and the lack of rich quality annotations, making it difficult to understand how preferences were selected, which task types they span, and how well they reflect human judgment on a per-sample level. In this work, we present the first comprehensive, data-centric analysis of popular open-source DPO corpora. We leverage the Magpie framework to annotate each sample for task category, input quality, and preference reward, a reward-model-based signal that validates the preference order without relying on human annotations. This enables a scalable, fine-grained inspection of preference quality across datasets, revealing structural and qualitative discrepancies in reward margins. Building on these insights, we systematically curate a new DPO mixture, UltraMix, that draws selectively from all five corpora while removing noisy or redundant samples. UltraMix is 30% smaller than the best-performing individual dataset yet exceeds its performance across key benchmarks. We publicly release all annotations, metadata, and our curated mixture to facilitate future research in data-centric preference optimization.
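The abstract describes DPO as fine-tuning an LLM on preferred completions over less favorable ones. For orientation, here is a minimal sketch of the standard per-pair DPO loss (from the original DPO formulation, not code from this paper): the loss is the negative log-sigmoid of a scaled difference of policy-vs-reference log-ratios for the chosen and rejected completions. The function names and scalar inputs are illustrative.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * log-ratio margin).

    Each argument is the summed log-probability of the chosen/rejected
    completion under the policy (pi_*) or frozen reference model (ref_*).
    beta controls how sharply the policy is pushed away from the reference.
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # Numerically this is -log(sigmoid(margin)); small when the policy
    # already prefers the chosen completion more than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

At initialization, when the policy equals the reference, the margin is zero and the loss is log 2 for every pair; training drives it down by widening the chosen-vs-rejected gap relative to the reference.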
Problem

Research questions and friction points this paper is trying to address.

Systematically analyzes quality and structure of open-source DPO datasets
Identifies discrepancies in preference quality across different datasets
Creates optimized DPO mixture by removing noisy and redundant samples
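The paper's reward-model-based validation scores both completions in each pair and checks whether the labeled preference order holds. A minimal sketch of that idea, with a hypothetical `reward_fn` standing in for a real reward model (the actual model and annotation pipeline used in the paper are not specified here):

```python
def reward_margins(pairs, reward_fn):
    """Score each (prompt, chosen, rejected) pair with a reward model and
    compute the margin r(chosen) - r(rejected).

    A non-positive margin suggests the labeled preference order may be
    inverted or noisy, which is the kind of sample a curation step would
    flag for removal.
    """
    results = []
    for prompt, chosen, rejected in pairs:
        margin = reward_fn(prompt, chosen) - reward_fn(prompt, rejected)
        results.append({"prompt": prompt, "margin": margin, "valid": margin > 0})
    return results
```

In practice `reward_fn` would be a trained reward model; aggregating these margins per dataset is what surfaces the structural disparities the paper reports.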
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically annotates DPO datasets using Magpie framework
Curates UltraMix by selectively combining five corpora
Removes noisy samples to enhance preference optimization performance
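The curation steps above (selective combination, noise removal, deduplication) can be sketched as a simple filter-and-pool pass. This is an illustrative reconstruction under assumed field names (`prompt`, `margin`), not the paper's released mixing recipe:

```python
def curate_mixture(datasets, margin_key="margin", threshold=0.0):
    """Quality-aware mixing sketch: pool samples from several DPO corpora,
    drop pairs whose reward margin is at or below a threshold (likely
    noisy or inverted preferences), and deduplicate by prompt."""
    seen, mixture = set(), []
    for name, samples in datasets.items():
        for s in samples:
            if s[margin_key] <= threshold:
                continue  # noisy/inverted pair: skip
            if s["prompt"] in seen:
                continue  # redundant prompt already drawn from another corpus
            seen.add(s["prompt"])
            mixture.append({**s, "source": name})
    return mixture
```

A real recipe would likely also balance task categories and input-quality scores from the annotation schema; this sketch shows only the margin filter and redundancy removal.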