Scalable Valuation of Human Feedback through Provably Robust Model Alignment

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Human preference feedback is often noisy, undermining the robustness of existing alignment methods: their loss functions lack the redescending property, i.e., convergence to the same solution as under clean labels despite severe label corruption. Method: The authors give the first theoretical proof that mainstream alignment objectives (e.g., DPO) fail this property and propose Hölder-DPO, the first redescending alignment loss, built on the Hölder divergence to enable robust preference optimization. The framework requires neither gradients nor a validation set to automatically assess feedback quality and accurately identify mislabeled samples. Contribution/Results: On synthetic and real-world data, Hölder-DPO precisely detects erroneous preferences; applied to standard alignment datasets (e.g., UltraFeedback), it uncovers substantial noise, and removing those samples significantly improves performance across multiple alignment methods, including DPO and KTO, demonstrating both diagnostic utility and practical efficacy.
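For context, the "mainstream alignment objective" the summary refers to is the standard DPO loss: a logistic loss on an implicit reward margin between the chosen and rejected responses. A minimal sketch of that baseline loss follows (this is plain DPO, not the paper's Hölder-DPO, whose exact form is not given in this summary):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one preference pair.

    logp_w / logp_l: policy log-probs of the chosen (w) and rejected (l)
    responses; ref_logp_w / ref_logp_l: the same under the frozen
    reference model.
    """
    # Implicit reward margin: beta times the difference of log-ratios.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # Logistic loss on the margin. It grows without bound as the margin
    # goes to -inf, so a single badly mislabeled pair keeps pulling on
    # the optimum -- the non-redescending behavior the paper proves.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With identical policy and reference log-probs the margin is zero and the loss is log 2; the loss decreases monotonically as the margin grows, but never levels off for mislabeled (negative-margin) pairs.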

📝 Abstract
Despite the importance of aligning language models with human preferences, crowd-sourced human feedback is often noisy (for example, preferring less desirable responses), posing a fundamental challenge to alignment. A truly robust alignment objective should yield identical model parameters even under severe label noise, a property known as redescending. We prove that no existing alignment methods satisfy this property. To address this, we propose Hölder-DPO, the first principled alignment loss with a provable redescending property, enabling estimation of the clean data distribution from noisy feedback. The aligned model estimates the likelihood of clean data, providing a theoretically grounded metric for dataset valuation that identifies the location and fraction of mislabels. This metric is gradient-free, enabling scalable and automated human feedback valuation without costly manual verification or a clean validation dataset. Hölder-DPO achieves state-of-the-art robust alignment performance while accurately detecting mislabels in controlled datasets. Finally, we apply Hölder-DPO to widely used alignment datasets, revealing substantial noise levels and demonstrating that removing these mislabels significantly improves alignment performance across methods.
Problem

Research questions and friction points this paper is trying to address.

Address noisy human feedback in model alignment
Prove no existing alignment loss stays robust under severe label noise
Develop robust loss for clean data estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hölder-DPO enables robust alignment with noisy feedback
Gradient-free metric identifies mislabels automatically
Theoretical clean data likelihood estimation improves alignment
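The dataset-valuation idea above (score each pair with the aligned model, then flag likely mislabels) can be illustrated with a generic, hypothetical sketch. The ranking-by-margin rule and the `frac` knob here are illustrative assumptions; the paper's gradient-free metric estimates the mislabel fraction itself, which this sketch does not:

```python
def flag_suspect_pairs(margins, frac=0.1):
    """Flag likely mislabeled preference pairs by implicit reward margin.

    margins: per-pair margins (e.g., beta-scaled log-ratio differences)
             computed with an already-aligned model.
    frac:    assumed contamination fraction (hypothetical knob; the
             paper's metric does not require it to be known).
    Returns indices of the lowest-margin pairs, the most suspicious ones.
    """
    # Sort pair indices from lowest to highest margin; a strongly
    # negative margin means the model disagrees with the recorded label.
    order = sorted(range(len(margins)), key=lambda i: margins[i])
    k = max(1, int(frac * len(margins)))
    return order[:k]
```

Removing the flagged pairs and re-running any alignment method (DPO, KTO, etc.) mirrors the cleaning experiment the summary describes.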