CURATRON: Complete and Robust Preference Data for Rigorous Alignment of Large Language Models

📅 2024-03-05
🏛️ DASH
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two pervasive challenges in large language model (LLM) preference alignment: incomplete preference data and adversarial contamination. To tackle them, the authors propose a robust recalibration framework centered on a polynomial-time ranking algorithm that, to their knowledge, is the first to provably recover an epsilon-optimal ranking with high probability while tolerating up to O(n) adversarially perturbed pairwise comparisons per model response, even when only a partial set of comparisons is observed. The method robustifies classical preference models, including Bradley–Terry–Luce (BTL) and certain generalizations, against both noise and sparsity. Experiments show substantial improvements in preference data quality under heavy noise and large-scale missing comparisons, making LLM value alignment more robust and ethically consistent. The framework thus offers both a new theoretical tool and a practical component for trustworthy preference-based alignment.

📝 Abstract
This paper addresses the challenge of aligning large language models (LLMs) with human values via preference learning (PL), focusing on incomplete and corrupted data in preference datasets. We propose a novel method for robustly and completely recalibrating values within these datasets to enhance LLMs' resilience against these issues. In particular, we devise a guaranteed polynomial-time ranking algorithm that robustifies several existing models, such as the classic Bradley–Terry–Luce (BTL) model and certain generalizations of it. To the best of our knowledge, our present work is the first to propose an algorithm that provably recovers an epsilon-optimal ranking with high probability while allowing up to O(n) perturbed pairwise comparison results per model response. Furthermore, we show robust recovery results in the partially observed setting. Our experiments confirm that our algorithms handle adversarial noise and unobserved comparisons well in LLM preference dataset settings. This work contributes to the development and scaling of more reliable and ethically aligned AI models by equipping the dataset curation pipeline with the ability to handle missing and maliciously manipulated inputs.
Problem

Research questions and friction points this paper is trying to address.

Addresses incomplete and corrupted data in preference learning datasets
Proposes robust recalibration method for aligning LLMs with human values
Develops guaranteed polynomial-time ranking algorithm for adversarial noise handling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Robust preference data recalibration for LLM alignment
Polynomial-time ranking algorithm for BTL models
Handles adversarial noise and missing comparisons
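For context on the baseline that the paper robustifies: the classic BTL model assigns each response a latent score and fits those scores to pairwise comparison outcomes. Below is a minimal, hypothetical sketch of plain BTL score estimation from a sparse set of comparisons, using the standard MM (minorization-maximization) iteration. It is not the paper's CURATRON algorithm and has no adversarial robustness; the `wins` dictionary and all names are illustrative assumptions.

```python
# Minimal sketch of Bradley-Terry-Luce (BTL) score estimation from a
# sparse pairwise-comparison record, via the standard MM iteration.
# NOT the paper's robust CURATRON algorithm -- just the classic baseline
# it builds on. All data below is a toy illustration.

def btl_scores(wins, n, iters=200):
    """wins[(i, j)] = number of times response i beat response j.
    Unobserved pairs are simply absent (partial comparisons)."""
    p = [1.0] * n  # initial uniform scores
    for _ in range(iters):
        new_p = []
        for i in range(n):
            # total wins of response i across all observed comparisons
            w_i = sum(w for (a, _), w in wins.items() if a == i)
            denom = 0.0
            for j in range(n):
                if i == j:
                    continue
                # number of comparisons between i and j (either direction)
                n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
                if n_ij:
                    denom += n_ij / (p[i] + p[j])
            # MM update; keep old score if i was never compared
            new_p.append(w_i / denom if denom else p[i])
        s = sum(new_p)
        p = [x / s for x in new_p]  # normalize for numerical stability
    return p

# Toy example: 4 responses; pair (0, 3) is never observed (partial data)
wins = {(0, 1): 9, (1, 0): 1,
        (0, 2): 8, (2, 0): 2,
        (1, 2): 7, (2, 1): 3,
        (2, 3): 6, (3, 2): 4}
scores = btl_scores(wins, 4)
ranking = sorted(range(4), key=lambda i: -scores[i])
```

Even this baseline tolerates missing comparisons (the absent pair simply contributes nothing to the update), but a handful of adversarially flipped outcomes can reorder the recovered ranking, which is the failure mode the paper's recalibration framework targets.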