Strong Preferences Affect the Robustness of Preference Models and Value Alignment

📅 2024-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the robustness of preference modeling in value alignment for large language models, focusing on how strong human preferences—those with probabilities near 0 or 1—induce prediction instability in the Bradley–Terry and Plackett–Luce models. Through theoretical sensitivity analysis, the authors characterize the conditions under which small changes to some preferences propagate into large changes in the probabilities these models assign to other preferences. The analysis shows that dominant preferences amplify such shifts across candidate options, revealing an inherent fragility in standard preference modeling. The results carry a practical warning for value alignment: preference-based methods can become unstable when human preference data contains near-deterministic preferences, with implications for AI safety and trustworthiness. The study identifies the specific conditions under which this sensitivity becomes significant and points toward principled directions for designing alignment algorithms robust to preference extremity.

📝 Abstract
Value alignment, which aims to ensure that large language models (LLMs) and other AI agents behave in accordance with human values, is critical for ensuring the safety and trustworthiness of these systems. A key component of value alignment is the modeling of human preferences as a representation of human values. In this paper, we investigate the robustness of value alignment by examining the sensitivity of preference models. Specifically, we ask: how do changes in the probabilities of some preferences affect the predictions of these models for other preferences? To answer this question, we theoretically analyze the robustness of widely used preference models by examining their sensitivities to minor changes in the preferences they model. Our findings reveal that, in the Bradley-Terry and Plackett-Luce models, the probability of a preference can change significantly as other preferences change, especially when these preferences are dominant (i.e., with probabilities near 0 or 1). We identify specific conditions where this sensitivity becomes significant for these models and discuss the practical implications for the robustness and safety of value alignment in AI systems.
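The sensitivity described in the abstract can be seen in a small numerical sketch (my own illustration, not code from the paper): under the Bradley-Terry model, an observed pairwise probability fixes a score ratio, so two observed preferences imply a third. When one observed preference is dominant (probability near 1), a small perturbation to it shifts the implied probability of the third preference far more than the same perturbation does in the moderate regime.

```python
def bt_implied(p_ab, p_bc):
    """Bradley-Terry: P(a > b) = s_a / (s_a + s_b), so an observed
    pairwise probability fixes the score ratio s_a/s_b = p_ab/(1-p_ab).
    Chaining the two ratios gives the implied probability P(a > c)."""
    r = (p_ab / (1 - p_ab)) * (p_bc / (1 - p_bc))
    return r / (1 + r)

# Moderate regime: both observed preferences are mild, and the implied
# P(a > c) is exactly 0.5. Perturbing p_ab by 0.01 barely moves it.
moderate_shift = bt_implied(0.59, 0.40) - bt_implied(0.60, 0.40)

# Strong regime: p_ab is dominant (near 1). The implied P(a > c) is the
# same 0.5 as above, yet the identical 0.01 perturbation to p_ab now
# moves it by an order of magnitude more.
strong_shift = bt_implied(0.98, 0.01) - bt_implied(0.99, 0.01)

print(f"moderate: {moderate_shift:+.4f}")  # moderate: -0.0104
print(f"strong:   {strong_shift:+.4f}")    # strong:   -0.1689
```

Since the Plackett-Luce model reduces to Bradley-Terry on pairs, the same ratio-chaining amplification applies to its pairwise marginals; the paper's analysis makes the conditions for this amplification precise.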
Problem

Research questions and friction points this paper is trying to address.

Examines the sensitivity of preference models used in value alignment.
Analyzes the robustness of the Bradley-Terry and Plackett-Luce models.
Identifies conditions under which this sensitivity affects AI system safety and trustworthiness.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Theoretical sensitivity analysis of widely used preference models
Robustness characterization of the Bradley-Terry and Plackett-Luce models
Explicit conditions under which preference perturbations undermine alignment robustness
Ziwei Xu
National University of Singapore
Machine Learning · Knowledge Representation · AI Safety
Mohan Kankanhalli
Department of Computer Science, National University of Singapore