Correcting the Mythos of KL-Regularization: Direct Alignment without Overoptimization via Chi-Squared Preference Optimization

📅 2024-07-18
🏛️ arXiv.org
📈 Citations: 8
Influential: 1
🤖 AI Summary
Offline alignment methods (e.g., RLHF, DPO) suffer from overoptimization, causing models to deviate from human preferences and degrade in generation quality; KL-regularization offers only limited mitigation. To address this, we propose χ²-Preference Optimization (χPO), which replaces only the logarithmic link function within the DPO framework, implicitly introducing χ²-divergence regularization. This yields the first practical offline preference optimization algorithm with a provable robustness guarantee against overoptimization. Grounded in the principle of pessimism in the face of uncertainty and a single-policy concentrability assumption, our sample-complexity analysis establishes rigorous theoretical guarantees matching the gold standard of offline reinforcement learning. Empirically, χPO improves alignment stability and generation quality while preserving DPO's algorithmic simplicity and ease of implementation.

📝 Abstract
Language model alignment methods such as reinforcement learning from human feedback (RLHF) have led to impressive advances in language model capabilities, but are limited by a widely observed phenomenon known as overoptimization, where the quality of the language model degrades over the course of the alignment process. As the model optimizes performance with respect to an offline reward model, it overfits to inaccuracies and drifts away from preferred responses covered by the data. To discourage such distribution shift, KL-regularization is widely employed in existing offline alignment methods, but overoptimization continues to harm performance. Lending theoretical insight into the source of these empirical observations, we first show that the KL-regularization is too weak to prevent overfitting, then raise the following question: is it possible to design an efficient algorithm that is provably robust to overoptimization? We address this question with a new algorithm for offline alignment, χ²-Preference Optimization (χPO). χPO is a one-line change to Direct Preference Optimization (DPO; Rafailov et al., 2023), which only involves modifying the logarithmic link function in the DPO objective. Despite this minimal change, χPO implicitly implements the principle of pessimism in the face of uncertainty via regularization with the χ²-divergence -- which quantifies uncertainty more effectively than KL-regularization -- and provably alleviates overoptimization, achieving sample-complexity guarantees based on single-policy concentrability -- the gold standard in offline reinforcement learning. χPO's simplicity and strong guarantees make it the first practical and general-purpose offline alignment algorithm that is provably robust to overoptimization.
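The abstract describes χPO as a one-line change to DPO: only the logarithmic link function in the DPO objective is modified, which implicitly swaps KL-regularization for χ²-divergence regularization. A minimal sketch of that idea, assuming the mixed link φ(z) = z + log z applied to the policy/reference density ratio (the function names and the scalar pairwise-loss framing here are illustrative, not the authors' reference implementation):

```python
import math

def log_sigmoid(x):
    # Numerically stable log(sigmoid(x)).
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def dpo_loss(logratio_w, logratio_l, beta=0.1):
    """DPO loss for one preference pair.

    logratio_w / logratio_l are log(pi(y)/pi_ref(y)) for the chosen and
    rejected responses; the implicit reward uses the plain log link.
    """
    margin = beta * (logratio_w - logratio_l)
    return -log_sigmoid(margin)

def chipo_loss(logratio_w, logratio_l, beta=0.1):
    """chiPO loss sketch: same objective, but the log link is replaced by
    phi(z) = z + log z on the density ratio z = pi/pi_ref.

    In terms of the log-ratio r = log z, that is phi = exp(r) + r; the extra
    exp(r) term grows much faster than r, penalizing large deviations from
    pi_ref far more heavily than KL-regularization does.
    """
    def link(r):
        return math.exp(r) + r  # mixed chi^2 + KL link (assumed form)
    margin = beta * (link(logratio_w) - link(logratio_l))
    return -log_sigmoid(margin)
```

Both losses shrink as the margin between chosen and rejected responses grows; the difference is only in how the implicit reward scales with the density ratio, which is where the pessimism enters.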
Problem

Research questions and friction points this paper is trying to address.

Addresses overoptimization in language model alignment.
Proposes χ²-Preference Optimization (χPO) as an offline alignment algorithm that is provably robust to overoptimization.
Achieves sample-complexity guarantees based on single-policy concentrability, the gold standard in offline reinforcement learning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

χ²-Preference Optimization (χPO), a one-line modification of the DPO objective
Replaces KL-regularization with χ²-divergence regularization
Provably alleviates overoptimization in offline alignment