COMAL: A Convergent Meta-Algorithm for Aligning LLMs with General Preferences

📅 2024-10-30
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing alignment methods (e.g., RLHF) rely on the Bradley–Terry reward assumption, which is insufficient to capture general human preferences; prior self-play algorithms either diverge or converge only to the Nash policy of a modified game, and therefore cannot guarantee a 50% win rate against arbitrary opponents. Method: The paper formulates alignment as a two-player zero-sum game with the Nash equilibrium policy as the theoretical objective, and proposes COMAL, a meta-algorithm with provable last-iterate convergence to an exact Nash policy of the original game. COMAL requires only minimal changes to integrate with mainstream preference optimization frameworks (e.g., RLHF, DPO, KTO). Contribution/Results: Rigorous last-iterate convergence guarantees overcome prior limitations, where methods either diverged or converged only in modified games; experiments demonstrate the framework's effectiveness when combined with existing preference policy optimization methods.

📝 Abstract
Many alignment methods, including reinforcement learning from human feedback (RLHF), rely on the Bradley-Terry reward assumption, which is insufficient to capture the full range of general human preferences. To achieve robust alignment with general preferences, we model the alignment problem as a two-player zero-sum game, where the Nash equilibrium policy guarantees a 50% win rate against any competing policy. However, previous algorithms for finding the Nash policy either diverge or converge to a Nash policy in a modified game, even in a simple synthetic setting, thereby failing to maintain the 50% win rate guarantee against all other policies. We propose a meta-algorithm, Convergent Meta Alignment Algorithm (COMAL), for language model alignment with general preferences, inspired by convergent algorithms in game theory. Theoretically, we prove that our meta-algorithm converges to an exact Nash policy in the last iterate. Additionally, our meta-algorithm is simple and can be integrated with many existing methods designed for RLHF and preference optimization with minimal changes. Experimental results demonstrate the effectiveness of the proposed framework when combined with existing preference policy optimization methods.
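The zero-sum-game framing in the abstract can be made concrete with a toy preference matrix (a hypothetical rock-paper-scissors-style example, not taken from the paper). Because the preference below is intransitive, no Bradley-Terry reward model can represent it, yet the game's Nash policy still guarantees a 50% win rate against any opponent:

```python
# Toy sketch (not from the paper): the 50% win-rate guarantee of a Nash
# policy in a two-player zero-sum preference game.
# P[i][j] = probability that response i is preferred over response j,
# with P[i][j] + P[j][i] = 1. This preference is cyclic (0 beats 1,
# 1 beats 2, 2 beats 0), so no Bradley-Terry reward can represent it.
P = [
    [0.5, 0.9, 0.1],
    [0.1, 0.5, 0.9],
    [0.9, 0.1, 0.5],
]

def win_rate(pi, mu, P):
    """Probability a response drawn from pi beats one drawn from mu."""
    return sum(pi[i] * mu[j] * P[i][j]
               for i in range(len(pi)) for j in range(len(mu)))

# By symmetry, the uniform policy is the Nash equilibrium of this game.
nash = [1 / 3, 1 / 3, 1 / 3]

# It achieves exactly a 50% win rate against any opponent policy:
for mu in ([1, 0, 0], [0.2, 0.5, 0.3], [0, 0, 1]):
    assert abs(win_rate(nash, mu, P) - 0.5) < 1e-9
```

A Bradley-Terry model, by contrast, would impose a total ordering on the three responses and necessarily misrepresent at least one of the pairwise preferences above.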
Problem

Research questions and friction points this paper is trying to address.

The Bradley-Terry reward model cannot capture the full range of general human preferences
Prior self-play algorithms either diverge or converge only in a modified game, even in simple synthetic settings
An aligned policy should maintain a 50% win rate against any competing policy, which earlier methods fail to guarantee
Innovation

Methods, ideas, or system contributions that make the work stand out.

Models alignment as a two-player zero-sum game with the Nash policy as the objective
Provably converges to an exact Nash policy in the last iterate
Integrates with existing RLHF and preference optimization methods with minimal changes