Improving LLM General Preference Alignment via Optimistic Online Mirror Descent

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RLHF methods often rely on the Bradley–Terry (BT) model, which imposes restrictive structural assumptions and limits expressiveness in capturing complex human preferences. Method: The paper proposes a general preference alignment framework that drops the BT assumption, formulating preference learning as a two-player zero-sum game and approximating the Nash equilibrium policy via optimistic online mirror descent (optimistic OMD). Contribution/Results: Theoretically, the method achieves an $O(T^{-1})$ convergence rate on the duality gap, improving on the previous $O(T^{-1/2})$ rate for general preference alignment. Empirically, it outperforms state-of-the-art RLHF methods across multiple representative benchmarks, suggesting that general preference modeling improves alignment quality.
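For reference, a standard formalization of this game from the general-preference (Nash) alignment literature — the paper's exact notation may differ: given a preference oracle $\mathbb{P}(y \succ y' \mid x)$, the two policies play

$$\max_{\pi_1} \min_{\pi_2} \; \mathbb{E}_{x \sim \rho,\; y_1 \sim \pi_1(\cdot \mid x),\; y_2 \sim \pi_2(\cdot \mid x)} \big[ \mathbb{P}(y_1 \succ y_2 \mid x) \big],$$

and the duality gap of a policy pair $(\pi_1, \pi_2)$, the quantity the $O(T^{-1})$ bound controls, is

$$\mathrm{Gap}(\pi_1, \pi_2) \;=\; \max_{\pi} \mathbb{P}(\pi \succ \pi_2) \;-\; \min_{\pi} \mathbb{P}(\pi_1 \succ \pi),$$

where $\mathbb{P}(\pi \succ \pi')$ abbreviates the expected preference probability when responses are drawn from the two policies; the gap is zero exactly at a Nash equilibrium.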

📝 Abstract
Reinforcement learning from human feedback (RLHF) has demonstrated remarkable effectiveness in aligning large language models (LLMs) with human preferences. Many existing alignment approaches rely on the Bradley-Terry (BT) model assumption, which assumes the existence of a ground-truth reward for each prompt-response pair. However, this assumption can be overly restrictive when modeling complex human preferences. In this paper, we drop the BT model assumption and study LLM alignment under general preferences, formulated as a two-player game. Drawing on theoretical insights from learning in games, we integrate optimistic online mirror descent into our alignment framework to approximate the Nash policy. Theoretically, we demonstrate that our approach achieves an $O(T^{-1})$ bound on the duality gap, improving upon the previous $O(T^{-1/2})$ result. More importantly, we implement our method and show through experiments that it outperforms state-of-the-art RLHF algorithms across multiple representative benchmarks.
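As a minimal, self-contained sketch of the core solver (not the paper's LLM-scale implementation), the snippet below runs optimistic online mirror descent with the entropy mirror map — i.e., optimistic multiplicative-weights updates — on a small zero-sum matrix game; the payoff matrix, step size, and horizon are illustrative choices, and the function name is ours.

```python
import numpy as np

def optimistic_omd_nash(A, T=2000, eta=0.1):
    """Approximate a Nash equilibrium of the zero-sum game max_x min_y x^T A y
    via optimistic OMD with the entropy mirror map (optimistic multiplicative
    weights). Returns the averaged strategies and their duality gap."""
    n, m = A.shape
    x = np.ones(n) / n          # row player's strategy (maximizer)
    y = np.ones(m) / m          # column player's strategy (minimizer)
    g_prev = A @ y              # last round's gradient for the row player
    h_prev = A.T @ x            # last round's gradient for the column player
    x_avg, y_avg = np.zeros(n), np.zeros(m)
    for _ in range(T):
        g, h = A @ y, A.T @ x
        # Optimistic step: 2*g - g_prev predicts the next gradient by
        # extrapolating from the most recent one.
        x = x * np.exp(eta * (2 * g - g_prev))
        x /= x.sum()
        y = y * np.exp(-eta * (2 * h - h_prev))
        y /= y.sum()
        g_prev, h_prev = g, h
        x_avg += x
        y_avg += y
    x_avg /= T
    y_avg /= T
    # Duality gap of the averaged strategies: best-response value spread.
    gap = (A @ y_avg).max() - (A.T @ x_avg).min()
    return x_avg, y_avg, gap

# Matching pennies: the unique Nash equilibrium is uniform play, value 0.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y, gap = optimistic_omd_nash(A)
print(x, y, gap)   # both near [0.5, 0.5]; gap shrinks roughly as O(1/T)
```

The optimism term ($2g_t - g_{t-1}$ in place of the plain gradient $g_t$) is what lifts the averaged-iterate duality gap from the $O(T^{-1/2})$ rate of vanilla mirror descent to $O(T^{-1})$ in smooth games, mirroring the improvement the paper proves in its alignment setting.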
Problem

Research questions and friction points this paper is trying to address.

Aligning LLMs with human preferences
Modeling general preferences without the BT assumption
Slow convergence of prior Nash-approximation methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimistic Online Mirror Descent
General Preference Alignment
Nash Policy Approximation
Yuheng Zhang
University of Illinois Urbana-Champaign
Machine Learning · Reinforcement Learning · Online Learning · Bandits · Learning Theory
Dian Yu
Tencent AI Lab, Bellevue
Tao Ge
Microsoft Research
Natural Language Processing · Large Language Models · Generative AI
Linfeng Song
Tencent AI Lab, Bellevue
Zhichen Zeng
University of Illinois Urbana-Champaign
Haitao Mi
Principal Researcher, Tencent US
Large Language Models
Nan Jiang
University of Illinois Urbana-Champaign
Dong Yu
Tencent AI Lab, Bellevue