Multiplayer Nash Preference Optimization

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RLHF methods—e.g., Bradley–Terry-based reward modeling—fail to capture the non-transitivity and heterogeneity inherent in real-world human preferences. Although Nash Learning from Human Feedback (NLHF) frames alignment as a two-player Nash game, its restrictive single-opponent assumption cannot represent genuinely multi-source preference structures. Method: We propose the first extension of NLHF to a *multi-player Nash game* framework, introducing a multi-adversary competition mechanism and a novel *multi-player duality gap* metric to mitigate the biases inherent in pairwise formulations. Our approach integrates Nash-equilibrium optimization, multi-strategy regularization, and a multiplayer generalization of the duality gap. Contribution/Results: On instruction-following tasks, our method significantly outperforms NLHF baselines—including INPO and ONPO—under both heterogeneous annotator settings and mixed-strategy evaluation, achieving more accurate modeling of complex, non-transitive human preferences.
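The objectives involved can be sketched in equations (one plausible formalization consistent with the summary and abstract; the paper's exact definitions may differ). In two-player NLHF, the aligned policy is a Nash equilibrium of a regularized preference game,

$$\pi^* \;=\; \arg\max_{\pi}\,\min_{\pi'}\; \mathbb{P}(\pi \succ \pi') \;-\; \tau\,\mathrm{KL}\!\left(\pi \,\|\, \pi_{\mathrm{ref}}\right),$$

whereas a multiplayer generalization lets each of $n$ policies compete against the population of opponents $\pi_{-i}$ while staying close to the reference:

$$\max_{\pi_i}\; \mathbb{P}(\pi_i \succ \pi_{-i}) \;-\; \tau\,\mathrm{KL}\!\left(\pi_i \,\|\, \pi_{\mathrm{ref}}\right), \qquad i = 1,\dots,n.$$

A natural multi-player duality gap then averages each player's best-response improvement,

$$\mathrm{Gap}(\pi_1,\dots,\pi_n) \;=\; \frac{1}{n}\sum_{i=1}^{n}\Big[\max_{\pi'}\,\mathbb{P}(\pi' \succ \pi_{-i}) \;-\; \mathbb{P}(\pi_i \succ \pi_{-i})\Big],$$

which vanishes exactly when every player is already best-responding, i.e., at a Nash equilibrium of the unregularized game.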

📝 Abstract
Reinforcement learning from human feedback (RLHF) has emerged as the standard paradigm for aligning large language models (LLMs) with human preferences. However, reward-based methods built on the Bradley-Terry assumption struggle to capture the non-transitive and heterogeneous nature of real-world preferences. To address this, recent studies have reframed alignment as a two-player Nash game, giving rise to Nash learning from human feedback (NLHF). While this perspective has inspired algorithms such as INPO, ONPO, and EGPO with strong theoretical and empirical guarantees, they remain fundamentally restricted to two-player interactions, creating a single-opponent bias that fails to capture the full complexity of realistic preference structures. In this work, we introduce Multiplayer Nash Preference Optimization (MNPO), a novel framework that generalizes NLHF to the multiplayer regime. It formulates alignment as an $n$-player game, where each policy competes against a population of opponents while being regularized toward a reference model. Our framework establishes well-defined Nash equilibria in multiplayer settings and extends the concept of duality gap to quantify approximation quality. We demonstrate that MNPO inherits the equilibrium guarantees of two-player methods while enabling richer competitive dynamics and improved coverage of diverse preference structures. Through comprehensive empirical evaluation, we show that MNPO consistently outperforms existing NLHF baselines on instruction-following benchmarks, achieving superior alignment quality under heterogeneous annotator conditions and mixed-policy evaluation scenarios. Together, these results establish MNPO as a principled and scalable framework for aligning LLMs with complex, non-transitive human preferences. Code is available at https://github.com/smiles724/MNPO.
Problem

Research questions and friction points this paper is trying to address.

Extends Nash learning from two-player to multiplayer preference alignment games
Addresses limitations of reward models in capturing complex human preferences
Enables better alignment with non-transitive and heterogeneous preference structures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalizes Nash learning to multiplayer game setting
Formulates alignment as n-player policy competition game
Establishes Nash equilibria for complex preference structures
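The population-based competition above can be illustrated with a toy self-play loop. The sketch below (all names and hyperparameters are hypothetical, not from the paper) runs n mirror-ascent learners on a rock-paper-scissors preference table — the simplest non-transitive preference structure — where each learner plays against the mean of the other players' policies under KL regularization toward a uniform reference, and reports an averaged best-response ("duality") gap.

```python
import math
import random

# Toy non-transitive preference table: P[a][b] = Pr(action a is preferred over b).
# Rock-paper-scissors structure, a hypothetical stand-in for a learned preference model.
P = [[0.5, 1.0, 0.0],
     [0.0, 0.5, 1.0],
     [1.0, 0.0, 0.5]]

REF = [1 / 3, 1 / 3, 1 / 3]  # uniform reference policy (the KL-regularization target)


def win_rate(pi, opp):
    """Expected preference of mixed policy pi over opponent mixture opp."""
    return sum(pi[a] * P[a][b] * opp[b] for a in range(3) for b in range(3))


def mnpo_sketch(n_players=4, steps=2000, eta=0.2, tau=0.1, seed=0):
    rng = random.Random(seed)
    # random initial mixed policies over the 3 actions
    pis = []
    for _ in range(n_players):
        w = [rng.random() + 0.1 for _ in range(3)]
        s = sum(w)
        pis.append([x / s for x in w])
    for _ in range(steps):
        new = []
        for i in range(n_players):
            # opponent population: mean of all other players' current policies
            opp = [sum(pis[j][a] for j in range(n_players) if j != i) / (n_players - 1)
                   for a in range(3)]
            # KL-regularized payoff gradient, then a mirror-ascent (exp-weights) step
            logits = [math.log(pis[i][a])
                      + eta * (sum(P[a][b] * opp[b] for b in range(3))
                               - tau * (math.log(pis[i][a]) - math.log(REF[a])))
                      for a in range(3)]
            m = max(logits)
            e = [math.exp(l - m) for l in logits]
            z = sum(e)
            new.append([x / z for x in e])
        pis = new
    # multiplayer duality gap: each player's best-response improvement, averaged
    gaps = []
    for i in range(n_players):
        opp = [sum(pis[j][a] for j in range(n_players) if j != i) / (n_players - 1)
               for a in range(3)]
        best = max(sum(P[a][b] * opp[b] for b in range(3)) for a in range(3))
        gaps.append(best - win_rate(pis[i], opp))
    return pis, sum(gaps) / n_players
```

With these settings the regularization damps the cycling that plain exponential weights exhibit on rock-paper-scissors, so the policies settle near the uniform equilibrium and the averaged gap shrinks toward zero; this is only a small-scale analogue of the LLM-scale training the paper describes.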