Accelerating Nash Learning from Human Feedback via Mirror Prox

📅 2025-05-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Reward modeling in RLHF—e.g., via Bradley–Terry models—struggles to capture complex human preference structures such as non-transitivity. Method: Nash Learning from Human Feedback (NLHF) sidesteps reward models by framing preference learning as a two-player game and solving directly for its Nash equilibrium. Contribution/Results: This paper introduces Nash-MP, an online NLHF algorithm built on the Mirror Prox optimization scheme, with theoretical guarantees of last-iterate linear convergence independent of action-space size. The analysis targets a $\beta$-regularized game formulation, and a stochastic policy-gradient approximation of the proximal steps improves scalability. Theory shows that the KL error decays at rate $(1+2\beta)^{-N/2}$ in the number of preference queries $N$, while the exploitability gap and the span semi-norm of log-probabilities also converge linearly. In LLM fine-tuning tasks, Nash-MP matches state-of-the-art performance and exhibits strong compatibility with existing pipelines.

📝 Abstract
Traditional Reinforcement Learning from Human Feedback (RLHF) often relies on reward models, frequently assuming preference structures like the Bradley-Terry model, which may not accurately capture the complexities of real human preferences (e.g., intransitivity). Nash Learning from Human Feedback (NLHF) offers a more direct alternative by framing the problem as finding a Nash equilibrium of a game defined by these preferences. In this work, we introduce Nash Mirror Prox ($\mathtt{Nash{\text-}MP}$), an online NLHF algorithm that leverages the Mirror Prox optimization scheme to achieve fast and stable convergence to the Nash equilibrium. Our theoretical analysis establishes that Nash-MP exhibits last-iterate linear convergence towards the $\beta$-regularized Nash equilibrium. Specifically, we prove that the KL-divergence to the optimal policy decreases at a rate of order $(1+2\beta)^{-N/2}$, where $N$ is the number of preference queries. We further demonstrate last-iterate linear convergence for the exploitability gap and uniformly for the span semi-norm of log-probabilities, with all these rates being independent of the size of the action space. Furthermore, we propose and analyze an approximate version of Nash-MP where proximal steps are estimated using stochastic policy gradients, making the algorithm closer to applications. Finally, we detail a practical implementation strategy for fine-tuning large language models and present experiments that demonstrate its competitive performance and compatibility with existing methods.
Problem

Research questions and friction points this paper is trying to address.

Improving Nash Learning from Human Feedback convergence speed
Addressing limitations of traditional reward models in RLHF
Developing practical Nash-MP algorithm for large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nash-MP algorithm for fast Nash equilibrium convergence
Mirror Prox optimization ensures stable policy learning
Stochastic policy gradients enable practical large model tuning
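The Mirror Prox scheme behind Nash-MP can be illustrated on a tiny β-regularized zero-sum matrix game. The sketch below is an illustration only, with made-up step-size and regularization values; the paper's Nash-MP operates on preference games over LLM policies, not on explicit payoff matrices.

```python
import numpy as np

def mirror_step(x, g, eta):
    """One entropic mirror step: x * exp(eta*g), renormalized onto the simplex."""
    z = np.log(x) + eta * g
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def nash_mirror_prox(A, x1, x2, beta=0.1, eta=0.3, n_iters=500):
    """Mirror Prox (extragradient with an entropic mirror map) on a
    beta-regularized two-player zero-sum matrix game:
      player 1 maximizes x1^T A x2 - beta*KL(x1 || uniform),
      player 2 minimizes x1^T A x2 + beta*KL(x2 || uniform)."""
    n, m = A.shape
    for _ in range(n_iters):
        # Extrapolation step: gradients evaluated at the current iterate.
        g1 = A @ x2 - beta * (np.log(x1) + np.log(n))
        g2 = -A.T @ x1 - beta * (np.log(x2) + np.log(m))
        y1 = mirror_step(x1, g1, eta)
        y2 = mirror_step(x2, g2, eta)
        # Update step: gradients re-evaluated at the extrapolated point,
        # but the step is taken from the original iterate.
        h1 = A @ y2 - beta * (np.log(y1) + np.log(n))
        h2 = -A.T @ y1 - beta * (np.log(y2) + np.log(m))
        x1 = mirror_step(x1, h1, eta)
        x2 = mirror_step(x2, h2, eta)
    return x1, x2

# Rock-paper-scissors: the unique Nash equilibrium is uniform play, and the
# KL regularizer (centered at uniform) leaves that equilibrium unchanged.
A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
x1 = np.array([0.7, 0.2, 0.1])
x2 = np.array([0.1, 0.3, 0.6])
x1, x2 = nash_mirror_prox(A, x1, x2)
exploitability = (A @ x2).max() - (x1 @ A).min()
```

The extrapolate-then-update structure is what distinguishes Mirror Prox from plain mirror descent: re-evaluating gradients at the look-ahead point damps the rotation of the bilinear game, which is what enables last-iterate (rather than only average-iterate) convergence.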