Nash Policy Gradient: A Policy Gradient Method with Iteratively Refined Regularization for Finding Nash Equilibria

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
In imperfect-information games, existing regularization methods require diminishing regularization strength to approach Nash equilibria, often causing training instability. This paper proposes a monotonic convergence framework featuring fixed strong regularization and iterative updates of a reference policy—achieving, for the first time without uniqueness assumptions, provably monotonic convergence to an exact Nash equilibrium. The method operates via policy gradients, optimizing using only the current policy and the reference policy, thereby eliminating the need for regularization decay. Theoretical analysis establishes guaranteed convergence under mild conditions. Empirically, the approach achieves exploitability on par with or better than state-of-the-art model-free methods in canonical games; in large-scale domains—including Battleship and No-Limit Texas Hold'em—it yields substantial Elo improvements, demonstrating scalability and robustness.
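The summary's evaluation metric, exploitability, measures how far a strategy profile is from a Nash equilibrium: the total payoff each player could gain by deviating to a best response. A minimal sketch for a two-player zero-sum matrix game (the payoff matrix and function name are illustrative, not from the paper):

```python
import numpy as np

def exploitability(A, x, y):
    """Exploitability of profile (x, y) in the zero-sum game with payoff matrix A.

    Player 1's best-response value against y, plus player 2's best-response
    value against x; zero exactly at a Nash equilibrium.
    """
    return (A @ y).max() + (-A.T @ x).max()

# Rock-paper-scissors payoffs for player 1.
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], float)
uniform = np.ones(3) / 3
print(exploitability(A, uniform, uniform))  # uniform play is the Nash equilibrium here → 0.0
```

Lower exploitability means the profile is harder to exploit; the paper reports this metric on classic benchmark games.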

📝 Abstract
Finding Nash equilibria in imperfect-information games remains a central challenge in multi-agent reinforcement learning. While regularization-based methods have recently achieved last-iteration convergence to a regularized equilibrium, they require the regularization strength to shrink toward zero to approximate a Nash equilibrium, often leading to unstable learning in practice. Instead, we fix the regularization strength at a large value for robustness and achieve convergence by iteratively refining the reference policy. Our main theoretical result shows that this procedure guarantees strictly monotonic improvement and convergence to an exact Nash equilibrium in two-player zero-sum games, without requiring a uniqueness assumption. Building on this framework, we develop a practical algorithm, Nash Policy Gradient (NashPG), which preserves the generalizability of policy gradient methods while relying solely on the current and reference policies. Empirically, NashPG achieves comparable or lower exploitability than prior model-free methods on classic benchmark games and scales to large domains such as Battleship and No-Limit Texas Hold'em, where NashPG consistently attains higher Elo ratings.
Problem

Research questions and friction points this paper is trying to address.

Finding Nash equilibria in imperfect-information games in multi-agent reinforcement learning
Addressing the training instability of regularization methods by fixing the regularization strength instead of decaying it
Developing the NashPG algorithm for scalable convergence without uniqueness assumptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fixed regularization strength for robust learning
Iteratively refined reference policy for convergence
Policy gradient method using current and reference policies
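The three ingredients above can be sketched on a toy zero-sum matrix game: gradient updates on a KL-regularized payoff with a fixed, large regularization strength, and a reference policy that is periodically refreshed to the current policy. The hyperparameters and the exact update rule are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Rock-paper-scissors payoffs for player 1 (player 2 receives -A).
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], float)

rng = np.random.default_rng(0)
th1, th2 = rng.normal(size=3), rng.normal(size=3)        # policy logits
x_ref, y_ref = softmax(th1), softmax(th2)                 # reference policies
tau, lr = 0.5, 0.5                                        # fixed (large) regularization, step size

for t in range(2000):
    x, y = softmax(th1), softmax(th2)
    # Gradient of the KL-regularized payoff w.r.t. each player's policy:
    # payoff vector minus tau * (log pi - log pi_ref).
    g1 = A @ y - tau * (np.log(x) - np.log(x_ref))
    g2 = -A.T @ x - tau * (np.log(y) - np.log(y_ref))
    # Softmax policy-gradient step (centered to account for the Jacobian).
    th1 += lr * x * (g1 - x @ g1)
    th2 += lr * y * (g2 - y @ g2)
    # Iteratively refine the reference policies instead of decaying tau.
    if (t + 1) % 200 == 0:
        x_ref, y_ref = softmax(th1), softmax(th2)

x, y = softmax(th1), softmax(th2)
exploit = (A @ y).max() + (-A.T @ x).max()  # distance from Nash equilibrium
```

After training, `x` and `y` should approach the uniform Nash equilibrium of rock-paper-scissors and `exploit` should be near zero, illustrating how refreshing the reference removes the need for regularization decay.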