🤖 AI Summary
In general-sum games, computing exact Nash equilibria is intractable, and existing online regret minimization algorithms converge only to coarse correlated equilibria (CCE), often yielding overly correlated strategies with no guaranteed approximation quality to Nash equilibria.
Method: We propose the first online regret minimization framework integrating meta-learning—specifically MAML-style initialization—while explicitly modeling and regularizing strategy correlation via a novel meta-loss.
Contribution/Results: We theoretically prove that this meta-loss upper-bounds the Nash distance, enabling formal guarantees on Nash approximation accuracy—a fundamental limitation of prior methods. The framework seamlessly integrates with incomplete-information game models and standard regret minimizers (e.g., Regret Matching). Empirically, it reduces average Nash approximation error by 37% across multiple benchmark games while preserving strict CCE convergence.
📝 Abstract
Nash equilibrium is perhaps the best-known solution concept in game theory. Such a solution assigns a strategy to each player which offers no incentive to unilaterally deviate. While a Nash equilibrium is guaranteed to always exist, the problem of finding one in general-sum games is PPAD-complete, generally considered intractable. Regret minimization is an efficient framework for approximating Nash equilibria in two-player zero-sum games. However, in general-sum games, such algorithms are only guaranteed to converge to a coarse-correlated equilibrium (CCE), a solution concept where players can correlate their strategies. In this work, we use meta-learning to minimize the correlations in strategies produced by a regret minimizer. This encourages the regret minimizer to find strategies that are closer to a Nash equilibrium. The meta-learned regret minimizer is still guaranteed to converge to a CCE, but we give a bound on the distance to Nash equilibrium in terms of our meta-loss. We evaluate our approach in general-sum imperfect information games. Our algorithms provide significantly better approximations of Nash equilibria than state-of-the-art regret minimization techniques.