The Power of Regularization in Solving Extensive-Form Games

📅 2022-06-19
🏛️ International Conference on Learning Representations
📈 Citations: 23
Influential: 2
🤖 AI Summary
This work investigates the theoretical and algorithmic roles of regularization in computing equilibria of extensive-form games (EFGs). Addressing key challenges, including non-uniqueness of Nash equilibria (NE), slow convergence, and the difficulty of approximating extensive-form perfect equilibria (EFPE), the paper proposes two algorithms: dilated optimistic mirror descent with adaptive regularization (DOMD) and regularized counterfactual regret minimization (Reg-CFR). DOMD achieves fast Õ(1/T) last-iterate convergence without assuming uniqueness of the NE. Reg-CFR attains O(1/T^{1/4}) best-iterate and O(1/T^{3/4}) average-iterate convergence for finding NE in unperturbed EFGs; in perturbed EFGs, it achieves the optimal O(1/T) average-iterate rate and asymptotic last-iterate convergence, the first last-iterate guarantee for a CFR-type algorithm, which is useful for approximating EFPE. Collectively, these methods improve convergence rates, robustness to perturbations, and solution quality, advancing both theoretical guarantees and practical performance in equilibrium computation for EFGs.
📝 Abstract
In this paper, we investigate the power of *regularization*, a common technique in reinforcement learning and optimization, in solving extensive-form games (EFGs). We propose a series of new algorithms based on regularizing the payoff functions of the game, and establish a set of convergence results that strictly improve over the existing ones, with either weaker assumptions or stronger convergence guarantees. In particular, we first show that dilated optimistic mirror descent (DOMD), an efficient variant of OMD for solving EFGs, with adaptive regularization can achieve a fast $\tilde O(1/T)$ last-iterate convergence in terms of duality gap and distance to the set of Nash equilibrium (NE) without uniqueness assumption of the NE. Second, we show that regularized counterfactual regret minimization (Reg-CFR), with a variant of optimistic mirror descent algorithm as regret-minimizer, can achieve $O(1/T^{1/4})$ best-iterate, and $O(1/T^{3/4})$ average-iterate convergence rate for finding NE in EFGs. Finally, we show that Reg-CFR can achieve asymptotic last-iterate convergence, and optimal $O(1/T)$ average-iterate convergence rate, for finding the NE of perturbed EFGs, which is useful for finding approximate extensive-form perfect equilibria (EFPE). To the best of our knowledge, they constitute the first last-iterate convergence results for CFR-type algorithms, while matching the state-of-the-art average-iterate convergence rate in finding NE for non-perturbed EFGs. We also provide numerical results to corroborate the advantages of our algorithms.
Problem

Research questions and friction points this paper is trying to address.

Improving convergence rates in extensive-form games using regularization.
Achieving last-iterate convergence without Nash equilibrium uniqueness.
Enhancing CFR algorithms for faster equilibrium computation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

DOMD with adaptive regularization achieves fast last-iterate convergence
Reg-CFR uses an optimistic mirror descent regret minimizer to find NE in EFGs
Reg-CFR achieves asymptotic last-iterate convergence in perturbed EFGs
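The core mechanism behind these results, regularizing the payoff so the saddle-point problem becomes strongly convex-concave, can be illustrated on a much simpler object than an EFG. The sketch below is an illustrative toy, not the paper's DOMD or Reg-CFR: it runs optimistic multiplicative-weights updates on an entropy-regularized two-player zero-sum matrix game, where the last iterate (not just the average) approaches an approximate Nash equilibrium. The regularization weight `tau` and step size `eta` are arbitrary illustrative choices.

```python
import numpy as np

# Toy sketch (not the paper's dilated EFG algorithm): optimistic
# multiplicative-weights updates on an entropy-regularized zero-sum
# matrix game max_x min_y x^T A y.

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))      # payoff matrix of a random zero-sum game
tau, eta, T = 0.05, 0.1, 2000        # regularization weight, step size, iterations

def softmax(z):
    z = z - z.max()                  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

x = np.ones(3) / 3                   # max player's mixed strategy
y = np.ones(3) / 3                   # min player's mixed strategy
gx_prev = A @ y - tau * np.log(x)    # previous gradients, reused by the
gy_prev = A.T @ x + tau * np.log(y)  # "optimistic" extrapolation below

for _ in range(T):
    # Entropy regularization makes the payoff strongly concave in x and
    # strongly convex in y; this is what drives last-iterate convergence
    # (to the regularized equilibrium, an O(tau)-approximate NE).
    gx = A @ y - tau * np.log(x)
    gy = A.T @ x + tau * np.log(y)
    x = softmax(np.log(x) + eta * (2 * gx - gx_prev))   # max player ascends
    y = softmax(np.log(y) - eta * (2 * gy - gy_prev))   # min player descends
    gx_prev, gy_prev = gx, gy

# Duality gap of the LAST iterate, measured in the unregularized game
gap = float((A @ y).max() - (A.T @ x).min())
print(f"last-iterate duality gap: {gap:.4f}")
```

Without the `tau * np.log(...)` terms this is plain optimistic multiplicative weights, whose last iterate can cycle around the equilibrium; the regularizer is what pins the iterates down, at the cost of an O(tau) bias that the paper's adaptive schemes then shrink.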