🤖 AI Summary
This paper investigates the robustness of Nash equilibria in continuous games under two distinct sources of uncertainty: strategic uncertainty (small but otherwise arbitrary perturbations of the game's payoff structure) and dynamic uncertainty (randomness and noise affecting the learning process). The authors define *strategically robust equilibria* as those that remain invariant under such payoff perturbations and provide a crisp geometric characterization of them. They then establish a structural correspondence between the two notions of robustness: strategic robustness implies asymptotic stability under the dynamics of follow-the-regularized-leader (FTRL), and, conversely, the strategic-robustness requirement cannot be relaxed if dynamic robustness is to be maintained. Finally, they quantify the rate of convergence to robust equilibria as a function of the underlying regularizer, showing that entropically regularized learning converges at a geometric rate in games with affinely constrained action spaces. Together, these results tie strategic and dynamic robustness into a single analytical framework for stability analysis in game-theoretic learning.
📝 Abstract
In this paper, we examine the robustness of Nash equilibria in continuous games under both strategic and dynamic uncertainty. Starting with the former, we introduce the notion of robust equilibria as those equilibria that remain invariant to small -- but otherwise arbitrary -- perturbations of the game's payoff structure, and we provide a crisp geometric characterization thereof. Subsequently, we turn to the question of dynamic robustness, and we examine which equilibria may arise as stable limit points of the dynamics of "follow the regularized leader" (FTRL) in the presence of randomness and uncertainty. Despite their very distinct origins, we establish a structural correspondence between these two notions of robustness: strategic robustness implies dynamic robustness, and, conversely, the requirement of strategic robustness cannot be relaxed if dynamic robustness is to be maintained. Finally, we examine the rate of convergence to robust equilibria as a function of the underlying regularizer, and we show that entropically regularized learning converges at a geometric rate in games with affinely constrained action spaces.
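To make the entropy-regularized FTRL dynamics discussed above concrete, the following is a minimal sketch (not the paper's own experiments): with the entropic regularizer, FTRL on the simplex reduces to exponential weights, i.e., each player accumulates payoff scores and plays the softmax of those scores. The 2x2 payoff matrices, step size `eta`, and iteration count below are illustrative assumptions chosen so that the game has a strict (hence robust) equilibrium at the action profile (0, 0).

```python
import numpy as np

def softmax(y):
    """Logit choice map of the entropic regularizer (numerically stabilized)."""
    z = np.exp(y - y.max())
    return z / z.sum()

# Hypothetical 2x2 game, for illustration only: action 0 strictly dominates
# for both players, so (e0, e0) is a strict Nash equilibrium.
A = np.array([[3.0, 1.0],
              [2.0, 0.0]])  # row player's payoffs A[i, j]
B = A.T                     # column player's payoffs B[i, j] (symmetric setup)

def entropic_ftrl(A, B, eta=0.1, steps=2000):
    """Discrete-time entropy-regularized FTRL (exponential weights):
    accumulate payoff vectors, play softmax of eta times the scores."""
    y1 = np.zeros(A.shape[0])  # cumulative payoff scores, row player
    y2 = np.zeros(A.shape[1])  # cumulative payoff scores, column player
    for _ in range(steps):
        x1, x2 = softmax(eta * y1), softmax(eta * y2)
        y1 += A @ x2       # expected payoff of each row action vs. x2
        y2 += B.T @ x1     # expected payoff of each column action vs. x1
    return x1, x2

x1, x2 = entropic_ftrl(A, B)
# Both mixed strategies concentrate on the strict equilibrium action 0.
```

In this dominance-solvable example the score gap between the two actions grows linearly in time, so the softmax strategies converge to the equilibrium exponentially fast in the accumulated gap, consistent with the geometric-rate phenomenon described in the abstract (larger `eta`, i.e., weaker regularization, sharpens the choice map and speeds concentration).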