🤖 AI Summary
This work investigates the time complexity of the Follow the Regularized Leader (FTRL) algorithm for converging to Nash equilibria in potential games. By constructing explicit instances, it establishes, for the first time, an exponential lower bound on the convergence time of FTRL in two-player potential games for any permutation-invariant regularizer, and a doubly exponential lower bound for fictitious play (the extreme version of FTRL) in the multi-player setting. The study further shows that the FTRL dynamics admit a potential-function structure and that, via known equivalences with mirror descent, the lower bound extends to variants such as multiplicative weights update. On the positive side, any no-regret dynamics executed in a lazy, alternating fashion reach an approximate equilibrium within $\exp(O(1/\varepsilon^2))$ iterations, matching the lower bound up to factors in the exponent.
📝 Abstract
Follow the regularized leader (FTRL) is the premier algorithm for online optimization. However, despite decades of research on its convergence in constrained optimization -- and potential games in particular -- its behavior has hitherto remained poorly understood. In this paper, we establish that FTRL can take exponential time to converge to a Nash equilibrium in two-player potential games for any (permutation-invariant) regularizer and potentially vanishing learning rate. By known equivalences, this translates to an exponential lower bound for certain mirror descent counterparts, most notably multiplicative weights update. On the positive side, we establish the potential property for FTRL and obtain an exponential upper bound $\exp(O_{\epsilon}(1/\epsilon^2))$ for any no-regret dynamics executed in a lazy, alternating fashion, matching our lower bound up to factors in the exponent. Finally, in multi-player potential games, we show that fictitious play -- the extreme version of FTRL -- can take doubly exponential time to reach a Nash equilibrium. This constitutes an exponentially stronger lower bound for this foundational learning algorithm in games.
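As a concrete illustration of the dynamics studied here (not the paper's lower-bound construction), FTRL with the entropic regularizer on the probability simplex reduces to multiplicative weights update (MWU). A minimal sketch on a hypothetical 2x2 identical-interest game, where both players receive the same payoff `A[i][j]` and thus the game is a potential game:

```python
import math

def mwu_step(x, payoffs, eta):
    """One MWU step: x_i <- x_i * exp(eta * payoff_i), then renormalize.
    This is FTRL with the (negative) entropy regularizer on the simplex."""
    w = [xi * math.exp(eta * p) for xi, p in zip(x, payoffs)]
    s = sum(w)
    return [wi / s for wi in w]

# Identical-interest game: both players get A[i][j]; the potential is the
# expected payoff itself. (Illustrative instance, not from the paper.)
A = [[1.0, 0.0],
     [0.0, 2.0]]

x = [0.5, 0.5]  # row player's mixed strategy
y = [0.5, 0.5]  # column player's mixed strategy
eta = 0.1       # learning rate

for t in range(2000):
    px = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]
    py = [sum(A[i][j] * x[i] for i in range(2)) for j in range(2)]
    x = mwu_step(x, px, eta)
    y = mwu_step(y, py, eta)
```

On this easy instance the simultaneous dynamics drift to the pure Nash equilibrium (second action, second action); the paper's point is that on carefully constructed potential games such convergence can require exponentially many iterations.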