🤖 AI Summary
This work addresses the lack of finite-time convergence guarantees for Q-value iteration in general-sum Markov games under Stackelberg interactions. Adopting a control-theoretic perspective, the authors model the learning dynamics as a switching system and analyze its convergence by constructing upper and lower comparison systems. They establish, for the first time, a finite-time error bound for Stackelberg Q-learning and prove convergence of the Q-functions under substantially milder policy conditions than previously required. This contribution fills a theoretical gap and provides a rigorous convergence guarantee for Stackelberg games in multi-agent reinforcement learning.
📝 Abstract
Reinforcement learning has been successful both empirically and theoretically in single-agent settings, but extending these results to multi-agent reinforcement learning in general-sum Markov games remains challenging. This paper studies the convergence of Stackelberg Q-value iteration in two-player general-sum Markov games from a control-theoretic perspective. We introduce a relaxed policy condition tailored to the Stackelberg setting and model the learning dynamics as a switching system. By constructing upper and lower comparison systems, we establish finite-time error bounds for the Q-functions and characterize their convergence properties. Our results provide a novel control-theoretic perspective on Stackelberg learning. Moreover, to the best of the authors' knowledge, this paper offers the first finite-time convergence guarantees for Q-value iteration in general-sum Markov games under Stackelberg interactions.
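The Stackelberg Q-value iteration studied in the abstract can be sketched on a toy two-player general-sum Markov game: the follower best-responds to each leader action, the leader then maximizes its own Q-value, and both Q-functions are backed up with the resulting Stackelberg values. This is a minimal illustrative sketch, not the paper's exact formulation; the problem sizes, random rewards, and argmax tie-breaking rule below are all assumptions.

```python
import numpy as np

# Hypothetical toy instance (sizes and rewards are illustrative assumptions).
rng = np.random.default_rng(0)
nS, nA, nB = 4, 3, 3            # states, leader actions, follower actions
gamma = 0.9                     # discount factor

P = rng.random((nS, nA, nB, nS))
P /= P.sum(axis=-1, keepdims=True)   # transition kernel P(s' | s, a, b)
r1 = rng.random((nS, nA, nB))        # leader reward r1(s, a, b) in [0, 1)
r2 = rng.random((nS, nA, nB))        # follower reward r2(s, a, b) in [0, 1)

def stackelberg_values(Q1, Q2):
    """Per-state Stackelberg values: for each leader action a, the follower
    plays b*(a) = argmax_b Q2(s, a, b); the leader then picks the action a*
    maximizing Q1(s, a, b*(a)). Ties are broken by argmax order (assumption)."""
    V1, V2 = np.empty(nS), np.empty(nS)
    for s in range(nS):
        b_star = Q2[s].argmax(axis=1)          # follower best response b*(a)
        leader_payoff = Q1[s, np.arange(nA), b_star]
        a_star = leader_payoff.argmax()        # leader's Stackelberg action
        V1[s] = leader_payoff[a_star]
        V2[s] = Q2[s, a_star, b_star[a_star]]
    return V1, V2

# Synchronous Q-value iteration: both Q-functions are backed up with the
# Stackelberg values of the current iterate.
Q1 = np.zeros((nS, nA, nB))
Q2 = np.zeros_like(Q1)
for _ in range(500):
    V1, V2 = stackelberg_values(Q1, Q2)
    Q1, Q2 = r1 + gamma * (P @ V1), r2 + gamma * (P @ V2)
```

With rewards in [0, 1) the iterates stay bounded by 1/(1 - gamma); on this toy instance the loop illustrates the dynamics whose finite-time behavior the paper analyzes via comparison systems.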