🤖 AI Summary
This paper studies policy optimization in Stackelberg mean field games (MFGs), modeling hierarchical interactions between a single leader and an infinite population of homogeneous followers. Existing methods suffer from strong independence assumptions between the leader's and followers' objectives, low sample efficiency, and a lack of finite-time convergence guarantees. To address these limitations, the authors propose AC-SMFG, the first single-loop actor-critic algorithm for Stackelberg MFGs with non-asymptotic convergence guarantees. The method weakens the objective-independence assumption via a novel gradient alignment condition and alternately updates the leader's policy, a representative follower's policy, and the mean-field distribution using Markovian sample streams. The authors establish finite-time convergence to a stationary point under mild regularity conditions, and experiments across multiple economic environments show that the algorithm significantly outperforms existing baselines in both policy quality and convergence speed.
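The gradient alignment condition is only described informally here. As a rough illustration, one common way to formalize this kind of alignment is to require that the partial leader gradient, computed with the follower policy and mean field held fixed, stays positively correlated with the full gradient of the Stackelberg objective. The following is a sketch under that assumption, not necessarily the paper's exact statement:

```latex
% Sketch of one plausible formalization (the paper's exact condition may differ).
% J: leader objective; \pi^*(\theta), \mu^*(\theta): induced follower best
% response and mean field; g(\theta): the partial gradient that ignores how
% \pi^* and \mu^* shift with \theta.
\[
  g(\theta) \;=\; \nabla_\theta J(\theta, \pi, \mu)\Big|_{\pi=\pi^*(\theta),\,\mu=\mu^*(\theta)},
  \qquad
  \Bigl\langle \tfrac{\mathrm{d}}{\mathrm{d}\theta} J\bigl(\theta,\pi^*(\theta),\mu^*(\theta)\bigr),\; g(\theta) \Bigr\rangle
  \;\ge\; c \,\Bigl\| \tfrac{\mathrm{d}}{\mathrm{d}\theta} J\bigl(\theta,\pi^*(\theta),\mu^*(\theta)\bigr) \Bigr\|^2
\]
% for some constant c > 0, so that ascent along the cheaper partial gradient
% still makes progress on the full Stackelberg objective.
```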
📝 Abstract
We study policy optimization in Stackelberg mean field games (MFGs), a hierarchical framework for modeling the strategic interaction between a single leader and an infinitely large population of homogeneous followers. The objective can be formulated as a structured bi-level optimization problem in which the leader must learn a policy that maximizes its reward while anticipating the response of the followers. Existing methods for solving these (and related) problems often rely on restrictive independence assumptions between the leader's and followers' objectives, use samples inefficiently due to their nested-loop structure, and lack finite-time convergence guarantees. To address these limitations, we propose AC-SMFG, a single-loop actor-critic algorithm that operates on continuously generated Markovian samples. The algorithm alternates between (semi-)gradient updates for the leader, a representative follower, and the mean field, and is simple to implement in practice. We establish finite-time and finite-sample convergence of the algorithm to a stationary point of the Stackelberg objective; to our knowledge, this is the first Stackelberg MFG algorithm with non-asymptotic convergence guarantees. Our key assumption is a "gradient alignment" condition, which requires that the full policy gradient of the leader be well approximated by a partial component of it, relaxing the leader-follower independence assumptions of prior work. Simulation results across a range of well-established economics environments demonstrate that AC-SMFG outperforms existing multi-agent and MFG learning baselines in both policy quality and convergence speed.
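To make the alternating single-loop structure concrete, below is a minimal sketch of this kind of update scheme, assuming scalar iterates and placeholder toy gradients. The function names, step sizes, and gradient expressions are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

# Minimal single-loop sketch of the alternating update pattern described in
# the abstract. All function names and the toy gradients below are
# hypothetical placeholders, not the paper's actual estimators.

def leader_partial_grad(theta, phi, mu, batch):
    # Stochastic estimate of the *partial* leader gradient (follower policy
    # and mean field treated as fixed); under a gradient-alignment condition
    # this component still points in an ascent direction for the full
    # Stackelberg objective.
    return -(theta - batch.mean())

def follower_semi_grad(theta, phi, mu, batch):
    # Actor-critic (semi-)gradient moving the representative follower toward
    # its best response to the current leader policy and mean field.
    return -(phi - mu)

def induced_mean_field(phi, batch):
    # Statistic of the population induced by the follower policy; the mean
    # field iterate is relaxed toward it rather than recomputed exactly.
    return phi

rng = np.random.default_rng(0)
theta, phi, mu = 1.0, 0.0, 0.5      # leader, follower, mean-field iterates
alpha, beta, eta = 0.05, 0.1, 0.2   # step sizes (an assumption, not the paper's schedule)

for t in range(1000):
    batch = rng.normal(size=8)      # stands in for the Markovian sample stream
    # Single loop: every iterate moves once per sample batch, with no inner
    # loop solving the follower or mean-field problem to completion.
    theta = theta + alpha * leader_partial_grad(theta, phi, mu, batch)
    phi   = phi   + beta  * follower_semi_grad(theta, phi, mu, batch)
    mu    = mu    + eta   * (induced_mean_field(phi, batch) - mu)
```

The single-loop structure is what drives the sample-efficiency claim: a nested-loop method would re-solve the follower best response and the mean-field fixed point between consecutive leader updates, discarding samples each time, whereas here every sample batch advances all three iterates at once.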