🤖 AI Summary
This paper addresses the challenge of computing Nash equilibria in networked aggregative games, where agents lack access to the global aggregate variable and cannot rely on centralized coordination. The authors propose a fully distributed algorithm in which each agent performs local gradient updates while simultaneously tracking the aggregate variable via dynamic consensus, requiring no central node or global information. Methodologically, the algorithm is modeled as a singularly perturbed system, exposing its intrinsic two-timescale (fast/slow) dynamical structure. A key feature is support for generalized aggregate functions beyond the arithmetic average, which substantially enhances modeling flexibility. Under strong monotonicity and constraint qualification assumptions, linear convergence with a constant step size is established using tools from variational inequality and constrained optimization theory. The algorithm's efficiency and robustness are empirically validated on a voltage-support problem in smart grids.
📝 Abstract
We present a fully distributed algorithm for Nash equilibrium seeking in aggregative games over networks. The proposed scheme endows each agent with a gradient-based update equipped with a tracking mechanism to locally reconstruct the aggregative variable, which is not available to the agents. We show that our method falls into the framework of singularly perturbed systems, as it involves the interconnection between a fast subsystem – the global information reconstruction dynamics – and a slow one concerning the optimization of the local strategies. This perspective plays a key role in analyzing the scheme with a constant stepsize, and in proving its linear convergence to the Nash equilibrium in strongly monotone games with local constraints. By exploiting the flexibility of our aggregative variable definition (not necessarily the arithmetic average of the agents' strategies), we show the efficacy of our algorithm on a realistic voltage support case study for the smart grid.
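To make the fast/slow interconnection concrete, the following is a minimal sketch of gradient play combined with dynamic average consensus tracking of the aggregate. Everything here is an illustrative assumption rather than the paper's method or case study: the quadratic costs, the ring network, the identity contribution function `phi_i(x_i) = x_i` (so the aggregate reduces to the arithmetic average), and all parameter values are hypothetical.

```python
import numpy as np

# Hypothetical quadratic aggregative game (illustrative, NOT the paper's case study):
# agent i minimizes J_i(x_i, s) = 0.5*(x_i - r_i)^2 + a*x_i*s, where s = mean(x),
# so the pseudo-gradient is F_i(x) = (x_i - r_i) + a*mean(x) + (a/N)*x_i.
rng = np.random.default_rng(0)
N, a, gamma = 6, 0.3, 0.1           # agents, coupling strength, constant stepsize
r = rng.normal(size=N)              # heterogeneous local targets (assumed data)

# Doubly stochastic weights for a ring communication graph (an assumption).
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = np.zeros(N)                     # local strategies
s = x.copy()                        # each agent's tracker of the (unavailable) aggregate

for _ in range(2000):
    # Slow dynamics: projected gradient step, using the local tracker s_i
    # in place of the true aggregate mean(x).
    grad = (x - r) + a * s + (a / N) * x
    x_new = np.clip(x - gamma * grad, -5.0, 5.0)   # local box constraints
    # Fast dynamics: dynamic average consensus keeps s_i tracking mean(x).
    s = W @ s + (x_new - x)
    x = x_new

# Centralized sanity check: solve the linear equilibrium conditions directly.
A = (1 + a / N) * np.eye(N) + (a / N) * np.ones((N, N))
x_star = np.linalg.solve(A, r)
print(np.max(np.abs(x - x_star)))   # distance to the Nash equilibrium
```

With a constant stepsize, the distance to the equilibrium decays geometrically in this toy game, mirroring the linear convergence result; initializing `s = x` is what makes the consensus update preserve the average, since `W` is doubly stochastic.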