🤖 AI Summary
This work addresses the multi-agent cooperative area coverage problem and establishes, for the first time, its theoretical equivalence to a Markov potential game (MPG). This equivalence enables reformulating distributed Nash equilibrium computation as a single-objective closed-loop optimal control problem. Leveraging this insight, we propose a parameterized closed-loop Nash equilibrium learning framework: policy optimization is guided by the MPG’s potential function, ensuring both global optimality and fully decentralized execution. The method integrates MPG analysis, parameterized closed-loop policy representation, and multi-agent reinforcement learning. Experiments demonstrate a tenfold acceleration in training speed, faster policy convergence, and substantial improvements over conventional game-theoretic baselines. The core contribution lies in establishing a rigorous theoretical connection between cooperative coverage and MPG, and—based on this—designing the first closed-loop equilibrium learning paradigm that simultaneously provides theoretical guarantees and high scalability.
📝 Abstract
Multi-agent reinforcement learning is a challenging and active field of research due to the inherent nonstationary property and coupling between agents. A popular approach to modeling the multi-agent interactions underlying the multi-agent RL problem is the Markov Game. There is a special type of Markov Game, termed Markov Potential Game, which allows us to reduce the Markov Game to a single-objective optimal control problem where the objective function is a potential function. In this work, we prove that a multi-agent collaborative field coverage problem, which is found in many engineering applications, can be formulated as a Markov Potential Game, and we can learn a parameterized closed-loop Nash Equilibrium by solving an equivalent single-objective optimal control problem. As a result, our algorithm is 10x faster during training compared to a game-theoretic baseline and converges faster during policy execution.