AI Summary
This paper addresses the constrained multi-agent reinforcement learning (CMARL) problem with individual safety constraints, where each agent's policy depends on its own local states and parameters as well as those of its neighbors. To tackle the challenges of decentralized cooperative optimization and safety-critical control, the authors propose a distributed primal-dual algorithm: it employs κₚ-hop coupled policies and an independent, time-varying communication network to avoid direct sharing of policy parameters and Lagrange multipliers, and it integrates local variable estimation, truncated neighborhood rewards, incomplete information exchange, and multi-step neighbor aggregation. Theoretically, the algorithm converges with high probability to an ε-first-order stationary point, with approximation error bounded by 𝒪(γ^((κ+1)/κₚ)). Experiments on GridWorld demonstrate its effectiveness in achieving safe, scalable, and cooperative optimization under safety constraints.
Abstract
In this work, we investigate constrained multi-agent reinforcement learning (CMARL), where agents collaboratively maximize the sum of their local objectives while satisfying individual safety constraints. We propose a framework in which agents adopt coupled policies that depend on both their own local states and parameters and those of their $\kappa_p$-hop neighbors, with $\kappa_p>0$ denoting the coupling distance. A distributed primal-dual algorithm is further developed under this framework, wherein each agent has access only to state-action pairs within its $2\kappa_p$-hop neighborhood and to reward information within its $(\kappa + 2\kappa_p)$-hop neighborhood, with $\kappa>0$ representing the truncation distance. Moreover, agents are not permitted to directly share their true policy parameters or Lagrange multipliers. Instead, each agent constructs and maintains local estimates of these variables for the other agents and employs these estimates to execute its policy. These estimates are updated and exchanged exclusively through an independent, time-varying network, which enhances the overall system security. We establish that, with high probability, our algorithm achieves $\epsilon$-first-order stationary convergence with an approximation error of $\mathcal{O}(\gamma^{\frac{\kappa+1}{\kappa_p}})$ for the discount factor $\gamma\in(0,1)$. Finally, simulations in a GridWorld environment are conducted to demonstrate the effectiveness of the proposed algorithm.
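The primal-dual structure described above can be illustrated with a minimal sketch: each agent ascends on a local Lagrangian in its policy parameters, descends (with projection onto the nonnegative orthant) in its Lagrange multiplier, and mixes its local estimates of other agents' variables via a row-stochastic weight matrix over the communication graph. This is not the paper's algorithm; the function names, step sizes, and consensus weights here are all illustrative assumptions.

```python
import numpy as np

def primal_dual_step(theta, lam, grad_reward, grad_cost, cost_value,
                     cost_limit, lr_theta=0.1, lr_lambda=0.05):
    """One local primal-dual update for a single agent (illustrative).

    Gradient ascent on the Lagrangian L = J - lam * (C - limit) in theta,
    projected gradient ascent on the constraint violation in lam >= 0.
    """
    theta = theta + lr_theta * (grad_reward - lam * grad_cost)
    lam = max(0.0, lam + lr_lambda * (cost_value - cost_limit))
    return theta, lam

def consensus_mix(estimates, weights):
    """Mix the agents' local estimates with a row-stochastic weight matrix,
    in the spirit of gossip averaging over a time-varying network.

    estimates: (n_agents, dim) array, row i = agent i's local estimate.
    weights:   (n_agents, n_agents) row-stochastic mixing matrix.
    """
    return weights @ estimates

# Illustrative usage: one agent's update plus one consensus round.
theta, lam = primal_dual_step(np.zeros(1), 0.0,
                              grad_reward=np.ones(1),
                              grad_cost=np.zeros(1),
                              cost_value=1.0, cost_limit=0.5)
estimates = np.array([[1.0], [3.0]])
weights = np.array([[0.5, 0.5], [0.5, 0.5]])
mixed = consensus_mix(estimates, weights)
```

Because the multiplier update is projected to stay nonnegative, an inactive constraint (cost below the limit) drives the multiplier back toward zero, while a violated constraint increases the penalty on the cost gradient in the primal step.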