Distributed primal-dual algorithm for constrained multi-agent reinforcement learning under coupled policies

📅 2025-11-19
📈 Citations: 0
✹ Influential: 0
📄 PDF
đŸ€– AI Summary
This paper addresses constrained multi-agent reinforcement learning (CMARL) with individual safety constraints, where each agent's policy depends on the states and parameters of both itself and its neighbors. To tackle the joint challenges of decentralized cooperative optimization and safety-critical control, the authors propose a distributed primal-dual algorithm: it employs Îșₚ-hop coupled policies and an independent, time-varying communication network to avoid direct sharing of policy parameters and Lagrange multipliers, and it integrates local variable estimation, truncated neighborhood rewards, incomplete information exchange, and multi-step neighbor aggregation. Theoretically, the algorithm converges with high probability to an Δ-first-order stationary point, with approximation error bounded by đ’Ș(Îł^((Îș+1)/Îșₚ)). Experiments on GridWorld demonstrate its effectiveness in achieving safe, scalable, and cooperative optimization under safety constraints.

📝 Abstract
In this work, we investigate constrained multi-agent reinforcement learning (CMARL), where agents collaboratively maximize the sum of their local objectives while satisfying individual safety constraints. We propose a framework where agents adopt coupled policies that depend on both local states and parameters, as well as those of their $\kappa_p$-hop neighbors, with $\kappa_p>0$ denoting the coupling distance. A distributed primal-dual algorithm is further developed under this framework, wherein each agent has access only to state-action pairs within its $2\kappa_p$-hop neighborhood and to reward information within its $\kappa + 2\kappa_p$-hop neighborhood, with $\kappa>0$ representing the truncation distance. Moreover, agents are not permitted to directly share their true policy parameters or Lagrange multipliers. Instead, each agent constructs and maintains local estimates of these variables for other agents and employs such estimates to execute its policy. Additionally, these estimates are updated and exchanged exclusively through an independent, time-varying network, which enhances the overall system security. We establish that, with high probability, our algorithm can achieve $\epsilon$-first-order stationary convergence with an approximation error of $\mathcal{O}(\gamma^{\frac{\kappa+1}{\kappa_{p}}})$ for discount factor $\gamma\in(0,1)$. Finally, simulations in the GridWorld environment are conducted to demonstrate the effectiveness of the proposed algorithm.
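The primal-dual scheme in the abstract rests on a standard Lagrangian mechanism: ascend the objective in the policy parameters while a multiplier penalizes constraint violation. Below is a minimal, self-contained sketch of that mechanism on a scalar toy problem; it is not the paper's distributed, Îșₚ-hop coupled algorithm, and all function names are illustrative.

```python
# Hedged sketch of a generic primal-dual update for a constrained problem:
# maximize f(theta) subject to g(theta) <= 0, via the Lagrangian
#   L(theta, lam) = f(theta) - lam * g(theta),
# with gradient ascent in theta and dual ascent in lam.
# This illustrates the mechanism only, not the paper's distributed algorithm.

def primal_dual(f_grad, g, g_grad, theta0, lr_theta=0.05, lr_lam=0.05, steps=2000):
    theta, lam = theta0, 0.0
    for _ in range(steps):
        # primal step: ascend the Lagrangian in theta
        theta = theta + lr_theta * (f_grad(theta) - lam * g_grad(theta))
        # dual step: raise lam while the constraint g(theta) <= 0 is violated,
        # keeping the multiplier nonnegative
        lam = max(0.0, lam + lr_lam * g(theta))
    return theta, lam

# Toy problem: maximize f(x) = -(x - 2)^2 subject to x <= 1, i.e. g(x) = x - 1.
theta, lam = primal_dual(
    f_grad=lambda x: -2.0 * (x - 2.0),
    g=lambda x: x - 1.0,
    g_grad=lambda x: 1.0,
    theta0=0.0,
)
print(round(theta, 2), round(lam, 2))  # theta approaches the constrained optimum x = 1
```

The paper's algorithm distributes both updates: each agent runs local versions of these two steps using only estimated (not directly shared) parameters and multipliers of its neighbors.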
Problem

Research questions and friction points this paper is trying to address.

Maximizing local objectives while satisfying individual safety constraints in multi-agent systems
Developing distributed algorithms with limited neighbor information access under coupled policies
Achieving secure coordination without direct sharing of policy parameters or multipliers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed primal-dual algorithm for constrained multi-agent reinforcement learning
Agents use coupled policies with local and neighbor information
Local estimates of parameters exchanged via time-varying networks
Pengcheng Dai
Engineering Systems and Design Pillar, Singapore University of Technology and Design, Singapore 487372
He Wang
School of Mathematics, Southeast University, Nanjing 210096, China
Dongming Wang
Department of Electrical and Computer Engineering, University of California, Riverside, CA 92521, USA
Wenwu Yu
Endowed Chair Professor, Southeast University, Nanjing, China
complex networks · multi-agent systems · networked collective intelligence · machine learning · UAVs