Distributed scalable coupled policy algorithm for networked multi-agent reinforcement learning

📅 2025-12-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses networked multi-agent reinforcement learning (NMARL) with interdependent rewards and coupled policies: each agent's reward depends on its own and its direct neighbors' state-action pairs, while its policy is parameterized jointly by its own and its κₚ-hop neighbors' parameters. To this end, the authors propose the distributed scalable coupled policy (DSCP) algorithm, a scalable decentralized method that introduces a neighbors' averaged Q-function and a new coupled policy gradient, and employs a geometric two-horizon sampling scheme that obviates storing a full Q-table. DSCP uses a push-sum protocol for fully decentralized coordination, requiring each agent to communicate only with its direct neighbors. The authors prove that the joint policy converges to a first-order stationary point of the objective function. Empirical evaluation on robot path planning tasks shows clear improvement over state-of-the-art NMARL methods.

📝 Abstract
This paper studies networked multi-agent reinforcement learning (NMARL) with interdependent rewards and coupled policies. In this setting, each agent's reward depends on its own state-action pair as well as those of its direct neighbors, and each agent's policy is parameterized by its local parameters together with those of its $\kappa_{p}$-hop neighbors, with $\kappa_{p}\geq 1$ denoting the coupling radius. The objective of the agents is to collaboratively optimize their policies to maximize the discounted average cumulative reward. To address the challenge of interdependent policies in collaborative optimization, we introduce a novel concept termed the neighbors' averaged $Q$-function and derive a new expression for the coupled policy gradient. Based on these theoretical foundations, we develop a distributed scalable coupled policy (DSCP) algorithm, where each agent relies only on the state-action pairs of its $\kappa_{p}$-hop neighbors and the rewards of its $(\kappa_{p}+1)$-hop neighbors. Specifically, in the DSCP algorithm, we employ a geometric 2-horizon sampling method that does not require storing a full $Q$-table to obtain an unbiased estimate of the coupled policy gradient. Moreover, each agent interacts exclusively with its direct neighbors to obtain accurate policy parameters, while maintaining local estimates of other agents' parameters to execute its local policy and collect samples for optimization. These estimates and policy parameters are updated via a push-sum protocol, enabling distributed coordination of policy updates across the network. We prove that the joint policy produced by the proposed algorithm converges to a first-order stationary point of the objective function. Finally, the effectiveness of the DSCP algorithm is demonstrated through simulations in a robot path planning environment, showing clear improvement over state-of-the-art methods.
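The geometric sampling idea the abstract mentions can be illustrated in its generic single-agent form: drawing a horizon from a geometric distribution yields an unbiased estimate of a discounted value without enumerating a $Q$-table. This is a minimal sketch of that standard trick, not the paper's exact two-horizon scheme; the function names and the single-agent setting are illustrative assumptions.

```python
import random

def geometric_horizon(gamma, rng):
    # Sample T with P(T = t) = (1 - gamma) * gamma**t for t = 0, 1, 2, ...
    t = 0
    while rng.random() < gamma:
        t += 1
    return t

def unbiased_value_estimate(env_step, s0, policy, gamma, rng):
    # Roll out for a geometrically distributed number of steps; the reward
    # observed at the final step, scaled by 1/(1 - gamma), is an unbiased
    # estimate of the discounted value V(s0), since
    # V(s0) = (1/(1 - gamma)) * E_{T ~ Geom(1 - gamma)}[r_T].
    T = geometric_horizon(gamma, rng)
    s, r = s0, 0.0
    for _ in range(T + 1):
        a = policy(s, rng)
        s, r = env_step(s, a, rng)  # env_step returns (next_state, reward)
    return r / (1.0 - gamma)
```

Averaging many such single-rollout estimates recovers the discounted value in expectation, which is why no stored $Q$-table is needed.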
Problem

Research questions and friction points this paper is trying to address.

Develops a distributed algorithm for multi-agent reinforcement learning with interdependent rewards and policies.
Addresses collaborative policy optimization using local neighbor interactions and scalable gradient estimation.
Guarantees convergence of the joint policy to a first-order stationary point without full global information.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed scalable coupled policy algorithm for multi-agent learning
Neighbors' averaged Q-function and coupled policy gradient derivation
Geometric 2-horizon sampling without full Q-table storage
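The push-sum protocol underlying the decentralized coordination can be sketched in its generic synchronous form, where each node's ratio x/w converges to the network-wide average over a directed graph. This is a textbook illustration under assumed uniform out-weights, not the paper's exact parameter update:

```python
def push_sum_round(x, w, out_neighbors):
    # One synchronous push-sum round over a directed graph.
    # Each node i splits its value x[i] and weight w[i] evenly among its
    # out-neighbors and itself; iterating, x[i]/w[i] -> average of the
    # initial x across all nodes (on a strongly connected graph).
    n = len(x)
    new_x = [0.0] * n
    new_w = [0.0] * n
    for i in range(n):
        targets = out_neighbors[i] + [i]  # self-loop keeps mass at node i
        share = 1.0 / len(targets)
        for j in targets:
            new_x[j] += share * x[i]
            new_w[j] += share * w[i]
    return new_x, new_w
```

In the coupled-policy setting, each agent would run such rounds on its parameter estimates so that neighbors' policy parameters are tracked using only direct-neighbor communication.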
Pengcheng Dai
Engineering Systems and Design Pillar, Singapore University of Technology and Design, Singapore 487372
Dongming Wang
Department of Electrical and Computer Engineering, University of California, Riverside, CA 92521, USA
Wenwu Yu
Endowed Chair Professor, Southeast University, Nanjing, China
complex networks, multi-agent systems, networked collective intelligence, machine learning, UAVs
Wei Ren
Department of Electrical and Computer Engineering, University of California, Riverside, CA 92521, USA