🤖 AI Summary
This paper addresses networked multi-agent reinforcement learning (NMARL) with interdependent rewards and coupled policies: each agent's reward depends on its own and its direct neighbors' state-action pairs, while its policy is parameterized jointly by its own parameters and those of its κₚ-hop neighbors. To handle this coupling, the authors propose the distributed scalable coupled policy (DSCP) algorithm, a decentralized method built on a neighbors' averaged Q-function and a new coupled policy gradient expression, together with a geometric sampling scheme that yields unbiased gradient estimates without storing a Q-table. DSCP coordinates policy updates across the network via a push-sum protocol, with each agent exchanging information only with its direct neighbors. The joint policy is proven to converge to a first-order stationary point of the objective function. Simulations on robot path planning show clear improvement over state-of-the-art NMARL methods.
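The Q-table-free sampling mentioned above rests on a standard identity: if a horizon $T$ is drawn with $P(T=t)=(1-\gamma)\gamma^{t}$, then the reward observed at step $T$, rescaled by $1/(1-\gamma)$, has expectation $\sum_{t}\gamma^{t}\,\mathbb{E}[r_t]=Q(s,a)$. Below is a minimal single-horizon sketch of that idea, not the paper's 2-horizon variant; `step_fn` is a hypothetical environment-plus-policy step introduced only for illustration.

```python
import numpy as np

def geometric_horizon_q_estimate(step_fn, state, action, gamma, rng):
    """One unbiased estimate of Q(s, a) = E[sum_t gamma^t r_t] without a
    Q-table: draw T with P(T = t) = (1 - gamma) * gamma^t, roll out T + 1
    steps, and return the reward at step T divided by (1 - gamma)."""
    T = rng.geometric(1.0 - gamma) - 1  # shift: numpy's support is {1, 2, ...}
    r = 0.0
    for _ in range(T + 1):
        # step_fn advances the environment and the (coupled) policy one step
        state, action, r = step_fn(state, action)
    return r / (1.0 - gamma)  # unbiased for Q(state_0, action_0)

# Toy sanity check: with constant reward 1, every rollout returns
# exactly Q = 1 / (1 - gamma), regardless of the sampled horizon.
rng = np.random.default_rng(0)
def const_step(s, a):
    return s, a, 1.0
gamma = 0.9
est = geometric_horizon_q_estimate(const_step, 0, 0, gamma, rng)
```

The estimate has higher variance than a truncated Monte Carlo rollout, but it requires only a single reward query per sample, which is what makes the scheme scalable.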
📝 Abstract
This paper studies networked multi-agent reinforcement learning (NMARL) with interdependent rewards and coupled policies. In this setting, each agent's reward depends on its own state-action pair as well as those of its direct neighbors, and each agent's policy is parameterized by its local parameters together with those of its $\kappa_{p}$-hop neighbors, with $\kappa_{p} \geq 1$ denoting the coupling radius. The agents' objective is to collaboratively optimize their policies so as to maximize the discounted average cumulative reward. To address the challenge of interdependent policies in collaborative optimization, we introduce a novel concept termed the neighbors' averaged $Q$-function and derive a new expression for the coupled policy gradient. Based on these theoretical foundations, we develop a distributed scalable coupled policy (DSCP) algorithm, in which each agent relies only on the state-action pairs of its $\kappa_{p}$-hop neighbors and the rewards of its $(\kappa_{p}+1)$-hop neighbors. Specifically, the DSCP algorithm employs a geometric 2-horizon sampling method that obtains an unbiased estimate of the coupled policy gradient without storing a full $Q$-table. Moreover, each agent interacts exclusively with its direct neighbors to obtain accurate policy parameters, while maintaining local estimates of other agents' parameters to execute its local policy and collect samples for optimization. These estimates and policy parameters are updated via a push-sum protocol, enabling distributed coordination of policy updates across the network. We prove that the joint policy produced by the proposed algorithm converges to a first-order stationary point of the objective function. Finally, the effectiveness of the DSCP algorithm is demonstrated through simulations in a robot path planning environment, showing clear improvement over state-of-the-art methods.
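The push-sum protocol invoked in the abstract can be understood as ratio consensus: each agent maintains a value and a weight, pushes equal shares of both to its out-neighbors (including itself), and the value-to-weight ratio converges to the network-wide average on any strongly connected directed graph. A minimal synchronous sketch of this generic mechanism follows; the graph, variable names, and iteration count are illustrative assumptions, not details from the paper.

```python
import numpy as np

def push_sum(values, neighbors, num_iters=200):
    """Synchronous push-sum (ratio consensus).

    values    : initial scalar held by each node (e.g., a policy parameter)
    neighbors : dict mapping node i -> list of out-neighbors of i
    Each round, node i splits its value x_i and weight w_i equally among
    its out-neighbors plus itself; x_i / w_i converges to the average of
    the initial values when the graph is strongly connected.
    """
    n = len(values)
    x = np.array(values, dtype=float)
    w = np.ones(n)
    for _ in range(num_iters):
        new_x = np.zeros(n)
        new_w = np.zeros(n)
        for i in range(n):
            outs = list(neighbors[i]) + [i]  # out-neighbors plus a self-loop
            for j in outs:
                new_x[j] += x[i] / len(outs)
                new_w[j] += w[i] / len(outs)
        x, w = new_x, new_w
    return x / w  # each entry approaches mean(values)
```

In DSCP this kind of update is what lets each agent refine its local estimates of other agents' policy parameters using only direct-neighbor communication; push-sum (unlike plain averaging consensus) also tolerates directed, non-doubly-stochastic communication graphs.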