🤖 AI Summary
To address the coordination challenges in decentralized multi-agent reinforcement learning (MARL) arising from partial observability and non-stationarity, this paper proposes the Hierarchical Message Passing (HMP) framework—the first to tightly integrate feudal hierarchical RL with graph-structured message passing. Methodologically, HMP introduces a high-level advantage-based reward allocation mechanism and jointly models communication, planning, and policy optimization via a hierarchical graph neural network coupled with multi-level policy gradient algorithms. Evaluated on standard MARL benchmarks—including SMAC and MPE—HMP significantly enhances long-horizon planning capability and cooperative efficiency, achieving an average win rate 12.6% higher than state-of-the-art methods. These results demonstrate HMP’s effectiveness and advancement in tackling hierarchical coordination and non-stationary training dynamics.
📝 Abstract
Decentralized Multi-Agent Reinforcement Learning (MARL) methods allow for learning scalable multi-agent policies, but suffer from partial observability and induced non-stationarity. These challenges can be addressed by introducing mechanisms that facilitate coordination and high-level planning. Specifically, coordination and temporal abstraction can be achieved through communication (e.g., message passing) and Hierarchical Reinforcement Learning (HRL) approaches to decision-making. However, optimization issues limit the applicability of hierarchical policies to multi-agent systems. As such, the combination of these approaches has not been fully explored. To fill this void, we propose a novel and effective methodology for learning multi-agent hierarchies of message-passing policies. We adopt the feudal HRL framework and rely on a hierarchical graph structure for planning and coordination among agents. Agents at lower levels in the hierarchy receive goals from the upper levels and exchange messages with neighboring agents at the same level. To learn hierarchical multi-agent policies, we design a novel reward-assignment method based on training the lower-level policies to maximize the advantage function associated with the upper levels. Results on relevant benchmarks show that our method performs favorably compared to the state of the art.