Asynchronous Cooperative Multi-Agent Reinforcement Learning with Limited Communication

📅 2025-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional multi-agent navigation and task-execution approaches rely on synchronous, high-frequency communication, which makes them impractical in communication-constrained, unknown environments. Method: We propose an asynchronous, low-frequency, on-demand communication framework that combines dynamic graph-structured modeling with Graph Transformers: agents establish graph edges only when they actually interact, enabling efficient collaboration under sparse communication. The method unifies graph neural networks, asynchronous multi-agent reinforcement learning, and dynamic topology modeling. Results: Experiments show that our approach matches the task success and collision rates of state-of-the-art baselines while passing 26% fewer messages, significantly improving communication efficiency and environmental adaptability without compromising performance.

📝 Abstract
We consider the problem setting in which multiple autonomous agents must cooperatively navigate and perform tasks in an unknown, communication-constrained environment. Traditional multi-agent reinforcement learning (MARL) approaches assume synchronous communications and perform poorly in such environments. We propose AsynCoMARL, an asynchronous MARL approach that uses graph transformers to learn communication protocols from dynamic graphs. AsynCoMARL can accommodate infrequent and asynchronous communications between agents, with edges of the graph only forming when agents communicate with each other. We show that AsynCoMARL achieves similar success and collision rates as leading baselines, despite 26% fewer messages being passed between agents.
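The core idea described above is a communication graph whose edges exist only while agents actually exchange messages, rather than a fixed, always-connected topology. The paper's implementation is not shown here; the following is a minimal illustrative sketch (class and parameter names are hypothetical, including the `edge_lifetime` decay, which is one plausible way to let stale edges expire between infrequent messages):

```python
class DynamicCommGraph:
    """Sketch of a dynamic communication graph: an edge (i, j) exists
    only for a few steps after agents i and j exchange a message.
    Names and the TTL mechanism are illustrative assumptions, not the
    paper's actual implementation."""

    def __init__(self, n_agents, edge_lifetime=3):
        self.n = n_agents
        self.lifetime = edge_lifetime
        # Map undirected edge (i, j), i < j, to its remaining lifetime.
        self.ttl = {}

    def record_message(self, i, j):
        """Agent i communicates with agent j: (re)activate the edge."""
        self.ttl[(min(i, j), max(i, j))] = self.lifetime

    def step(self):
        """Advance one timestep; edges decay when no new message arrives."""
        self.ttl = {e: t - 1 for e, t in self.ttl.items() if t > 1}

    def connected(self, i, j):
        """True while the edge is active, i.e. eligible for attention
        in a graph-transformer message-passing layer."""
        return (min(i, j), max(i, j)) in self.ttl


g = DynamicCommGraph(n_agents=4)
g.record_message(0, 1)   # only agents 0 and 1 have communicated
g.step()
print(g.connected(0, 1))  # edge still alive
print(g.connected(2, 3))  # never communicated, no edge
```

A graph transformer would then restrict its attention to pairs for which `connected(i, j)` holds, so sparse, asynchronous communication directly yields a sparse attention pattern.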
Problem

Research questions and friction points this paper is trying to address.

Multi-agent learning
Limited communication
Collaborative decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

AsynCoMARL
Dynamic Graph Model
Limited and Asynchronous Communication
Sydney Dolan
Department of Aeronautics and Astronautics, Massachusetts Institute of Technology
Siddharth Nayak
Waymo
reinforcement learning, robotics, autonomous control
J. J. Aloor
Department of Aeronautics and Astronautics, Massachusetts Institute of Technology
Hamsa Balakrishnan
Massachusetts Institute of Technology
Controls, Optimization, Transportation, Air Traffic Management