🤖 AI Summary
Conventional multi-agent navigation and task execution approaches rely on synchronous, high-frequency communication, which renders them impractical in unknown, communication-constrained environments.
Method: We propose an asynchronous, low-frequency, on-demand communication framework that integrates dynamic graph-structured modeling with Graph Transformers: agents dynamically establish edges only upon actual interaction, enabling efficient collaboration under sparse communication. The method unifies graph neural networks, asynchronous multi-agent reinforcement learning, and dynamic topology modeling.
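The edge-on-interaction idea can be sketched in a few lines: agents are nodes, and an edge exists only while a recent message links two agents. This is a minimal illustration only; the class, method names, and the staleness window are assumptions for exposition, not the paper's actual implementation.

```python
class DynamicCommGraph:
    """Illustrative sketch: edges form only upon actual communication
    and expire once no message has been exchanged for `staleness` steps.
    All names here are hypothetical, not from the paper."""

    def __init__(self, num_agents, staleness=5):
        self.num_agents = num_agents
        self.staleness = staleness      # steps before an edge expires (assumed)
        self.last_contact = {}          # (sender, receiver) -> timestep of last message

    def record_message(self, sender, receiver, t):
        # An edge is established only when agents actually interact.
        self.last_contact[(sender, receiver)] = t

    def edges(self, t):
        # Keep only edges whose most recent message falls within the window,
        # yielding the sparse, time-varying graph a graph transformer consumes.
        return [(i, j) for (i, j), t0 in self.last_contact.items()
                if t - t0 <= self.staleness]

graph = DynamicCommGraph(num_agents=3)
graph.record_message(0, 1, t=0)
graph.record_message(1, 2, t=4)
print(graph.edges(t=6))  # edge (0, 1) has expired; only (1, 2) remains
```

Under this sparse topology, message-passing cost scales with the number of live edges rather than with all agent pairs, which is what enables collaboration under infrequent communication.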
Results: Experiments demonstrate that our approach achieves task success and collision rates comparable to state-of-the-art baselines while passing 26% fewer communication messages, significantly improving communication efficiency and environmental adaptability without compromising performance.
📝 Abstract
We consider the problem setting in which multiple autonomous agents must cooperatively navigate and perform tasks in an unknown, communication-constrained environment. Traditional multi-agent reinforcement learning (MARL) approaches assume synchronous communication and perform poorly in such environments. We propose AsynCoMARL, an asynchronous MARL approach that uses graph transformers to learn communication protocols from dynamic graphs. AsynCoMARL can accommodate infrequent and asynchronous communication between agents, with edges of the graph forming only when agents communicate with each other. We show that AsynCoMARL achieves success and collision rates similar to those of leading baselines, despite 26% fewer messages being passed between agents.