Goal-Oriented Multi-Agent Reinforcement Learning for Decentralized Agent Teams

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods struggle to balance collaboration efficiency and communication overhead in dynamic, partially observable, communication-constrained, and decentralized multi-agent environments—such as heterogeneous land-air-water unmanned systems. This paper proposes a goal-oriented decentralized multi-agent reinforcement learning framework featuring a novel goal-aware sparse communication mechanism. Each agent autonomously decides whether to communicate and which task-relevant features to share, based solely on its local observations and current sub-goal. This design significantly reduces bandwidth requirements while preserving collaborative performance. Experiments on multi-agent navigation tasks demonstrate that our approach substantially improves task success rates and reduces average time-to-goal. Crucially, performance remains stable as the number of agents scales up, confirming the method’s effectiveness, robustness, and scalability.
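The paper does not include code on this page; the following is a minimal, illustrative sketch of the kind of goal-aware sparse communication gate the summary describes. It assumes a sigmoid send-gate plus a linear message head, with random weights standing in for learned policy parameters; all class and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class GoalAwareCommAgent:
    """Toy agent that decides, from its local observation and current
    sub-goal only, whether to transmit and what features to share."""

    def __init__(self, obs_dim, goal_dim, msg_dim, threshold=0.5):
        # Random projections stand in for learned policy weights.
        self.W_gate = rng.normal(size=obs_dim + goal_dim)
        self.W_msg = rng.normal(size=(msg_dim, obs_dim + goal_dim))
        self.threshold = threshold

    def step(self, obs, goal):
        x = np.concatenate([obs, goal])
        # Gate: scalar probability of communicating at all this step.
        p_send = 1.0 / (1.0 + np.exp(-self.W_gate @ x))
        if p_send < self.threshold:
            return None  # stay silent -> zero bandwidth used this step
        # Message: a low-dimensional, goal-conditioned feature summary
        # rather than the raw observation.
        return self.W_msg @ x

agent = GoalAwareCommAgent(obs_dim=8, goal_dim=2, msg_dim=3)
messages = [agent.step(rng.normal(size=8), rng.normal(size=2))
            for _ in range(100)]
sent = [m for m in messages if m is not None]
print(f"sent {len(sent)}/100 messages, each of dim 3")
```

Because each agent gates on purely local inputs, no centralized controller or all-to-all broadcast is needed, which is the property the summary credits for the reduced bandwidth.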

📝 Abstract
Connected and autonomous vehicles across land, water, and air must often operate in dynamic, unpredictable environments with limited communication, no centralized control, and partial observability. These real-world constraints pose significant challenges for coordination, particularly when vehicles pursue individual objectives. To address this, we propose a decentralized Multi-Agent Reinforcement Learning (MARL) framework that enables vehicles, acting as agents, to communicate selectively based on local goals and observations. This goal-aware communication strategy allows agents to share only relevant information, enhancing collaboration while respecting visibility limitations. We validate our approach in complex multi-agent navigation tasks featuring obstacles and dynamic agent populations. Results show that our method significantly improves task success rates and reduces time-to-goal compared to non-cooperative baselines. Moreover, task performance remains stable as the number of agents increases, demonstrating scalability. These findings highlight the potential of decentralized, goal-driven MARL to support effective coordination in realistic multi-vehicle systems operating across diverse domains.
Problem

Research questions and friction points this paper is trying to address.

Decentralized multi-vehicle coordination with limited communication
Partial observability and dynamic environments challenge agent cooperation
Individual objectives conflict with team performance in navigation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized multi-agent reinforcement learning framework
Goal-aware selective communication strategy
Scalable performance with dynamic agent populations
👥 Authors

Hung Du
Applied Artificial Intelligence Institute, Deakin University
Deep Reinforcement Learning · Multi-agent Systems · Context-aware Systems · Translational Research

Hy Nguyen
Applied Artificial Intelligence Initiative (A2I2), Deakin University, Geelong, VIC, Australia

Srikanth Thudumu
Institute of Applied Artificial Intelligence and Robotics (IAAIR), Germantown, TN, USA

Rajesh Vasa
Head of Translational Research, Applied Artificial Intelligence Institute, Deakin University
Artificial Intelligence · Software Evolution · Automated Software Engineering · Tools

Kon Mouzakis
Applied Artificial Intelligence Initiative (A2I2), Deakin University, Geelong, VIC, Australia