Graph Based Deep Reinforcement Learning Aided by Transformers for Multi-Agent Cooperation

📅 2025-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the distributed target servicing problem for multi-UAV systems in post-disaster response scenarios, characterized by partial observability, communication constraints, and environmental uncertainty. We propose a collaborative planning framework integrating Graph Neural Networks (GNNs), Transformer-enhanced message passing, and Double Deep Q-Networks (Double DQN). The framework supports adaptive dynamic graph construction, edge-feature-aware attention mechanisms, and prioritized experience replay to improve learning stability. Compared against baselines including Particle Swarm Optimization (PSO), greedy heuristics, and a standard DQN, our approach achieves a 90% target servicing rate and 100% area coverage in simulation, while reducing the average steps per episode from roughly 600 to 200 (a 67% reduction). The method demonstrates superior scalability, robustness to environmental perturbations and communication dropouts, and enhanced task efficiency under realistic operational constraints.

📝 Abstract
Mission planning for a fleet of cooperative autonomous drones in applications that involve serving distributed target points, such as disaster response, environmental monitoring, and surveillance, is challenging, especially under partial observability, limited communication range, and uncertain environments. Traditional path-planning algorithms struggle in these scenarios, particularly when prior information is not available. To address these challenges, we propose a novel framework that integrates Graph Neural Networks (GNNs), Deep Reinforcement Learning (DRL), and transformer-based mechanisms for enhanced multi-agent coordination and collective task execution. Our approach leverages GNNs to model agent-agent and agent-goal interactions through adaptive graph construction, enabling efficient information aggregation and decision-making under constrained communication. A transformer-based message-passing mechanism, augmented with edge-feature-enhanced attention, captures complex interaction patterns, while a Double Deep Q-Network (Double DQN) with prioritized experience replay optimizes agent policies in partially observable environments. This integration is carefully designed to address specific requirements of multi-agent navigation, such as scalability, adaptability, and efficient task execution. Experimental results demonstrate superior performance, with 90% service provisioning and 100% grid coverage (node discovery), while reducing the average steps per episode to 200, compared to 600 for benchmark methods such as particle swarm optimization (PSO), greedy algorithms, and DQN.
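The two learning components named in the abstract, the Double DQN target and prioritized experience replay, can be sketched compactly. The following is a minimal, dependency-free illustration of the standard formulations (target `y = r + γ · Q_target(s', argmax_a Q_online(s', a))`, priorities `(|TD error| + ε)^α` with importance weights `(N · P(i))^(-β)`); all names and hyperparameter values here are illustrative assumptions, not the paper's implementation.

```python
import random

def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN: the online net selects the next action, the target net evaluates it."""
    if done:
        return reward
    best_action = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[best_action]

class PrioritizedReplay:
    """Proportional prioritized replay: sample transitions by |TD error|."""
    def __init__(self, alpha=0.6, eps=1e-3):
        self.alpha, self.eps = alpha, eps
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error):
        self.buffer.append(transition)
        # small eps keeps zero-error transitions sampleable
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, k, beta=0.4):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.buffer)), weights=probs, k=k)
        n = len(self.buffer)
        # importance-sampling weights correct the non-uniform sampling bias
        weights = [(n * probs[i]) ** (-beta) for i in idxs]
        max_w = max(weights)
        return [self.buffer[i] for i in idxs], [w / max_w for w in weights]
```

Decoupling action selection (online network) from action evaluation (target network) is what mitigates the overestimation bias of vanilla DQN, which is presumably why the authors pair it with prioritized replay for stability.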
Problem

Research questions and friction points this paper is trying to address.

Mission planning for cooperative drones in uncertain environments
Overcoming partial observability and limited communication in multi-agent systems
Enhancing scalability and adaptability in autonomous fleet coordination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Neural Networks model agent interactions
Transformer-based message-passing captures complex patterns
Double DQN optimizes policies in partial observability
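The edge-feature-enhanced attention mentioned above can be illustrated with a GAT-style scheme in which the attention logit for each incoming edge is computed from the concatenated source features, destination features, and edge features, then softmax-normalized over the destination's neighborhood. This is a minimal single-head sketch under that assumption; the learned projections, the scoring function, and the variable names are illustrative, not taken from the paper.

```python
import math

def edge_attention_messages(h, edges, edge_feats, w):
    """One round of edge-feature-aware attention aggregation.

    h: {node: feature list}; edges: list of (src, dst) pairs;
    edge_feats: {(src, dst): feature list}; w: weight vector scoring
    the concatenation [h_src, h_dst, edge_feat] (stand-in for a learned layer).
    """
    dim = len(next(iter(h.values())))
    msgs = {node: [0.0] * dim for node in h}
    for dst in h:
        nbrs = [src for src, d in edges if d == dst]
        if not nbrs:
            continue
        # unnormalized attention logits from node + edge features
        logits = []
        for src in nbrs:
            concat = h[src] + h[dst] + edge_feats[(src, dst)]
            logits.append(sum(wi * xi for wi, xi in zip(w, concat)))
        # numerically stable softmax over the neighborhood
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        alphas = [e / z for e in exps]
        # attention-weighted sum of neighbor features
        for a, src in zip(alphas, nbrs):
            for k in range(dim):
                msgs[dst][k] += a * h[src][k]
    return msgs
```

Because the logit depends on the edge features, attributes such as inter-agent distance or link quality can directly modulate how much each neighbor's message counts, which is the intuition behind conditioning attention on edges under constrained communication.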