MARVEL: Multi-Agent Reinforcement Learning for constrained field-of-View multi-robot Exploration in Large-scale environments

📅 2025-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses collaborative exploration by multi-robot systems equipped with narrow-field-of-view directional sensors (e.g., airborne cameras) in large-scale, unknown environments. Method: We propose an end-to-end approach that jointly optimizes robot poses and sensor orientations within a multi-agent reinforcement learning framework. Contribution/Results: Key innovations include: (1) a novel graph attention network (GAT) architecture integrating regional semantics and orientation-aware features; (2) an information-entropy-driven viewpoint action space pruning strategy that significantly reduces search complexity; and (3) zero-shot generalization to dynamic team sizes and heterogeneous sensor configurations. Evaluated in large-scale environments of up to 90 m × 90 m, our method outperforms state-of-the-art approaches in mapping efficiency and coverage. It has been successfully deployed on multiple physical UAVs, enabling online collaborative mapping and real-time decision-making.
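To make the entropy-driven pruning idea concrete, here is a minimal sketch (not the paper's implementation) of scoring candidate viewpoints by the Shannon entropy of the map cells they would observe and keeping only the top-k. The function names, the Bernoulli occupancy-grid representation, and the per-viewpoint visibility masks are illustrative assumptions.

```python
import numpy as np

def cell_entropy(p):
    """Shannon entropy (bits) of Bernoulli occupancy probabilities."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def prune_viewpoints(occ_probs, visibility, k):
    """Keep the k candidate viewpoints whose visible cells carry the
    most total entropy, i.e., the largest expected information gain.

    occ_probs:  (H, W) occupancy probabilities, 0.5 = fully unknown
    visibility: (N, H, W) boolean masks, one per candidate viewpoint
    """
    ent = cell_entropy(occ_probs)                  # (H, W) per-cell entropy
    gains = (visibility * ent).sum(axis=(1, 2))    # (N,) total entropy seen
    keep = np.argsort(gains)[::-1][:k]             # indices of top-k viewpoints
    return keep, gains

# Toy map: mostly known free space (p ≈ 0), one unknown block (p = 0.5).
occ = np.full((8, 8), 0.05)
occ[2:6, 2:6] = 0.5
masks = np.zeros((3, 8, 8), dtype=bool)
masks[0, 0:2, :] = True        # viewpoint 0 sees only known area
masks[1, 2:6, 2:6] = True      # viewpoint 1 sees the unknown block
masks[2, 6:8, :] = True        # viewpoint 2 sees only known area
keep, gains = prune_viewpoints(occ, masks, k=1)
print(keep)  # → [1]: the viewpoint covering unknown cells survives pruning
```

Pruning the action space this way lets the policy rank only a handful of informative viewpoints per step instead of every reachable pose-orientation pair.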

📝 Abstract
In multi-robot exploration, a team of mobile robots is tasked with efficiently mapping an unknown environment. While most exploration planners assume omnidirectional sensors like LiDAR, this is impractical for small robots such as drones, where lightweight, directional sensors like cameras may be the only option due to payload constraints. These sensors have a constrained field-of-view (FoV), which adds complexity to the exploration problem, requiring not only optimal robot positioning but also sensor orientation during movement. In this work, we propose MARVEL, a neural framework that leverages graph attention networks, together with a novel frontier and orientation feature fusion technique, to develop a collaborative, decentralized policy using multi-agent reinforcement learning (MARL) for robots with constrained FoV. To handle the large action space of viewpoint planning, we further introduce a novel information-driven action pruning strategy. MARVEL improves multi-robot coordination and decision-making in challenging large-scale indoor environments, while adapting to various team sizes and sensor configurations (i.e., FoV and sensor range) without additional training. Our extensive evaluation shows that MARVEL's learned policies exhibit effective coordinated behaviors, outperforming state-of-the-art exploration planners across multiple metrics. We experimentally demonstrate MARVEL's generalizability in large-scale environments of up to 90 m by 90 m, and validate its practical applicability through successful deployment on a team of real drones.
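The graph-attention backbone can be illustrated with a minimal single-head layer in the style of standard GAT: project node features, score connected node pairs, and aggregate neighbors by attention weight. This is a generic sketch over an arbitrary node graph, not MARVEL's actual architecture or its frontier/orientation fusion; all shapes and the toy graph are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(X, A, W, a):
    """One attention head over a node graph.

    X: (N, F) node features    A: (N, N) adjacency (1 = edge, incl. self-loops)
    W: (F, F') projection      a: (2*F',) attention vector
    """
    H = X @ W                                  # project features: (N, F')
    N = H.shape[0]
    logits = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            # e_ij = LeakyReLU(a . [h_i || h_j]), slope 0.2 as in standard GAT
            z = a @ np.concatenate([H[i], H[j]])
            logits[i, j] = z if z > 0 else 0.2 * z
    logits[A == 0] = -1e9                      # mask out non-edges
    alpha = softmax(logits, axis=1)            # attention weights per node
    return alpha @ H                           # aggregated features: (N, F')

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))                    # 4 nodes, 3 features each
A = np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)  # path graph
W = rng.normal(size=(3, 2))
a = rng.normal(size=(4,))
out = gat_layer(X, A, W, a)
print(out.shape)  # → (4, 2)
```

In an exploration setting, nodes would represent candidate viewpoints, frontiers, and robots, so each agent's embedding attends over nearby map structure and teammates before the policy head selects an action.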
Problem

Research questions and friction points this paper is trying to address.

Multi-robot exploration with constrained field-of-view
Decentralized policy using multi-agent reinforcement learning
Handling large action space in viewpoint planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Attention Networks
Multi-Agent Reinforcement Learning
Information-Driven Action Pruning