Efficient Submap-based Autonomous MAV Exploration using Visual-Inertial SLAM Configurable for LiDARs or Depth Cameras

📅 2024-09-25
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address challenges in long-duration autonomous exploration of unknown environments by micro air vehicles (MAVs), namely degradation of global consistency, insufficient safety guarantees, and poor adaptability to heterogeneous sensors, this paper proposes an efficient submap-based large-scale exploration framework. The method integrates visual-inertial SLAM, loop-closure detection, pose-graph optimization, and sampling-based next-best-view planning, and makes three key contributions: (1) a local submap model that fuses submap-level frontiers to generate globally consistent exploration goals; (2) an abstract multi-sensor frontend interface enabling seamless switching between LiDARs and depth cameras; and (3) tight coupling of perception, state estimation, and planning. Simulation results demonstrate a 23% improvement in exploration efficiency and an 18% gain in reconstruction accuracy over state-of-the-art methods. Real-world experiments across complex indoor and outdoor environments validate full autonomy and high trajectory consistency on two distinct MAV platforms.
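The abstract multi-sensor frontend interface described above could be sketched as follows. This is a minimal illustration, not the paper's actual API: the class and method names are hypothetical, and it assumes both sensors can be reduced to a body-frame point cloud so the submap integrator stays sensor-agnostic.

```python
from abc import ABC, abstractmethod

import numpy as np


class DepthFrontend(ABC):
    """Hypothetical sensor-agnostic frontend: every raw measurement is
    converted to an (N, 3) point cloud, so downstream submap integration
    never needs to know which sensor produced the data."""

    @abstractmethod
    def to_pointcloud(self, raw_measurement) -> np.ndarray:
        """Return an (N, 3) array of 3D points in the sensor frame."""


class DepthCameraFrontend(DepthFrontend):
    def __init__(self, fx, fy, cx, cy):
        # Pinhole intrinsics of the depth camera.
        self.fx, self.fy, self.cx, self.cy = fx, fy, cx, cy

    def to_pointcloud(self, depth_image) -> np.ndarray:
        # Back-project every valid depth pixel through the pinhole model.
        h, w = depth_image.shape
        v, u = np.mgrid[0:h, 0:w]
        z = depth_image.ravel()
        valid = z > 0
        x = (u.ravel() - self.cx) / self.fx * z
        y = (v.ravel() - self.cy) / self.fy * z
        return np.stack([x, y, z], axis=1)[valid]


class LidarFrontend(DepthFrontend):
    def to_pointcloud(self, scan_points) -> np.ndarray:
        # LiDAR drivers already deliver Cartesian points; just validate.
        pts = np.asarray(scan_points, dtype=float)
        assert pts.ndim == 2 and pts.shape[1] == 3
        return pts
```

With this split, switching a MAV platform from a depth camera to a LiDAR only swaps the frontend object; the mapping and planning stack is untouched.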

📝 Abstract
Autonomous exploration of unknown space is an essential component for the deployment of mobile robots in the real world. Safe navigation is crucial for all robotics applications and requires accurate and consistent maps of the robot's surroundings. To achieve full autonomy and allow deployment in a wide variety of environments, the robot must rely on on-board state estimation which is prone to drift over time. We propose a Micro Aerial Vehicle (MAV) exploration framework based on local submaps to allow retaining global consistency by applying loop-closure corrections to the relative submap poses. To enable large-scale exploration we efficiently compute global, environment-wide frontiers from the local submap frontiers and use a sampling-based next-best-view exploration planner. Our method seamlessly supports using either a LiDAR sensor or a depth camera, making it suitable for different kinds of MAV platforms. We perform comparative evaluations in simulation against a state-of-the-art submap-based exploration framework to showcase the efficiency and reconstruction quality of our approach. Finally, we demonstrate the applicability of our method to real-world MAVs, one equipped with a LiDAR and the other with a depth camera. Video available at https://youtu.be/Uf5fwmYcuq4 .
Problem

Research questions and friction points this paper is trying to address.

Autonomous exploration of unknown spaces using MAVs.
Accurate mapping and safe navigation for mobile robots.
Support for LiDAR or depth camera in MAV platforms.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses local submaps with loop-closure corrections to retain global consistency.
Supports either a LiDAR or a depth camera through a common frontend.
Implements a sampling-based next-best-view exploration planner.
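The first point above, computing global frontiers from per-submap frontiers under loop-closure-corrected poses, could be sketched roughly as follows. The data layout (dicts of poses, frontier points, and observed voxel keys) is an assumption for illustration, not the paper's actual implementation.

```python
import numpy as np


def global_frontiers(submaps, voxel_size=0.1):
    """Fuse submap-level frontiers into a global frontier set.

    Each submap is assumed to be a dict with:
      'pose'      : 4x4 world-from-submap transform (loop-closure corrected)
      'frontiers' : (N, 3) frontier voxel centers in the submap frame
      'observed'  : set of global voxel keys the submap has observed
                    as free or occupied

    A local frontier survives globally only if no *other* submap has
    already observed that voxel.
    """
    result = []
    for i, sm in enumerate(submaps):
        if len(sm['frontiers']) == 0:
            continue
        # Transform local frontier centers into the world frame using
        # the (possibly loop-closure-updated) submap pose.
        pts = np.asarray(sm['frontiers'], dtype=float)
        world = (sm['pose'][:3, :3] @ pts.T).T + sm['pose'][:3, 3]
        for p in world:
            key = tuple(np.floor(p / voxel_size).astype(int))
            if not any(key in other['observed']
                       for j, other in enumerate(submaps) if j != i):
                result.append(p)
    return np.asarray(result)
```

Because only the relative submap poses change after a loop closure, the cached per-submap frontiers need no recomputation; the fusion step above is re-run with the corrected poses, which is what keeps the approach efficient at scale.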