🤖 AI Summary
To address the challenge of balancing multi-view observation quality against traversal redundancy in autonomous semantic exploration and dense semantic mapping for ground robots operating in complex, unknown environments, this paper proposes a decoupled hierarchical planning framework. Our method introduces three key contributions: (1) a novel decoupled, priority-based local sampler that explicitly models multi-view semantic observation requirements; (2) a safe-aggressive dual-mode exploration state machine coupled with a voxel-level complete coverage strategy; and (3) a plug-and-play semantic mapping module enabling LiDAR-panoramic camera fusion perception and SLAM-integrated point-cloud-level semantic mapping. Evaluated in both simulated and real-world unstructured environments, the approach significantly improves exploration efficiency, shortens travel paths, and delivers high-accuracy dense semantic object reconstruction with comprehensive multi-view coverage.
📝 Abstract
This paper presents a system for autonomous semantic exploration and dense semantic target mapping of a complex unknown environment using a ground robot equipped with a LiDAR-panoramic camera suite. Existing approaches often struggle to balance collecting high-quality observations from multiple viewing angles against avoiding unnecessary repeated traversal. To fill this gap, we propose a complete system combining mapping and planning. We first redefine the task as completing both geometric coverage and semantic viewpoint observation. We then manage semantic and geometric viewpoints separately and propose a novel Priority-driven Decoupled Local Sampler to generate local viewpoint sets. This enables explicit multi-view semantic inspection and voxel coverage without unnecessary repetition. Building on this, we develop a hierarchical planner to ensure efficient global coverage. In addition, we propose a Safe Aggressive Exploration State Machine, which allows aggressive exploration behavior while ensuring the robot's safety. Our system also includes a plug-and-play semantic target mapping module that integrates seamlessly with state-of-the-art SLAM algorithms for point-cloud-level dense semantic target mapping. We validate our approach through extensive experiments in both realistic simulations and complex real-world environments. Simulation results show that our planner achieves faster exploration and shorter travel distances while guaranteeing a specified number of multi-view inspections. Real-world experiments further confirm the system's effectiveness in achieving accurate dense semantic object mapping of unstructured environments.
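To make the decoupled, priority-driven viewpoint idea concrete, the sketch below shows one plausible way to rank candidate viewpoints so that semantic targets still lacking their required number of views outrank plain geometric (frontier) coverage. All names here (`Viewpoint`, `sample_local_viewpoints`, the tuple formats, `views_required`) are illustrative assumptions, not the paper's actual implementation.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Viewpoint:
    # Lower priority value = popped first from the heap.
    priority: float
    kind: str = field(compare=False)        # "semantic" or "geometric" (assumed labels)
    position: tuple = field(compare=False)  # e.g. (x, y, yaw)

def sample_local_viewpoints(semantic_targets, frontier_voxels, views_required=3):
    """Hypothetical priority-driven decoupled sampler.

    semantic_targets: list of (position, views_already_taken) pairs
    frontier_voxels:  list of positions needing geometric coverage
    Semantic targets with an observation deficit dominate the ordering,
    so multi-view inspection is scheduled before plain coverage.
    """
    heap = []
    for pos, views_done in semantic_targets:
        deficit = views_required - views_done
        if deficit > 0:
            # Bigger deficit -> more negative priority -> popped earlier.
            heapq.heappush(heap, Viewpoint(-float(deficit), "semantic", pos))
    for pos in frontier_voxels:
        heapq.heappush(heap, Viewpoint(0.0, "geometric", pos))
    return [heapq.heappop(heap) for _ in range(len(heap))]
```

A sampler like this keeps the two viewpoint types decoupled: fully inspected semantic targets drop out of the queue entirely, so they never trigger repeated traversal, while geometric frontiers are visited only after outstanding multi-view requirements are met.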