Semantic Exploration and Dense Mapping of Complex Environments using Ground Robots Equipped with LiDAR and Panoramic Camera

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of balancing multi-view observation quality against traversal redundancy in autonomous semantic exploration and dense semantic mapping for ground robots operating in complex, unknown environments. It proposes a decoupled hierarchical planning framework with three key contributions: (1) a novel priority-driven decoupled local sampler that explicitly models multi-view semantic observation requirements; (2) a safe-aggressive dual-mode exploration state machine coupled with a voxel-level complete-coverage strategy; and (3) a plug-and-play semantic mapping module enabling LiDAR-panoramic-camera fusion perception and SLAM-integrated, point-cloud-level semantic mapping. Evaluated in both simulated and real-world unstructured environments, the approach significantly improves exploration efficiency, reduces path length, and achieves high-accuracy dense semantic object reconstruction with comprehensive multi-view coverage.

📝 Abstract
This paper presents a system for autonomous semantic exploration and dense semantic target mapping of a complex unknown environment using a ground robot equipped with a LiDAR-panoramic camera suite. Existing approaches often struggle to balance collecting high-quality observations from multiple view angles and avoiding unnecessary repetitive traversal. To fill this gap, we propose a complete system combining mapping and planning. We first redefine the task as completing both geometric coverage and semantic viewpoint observation. We then manage semantic and geometric viewpoints separately and propose a novel Priority-driven Decoupled Local Sampler to generate local viewpoint sets. This enables explicit multi-view semantic inspection and voxel coverage without unnecessary repetition. Building on this, we develop a hierarchical planner to ensure efficient global coverage. In addition, we propose a Safe Aggressive Exploration State Machine, which allows aggressive exploration behavior while ensuring the robot's safety. Our system includes a plug-and-play semantic target mapping module that integrates seamlessly with state-of-the-art SLAM algorithms for pointcloud-level dense semantic target mapping. We validate our approach through extensive experiments in both realistic simulations and complex real-world environments. Simulation results show that our planner achieves faster exploration and shorter travel distances while guaranteeing a specified number of multi-view inspections. Real-world experiments further confirm the system's effectiveness in achieving accurate dense semantic object mapping of unstructured environments.
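The abstract describes managing semantic and geometric viewpoints separately, with semantic viewpoints guaranteeing a specified number of multi-view inspections. As a rough illustration (not the paper's actual formulation), a minimal sketch of such decoupled, priority-driven viewpoint selection might rank the two viewpoint sets independently and let semantic inspection viewpoints take priority until the required view count is met; the `Viewpoint` data model and the gain-over-cost utility below are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Viewpoint:
    x: float
    y: float
    yaw: float
    kind: str       # "semantic" (object inspection) or "geometric" (frontier coverage)
    gain: float     # expected observation / coverage gain (assumed scoring)
    cost: float     # travel cost from the current pose (assumed scoring)

def sample_decoupled(semantic_candidates, geometric_candidates, needed_views):
    """Hypothetical sketch of priority-driven decoupled viewpoint selection.

    Semantic viewpoints are ranked separately from geometric frontier
    viewpoints, and semantic ones take priority until the required number
    of multi-view inspections is scheduled. Illustrative only.
    """
    def score(v):
        # Simple gain-over-cost utility; the paper's scoring is not specified here.
        return v.gain / (1.0 + v.cost)

    ordered = []
    # Semantic inspection viewpoints first, best utility first, capped at the
    # required number of views.
    ordered.extend(sorted(semantic_candidates, key=score, reverse=True)[:needed_views])
    # Then geometric frontier viewpoints to complete voxel-level coverage.
    ordered.extend(sorted(geometric_candidates, key=score, reverse=True))
    return ordered
```

In this sketch the decoupling is what prevents repetitive traversal: geometric coverage never competes with, or forces revisits for, object inspection, because each set is ranked and consumed on its own terms.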
Problem

Research questions and friction points this paper is trying to address.

Autonomous semantic exploration of complex unknown environments
Balancing high-quality multi-view observations with efficient traversal
Dense semantic target mapping using LiDAR and panoramic cameras
Innovation

Methods, ideas, or system contributions that make the work stand out.

Priority-driven Decoupled Local Sampler for viewpoints
Hierarchical planner ensures efficient global coverage
Safe Aggressive Exploration State Machine balances aggressive exploration with robot safety
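The paper describes its state machine only at a high level: aggressive exploration behavior that never compromises the robot's safety. A minimal sketch of such a safety-gated dual-mode policy, with all class names, checks, and thresholds being illustrative assumptions rather than the authors' implementation, could look like this:

```python
from enum import Enum

class Mode(Enum):
    AGGRESSIVE = "aggressive"  # push toward frontiers beyond verified-safe space
    SAFE = "safe"              # restrict motion to known, collision-free space

class ExplorationStateMachine:
    """Hypothetical sketch of a safety-gated dual-mode exploration policy."""

    def __init__(self, clearance_threshold: float = 0.5):
        self.mode = Mode.AGGRESSIVE
        self.clearance_threshold = clearance_threshold  # meters, assumed value

    def update(self, min_obstacle_clearance: float, path_in_known_space: bool) -> Mode:
        # Fall back to SAFE whenever the robot gets too close to obstacles
        # or its current path leaves verified free space.
        if min_obstacle_clearance < self.clearance_threshold or not path_in_known_space:
            self.mode = Mode.SAFE
        # Resume AGGRESSIVE only once a margin well above the threshold is
        # restored (hysteresis avoids rapid mode oscillation).
        elif min_obstacle_clearance > 2 * self.clearance_threshold and path_in_known_space:
            self.mode = Mode.AGGRESSIVE
        return self.mode
```

The hysteresis band between the two transitions is a common design choice for this kind of switching logic: without it, a robot hovering near the clearance threshold would flip modes every planning cycle.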
👥 Authors
Xiaoyang Zhan, Department of Mechanical Engineering, Carnegie Mellon University
Shixin Zhou, Department of Mechanical Engineering, Carnegie Mellon University
Qianqian Yang, Zhejiang University (Information Theory, Wireless AI, Semantic Communication, Machine Learning)
Yixuan Zhao, Department of Mechanical Engineering, Carnegie Mellon University
Hao Liu, Department of Mechanical Engineering, Carnegie Mellon University
Srinivas Chowdary Ramineni, Department of Mechanical Engineering, Carnegie Mellon University
Kenji Shimada, Carnegie Mellon University (Robotics, CAD/CAE, CV/CG, AI/ML, BME)