M2UD: A Multi-modal, Multi-scenario, Uneven-terrain Dataset for Ground Robot with Localization and Mapping Evaluation

📅 2025-03-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing ground-robot SLAM datasets are largely confined to flat terrain and small-scale scenes, limiting rigorous evaluation of algorithmic robustness in complex, unstructured environments. To address this, the authors introduce M2UD, a multi-modal, multi-scenario, uneven-terrain dataset designed specifically for ground-robot SLAM evaluation. It spans diverse, challenging environments, including urban, open-field, long-corridor, and mixed settings, with sequences captured under extreme weather and aggressive robot motion. Ground-truth poses are derived from smoothed RTK measurements, and a novel localization metric jointly weighs accuracy and computational efficiency. High-fidelity reference maps acquired with a 3D laser scanner further enable rigorous mapping assessment under multi-degree-of-freedom motion on uneven terrain. The release comprises 12 localization and 2 mapping benchmark sequences, on which classical methods such as LOAM and LeGO-LOAM are evaluated, substantially improving the ability to benchmark SLAM performance at its operational limits.

📝 Abstract
Ground robots play a crucial role in inspection, exploration, rescue, and other applications. In recent years, advancements in LiDAR technology have made sensors more accurate, lightweight, and cost-effective, so researchers increasingly integrate LiDAR into SLAM studies, providing robust technical support for ground robots and expanding their application domains. Public datasets are essential for advancing SLAM technology. However, existing datasets for ground robots are typically restricted to flat-terrain motion with 3 DOF and cover only a limited range of scenarios. Although handheld devices and UAVs exhibit richer and more aggressive movements, their datasets are predominantly confined to small-scale environments due to endurance limitations. To fill this gap, we introduce M2UD, a multi-modal, multi-scenario, uneven-terrain SLAM dataset for ground robots. The dataset contains a diverse range of highly challenging environments, including cities, open fields, long corridors, and mixed scenarios, and includes sequences captured under extreme weather conditions. Its aggressive motion and degradation characteristics not only pose challenges for testing and evaluating existing SLAM methods but also drive the development of more advanced SLAM algorithms. To benchmark SLAM algorithms, M2UD provides smoothed ground-truth localization data obtained via RTK and introduces a novel localization evaluation metric that considers both accuracy and efficiency. Additionally, we use a high-precision laser scanner to acquire ground-truth maps of two representative scenes, facilitating the development and evaluation of mapping algorithms. We select 12 localization sequences and 2 mapping sequences to evaluate several classical SLAM algorithms, verifying the dataset's usability. For ease of use, the dataset is accompanied by a suite of development kits.
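For concreteness, localization benchmarks of this kind are usually scored with absolute trajectory error (ATE) against the ground-truth poses. The sketch below is a minimal illustration, not the paper's combined accuracy/efficiency metric (which is not reproduced here): it assumes time-synchronized Nx3 position arrays, rigidly aligns the estimate to the RTK ground truth with an SVD-based (Umeyama-style) fit, and reports the translational RMSE. All function names are hypothetical.

```python
# Minimal ATE sketch against RTK ground truth (illustrative, not the
# paper's metric). Inputs: time-synchronized Nx3 position arrays.
import numpy as np

def align_rigid(est: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Best-fit rigid (SE(3), no scale) alignment of est onto gt."""
    mu_est, mu_gt = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_est).T @ (gt - mu_gt)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                    # optimal rotation
    t = mu_gt - R @ mu_est                # optimal translation
    return (R @ est.T).T + t

def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """Translational RMSE after rigid alignment (the usual ATE score)."""
    err = align_rigid(est, gt) - gt
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```

In practice one would first associate estimated and ground-truth poses by timestamp before calling this; tools such as evo implement the same idea with many more options.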
Problem

Research questions and friction points this paper is trying to address.

Addresses the lack of diverse, uneven-terrain SLAM datasets for ground robots.
Introduces M2UD for multi-scenario, extreme-condition SLAM algorithm evaluation.
Provides ground-truth data and metrics for localization and mapping accuracy.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal, multi-scenario, uneven-terrain SLAM dataset
Smoothed RTK-based ground-truth localization data with an accuracy/efficiency evaluation metric
High-precision laser-scanned ground-truth maps for mapping evaluation (see the sketch below)
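To make the mapping benchmark concrete, a common way to score a reconstructed map against a laser-scanned reference cloud is via nearest-neighbor point distances. Below is a minimal sketch, assuming both clouds are Nx3 NumPy arrays already registered in a common frame; the 0.2 m inlier threshold and the function name are illustrative assumptions, not values from the paper.

```python
# Minimal map-accuracy sketch against a scanned reference cloud
# (illustrative; threshold and names are assumptions, not the paper's).
import numpy as np
from scipy.spatial import cKDTree

def map_accuracy(slam_map: np.ndarray, ref_map: np.ndarray,
                 inlier_thresh: float = 0.2) -> tuple[float, float]:
    """RMSE and inlier ratio of SLAM-map points vs. the reference."""
    dists, _ = cKDTree(ref_map).query(slam_map)  # one NN distance per point
    rmse = float(np.sqrt(np.mean(dists ** 2)))
    inlier_ratio = float(np.mean(dists < inlier_thresh))
    return rmse, inlier_ratio
```

A symmetric variant (map-to-reference plus reference-to-map, akin to Chamfer distance) would additionally penalize structure missing from the reconstructed map.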
Authors

Yanpeng Jia
State Key Laboratory of Robotics at Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; University of Chinese Academy of Sciences, Beijing, China
Shiyi Wang
Imperial College London
Shiliang Shao
State Key Laboratory of Robotics at Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
Yue Wang
Zhejiang University, Zhejiang, China
Fu Zhang
Department of Mechanical Engineering, The University of Hong Kong, Hong Kong
Ting Wang
State Key Laboratory of Robotics at Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China