i2Nav-Robot: A Large-Scale Indoor-Outdoor Robot Dataset for Multi-Sensor Fusion Navigation and Mapping

📅 2025-08-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing UGV datasets suffer from inadequate sensor configurations, poor temporal synchronization, insufficient ground-truth accuracy, and limited scene diversity, hindering advances in navigation and mapping algorithms. To address these limitations, this work introduces the first multi-sensor synchronized dataset supporting both complex indoor and outdoor environments. It integrates solid-state LiDAR, 4D imaging radar, stereo cameras, and a navigation-grade IMU, achieving hardware-level time synchronization and centimeter-accurate, high-frequency ground-truth trajectories via post-processed integrated navigation. The dataset comprises ten long sequences (totaling 17.06 km) covering diverse scenarios, including urban streets and indoor parking garages. An online synchronization and offline joint calibration strategy ensures sensor alignment, and consistency and reliability are validated across more than ten open-source sensor fusion systems. The dataset thus establishes a high-quality benchmark for research in high-precision localization and multimodal mapping.

📝 Abstract
Accurate and reliable navigation is crucial for autonomous unmanned ground vehicles (UGVs). However, current UGV datasets fall short of the demands of advancing navigation and mapping techniques due to limitations in sensor configuration, time synchronization, ground truth, and scenario diversity. To address these challenges, we present i2Nav-Robot, a large-scale dataset designed for multi-sensor fusion navigation and mapping in indoor-outdoor environments. We integrate multi-modal sensors, including the newest front-view and 360-degree solid-state LiDARs, 4-dimensional (4D) radar, stereo cameras, an odometer, a global navigation satellite system (GNSS) receiver, and inertial measurement units (IMUs), on an omnidirectional wheeled robot. Accurate timestamps are obtained through both online hardware synchronization and offline calibration for all sensors. The dataset comprises ten large-scale sequences covering diverse UGV operating scenarios, such as outdoor streets and indoor parking lots, with a total length of about 17,060 meters. High-frequency ground truth, with centimeter-level position accuracy, is derived from post-processed integrated navigation using a navigation-grade IMU. The proposed i2Nav-Robot dataset has been evaluated with more than ten open-source multi-sensor fusion systems, demonstrating its high data quality.
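The centimeter-level, high-frequency ground truth is what allows fusion systems to be scored directly against the dataset. A minimal sketch of such a position-accuracy evaluation, assuming the ground truth and the estimated trajectory are already expressed in a common frame and time base (function and variable names here are illustrative, not part of the dataset's tooling):

```python
import numpy as np

def interpolate_gt(gt_t, gt_pos, est_t):
    """Linearly interpolate the high-rate ground-truth positions
    at the (lower-rate) timestamps of the estimated trajectory."""
    return np.column_stack([
        np.interp(est_t, gt_t, gt_pos[:, i]) for i in range(3)
    ])

def position_rmse(gt_t, gt_pos, est_t, est_pos):
    """Root-mean-square 3D position error of an estimated trajectory
    against an interpolated ground-truth trajectory (in meters)."""
    gt_at_est = interpolate_gt(gt_t, gt_pos, est_t)
    err = np.linalg.norm(gt_at_est - est_pos, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```

Because all sensors share hardware-synchronized timestamps, this kind of time-based interpolation is well-posed; with unsynchronized clocks, a temporal alignment step would be required first.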
Problem

Research questions and friction points this paper is trying to address.

Lack of diverse UGV datasets for navigation and mapping
Insufficient sensor synchronization and ground truth accuracy
Need for multi-sensor fusion in indoor-outdoor environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal sensor integration for fusion
Hardware and offline time synchronization
High-accuracy ground truth post-processing
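The offline time-calibration idea listed above can be illustrated with a common linear clock model, in which each sensor clock is related to the host clock by a skew and an offset estimated by least squares. This is a generic sketch of that technique under the linear-model assumption, not the paper's actual procedure; all names are illustrative:

```python
import numpy as np

def fit_clock_model(sensor_t, host_t):
    """Least-squares fit of host_t ~ skew * sensor_t + offset
    from paired timestamp observations of the same events."""
    skew, offset = np.polyfit(sensor_t, host_t, 1)
    return skew, offset

def to_host_time(sensor_t, skew, offset):
    """Map raw sensor timestamps onto the host clock."""
    return skew * np.asarray(sensor_t) + offset
```

In practice the paired timestamps would come from shared trigger events (e.g., a hardware sync pulse observed by both clocks), and residuals after the fit indicate remaining synchronization error.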
Hailiang Tang
Intelligent and Integrated Navigation Group (i2Nav), GNSS Research Center, Wuhan University, Wuhan 430079, China; Hubei Technology Innovation Center for Spatiotemporal Information and Positioning Navigation, Wuhan 430079, China; Hubei Luojia Laboratory, Wuhan 430079, China
Tisheng Zhang
Intelligent and Integrated Navigation Group (i2Nav), GNSS Research Center, Wuhan University, Wuhan 430079, China; Hubei Technology Innovation Center for Spatiotemporal Information and Positioning Navigation, Wuhan 430079, China; Hubei Luojia Laboratory, Wuhan 430079, China
Liqiang Wang
Professor of Computer Science, University of Central Florida
Big Data, Deep Learning, Blockchain, Program Analysis, Parallel Computing
Xin Ding
Intelligent and Integrated Navigation Group (i2Nav), GNSS Research Center, Wuhan University, Wuhan 430079, China
Man Yuan
Intelligent and Integrated Navigation Group (i2Nav), GNSS Research Center, Wuhan University, Wuhan 430079, China
Zhiyu Xiang
Professor of Information & Electronic Engineering, Zhejiang University
Computer Vision, Robotics
Jujin Chen
Intelligent and Integrated Navigation Group (i2Nav), GNSS Research Center, Wuhan University, Wuhan 430079, China
Yuhan Bian
Intelligent and Integrated Navigation Group (i2Nav), GNSS Research Center, Wuhan University, Wuhan 430079, China
Shuangyan Liu
Intelligent and Integrated Navigation Group (i2Nav), GNSS Research Center, Wuhan University, Wuhan 430079, China
Yuqing Wang
Intelligent and Integrated Navigation Group (i2Nav), GNSS Research Center, Wuhan University, Wuhan 430079, China
Guan Wang
Intelligent and Integrated Navigation Group (i2Nav), GNSS Research Center, Wuhan University, Wuhan 430079, China
Xiaoji Niu
Intelligent and Integrated Navigation Group (i2Nav), GNSS Research Center, Wuhan University, Wuhan 430079, China; Hubei Technology Innovation Center for Spatiotemporal Information and Positioning Navigation, Wuhan 430079, China; Hubei Luojia Laboratory, Wuhan 430079, China