The NavINST Dataset for Multi-Sensor Autonomous Navigation

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
High-precision localization, SLAM, and multi-sensor fusion for autonomous driving in complex urban environments are held back by the lack of high-quality, open benchmark datasets covering diverse indoor-outdoor and multi-illumination scenarios.

Method: This work introduces the first open-source, multimodal navigation benchmark dataset designed for such settings. It pairs a solid-state LiDAR with a tactical-grade IMU to provide centimeter-level ground-truth poses even in GNSS-challenged environments such as underground garages. Synchronized measurements from IMUs, multi-frequency GNSS, LiDAR, millimeter-wave radar, stereo and monocular cameras, and the vehicle odometer (forward speed) are complemented by high-accuracy reference trajectories generated from post-processed GNSS/INS. A unified ROS-based driver architecture and standardized interfaces support algorithmic reproducibility.

Contribution/Results: The release includes dense 3D maps, temporally aligned multi-modal sequences, and a comprehensive open-source toolchain. It has already supported rigorous evaluation of multiple SLAM, localization, and sensor fusion algorithms, establishing it as a key open benchmark for autonomous navigation research.
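
To give a concrete sense of how a ROS-integrated, multi-sensor release like this is typically consumed, the sketch below iterates over a recorded ROS 1 bag and summarizes message counts per sensor stream. The bag file name and topic names are illustrative assumptions, not the dataset's documented interface; the actual topic layout is described at https://navinst.github.io.

```python
# Minimal sketch (assumptions noted): inspect a multi-sensor ROS 1 bag.
# The bag path and topic names are placeholders for illustration only;
# consult the NavINST documentation for the real topic layout.
import rosbag
from collections import defaultdict

TOPICS = [
    "/imu/data",          # assumed IMU topic
    "/lidar/points",      # assumed LiDAR point-cloud topic
    "/camera/image_raw",  # assumed monocular camera topic
    "/gnss/fix",          # assumed GNSS fix topic
]

def summarize_bag(bag_path):
    """Count messages per topic and report the time span they cover."""
    counts = defaultdict(int)
    t_min = t_max = None
    with rosbag.Bag(bag_path, "r") as bag:
        for topic, _msg, t in bag.read_messages(topics=TOPICS):
            counts[topic] += 1
            t_min = t if t_min is None else min(t_min, t)
            t_max = t if t_max is None else max(t_max, t)
    for topic in TOPICS:
        print(f"{topic}: {counts[topic]} messages")
    if t_min is not None:
        print(f"duration: {(t_max - t_min).to_sec():.1f} s")

if __name__ == "__main__":
    summarize_bag("navinst_sequence.bag")  # placeholder file name
```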

📝 Abstract
The NavINST Laboratory has developed a comprehensive multisensory dataset from various road-test trajectories in urban environments, featuring diverse lighting conditions, including indoor garage scenarios with dense 3D maps. This dataset includes multiple commercial-grade IMUs and a high-end tactical-grade IMU. Additionally, it contains a wide array of perception-based sensors, such as a solid-state LiDAR (making it one of the first datasets to do so), a mechanical LiDAR, four electronically scanning RADARs, a monocular camera, and two stereo cameras. The dataset also includes forward speed measurements derived from the vehicle's odometer, along with accurately post-processed high-end GNSS/IMU data, providing precise ground truth positioning and navigation information. The NavINST dataset is designed to support advanced research in high-precision positioning, navigation, mapping, computer vision, and multisensory fusion. It offers rich, multi-sensor data ideal for developing and validating robust algorithms for autonomous vehicles. Finally, it is fully integrated with ROS, ensuring ease of use and accessibility for the research community. The complete dataset and development tools are available at https://navinst.github.io.
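
Because the post-processed GNSS/IMU solution serves as ground truth, a typical validation step is to compare an algorithm's estimated trajectory against that reference. The snippet below is a minimal sketch under stated assumptions: both trajectories are exported to CSV rows of time, x, y, z in a common frame, and the reference is interpolated onto the estimate's timestamps; the file names and format are illustrative, not a prescribed NavINST workflow.

```python
# Minimal sketch (assumed CSV format: t, x, y, z per row, shared frame):
# position RMSE of an estimated trajectory against a post-processed
# GNSS/INS reference trajectory.
import numpy as np

def load_traj(path):
    data = np.loadtxt(path, delimiter=",")   # shape (N, 4)
    return data[:, 0], data[:, 1:4]          # timestamps, positions

def position_rmse(est_path, ref_path):
    t_est, p_est = load_traj(est_path)
    t_ref, p_ref = load_traj(ref_path)
    # Interpolate the reference onto the estimate's timestamps so the
    # trajectories can be compared sample-by-sample.
    p_ref_interp = np.column_stack(
        [np.interp(t_est, t_ref, p_ref[:, k]) for k in range(3)]
    )
    err = np.linalg.norm(p_est - p_ref_interp, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

if __name__ == "__main__":
    rmse = position_rmse("estimated_traj.csv", "groundtruth_gnss_ins.csv")
    print(f"position RMSE: {rmse:.3f} m")  # placeholder file names above
```
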
Problem

Research questions and friction points this paper is trying to address.

Developing high-precision navigation algorithms without open, high-quality benchmark data
Validating multisensory fusion techniques across diverse indoor and outdoor conditions
Supporting autonomous vehicle research with reproducible, ROS-ready sensor data
Innovation

Methods, ideas, or system contributions that make the work stand out.

High-end IMU integration
Solid-state LiDAR inclusion
ROS-compatible multi-sensor dataset