🤖 AI Summary
Existing 4D radar SLAM benchmarks suffer from narrow spatial coverage, limited platform diversity, and restricted environmental conditions, hindering robust evaluation of localization and mapping for autonomous driving. To address this, we introduce the first large-scale, multi-platform (handheld, e-bike, SUV), multi-environment (sunny, nighttime, heavy rain; campus roads, highway tunnels) 4D radar SLAM benchmark. We propose a two-step GNSS time-synchronization method and generate centimeter-accurate ground-truth trajectories via bidirectional LiDAR-inertial sequential localization. The benchmark integrates 4D radar, 3D LiDAR, stereo cameras, a consumer-grade IMU, and GNSS/INS, supported by convex-hull smoothing, cross-correlation-based temporal synchronization, and TLS point-cloud registration. It enables quantitative evaluation of multiple radar odometry and place recognition methods, systematically revealing key challenges: scarcity of dynamic textures, weak cross-weather robustness, and poor long-term trajectory consistency.
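The cross-correlation step of the temporal synchronization can be illustrated with a minimal sketch (an assumption-laden illustration, not the authors' implementation): given a signal observed under both clocks, e.g. angular velocity seen by two sensors, the relative time offset is the lag that maximizes their cross-correlation.

```python
import numpy as np

def estimate_time_offset(sig_ref, sig_query, rate_hz):
    """Estimate how much sig_query lags sig_ref, in seconds,
    as the lag maximizing the normalized full cross-correlation."""
    a = (sig_ref - sig_ref.mean()) / (sig_ref.std() + 1e-12)
    b = (sig_query - sig_query.mean()) / (sig_query.std() + 1e-12)
    corr = np.correlate(b, a, mode="full")     # lags -(N-1) .. (N-1)
    lag = int(np.argmax(corr)) - (len(a) - 1)  # samples by which b lags a
    return lag / rate_hz

# Synthetic check: delay a random signal by 25 samples at 100 Hz (0.25 s).
rng = np.random.default_rng(0)
ref = rng.standard_normal(1000)
query = np.roll(ref, 25)
offset = estimate_time_offset(ref, query, rate_hz=100.0)  # ≈ 0.25 s
```

In practice the signals would be resampled to a common rate first, and the integer-lag estimate refined (e.g. by parabolic interpolation around the peak) for sub-sample accuracy.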
📝 Abstract
4D radars are increasingly favored for odometry and mapping in autonomous systems due to their robustness in harsh weather and dynamic environments. Existing datasets, however, often cover limited areas and are typically captured with a single platform. To address this gap, we present a diverse, large-scale dataset specifically designed for 4D radar-based localization and mapping. The data were gathered using three platforms: a handheld device, an e-bike, and an SUV, under a variety of environmental conditions, including clear days, nighttime, and heavy rain. Collection took place from September 2023 to February 2024, encompassing diverse settings such as roads in a vegetated campus and tunnels on highways. Each route was traversed multiple times to facilitate place recognition evaluations. The sensor suite included a 3D lidar, 4D radars, stereo cameras, consumer-grade IMUs, and a GNSS/INS system. Sensor data packets were synchronized to GNSS time using a two-step process comprising convex-hull-based smoothing followed by a correlation-based correction. Reference motion for the platforms was generated by registering lidar scans to a terrestrial laser scanner (TLS) point-cloud map with a lidar-inertial sequential localizer that supports both forward and backward processing; the backward pass enables detailed quantitative and qualitative assessment of reference-motion accuracy. To demonstrate the dataset's utility, we evaluated several state-of-the-art radar-based odometry and place recognition methods, highlighting open challenges in radar-based SLAM.
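The convex-hull smoothing step can be sketched as follows (a simplified illustration under stated assumptions, not the paper's implementation). If each measured clock offset equals a slowly drifting true offset plus a non-negative transport delay, every sample lies above the true offset curve, so the lower convex hull of the (local time, measured offset) points tracks the minimum-delay envelope and gives a smoothed offset at any query time:

```python
import numpy as np

def lower_convex_hull(t, offsets):
    """Lower convex hull (Andrew's monotone chain) of (t, offset) samples,
    assuming measured offset = true offset + non-negative delay."""
    pts = sorted(zip(t, offsets))
    hull = []
    for p in pts:
        # Drop the last vertex while it lies above the chord to the new point.
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def smoothed_offset(hull, t_query):
    """Piecewise-linear interpolation of the hull at a local time."""
    xs, ys = zip(*hull)
    return float(np.interp(t_query, xs, ys))

# Synthetic check: 0.5 s offset drifting at 1 ms/s, plus delays up to 50 ms.
rng = np.random.default_rng(1)
t_local = np.arange(0.0, 10.0, 0.1)
true_offset = 0.5 + 1e-3 * t_local
measured = true_offset + rng.uniform(0.0, 0.05, size=t_local.size)
hull = lower_convex_hull(t_local, measured)
est = smoothed_offset(hull, 5.0)  # near the true offset of 0.505 s
```

The envelope never dips below the true offset when the drift is (locally) linear, which is why it makes a robust first pass before the correlation-based correction.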