🤖 AI Summary
Existing underwater visual SLAM datasets lack high-precision ground-truth trajectories, hindering quantitative algorithm evaluation; moreover, challenging conditions such as low illumination and high turbidity severely degrade localization and mapping performance. To address these limitations, we introduce AquaticVision, the first underwater dual-modal SLAM benchmark dataset with ground-truth trajectories from an optical motion-capture system. It is also the first to synchronously capture event-based (DVS) and RGB frame-based data with rigorous spatiotemporal calibration. The dataset spans diverse representative underwater scenarios, filling a critical gap in objective, reproducible SLAM evaluation for aquatic environments. Once openly released, AquaticVision will support the development of algorithms that remain robust under low-light and turbid conditions, and will serve as a standardized, reliable benchmark for algorithm development, ablation studies, and fair performance comparison in underwater visual SLAM.
📝 Abstract
Many underwater applications, such as offshore asset inspections, rely on visual inspection and detailed 3D reconstruction. Visual SLAM systems for aquatic environments have therefore garnered significant attention in marine robotics research. However, existing underwater visual SLAM datasets often lack ground-truth trajectory data, making it difficult to compare SLAM algorithms objectively when only qualitative results or COLMAP reconstructions are available. In this paper, we present a novel underwater dataset that includes ground-truth trajectories obtained with a motion capture system. In addition, for the first time, we release visual data containing both events and frames for benchmarking underwater visual positioning. By providing event camera data, we aim to facilitate the development of more robust and advanced underwater visual SLAM algorithms, since event cameras can help mitigate the challenges posed by extremely low-light or hazy underwater conditions. The webpage of our dataset is https://sites.google.com/view/aquaticvision-lias.
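Motion-capture ground truth makes quantitative comparison possible through standard metrics such as absolute trajectory error (ATE). As an illustration only (this helper is not part of the dataset release, and the function names are hypothetical), a minimal sketch of ATE-RMSE with Umeyama similarity alignment between an estimated and a ground-truth trajectory, each given as an N×3 array of positions:

```python
import numpy as np

def align_umeyama(est, gt):
    """Least-squares similarity alignment (Umeyama, 1991) of the
    estimated trajectory onto the ground truth; returns the aligned
    estimate. est, gt: (N, 3) arrays of corresponding positions."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    # Cross-covariance between centered point sets.
    U, D, Vt = np.linalg.svd(G.T @ E / len(est))
    # Guard against a reflection in the recovered rotation.
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / E.var(axis=0).sum()
    t = mu_g - s * R @ mu_e
    return (s * (R @ est.T)).T + t

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error after alignment."""
    aligned = align_umeyama(est, gt)
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

In practice, trajectories must first be time-synchronized (associating each estimated pose with the nearest ground-truth timestamp) before computing the per-pose errors; tools such as the evo package automate this pipeline.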