🤖 AI Summary
A lack of publicly available, high-quality underwater obstacle perception datasets for autonomous surface vehicles (ASVs) operating in complex hydrodynamic environments hinders progress in marine autonomous navigation. Method: This work introduces ASV-Underwater, the first open-source, multimodal underwater obstacle dataset specifically designed for ASVs. Collected over four years, it encompasses diverse targets, turbid water conditions, low-light scenarios, and dynamic occlusions, with temporally synchronized optical and acoustic imagery. All data are annotated with ego-centric, fine-grained pixel-level and bounding-box labels, and formatted to comply with standard detection frameworks (e.g., YOLOv8, Faster R-CNN). Contribution/Results: Experiments demonstrate that models trained on ASV-Underwater achieve significantly improved robustness in underwater obstacle detection and classification, effectively addressing a critical data gap in maritime perception research.
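The summary notes that the bounding-box labels are formatted for standard detection frameworks such as YOLOv8. In that convention, each image is paired with a text file whose rows read `class x_center y_center width height`, with coordinates normalized to [0, 1]. A minimal sketch of parsing one such label file into pixel-space boxes is shown below; the example row and image dimensions are hypothetical, not taken from the dataset:

```python
from dataclasses import dataclass


@dataclass
class Box:
    """An axis-aligned bounding box in pixel coordinates."""
    cls: int
    x1: float
    y1: float
    x2: float
    y2: float


def parse_yolo_labels(text: str, img_w: int, img_h: int) -> list[Box]:
    """Parse YOLO-format rows ('class x_center y_center width height',
    all normalized to [0, 1]) into pixel-space corner coordinates."""
    boxes = []
    for line in text.strip().splitlines():
        c, xc, yc, w, h = line.split()
        xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
        boxes.append(Box(
            cls=int(c),
            x1=(xc - w / 2) * img_w,   # left edge
            y1=(yc - h / 2) * img_h,   # top edge
            x2=(xc + w / 2) * img_w,   # right edge
            y2=(yc + h / 2) * img_h,   # bottom edge
        ))
    return boxes


# Hypothetical label row for a 640x480 frame: class 0, box centered
# in the image, a quarter of the image wide and tall.
labels = "0 0.5 0.5 0.25 0.25"
print(parse_yolo_labels(labels, 640, 480))
# → [Box(cls=0, x1=240.0, y1=180.0, x2=400.0, y2=300.0)]
```

Because the format is plain text with normalized coordinates, the same labels apply unchanged to resized copies of an image, which is one reason detectors like YOLOv8 adopt it.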
📄 Abstract
This paper introduces the first publicly accessible labeled multi-modal perception dataset for autonomous maritime navigation, focusing on in-water obstacles in the aquatic environment to enhance situational awareness for Autonomous Surface Vehicles (ASVs). This dataset, collected over four years and consisting of diverse objects encountered under varying environmental conditions, aims to bridge the research gap in autonomous surface vehicles by providing a multi-modal, annotated, and ego-centric perception dataset for object detection and classification. We also demonstrate the applicability of the proposed dataset by training proven, open-source, deep learning-based perception algorithms on it. We expect that our dataset will contribute to the development of marine autonomy pipelines and marine (field) robotics. This dataset is open-source and can be found at https://seepersea.github.io/.