🤖 AI Summary
Conventional oyster monitoring is often invasive, and the non-destructive alternative—manual analysis of video footage—is labor-intensive and hampered by underwater imaging artifacts. Method: This work proposes a lightweight vision-based detection system optimized for edge deployment. It introduces the first Stable Diffusion–based synthetic data augmentation pipeline tailored for underwater oyster imagery; adopts and optimizes YOLOv10 for resource-constrained edge platforms (NVIDIA Jetson/Aqua2); and integrates a robust underwater optical image preprocessing algorithm. Results: The system achieves 0.657 mAP@50 on the Aqua2 platform—the highest reported accuracy for oyster detection—enabling real-time, non-invasive field monitoring. This study presents the first high-accuracy deployment of YOLOv10 on computationally limited underwater robotic platforms and empirically validates the efficacy of generative data augmentation for underwater biological detection.
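For readers unfamiliar with the headline metric: mAP@50 averages detection precision over recall, counting a predicted box as a true positive only when its intersection-over-union (IoU) with a ground-truth box is at least 0.50. A minimal sketch of that scoring rule (illustrative only, not the paper's evaluation code):

```python
# Illustrative sketch of the IoU-at-0.50 matching rule behind mAP@50.
# Boxes are given in pixel coordinates as (x1, y1, x2, y2).

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred, gt, threshold=0.50):
    """A prediction counts toward mAP@50 only if IoU >= 0.50."""
    return iou(pred, gt) >= threshold

# A predicted oyster box overlapping a ground-truth box by half its width:
print(round(iou((0, 0, 10, 10), (5, 0, 15, 10)), 3))  # 0.333 -> not a TP at IoU@0.50
```

Averaging precision over recall levels for each class, then over classes, yields the reported mAP@50 figure.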
📝 Abstract
Oysters are a vital keystone species in coastal ecosystems, providing significant economic, environmental, and cultural benefits. As the importance of oysters grows, so does the relevance of autonomous systems for their detection and monitoring. However, current monitoring strategies often rely on destructive methods. While manual identification of oysters from video footage is non-destructive, it is time-consuming, requires expert input, and is further complicated by the challenges of the underwater environment. To address these challenges, we propose a novel pipeline that uses Stable Diffusion to augment a collected real dataset with realistic synthetic data. This method enhances the dataset used to train a YOLOv10-based vision model. The model is then deployed and tested on an edge platform for underwater robotics, achieving a state-of-the-art 0.657 mAP@50 for oyster detection on the Aqua2 platform.
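One practical reason synthetic augmentation composes cleanly with a YOLO-family detector is that real and generated images can share the same annotation format: YOLO-style training expects one "class x_center y_center width height" line per object, normalized to [0, 1]. A small hypothetical helper (not from the paper) showing that conversion:

```python
# Hypothetical helper, assuming YOLO-style normalized labels; the paper does
# not publish its data-preparation code. Converts a pixel-space bounding box
# (x1, y1, x2, y2) into a "class xc yc w h" label line for a given image size.

def to_yolo_label(cls, box, img_w, img_h):
    """Return a YOLO-format label line with coordinates normalized to [0, 1]."""
    x1, y1, x2, y2 = box
    xc = (x1 + x2) / 2 / img_w   # box center, x
    yc = (y1 + y2) / 2 / img_h   # box center, y
    w = (x2 - x1) / img_w        # box width
    h = (y2 - y1) / img_h        # box height
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# An oyster (class 0) at pixels (100, 200)-(300, 400) in a 640x480 frame:
print(to_yolo_label(0, (100, 200, 300, 400), 640, 480))
# -> "0 0.312500 0.625000 0.312500 0.416667"
```

Because both real frames and Stable Diffusion outputs are labeled in this one format, the augmented images drop directly into the same training manifest as the field-collected data.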