🤖 AI Summary
Addressing the gap between the autonomous exploration and inspection capabilities of underwater robots and those of aerial drones, this paper proposes a refraction-aware multi-camera visual-inertial state estimation algorithm, aided by learning-based ego-velocity prediction, as the backbone of an end-to-end underwater autonomy system. The system is deployed on Ariel, a custom-built underwater robot featuring a tightly coupled five-camera and IMU sensing suite, deep-learning-enhanced visual-inertial odometry, generalized path planning, and vision-based inspection modules. Field experiments in a submarine dry dock in Trondheim demonstrate substantially more robust pose estimation under challenging underwater conditions, particularly in turbid, low-texture environments, and the path planner generalizes well across robot platforms. Overall, the work demonstrates underwater autonomous exploration and inspection performance approaching that of state-of-the-art aerial drones.
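The summary does not give implementation details, but the core of any refraction-aware camera model is bending each back-projected pixel ray at the housing port via Snell's law before it is used in the estimator. Below is a minimal illustrative sketch (not the authors' code), assuming a pinhole camera behind a thin flat port and ignoring glass thickness; the intrinsics `K`, the port normal, and the refractive indices are all assumed parameters.

```python
import numpy as np

def refract(d, n, eta_in, eta_out):
    """Snell's law in vector form: bend unit direction `d` crossing from a
    medium with index `eta_in` into one with `eta_out`; `n` is the unit
    surface normal pointing back toward the incident medium."""
    d = d / np.linalg.norm(d)
    cos_i = -np.dot(n, d)
    eta = eta_in / eta_out
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None  # total internal reflection, no transmitted ray
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

def backproject_underwater(u, v, K, n_air=1.0, n_water=1.33):
    """Back-project pixel (u, v) through intrinsics K, then refract the
    in-air ray at a flat port facing along the camera's +z axis
    (thin-glass approximation: air -> water directly)."""
    ray_air = np.linalg.solve(K, np.array([u, v, 1.0]))
    ray_air /= np.linalg.norm(ray_air)
    port_normal = np.array([0.0, 0.0, -1.0])  # points back into the housing
    return refract(ray_air, port_normal, n_air, n_water)

# Example: an off-center pixel bends noticeably toward the optical axis.
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
print(backproject_underwater(600.0, 240.0, K))
```

In a multi-camera setup like Ariel's, each camera would presumably apply such a refraction correction with its own extrinsics before its feature residuals enter the tightly coupled visual-inertial back-end.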
📝 Abstract
This work presents a vision-based underwater exploration and inspection autonomy solution integrated into Ariel, a custom vision-driven underwater robot. Ariel carries a five-camera and IMU sensing suite, enabling a refraction-aware multi-camera visual-inertial state estimation method aided by a learning-based proprioceptive robot velocity prediction method that enhances robustness against visual degradation. Furthermore, our previously developed and extensively field-verified autonomous exploration and general visual inspection solution is integrated on Ariel, providing aerial-drone-level autonomy underwater. The proposed system is field-tested in a submarine dry dock in Trondheim under challenging visual conditions. The field demonstration shows the robustness of the state estimation solution and the generalizability of the path planning techniques across robot embodiments.
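The abstract does not detail the proprioceptive velocity predictor, but a common pattern is a small network that regresses body-frame velocity from a window of proprioceptive data (IMU samples, possibly thruster commands) and feeds the estimator an extra velocity measurement when vision degrades. The following is a minimal illustrative sketch; the window length, input dimensions, and architecture are assumptions for demonstration, not the paper's design.

```python
import torch
import torch.nn as nn

class EgoVelocityNet(nn.Module):
    """Hypothetical MLP regressing body-frame velocity from a sliding
    window of proprioceptive inputs (IMU + thruster commands). All sizes
    here are assumed, not taken from the paper."""

    def __init__(self, window: int = 50, imu_dim: int = 6, cmd_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window * (imu_dim + cmd_dim), 256),
            nn.ReLU(),
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Linear(128, 3),  # predicted (v_x, v_y, v_z) in the body frame
        )

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (batch, window_len, imu_dim + cmd_dim)
        return self.net(window.flatten(start_dim=1))

# In a tightly coupled estimator, the prediction could serve as a velocity
# measurement: residual r = v_pred - R_bw @ v_world, weighted by an
# (assumed) covariance and down-weighted when visual tracks are healthy.
model = EgoVelocityNet()
v_pred = model(torch.randn(1, 50, 14))  # one 50-sample window
print(v_pred.shape)  # torch.Size([1, 3])
```

Fusing such a prediction as a soft measurement, rather than replacing the visual front-end, is what lets the estimator keep drifting gracefully instead of diverging in turbid, low-texture water.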