Ariel Explores: Vision-based underwater exploration and inspection via generalist drone-level autonomy

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the significant lag in autonomous exploration and detection capabilities of underwater robots relative to aerial drones, this paper proposes a refraction-aware multi-camera visual-inertial state estimation algorithm, integrated with learning-driven ego-velocity prediction, to realize an end-to-end underwater autonomous navigation system. The system is deployed on a custom-built underwater robot, Ariel, featuring a tightly coupled five-camera-IMU sensing architecture, deep learning-enhanced visual-inertial odometry, generalized path planning, and vision-based detection modules. Field experiments conducted in a submarine dry dock in Trondheim demonstrate substantial improvements in pose estimation robustness under challenging underwater conditions, particularly turbid, low-texture environments. The path planner exhibits strong cross-platform generalizability. Crucially, this work achieves, for the first time on an underwater platform, autonomous exploration and detection performance approaching that of state-of-the-art aerial drones.
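The paper's exact refraction model is not reproduced here, but the core idea of refraction-aware projection can be illustrated with a minimal sketch: when a camera looks through a flat port, a pixel's back-projected ray bends at the air-water interface according to Snell's law, so the usual pinhole ray must be replaced by a refracted one. The sketch below assumes a flat port perpendicular to the optical axis at a known distance, which is a simplification of any real multi-camera housing; all function names are illustrative.

```python
import numpy as np

N_AIR, N_WATER = 1.0, 1.33  # assumed refractive indices for a flat air-water port

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (vector Snell's law).
    Returns None on total internal reflection."""
    cos_i = -np.dot(n, d)
    eta = n1 / n2
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

def backproject_underwater(pixel, K, port_dist):
    """Back-project a pixel through intrinsics K into a refracted ray in water.
    The flat port is assumed perpendicular to the optical axis at port_dist metres."""
    uv1 = np.linalg.solve(K, np.array([pixel[0], pixel[1], 1.0]))
    d_air = uv1 / np.linalg.norm(uv1)        # ray direction in air
    normal = np.array([0.0, 0.0, -1.0])      # port normal, facing the camera
    t = port_dist / d_air[2]                 # intersection with the port plane
    origin = t * d_air                       # ray origin on the interface
    d_water = refract(d_air, normal, N_AIR, N_WATER)
    return origin, d_water / np.linalg.norm(d_water)
```

A refraction-aware estimator would use rays like these, rather than naive pinhole rays, when triangulating landmarks and forming reprojection residuals; ignoring the bend biases depth estimates systematically.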

📝 Abstract
This work presents a vision-based underwater exploration and inspection autonomy solution integrated into Ariel, a custom vision-driven underwater robot. Ariel carries a five-camera and IMU based sensing suite, enabling a refraction-aware multi-camera visual-inertial state estimation method aided by learning-based proprioceptive robot velocity prediction that enhances robustness against visual degradation. Furthermore, our previously developed and extensively field-verified autonomous exploration and general visual inspection solution is integrated on Ariel, providing aerial drone-level autonomy underwater. The proposed system is field-tested in a submarine dry dock in Trondheim under challenging visual conditions. The field demonstration shows the robustness of the state estimation solution and the generalizability of the path planning techniques across robot embodiments.
Problem

Research questions and friction points this paper is trying to address.

Develop vision-based underwater exploration autonomy for robots
Enhance state estimation robustness in visually degraded conditions
Achieve aerial drone-level autonomy in underwater environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Refraction-aware multi-camera visual-inertial state estimation
Learning-based proprioceptive robot velocity prediction
Aerial drone-level autonomy for underwater exploration
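The learning-based proprioceptive velocity prediction can be sketched as a small regressor that maps a window of IMU samples and thruster commands to a body-frame velocity, which a state estimator can fall back on when vision degrades. The architecture, window length, and input dimensions below are illustrative assumptions, not the paper's actual network, and the random weights stand in for a model trained against odometry ground truth.

```python
import numpy as np

rng = np.random.default_rng(0)

class VelocityMLP:
    """Minimal sketch of a proprioceptive ego-velocity predictor: a two-layer
    MLP over a flattened window of IMU readings (accel + gyro) and thruster
    commands. Weights are random placeholders; a real model would be trained."""

    def __init__(self, window=20, imu_dim=6, cmd_dim=8, hidden=64):
        in_dim = window * (imu_dim + cmd_dim)
        self.W1 = rng.standard_normal((hidden, in_dim)) * np.sqrt(2.0 / in_dim)
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((3, hidden)) * np.sqrt(2.0 / hidden)
        self.b2 = np.zeros(3)

    def predict(self, imu_window, cmd_window):
        """imu_window: (window, imu_dim); cmd_window: (window, cmd_dim).
        Returns a predicted body-frame velocity (3,) in m/s."""
        x = np.concatenate([imu_window.ravel(), cmd_window.ravel()])
        h = np.maximum(0.0, self.W1 @ x + self.b1)  # ReLU hidden layer
        return self.W2 @ h + self.b2
```

In a visual-inertial pipeline, such a prediction would typically enter the estimator as an additional velocity measurement with its own noise model, so that pose drift stays bounded through turbid, low-texture stretches where feature tracking fails.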