Underwater Monocular Metric Depth Estimation: Real-World Benchmarks and Synthetic Fine-Tuning

📅 2025-07-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Monocular underwater metric depth estimation faces severe performance bottlenecks due to light attenuation, scattering, turbidity, and scarcity of ground-truth depth annotations. Method: We introduce the first benchmark dedicated to real-world underwater metric depth estimation and propose a physics-informed synthetic data augmentation framework grounded in underwater imaging models. Using Depth Anything V2 (ViT-S) as the backbone, we fine-tune on physically simulated underwater scenes generated from Hypersim, incorporating scale-aware supervision and domain adaptation techniques. Results: Our approach significantly outperforms baselines trained solely on aerial data on real underwater benchmarks—including FLSea and SQUID—yielding more robust and generalizable depth predictions. This work provides the first systematic empirical validation of the effectiveness and necessity of physics-guided synthetic data for underwater metric depth estimation.

Technology Category

Application Category

📝 Abstract
Monocular depth estimation has recently advanced to provide not only relative but also metric depth predictions. However, its reliability in underwater environments remains limited due to light attenuation and scattering, color distortion, turbidity, and the lack of high-quality metric ground-truth data. In this paper, we present a comprehensive benchmark of zero-shot and fine-tuned monocular metric depth estimation models on real-world underwater datasets with metric depth annotations, such as FLSea and SQUID. We evaluate a diverse set of state-of-the-art models across a range of underwater conditions with different ranges. Our results show that large-scale models trained on terrestrial (real or synthetic) data, while effective in in-air settings, perform poorly underwater due to significant domain shifts. To address this, we fine-tune Depth Anything V2 with a ViT-S backbone encoder on a synthetic underwater variant of the Hypersim dataset, which we generated using a physically based underwater image formation model. We demonstrate our fine-tuned model consistently improves performance across all benchmarks and outperforms baselines trained only on the clean in-air Hypersim dataset. Our study provides a detailed evaluation and visualization for monocular metric depth estimation in underwater scenes, highlighting the importance of domain adaptation and scale-aware supervision for achieving robust and generalizable metric depth predictions in challenging underwater environments for future research.
Problem

Research questions and friction points this paper is trying to address.

Evaluating monocular depth models in underwater environments
Addressing domain shifts in underwater depth estimation
Improving performance via synthetic data fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarking zero-shot and fine-tuned underwater depth models
Fine-tuning Depth Anything V2 with synthetic underwater data
Using physically based underwater image formation model
🔎 Similar Papers
No similar papers found.