🤖 AI Summary
Monocular depth estimation suffers from scale ambiguity, while light-field depth estimation lacks public benchmarks and dedicated models. To address these issues, this paper introduces a novel paradigm for metric-scale dense depth estimation from single-shot focused light-field imagery. Methodologically, the authors propose a two-stage framework: "sparse point cloud guidance → dense relative depth calibration." First, a learning-based regressor estimates a sparse metric 3D point cloud; this point cloud is then used to scale and align dense relative depth maps regressed by foundation models (DINOv2 and DepthAnything), enabling end-to-end metric depth prediction. Contributions include: (i) the Light Field & Stereo Image Dataset (LFS), the first light-field depth benchmark with stereo-matching ground truth, and (ii) a lightweight approach that jointly models calibrated geometry and learning. On LFS, the method achieves significantly lower absolute error than state-of-the-art monocular methods, offering a cost-effective solution for robotic perception.
📝 Abstract
Metric depth estimation from visual sensors is crucial for robots to perceive, navigate, and interact with their environment. Traditional range imaging setups, such as stereo or structured light cameras, face practical challenges including calibration, occlusions, and hardware demands, with accuracy limited by the baseline between cameras. Single- and multi-view monocular depth offers a more compact alternative, but is constrained by the unobservability of the metric scale. Light field imaging provides a promising route to metric depth through the unique lens configuration of a single device. However, its application to single-view dense metric depth remains under-addressed, mainly due to the technology's high cost, the lack of public benchmarks, and proprietary geometric models and software. Our work explores the potential of focused plenoptic cameras for dense metric depth. We propose a novel pipeline that predicts metric depth from a single plenoptic camera shot by first generating a sparse metric point cloud using machine learning, which is then used to scale and align a dense relative depth map regressed by a foundation depth model, yielding dense metric depth. To validate the pipeline, we curated the Light Field & Stereo Image Dataset (LFS) of real-world light field images with stereo depth labels, filling a gap in existing resources. Experimental results show that our pipeline produces accurate metric depth predictions, laying solid groundwork for future research in this field.
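The abstract's core alignment step, fitting a dense relative depth map to a sparse metric point cloud, is commonly done as a per-image least-squares scale-and-shift fit. The sketch below illustrates that idea only; the function name and the assumption of a simple affine (scale + shift) relation between relative and metric depth are ours, not details confirmed by the paper.

```python
import numpy as np

def align_relative_depth(rel_depth, sparse_uv, sparse_z):
    """Fit scale s and shift t so that s * rel_depth + t matches
    sparse metric depths, then apply the fit to the whole map.

    rel_depth : (H, W) relative depth map from a foundation model
    sparse_uv : (N, 2) integer pixel coordinates (x, y) of sparse points
    sparse_z  : (N,) metric depths at those pixels
    """
    # Sample the relative depth map at the sparse point locations
    d = rel_depth[sparse_uv[:, 1], sparse_uv[:, 0]]
    # Least-squares fit of [s, t] in  s * d + t ≈ sparse_z
    A = np.stack([d, np.ones_like(d)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, sparse_z, rcond=None)
    # Apply the recovered scale and shift to the dense map
    return s * rel_depth + t
```

In practice a robust variant (e.g. RANSAC or a trimmed fit) would be used to reject outliers in the sparse point cloud; the closed-form fit above conveys the principle.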