🤖 AI Summary
Monocular depth estimation (MDE) faces key challenges in 3D reconstruction and novel view synthesis, including geometric inconsistency, loss of fine-grained detail, poor robustness to real-world phenomena (e.g., reflective surfaces), and inefficiency for edge deployment. To address these, we propose an efficient, detail-preserving MDE framework: a Transformer-based encoder captures long-range contextual dependencies, while a lightweight convolutional decoder enables fast inference. We further introduce a novel bimodal density head to enhance probabilistic depth modeling, coupled with an LPIPS-based perceptual loss and a multi-stage optimization strategy that leverages pseudo-labels, jointly improving geometric accuracy and texture fidelity. Evaluated on the NYUv2 and KITTI benchmarks, our method achieves state-of-the-art or competitive performance with significantly fewer parameters and lower computational cost, effectively balancing high accuracy with edge-device compatibility.
📝 Abstract
Monocular depth estimation (MDE) plays a pivotal role in various computer vision applications, such as robotics, augmented reality, and autonomous driving. Despite recent advancements, existing methods often fail to meet key requirements for 3D reconstruction and view synthesis, including geometric consistency, preservation of fine details, robustness to real-world challenges like reflective surfaces, and efficiency on edge devices. To address these challenges, we introduce a novel MDE system, called EfficientDepth, which combines a Transformer-based encoder with a lightweight convolutional decoder and a bimodal density head that allows the network to estimate detailed depth maps. We train our model on a combination of labeled synthetic and real images, as well as pseudo-labeled real images generated by a high-performing MDE method. Furthermore, we employ a multi-stage optimization strategy that improves training efficiency and yields models that emphasize geometric consistency and fine detail. Finally, in addition to commonly used objectives, we introduce a loss function based on LPIPS to encourage the network to produce detailed depth maps. Experimental results demonstrate that EfficientDepth achieves performance comparable to or better than existing state-of-the-art models, while requiring significantly fewer computational resources.
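The abstract does not spell out how the bimodal density head is parameterized. A common way to realize a bimodal depth distribution is a per-pixel two-component Laplace mixture, trained by negative log-likelihood; the sketch below (all function and parameter names are hypothetical, not taken from the paper) illustrates the idea for a single pixel:

```python
import math

def bimodal_nll(depth_gt, pi, mu1, b1, mu2, b2):
    """Negative log-likelihood of a ground-truth depth value under a
    two-component Laplace mixture: pi * Lap(mu1, b1) + (1 - pi) * Lap(mu2, b2).
    A bimodal head can place one mode on each side of a depth discontinuity
    (e.g., a reflective or transparent surface)."""
    def laplace_pdf(x, mu, b):
        return math.exp(-abs(x - mu) / b) / (2.0 * b)

    p = pi * laplace_pdf(depth_gt, mu1, b1) + (1.0 - pi) * laplace_pdf(depth_gt, mu2, b2)
    return -math.log(p + 1e-12)  # epsilon guards against log(0)

def point_estimate(pi, mu1, b1, mu2, b2):
    """At inference, take the mode of the dominant mixture component as the
    predicted depth (one simple decoding rule among several possible ones)."""
    return mu1 if pi >= 0.5 else mu2
```

In a full network, the decoder would predict the five mixture parameters per pixel and the per-pixel NLL would be averaged over the image alongside the other losses.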