EfficientDepth: A Fast and Detail-Preserving Monocular Depth Estimation Model

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Monocular depth estimation (MDE) faces key challenges in 3D reconstruction and novel view synthesis: geometric inconsistency, loss of fine-grained detail, poor robustness to real-world phenomena (e.g., reflections), and inefficiency on edge devices. To address these, we propose an efficient, detail-preserving MDE framework: a Transformer-based encoder captures long-range contextual dependencies, while a lightweight convolutional decoder ensures fast inference. We further introduce a novel bimodal density head to improve probabilistic depth distribution modeling, coupled with an LPIPS-based perceptual loss and a pseudo-labeling, multi-stage optimization strategy that jointly improve geometric accuracy and texture fidelity. Evaluated on the NYUv2 and KITTI benchmarks, our method achieves state-of-the-art or competitive performance with significantly fewer parameters and lower computational cost, effectively balancing high accuracy and edge-device compatibility.
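The pseudo-labeling stage of the training strategy is only described at a high level. A minimal sketch of how such a stage might work, assuming a hypothetical `teacher_predict` function and a confidence-filtering step (the threshold and filtering are illustrative assumptions, not the paper's exact recipe):

```python
# Hypothetical sketch: a strong teacher MDE model labels unlabeled real
# images, and only confident predictions are kept as pseudo-labels for
# the student's next training stage.
def make_pseudo_labels(teacher_predict, images, conf_threshold=0.8):
    labeled = []
    for img in images:
        depth, confidence = teacher_predict(img)  # teacher's depth map + a scalar confidence
        if confidence >= conf_threshold:
            labeled.append((img, depth))  # keep only trusted pseudo-labels
    return labeled
```

The filtered `(image, depth)` pairs would then be mixed with labeled synthetic and real data in the later optimization stages.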

📝 Abstract
Monocular depth estimation (MDE) plays a pivotal role in various computer vision applications, such as robotics, augmented reality, and autonomous driving. Despite recent advancements, existing methods often fail to meet key requirements for 3D reconstruction and view synthesis, including geometric consistency, fine details, robustness to real-world challenges like reflective surfaces, and efficiency for edge devices. To address these challenges, we introduce a novel MDE system, called EfficientDepth, which combines a transformer architecture with a lightweight convolutional decoder, as well as a bimodal density head that allows the network to estimate detailed depth maps. We train our model on a combination of labeled synthetic and real images, as well as pseudo-labeled real images, generated using a high-performing MDE method. Furthermore, we employ a multi-stage optimization strategy to improve training efficiency and produce models that emphasize geometric consistency and fine detail. Finally, in addition to commonly used objectives, we introduce a loss function based on LPIPS to encourage the network to produce detailed depth maps. Experimental results demonstrate that EfficientDepth achieves performance comparable to or better than existing state-of-the-art models, with significantly reduced computational resources.
Problem

Research questions and friction points this paper is trying to address.

Achieving geometric consistency in monocular depth estimation
Preserving fine details while maintaining computational efficiency
Robust depth estimation under real-world challenging conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer with lightweight convolutional decoder
Bimodal density head for detailed depth maps
Multi-stage optimization with LPIPS loss
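The bimodal density head is described only at a high level above. One common way to realize such a head is as a per-pixel two-component mixture, with the final depth taken from the dominant mode rather than the mixture mean, so edges stay sharp at depth discontinuities. A minimal NumPy sketch, assuming Laplacian components (the component family and all names below are assumptions, not the paper's definition):

```python
import numpy as np

def bimodal_pdf(d, pi, mu1, b1, mu2, b2):
    """Density of a two-component Laplacian mixture over depth d."""
    lap1 = np.exp(-np.abs(d - mu1) / b1) / (2 * b1)
    lap2 = np.exp(-np.abs(d - mu2) / b2) / (2 * b2)
    return pi * lap1 + (1 - pi) * lap2

def point_estimate(pi, mu1, b1, mu2, b2):
    """Pick the mean of the dominant component (the higher mode).

    Averaging the two modes would blur foreground/background depths at
    object boundaries; selecting one mode preserves the discontinuity.
    """
    peak1 = pi / (2 * b1)        # density at mu1 from component 1
    peak2 = (1 - pi) / (2 * b2)  # density at mu2 from component 2
    return np.where(peak1 >= peak2, mu1, mu2)
```

For example, with weights 0.7/0.3 and equal scales, the head would output the first component's mean, committing to the more likely surface instead of an in-between depth.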
Authors
Andrii Litvynchuk (Leia Inc.)
Ivan Livinsky (Leia Inc.)
Anand Ravi (Leia Inc.)
Nima Kalantari (Associate Professor at Texas A&M University; Computer Graphics, Computational Photography, Rendering)
Andrii Tsarov (Leia Inc.)