🤖 AI Summary
This work addresses the challenge of balancing accuracy and efficiency in monocular depth estimation for remote sensing imagery by proposing a structure prior–guided diffusion refinement mechanism. The approach first leverages a Vision Transformer to rapidly generate a global structural prior, then performs a small number of lightweight iterative refinements within the latent space of a variational autoencoder (VAE) using a compact U-Net architecture, enhanced by a progressive linear fusion strategy to optimize fine details. The method achieves significantly improved perceptual quality and inference speed while maintaining low memory consumption comparable to that of lightweight ViT models—reducing LPIPS perceptual error by 11.85% and accelerating inference by over 40× compared to state-of-the-art models such as Marigold.
📝 Abstract
Real-time, high-fidelity monocular depth estimation from remote sensing imagery is crucial for numerous applications, yet existing methods face a stark trade-off between accuracy and efficiency. Vision Transformer (ViT) backbones enable fast dense prediction, but their outputs often exhibit poor perceptual quality. Conversely, diffusion models offer high fidelity but at a prohibitive computational cost. To overcome these limitations, we propose Depth Detail Diffusion for Remote Sensing Monocular Depth Estimation ($D^3$-RSMDE), an efficient framework designed to achieve an optimal balance between speed and quality. Our framework first leverages a ViT-based module to rapidly generate a high-quality preliminary depth map, which serves as a structural prior and effectively replaces the time-consuming initial structure-generation stage of diffusion models. Based on this prior, we propose a Progressive Linear Blending Refinement (PLBR) strategy, which uses a lightweight U-Net to refine fine details in only a few iterations. The entire refinement step operates efficiently in a compact latent space supported by a Variational Autoencoder (VAE). Extensive experiments demonstrate that $D^3$-RSMDE achieves a notable 11.85% reduction in the Learned Perceptual Image Patch Similarity (LPIPS) perceptual metric over leading models like Marigold, while also achieving an over 40× speedup in inference and maintaining VRAM usage comparable to that of lightweight ViT models.
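The abstract does not spell out the PLBR schedule, so the following is a minimal hypothetical sketch of the refinement loop: the ViT prior latent is linearly blended with the output of each lightweight U-Net pass, with the blend weight increasing linearly over a small number of steps. The names `plbr_refine` and `denoise_step`, the linear weight `alpha = k / num_steps`, and the toy stand-in for the U-Net are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def plbr_refine(z_prior, denoise_step, num_steps=4):
    """Hypothetical sketch of Progressive Linear Blending Refinement (PLBR).

    z_prior      : latent encoding of the ViT depth prior (from the VAE encoder).
    denoise_step : one lightweight U-Net refinement pass in latent space
                   (here any callable taking (latent, step_index)).
    """
    z = z_prior
    for k in range(1, num_steps + 1):
        z_refined = denoise_step(z, k)
        alpha = k / num_steps          # assumed linear blend schedule
        # Progressively trust the refined latent more; at the final step
        # alpha == 1 and the output is fully refined.
        z = (1.0 - alpha) * z_prior + alpha * z_refined
    return z

# Toy stand-in for a U-Net pass: nudges the latent halfway toward a target.
target = np.ones((4, 8, 8))
step = lambda z, k: z + 0.5 * (target - z)

z0 = np.zeros((4, 8, 8))          # placeholder prior latent
z_out = plbr_refine(z0, step, num_steps=4)
```

Because only a handful of such blended passes run, and each operates on the compact VAE latent rather than full-resolution depth maps, the loop stays far cheaper than a full diffusion sampling trajectory.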