🤖 AI Summary
To address prediction ambiguity and artifacts in self-supervised monocular depth estimation caused by occlusions, textureless regions, and illumination variations, this work is the first in the domain to introduce Stable Diffusion's latent-space visual priors. We propose an SD-driven self-supervised framework featuring (i) a hybrid image-reconstruction proxy task that preserves the diffusion priors, and (ii) a Scale-Shift GRU module that explicitly decouples scale-and-shift modeling from reprojection-induced disturbances. The method requires no additional annotations and achieves state-of-the-art performance on KITTI. Crucially, it significantly enhances zero-shot cross-domain generalization, delivering superior transfer performance on unseen datasets, including Make3D and NYUv2. Our core contribution is the first self-supervised depth estimation paradigm to integrate generative priors, jointly ensuring geometric consistency and semantic robustness.
📝 Abstract
In this paper, we propose Jasmine, the first Stable Diffusion (SD)-based self-supervised framework for monocular depth estimation, which effectively harnesses SD's visual priors to enhance the sharpness and generalization of unsupervised prediction. Previous SD-based methods are all supervised, since adapting diffusion models for dense prediction requires high-precision supervision. In contrast, self-supervised reprojection suffers from inherent challenges (e.g., occlusions, texture-less regions, illumination variance), and the resulting predictions exhibit blur and artifacts that severely compromise SD's latent priors. To resolve this, we construct a novel surrogate task of hybrid image reconstruction. Without any additional supervision, it preserves the detail priors of the SD model by reconstructing the images themselves while preventing depth estimation from degrading. Furthermore, to address the inherent misalignment between SD's scale-and-shift-invariant estimation and self-supervised scale-invariant depth estimation, we build the Scale-Shift GRU. It not only bridges this distribution gap but also shields the fine-grained texture of the SD output from the interference of the reprojection loss. Extensive experiments demonstrate that Jasmine achieves SoTA performance on the KITTI benchmark and exhibits superior zero-shot generalization across multiple datasets.
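The scale-and-shift mismatch mentioned above can be made concrete. Affine-invariant (scale-and-shift-invariant) depth predictions, like those produced by SD-based estimators, are conventionally aligned to a reference depth map via a closed-form least-squares fit over scale and shift. The sketch below shows that classical alignment only; it is an illustration of the distribution gap, not the paper's Scale-Shift GRU, and the function name is our own:

```python
import numpy as np

def align_scale_shift(pred, target):
    """Least-squares alignment of an affine-invariant prediction to a target.

    Solves min_{s,t} || s * pred + t - target ||^2 in closed form and
    returns the aligned prediction along with the fitted scale and shift.
    """
    # Design matrix [pred, 1] so the solution is (s, t).
    A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return s * pred + t, s, t
```

A per-image fit like this is what standard affine-invariant depth evaluation uses; the point of a learned module such as the Scale-Shift GRU is to resolve this ambiguity inside the network rather than as a post-hoc fit against ground truth.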