Jasmine: Harnessing Diffusion Prior for Self-supervised Depth Estimation

📅 2025-03-20
🤖 AI Summary
To address the prediction ambiguity and artifacts in self-supervised monocular depth estimation caused by occlusions, textureless regions, and illumination variations, this work introduces Stable Diffusion's latent-space visual priors into this setting for the first time. We propose an SD-driven self-supervised framework featuring: (i) a hybrid image reconstruction proxy task that preserves the diffusion priors; and (ii) a Scale-Shift GRU module that explicitly decouples scale-and-shift modeling from reprojection-induced disturbances. The method requires no additional annotations and achieves state-of-the-art performance on KITTI. Crucially, it significantly improves zero-shot cross-domain generalization, delivering superior transfer performance on unseen datasets, including Make3D and NYUv2. The core contribution is the first self-supervised depth estimation paradigm to integrate generative priors, jointly ensuring geometric consistency and semantic robustness.

📝 Abstract
In this paper, we propose Jasmine, the first Stable Diffusion (SD)-based self-supervised framework for monocular depth estimation, which effectively harnesses SD's visual priors to enhance the sharpness and generalization of unsupervised prediction. Previous SD-based methods are all supervised, since adapting diffusion models for dense prediction requires high-precision supervision. In contrast, self-supervised reprojection suffers from inherent challenges (e.g., occlusions, texture-less regions, illumination variance), and the resulting predictions exhibit blurs and artifacts that severely compromise SD's latent priors. To resolve this, we construct a novel surrogate task of hybrid image reconstruction. Without any additional supervision, it preserves the detail priors of the SD model by reconstructing the images themselves, while preventing the depth estimation from degrading. Furthermore, to address the inherent misalignment between SD's scale-and-shift-invariant estimation and self-supervised scale-invariant depth estimation, we build the Scale-Shift GRU. It not only bridges this distribution gap but also shields the fine-grained texture of the SD output from the interference of the reprojection loss. Extensive experiments demonstrate that Jasmine achieves SoTA performance on the KITTI benchmark and exhibits superior zero-shot generalization across multiple datasets.
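To make the scale-shift misalignment mentioned above concrete: affine-invariant depth (as produced by SD-based estimators such as Marigold-style models) is only defined up to an unknown scale and shift, which is conventionally recovered by a closed-form least-squares fit against a reference. The sketch below is a minimal illustration of that ambiguity, not the paper's Scale-Shift GRU; all names are hypothetical.

```python
import numpy as np

def align_scale_shift(pred, target):
    """Closed-form least-squares fit of scale s and shift b so that
    s * pred + b best matches target. Illustrates the affine ambiguity
    of scale-and-shift-invariant depth predictions."""
    p, t = pred.ravel(), target.ravel()
    # Design matrix [p, 1]; solve min_{s,b} || s*p + b - t ||^2.
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, b), *_ = np.linalg.lstsq(A, t, rcond=None)
    return s * pred + b
```

A prediction that differs from the reference only by an affine transform is recovered exactly; the paper's contribution is to handle this gap inside the network (via the GRU) rather than by such post-hoc fitting.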
Problem

Research questions and friction points this paper is trying to address.

Self-supervised monocular depth estimation using Stable Diffusion priors.
Addressing challenges like occlusions and texture-less regions in depth prediction.
Bridging scale-shift misalignment between SD and self-supervised depth estimation.
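For context on the occlusion problem listed above: self-supervised pipelines commonly mitigate it with a per-pixel minimum reprojection loss over several warped source views (as in Monodepth2). The sketch below shows only the L1 term of that idea and is an assumption-labeled illustration, not the paper's loss; practical losses also include an SSIM term.

```python
import numpy as np

def min_reprojection_loss(target, warped_sources):
    """Per-pixel minimum photometric (L1) error over source views warped
    into the target frame. Taking the per-pixel minimum lets a pixel
    occluded in one source view still be supervised by another view."""
    per_view = [np.abs(target - w).mean(axis=-1) for w in warped_sources]  # one HxW map per view
    return float(np.minimum.reduce(per_view).mean())
```

The per-pixel minimum, rather than an average, is the key design choice: a view in which the pixel is occluded contributes a large error and is simply ignored at that pixel.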
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stable Diffusion-based self-supervised depth estimation
Hybrid image reconstruction preserves detail priors
Scale-Shift GRU bridges distribution gap effectively
👥 Authors
Jiyuan Wang (BJTU)
Chunyu Lin (BJTU)
Cheng Guan (BJTU)
Lang Nie (BJTU)
Jing He (HKUST)
Haodong Li (UC San Diego)
Kang Liao (Nanyang Technological University)
Yao Zhao (BJTU)