🤖 AI Summary
Existing video depth estimation methods face a fundamental trade-off: generative models suffer from geometric hallucinations and scale drift, while discriminative approaches rely heavily on large-scale annotated data. This work presents the first framework to deterministically repurpose a pretrained video diffusion model into a single-pass depth regressor, using diffusion timesteps as structural anchors, latent manifold rectification (LMR), and a global affine consistency mechanism. Remarkably, it achieves high-quality video depth estimation with minimal task-specific data, requiring only 1/163 of the training samples used by leading baselines, and attains zero-shot state-of-the-art performance across multiple benchmarks. The complete training pipeline is publicly released to facilitate further research.
📝 Abstract
Existing video depth estimation methods face a fundamental trade-off: generative models suffer from stochastic geometric hallucinations and scale drift, while discriminative models demand massive labeled datasets to resolve semantic ambiguities. To break this impasse, we present DVD, the first framework to deterministically adapt pre-trained video diffusion models into single-pass depth regressors. Specifically, DVD features three core designs: (i) repurposing the diffusion timestep as a structural anchor to balance global stability with high-frequency detail; (ii) latent manifold rectification (LMR), which mitigates regression-induced over-smoothing by enforcing differential constraints that restore sharp boundaries and coherent motion; and (iii) global affine coherence, an inherent property bounding inter-window divergence, which enables seamless long-video inference without complex temporal alignment. Extensive experiments demonstrate that DVD achieves state-of-the-art zero-shot performance across benchmarks. Furthermore, DVD unlocks the rich geometric priors implicit in video foundation models using 163x less task-specific data than leading baselines. Notably, we fully release our pipeline, including the complete training suite for state-of-the-art video depth estimation, to benefit the open-source community.
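To make design (iii) concrete: if each inference window predicts depth that is correct only up to an unknown per-window scale and shift, overlapping windows can be stitched into one long sequence by fitting a single affine map on the overlap. The sketch below is illustrative only, assuming affine-invariant per-window depth outputs; the function names and the least-squares stitching strategy are our assumptions, not DVD's actual implementation.

```python
import numpy as np

def affine_align(src, ref):
    """Least-squares scale s and shift t such that s * src + t ~ ref.
    (Hypothetical helper; DVD's own alignment may differ.)"""
    A = np.stack([src, np.ones_like(src)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, ref, rcond=None)
    return s, t

def stitch_windows(windows, overlap):
    """Merge per-window affine-invariant depth maps into one sequence.

    Each window is a list of 2D depth frames; consecutive windows share
    `overlap` frames. Every new window is mapped onto the running output
    by one global scale/shift fitted on the overlapping frames.
    """
    out = list(windows[0])
    for w in windows[1:]:
        # Flatten the shared frames from both sides and fit one affine map.
        ref = np.concatenate([f.ravel() for f in out[-overlap:]])
        src = np.concatenate([f.ravel() for f in w[:overlap]])
        s, t = affine_align(src, ref)
        # Append only the non-overlapping tail, re-expressed in the
        # first window's depth frame of reference.
        out.extend(s * f + t for f in w[overlap:])
    return out
```

Because a single scale/shift pair is fitted per window rather than per frame, inter-window divergence stays bounded and no frame-by-frame temporal alignment is needed, which matches the role the abstract assigns to global affine coherence.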