🤖 AI Summary
Existing monocular video geometry estimation methods predominantly rely on diffusion model priors for per-frame local modeling, overlooking their inherent cross-frame correspondence capability and thus suffering from inter-frame geometric inconsistency. This work pioneers the transfer of temporal consistency from video diffusion models to unified depth and surface normal estimation in a global coordinate system. We propose: (1) a geometric objective formulation in the global coordinate frame; (2) an efficient conditional injection mechanism leveraging position encoding reuse; and (3) a joint depth–normal training paradigm. By explicitly modeling cross-frame correspondence constraints and enabling multi-task collaborative optimization, our approach achieves state-of-the-art performance across multiple benchmarks and supports end-to-end 3D reconstruction. Notably, it is trained solely on static videos yet demonstrates strong generalization to dynamic scenes.
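The joint depth–normal paradigm above can be illustrated with a toy sketch. The loss terms and weights below are illustrative assumptions, not the paper's actual objective: an L1 depth term, a cosine normal term, and a cross-task consistency term that compares the normals implied by the predicted depth against the predicted normal map.

```python
import numpy as np

def normals_from_depth(depth):
    """Finite-difference surface normals from a depth map (toy orthographic model)."""
    dzdx = np.gradient(depth, axis=1)  # derivative along image x (columns)
    dzdy = np.gradient(depth, axis=0)  # derivative along image y (rows)
    n = np.stack([-dzdx, -dzdy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def joint_loss(pred_depth, pred_normal, gt_depth, gt_normal, w=0.1):
    """Illustrative joint depth + normal objective; `w` is a made-up weight."""
    depth_l1 = np.abs(pred_depth - gt_depth).mean()
    normal_cos = (1.0 - (pred_normal * gt_normal).sum(-1)).mean()
    # Cross-task consistency: normals implied by the predicted depth
    # should agree with the predicted normal map.
    consistency = (1.0 - (normals_from_depth(pred_depth) * pred_normal).sum(-1)).mean()
    return depth_l1 + normal_cos + w * consistency
```

When predictions match the ground truth exactly, all three terms vanish, which is one sanity check that the terms are mutually consistent.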
📝 Abstract
Recently, methods leveraging diffusion model priors to assist monocular geometric estimation (e.g., depth and normals) have gained significant attention due to their strong generalization ability. However, most existing works focus on estimating geometric properties within the camera coordinate system of individual video frames, neglecting the inherent ability of diffusion models to capture inter-frame correspondence. In this work, we demonstrate that, through appropriate design and fine-tuning, the intrinsic consistency of video generation models can be effectively harnessed for consistent geometric estimation. Specifically, we 1) select, as prediction targets, geometric attributes in the global coordinate system that remain pixel-aligned with the video frames, 2) introduce a novel and efficient conditioning method that reuses positional encodings, and 3) enhance performance through joint training on multiple geometric attributes that share the same correspondence. Our method achieves superior performance in predicting global geometric attributes in videos and can be directly applied to reconstruction tasks. Even when trained solely on static video data, our approach exhibits the potential to generalize to dynamic video scenes.
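The conditioning-by-positional-encoding-reuse idea can be sketched as follows. The function names and the single-head attention setup are assumptions for illustration, not the paper's architecture: condition tokens (e.g., RGB latents) are given the *same* positional encodings as the noisy target tokens, so attention naturally aligns each target position with its corresponding condition position, without extra learned condition embeddings.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_with_shared_pos(target, cond, pos):
    """Toy single-head attention where condition tokens reuse the
    target tokens' positional encodings.

    target, cond: (T, D) token features; pos: (T, D) positional encodings.
    Returns updated target features of shape (T, D).
    """
    q = target + pos
    # Keys for both streams use the SAME positional encodings, so a
    # target token at position i scores highly against the condition
    # token at position i.
    k = np.concatenate([target + pos, cond + pos], axis=0)  # (2T, D)
    v = np.concatenate([target, cond], axis=0)              # (2T, D)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return attn @ v
```

In a real video diffusion backbone the same idea would apply per attention layer over spatio-temporal tokens; the sketch only shows why shared positional encodings act as an implicit alignment signal.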