🤖 AI Summary
This work addresses the challenge of self-supervised depth estimation from surround-view cameras by proposing a method that integrates spatial geometric consistency. The approach leverages 3D surface normals and 2D texture consistency as regularizing constraints, enhances feature representations through geometric priors derived from vision foundation models, and introduces a dense view synthesis module that fuses spatiotemporal context, alongside an adaptive weighting mechanism for geometric cues. This framework enables robust multi-view depth learning without ground-truth supervision. Evaluated on the KITTI, DDAD, and nuScenes datasets, the method achieves state-of-the-art performance in self-supervised surround-view depth estimation, demonstrating that explicitly modeling geometric consistency improves both accuracy and robustness in depth prediction.
📝 Abstract
Accurate surround-view depth estimation provides a competitive alternative to laser-based sensors and is essential for 3D scene understanding in autonomous driving. While prior studies have proposed various approaches that primarily enforce cross-view constraints at the photometric level, few explicitly exploit the rich geometric structure inherent in both monocular and surround-view settings. In this work, we propose GeoSurDepth, a framework that leverages geometric consistency as the primary cue for surround-view depth estimation. Concretely, we use vision foundation models both as a source of pseudo geometric priors and as a tool for enhancing feature representations, guiding the network to maintain surface-normal consistency in 3D space and to regularize object- and texture-consistent depth estimation in 2D. In addition, we introduce a novel view synthesis pipeline in which 2D-3D lifting is achieved with dense depth reconstructed via spatial warping, providing additional photometric supervision across temporal and spatial contexts and compensating for the limitations of target-view image reconstruction. Finally, a newly proposed adaptive joint motion learning strategy enables the network to adaptively emphasize informative spatial geometry cues for improved motion reasoning. Extensive experiments on KITTI, DDAD, and nuScenes demonstrate that GeoSurDepth achieves state-of-the-art performance, validating the effectiveness of our approach. Our framework highlights the importance of exploiting geometric coherence and consistency for robust self-supervised depth estimation.