🤖 AI Summary
This work addresses monocular 3D occupancy prediction without 3D annotations, proposing the first fully 3D-supervision-free training paradigm. Methodologically, it decouples zero-shot semantic and relative-depth representations from a vision foundation model (VFM), leverages temporal consistency for self-supervised depth calibration, and jointly projects semantic and depth cues into 3D space to generate weak supervision. It further enhances geometric consistency via scale-offset optimisation and NeRF-assisted novel-view synthesis. The core contribution is the first integration of vision-language model (VLM) zero-shot semantics with VFM-derived relative depth, enabling metric-scale depth recovery solely from label-free temporal constraints. On nuScenes, the method achieves state-of-the-art voxel occupancy prediction, surpassing the previous best by 3.34% mIoU; on SemanticKITTI, it clearly outperforms unsupervised and weakly supervised baselines, demonstrating strong generalisation and practical utility.
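The scale-offset calibration mentioned above rests on the fact that relative depth is an affine transform of metric depth. As a minimal sketch (not the paper's implementation), assume we have a relative-depth map from a VFM and sparse pseudo-metric anchors, e.g. triangulated from temporally adjacent frames; the scale `s` and offset `t` then admit a closed-form least-squares fit:

```python
import numpy as np

def fit_scale_offset(rel_depth, anchor_depth, mask):
    """Closed-form least-squares fit of metric ~= s * rel + t.

    rel_depth:    (H, W) relative depth from a VFM (hypothetical input)
    anchor_depth: (H, W) pseudo-metric depth anchors, e.g. triangulated
                  from temporal correspondences (assumption for illustration)
    mask:         (H, W) bool, pixels where anchor_depth is valid
    """
    x = rel_depth[mask].ravel()
    y = anchor_depth[mask].ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)   # design matrix [x, 1]
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s, t

# Toy check: anchors generated by a known affine map are recovered exactly.
rng = np.random.default_rng(0)
rel = rng.uniform(0.1, 1.0, size=(4, 4))
metric = 20.0 * rel + 1.5
mask = np.ones_like(rel, dtype=bool)
s, t = fit_scale_offset(rel, metric, mask)
print(round(s, 3), round(t, 3))  # -> 20.0 1.5
```

In the paper the parameters are instead optimised through temporal consistency (novel-view synthesis) rather than fitted against anchors, but the affine relation being recovered is the same.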
📝 Abstract
Estimating the 3D world from 2D monocular images is a fundamental yet challenging task, owing to the labour-intensive nature of 3D annotations. To simplify label acquisition, this work proposes a novel approach that bridges 2D vision foundation models (VFMs) with 3D tasks by decoupling 3D supervision into an ensemble of image-level primitives, e.g., semantic and geometric components. As a key motivator, we leverage the zero-shot capabilities of vision-language models for image semantics. However, monocular metric depth estimation is notoriously ill-posed (multiple distinct 3D scenes can produce identical 2D projections), so directly inferring metric depth from a monocular image in a zero-shot manner is unreliable. In contrast, 2D VFMs provide a promising source of relative depth, which in theory aligns with metric depth under an appropriate scale and offset. We therefore adapt the relative depth derived from VFMs into metric depth by optimising the scale and offset through temporal consistency, realised as novel-view synthesis, without access to ground-truth metric depth. Finally, we project the semantics into 3D space using the reconstructed metric depth, thereby providing 3D supervision. Extensive experiments on nuScenes and SemanticKITTI demonstrate the effectiveness of our framework; for instance, the proposed method surpasses the current state-of-the-art by 3.34% mIoU on nuScenes for voxel occupancy prediction.
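The final projection step, lifting per-pixel semantics into 3D with the reconstructed metric depth, can be sketched with standard pinhole back-projection. This is an illustrative example under assumed inputs (function and variable names are ours, not the paper's API): a dense metric-depth map, a per-pixel class map, and camera intrinsics `K`.

```python
import numpy as np

def lift_semantics_to_3d(depth, sem, K):
    """Back-project per-pixel semantic labels into camera-frame 3D points.

    depth: (H, W) metric depth; sem: (H, W) integer class ids;
    K: (3, 3) pinhole intrinsics. Illustrative sketch, not the paper's code.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T        # one ray direction per pixel (z = 1)
    pts = rays * depth.reshape(-1, 1)      # scale rays by metric depth
    return pts, sem.reshape(-1)            # labelled point cloud

# Toy usage: a flat scene 10 m away, all pixels labelled class 0.
K = np.array([[500.0, 0.0, 32.0], [0.0, 500.0, 24.0], [0.0, 0.0, 1.0]])
depth = np.full((48, 64), 10.0)
sem = np.zeros((48, 64), dtype=np.int64)
pts, labels = lift_semantics_to_3d(depth, sem, K)
print(pts.shape, labels.shape)  # -> (3072, 3) (3072,)
```

Voxelising such a labelled point cloud would then yield the weak 3D occupancy supervision the abstract describes.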