🤖 AI Summary
This work addresses monocular weakly supervised 3D object detection using only 2D bounding box annotations, eliminating reliance on costly 3D ground truth or LiDAR data. Methodologically, it introduces a framework that leverages frozen pre-trained 2D foundation models to generate pseudo-labels for depth and orientation; integrates differentiable geometric projection to enforce 2D–3D spatial consistency; and designs loss functions that require no 3D supervision, enabling implicit knowledge transfer from the external models. The approach unifies pre-trained 2D models, differentiable geometric projection, pseudo-label distillation, and end-to-end optimization. On SUN RGB-D, it surpasses an annotation-time-equalized Cube R-CNN baseline, achieving competitive 3D detection accuracy at a drastically reduced annotation cost. This constitutes an effective paradigm for monocular 3D detection trained entirely under 2D supervision, without any 3D ground truth during training.
📝 Abstract
Monocular 3D object detection is an essential task in computer vision, with applications in robotics and virtual reality. However, 3D object detectors are typically trained in a fully supervised way, relying extensively on 3D labeled data, which is labor-intensive and costly to annotate. This work focuses on weakly supervised 3D detection to reduce data needs, using a monocular method that relies on a single-camera system rather than expensive LiDAR sensors or multi-camera setups. We propose a general model, Weak Cube R-CNN, which predicts objects in 3D at inference time while requiring only 2D box annotations for training, exploiting the relationship between 3D cubes and their 2D projections. Our method uses pre-trained frozen foundation 2D models to estimate depth and orientation on the training set, and treats these estimates as pseudo-ground truths during training. We design loss functions that avoid 3D labels by incorporating information from these external models into the loss, aiming to implicitly transfer knowledge from large foundation 2D models without access to 3D bounding box annotations. Experimental results on the SUN RGB-D dataset show improved accuracy compared to an annotation-time-equalized Cube R-CNN baseline. While not precise enough for centimetre-level measurements, this method provides a strong foundation for further research.
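The core geometric idea — supervising a 3D cube with only its 2D box annotation — can be illustrated with a pinhole projection. The sketch below is not taken from the paper; the function names, parameter layout, and L1 box loss are illustrative assumptions. It projects the eight corners of a yaw-rotated cuboid into the image, takes the enclosing 2D box, and compares it to an annotated 2D box, the kind of 2D–3D consistency signal the method optimizes:

```python
import math

def project_cube_to_2d_box(center, dims, yaw, fx, fy, cx, cy):
    """Project a 3D cuboid (camera coordinates) to its enclosing 2D box.

    center: (x, y, z) cuboid centre in metres, z > 0 (in front of the camera)
    dims:   (w, h, l) cuboid extents in metres
    yaw:    rotation about the camera's y-axis, in radians
    fx, fy, cx, cy: pinhole camera intrinsics
    Returns (x_min, y_min, x_max, y_max) in pixels.
    """
    x0, y0, z0 = center
    w, h, l = dims
    c, s = math.cos(yaw), math.sin(yaw)
    us, vs = [], []
    for dx in (-w / 2, w / 2):
        for dy in (-h / 2, h / 2):
            for dz in (-l / 2, l / 2):
                # rotate the corner offset about y, then translate to the centre
                x = x0 + c * dx + s * dz
                y = y0 + dy
                z = z0 - s * dx + c * dz
                # pinhole projection onto the image plane
                us.append(fx * x / z + cx)
                vs.append(fy * y / z + cy)
    return min(us), min(vs), max(us), max(vs)

def box_l1_loss(pred_box, gt_box):
    """L1 discrepancy between the projected box and a 2D annotation."""
    return sum(abs(p - g) for p, g in zip(pred_box, gt_box))
```

In a differentiable framework the same corner arithmetic would run on autograd tensors, so gradients of the 2D box loss flow back to the predicted 3D centre, dimensions, and yaw without any 3D labels.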