🤖 AI Summary
Monocular 3D object detection suffers from the prohibitive cost of 3D annotation, motivating efficient active learning (AL) strategies. Existing AL methods exhibit two critical limitations: (i) image-level annotation redundancy and (ii) an uncertainty bias that over-selects depth-ambiguous instances while under-sampling nearby objects. To address these, we propose the first instance-level AL framework for monocular 3D detection. Our approach integrates heterogeneous backbones, task-agnostic feature extraction, loss-weight perturbation, and a time-varying bagging strategy to jointly enhance selection diversity and mitigate depth-ambiguity bias. Furthermore, we design an instance-granular information-gain metric for fine-grained sample selection. Evaluated on KITTI, our method matches or surpasses fully supervised AP₃D using only 60% of the annotations, significantly improving annotation efficiency and model generalization.
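To make the instance-level selection idea concrete, here is a minimal toy sketch of scoring individual detected instances by ensemble disagreement and labeling only the top-scoring ones. The depth-variance score, the array shapes, and the function names are illustrative assumptions, not the paper's actual information-gain metric (which is designed to counteract, not reproduce, the depth-ambiguity bias).

```python
import numpy as np

def instance_information_gain(depth_preds):
    """Toy instance-level score: ensemble disagreement on predicted depth.

    depth_preds: shape (n_models, n_instances) — each ensemble member's
    depth estimate for every detected instance. Higher = more informative.
    NOTE: a pure depth-variance score is only an illustration; the paper's
    metric is built to avoid the depth-ambiguity bias this would induce.
    """
    return np.std(depth_preds, axis=0)

def select_instances(depth_preds, budget):
    """Return indices of the `budget` highest-scoring instances."""
    scores = instance_information_gain(depth_preds)
    return np.argsort(scores)[::-1][:budget]

# 3 ensemble members, 3 instances; members disagree most on instance 1.
preds = np.array([[10.1, 35.0, 5.2],
                  [10.3, 42.0, 5.1],
                  [ 9.9, 28.0, 5.3]])
print(select_instances(preds, budget=1))  # → [1]
```

Only the selected instances are sent for annotation; the rest of the image stays unlabeled, which is the source of the budget savings over image-level selection.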
📝 Abstract
Monocular 3D detection relies on just a single camera and is therefore easy to deploy. Yet achieving reliable 3D understanding from monocular images requires substantial annotation, and 3D labels are especially costly. To maximize performance under a constrained labeling budget, it is essential to prioritize annotating the samples expected to deliver the largest performance gains. This prioritization is the focus of active learning. However, we identify two significant limitations in active learning algorithms for monocular 3D object detection. First, previous approaches select entire images, which is inefficient, as non-informative instances contained in the same image must also be labeled. Second, existing methods rely on uncertainty-based selection, which in monocular 3D object detection creates a bias toward depth ambiguity: distant objects are selected, while nearby objects are overlooked.
To address these limitations, we propose IDEAL-M3D, the first instance-level active learning pipeline for monocular 3D detection. For the first time, we demonstrate that an explicitly diverse, fast-to-train ensemble improves diversity-driven active learning for monocular 3D detection. We induce diversity with heterogeneous backbones and task-agnostic features, loss-weight perturbation, and time-dependent bagging. IDEAL-M3D delivers superior performance with significant resource savings: with just 60% of the annotations, we achieve similar or better AP₃D on the KITTI validation and test sets compared to training the same detector on the whole dataset.
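Of the diversity mechanisms listed above, time-dependent bagging is the least standard, so a small sketch may help. The idea shown here is that each ensemble member trains on its own bootstrap sample, and the sample size grows with the active learning round so that early rounds favor diversity and later rounds approach the full labeled pool. The linear 50%→100% schedule and the function signature are assumptions for illustration, not the paper's exact formulation.

```python
import random

def time_dependent_bagging(dataset, round_idx, n_rounds, n_members, seed=0):
    """Sketch: one bootstrap sample per ensemble member, with a sample
    size that grows as active learning rounds progress.

    Early rounds use small, highly diverse subsets; the final round
    samples as many items as the full labeled pool. The linear schedule
    below is an assumed illustration of "time-dependent" bagging.
    """
    rng = random.Random(seed + round_idx)  # reproducible per round
    frac = 0.5 + 0.5 * round_idx / max(1, n_rounds - 1)  # 50% -> 100%
    size = max(1, int(frac * len(dataset)))
    # Sample with replacement, independently for each ensemble member.
    return [[rng.choice(dataset) for _ in range(size)] for _ in range(n_members)]

bags = time_dependent_bagging(list(range(100)), round_idx=0, n_rounds=5, n_members=4)
print(len(bags), len(bags[0]))  # → 4 50
```

Each member would also get a different backbone and perturbed loss weights, so the bags above are only one of the three diversity sources the pipeline combines.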