🤖 AI Summary
This paper addresses the challenging problem of 3D object detection under extremely sparse supervision, e.g., only 1–5 annotated 3D bounding boxes per class. To tackle this, we propose SP3D, a cross-modal semantic guidance framework grounded in Large Multimodal Models (LMMs). Our method introduces three key components: (1) a Confident Points Semantic Transfer (CPST) module that generates accurate cross-modal semantic prompts via boundary-constrained center cluster selection, improving pseudo-label localization accuracy; (2) a Dynamic Cluster Pseudo-label Generation (DCPG) module, paired with a Distribution Shape score (DS score), that generates and filters high-quality pseudo-supervision signals; and (3) enhanced feature discriminability via point-cloud semantic transfer and multi-scale neighborhood geometric modeling. Evaluated on the KITTI dataset and the Waymo Open Dataset, our approach significantly outperforms existing sparsely supervised methods and achieves state-of-the-art performance even in the zero-shot setting.
📝 Abstract
Recently, sparsely-supervised 3D object detection has gained great attention, achieving performance close to fully-supervised 3D detectors while requiring only a few annotated instances. Nevertheless, these methods face challenges when accurate labels are extremely scarce. In this paper, we propose a boosting strategy, termed SP3D, that explicitly utilizes cross-modal semantic prompts generated by Large Multimodal Models (LMMs) to equip the 3D detector with robust feature discrimination capability under sparse annotation settings. Specifically, we first develop a Confident Points Semantic Transfer (CPST) module that generates accurate cross-modal semantic prompts through boundary-constrained center cluster selection. Based on these accurate semantic prompts, which we treat as seed points, we introduce a Dynamic Cluster Pseudo-label Generation (DCPG) module to yield pseudo-supervision signals from the geometric shape of multi-scale neighbor points. Additionally, we design a Distribution Shape score (DS score) that selects high-quality supervision signals for the initial training of the 3D detector. Experiments on the KITTI dataset and the Waymo Open Dataset (WOD) validate that SP3D can enhance the performance of sparsely supervised detectors by a large margin under meager labeling conditions. Moreover, we verified SP3D in the zero-shot setting, where its performance exceeds that of state-of-the-art methods. The code is available at https://github.com/xmuqimingxia/SP3D.
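The seed-point → multi-scale clustering → quality-scoring pipeline described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names (`dynamic_cluster_pseudo_labels`, `distribution_shape_score`), the fixed radius set, the axis-aligned box fitting, and the density-times-spread scoring proxy are hypothetical stand-ins, not the paper's actual DCPG or DS-score formulations.

```python
import numpy as np

def distribution_shape_score(cluster, box):
    # Hypothetical DS-score proxy: reward clusters whose points are dense
    # and spread through the box volume (the paper's scoring may differ).
    size = box[3:]
    volume = float(np.prod(np.maximum(size, 1e-6)))
    density = len(cluster) / volume
    spread = float(cluster.std(axis=0).mean())
    return density * spread

def dynamic_cluster_pseudo_labels(points, seeds, radii=(0.5, 1.0, 2.0),
                                  min_points=5):
    # Sketch of DCPG-style pseudo-label generation: for each semantic seed
    # point, gather neighbor points at several scales, fit an axis-aligned
    # box (center x/y/z + size w/l/h), and keep the best-scoring scale.
    boxes = []
    for seed in seeds:
        best = None
        for r in radii:
            mask = np.linalg.norm(points - seed, axis=1) < r
            cluster = points[mask]
            if len(cluster) < min_points:
                continue  # too few points to trust a box at this scale
            lo, hi = cluster.min(axis=0), cluster.max(axis=0)
            box = np.concatenate([(lo + hi) / 2.0, hi - lo])
            score = distribution_shape_score(cluster, box)
            if best is None or score > best[0]:
                best = (score, box)
        if best is not None:
            boxes.append(best)
    # Downstream, only boxes whose score clears a threshold would be kept
    # as pseudo-labels for the initial detector training.
    return boxes
```

In this sketch the DS-score plays the same filtering role described in the abstract: boxes fitted at a poorly chosen scale (e.g., swallowing background points) receive low scores and can be discarded before training.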