🤖 AI Summary
This work introduces Auto-Vocabulary 3D Object Detection (AV3DOD), the first end-to-end open-vocabulary 3D detection framework that eliminates reliance on manually predefined categories for both detection and semantic naming. Methodologically, it pioneers the auto-vocabulary paradigm: a 2D vision-language model (VLM) generates image captions that guide pseudo-3D bounding box construction and semantic vocabulary expansion, and a new Semantic Score (SS) metric quantitatively evaluates the semantic fidelity of the generated class names. The technical pipeline integrates multimodal alignment, semantics-enhanced feature-space learning, and caption-guided pseudo-label generation. On ScanNetV2 and SUN RGB-D, AV3DOD establishes new state-of-the-art (SOTA) results in both localization accuracy (mAP) and semantic quality (SS): on ScanNetV2 it outperforms CoDA by 3.48 overall mAP and achieves a 24.5% relative improvement in SS, demonstrating strong generalization without human-annotated category priors.
📝 Abstract
Open-vocabulary 3D object detection methods can localize 3D boxes for classes unseen during training. Despite the name, existing methods rely on user-specified classes at both training and inference time. We propose to study Auto-Vocabulary 3D Object Detection (AV3DOD), where class names are generated automatically for the detected objects without any user input. To this end, we introduce the Semantic Score (SS) to evaluate the quality of the generated class names. We then develop a novel framework, AV3DOD, which leverages 2D vision-language models (VLMs) to generate rich semantic candidates through image captioning, pseudo-3D box generation, and feature-space semantics expansion. AV3DOD achieves state-of-the-art (SOTA) performance in both localization (mAP) and semantic quality (SS) on the ScanNetV2 and SUN RGB-D datasets. Notably, it surpasses the previous SOTA, CoDA, by 3.48 overall mAP and attains a 24.5% relative improvement in SS on ScanNetV2.
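The exact definition of the Semantic Score is not given in this summary. As a rough illustration only, metrics of this kind typically score each automatically generated class name against its matched ground-truth name via text-embedding similarity. The sketch below is hypothetical: `embed` is a toy bag-of-characters stand-in for a real text encoder (e.g., a CLIP text encoder), and a one-to-one matching between generated and ground-truth names is assumed.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder text encoder: normalized bag-of-characters counts.
    # A real SS implementation would use a learned text encoder instead.
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def semantic_score(generated: list[str], ground_truth: list[str]) -> float:
    # Hypothetical SS: mean cosine similarity between each generated
    # class name and its matched ground-truth class name.
    sims = [float(embed(g) @ embed(t)) for g, t in zip(generated, ground_truth)]
    return sum(sims) / len(sims)

# Identical names score 1.0; unrelated names score lower.
print(semantic_score(["sofa chair", "table"], ["chair", "table"]))
```

A higher score would indicate that the auto-generated vocabulary stays semantically faithful to the objects actually present, which is the property the SS metric is introduced to measure.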