🤖 AI Summary
Open-vocabulary 3D object detection (OV-3DOD) faces a critical challenge: vision-language models (VLMs) lose scene-level contextual understanding when applied to 3D perception. This paper introduces HCMA, a hierarchical framework that jointly learns local object representations and global scene structure for OV-3DOD, and that achieves promising results even without any 3D annotations. The method comprises three core components: (1) Hierarchical Data Integration (HDI), which builds coarse-to-fine 3D-image-text data from which a VLM extracts object-centric knowledge; (2) Interactive Cross-Modal Alignment (ICMA), which establishes intra-level and inter-level connections between VLM features and geometric-semantic cues from point clouds and images; and (3) Object-Focusing Context Adjustment (OFCA), which refines multi-level features by emphasizing object-related regions. Extensive experiments on ScanNetV2 and SUN RGB-D demonstrate significant improvements over state-of-the-art methods on existing OV-3DOD benchmarks, and show that the approach remains effective even without 3D supervision.
📝 Abstract
Open-vocabulary 3D object detection (OV-3DOD) aims to localize and classify novel objects beyond closed sets. The recent success of vision-language models (VLMs) has demonstrated their remarkable capability to understand open vocabularies. Existing works that leverage VLMs for 3D object detection (3DOD) generally resort to representations that lose the rich scene context required for 3D perception. To address this problem, we propose a hierarchical framework, named HCMA, that simultaneously learns local object and global scene information for OV-3DOD. Specifically, we first design a Hierarchical Data Integration (HDI) approach to obtain coarse-to-fine 3D-image-text data, which is fed into a VLM to extract object-centric knowledge. To facilitate the association of feature hierarchies, we then propose an Interactive Cross-Modal Alignment (ICMA) strategy to establish effective intra-level and inter-level feature connections. To better align features across levels, we further propose an Object-Focusing Context Adjustment (OFCA) module to refine multi-level features by emphasizing object-related information. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods on existing OV-3DOD benchmarks. It also achieves promising OV-3DOD results even without any 3D annotations.
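The core idea of the abstract — aligning coarse-to-fine 3D features with VLM features both within each level (intra-level) and across levels (inter-level) — can be illustrated with a minimal NumPy sketch. Everything below is an illustrative assumption, not the authors' implementation: the level names, feature sizes, the InfoNCE-style intra-level loss, and the cosine-distance inter-level term are all placeholders standing in for HCMA's actual ICMA strategy.

```python
import numpy as np

def l2norm(x):
    """L2-normalize along the last axis (rows of a matrix, or a vector)."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce(a, b, tau=0.07):
    """Symmetric InfoNCE-style loss aligning row i of `a` with row i of `b`."""
    logits = l2norm(a) @ l2norm(b).T / tau      # N x N cosine similarities
    labels = np.arange(len(a))
    def xent(l):                                # softmax cross-entropy, diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
D = 16  # feature dimension (illustrative)
# Hypothetical coarse-to-fine hierarchy (scene -> object -> part), with
# paired features from a 3D branch and a VLM branch at each level.
levels = {"scene": 4, "object": 12, "part": 32}
pc_feats  = {k: rng.normal(size=(n, D)) for k, n in levels.items()}
vlm_feats = {k: rng.normal(size=(n, D)) for k, n in levels.items()}

# Intra-level alignment: match 3D and VLM features at each granularity.
intra = sum(info_nce(pc_feats[k], vlm_feats[k]) for k in levels)

# Inter-level connection (illustrative): pull each level's pooled 3D feature
# toward the next coarser level's pooled VLM feature via cosine distance.
order = ["part", "object", "scene"]
inter = sum(
    1.0 - float(l2norm(pc_feats[f].mean(0)) @ l2norm(vlm_feats[c].mean(0)))
    for f, c in zip(order, order[1:])
)
loss = intra + inter
print(f"alignment loss: {loss:.4f}")
```

In this toy setting, perfectly aligned feature pairs drive the intra-level term toward zero, while mismatched pairs keep it large, which is the behavior a contrastive cross-modal alignment objective relies on.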