🤖 AI Summary
This work addresses the performance bottlenecks of open-vocabulary 3D instance segmentation (OV-3DIS) in indoor scenes: weak concept generalization and high false-positive rates. The proposed two-stage framework first generates high-quality instance proposals via 3D tracking-based aggregation, then classifies them using object-centric semantic representations from Alpha-CLIP, which takes the object mask as an alpha channel, together with a standardized maximum similarity (SMS) score that suppresses false detections. An iterative merging and deduplication strategy further removes overlapping or partial proposals. The method achieves state-of-the-art AP and AR on ScanNet200 and S3DIS, surpassing existing open-vocabulary approaches and even an end-to-end closed-vocabulary method, showing that open-vocabulary 3D instance segmentation can match or exceed closed-set performance.
📝 Abstract
Unlike closed-vocabulary 3D instance segmentation, which is often trained end-to-end, open-vocabulary 3D instance segmentation (OV-3DIS) often leverages vision-language models (VLMs) to generate 3D instance proposals and classify them. While various concepts have been proposed in existing research, we observe that these individual concepts are not mutually exclusive but complementary. In this paper, we propose a new state-of-the-art solution for OV-3DIS by carefully designing a recipe to combine these concepts and refining them to address key challenges. Our solution follows a two-stage scheme: 3D proposal generation and instance classification. We employ robust 3D tracking-based proposal aggregation to generate 3D proposals and remove overlapping or partial proposals by iterative merging/removal. For the classification stage, we replace the standard CLIP model with Alpha-CLIP, which incorporates object masks as an alpha channel to reduce background noise and obtain object-centric representations. Additionally, we introduce the standardized maximum similarity (SMS) score to normalize text-to-proposal similarity, effectively filtering out false positives and boosting precision. Our framework achieves state-of-the-art performance on ScanNet200 and S3DIS across all AP and AR metrics, even surpassing an end-to-end closed-vocabulary method.
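The two refinement steps in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: `merge_proposals` is a hypothetical greedy routine that iteratively unions 3D proposal masks whose IoU exceeds a threshold (the threshold value is an assumption), and `sms_scores` assumes one plausible form of standardization, z-scoring each proposal's text-similarity vector before taking the maximum; the paper's exact normalization axis and constants may differ.

```python
import numpy as np

def iou(mask_a, mask_b):
    # IoU between two boolean point masks over the same point cloud.
    union = np.logical_or(mask_a, mask_b).sum()
    inter = np.logical_and(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

def merge_proposals(masks, merge_thr=0.8):
    # Greedy iterative merging (hypothetical threshold): union any pair
    # of proposals whose IoU exceeds merge_thr; repeat until stable.
    masks = [m.copy() for m in masks]
    changed = True
    while changed:
        changed = False
        result = []
        while masks:
            current = masks.pop(0)
            remaining = []
            for other in masks:
                if iou(current, other) > merge_thr:
                    current = np.logical_or(current, other)  # absorb overlap
                    changed = True
                else:
                    remaining.append(other)
            masks = remaining
            result.append(current)
        masks = result
    return masks

def sms_scores(sim):
    # sim: (num_proposals, num_classes) cosine similarities from the VLM.
    # Assumed form: z-score each row, then take the max as the proposal's
    # confidence; low SMS suggests a false positive to filter out.
    mu = sim.mean(axis=1, keepdims=True)
    sigma = sim.std(axis=1, keepdims=True) + 1e-8
    z = (sim - mu) / sigma
    return z.max(axis=1), z.argmax(axis=1)
```

Because the z-score is a monotone transform within each row, the predicted label is unchanged; only the confidence used for thresholding is standardized across proposals.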