Retrieving Objects from 3D Scenes with Box-Guided Open-Vocabulary Instance Segmentation

📅 2025-12-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address key bottlenecks in open-vocabulary 3D instance retrieval—poor generalization to rare or unseen categories, slow inference, and heavy reliance on large pre-trained models such as CLIP and SAM—this paper proposes a bounding-box-guided cross-modal retrieval paradigm. A 2D open-set detector (e.g., Grounding DINO) generates bounding boxes that guide a lightweight, semantically aligned fusion of RGB images and point clouds, jointly driving end-to-end 3D instance segmentation and text-query-based retrieval. Crucially, the method eliminates the dependence on SAM and CLIP by combining a point cloud segmentation network with a dedicated cross-modal feature alignment module, preserving open-vocabulary generalization while running in real time. Experiments show substantial gains in recall and localization accuracy on long-tail and unseen categories at real-time inference speed, and the method is more robust and easier to deploy than Open-YOLO 3D.
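As a rough illustration of the box-guided lifting step described above (a minimal sketch under assumed conventions, not the paper's actual implementation), one can select the point-cloud region whose image projection falls inside a 2D box produced by the open-vocabulary detector; the function name, argument layout, and pinhole-projection assumption here are all hypothetical:

```python
import numpy as np

def points_in_box_frustum(points_cam, box_2d, intrinsics):
    """Mask of 3D points (camera frame) projecting inside a 2D box.

    Hypothetical helper illustrating box-guided lifting of a 2D
    detection into a 3D point subset; not the paper's method.

    points_cam: (N, 3) array of points in the camera coordinate frame
    box_2d:     (x1, y1, x2, y2) pixel box from a 2D detector
    intrinsics: (3, 3) camera matrix K
    """
    x1, y1, x2, y2 = box_2d
    in_front = points_cam[:, 2] > 0          # keep points ahead of the camera
    # Pinhole projection: [u*z, v*z, z]^T = K @ [X, Y, Z]^T
    uvz = (intrinsics @ points_cam.T).T
    z = np.clip(uvz[:, 2], 1e-6, None)       # avoid division by zero
    u, v = uvz[:, 0] / z, uvz[:, 1] / z
    inside = (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
    return in_front & inside
```

The resulting point subset could then be handed to a point cloud segmentation network to produce the final instance mask, as the summary describes.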

📝 Abstract
Locating and retrieving objects from scene-level point clouds is a challenging problem with broad applications in robotics and augmented reality. This task is commonly formulated as open-vocabulary 3D instance segmentation. Although recent methods demonstrate strong performance, they depend heavily on SAM and CLIP to generate and classify 3D instance masks from images accompanying the point cloud, leading to substantial computational overhead and slow processing that limit their deployment in real-world settings. Open-YOLO 3D alleviates this issue by using a real-time 2D detector to classify class-agnostic masks produced directly from the point cloud by a pretrained 3D segmenter, eliminating the need for SAM and CLIP and significantly reducing inference time. However, Open-YOLO 3D often fails to generalize to object categories that appear infrequently in the 3D training data. In this paper, we propose a method that generates 3D instance masks for novel objects from RGB images guided by a 2D open-vocabulary detector. Our approach inherits the 2D detector's ability to recognize novel objects while maintaining efficient classification, enabling fast and accurate retrieval of rare instances from open-ended text queries. Our code will be made available at https://github.com/ndkhanh360/BoxOVIS.
Problem

Research questions and friction points this paper is trying to address.

Existing methods rely heavily on SAM and CLIP, causing substantial computational overhead and slow inference.
Open-YOLO 3D removes that dependency but generalizes poorly to categories that appear infrequently in 3D training data.
Retrieving rare or novel instances from open-ended text queries therefore remains slow or inaccurate.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a 2D open-vocabulary detector to recognize novel objects.
Generates 3D instance masks for those objects efficiently from RGB images.
Enables fast retrieval from text queries without SAM or CLIP.