🤖 AI Summary
This work addresses the challenges of over-merging or fragmentation in open-vocabulary 3D object detection caused by reliance solely on geometric consistency. The authors propose Group3D, a novel framework that introduces, for the first time, multimodal large language model (MLLM)-driven semantic compatibility constraints during 3D instance formation. By generating scene-adaptive vocabulary and organizing it into semantically compatible groups, Group3D guides the merging of geometric fragments in a manner consistent with both geometry and semantics. The method integrates multi-view RGB inputs to enable unified processing in both pose-known and pose-unknown scenarios. Evaluated on ScanNet and ARKitScenes, Group3D achieves state-of-the-art performance in open-vocabulary 3D detection and demonstrates strong zero-shot generalization capabilities.
📝 Abstract
Open-vocabulary 3D object detection aims to localize and recognize objects beyond a fixed training taxonomy. In multi-view RGB settings, recent approaches often decouple geometry-based instance construction from semantic labeling, generating class-agnostic fragments and assigning open-vocabulary categories post hoc. While flexible, such decoupling leaves instance construction governed primarily by geometric consistency, without semantic constraints during merging. When geometric evidence is view-dependent and incomplete, this geometry-only merging can lead to irreversible association errors, including over-merging of distinct objects or fragmentation of a single instance. We propose Group3D, a multi-view open-vocabulary 3D detection framework that integrates semantic constraints directly into the instance construction process. Group3D maintains a scene-adaptive vocabulary derived from a multimodal large language model (MLLM) and organizes it into semantic compatibility groups that encode plausible cross-view category equivalence. These groups act as merge-time constraints: 3D fragments are associated only when they satisfy both semantic compatibility and geometric consistency. This semantically gated merging mitigates geometry-driven over-merging while absorbing multi-view category variability. Group3D supports both pose-known and pose-free settings, relying only on RGB observations. Experiments on ScanNet and ARKitScenes demonstrate that Group3D achieves state-of-the-art performance in multi-view open-vocabulary 3D detection, while exhibiting strong generalization in zero-shot scenarios. The project page is available at https://ubin108.github.io/Group3D/.
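The semantically gated merging described above can be illustrated with a small sketch: fragments are clustered with union-find, and two fragments are merged only when they pass both a geometric overlap test and a semantic compatibility test (their labels co-occur in some compatibility group). This is a hypothetical illustration of the gating idea, not the paper's implementation; the box representation, IoU threshold, and all function names below are assumptions.

```python
from itertools import combinations

def box_iou_3d(a, b):
    """IoU of two axis-aligned 3D boxes, each (xmin, ymin, zmin, xmax, ymax, zmax)."""
    inter = 1.0
    for i in range(3):
        lo, hi = max(a[i], b[i]), min(a[i + 3], b[i + 3])
        if hi <= lo:
            return 0.0  # no overlap along this axis
        inter *= hi - lo
    vol = lambda box: (box[3] - box[0]) * (box[4] - box[1]) * (box[5] - box[2])
    return inter / (vol(a) + vol(b) - inter)

def gated_merge(fragments, groups, iou_thresh=0.25):
    """Cluster fragments with union-find; a merge requires BOTH geometric
    overlap (IoU above threshold) AND semantic compatibility (labels share
    a compatibility group). Returns lists of fragment indices per instance."""
    parent = list(range(len(fragments)))

    def find(i):  # union-find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    compatible = lambda x, y: any(x in g and y in g for g in groups)

    for i, j in combinations(range(len(fragments)), 2):
        fi, fj = fragments[i], fragments[j]
        if (box_iou_3d(fi["box"], fj["box"]) >= iou_thresh
                and compatible(fi["label"], fj["label"])):
            parent[find(i)] = find(j)  # semantic gate passed: merge

    clusters = {}
    for i in range(len(fragments)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

For example, two overlapping fragments labeled "sofa" and "couch" merge if a compatibility group contains both names (absorbing cross-view category variability), while an overlapping "refrigerator" fragment stays separate despite passing the geometric test, which is the over-merging case the gate is meant to prevent.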