🤖 AI Summary
This work addresses the limitation of existing 3D visual grounding methods, which are typically confined to sentence-level detection or segmentation and thus fail to leverage the compositional semantics and contextual reasoning inherent in natural language. To enable finer-grained 3D vision-language understanding, we introduce the task of Detailed 3D Referring Expression Segmentation (3D-DRES), which establishes explicit mappings from linguistic phrases to 3D object instances. We pioneer a phrase-to-instance annotation paradigm and construct DetailRefer, a large-scale dataset comprising 54,432 referring expressions. Furthermore, we propose DetailBase, a unified architecture capable of performing both sentence-level and phrase-level segmentation. Experiments demonstrate that models trained on DetailRefer achieve state-of-the-art performance on phrase-level segmentation and significantly outperform prior methods on standard 3D-RES benchmarks.
📝 Abstract
Current 3D visual grounding tasks only perform sentence-level detection or segmentation, which fails to leverage the rich compositional semantics and contextual reasoning within natural language expressions. To address this challenge, we introduce Detailed 3D Referring Expression Segmentation (3D-DRES), a new task that provides phrase-to-3D-instance mappings, aiming to enhance fine-grained 3D vision-language understanding. To support 3D-DRES, we present DetailRefer, a new dataset comprising 54,432 descriptions spanning 11,054 distinct objects. Unlike previous datasets, DetailRefer implements a pioneering phrase-to-instance annotation paradigm in which each referenced noun phrase is explicitly mapped to its corresponding 3D elements. Additionally, we introduce DetailBase, a purposefully streamlined yet effective baseline architecture that supports dual-mode segmentation at both the sentence and phrase levels. Our experimental results demonstrate that models trained on DetailRefer not only excel at phrase-level segmentation but also show surprising improvements on traditional 3D-RES benchmarks.
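To make the phrase-to-instance annotation paradigm concrete, here is a minimal Python sketch of what a DetailRefer-style record could look like, with a sentence-level target (standard 3D-RES) alongside per-phrase instance mappings (3D-DRES). All names and fields (`PhraseSpan`, `ReferringExpression`, `instance_ids`, the example IDs) are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical sketch of a phrase-to-instance annotation record.
# Field names and structure are illustrative assumptions, not the
# actual DetailRefer schema.
from dataclasses import dataclass, field

@dataclass
class PhraseSpan:
    """A noun phrase in the expression, mapped to 3D instance IDs."""
    start: int               # character offset where the phrase begins
    end: int                 # exclusive end offset
    instance_ids: list[int]  # IDs of the 3D instances this phrase refers to

@dataclass
class ReferringExpression:
    text: str                # full referring expression
    target_instance_id: int  # sentence-level target (standard 3D-RES)
    phrases: list[PhraseSpan] = field(default_factory=list)  # phrase-level targets (3D-DRES)

# The sentence-level target is the chair, while each noun phrase
# additionally carries its own instance mapping for phrase-level masks.
expr = ReferringExpression(
    text="the chair next to the wooden table",
    target_instance_id=7,
    phrases=[
        PhraseSpan(start=0, end=9, instance_ids=[7]),    # "the chair"
        PhraseSpan(start=18, end=34, instance_ids=[3]),  # "the wooden table"
    ],
)

for p in expr.phrases:
    print(expr.text[p.start:p.end], "->", p.instance_ids)
```

A record like this would let a single model be supervised in both modes: the whole expression against `target_instance_id`, and each phrase span against its own `instance_ids`.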