🤖 AI Summary
This paper addresses the underexplored and challenging problem of 3D visual grounding (3DVG) using spoken instructions rather than text to localize target objects in point clouds. We propose the first unified audio–point cloud framework for this task. Methodologically, we decompose the task into two complementary components: Object Mention Detection, a multi-label classification head that explicitly identifies which objects the speech refers to, and an Audio-Guided Attention module that sharpens fine-grained speech–scene alignment; a lightweight audio representation module additionally leverages ASR features for robust spatial-semantic modeling of speech. Evaluated on mainstream 3DVG benchmarks, including ScanRefer and Sr3D, our approach achieves state-of-the-art performance in audio-based grounding, with localization accuracy on par with text-based methods. This work constitutes the first systematic validation of spoken language as a viable and effective modality for 3D visual understanding, demonstrating its practical potential in real-world multimodal interaction scenarios.
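As a rough illustration of the audio representation idea above, the sketch below (PyTorch, not the paper's code) fuses a pooled speech embedding with an ASR-derived transcript embedding through projection and concatenation; the dimensions, layer choices, and fusion strategy are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class AudioRepresentation(nn.Module):
    """Hypothetical lightweight module joining speech and ASR features."""
    def __init__(self, speech_dim=768, asr_dim=300, out_dim=256):
        super().__init__()
        # Project pooled speech features (e.g., from a wav2vec2-style encoder)
        # and a pooled embedding of the ASR transcript into a shared space.
        self.speech_proj = nn.Linear(speech_dim, out_dim)
        self.asr_proj = nn.Linear(asr_dim, out_dim)
        self.fuse = nn.Sequential(
            nn.Linear(2 * out_dim, out_dim),
            nn.ReLU(),
            nn.LayerNorm(out_dim),
        )

    def forward(self, speech_feat, asr_feat):
        # speech_feat: (B, speech_dim), asr_feat: (B, asr_dim)
        joint = torch.cat([self.speech_proj(speech_feat),
                           self.asr_proj(asr_feat)], dim=-1)
        return self.fuse(joint)  # (B, out_dim) joint audio representation

audio_enc = AudioRepresentation()
rep = audio_enc(torch.randn(2, 768), torch.randn(2, 300))  # -> shape (2, 256)
```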
📝 Abstract
3D Visual Grounding (3DVG) involves localizing target objects in 3D point clouds based on natural language. While prior work has made strides using textual descriptions, leveraging spoken language, known as Audio-based 3D Visual Grounding, remains underexplored and challenging. Motivated by advances in automatic speech recognition (ASR) and speech representation learning, we propose Audio-3DVG, a simple yet effective framework that integrates audio and spatial information for enhanced grounding. Rather than treating speech as a monolithic input, we decompose the task into two complementary components. First, we introduce Object Mention Detection, a multi-label classification task that explicitly identifies which objects are referred to in the audio, enabling more structured audio-scene reasoning. Second, we propose an Audio-Guided Attention module that captures interactions between candidate objects and relational speech cues, improving target discrimination in cluttered scenes. To support benchmarking, we synthesize audio descriptions for standard 3DVG datasets, including ScanRefer, Sr3D, and Nr3D. Experimental results demonstrate that Audio-3DVG not only achieves new state-of-the-art performance in audio-based grounding, but also competes with text-based methods, highlighting the promise of integrating spoken language into 3D vision tasks.
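To make the two components concrete, the hedged sketch below shows one plausible reading of them: Object Mention Detection as a multi-label classification head over object categories, and Audio-Guided Attention as cross-attention from candidate-object features to the audio cue followed by per-object scoring. The layer types, dimensions, and scoring head are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ObjectMentionDetector(nn.Module):
    """Multi-label head: which object categories does the spoken description mention?"""
    def __init__(self, audio_dim=256, num_classes=18):
        super().__init__()
        self.head = nn.Linear(audio_dim, num_classes)

    def forward(self, audio_feat):                    # (B, audio_dim)
        return torch.sigmoid(self.head(audio_feat))   # (B, num_classes) mention probabilities

class AudioGuidedAttention(nn.Module):
    """Cross-attention from candidate-object features to the audio cue, then per-object scoring."""
    def __init__(self, obj_dim=256, audio_dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(obj_dim, num_heads,
                                          kdim=audio_dim, vdim=audio_dim, batch_first=True)
        self.score = nn.Linear(obj_dim, 1)

    def forward(self, obj_feats, audio_feat):
        # obj_feats:  (B, N, obj_dim) features of N candidate objects in the scene
        # audio_feat: (B, audio_dim) pooled audio representation
        audio_seq = audio_feat.unsqueeze(1)                        # (B, 1, audio_dim)
        attended, _ = self.attn(obj_feats, audio_seq, audio_seq)   # (B, N, obj_dim)
        return self.score(attended).squeeze(-1)                    # (B, N) grounding logits

objs, audio = torch.randn(2, 32, 256), torch.randn(2, 256)
mentions = ObjectMentionDetector()(audio)      # (2, 18) which categories are mentioned
logits = AudioGuidedAttention()(objs, audio)   # argmax over the N candidates gives the target
```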