I Speak and You Find: Robust 3D Visual Grounding with Noisy and Ambiguous Speech Inputs

📅 2025-06-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing 3D visual grounding methods rely heavily on accurate ASR-generated text prompts, which makes them fragile under real-world speech conditions (accents, background noise, variable speaking rates) that induce transcription errors and semantic ambiguities. To address this, we propose the first end-to-end framework for robust speech-driven 3D visual grounding. Our approach introduces a Speech Complementary Module that models phoneme-level audio–text similarity, and a Contrastive Complementary Module that explicitly aligns erroneous ASR outputs with the original speech features, substantially reducing dependence on ASR accuracy. By unifying speech-signal modeling, phoneme-aware representation learning, and a 3D visual grounding backbone, the framework enables joint optimization. Evaluated on SpeechRefer and SpeechNr3D, our method achieves over a 12% absolute improvement in Recall@1, demonstrating, for the first time, the feasibility of high-precision 3D object localization under noisy speech conditions.
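The phoneme-level audio–text similarity described above can be sketched as a cosine-similarity matrix between speech-frame features and transcript-token features, with a per-token score pooled over frames. This is an illustrative assumption of how such a module could score tokens, not the paper's exact formulation; the shapes, the max-pooling choice, and the function name `phoneme_similarity_scores` are all hypothetical.

```python
import numpy as np

def phoneme_similarity_scores(speech_emb, text_emb):
    """Score each transcript token by its best match to any speech frame.

    speech_emb: (T, d) phoneme-level speech features (T frames).
    text_emb:   (N, d) token embeddings from the ASR transcript.
    Returns a (N,) array of speech-grounded confidence scores in [-1, 1].
    Shapes and pooling are illustrative, not the paper's exact design.
    """
    # L2-normalize so dot products are cosine similarities
    s = speech_emb / np.linalg.norm(speech_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sim = t @ s.T                # (N, T) token-to-frame similarities
    return sim.max(axis=1)       # per-token complementary score
```

A token whose transcription is wrong but phonetically close to the spoken word would still receive a high score here, which is the intuition behind using such scores to complement error-prone text.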

📝 Abstract
Existing 3D visual grounding methods rely on precise text prompts to locate objects within 3D scenes. Speech, as a natural and intuitive modality, offers a promising alternative. Real-world speech inputs, however, often suffer from transcription errors due to accents, background noise, and varying speech rates, limiting the applicability of existing 3DVG methods. To address these challenges, we propose **SpeechRefer**, a novel 3DVG framework designed to maintain performance in the presence of noisy and ambiguous speech-to-text transcriptions. SpeechRefer integrates seamlessly with existing 3DVG models and introduces two key innovations. First, the Speech Complementary Module captures acoustic similarities between phonetically related words and highlights subtle distinctions between them, generating complementary proposal scores from the speech signal. This reduces dependence on potentially erroneous transcriptions. Second, the Contrastive Complementary Module employs contrastive learning to align erroneous text features with the corresponding speech features, ensuring robust performance even when transcription errors dominate. Extensive experiments on the SpeechRefer and SpeechNr3D datasets demonstrate that SpeechRefer improves the performance of existing 3DVG methods by a large margin, highlighting its potential to bridge the gap between noisy speech inputs and reliable 3DVG and enabling more intuitive and practical multimodal systems.
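The contrastive alignment of erroneous text features with speech features can be sketched with a standard symmetric InfoNCE-style loss, where each transcript feature is pulled toward the speech feature of the same utterance and pushed away from the other utterances in the batch. This is a generic sketch under that assumption; the paper's exact loss, temperature, and feature dimensions may differ, and `info_nce_loss` is a hypothetical name.

```python
import numpy as np

def info_nce_loss(text_feats, speech_feats, temperature=0.07):
    """Symmetric InfoNCE loss between paired text and speech features.

    text_feats, speech_feats: (B, d) batches where row i of each matrix
    comes from the same utterance. A generic contrastive sketch, not
    necessarily the paper's exact objective.
    """
    # Cosine similarities via L2-normalized features
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    s = speech_feats / np.linalg.norm(speech_feats, axis=1, keepdims=True)
    logits = (t @ s.T) / temperature       # (B, B) similarity logits
    labels = np.arange(len(logits))        # matched pairs on the diagonal

    def xent(l):
        # Cross-entropy of each row against its diagonal target
        l = l - l.max(axis=1, keepdims=True)   # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the text-to-speech and speech-to-text directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss drives a mistranscribed utterance's text feature toward its own speech feature, which is how the alignment can stay reliable even when the transcription itself is wrong.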
Problem

Research questions and friction points this paper is trying to address.

Addresses noisy and ambiguous speech inputs in 3D visual grounding
Reduces reliance on error-prone speech-to-text transcriptions
Enhances robustness of 3DVG methods with phonetic and contrastive learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speech Complementary Module exploits acoustic similarities between phonetically related words
Contrastive Complementary Module aligns erroneous text features with speech features
Integrates with existing 3DVG models for robust performance