DSPNet: Dual-vision Scene Perception for Robust 3D Question Answering

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D question answering (QA) methods overly rely on global point-cloud representations while neglecting fine-grained local texture cues from multi-view images; moreover, point-cloud–image cross-modal alignment is highly susceptible to camera-pose noise and occlusion, leading to feature degradation and reduced robustness. To address these limitations, we propose a robust 3D QA framework that synergistically integrates dual visual cues. Our approach introduces three key innovations: (1) Text-guided Multi-view Fusion (TGMF), enabling semantic-aware view selection; (2) Adaptive Dual-vision Perception (ADVP), mitigating feature misalignment caused by pose errors and occlusion; and (3) Multimodal Context-guided Reasoning (MCGR), enhancing joint textual–visual semantic inference. Extensive experiments demonstrate substantial improvements over state-of-the-art methods on both the SQA3D and ScanQA benchmarks. The source code is publicly available.

📝 Abstract
3D Question Answering (3D QA) requires the model to comprehensively understand its situated 3D scene described by the text, then reason about its surrounding environment and answer a question under that situation. However, existing methods usually rely on global scene perception from pure 3D point clouds and overlook the importance of rich local texture details from multi-view images. Moreover, due to the inherent noise in camera poses and complex occlusions, there exist significant feature degradation and reduced feature robustness problems when aligning 3D point clouds with multi-view images. In this paper, we propose a Dual-vision Scene Perception Network (DSPNet) to comprehensively integrate multi-view and point cloud features and improve robustness in 3D QA. Our Text-guided Multi-view Fusion (TGMF) module prioritizes image views that closely match the semantic content of the text. To adaptively fuse back-projected multi-view images with point cloud features, we design the Adaptive Dual-vision Perception (ADVP) module, enhancing 3D scene comprehension. Additionally, our Multimodal Context-guided Reasoning (MCGR) module facilitates robust reasoning by integrating contextual information across visual and linguistic modalities. Experimental results on the SQA3D and ScanQA datasets demonstrate the superiority of our DSPNet. Codes will be available at https://github.com/LZ-CH/DSPNet.
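The abstract's ADVP module adaptively fuses back-projected image features with point cloud features. A minimal sketch of one common way such adaptive fusion is done, a learned per-channel gate that decides how much of each stream to trust (this is an illustrative assumption; `gated_fusion`, `gate_params`, and the gating formula are hypothetical, not taken from the paper):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(point_feat, image_feat, gate_params):
    """Per-channel gate g = sigmoid(w * (p + i) + b) controls how much of the
    back-projected image feature is used; when the gate closes (g -> 0), the
    output falls back to the point-cloud feature, which is robust to
    pose noise and occlusion in the image stream."""
    fused = []
    for p, i, (w, b) in zip(point_feat, image_feat, gate_params):
        g = sigmoid(w * (p + i) + b)
        fused.append(g * i + (1.0 - g) * p)
    return fused

# With a strongly negative bias the gate stays near zero, so the fused
# feature stays close to the point-cloud feature.
p = [0.5, -0.2]
img = [2.0, 3.0]
params = [(0.0, -10.0), (0.0, -10.0)]  # gate ~ 0 for both channels
out = gated_fusion(p, img, params)
```

In a real network the gate parameters would be learned jointly with the rest of the model; the point of the sketch is only the fallback behaviour under unreliable image features.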
Problem

Research questions and friction points this paper is trying to address.

Pure point-cloud perception misses the rich local texture details available in multi-view images.
Camera-pose noise and occlusion degrade features when aligning 3D point clouds with images.
Robust scene comprehension and reasoning in complex 3D environments remains challenging.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Text-guided Multi-view Fusion (TGMF) prioritizes image views that semantically match the text
Adaptive Dual-vision Perception (ADVP) adaptively fuses back-projected multi-view image features with point cloud features
Multimodal Context-guided Reasoning (MCGR) integrates contextual visual and linguistic information for robust reasoning
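The TGMF idea of prioritizing views that match the text can be sketched as similarity-weighted view pooling: score each view embedding against the text embedding and pool with softmax weights. All names here (`text_guided_view_fusion` and the toy embeddings) are hypothetical; the paper's actual module is a learned network, not this hand-rolled pooling:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def text_guided_view_fusion(text_emb, view_embs):
    """Weight each view feature by its similarity to the text embedding,
    then return the weighted sum as the fused view feature."""
    weights = softmax([cosine(text_emb, v) for v in view_embs])
    dim = len(view_embs[0])
    fused = [sum(w * v[i] for w, v in zip(weights, view_embs))
             for i in range(dim)]
    return fused, weights

# The view most aligned with the text receives the largest weight.
text = [1.0, 0.0]
views = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
fused, weights = text_guided_view_fusion(text, views)
```

The key design choice this illustrates is that view selection is soft and question-dependent: a question about an object visible only from one side pushes weight toward the views that show it.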