SpaceMind: Camera-Guided Modality Fusion for Spatial Reasoning in Vision-Language Models

📅 2025-11-28
🤖 AI Summary
Existing large vision-language models (VLMs) exhibit limited capability in 3D spatial reasoning—such as distance estimation, size comparison, and cross-view consistency—and prevailing approaches often rely on auxiliary 3D inputs or superficial geometric fusion. To address this, we propose a camera-guided modality fusion mechanism that, for the first time, treats camera parameters as active spatial priors to steer reasoning. Our method explicitly models deep interaction between RGB features and geometric priors via geometric importance weighting and gated fusion. We adopt a dual-encoder architecture—comprising a VGGT-based spatial encoder and an InternViT-based 2D encoder—and incorporate camera-conditioned bias, query-agnostic weight allocation, and camera-embedding-gated feature alignment. Evaluated on three major spatial reasoning benchmarks—VSI-Bench, SQA3D, and SPBench—our approach consistently outperforms both open-source and closed-source state-of-the-art methods, demonstrating significant improvement in 3D spatial understanding without requiring explicit 3D inputs.

📝 Abstract
Large vision-language models (VLMs) show strong multimodal understanding but still struggle with 3D spatial reasoning, such as distance estimation, size comparison, and cross-view consistency. Existing 3D-aware methods either depend on auxiliary 3D information or enhance RGB-only VLMs with geometry encoders through shallow feature fusion. We propose SpaceMind, a multimodal large language model explicitly designed for spatial reasoning solely from RGB inputs. The model adopts a dual-encoder architecture, integrating VGGT as a spatial understanding encoder and InternViT as a 2D visual encoder. The key idea is to treat the camera representation as an active guiding modality rather than passive metadata. Specifically, SpaceMind introduces a lightweight Camera-Guided Modality Fusion module before the language model to replace shallow fusion. It applies camera-conditioned biasing to spatial tokens, assigns query-independent weights reflecting their geometric importance, and uses the camera embedding to gate the fused representation. Empirically, SpaceMind establishes new state-of-the-art results on VSI-Bench, SQA3D and SPBench, surpassing both open and proprietary systems on VSI-Bench and SPBench by large margins and achieving state-of-the-art performance on SQA3D. These results demonstrate that camera-guided modality fusion is an effective and practical inductive bias for equipping VLMs with genuinely spatially grounded intelligence. We will release code and model checkpoints to support future research.
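The three fusion steps named in the abstract (camera-conditioned biasing of spatial tokens, query-independent geometric-importance weights, and camera-embedding gating of the fused representation) can be sketched as a minimal NumPy toy example. All shapes, projection matrices, and function names below are illustrative assumptions, not the paper's released implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def camera_guided_fusion(rgb_tokens, spatial_tokens, cam_embed,
                         W_bias, w_imp, W_gate):
    """Hypothetical sketch of camera-guided modality fusion.

    rgb_tokens     : (n, d) features from the 2D encoder (e.g. InternViT)
    spatial_tokens : (n, d) features from the spatial encoder (e.g. VGGT)
    cam_embed      : (c,)   camera representation used as a guiding modality
    W_bias, w_imp, W_gate : assumed learned projections for illustration
    """
    # (1) camera-conditioned bias applied to the spatial tokens
    biased = spatial_tokens + cam_embed @ W_bias        # (n, d)
    # (2) query-independent weights reflecting geometric importance
    alpha = sigmoid(biased @ w_imp)                     # (n, 1)
    fused = rgb_tokens + alpha * biased                 # (n, d)
    # (3) camera embedding gates the fused representation
    gate = sigmoid(cam_embed @ W_gate)                  # (d,)
    return gate * fused                                 # (n, d)
```

In this sketch the camera embedding actively shapes all three steps, which is the sense in which the camera is treated as a guiding modality rather than passive metadata.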
Problem

Research questions and friction points this paper is trying to address.

Addressing 3D spatial reasoning limitations in vision-language models
Developing camera-guided fusion for spatial understanding from RGB inputs
Enhancing distance estimation and cross-view consistency without 3D data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Camera-guided modality fusion for spatial reasoning
Dual-encoder architecture with VGGT and InternViT
Camera-conditioned biasing and geometric importance weighting
Ruosen Zhao
Huawei
Zhikang Zhang
Huawei
Jialei Xu
Huawei
Jiahao Chang
The Chinese University of Hong Kong, Shenzhen
Computer Vision, Computer Graphics
Dong Chen
Huawei
Lingyun Li
Huawei
Weijian Sun
Huawei
Zizhuang Wei
Peking University
Computer Vision, 3D Modeling