VIEW2SPACE: Studying Multi-View Visual Reasoning from Sparse Observations

📅 2026-03-17
📈 Citations: 0
Influential Citations: 0
🤖 AI Summary
This work addresses two gaps: existing intelligent systems struggle with cross-view visual reasoning under sparse, discrete viewpoints, and large-scale multi-view datasets annotated with both geometric and semantic information are scarce. To this end, we introduce VIEW2SPACE, the first multi-dimensional benchmark tailored to sparse multi-view reasoning. Built on high-fidelity 3D scenes generated via physics-based simulation, it provides precise per-view metadata and yields a scalable dataset that transfers to real-world settings. The benchmark includes a training split disjoint from the evaluation data, enabling rigorous model evaluation. We further propose Grounded Chain-of-Thought with Visual Evidence, a reasoning approach anchored in visual observations that substantially improves performance on moderately difficult tasks and generalizes better than existing methods in cross-dataset evaluation.
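The card does not publish the benchmark's data schema; as a minimal sketch, assuming a multiple-choice format with per-view camera metadata (all class and field names below are hypothetical), a grounded question-answer sample might be structured like this:

```python
from dataclasses import dataclass

@dataclass
class ViewObservation:
    """One sparse viewpoint of a simulated 3D scene (hypothetical schema)."""
    image_path: str                # rendered RGB frame for this view
    camera_pose: list[float]       # 4x4 world-to-camera extrinsics, row-major
    intrinsics: list[float]        # fx, fy, cx, cy
    visible_object_ids: list[str]  # scene objects at least partially visible

@dataclass
class MultiViewQASample:
    """A grounded question-answer pair over several discrete views."""
    scene_id: str
    views: list[ViewObservation]   # sparse, non-contiguous viewpoints
    question: str                  # e.g. "Which object is left of the lamp in view 1?"
    choices: list[str]             # multiple-choice answer options
    answer_index: int              # index of the correct choice
    difficulty: str = "moderate"   # bucket used for difficulty-aware analysis
```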

📝 Abstract
Multi-view visual reasoning is essential for intelligent systems that must understand complex environments from sparse and discrete viewpoints, yet existing research has largely focused on single-image or temporally dense video settings. In real-world scenarios, reasoning across views requires integrating partial observations without explicit guidance, while collecting large-scale multi-view data with accurate geometric and semantic annotations remains challenging. To address this gap, we leverage physically grounded simulation to construct diverse, high-fidelity 3D scenes with precise per-view metadata, enabling scalable data generation that remains transferable to real-world settings. Building on this simulation engine, we introduce VIEW2SPACE, a multi-dimensional benchmark for sparse multi-view reasoning, together with a scalable, disjoint training split supporting millions of grounded question-answer pairs. Using this benchmark, a comprehensive evaluation of state-of-the-art vision-language and spatial models reveals that multi-view reasoning remains largely unsolved, with most models performing only marginally above random guessing. We further investigate whether training can bridge this gap. Our proposed Grounded Chain-of-Thought with Visual Evidence substantially improves performance under moderate difficulty and generalizes to real-world data, outperforming existing approaches in cross-dataset evaluation. We further conduct difficulty-aware scaling analyses across model size, data scale, reasoning depth, and visibility constraints, indicating that while geometric perception can benefit from scaling under sufficient visibility, deep compositional reasoning across sparse views remains a fundamental challenge.
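The exact prompt behind Grounded Chain-of-Thought with Visual Evidence is not given here; below is a minimal sketch of how such a prompt could be assembled, assuming the hypothetical sample schema above and an OpenAI-style multimodal chat message format (the grounding instruction is a paraphrase, not the paper's wording):

```python
def build_grounded_cot_prompt(sample: MultiViewQASample) -> list[dict]:
    """Interleave the sparse views with an instruction that asks the model
    to cite the view(s) backing each reasoning step before answering."""
    content = []
    for i, view in enumerate(sample.views):
        content.append({"type": "text", "text": f"View {i}:"})
        content.append({"type": "image_url",
                        "image_url": {"url": f"file://{view.image_path}"}})
    content.append({
        "type": "text",
        "text": (
            f"{sample.question}\n"
            f"Options: {', '.join(sample.choices)}\n"
            "Think step by step. For each step, name the view(s) that provide "
            "the visual evidence, then state the final option."
        ),
    })
    return [{"role": "user", "content": content}]
```

Forcing a per-step citation of views is one plausible way to operationalize "grounded in visual evidence" for sparse, discrete viewpoints, since the model cannot fall back on temporal continuity between frames.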
Problem

Research questions and friction points this paper is trying to address.

multi-view visual reasoning
sparse observations
3D scene understanding
vision-language models
geometric perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-view reasoning
sparse observations
grounded simulation
visual question answering
chain-of-thought reasoning