Learning Multi-View Spatial Reasoning from Cross-View Relations

📅 2026-03-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited multi-view spatial reasoning capability of existing vision-language models, which hinders embodied agents from achieving cross-view understanding and manipulation in 3D environments. To bridge this gap, the authors introduce XVR, the first large-scale dataset systematically constructed for cross-view relational reasoning, comprising 18,000 3D scenes and 70,000 robot trajectories, from which 100,000 visual question-answering samples are generated to train models on tasks such as cross-view correspondence, spatial relation verification, and target localization. By fine-tuning vision-language models and integrating them into an end-to-end Vision-Language-Action framework, the proposed approach significantly enhances multi-view reasoning performance on the MindCube and RoboSpatial benchmarks. When deployed within the RoboCasa platform, it demonstrably improves robotic task success rates.
📝 Abstract
Vision-language models (VLMs) have achieved impressive results on single-view vision tasks, but lack the multi-view spatial reasoning capabilities essential for embodied AI systems to understand 3D environments and manipulate objects across different viewpoints. In this work, we introduce Cross-View Relations (XVR), a large-scale dataset designed to teach VLMs spatial reasoning across multiple views. XVR comprises 100K vision-question-answer samples derived from 18K diverse 3D scenes and 70K robotic manipulation trajectories, spanning three fundamental spatial reasoning tasks: Correspondence (matching objects across views), Verification (validating spatial relationships), and Localization (identifying object positions). VLMs fine-tuned on XVR achieve substantial improvements on established multi-view and robotic spatial reasoning benchmarks (MindCube and RoboSpatial). When integrated as backbones in Vision-Language-Action models, XVR-trained representations improve success rates on RoboCasa. Our results demonstrate that explicit training on cross-view spatial relations significantly enhances multi-view reasoning and transfers effectively to real-world robotic manipulation.
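The abstract's three task types (Correspondence, Verification, Localization) can be made concrete with a minimal, purely illustrative sketch of what one multi-view VQA record might look like. The paper's actual data schema is not given here, so every field and file name below is an assumption, not the dataset's real format:

```python
# Illustrative sketch only: field names and structure are assumptions based on
# the abstract's description, not the published XVR schema.

TASKS = {"correspondence", "verification", "localization"}

def make_sample(task, images, question, answer):
    """Build one multi-view VQA record: several viewpoint images plus a QA pair."""
    if task not in TASKS:
        raise ValueError(f"unknown task: {task}")
    if len(images) < 2:
        raise ValueError("cross-view samples need at least two viewpoints")
    return {
        "task": task,           # which of the three reasoning tasks
        "images": list(images), # images of the same scene from different views
        "question": question,
        "answer": answer,
    }

# Hypothetical cross-view correspondence sample (file names invented).
sample = make_sample(
    "correspondence",
    ["view_front.png", "view_side.png"],
    "Which object in view 2 corresponds to the red mug in view 1?",
    "the mug left of the sink",
)
```

Records of this shape map directly onto supervised fine-tuning of a VLM: the images and question form the prompt, and the answer is the training target.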
Problem

Research questions and friction points this paper is trying to address.

multi-view spatial reasoning
vision-language models
embodied AI
3D environments
cross-view relations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-View Relations
multi-view spatial reasoning
vision-language models
robotic manipulation
3D scene understanding
Suchae Jeong
KAIST
Jaehwi Song
Hanyang University
Haeone Lee
KAIST
Hanna Kim
KAIST
Cybersecurity, Data mining, Social networks
Jian Kim
Yonsei University
Dongjun Lee
Korea Advanced Institute of Science and Technology
Artificial Intelligence
Dong Kyu Shin
Seoul National University
Changyeon Kim
PhD student, KAIST
Reinforcement Learning, Machine Learning, Recommendation System
Dongyoon Hahm
KAIST
Woogyeol Jin
KAIST
Juheon Choi
KAIST
Kimin Lee
KAIST
Artificial Intelligence, Reinforcement Learning, Deep Learning