🤖 AI Summary
This work addresses the limited multi-view spatial reasoning capability of existing vision-language models, which prevents embodied agents from achieving cross-view understanding and manipulation in 3D environments. To bridge this gap, the authors introduce XVR, a large-scale dataset systematically constructed for cross-view relational reasoning. Built from 18,000 3D scenes and 70,000 robot trajectories, XVR provides 100,000 visual question-answer samples that train models on cross-view correspondence, spatial relation verification, and target localization. Fine-tuning vision-language models on XVR substantially improves multi-view reasoning on the MindCube and RoboSpatial benchmarks, and integrating the fine-tuned models into an end-to-end Vision-Language-Action framework measurably raises robotic task success rates on the RoboCasa platform.
📝 Abstract
Vision-language models (VLMs) have achieved impressive results on single-view vision tasks, but lack the multi-view spatial reasoning capabilities essential for embodied AI systems to understand 3D environments and manipulate objects across different viewpoints. In this work, we introduce Cross-View Relations (XVR), a large-scale dataset designed to teach VLMs spatial reasoning across multiple views. XVR comprises 100K vision question-answer samples derived from 18K diverse 3D scenes and 70K robotic manipulation trajectories, spanning three fundamental spatial reasoning tasks: Correspondence (matching objects across views), Verification (validating spatial relationships), and Localization (identifying object positions). VLMs fine-tuned on XVR achieve substantial improvements on established multi-view and robotic spatial reasoning benchmarks (MindCube and RoboSpatial). When integrated as backbones in Vision-Language-Action models, XVR-trained representations improve success rates on RoboCasa. Our results demonstrate that explicit training on cross-view spatial relations significantly enhances multi-view reasoning and transfers effectively to real-world robotic manipulation.
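To make the three task types concrete, the sketch below shows what one XVR sample per task could plausibly look like. The schema, field names, image paths, and question-answer pairs are illustrative assumptions; the abstract specifies only the task categories and dataset scale, not the record format.

```python
# Hypothetical sketch of XVR-style VQA samples -- the actual dataset
# schema is not described in the abstract, so all fields are assumptions.
from dataclasses import dataclass


@dataclass
class XVRSample:
    """One cross-view vision question-answer sample (assumed schema)."""
    task: str          # "correspondence" | "verification" | "localization"
    images: list[str]  # paths to two or more views of the same scene
    question: str
    answer: str


samples = [
    # Correspondence: match an object seen in one view to its appearance in another.
    XVRSample(
        task="correspondence",
        images=["scene_0001/view_a.png", "scene_0001/view_b.png"],
        question="Which object in view B corresponds to the red mug in view A?",
        answer="The mug at the left edge of the counter.",
    ),
    # Verification: check whether a stated spatial relation holds from another viewpoint.
    XVRSample(
        task="verification",
        images=["scene_0002/view_a.png", "scene_0002/view_b.png"],
        question="From view B, is the bowl still to the left of the kettle?",
        answer="No, from view B the bowl is to the right of the kettle.",
    ),
    # Localization: identify where an object from one view appears in another.
    XVRSample(
        task="localization",
        images=["scene_0003/view_a.png", "scene_0003/view_b.png"],
        question="Where does the apple from view A appear in view B?",
        answer="Near the bottom-right corner, partially occluded by the pan.",
    ),
]

for s in samples:
    print(f"[{s.task}] {s.question} -> {s.answer}")
```

In this reading, each sample pairs multiple renders of the same 3D scene with a question whose answer requires relating the views, which is what would distinguish XVR from single-view VQA corpora.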