🤖 AI Summary
Multimodal large language models exhibit significant deficiencies in dynamic spatial reasoning, e.g., inferring how egocentric or object motion changes spatial relationships. Existing work predominantly focuses on static scenes, and annotating real-world motion data is prohibitively expensive.
Method: We introduce SAT, a synthetic dataset covering both static and dynamic spatial reasoning (175K QA pairs, 20K scenes), alongside a small real-world dynamic-image test set. We define dynamic spatial capability as reasoning about the effect of egocentric and object motion on spatial relationships, and show that simulation with perfect annotations is more effective for training than existing approaches that pseudo-annotate real images. Using instruction tuning and multi-stage data mixing, we adapt open architectures (e.g., LLaVA).
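As an illustration of what a dynamic spatial QA pair could look like, here is a minimal sketch; the field names and values are our own assumptions for exposition, not the released dataset's schema:

```python
# Hypothetical sketch of a SAT-style dynamic spatial QA record.
# All field names and values are illustrative, not the dataset's actual schema.

sample = {
    "scene_id": "scene_00042",
    "images": ["frame_before.png", "frame_after.png"],  # views before/after motion
    "motion": "camera rotates 90 degrees to the left",  # egocentric motion description
    "question": "After the camera motion, is the chair to your left or right?",
    "options": ["left", "right"],
    "answer": "right",
}

def is_valid_qa(record: dict) -> bool:
    """Basic sanity check: the answer must be one of the listed options."""
    return record["answer"] in record["options"]

print(is_valid_qa(sample))  # True
```

Pairing a pre-motion and post-motion view with a motion description is one natural way to pose questions whose answers change as a result of egocentric or object movement.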
Results: On real-world dynamic evaluation and long-video spatial reasoning, SAT training improves LLaVA-13B by an average of 11% and LLaVA-Video-7B by an average of 8%, surpassing several proprietary models and validating effective simulation-to-reality transfer.
📝 Abstract
Reasoning about motion and space is a fundamental cognitive capability required by many real-world applications. While many studies highlight that multimodal language models (MLMs) struggle to reason about space, they focus only on static spatial relationships rather than dynamic awareness of motion and space, i.e., reasoning about the effect of egocentric and object motions on spatial relationships. Manually annotating such object and camera movements is expensive. Hence, we introduce SAT, a simulated spatial aptitude training dataset comprising both static and dynamic spatial reasoning across 175K question-answer (QA) pairs and 20K scenes. Complementing this, we also construct a small (150 image-QAs) yet challenging dynamic spatial test set using real-world images. Leveraging our SAT datasets and 6 existing static spatial benchmarks, we systematically investigate what improves both static and dynamic spatial awareness. Our results reveal that simulations are surprisingly effective at imparting spatial aptitude to MLMs that translates to real images. We show that perfect annotations in simulation are more effective than existing approaches of pseudo-annotating real images. For instance, SAT training improves a LLaVA-13B model by an average 11% and a LLaVA-Video-7B model by an average 8% on multiple spatial benchmarks, including our real-image dynamic test set and spatial reasoning on long videos -- even outperforming some large proprietary models. While reasoning over static relationships improves with synthetic training data, there is still considerable room for improvement on dynamic reasoning questions.