Multimodal Language Models Cannot Spot Spatial Inconsistencies

📅 2026-04-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current multimodal large language models struggle to recognize three-dimensional spatial inconsistencies across different viewpoints of the same scene. This work introduces a novel task: given a pair of images depicting the same scene from two distinct viewpoints, detect objects that violate 3D motion consistency. To facilitate research on this task, we develop a scalable synthetic framework capable of generating multiview image pairs with controllable spatial inconsistencies, and establish an evaluation protocol integrating human comparative experiments with model assessments. This study presents the first systematic evaluation of multimodal large language models’ ability to reason about 3D spatial consistency, revealing significant limitations in their understanding of physical world dynamics—state-of-the-art models perform substantially worse than humans and exhibit unstable performance across varying scene attributes.
📝 Abstract
Spatial consistency is a fundamental property of the visual world and a key requirement for models that aim to understand physical reality. Despite recent advances, multimodal large language models (MLLMs) often struggle to reason about 3D geometry across multiple views. Rather than asking models to describe scene attributes, we introduce a more challenging task: given two views of the same scene, identify the object that violates 3D motion consistency. We propose a simple and scalable method for generating realistic, spatially inconsistent image pairs from multi-view scenes, enabling systematic evaluation of this capability. Our results show that state-of-the-art MLLMs significantly underperform human observers and exhibit substantial variability across different scene attributes, revealing a fragile and incomplete understanding of 3D structure. We hope our findings underscore the need for approaches that develop a more deeply grounded understanding of the physical world.
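The core idea behind the task can be illustrated with basic multi-view geometry: an object that occupies the same 3D position in both views reprojects consistently into each camera, while an object whose 3D position is perturbed in one view produces an image-plane discrepancy. The sketch below is a minimal illustration under assumed pinhole-camera parameters (the intrinsics, poses, and point coordinates are hypothetical, not taken from the paper), not the authors' actual data-generation pipeline.

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X into a pinhole camera with intrinsics K and pose (R, t)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# Two hypothetical pinhole cameras viewing the same scene.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
R1, t1 = np.eye(3), np.zeros(3)
R2, t2 = np.eye(3), np.array([-0.5, 0.0, 0.0])  # second camera shifted along x

# A consistent object keeps one 3D position across views; an inconsistent
# object is displaced in view 2 only, violating 3D motion consistency.
X = np.array([0.2, 0.1, 4.0])
X_shifted = X + np.array([0.4, 0.0, 0.0])

u2_consistent = project(K, R2, t2, X)            # where the object should appear
u2_inconsistent = project(K, R2, t2, X_shifted)  # where it actually appears

# The reprojection discrepancy in view 2 exposes the inconsistency.
err = np.linalg.norm(u2_inconsistent - u2_consistent)
print(f"image-plane discrepancy in view 2: {err:.1f} px")
```

With these example numbers the displaced object lands 50 px away from its geometrically consistent location, which is the kind of cross-view violation the benchmark asks models (and humans) to detect.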
Problem

Research questions and friction points this paper is trying to address.

spatial inconsistency
multimodal language models
3D geometry
motion consistency
multi-view scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

spatial inconsistency
multimodal language models
3D motion consistency
multi-view reasoning
physical understanding
🔎 Similar Papers
No similar papers found.
Om Khangaonkar
University of California, Davis
Hadi J. Rad
Shell
Hamed Pirsiavash
Associate Professor at University of California, Davis
Computer Vision · Machine Learning