🤖 AI Summary
This study investigates whether multimodal large language models (MLLMs) possess human-like spatial cognition and systematically characterizes their limitations. Method: We introduce 11Plus-Bench, the first cognitively inspired benchmark for spatial reasoning, which integrates perceptual-complexity quantification with fine-grained process-level annotations, grounded in standardized real-world assessments and expert labeling. We conduct a large-scale evaluation of 14 state-of-the-art MLLMs with direct comparison to human performance. Contribution/Results: Although current MLLMs fall well short of human-level spatial reasoning, they exhibit nascent human-like traits: inference effort scales with task complexity. However, individual model behavior remains inconsistent and difficult to interpret. 11Plus-Bench establishes a new paradigm for modeling and evaluating spatial cognition in MLLMs, offering actionable insights for improving both cognitive fidelity and explainability.
📝 Abstract
In human cognition, spatial reasoning and perception are closely entangled, yet the nature of this interplay remains underexplored in the evaluation of multimodal large language models (MLLMs). While recent MLLMs show impressive reasoning performance, their capacity for human-like spatial cognition remains an open question. In this work, we introduce a systematic evaluation framework to assess the spatial reasoning abilities of state-of-the-art MLLMs relative to human performance. Central to our work is 11Plus-Bench, a high-quality benchmark derived from realistic standardized spatial aptitude tests. 11Plus-Bench also features fine-grained expert annotations of both perceptual complexity and reasoning process, enabling detailed instance-level analysis of model behavior. Through extensive experiments across 14 MLLMs and human evaluation, we find that current MLLMs exhibit early signs of spatial cognition. Despite a large performance gap compared to humans, MLLMs' cognitive profiles resemble those of humans in that cognitive effort correlates strongly with reasoning-related complexity. However, instance-level performance in MLLMs remains largely random, whereas human correctness is highly predictable and shaped by abstract pattern complexity. These findings highlight both the emerging capabilities and the limitations of current MLLMs' spatial reasoning and provide actionable insights for advancing model design.