Large Vision Models Can Solve Mental Rotation Problems

📅 2025-09-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether modern vision transformers—specifically ViT, CLIP, DINOv2, and DINOv3—exhibit human-like spatial reasoning capabilities in mental rotation tasks, probing their implicit geometric understanding. Method: We employ hierarchical representation probing across three stimulus modalities—block figures, textual stimuli, and natural object images—under systematically varied complexity conditions (rotation angle, occlusion level). Contribution/Results: Self-supervised models—especially DINOv2 and DINOv3—significantly outperform supervised counterparts; discriminative representations peak at intermediate network layers; and performance degrades monotonically with increasing rotation angle and occlusion, mirroring human cognitive constraints. This work provides the first empirical evidence that large vision models implicitly encode mental rotation mechanisms, demonstrating that self-supervised pretraining fosters geometric structure learning. These findings establish critical empirical grounding for modeling spatial cognition in AI systems.
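The layer-wise probing protocol described above can be sketched with synthetic data. The snippet below is an illustrative stand-in, not the authors' code: it fabricates per-layer embeddings whose signal-to-noise ratio peaks at an intermediate depth (mimicking the reported finding), then fits a closed-form least-squares linear probe at each layer and measures held-out classification accuracy. All names, dimensions, and the SNR profile are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-layer ViT embeddings: signal-to-noise
# is assumed to peak at intermediate depth, mirroring the paper's
# finding that discriminative representations peak mid-network.
n_layers, dim, n_classes = 12, 64, 5
snr = np.exp(-((np.arange(n_layers) - 6) ** 2) / 8.0)  # peaks at layer 6
prototypes = rng.normal(size=(n_classes, dim))

def make_split(n):
    """Sample labels and per-layer features: prototype scaled by SNR plus noise."""
    y = rng.integers(0, n_classes, size=n)
    noise = rng.normal(size=(n_layers, n, dim))
    X = snr[:, None, None] * prototypes[y] + noise  # shape (layers, n, dim)
    return X, y

Xtr, ytr = make_split(500)
Xte, yte = make_split(200)
onehot = np.eye(n_classes)[ytr]

accs = []
for layer in range(n_layers):
    # Closed-form least-squares linear probe, fit independently per layer.
    W, *_ = np.linalg.lstsq(Xtr[layer], onehot, rcond=None)
    pred = (Xte[layer] @ W).argmax(axis=1)
    accs.append(float((pred == yte).mean()))

best = int(np.argmax(accs))
print(f"best layer: {best}, per-layer accuracy: {[round(a, 2) for a in accs]}")
```

With this construction, probe accuracy rises toward the middle layers and falls again at the final layers, which is the qualitative pattern the paper reports for self-supervised ViTs; real experiments would swap the synthetic features for actual DINOv2/DINOv3 activations.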

📝 Abstract
Mental rotation is a key test of spatial reasoning in humans and has been central to understanding how perception supports cognition. Despite the success of modern vision transformers, it is still unclear how well these models develop similar abilities. In this work, we present a systematic evaluation of ViT, CLIP, DINOv2, and DINOv3 across a range of mental-rotation tasks, from simple block structures similar to those used by Shepard and Metzler to study human cognition, to more complex block figures, three types of text, and photo-realistic objects. By probing model representations layer by layer, we examine where and how these networks succeed. We find that i) self-supervised ViTs capture geometric structure better than supervised ViTs; ii) intermediate layers perform better than final layers; and iii) task difficulty increases with rotation complexity and occlusion, mirroring human reaction times and suggesting similar constraints in embedding-space representations.
Problem

Research questions and friction points this paper is trying to address.

Evaluating vision models on mental rotation tasks
Comparing self-supervised and supervised ViT performance
Analyzing layer-wise geometric representation capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised ViTs capture geometric structure better
Intermediate layers outperform final layers
Task difficulty mirrors human reaction times
Sebastian Ray Mason
Technical University of Denmark, Section for Cognitive Systems
Anders Gjølbye
Technical University of Denmark
Explainability, Deep Learning, EEG
Phillip Chavarria Højbjerg
Technical University of Denmark, Section for Cognitive Systems
Lenka Tětková
Technical University of Denmark, Section for Cognitive Systems
Lars Kai Hansen
Professor, Cognitive Systems, DTU Compute, Technical University of Denmark
Machine learning, AI, neuroimaging, cognitive systems, signal processing