🤖 AI Summary
This work investigates the gap between deep neural networks (DNNs) and human performance on 3D visual perspective taking (VPT), a core aspect of embodied cognition. To this end, the authors introduce 3D-PC, a benchmark that directly compares humans and DNNs on 3D VPT through three task categories, all grounded in natural-scene imagery: depth ordering, basic VPT, and strategic VPT. They conduct human behavioral experiments (N=33) and evaluate over 300 models via linear probing and text prompting. Key findings: DNNs match or exceed human performance on depth ordering but perform near chance on basic VPT, and even after fine-tuning they fail to generalize to strategic VPT tasks that require mentally modeling another viewpoint. Despite learning robust 3D structural representations, current large models lack human-like VPT reasoning. 3D-PC systematically exposes this limitation of DNNs in 3D mental modeling, providing a rigorous diagnostic tool and a new foundation for research in embodied cognition and visual reasoning.
📝 Abstract
Visual perspective taking (VPT) is the ability to perceive and reason about the perspectives of others. It is an essential feature of human intelligence, which develops over the first decade of life and requires an ability to process the 3D structure of visual scenes. A growing number of reports have indicated that deep neural networks (DNNs) become capable of analyzing 3D scenes after training on large image datasets. We investigated whether this emergent ability for 3D analysis in DNNs is sufficient for VPT with the 3D perception challenge (3D-PC): a novel benchmark for 3D perception in humans and DNNs. The 3D-PC comprises three 3D-analysis tasks posed within natural scene images: 1. a simple test of object depth order, 2. a basic VPT task (VPT-basic), and 3. another version of VPT (VPT-Strategy) designed to limit the effectiveness of "shortcut" visual strategies. We tested human participants (N=33) and linearly probed or text-prompted over 300 DNNs on the challenge and found that nearly all of the DNNs approached or exceeded human accuracy in analyzing object depth order. Surprisingly, DNN accuracy on this task correlated with their object recognition performance. In contrast, there was an extraordinary gap between DNNs and humans on VPT-basic: humans were nearly perfect, whereas most DNNs were near chance. Fine-tuning DNNs on VPT-basic brought them close to human performance, but, unlike humans, they dropped back to chance when tested on VPT-Strategy. Our challenge demonstrates that the training routines and architectures of today's DNNs are well suited for learning basic 3D properties of scenes and objects but ill suited for reasoning about these properties as humans do. We release our 3D-PC datasets and code to help bridge this gap in 3D perception between humans and machines.
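The abstract mentions that the DNNs were evaluated via linear probing: a single linear readout is trained on frozen features, so the score reflects what the pretrained representation already encodes rather than what fine-tuning can teach. The sketch below illustrates the general technique with synthetic features standing in for a real backbone's outputs; the dimensions, labels, and variable names are illustrative assumptions, not the paper's actual protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for frozen DNN features. In a real probe, these rows
# would be backbone activations for each benchmark image; the backbone
# itself is never updated.
rng = np.random.default_rng(0)
n_images, feat_dim = 1000, 128
features = rng.normal(size=(n_images, feat_dim))

# Hypothetical binary task labels (e.g., "is object A in front of object B?"),
# constructed here to be linearly decodable from the features.
w = rng.normal(size=feat_dim)
labels = (features @ w > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

# The linear probe: one logistic-regression layer on top of frozen features.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.3f}")
```

Under this protocol, near-chance probe accuracy on a task (as reported for VPT-basic) indicates that the frozen representation does not linearly encode the task-relevant information, even when the same features support strong depth-order decoding.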