Dimensions underlying the representational alignment of deep neural networks with humans

📅 2024-06-27
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This study compares representations in humans and deep neural networks (DNNs) by identifying latent representational dimensions that underlie the same behavior in both systems, rather than relying on global, scalar alignment measures. Applied to humans and a DNN model of natural images, the framework yields a low-dimensional DNN embedding comprising both visual and semantic dimensions. Unlike humans, the DNN is dominated by visual over semantic properties, indicating divergent strategies for representing images. While in-silico experiments suggest the DNN dimensions are interpretable, a direct comparison with human representations reveals substantial differences in how the two systems process images. By making representations directly comparable, the approach highlights key challenges for representational alignment and offers a means of improving comparability between humans and AI.

📝 Abstract
Determining the similarities and differences between humans and artificial intelligence (AI) is an important goal both in computational cognitive neuroscience and machine learning, promising a deeper understanding of human cognition and safer, more reliable AI systems. Much previous work comparing representations in humans and AI has relied on global, scalar measures to quantify their alignment. However, without explicit hypotheses, these measures only inform us about the degree of alignment, not the factors that determine it. To address this challenge, we propose a generic framework to compare human and AI representations, based on identifying latent representational dimensions underlying the same behavior in both domains. Applying this framework to humans and a deep neural network (DNN) model of natural images revealed a low-dimensional DNN embedding of both visual and semantic dimensions. In contrast to humans, DNNs exhibited a clear dominance of visual over semantic properties, indicating divergent strategies for representing images. While in-silico experiments showed seemingly consistent interpretability of DNN dimensions, a direct comparison between human and DNN representations revealed substantial differences in how they process images. By making representations directly comparable, our results reveal important challenges for representational alignment and offer a means for improving their comparability.
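The abstract contrasts the proposed dimension-wise framework with the global, scalar alignment measures used in much prior work. As a point of reference, a common scalar measure is a representational-similarity score: build a pairwise dissimilarity matrix (RDM) over the same stimuli for each system, then correlate the matrices' upper triangles. The sketch below is a generic illustration of that baseline, not the paper's method; the embeddings, dimensions, and noise level are hypothetical.

```python
import numpy as np

def rdm(embeddings: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the embedding vectors of each pair of stimuli (rows)."""
    return 1.0 - np.corrcoef(embeddings)

def rsa_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Scalar alignment: correlate the upper triangles of the two RDMs.
    This yields a single number, saying nothing about *which* dimensions
    drive (mis)alignment -- the limitation the paper addresses."""
    iu = np.triu_indices(emb_a.shape[0], k=1)
    return float(np.corrcoef(rdm(emb_a)[iu], rdm(emb_b)[iu])[0, 1])

rng = np.random.default_rng(0)
human = rng.normal(size=(20, 8))                    # hypothetical: 20 stimuli x 8 dims
dnn = human + rng.normal(scale=0.5, size=(20, 8))   # noisy copy -> partial alignment
print(f"RSA score: {rsa_score(human, dnn):.2f}")
```

A high score here still cannot tell us whether the systems agree on visual or on semantic properties, which is why the paper maps both systems into a shared, behaviorally derived dimensional space instead.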
Problem

Research questions and friction points this paper is trying to address.

Human-AI Comparison
Visual-Semantic Processing
Deep Neural Networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep Neural Networks
Human-AI Comparison
Visual-Semantic Processing
F. Mahner
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
Lukas Muttenthaler
TU Berlin & Google DeepMind
Machine Learning · Representation Learning · AI Alignment · Computer Vision · Cognitive Science
Umut Güçlü
Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
M. Hebart
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Medicine, Justus Liebig University, Giessen, Germany; Center for Mind, Brain and Behavior, Universities of Marburg, Giessen, and Darmstadt, Germany