🤖 AI Summary
This study quantifies the representational alignment between humans and deep neural networks (DNNs) in visual versus semantic information processing. It proposes a behaviorally grounded, low-dimensional latent-structure modeling framework that maps human and DNN representations into a shared, behavior-driven, directly comparable space. Using representational similarity analysis (RSA), cross-modal embeddings (PCA/CCA), and controlled in-silico experiments, the study finds that DNNs exhibit a strong visual bias and underrepresent semantic dimensions, challenging alignment measures that reduce similarity to a single scalar. The framework enables interpretable, dimension-wise disentanglement and quantitative comparison of representations across systems, revealing fundamental differences in image-processing strategies. These results offer a principled pathway for evaluating and improving the representational alignment of AI systems with human cognition.
📝 Abstract
Determining the similarities and differences between humans and artificial intelligence (AI) is an important goal both in computational cognitive neuroscience and machine learning, promising a deeper understanding of human cognition and safer, more reliable AI systems. Much previous work comparing representations in humans and AI has relied on global, scalar measures to quantify their alignment. However, without explicit hypotheses, these measures only inform us about the degree of alignment, not the factors that determine it. To address this challenge, we propose a generic framework to compare human and AI representations, based on identifying latent representational dimensions underlying the same behavior in both domains. Applying this framework to humans and a deep neural network (DNN) model of natural images revealed a low-dimensional DNN embedding of both visual and semantic dimensions. In contrast to humans, DNNs exhibited a clear dominance of visual over semantic properties, indicating divergent strategies for representing images. While in-silico experiments showed seemingly consistent interpretability of DNN dimensions, a direct comparison between human and DNN representations revealed substantial differences in how they process images. By making representations directly comparable, our results reveal important challenges for representational alignment and offer a means for improving their comparability.
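The comparison of human and DNN representations described above builds on representational similarity analysis (RSA). A minimal sketch follows; the data are synthetic and the specific choices (Pearson-distance RDMs, Spearman comparison of upper triangles) are common RSA conventions assumed here, not necessarily the exact variant used in the study.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the feature vectors of each pair of stimuli (rows)."""
    return 1.0 - np.corrcoef(features)

def rsa_score(feats_a, feats_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices(feats_a.shape[0], k=1)
    return spearmanr(rdm(feats_a)[iu], rdm(feats_b)[iu]).correlation

rng = np.random.default_rng(0)
human_feats = rng.normal(size=(20, 50))                  # hypothetical: 20 stimuli x 50 dims
dnn_feats = human_feats + 0.5 * rng.normal(size=(20, 50))  # noisy copy of the human features
unrelated = rng.normal(size=(20, 50))                    # independent control

print(rsa_score(human_feats, dnn_feats))   # clearly positive for related systems
print(rsa_score(human_feats, unrelated))   # near zero for unrelated systems
```

A scalar RSA score like this captures the *degree* of alignment the abstract refers to; the paper's contribution is to go beyond it by identifying which latent dimensions drive or break that alignment.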