🤖 AI Summary
This study investigates how well the representations of pretrained neural networks align with human visual learning in few-shot generalisation. Method: We evaluate the generalisation behaviour of 86 pretrained models on two tasks in which humans learn continuous relationships and categories of natural images, introducing a quantitative measure of how consistently model representations track human learning trajectories across tasks. The approach combines representational similarity analysis, intrinsic dimensionality estimation, cognitive-model-based evaluation of human choices, and an assessment of multimodal contrastive training. Results: Contrastive training on multimodal (text and image) data emerges as a strong predictor of human few-shot generalisation among currently available models, outperforming simpler indicators such as parameter count, with training dataset size a further core determinant of alignment. Pretrained models can therefore serve as effective sources of representations for cognitive models. The paradigm and modelling approach provide a generalisable, naturalistic framework for quantifying alignment between neural networks and human cognition.
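As one illustration of the kind of alignment measure described above, the sketch below scores a model's representations against human behaviour with representational similarity analysis (RSA). This is not the authors' code: the array shapes, the correlation-distance RDM, and the use of Spearman correlation are illustrative assumptions.

```python
# Hedged sketch: RSA-style alignment between model features and human data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(features: np.ndarray) -> np.ndarray:
    """Condensed representational dissimilarity matrix from an
    (n_stimuli, n_features) activation matrix, using correlation distance."""
    return pdist(features, metric="correlation")

def rsa_alignment(model_features: np.ndarray, human_dissimilarity: np.ndarray) -> float:
    """Spearman correlation between the model RDM and a human-derived
    dissimilarity vector (e.g., from choices or similarity judgements)."""
    rho, _ = spearmanr(rdm(model_features), human_dissimilarity)
    return rho

# Example with random placeholders for 50 stimuli:
model_feats = np.random.randn(50, 512)        # e.g., penultimate-layer activations
human_dissim = pdist(np.random.randn(50, 2))  # stand-in for behavioural dissimilarities
print(rsa_alignment(model_feats, human_dissim))
```

A higher correlation indicates that stimuli the model represents as similar are also treated as similar by participants; the same scheme can be repeated across the 86 models to rank them by alignment.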
📝 Abstract
Humans represent scenes and objects in rich feature spaces, carrying information that allows us to generalise about category memberships and abstract functions with few examples. What determines whether a neural network model generalises like a human? We tested how well the representations of 86 pretrained neural network models mapped onto human learning trajectories across two tasks in which humans had to learn continuous relationships and categories of natural images. In these tasks, both human participants and neural networks successfully identified the relevant stimulus features within a few trials, demonstrating effective generalisation. We found that, while training dataset size was a core determinant of alignment with human choices, contrastive training with multimodal data (text and imagery) was a common feature of currently publicly available models that predicted human generalisation. The intrinsic dimensionality of representations affected alignment differently across model types. Lastly, we tested three sets of human-aligned representations and found no consistent improvement in predictive accuracy over the baselines. In conclusion, pretrained neural networks can serve as sources of representations for cognitive models, as they appear to capture some fundamental aspects of cognition that transfer across tasks. Both our paradigms and our modelling approach offer a novel way to quantify alignment between neural networks and humans and to extend cognitive science into more naturalistic domains.
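The abstract notes that the intrinsic dimensionality of a model's representations relates to alignment. The sketch below shows one common proxy for intrinsic dimensionality, the PCA participation ratio; this is an assumption for illustration, as the paper may use a different estimator (e.g., nearest-neighbour-based methods such as TwoNN).

```python
# Hedged sketch: intrinsic dimensionality via the PCA participation ratio.
import numpy as np

def participation_ratio(features: np.ndarray) -> float:
    """Participation ratio of the covariance eigenspectrum:
    (sum of eigenvalues)^2 / sum of squared eigenvalues.
    Higher values mean variance is spread over more dimensions."""
    centred = features - features.mean(axis=0, keepdims=True)
    cov = np.cov(centred, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)
    eigvals = np.clip(eigvals, 0.0, None)  # guard against tiny negative values
    return eigvals.sum() ** 2 / np.sum(eigvals ** 2)

# Example: 200 stimuli embedded in a 512-dimensional feature space
feats = np.random.randn(200, 512)
print(participation_ratio(feats))
```

Computed per model, such a score can then be related to each model's behavioural alignment to ask whether higher- or lower-dimensional representations predict more human-like generalisation.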