🤖 AI Summary
This study investigates whether performance improvements of artificial neural networks (ANNs) on auditory tasks correlate with increased alignment between their internal representations and neural activity in the human auditory cortex. Using voxel-wise and component-wise regression, representational similarity analysis (RSA), and systematic evaluation across six audio tasks from the HEAREval benchmark, we analyze 36 models. Our results show: (1) a strong positive correlation (Pearson's *r* > 0.7) between downstream task performance and fMRI-based representational alignment; (2) self-supervised pretrained models significantly outperform supervised and traditional models in predicting fMRI responses; and (3) brain-aligned representational structure emerges spontaneously early in self-supervised training, without explicit neurobiological optimization. These findings reveal an intrinsic neurocomputational interpretability advantage of self-supervised learning, offering a novel paradigm for developing brain-inspired auditory models grounded in functional correspondence with biological audition.
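The voxel-wise regression analysis mentioned above can be sketched as an encoding model: fit a ridge regression from model activations to each voxel's fMRI response, then score held-out predictions by their correlation with the measured responses. The sketch below uses synthetic data; all shapes, names, and ridge penalties are illustrative assumptions, not the study's actual settings.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Synthetic stand-ins for model activations and fMRI responses to the
# same stimuli (illustrative only, not the study's data).
rng = np.random.default_rng(1)
n_stimuli, n_features, n_voxels = 100, 32, 50
activations = rng.standard_normal((n_stimuli, n_features))
weights = rng.standard_normal((n_features, n_voxels))
fmri = activations @ weights + rng.standard_normal((n_stimuli, n_voxels))

# Fit one ridge regression per voxel (RidgeCV treats the voxels as a
# multi-output target) with the penalty chosen by cross-validation.
train, test = slice(0, 80), slice(80, 100)
encoder = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(activations[train], fmri[train])
pred = encoder.predict(activations[test])

# Score each voxel by the Pearson correlation between predicted and
# measured held-out responses.
voxel_r = [np.corrcoef(pred[:, v], fmri[test][:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxel-wise r = {np.median(voxel_r):.2f}")
```

In this synthetic setup the responses are a noisy linear function of the activations, so the held-out correlations are high by construction; on real fMRI data the per-voxel correlations are what quantify model-to-brain alignment.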
📝 Abstract
Artificial neural networks (ANNs) are increasingly powerful models of brain computation, yet it remains unclear whether improving their task performance also makes their internal representations more similar to brain signals. To address this question in the auditory domain, we quantified the alignment between the internal representations of 36 different audio models and brain activity from two independent fMRI datasets. Using voxel-wise and component-wise regression, and representational similarity analysis (RSA), we found that recent self-supervised audio models with strong performance on diverse downstream tasks are better predictors of auditory cortex activity than older and more specialized models. To assess the quality of the audio representations, we evaluated these models on six auditory tasks from the HEAREval benchmark, spanning music, speech, and environmental sounds. This revealed strong positive Pearson correlations ($r>0.7$) between a model's overall task performance and its alignment with brain representations. Finally, we analyzed the evolution of the similarity between audio and brain representations during the pretraining of EnCodecMAE. We discovered that brain similarity increases progressively and emerges early during pretraining, despite the model not being explicitly optimized for this objective. This suggests that brain-like representations can be an emergent byproduct of learning to reconstruct missing information from naturalistic audio data.
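The RSA comparison described above can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from each system's responses to the same stimuli, then correlate the two RDMs. This is a minimal illustration on synthetic data; the function names and toy dimensions are assumptions for the sketch, not the paper's implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def rdm(features):
    """Condensed RDM: pairwise correlation distances between the
    representations of each stimulus (rows of `features`)."""
    return pdist(features, metric="correlation")

def rsa_score(model_features, brain_responses):
    """Pearson correlation between the model RDM and the brain RDM."""
    r, _ = pearsonr(rdm(model_features), rdm(brain_responses))
    return r

# Toy example: 20 stimuli, 64-d model embeddings, 100 simulated voxels.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((20, 64))
# Simulate brain responses as a noisy linear readout of the embeddings,
# so the two representational geometries partially agree.
readout = rng.standard_normal((64, 100))
voxels = embeddings @ readout + 0.5 * rng.standard_normal((20, 100))

print(f"RSA score: {rsa_score(embeddings, voxels):.3f}")
```

Because RSA compares distance structure rather than raw activations, it needs no fitted mapping between the two spaces, which makes it a natural complement to the regression-based analyses.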