🤖 AI Summary
This study investigates whether the internal representations of audio large language models (Audio LLMs) align with the neural dynamics of human auditory processing. Analyzing layer-wise sentence-level representations from 12 open-source Audio LLMs against electroencephalography (EEG) signals, the work employs eight representational similarity metrics—including Spearman correlation–based representational similarity analysis (RSA)—alongside spatiotemporal EEG analysis. Results reveal substantial discrepancies in model rankings across different evaluation metrics and identify a depth-dependent alignment peak within the 250–500 ms time window corresponding to the N400 component. Furthermore, the study introduces a tri-modal neighborhood consistency criterion that uncovers the influence of negative prosody on representational geometry. Collectively, these findings systematically characterize the spatiotemporal and affective alignment between Audio LLMs and the human brain, offering neuroscientific insights into their auditory language processing mechanisms.
📝 Abstract
Audio Large Language Models (Audio LLMs) have demonstrated strong capabilities in integrating speech perception with language understanding. However, whether their internal representations align with human neural dynamics during naturalistic listening remains largely unexplored. In this work, we systematically examine layer-wise representational alignment between 12 open-source Audio LLMs and electroencephalography (EEG) signals across two datasets. Specifically, we employ eight similarity metrics, such as Spearman-based Representational Similarity Analysis (RSA), to characterize within-sentence representational geometry. Our analysis reveals three key findings: (1) we observe a rank-dependence split, in which model rankings vary substantially across different similarity metrics; (2) we identify spatiotemporal alignment patterns characterized by depth-dependent alignment peaks and a pronounced increase in RSA within the 250–500 ms time window, consistent with N400-related neural dynamics; (3) we find an affective dissociation whereby negative prosody, identified using a proposed Tri-modal Neighborhood Consistency (TNC) criterion, reduces geometric similarity while enhancing covariance-based dependence. These findings provide new neurobiological insights into the representational mechanisms of Audio LLMs.
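To make the core alignment measure concrete, the following is a minimal sketch of Spearman-based RSA as typically computed in model-to-brain comparisons: build a representational dissimilarity matrix (RDM) for a model layer's sentence embeddings and one for the matched EEG features, then Spearman-correlate their upper triangles. This is an illustrative reconstruction, not the paper's code; the function name `rsa_score` and the choice of correlation distance for the RDMs are assumptions.

```python
# Hypothetical sketch of Spearman-based RSA; not the paper's implementation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(model_feats: np.ndarray, eeg_feats: np.ndarray) -> float:
    """Spearman RSA between two (n_items, n_dims) feature matrices."""
    # pdist with 'correlation' returns 1 - Pearson r for every item pair,
    # i.e. the condensed upper triangle of each RDM.
    rdm_model = pdist(model_feats, metric="correlation")
    rdm_eeg = pdist(eeg_feats, metric="correlation")
    rho, _ = spearmanr(rdm_model, rdm_eeg)
    return float(rho)

# Toy usage with synthetic data (illustrative only).
rng = np.random.default_rng(0)
model = rng.normal(size=(20, 64))          # e.g. one layer's sentence embeddings
eeg = model @ rng.normal(size=(64, 32))    # linearly related "EEG" features
print(rsa_score(model, eeg))
```

A rank-based correlation is used here because RSA conventionally treats dissimilarities as ordinal, which also makes the score insensitive to monotonic distortions of either RDM.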