Do Models Hear Like Us? Probing the Representational Alignment of Audio LLMs and Naturalistic EEG

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether the internal representations of audio large language models (Audio LLMs) align with the neural dynamics of human auditory processing. Analyzing layer-wise sentence-level representations from 12 open-source Audio LLMs against electroencephalography (EEG) signals, the work employs eight representational similarity metrics—including Spearman correlation–based representational similarity analysis (RSA)—alongside spatiotemporal EEG analysis. Results reveal substantial discrepancies in model rankings across different evaluation metrics and identify a depth-dependent alignment peak within the 250–500 ms time window corresponding to the N400 component. Furthermore, the study introduces a tri-modal neighborhood consistency criterion that uncovers the influence of negative prosody on representational geometry. Collectively, these findings systematically characterize the spatiotemporal and affective alignment between Audio LLMs and the human brain, offering neuroscientific insights into their auditory language processing mechanisms.

📝 Abstract
Audio Large Language Models (Audio LLMs) have demonstrated strong capabilities in integrating speech perception with language understanding. However, whether their internal representations align with human neural dynamics during naturalistic listening remains largely unexplored. In this work, we systematically examine layer-wise representational alignment between 12 open-source Audio LLMs and electroencephalography (EEG) signals across 2 datasets. Specifically, we employ 8 similarity metrics, such as Spearman-based Representational Similarity Analysis (RSA), to characterize within-sentence representational geometry. Our analysis reveals 3 key findings: (1) we observe a rank-dependence split, in which model rankings vary substantially across different similarity metrics; (2) we identify spatio-temporal alignment patterns characterized by depth-dependent alignment peaks and a pronounced increase in RSA within the 250–500 ms time window, consistent with N400-related neural dynamics; (3) we find an affective dissociation whereby negative prosody, identified using a proposed Tri-modal Neighborhood Consistency (TNC) criterion, reduces geometric similarity while enhancing covariance-based dependence. These findings provide new neurobiological insights into the representational mechanisms of Audio LLMs.
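The core comparison described above, Spearman-based RSA, can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from a model layer's sentence embeddings and another from the EEG responses to the same sentences, then rank-correlate the two. The example below is a minimal illustration with random placeholder data (the feature dimensions and distance metric are assumptions, not the paper's exact pipeline):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical stand-ins for real data: one Audio LLM layer's sentence
# embeddings and flattened EEG patterns (channels x time) per sentence.
rng = np.random.default_rng(0)
n_sentences = 20
model_feats = rng.standard_normal((n_sentences, 768))
eeg_feats = rng.standard_normal((n_sentences, 64 * 50))

def rdm(features: np.ndarray) -> np.ndarray:
    """Condensed RDM: pairwise correlation distance (1 - Pearson r)
    between all sentence pairs, i.e. the matrix's upper triangle."""
    return pdist(features, metric="correlation")

# Spearman-based RSA: rank-correlate the two RDMs' upper triangles.
rho, p_value = spearmanr(rdm(model_feats), rdm(eeg_feats))
print(f"RSA (Spearman rho) = {rho:.3f}")
```

Repeating this per layer and per EEG time window would yield the depth-dependent and 250–500 ms alignment profiles the paper reports; rank correlation makes the score sensitive to representational geometry rather than raw covariance scale.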
Problem

Research questions and friction points this paper is trying to address.

Audio LLMs
representational alignment
EEG
neural dynamics
naturalistic listening
Innovation

Methods, ideas, or system contributions that make the work stand out.

Audio LLMs
Representational Similarity Analysis
EEG alignment
N400 dynamics
Tri-modal Neighborhood Consistency
👥 Authors

Haoyun Yang, School of Computer Science, Chongqing University, Chongqing, China
Xin Xiao, ByteDance Research
Jiang Zhong, School of Computer Science, Chongqing University, Chongqing, China
Yu Tian, Dept. of Comp. Sci. and Tech., Institute for AI, Tsinghua University, Beijing, China
Xiaohua Dong, School of Economics and Business Administration, Chongqing University, Chongqing, China
Yu Mao, City University of Hong Kong
Hao Wu, Asa and Patricia Springer Professor, Boston Children's Hospital and Harvard Medical School
Kaiwen Wei, School of Computer Science, Chongqing University, Chongqing, China