Interpretable Embeddings of Speech Enhance and Explain Brain Encoding Performance of Audio Models

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Self-supervised speech models (SSMs) remain poorly interpretable with respect to neural speech encoding in the human brain, limiting their utility in computational neuroscience. Method: The authors construct interpretable linear encoding models that systematically integrate multi-level speech features—acoustic (mel spectrograms, Gabor filter bank features, speech presence) and linguistic (phonetic, syntactic, semantic question-answering)—alongside contextual embeddings from state-of-the-art SSMs (Whisper, HuBERT, WavLM). Contribution/Results: The interpretable model alone predicts electrocorticography (ECoG) responses to speech more accurately than any SSM, and combining interpretable features with SSM embeddings yields the best neural predictions, significantly outperforming either feature class alone. Variance partitioning further shows that SSMs compress and preferentially retain acoustic information in frequency bands critical for neural speech encoding (100-1000 Hz), while also encoding irreducible, brain-relevant semantic information that improves with context length and model size. This work identifies the core feature components driving neural alignment between SSMs and human cortical activity, bridging explainable AI and computational neuroscience.
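
As a rough illustration of this setup, the sketch below fits a ridge encoding model on concatenated interpretable feature families and scores it by per-electrode correlation on held-out data. This is a minimal sketch, not the paper's pipeline: all names, shapes, and the random placeholder arrays (mel_feats, gabor_feats, phonetic_feats) are hypothetical stand-ins for time-aligned speech features and ECoG responses.

```python
# Minimal sketch of a linear encoding model: concatenated interpretable
# features predict neural responses via ridge regression.
# All variable names, shapes, and data are illustrative placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_electrodes = 2000, 64

# Stand-ins for time-aligned feature families (mel, Gabor, phonetic, ...).
mel_feats = rng.standard_normal((n_samples, 80))
gabor_feats = rng.standard_normal((n_samples, 48))
phonetic_feats = rng.standard_normal((n_samples, 39))
X = np.hstack([mel_feats, gabor_feats, phonetic_feats])  # time x features
Y = rng.standard_normal((n_samples, n_electrodes))       # time x electrodes

# Hold out the last 20% of the recording (no shuffling of time series).
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, shuffle=False)

# Ridge with a cross-validated penalty; one weight map per electrode.
model = RidgeCV(alphas=np.logspace(-2, 6, 9)).fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

# Encoding performance: per-electrode Pearson r between prediction and data.
r = [np.corrcoef(Y_hat[:, e], Y_te[:, e])[0, 1] for e in range(n_electrodes)]
print(f"mean held-out r = {np.mean(r):.3f}")
```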

📝 Abstract
Self-supervised speech models (SSMs) are increasingly hailed as more powerful computational models of human speech perception than models based on traditional hand-crafted features. However, since their representations are inherently black-box, it remains unclear what drives their alignment with brain responses. To remedy this, we built linear encoding models from six interpretable feature families: mel-spectrogram, Gabor filter bank features, speech presence, phonetic, syntactic, and semantic Question-Answering features, and contextualized embeddings from three state-of-the-art SSMs (Whisper, HuBERT, WavLM), quantifying the shared and unique neural variance captured by each feature class. Contrary to prevailing assumptions, our interpretable model predicted electrocorticography (ECoG) responses to speech more accurately than any SSM. Moreover, augmenting SSM representations with interpretable features yielded the best overall neural predictions, significantly outperforming either class alone. Further variance-partitioning analyses revealed previously unresolved components of SSM representations that contribute to their neural alignment: 1. Despite the common assumption that later layers of SSMs discard low-level acoustic information, these models compress and preferentially retain frequency bands critical for neural encoding of speech (100-1000 Hz). 2. Contrary to previous claims, SSMs encode brain-relevant semantic information that cannot be reduced to lower-level features, improving with context length and model size. These results highlight the importance of using refined, interpretable features in understanding speech perception.
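
The abstract's variance partitioning presumably follows the standard set-theoretic decomposition, fitting a model on each feature class alone and on their union, then reading off unique and shared cross-validated R². The sketch below illustrates that arithmetic under this assumption; the feature matrices are random placeholders, so the resulting values will be near zero, and only the decomposition logic is the point.

```python
# Sketch of a standard variance-partitioning scheme: fit ridge models on
# each feature set alone and jointly, then decompose cross-validated R^2.
# Feature matrices are random placeholders, not real SSM embeddings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def cv_r2(X, y):
    """Mean 5-fold cross-validated R^2 of a ridge model for one electrode."""
    return cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean()

rng = np.random.default_rng(0)
n = 2000
interp = rng.standard_normal((n, 60))   # interpretable features (placeholder)
ssm = rng.standard_normal((n, 256))     # SSM embeddings (placeholder)
y = rng.standard_normal(n)              # one electrode's response

r2_interp = cv_r2(interp, y)
r2_ssm = cv_r2(ssm, y)
r2_joint = cv_r2(np.hstack([interp, ssm]), y)

unique_interp = r2_joint - r2_ssm       # variance only interpretable features add
unique_ssm = r2_joint - r2_interp       # variance only SSM embeddings add
shared = r2_interp + r2_ssm - r2_joint  # variance both capture
print(unique_interp, unique_ssm, shared)
```
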
Problem

Research questions and friction points this paper is trying to address.

Understand what drives SSMs' alignment with brain responses
Compare interpretable features and SSMs in predicting brain activity
Identify key components of SSMs that enhance neural encoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear encoding models from interpretable feature families
Augmenting SSM representations with interpretable features (see the sketch after this list)
Variance-partitioning analyses reveal SSM neural alignment components
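
The augmentation idea can be made concrete with a small comparison, sketched below: train a ridge model on SSM embeddings alone and on SSM embeddings concatenated with interpretable features, then compare mean held-out correlations. All inputs are random stand-ins; real inputs would be time-aligned SSM layer activations and the interpretable feature families, and nothing here reproduces the paper's actual analysis.

```python
# Sketch of the augmentation comparison: does adding interpretable
# features to SSM embeddings improve held-out prediction of neural data?
# All matrices are random placeholders with illustrative dimensions.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(1)
n_train, n_test, n_elec = 1600, 400, 32
ssm_tr = rng.standard_normal((n_train, 256))     # SSM embeddings, train
ssm_te = rng.standard_normal((n_test, 256))      # SSM embeddings, test
interp_tr = rng.standard_normal((n_train, 60))   # interpretable feats, train
interp_te = rng.standard_normal((n_test, 60))    # interpretable feats, test
Y_tr = rng.standard_normal((n_train, n_elec))    # neural responses, train
Y_te = rng.standard_normal((n_test, n_elec))     # neural responses, test

def mean_r(Xtr, Ytr, Xte, Yte):
    """Fit ridge on train, return mean per-electrode correlation on test."""
    pred = RidgeCV(alphas=np.logspace(-2, 6, 9)).fit(Xtr, Ytr).predict(Xte)
    rs = [np.corrcoef(pred[:, e], Yte[:, e])[0, 1] for e in range(Yte.shape[1])]
    return float(np.mean(rs))

r_ssm = mean_r(ssm_tr, Y_tr, ssm_te, Y_te)
r_joint = mean_r(np.hstack([ssm_tr, interp_tr]), Y_tr,
                 np.hstack([ssm_te, interp_te]), Y_te)
print(f"SSM alone: r={r_ssm:.3f}  SSM+interpretable: r={r_joint:.3f}")
```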