AudioSAE: Towards Understanding of Audio-Processing Models with Sparse AutoEncoders

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of interpretability in internal representations of audio models and the absence of systematic applications of sparse autoencoders (SAEs) in the audio domain. For the first time, SAEs are trained across all encoder layers of Whisper and HuBERT, and their stability and interpretability are systematically evaluated through feature consistency, concept ablation, feature manipulation, and EEG correlation analyses. Results show that over 50% of SAE features remain stable across random seeds; ablating only 19–27% of features effectively removes specific concepts; and manipulating these features reduces Whisper’s false alarm rate by 70% with negligible impact on word error rate. This work further uncovers a link between audio model representations and human auditory neural activity, establishing a new paradigm for interpretable audio representations.

📝 Abstract
Sparse Autoencoders (SAEs) are powerful tools for interpreting neural representations, yet their use in audio remains underexplored. We train SAEs across all encoder layers of Whisper and HuBERT, provide an extensive evaluation of their stability, interpretability, and show their practical utility. Over 50% of the features remain consistent across random seeds, and reconstruction quality is preserved. SAE features capture general acoustic and semantic information as well as specific events, including environmental noises and paralinguistic sounds (e.g. laughter, whispering) and disentangle them effectively, requiring removal of only 19-27% of features to erase a concept. Feature steering reduces Whisper's false speech detections by 70% with negligible WER increase, demonstrating real-world applicability. Finally, we find SAE features correlated with human EEG activity during speech perception, indicating alignment with human neural processing. The code and checkpoints are available at https://github.com/audiosae/audiosae_demo.
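The pipeline the abstract describes (encode model activations into an overcomplete sparse code, reconstruct, then zero out selected features to ablate a concept) can be sketched as below. This is a minimal illustration with random, untrained weights and toy dimensions; the names (`sae_forward`, `W_enc`, etc.), the ReLU + L1 formulation, and the dictionary size are assumptions for illustration, not the paper's actual architecture or code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 8, 32  # hidden dim of the audio model; SAE dictionary size (toy values)

# Randomly initialized SAE weights; the training loop is omitted for brevity.
W_enc = rng.normal(0, 0.1, (d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0, 0.1, (d_sae, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode one activation vector into sparse features, then reconstruct it."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)   # ReLU yields non-negative, sparse codes
    x_hat = f @ W_dec + b_dec                # linear decode back into model space
    return f, x_hat

def sae_loss(x, x_hat, f, l1_coeff=1e-3):
    """Reconstruction error plus an L1 sparsity penalty on the feature codes."""
    return np.mean((x - x_hat) ** 2) + l1_coeff * np.abs(f).sum()

x = rng.normal(size=d_model)   # stand-in for one Whisper/HuBERT encoder activation
f, x_hat = sae_forward(x)
loss = sae_loss(x, x_hat, f)

# Concept ablation / steering: zero a chosen subset of features before decoding,
# so the reconstruction fed back to the model no longer carries that concept.
ablate_idx = [0, 1, 2]
f_ablated = f.copy()
f_ablated[ablate_idx] = 0.0
x_steered = f_ablated @ W_dec + b_dec
```

In this framing, "removing 19-27% of features to erase a concept" corresponds to choosing `ablate_idx` to cover the features that fire on that concept, and "feature steering" to rescaling rather than zeroing them.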
Problem

Research questions and friction points this paper is trying to address.

audio interpretability
neural representations
sparse autoencoders
speech perception
feature disentanglement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse AutoEncoders
Audio Interpretability
Feature Disentanglement
Neural Alignment
Feature Steering