🤖 AI Summary
To address the high computational cost and poor interpretability of deep neural network (DNN) representations for audio, this paper proposes an auditory-cortex-inspired spectrotemporal modulation (STM) feature representation for speech, music, and environmental sound classification. Unlike prior work, the approach employs neurophysiologically grounded STM features directly in end-to-end audio classification, without any pretraining, while matching the performance of state-of-the-art pretrained DNNs. By combining spectrotemporal modulation analysis, a biologically plausible model of auditory cortex processing, with shallow classifiers (e.g., SVMs or lightweight CNNs) over unsupervised feature extraction, the method achieves competitive accuracy on multi-class natural audio tasks. It reduces computational overhead by over 60% relative to leading DNNs and yields features with explicit, neurobiologically meaningful auditory semantics. This work establishes a paradigm for efficient, interpretable machine audition, with direct implications for auditory neuroscience and audio-based brain–computer interfaces (BCIs).
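To make the STM idea concrete, the sketch below approximates such a front end with standard tools: it takes the 2D Fourier transform of a log-mel spectrogram, producing a modulation spectrum whose axes (temporal modulation rate, spectral modulation density) parallel the tuning properties of auditory cortex neurons. This is a minimal illustration assuming librosa for audio loading; `stm_features` and its parameters are hypothetical placeholders, not the authors' implementation, whose cortical filter bank may differ substantially.

```python
import numpy as np
import librosa


def stm_features(path, sr=16000, n_mels=64):
    """Toy spectrotemporal modulation (STM) representation.

    Takes the 2D Fourier transform of a log-mel spectrogram; the
    resulting axes correspond to temporal modulation (rate) and
    spectral modulation (density), the quantities that neurons in
    the human auditory cortex are tuned to.
    """
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)  # shape: (n_mels, n_frames)
    # 2D FFT over (frequency, time); the magnitude is the modulation
    # spectrum, with the DC component shifted to the center.
    return np.abs(np.fft.fftshift(np.fft.fft2(log_mel)))
```

Because each output bin corresponds to a specific modulation rate and density, the features carry the explicit auditory semantics the summary refers to, unlike the opaque activations of a pretrained DNN.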
📝 Abstract
Audio DNNs have demonstrated impressive performance on various machine listening tasks; however, their representations are typically computationally costly and hard to interpret, leaving room for optimization. Here, we propose a novel approach centered on spectrotemporal modulation (STM) features, a signal-processing representation that mimics the neurophysiological encoding of sound in the human auditory cortex. Without any pretraining, the classification performance of our STM-based model is comparable to that of pretrained audio DNNs across diverse naturalistic speech, music, and environmental sounds, categories that are essential for both human cognition and machine perception. These results show that STM is an efficient and interpretable feature representation for audio classification, advancing machine listening and opening new possibilities for basic research in speech and auditory science, as well as for audio brain–computer interfaces (BCIs) and cognitive computing.
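The "shallow classifier on STM features" pairing described above can be sketched with scikit-learn. Everything in this example is an assumption for illustration: `pool` is a hypothetical helper that average-pools a 2D modulation spectrum to a fixed-length vector, and the random arrays stand in for real pooled STM features and labels; the paper's actual classifier choice and hyperparameters may differ.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def pool(mod, grid=(8, 8)):
    """Average-pool a 2D modulation spectrum to a fixed-size vector."""
    h, w = grid
    H, W = mod.shape
    m = mod[:H - H % h, :W - W % w]  # crop to a multiple of the grid
    return m.reshape(h, H // h, w, W // w).mean(axis=(1, 3)).ravel()


# Random stand-in data: in practice, X would hold pooled STM features
# (e.g., pool(stm_features(path))) and y the class labels
# (speech / music / environmental sound).
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 64))
y = rng.integers(0, 3, size=30)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```

The appeal of this design is that all the learning happens in a small, cheap-to-train classifier, while the heavy representational work is done by a fixed, interpretable signal-processing front end rather than by millions of pretrained DNN parameters.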