General-purpose audio representation learning for real-world sound scenes

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current audio foundation models rely on dry, single-source, non-spatial audio data, which limits their generalization to realistic acoustic environments (characterized by naturalness, background noise, multiple concurrent sources, and spatialization) and leaves them without inherent spatial awareness. To address this, we propose GRAM, a novel self-supervised training paradigm that, for the first time, enables robust spatial audio representation learning within a masked modeling framework. GRAM is built upon a masked autoencoder architecture and supports both Transformer- and Mamba-based backbones. We further introduce HEAR-NS, a natural-soundscape-enhanced variant of the HEAR benchmark, and a new source localization evaluation task. Experiments demonstrate that GRAM achieves state-of-the-art (SOTA) performance on auditory scene analysis tasks, matching or exceeding prior SOTA models with only one-third to one-fifth of their training steps, and establishes a new SOTA on the source localization task, outperforming fully supervised methods.
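Since the summary describes masked modeling over spatial audio, a minimal sketch may help make the objective concrete. The code below is an illustrative masked-autoencoder step over multichannel spectrogram patches; the patch size, embedding width, masking ratio, and use of a plain PyTorch TransformerEncoder are assumptions for illustration, not the GRAM implementation.

```python
import torch
import torch.nn as nn


class MaskedSpectrogramAE(nn.Module):
    """Masked-autoencoder sketch for multichannel (spatial) spectrogram patches.

    Illustrative only: patch size, embedding width, masking ratio, and the plain
    TransformerEncoder are assumptions, not the GRAM implementation. Positional
    embeddings are omitted for brevity.
    """

    def __init__(self, n_channels=4, patch_dim=16 * 16, embed_dim=256,
                 depth=4, n_heads=4, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.patch_embed = nn.Linear(n_channels * patch_dim, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads,
                                           dim_feedforward=4 * embed_dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.decoder = nn.Linear(embed_dim, n_channels * patch_dim)

    def forward(self, patches):
        # patches: (batch, n_patches, n_channels * patch_dim), flattened patches
        b, n, d = patches.shape
        x = self.patch_embed(patches)

        # Randomly keep a small visible subset; the rest must be reconstructed.
        n_keep = max(1, int(n * (1 - self.mask_ratio)))
        perm = torch.rand(b, n, device=patches.device).argsort(dim=1)
        keep, masked = perm[:, :n_keep], perm[:, n_keep:]
        idx = keep.unsqueeze(-1).expand(-1, -1, x.size(-1))
        visible = torch.gather(x, 1, idx)

        # Encode only the visible patches, then scatter them among mask tokens.
        encoded = self.encoder(visible)
        full = self.mask_token.expand(b, n, -1).clone()
        full.scatter_(1, idx, encoded)

        # Reconstruct all patches; the loss is scored on masked positions only.
        recon = self.decoder(full)
        target = torch.gather(patches, 1, masked.unsqueeze(-1).expand(-1, -1, d))
        pred = torch.gather(recon, 1, masked.unsqueeze(-1).expand(-1, -1, d))
        return ((pred - target) ** 2).mean()


# Toy usage: a batch of two 4-channel clips, each split into 64 patches.
patches = torch.randn(2, 64, 4 * 16 * 16)
loss = MaskedSpectrogramAE()(patches)
loss.backward()
```

In the paper, the same reconstruction objective is applied to naturalistic, spatialized scenes rather than dry single-source clips, and the masking wrapper does not depend on the choice of backbone.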

📝 Abstract
While audio foundation models perform well on a myriad of tasks from sound classification to speech analysis, these models are trained and tested on dry, non-spatial, single-source audio clips. This limits their success in real-world situations and results in spatially unaware audio embeddings. To address these limitations, we propose a novel self-supervised training approach for General-Purpose, Real-world Audio Models (GRAMs). The GRAM training approach enables robust spatial audio representation learning for naturalistic, noisy sound scenes and can be applied to any masking-based deep learning model. We demonstrate the success of our approach by training two state-of-the-art models, one with a Transformer and one with a Mamba backbone. We assess the quality of the extracted audio representations from GRAMs using the original version of the HEAR benchmark, a newly synthesized, naturalistic version of the HEAR benchmark, and novel sound localization tasks based on HEAR benchmark datasets. The results show that our approach minimizes the performance gap between dry, non-spatial, single-source sound scenes and naturalistic sound scenes for crucial tasks such as auditory scene analysis, outperforming existing state-of-the-art audio foundation models at a fraction of the training steps. Moreover, GRAMs show state-of-the-art performance on sound localization tasks, exceeding even supervised sound localization models. In sum, the proposed approach represents a significant advancement towards robust audio foundation models for real-world applications, with state-of-the-art performance on naturalistic sound scenes as well as spatial audio representation learning.
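The abstract evaluates frozen representations on HEAR-style downstream tasks, including new sound localization tasks. A common way to run such an evaluation is a linear probe on frozen embeddings; the sketch below is a hypothetical version of that setup (embedding size, number of azimuth bins, and optimizer settings are assumptions, not the paper's exact protocol).

```python
import torch
import torch.nn as nn

# Hypothetical linear probe for a localization evaluation: a frozen encoder
# provides clip-level embeddings, and only a linear head is trained to predict
# the source azimuth over discrete bins.

embed_dim, n_azimuth_bins = 256, 36            # e.g. 10-degree azimuth bins
probe = nn.Linear(embed_dim, n_azimuth_bins)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()


def probe_step(frozen_embeddings, azimuth_labels):
    """One optimization step on precomputed, detached embeddings."""
    logits = probe(frozen_embeddings)
    loss = criterion(logits, azimuth_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Toy batch: 8 clips with random embeddings and azimuth-bin labels.
embeddings = torch.randn(8, embed_dim)
labels = torch.randint(0, n_azimuth_bins, (8,))
print(probe_step(embeddings, labels))
```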
Problem

Research questions and friction points this paper is trying to address.

Addressing limitations of audio models in real-world spatial scenes
Proposing self-supervised training for robust spatial audio representation
Improving performance on naturalistic sound scenes and localization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised training for real-world audio models
Robust spatial audio representation learning
Transformer and Mamba backbone integration (see the backbone sketch after this list)
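Because the masking wrapper only assumes a sequence-to-sequence module, the backbone can be swapped. The factory below is a minimal, hypothetical sketch of that idea; the open-source mamba_ssm package and its Mamba(d_model=...) constructor are assumptions about a publicly available implementation, not the authors' code.

```python
import torch.nn as nn


def build_backbone(kind: str, embed_dim: int = 256, depth: int = 4) -> nn.Module:
    """Return a sequence encoder operating on (batch, tokens, embed_dim) tensors."""
    if kind == "transformer":
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True)
        return nn.TransformerEncoder(layer, depth)
    if kind == "mamba":
        # Assumes the open-source `mamba-ssm` package (requires a CUDA build).
        from mamba_ssm import Mamba
        return nn.Sequential(*[Mamba(d_model=embed_dim) for _ in range(depth)])
    raise ValueError(f"unknown backbone: {kind}")
```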
Goksenin Yuksel
Donders Institute, Radboud University, Nijmegen, The Netherlands
M. van Gerven
Donders Institute, Radboud University, Nijmegen, The Netherlands
Kiki van der Heijden
Assistant Professor, Radboud University; Research Fellow, Columbia University
auditory neuroscience, neuroimaging, sound localization, sound encoding