Compositional Audio Representation Learning

📅 2024-09-15
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Human audition is compositional: listeners separate and identify individual sound sources within complex acoustic mixtures. Existing audio representations, however, are predominantly clip-level and do not disentangle the constituent sources. This paper proposes a source-centric audio representation learning framework built around two complementary paradigms: (i) supervised classification-guided learning, in which source-embedding disentanglement is driven by a classification loss, and (ii) unsupervised deep feature reconstruction, which replaces spectrogram-level reconstruction with reconstruction of high-level audio features to improve source disentanglement. The learned representations outperform baselines on downstream classification tasks, showing empirically that supervision benefits representation quality and that reconstructing audio features is more useful than reconstructing spectrograms. The framework also yields more interpretable representations and supports more flexible decoding in machine listening.
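
The supervised paradigm above can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch version of classification-guided source-centric learning: an encoder maps a mixture into a fixed number of source "slot" embeddings, and a shared classifier applied to each slot supplies the disentanglement signal. All module names, dimensions, the fixed slot count, and the assumption that every slot has a known label (the paper's actual setup may instead match predictions to labels) are illustrative, not the authors' exact design.

```python
# Minimal sketch of the supervised, classification-guided paradigm.
# Architecture, sizes, and the slot-label assignment are assumptions.
import torch
import torch.nn as nn

class SourceCentricEncoder(nn.Module):
    """Encode a mixture into K disentangled per-source embeddings."""
    def __init__(self, embed_dim=128, n_slots=4, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(               # shared feature extractor
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # project pooled features into K source "slots"
        self.to_slots = nn.Linear(64, n_slots * embed_dim)
        self.n_slots, self.embed_dim = n_slots, embed_dim
        # shared classifier applied to each slot independently
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, mel):                           # mel: (B, 1, n_mels, T)
        h = self.backbone(mel)                        # (B, 64)
        slots = self.to_slots(h).view(-1, self.n_slots, self.embed_dim)
        logits = self.classifier(slots)               # (B, K, n_classes)
        return slots, logits

# Classification guidance: each slot is pushed to carry one source's identity.
model = SourceCentricEncoder()
mel = torch.randn(8, 1, 64, 100)                      # fake batch of log-mels
slots, logits = model(mel)
targets = torch.randint(0, 10, (8, model.n_slots))    # one label per slot
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 10), targets.reshape(-1))
loss.backward()
```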

📝 Abstract
Human auditory perception is compositional in nature -- we identify auditory streams from auditory scenes with multiple sound events. However, such auditory scenes are typically represented using clip-level representations that do not disentangle the constituent sound sources. In this work, we learn source-centric audio representations where each sound source is represented using a distinct, disentangled source embedding in the audio representation. We propose two novel approaches to learning source-centric audio representations: a supervised model guided by classification and an unsupervised model guided by feature reconstruction, both of which outperform the baselines. We thoroughly evaluate the design choices of both approaches using an audio classification task. We find that supervision is beneficial to learn source-centric representations, and that reconstructing audio features is more useful than reconstructing spectrograms to learn unsupervised source-centric representations. Leveraging source-centric models can help unlock the potential of greater interpretability and more flexible decoding in machine listening.
Problem

Research questions and friction points this paper is trying to address.

Learn source-centric audio representations in which each sound source is disentangled into its own embedding.
Design both supervised and unsupervised models for learning such representations.
Evaluate the models on an audio classification task (probed as sketched below) to assess interpretability and decoding flexibility.
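
As a concrete illustration of that evaluation, here is a hedged sketch of probing frozen source embeddings with a lightweight linear classifier and tracking accuracy. The probe, dimensions, and random data are placeholders, not the paper's actual protocol.

```python
# Hypothetical linear-probe evaluation of learned source embeddings.
import torch
import torch.nn as nn

embed_dim, n_classes = 128, 10
probe = nn.Linear(embed_dim, n_classes)    # shallow probe on frozen embeddings
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

def probe_step(slot_embedding, label):
    """slot_embedding: (B, embed_dim), frozen; label: (B,) class indices."""
    logits = probe(slot_embedding.detach())           # keep the encoder frozen
    loss = nn.functional.cross_entropy(logits, label)
    opt.zero_grad(); loss.backward(); opt.step()
    acc = (logits.argmax(dim=1) == label).float().mean()
    return loss.item(), acc.item()

# one illustrative step on random stand-in data
loss, acc = probe_step(torch.randn(32, embed_dim), torch.randint(0, 10, (32,)))
```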
Innovation

Methods, ideas, or system contributions that make the work stand out.

Supervised model with classification guidance
Unsupervised model with feature reconstruction (sketched after this list)
Source-centric audio representation learning
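
To illustrate the feature-reconstruction idea, the sketch below decodes the per-source embeddings into the feature space of a frozen audio encoder and regresses onto that encoder's output for the mixture, instead of reconstructing the input spectrogram. The frozen "teacher" network, the additive composition of slots, and all dimensions are assumptions for illustration; the paper's actual reconstruction target and decoder may differ.

```python
# Minimal sketch of the unsupervised paradigm: reconstruct high-level audio
# features rather than the spectrogram. Teacher, decoder, and sizes assumed.
import torch
import torch.nn as nn

embed_dim, feat_dim = 128, 512

teacher = nn.Sequential(                  # stand-in for a frozen pretrained
    nn.Linear(64, feat_dim), nn.ReLU(),   # audio feature extractor
    nn.Linear(feat_dim, feat_dim),
).requires_grad_(False)

decoder = nn.Linear(embed_dim, feat_dim)  # maps each slot to the feature space

def feature_reconstruction_loss(slots, mel_pooled):
    """slots: (B, K, embed_dim); mel_pooled: (B, 64) pooled input features."""
    target = teacher(mel_pooled)                    # (B, feat_dim), no grad
    recon = decoder(slots).sum(dim=1)               # compose sources additively
    return nn.functional.mse_loss(recon, target)

slots = torch.randn(8, 4, embed_dim, requires_grad=True)
mel_pooled = torch.randn(8, 64)
loss = feature_reconstruction_loss(slots, mel_pooled)
loss.backward()
```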
Sripathi Sridhar
Sound Interaction and Computing (SInC) Lab, New Jersey Institute of Technology, Newark, NJ, USA
Mark Cartwright
Assistant Professor, New Jersey Institute of Technology
Machine Listening · Human-Computer Interaction · Machine Learning · Audio · Music