Investigating the Sensitivity of Pre-trained Audio Embeddings to Common Effects

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the sensitivity and deformation characteristics of pre-trained audio embeddings—OpenL3, PANNs, and CLAP—under common audio distortions including gain variation, low-pass filtering, reverberation, and bit-depth reduction (bitcrushing). We propose a quantification method based on Canonical Correlation Analysis (CCA) to measure the dimensionality and linearity of embedding trajectories. Our analysis reveals, for the first time, that distortion-induced embedding deformations evolve monotonically along a single dominant direction, yet reside in high-dimensional subspaces and exhibit strong global nonlinearity. Through parametric effect modeling and trajectory analysis, we further demonstrate that explicitly removing the estimated deformation direction fails to improve robustness in downstream instrument classification. This work provides new insights into the structural fragility of pre-trained audio representations and shows that simple linear correction strategies are insufficient for enhancing robustness against realistic audio distortions.

📝 Abstract
In recent years, foundation models have significantly advanced data-driven systems across various domains. Yet, their underlying properties, especially when functioning as feature extractors, remain under-explored. In this paper, we investigate the sensitivity to audio effects of audio embeddings extracted from widely-used foundation models, including OpenL3, PANNs, and CLAP. We focus on audio effects as the source of sensitivity due to their prevalent presence in large audio datasets. By applying parameterized audio effects (gain, low-pass filtering, reverberation, and bitcrushing), we analyze the correlation between the deformation trajectories and the effect strength in the embedding space. We propose to quantify the dimensionality and linearizability of the deformation trajectories induced by audio effects using canonical correlation analysis. We find that there exists a direction along which the embeddings move monotonically as the audio effect strength increases, but that the subspace containing the displacements is generally high-dimensional. This shows that pre-trained audio embeddings do not globally linearize the effects. Our empirical results on instrument classification downstream tasks confirm that projecting out the estimated deformation directions cannot generally improve the robustness of pre-trained embeddings to audio effects.
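The pipeline the abstract describes — sweep an effect parameter, trace the resulting embedding trajectory, estimate a deformation direction via CCA against the scalar effect strength, then project that direction out — can be sketched on synthetic data. This is a minimal NumPy illustration, not the authors' code: the data, the `deformation_direction` helper, and the ridge term are all assumptions. With a one-dimensional strength variable, the first canonical direction reduces to a regularized solve of the embedding covariance against the embedding–strength cross-covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for pre-trained embeddings (OpenL3/PANNs/CLAP):
# n renderings at increasing effect strengths s, whose embeddings drift
# along a hidden direction d plus isotropic noise. All parameters here
# are illustrative assumptions, not the paper's setup.
dim, n = 32, 1000
s = np.linspace(0.0, 1.0, n)                      # effect strengths
d = rng.standard_normal(dim)
d /= np.linalg.norm(d)                            # true deformation direction
Z = 3.0 * np.outer(s, d) + 0.1 * rng.standard_normal((n, dim))

def deformation_direction(Z, s, ridge=0.05):
    """First canonical direction of CCA between embeddings Z (n x dim)
    and a scalar strength s. With a 1-D target, CCA reduces to the
    (ridge-regularized) solve Sigma_zz^{-1} Sigma_zs."""
    Zc = Z - Z.mean(axis=0)
    sc = s - s.mean()
    cov_zz = Zc.T @ Zc / len(s) + ridge * np.eye(Z.shape[1])
    cov_zs = Zc.T @ sc / len(s)
    w = np.linalg.solve(cov_zz, cov_zs)
    return w / np.linalg.norm(w)

w = deformation_direction(Z, s)

# Monotone drift: the 1-D projection onto w correlates with strength.
corr = np.corrcoef(Z @ w, s)[0, 1]

# The linear "correction" probed in the paper: project the estimated
# deformation direction out of every embedding.
Z_clean = Z - np.outer(Z @ w, w)
```

In this toy linear setup the projection removes the drift exactly; the paper's point is that real embedding trajectories span high-dimensional subspaces and are globally nonlinear, so the same one-direction correction does not improve downstream robustness.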
Problem

Research questions and friction points this paper is trying to address.

Pre-trained Audio Features
Robustness to Audio Variations
Sensitivity to Linear Transformations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-trained Audio Features
Nonlinear and Multi-directional Changes
Robustness Enhancement
Victor Deng
LTCI, Télécom Paris, Institut Polytechnique de Paris, France; Département d’Informatique, École Normale Supérieure, Paris, France
Changhong Wang
LTCI, Télécom Paris, Institut Polytechnique de Paris, France
Gaël Richard
LTCI, Télécom Paris, Institut Polytechnique de Paris, France
Brian McFee
Music and Performing Arts Professions / Center for Data Science, New York University
machine learning · music information retrieval