Adversarial Training Improves Generalization Under Distribution Shifts in Bioacoustics

📅 2025-07-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the degradation of generalization and robustness in bioacoustic audio classification models caused by pronounced distribution shifts between training and real-world test data. It comparatively evaluates two adversarial training strategies: output-space attacks that maximize the classification loss and embedding-space attacks that maximize embedding dissimilarity. Both strategies are applied to a conventional convolutional architecture (ConvNeXt) and to an inherently interpretable prototype-based model (AudioProtoPNet), whose prototype stability is additionally assessed under targeted embedding-space attacks. On a bird sound classification benchmark with strong train-test distribution shifts, adversarial training, particularly with output-space attacks, improves clean-test performance by an average of 10.5% relative while simultaneously strengthening adversarial robustness. The findings suggest that adversarial training is a promising foundation for deploying robust, field-ready bioacoustic recognition systems.

📝 Abstract
Adversarial training is a promising strategy for enhancing model robustness against adversarial attacks. However, its impact on generalization under substantial data distribution shifts in audio classification remains largely unexplored. To address this gap, this work investigates how different adversarial training strategies improve generalization performance and adversarial robustness in audio classification. The study focuses on two model architectures: a conventional convolutional neural network (ConvNeXt) and an inherently interpretable prototype-based model (AudioProtoPNet). The approach is evaluated using a challenging bird sound classification benchmark. This benchmark is characterized by pronounced distribution shifts between training and test data due to varying environmental conditions and recording methods, a common real-world challenge. The investigation explores two adversarial training strategies: one based on output-space attacks that maximize the classification loss function, and another based on embedding-space attacks designed to maximize embedding dissimilarity. These attack types are also used for robustness evaluation. Additionally, for AudioProtoPNet, the study assesses the stability of its learned prototypes under targeted embedding-space attacks. Results show that adversarial training, particularly using output-space attacks, improves clean test data performance by an average of 10.5% relative and simultaneously strengthens the adversarial robustness of the models. These findings, although derived from the bird sound domain, suggest that adversarial training holds potential to enhance robustness against both strong distribution shifts and adversarial attacks in challenging audio classification settings.
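The output-space strategy described above generates adversarial examples by taking gradient steps that maximize the classification loss, constrained to a small perturbation budget, and trains on them. As a minimal illustration, here is a PGD-style sketch in NumPy on a toy linear classifier; the paper's actual attacks operate on deep audio models (ConvNeXt, AudioProtoPNet) over spectrogram inputs, and the hyperparameters below are illustrative, not the authors'.

```python
import numpy as np

def softmax_xent(logits, y):
    """Cross-entropy loss of one example; y is the true class index."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[y]), p

def pgd_output_attack(x, y, W, eps=0.1, alpha=0.02, steps=10):
    """Output-space attack: perturb input x to maximize the classification
    loss of a toy linear model (logits = W @ x), keeping the perturbation
    inside an L-infinity ball of radius eps around the clean input."""
    x_adv = x.copy()
    for _ in range(steps):
        logits = W @ x_adv
        _, p = softmax_xent(logits, y)
        # Gradient of cross-entropy w.r.t. the input of a linear model:
        # dL/dlogits = p - onehot(y), then chain through W.
        grad_logits = p.copy()
        grad_logits[y] -= 1.0
        grad_x = W.T @ grad_logits
        # Ascent step on the loss, then project back into the eps-ball.
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

During adversarial training, each batch would be perturbed with such an attack before the usual gradient update, so the model learns to classify correctly inside the entire perturbation ball.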
Problem

Research questions and friction points this paper is trying to address.

Investigates adversarial training's impact on audio classification generalization
Evaluates robustness under data distribution shifts in bird sound classification
Tests output-space and embedding-space adversarial training strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial training improves both generalization under strong distribution shifts and adversarial robustness.
Output-space attacks yield the largest gains, improving clean-test performance by an average of 10.5% relative.
Targeted embedding-space attacks provide a way to assess prototype stability in the interpretable AudioProtoPNet.
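The embedding-space strategy perturbs the input to push its embedding away from the clean embedding rather than to flip the predicted label. Below is a minimal NumPy sketch using a linear map as a stand-in encoder and squared Euclidean distance as the dissimilarity objective; the paper's exact dissimilarity measure and encoder are not specified here, so both are assumptions for illustration.

```python
import numpy as np

def embedding_attack(x, E, eps=0.1, alpha=0.02, steps=10, seed=0):
    """Embedding-space attack sketch: maximize the distance between the
    adversarial embedding (E @ x_adv) and the clean embedding (E @ x),
    under an L-infinity constraint of radius eps. E is a toy stand-in
    for the model's encoder."""
    rng = np.random.default_rng(seed)
    e_clean = E @ x
    # Random start inside the eps-ball: the distance gradient is exactly
    # zero at the clean input itself, so starting there would never move.
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        e_adv = E @ x_adv
        # Gradient of ||e_adv - e_clean||^2 w.r.t. x_adv.
        grad_x = 2.0 * E.T @ (e_adv - e_clean)
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

For a prototype-based model such as AudioProtoPNet, the same idea can target the embedding regions matched by individual prototypes, which is how the paper probes prototype stability.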