Foundation Models for Bioacoustics -- a Comparative Review

📅 2025-08-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how to select and adapt large-scale pretrained foundation models for bioacoustics to improve transfer performance across diverse classification tasks—particularly bird vocalization and environmental sound recognition—thereby supporting automated biodiversity monitoring. It systematically evaluates BirdMAE and BEATs$_{NLM}$ on the BirdSet and BEANS benchmarks, covering self-supervised and supervised pretraining and comparing linear and attentive probing strategies across Transformer and ConvNeXt architectures. Results show that BirdMAE achieves the best performance on BirdSet, whereas BEATs$_{NLM}$ is marginally better on BEANS; notably, attentive probing substantially boosts Transformer performance. The authors present this as the first empirical demonstration that the choice of probing mechanism is critical to bioacoustic model transferability. The work establishes a reproducible, principled framework for model selection, lightweight adaptation, and real-world deployment, offering both practical guidelines and theoretical insights for the bioacoustics community.

📝 Abstract
Automated bioacoustic analysis is essential for biodiversity monitoring and conservation, requiring advanced deep learning models that can adapt to diverse bioacoustic tasks. This article presents a comprehensive review of large-scale pretrained bioacoustic foundation models and systematically investigates their transferability across multiple bioacoustic classification tasks. We overview bioacoustic representation learning, including major pretraining data sources and benchmarks. On this basis, we review bioacoustic foundation models by thoroughly analysing design decisions such as model architecture, pretraining scheme, and training paradigm. Additionally, we evaluate selected foundation models on classification tasks from the BEANS and BirdSet benchmarks, comparing the generalisability of learned representations under both linear and attentive probing strategies. Our comprehensive experimental analysis reveals that BirdMAE, trained on large-scale bird song data with a self-supervised objective, achieves the best performance on the BirdSet benchmark. On BEANS, BEATs$_{NLM}$, the extracted encoder of the NatureLM-audio large audio model, is slightly better. Both transformer-based models require attentive probing to extract the full performance of their representations. ConvNext$_{BS}$ and Perch models, trained with supervision on large-scale bird song data, remain competitive on the passive acoustic monitoring classification tasks of BirdSet in linear probing settings. Training a new linear classifier has clear advantages over evaluating these models without further training. On BEANS, by contrast, the baseline model BEATs, trained with self-supervision on AudioSet, outperforms bird-specific models when evaluated with attentive probing. These findings provide valuable guidance for practitioners selecting appropriate models to adapt to new bioacoustic classification tasks via probing.
Problem

Research questions and friction points this paper is trying to address.

Evaluating transferability of bioacoustic foundation models across tasks
Comparing self-supervised and supervised models on bioacoustic benchmarks
Identifying optimal probing strategies for bioacoustic representation performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale pretrained bioacoustic foundation models
Self-supervised learning for bioacoustic representation
Attentive probing for optimal model performance
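The probing strategies contrasted above can be sketched in a few lines. The NumPy sketch below is illustrative, not the authors' code: it assumes the pretrained encoder is frozen and its per-frame embeddings are given, so the two probes differ only in how frames are pooled before a trainable linear classifier. All names (`linear_probe`, `attentive_probe`, the parameter shapes) are ours.

```python
import numpy as np

def linear_probe(frames, W, b):
    """Linear probing: mean-pool frozen frame embeddings (T, d),
    then apply a single trainable linear classifier."""
    pooled = frames.mean(axis=0)                 # (d,)
    return pooled @ W + b                        # (num_classes,)

def attentive_probe(frames, query, W, b):
    """Attentive probing (simplified): a learned query scores each
    frame, softmax-weighted pooling emphasises informative frames,
    and the same kind of linear classifier runs on top."""
    scores = frames @ query                      # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over time
    pooled = weights @ frames                    # (d,)
    return pooled @ W + b                        # (num_classes,)

# Toy usage on random "embeddings": 10 frames, 8-dim features, 3 classes.
rng = np.random.default_rng(0)
frames = rng.normal(size=(10, 8))
W, b = rng.normal(size=(8, 3)), np.zeros(3)
query = rng.normal(size=8)
linear_logits = linear_probe(frames, W, b)
attentive_logits = attentive_probe(frames, query, W, b)
```

In practice, attentive probing is usually implemented as a multi-head cross-attention layer with a learnable query token; this single-query softmax pooling is a minimal stand-in that conveys the key idea the paper's results hinge on: the pooling, not the classifier, carries the extra capacity.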
Raphael Schwinger
INS, Kiel University, Germany
Paria Vali Zadeh
INS, Kiel University, Germany
Lukas Rauch
University of Kassel
Deep Learning · Self-Supervised Learning · Active Learning · Bioacoustics
Mats Kurz
INS, Kiel University, Germany
Tom Hauschild
INS, Kiel University, Germany
Sam Lapp
University of Pittsburgh, Pittsburgh, PA, USA
Sven Tomforde
Universität Kiel, Intelligent Systems
Machine Learning · Organic Computing · Autonomic Computing · Autonomous Learning · Autonomous Systems