🤖 AI Summary
This study addresses the selection and lightweight adaptation of large-scale pretrained foundation models for bioacoustics, with the goal of improving transfer performance across diverse classification tasks, particularly bird vocalization and environmental sound recognition, in support of automated biodiversity monitoring. The authors systematically evaluate BirdMAE and BEATs$_{NLM}$ on the BirdSet and BEANS benchmarks, covering self-supervised and supervised pretraining as well as linear and attentive probing strategies, and compare the generalisability of learned representations across Transformer and ConvNeXt architectures. Results show that BirdMAE achieves the best performance on BirdSet, whereas BEATs$_{NLM}$ is marginally better on BEANS; notably, attentive probing is required to extract the full performance of Transformer representations. The work empirically demonstrates the central role of the probing mechanism in bioacoustic model transferability and provides practical guidance on model selection, lightweight adaptation, and real-world deployment for the bioacoustics community.
📝 Abstract
Automated bioacoustic analysis is essential for biodiversity monitoring and conservation, requiring advanced deep learning models that can adapt to diverse bioacoustic tasks. This article presents a comprehensive review of large-scale pretrained bioacoustic foundation models and systematically investigates their transferability across multiple bioacoustic classification tasks. We give an overview of bioacoustic representation learning, including major pretraining data sources and benchmarks. On this basis, we review bioacoustic foundation models by thoroughly analysing design decisions such as model architecture, pretraining scheme, and training paradigm. Additionally, we evaluate selected foundation models on classification tasks from the BEANS and BirdSet benchmarks, comparing the generalisability of learned representations under both linear and attentive probing strategies. Our comprehensive experimental analysis reveals that BirdMAE, trained on large-scale bird song data with a self-supervised objective, achieves the best performance on the BirdSet benchmark. On BEANS, BEATs$_{NLM}$, the extracted encoder of the NatureLM-audio large audio model, is slightly better. Both transformer-based models require attentive probing to extract the full performance of their representations. ConvNeXt$_{BS}$ and Perch models trained with supervision on large-scale bird song data remain competitive on the passive acoustic monitoring classification tasks of BirdSet in linear probing settings. Training a new linear classifier has clear advantages over evaluating these models without further training. On BEANS, in contrast, the baseline model BEATs, trained with self-supervision on AudioSet, outperforms bird-specific models when evaluated with attentive probing. These findings provide valuable guidance for practitioners selecting appropriate models and adapting them to new bioacoustic classification tasks via probing.
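The two probing strategies compared throughout the paper can be sketched in a few lines: a linear probe trains only a classifier on pooled frozen embeddings, while an attentive probe additionally learns how to pool frame-level embeddings via attention. The following is a minimal numpy illustration, not the authors' implementation; all function and parameter names (`linear_probe`, `attentive_probe`, the query vector `q`) are hypothetical.

```python
import numpy as np

def linear_probe(frames, W, b):
    """Linear probing sketch: mean-pool frozen frame embeddings (T, D),
    then apply a trained linear classifier (D, C)."""
    pooled = frames.mean(axis=0)          # (D,)
    return pooled @ W + b                 # (C,) class logits

def attentive_probe(frames, q, W, b):
    """Attentive probing sketch: a learned query vector q (D,) attends
    over the frame embeddings (single-head attention pooling), so the
    probe can weight informative frames before classification."""
    scores = frames @ q                   # (T,) attention scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over time
    pooled = weights @ frames             # (D,) weighted pooling
    return pooled @ W + b                 # (C,) class logits
```

In both cases the backbone stays frozen; only `W`, `b` (and `q` for the attentive probe) are trained, which is why probing is a lightweight adaptation strategy compared with full fine-tuning.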