🤖 AI Summary
Sound event localization and detection (SELD) suffers from scarce large-scale annotated data and limited adaptability in low-resource multi-channel or monophonic scenarios. Method: We propose PSELDNets, SELD networks pretrained on 1,167 hours of synthetic multi-channel audio, generated by convolving sound events with simulated spatial room impulse responses (SRIRs) across an ontology of 170 sound classes, and introduce AdapterBit, a lightweight, data-efficient fine-tuning method for adapting the pretrained models with minimal labeled multi-channel or even monophonic audio. Results: PSELDNets achieve satisfactory performance on a synthetic test set built with SRIRs from the TAU-SRIR DB and surpass state-of-the-art systems on three publicly available benchmarks. In low-resource settings, AdapterBit adapts more efficiently than traditional full fine-tuning, even with monophonic input.
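The synthetic-data pipeline described above spatializes mono sound events by convolving them with simulated multi-channel SRIRs. A minimal sketch of that step (the function name and array layout are illustrative assumptions, not the paper's code):

```python
import numpy as np

def spatialize(event: np.ndarray, srir: np.ndarray) -> np.ndarray:
    """Spatialize a mono sound event with a multi-channel SRIR.

    event: (T,) mono waveform of the sound event
    srir:  (C, L) one room impulse response per output channel
    returns: (C, T + L - 1) multi-channel clip, one convolution per channel
    """
    return np.stack([np.convolve(event, ir) for ir in srir])

# Toy usage: a 1-second event at 16 kHz with a 4-channel, 0.25-second SRIR.
rng = np.random.default_rng(0)
event = rng.standard_normal(16000)
srir = rng.standard_normal((4, 4000)) * np.exp(-np.linspace(0, 8, 4000))  # decaying tails
clip = spatialize(event, srir)  # shape (4, 19999)
```

In practice, FFT-based convolution (e.g. `scipy.signal.fftconvolve`) would be preferred for long responses; direct `np.convolve` keeps the sketch dependency-free.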
📝 Abstract
Sound event localization and detection (SELD) has seen substantial advancements through learning-based methods. These systems, typically trained from scratch on specific datasets, have shown considerable generalization capabilities. Recently, deep neural networks trained on large-scale datasets have achieved remarkable success in the sound event classification (SEC) field, prompting an open question of whether these advancements can be extended to develop general-purpose SELD models. In this paper, leveraging the power of pre-trained SEC models, we propose pre-trained SELD networks (PSELDNets) trained on large-scale synthetic datasets. These synthetic datasets, generated by convolving sound events with simulated spatial room impulse responses (SRIRs), contain 1,167 hours of audio clips with an ontology of 170 sound classes. These PSELDNets are transferred to downstream SELD tasks. When adapting PSELDNets to specific scenarios, particularly in low-resource data cases, we introduce a data-efficient fine-tuning method, AdapterBit. PSELDNets are evaluated on a synthetic test set using SRIRs collected from the TAU Spatial Room Impulse Response Database (TAU-SRIR DB) and achieve satisfactory performance. We also conduct experiments to validate the transferability of PSELDNets to three publicly available datasets and our own collected audio recordings. Results demonstrate that PSELDNets surpass state-of-the-art systems across all publicly available datasets. Given the need for direction-of-arrival estimation, SELD generally relies on sufficient multi-channel audio clips. However, incorporating AdapterBit, PSELDNets adapt more efficiently to various tasks using minimal multi-channel or even just monophonic audio clips, outperforming traditional fine-tuning approaches.
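AdapterBit's exact design is given in the paper; the general idea of data-efficient adapter fine-tuning, though, is that only small inserted modules are trained while the pretrained backbone stays frozen. A generic bottleneck-adapter sketch of that idea (class and shapes are hypothetical, not AdapterBit itself):

```python
import numpy as np

rng = np.random.default_rng(0)

class BottleneckAdapter:
    """Generic residual bottleneck adapter (illustrative, not AdapterBit).

    Down-project -> nonlinearity -> up-project -> residual add. Only these
    two small matrices are trained during fine-tuning; the large pretrained
    backbone weights remain frozen.
    """

    def __init__(self, dim: int, bottleneck: int):
        self.W_down = rng.normal(0.0, 0.02, (dim, bottleneck))
        # Zero-initialized up-projection: the adapter starts as an identity
        # map, so inserting it does not perturb the pretrained model.
        self.W_up = np.zeros((bottleneck, dim))

    def __call__(self, x: np.ndarray) -> np.ndarray:
        h = np.maximum(x @ self.W_down, 0.0)  # ReLU bottleneck features
        return x + h @ self.W_up              # residual connection

adapter = BottleneckAdapter(dim=768, bottleneck=16)
x = rng.normal(size=(2, 768))                 # a batch of frame embeddings
y = adapter(x)                                # identity at initialization
n_trainable = adapter.W_down.size + adapter.W_up.size  # 24,576 parameters
```

The trainable parameter count (two 768×16 matrices here) is a tiny fraction of a full transformer layer, which is what makes such adapters attractive when only minimal labeled multi-channel or monophonic data is available.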