PSELDNets: Pre-trained Neural Networks on Large-scale Synthetic Datasets for Sound Event Localization and Detection

📅 2024-11-10
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Sound event localization and detection (SELD) suffers from scarce large-scale annotated data and adapts poorly to low-resource multi-channel or monophonic scenarios. Method: We propose PSELDNets, the first SELD foundation models, pre-trained on 1,167 hours of synthetic multi-channel audio generated by convolving sound events with simulated spatial room impulse responses (SRIRs) across an ontology of 170 sound classes, and introduce AdapterBit, a lightweight, parameter-efficient adapter that enables effective fine-tuning with only ~10% of the labeled multi-channel or monophonic data. A unified architecture jointly models multi-channel time-frequency features for end-to-end localization and detection. Results: PSELDNets achieve satisfactory performance on a synthetic test set built from the TAU-SRIR DB and surpass state-of-the-art systems on three public benchmarks. In low-resource settings, AdapterBit matches or surpasses full fine-tuning while maintaining localization accuracy even with monophonic input.

📝 Abstract
Sound event localization and detection (SELD) has seen substantial advancements through learning-based methods. These systems, typically trained from scratch on specific datasets, have shown considerable generalization capabilities. Recently, deep neural networks trained on large-scale datasets have achieved remarkable success in the sound event classification (SEC) field, prompting an open question of whether these advancements can be extended to develop general-purpose SELD models. In this paper, leveraging the power of pre-trained SEC models, we propose pre-trained SELD networks (PSELDNets) on large-scale synthetic datasets. These synthetic datasets, generated by convolving sound events with simulated spatial room impulse responses (SRIRs), contain 1,167 hours of audio clips with an ontology of 170 sound classes. These PSELDNets are transferred to downstream SELD tasks. When we adapt PSELDNets to specific scenarios, particularly in low-resource data cases, we introduce a data-efficient fine-tuning method, AdapterBit. PSELDNets are evaluated on a synthetic test set generated using collected SRIRs from the TAU Spatial Room Impulse Response Database (TAU-SRIR DB) and achieve satisfactory performance. We also conduct experiments to validate the transferability of PSELDNets to three publicly available datasets and our own collected audio recordings. Results demonstrate that PSELDNets surpass state-of-the-art systems across all publicly available datasets. Given the need for direction-of-arrival estimation, SELD generally relies on sufficient multi-channel audio clips. However, incorporating the AdapterBit, PSELDNets show more efficient adaptability to various tasks using minimal multi-channel or even just monophonic audio clips, outperforming traditional fine-tuning approaches.
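The data-generation step the abstract describes — convolving a mono sound event with a simulated multi-channel SRIR to obtain a spatialized clip — can be sketched as below. This is a minimal illustration, not the paper's actual pipeline: the function name, array shapes, and the 4-channel (e.g. first-order Ambisonics) setup are assumptions.

```python
import numpy as np

def spatialize(event: np.ndarray, srir: np.ndarray) -> np.ndarray:
    """Convolve a mono sound event with a multi-channel SRIR.

    event: (n_samples,) mono event waveform
    srir:  (n_channels, ir_len) simulated spatial room impulse response
    returns: (n_channels, n_samples + ir_len - 1) spatialized multi-channel clip
    """
    # full-length convolution per channel; the same source signal is
    # filtered by each channel's impulse response
    return np.stack([np.convolve(event, ch) for ch in srir])

# toy example: 1 s of noise at 16 kHz through a random 4-channel IR
rng = np.random.default_rng(0)
event = rng.standard_normal(16000)
srir = rng.standard_normal((4, 2048)) * 0.1
clip = spatialize(event, srir)
print(clip.shape)  # (4, 18047)
```

In practice a real pipeline would use measured or room-simulator SRIRs for specific source directions (so the direction-of-arrival label is known) and an FFT-based convolution for speed; `np.convolve` is used here only to keep the sketch dependency-free.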
Problem

Research questions and friction points this paper is trying to address.

Extending pre-trained SEC models to SELD foundation models
Improving SELD performance with large-scale synthetic datasets
Enhancing adaptability to low-resource SELD scenarios efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-trained SELD networks on synthetic dataset
Data-efficient fine-tuning with AdapterBit
Transferability to low-resource real-world scenarios
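The paper does not spell out AdapterBit's internals here, but the parameter-efficient idea it builds on can be illustrated with a generic bottleneck adapter: a small trainable module added residually to a frozen backbone layer, so only the adapter's weights are updated during fine-tuning. Everything below (class name, dimensions, zero-initialization) is an illustrative assumption, not the paper's exact AdapterBit design.

```python
import numpy as np

class BottleneckAdapter:
    """Generic bottleneck adapter (illustrative only, not AdapterBit itself):
    down-project, ReLU, up-project, added residually to the frozen layer's
    output. Only these two small matrices would be trained."""

    def __init__(self, dim: int, bottleneck: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w_down = rng.standard_normal((dim, bottleneck)) * 0.01
        # zero-initialized up-projection: the adapter starts as an identity
        # map, so fine-tuning begins from the pre-trained model's behavior
        self.w_up = np.zeros((bottleneck, dim))

    def __call__(self, h: np.ndarray) -> np.ndarray:
        z = np.maximum(h @ self.w_down, 0.0)  # ReLU bottleneck
        return h + z @ self.w_up              # residual connection

adapter = BottleneckAdapter(dim=512, bottleneck=32)
h = np.ones((2, 512))           # hidden states from a frozen backbone layer
out = adapter(h)
print(np.allclose(out, h))      # True: identity at initialization
```

With dim=512 and bottleneck=32, the adapter adds only 2 × 512 × 32 = 32,768 trainable parameters per layer, which is why such modules suit the low-resource fine-tuning setting the paper targets.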
Jinbo Hu
Institute of Acoustics, Chinese Academy of Sciences
Sound event localization and detection
Yin Cao
Associate Professor, Xi'an Jiaotong-Liverpool University
Machine Learning, Audio Signal Processing, Acoustics, Noise Control
Ming Wu
Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
Fang Kang
Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Oulu 90570, Finland
Feiran Yang
State Key Laboratory of Acoustics, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
Wenwu Wang
Professor, University of Surrey, UK
signal processing, machine learning, machine listening, audio/speech/audio-visual multimodal fusion
Mark D. Plumbley
Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford GU2 7XH, U.K.
Jun Yang
Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China