🤖 AI Summary
Acoustic scene classification (ASC) suffers from cross-device domain shift under limited labeled data, leading to poor generalization to unseen devices. To address this, we propose an entropy-guided curriculum learning framework: an auxiliary domain classifier estimates device posterior probabilities for each training sample, and the Shannon entropy of those posteriors is used to construct a curriculum that starts with high-entropy (i.e., device-ambiguous) samples and gradually incorporates low-entropy, device-specific ones, encouraging the model to learn domain-invariant features first. The method requires no modification to the backbone architecture and introduces zero inference overhead, making it plug-and-play and architecture-agnostic. Extensive experiments on multiple DCASE 2024 benchmarks demonstrate substantial mitigation of domain shift, with particularly pronounced gains in low-data regimes. The core innovation is the first use of device-prediction entropy as an adaptive, self-supervised criterion for curriculum scheduling, enabling effective domain-agnostic feature learning without additional supervision or architectural constraints.
📝 Abstract
Acoustic Scene Classification (ASC) faces challenges in generalizing across recording devices, particularly when labeled data is limited. The DCASE 2024 Challenge Task 1 highlights this issue by requiring models to learn from small labeled subsets recorded on a few devices. These models must then generalize to recordings from previously unseen devices under strict complexity constraints. While techniques such as data augmentation and the use of pre-trained models are well established for improving model generalization, optimizing the training strategy represents a complementary yet less-explored path that introduces no additional architectural complexity or inference overhead. Among various training strategies, curriculum learning offers a promising paradigm by structuring the learning process from easier to harder examples. In this work, we propose an entropy-guided curriculum learning strategy to address the domain shift problem in data-efficient ASC. Specifically, we quantify the uncertainty of device-domain predictions for each training sample by computing the Shannon entropy of the device posterior probabilities estimated by an auxiliary domain classifier. Using entropy as a proxy for domain invariance, the curriculum begins with high-entropy samples and gradually incorporates low-entropy, domain-specific ones to facilitate the learning of generalizable representations. Experimental results on multiple DCASE 2024 ASC baselines demonstrate that our strategy effectively mitigates domain shift, particularly under limited labeled data conditions. Our strategy is architecture-agnostic and introduces no additional inference cost, making it easily integrable into existing ASC baselines and offering a practical solution to domain shift.
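The entropy-based curriculum described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the function names, the linear pacing schedule, and the starting fraction `start_frac` are all assumptions made for illustration. It takes device posterior probabilities from an (assumed already-trained) auxiliary domain classifier, computes per-sample Shannon entropy, and grows the training subset from the most device-ambiguous samples toward the full set.

```python
import numpy as np

def shannon_entropy(posteriors: np.ndarray) -> np.ndarray:
    """Per-sample Shannon entropy of device posteriors.

    posteriors: array of shape (N, D), each row summing to 1 over D devices.
    Returns an array of shape (N,).
    """
    eps = 1e-12  # guard against log(0)
    return -np.sum(posteriors * np.log(posteriors + eps), axis=1)

def curriculum_order(posteriors: np.ndarray) -> np.ndarray:
    """Sample indices sorted from high entropy (device-ambiguous)
    to low entropy (device-specific)."""
    return np.argsort(-shannon_entropy(posteriors))

def curriculum_subset(posteriors: np.ndarray, epoch: int,
                      total_epochs: int, start_frac: float = 0.3) -> np.ndarray:
    """Indices to train on at a given epoch (hypothetical linear pacing:
    begin with the most ambiguous start_frac of samples and expand
    linearly to the full set by the final epoch)."""
    order = curriculum_order(posteriors)
    frac = min(1.0, start_frac + (1.0 - start_frac) * epoch / max(1, total_epochs - 1))
    k = max(1, int(round(frac * len(order))))
    return order[:k]
```

A uniform posterior (e.g., `[0.5, 0.5]`) yields maximum entropy, so such device-ambiguous samples are scheduled first; a confident posterior (e.g., `[0.99, 0.01]`) is added only later in training.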