🤖 AI Summary
In zero-shot environmental sound classification, audio-text models suffer severe performance degradation under background noise, governed primarily by the signal-to-noise ratio (SNR) rather than the background type. To address this without retraining, we propose a domain adaptation framework with two components: (i) a novel fusion mechanism that quantifies the contribution of background sources and enables SNR-aware weighted inference; and (ii) a cross-modal semantic gap analysis that mitigates the audio–text modality discrepancy via prototype-based contrastive learning, embedding alignment, and distance metric optimization. Evaluated on multi-background, multi-SNR benchmarks, our method significantly improves classification accuracy across diverse state-of-the-art prototype-based approaches. It requires no additional annotations, fine-tuning, or architectural modifications, demonstrating strong generalization and practical applicability in realistic noisy environments.
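To make the idea of SNR-aware weighted inference concrete, the sketch below shows one way a background-aware fusion could be wired on top of CLAP-style audio and text embeddings. The weighting ramp, the SNR estimate, and the helper names (`snr_weight`, `snr_aware_scores`) are illustrative assumptions, not the exact mechanism proposed in the paper.

```python
import numpy as np

def snr_weight(snr_db, lo=-10.0, hi=20.0):
    """Map an estimated SNR (dB) to a foreground weight in [0, 1].

    Hypothetical linear ramp: at low SNR the background dominates the clip
    embedding, so foreground similarity scores are trusted less.
    """
    return float(np.clip((snr_db - lo) / (hi - lo), 0.0, 1.0))

def snr_aware_scores(audio_emb, class_text_embs, background_text_embs, snr_db):
    """Zero-shot scoring that discounts similarity mass explained by background prompts.

    audio_emb:            (d,)   L2-normalised audio embedding (e.g. from a CLAP-style encoder)
    class_text_embs:      (C, d) L2-normalised text embeddings of the target classes
    background_text_embs: (B, d) L2-normalised text embeddings of candidate backgrounds
    snr_db:               estimated signal-to-noise ratio of the clip
    """
    fg_sim = class_text_embs @ audio_emb        # (C,) foreground similarities
    bg_sim = background_text_embs @ audio_emb   # (B,) background similarities

    w = snr_weight(snr_db)                      # how strongly the foreground dominates
    # Subtract a share of the strongest background similarity before classifying.
    return fg_sim - (1.0 - w) * bg_sim.max()

# Usage: predicted = class_names[np.argmax(snr_aware_scores(a, T_cls, T_bg, est_snr))]
```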
📝 Abstract
Audio-text models are widely used in zero-shot environmental sound classification because they alleviate the need for annotated data. However, we show that their performance drops severely in the presence of background sound sources. Our analysis reveals that this degradation is driven primarily by the SNR of the background soundscape and is largely independent of the background type. To address this, we propose a novel method that quantifies and integrates the contribution of background sources into the classification process, improving performance without requiring model retraining. Our domain adaptation technique enhances accuracy across various backgrounds and SNR conditions. Moreover, we analyze the modality gap between audio and text embeddings and show that narrowing this gap improves classification performance. The method generalizes effectively across state-of-the-art prototypical approaches, demonstrating its scalability and robustness in diverse environments.
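For readers who want a concrete picture of the modality gap, the minimal sketch below quantifies it as the distance between the centroids of the normalised audio and text embeddings and narrows it with a simple mean-shift alignment. This is a common, generic recipe given here for illustration only; it is not necessarily the alignment procedure used in the paper.

```python
import numpy as np

def _normalize(x):
    """L2-normalise each row of an (N, d) embedding matrix."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def modality_gap(audio_embs, text_embs):
    """Distance between the centroids of audio and text embeddings on the unit sphere."""
    a, t = _normalize(audio_embs), _normalize(text_embs)
    return float(np.linalg.norm(a.mean(axis=0) - t.mean(axis=0)))

def close_gap(audio_embs, text_embs):
    """Shift audio embeddings toward the text centroid, then re-normalise.

    A simple mean-shift alignment that narrows the modality gap without any retraining.
    """
    a, t = _normalize(audio_embs), _normalize(text_embs)
    shifted = a - a.mean(axis=0) + t.mean(axis=0)
    return _normalize(shifted)

# Usage: gap_before = modality_gap(A, T); A_aligned = close_gap(A, T)
```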