🤖 AI Summary
Existing deepfake detection methods focus predominantly on speech and singing voices and generalize poorly to environmental sounds; moreover, no large-scale, high-diversity benchmark exists for environmental sound deepfake detection. Method: This paper formally defines the environmental sound deepfake detection task and introduces EnvSDD, the first large-scale, multi-source benchmark (362 hours), comprising authentic and forged samples and supporting cross-model and cross-dataset generalization evaluation. We propose a detection framework built on pretrained audio foundation models, incorporating environment-specific feature extraction and discriminative architectures, alongside protocols for multi-source synthesis, fine-grained annotation, and cross-domain assessment. Contribution/Results: On EnvSDD, our method significantly outperforms state-of-the-art speech and singing-voice detectors, improving test accuracy by 12.7% and reaching a cross-model generalization AUC of 0.91, demonstrating that environmental sound deepfake detection requires a dedicated approach.
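The summary reports cross-model generalization as an AUC of 0.91. As a reminder of what that number measures, here is a minimal sketch (not from the paper) computing ROC AUC as the probability that a randomly chosen fake sample receives a higher detector score than a randomly chosen real one, with ties counted as half:

```python
def auc(real_scores, fake_scores):
    """ROC AUC via pairwise comparison: higher score = more likely fake."""
    wins = 0.0
    for f in fake_scores:
        for r in real_scores:
            if f > r:
                wins += 1.0
            elif f == r:
                wins += 0.5
    return wins / (len(real_scores) * len(fake_scores))

# A perfect detector ranks every fake above every real clip -> AUC = 1.0
perfect = auc([0.1, 0.2], [0.8, 0.9])
```

This O(n·m) pairwise form is equivalent to the area under the ROC curve; practical evaluations typically use a sorting-based implementation (e.g. `sklearn.metrics.roc_auc_score`) instead.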
📝 Abstract
Audio generation systems now create highly realistic soundscapes that can enhance media production but also pose potential risks. Several studies have examined deepfakes in speech and singing voice. However, environmental sounds have different characteristics, which may make methods for detecting speech and singing deepfakes less effective on real-world sounds. In addition, existing datasets for environmental sound deepfake detection are limited in scale and audio types. To address this gap, we introduce EnvSDD, the first large-scale curated dataset designed for this task, consisting of 45.25 hours of real and 316.74 hours of fake audio. The test set includes diverse conditions to evaluate generalizability, such as unseen generation models and unseen datasets. We also propose an audio deepfake detection system based on a pre-trained audio foundation model. Results on EnvSDD show that our proposed system outperforms state-of-the-art systems from the speech and singing domains.
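The abstract describes a detector built on a pre-trained audio foundation model. The common pattern behind such systems is to keep a pretrained encoder frozen and train only a lightweight real/fake classification head on its embeddings. The sketch below illustrates that pattern on toy data; the random-projection "encoder", the clip generator, and all dimensions are stand-in assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen pretrained audio foundation model:
# a fixed random projection from a 1 s / 16 kHz waveform to a 64-d
# embedding. In the paper's setting this would be a real pretrained
# encoder; the projection just keeps the sketch self-contained.
SR, EMBED_DIM = 16000, 64
PROJ = rng.standard_normal((SR, EMBED_DIM)) / SR ** 0.5

def embed(waveform):
    """Frozen 'foundation model': (16000,) waveform -> (64,) embedding."""
    return np.tanh(waveform @ PROJ)

def make_clip(fake):
    """Toy data: 'real' and 'fake' clips differ only in tone frequency."""
    t = np.linspace(0.0, 1.0, SR, endpoint=False)
    freq = 440.0 if fake else 220.0
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(SR)

X = np.stack([embed(make_clip(fake=i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)  # 0 = real, 1 = fake

# Lightweight detection head: logistic regression trained with gradient
# descent while the embedding model stays frozen.
w, b = np.zeros(EMBED_DIM), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (preds == y).mean()
```

Freezing the encoder keeps the number of trainable parameters small, which matters when labeled fake environmental audio is scarce; in practice the head would be a small neural network and the evaluation would use held-out generators, as in the EnvSDD test conditions.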