🤖 AI Summary
This work addresses the limited robustness of audio deepfake detection against unmodified, compressed, and laundered (i.e., "washed", anti-detection) speech. To tackle this, we propose a multilingual, multi-synthesis-source data integration strategy. Methodologically, we integrate a WavLM-large self-supervised front-end with RawBoost acoustic augmentation within the AASIST architecture, enabling joint modeling across languages and distortion types. Crucially, our approach leverages joint training on diverse multilingual, multi-source synthetic data, substantially enhancing generalization under complex adversarial conditions, including real-world noise, compression artifacts, and active evasion attacks. Evaluated on the SAFE Challenge, our method achieves second place in both Task 1 (unmodified-audio detection) and Task 3 (laundered-audio detection), demonstrating strong robustness and practical efficacy in realistic, challenging scenarios.
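For concreteness, below is a minimal PyTorch sketch of this kind of SSL front-end pipeline, assuming the Hugging Face `microsoft/wavlm-large` checkpoint. The AASIST graph-attention backend is stood in for by a simple pooling-and-linear head purely for illustration; this is not the authors' actual classifier.

```python
# Minimal sketch: WavLM-large as an SSL front-end for spoof detection.
# Assumes the Hugging Face checkpoint "microsoft/wavlm-large"; the head
# below is an illustrative placeholder for the AASIST backend.
import torch
import torch.nn as nn
from transformers import WavLMModel


class SSLDeepfakeDetector(nn.Module):
    def __init__(self, ssl_name: str = "microsoft/wavlm-large"):
        super().__init__()
        self.frontend = WavLMModel.from_pretrained(ssl_name)  # SSL front-end
        hidden = self.frontend.config.hidden_size  # 1024 for wavlm-large
        # Placeholder head standing in for the AASIST spectro-temporal
        # graph-attention backend described above.
        self.head = nn.Sequential(
            nn.Linear(hidden, 128), nn.ReLU(), nn.Linear(128, 2)
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) at 16 kHz
        feats = self.frontend(waveform).last_hidden_state  # (batch, frames, hidden)
        pooled = feats.mean(dim=1)  # temporal average pooling
        return self.head(pooled)  # bonafide/spoof logits


model = SSLDeepfakeDetector()
logits = model(torch.randn(1, 64600))  # ~4 s of 16 kHz audio
```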
📝 Abstract
The SAFE Challenge evaluates synthetic speech detection across three tasks: unmodified audio, processed audio with compression artifacts, and laundered audio designed to evade detection. We systematically explore self-supervised learning (SSL) front-ends, training data compositions, and audio length configurations for robust deepfake detection. Our AASIST-based approach incorporates a WavLM-large front-end with RawBoost augmentation, trained on a multilingual dataset of 256,600 samples spanning 9 languages and over 70 TTS systems drawn from CodecFake, MLAAD v5, SpoofCeleb, Famous Figures, and MAILABS. Through extensive experimentation with different SSL front-ends, three training data versions, and two audio lengths, we achieved second place in both Task 1 (unmodified audio detection) and Task 3 (laundered audio detection), demonstrating strong generalization and robustness.
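As a rough illustration of the waveform-level augmentation used here, the sketch below implements only the simplest component in the spirit of RawBoost (stationary, signal-independent additive noise at a random SNR); the full RawBoost method also applies convolutive and impulsive signal-dependent distortions, and the function name `add_noise_random_snr` is hypothetical.

```python
# Simplified stand-in for one RawBoost component: stationary
# signal-independent additive noise at a randomly drawn SNR.
import numpy as np


def add_noise_random_snr(x: np.ndarray, snr_db_range=(10.0, 40.0)) -> np.ndarray:
    """Add white noise to waveform `x` at an SNR drawn uniformly from the range."""
    snr_db = np.random.uniform(*snr_db_range)
    signal_power = np.mean(x ** 2) + 1e-12
    noise = np.random.randn(len(x))
    noise_power = np.mean(noise ** 2)
    # Scale the noise so that 10*log10(signal_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
    return x + scale * noise


augmented = add_noise_random_snr(np.random.randn(64600).astype(np.float32))
```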