🤖 AI Summary
To address poor generalization in open-world audio deepfake detection—caused by unknown spoofing methods in the test domain—this paper proposes RawNetLite, a lightweight end-to-end model that operates directly on raw waveforms, eliminating handcrafted feature engineering. Its key contributions are: (1) a cross-domain mixup training strategy jointly optimized with Focal Loss to emphasize hard samples; and (2) systematic modeling of codec-induced distortion, integrated with waveform-level augmentations—including pitch shifting, additive noise, and time stretching—to improve robustness. On the FakeOrReal benchmark, RawNetLite achieves a 99.7% F1 score (EER = 0.25%). Under challenging cross-domain evaluation (ASVspoof2021 + CodecFake), it maintains 83.4% F1 (EER = 16.4%), significantly outperforming existing lightweight approaches.
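The two training ingredients named above can be sketched in a few lines. The snippet below is a minimal NumPy illustration of binary Focal Loss and waveform-level mixup; the hyperparameter values (`alpha=0.25`, `gamma=2.0`, Beta(0.4, 0.4) mixing) are common defaults from the focal loss and mixup literature, not values taken from this paper, and the paper's actual implementation operates inside a PyTorch training loop.

```python
import numpy as np

def focal_loss(probs, targets, alpha=0.25, gamma=2.0, eps=1e-8):
    """Binary focal loss: the (1 - p_t)^gamma factor down-weights
    easy, confidently classified samples so training gradient is
    dominated by hard or ambiguous ones."""
    probs = np.clip(probs, eps, 1.0 - eps)
    p_t = np.where(targets == 1, probs, 1.0 - probs)       # prob of the true class
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)   # class-balance weight
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

def mixup_waveforms(x1, y1, x2, y2, beta=0.4, rng=None):
    """Cross-domain mixup: convex combination of two raw waveforms
    (ideally drawn from different source domains) and their labels."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(beta, beta)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2
```

Note how a confident correct prediction (p close to 1) contributes almost nothing to the focal loss, while a borderline one still incurs a substantial penalty; this is the "hard-sample learning" behavior the summary refers to.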
📝 Abstract
Audio deepfakes represent a growing threat to digital security and trust, leveraging advanced generative models to produce synthetic speech that closely mimics real human voices. Detecting such manipulations is especially challenging under open-world conditions, where spoofing methods encountered during testing may differ from those seen during training. In this work, we propose an end-to-end deep learning framework for audio deepfake detection that operates directly on raw waveforms. Our model, RawNetLite, is a lightweight convolutional-recurrent architecture designed to capture both spectral and temporal features without handcrafted preprocessing. To enhance robustness, we introduce a training strategy that combines data from multiple domains and adopts Focal Loss to emphasize difficult or ambiguous samples. We further demonstrate that incorporating codec-based manipulations and applying waveform-level audio augmentations (e.g., pitch shifting, noise, and time stretching) leads to significant generalization improvements under realistic acoustic conditions. The proposed model achieves over 99.7% F1 and 0.25% EER on in-domain data (FakeOrReal), and up to 83.4% F1 with 16.4% EER on a challenging out-of-distribution test set (ASVspoof2021 + CodecFake). These findings highlight the importance of diverse training data, tailored objective functions, and audio augmentations in building resilient and generalizable audio forgery detectors. Code and pretrained models are available at https://iplab.dmi.unict.it/mfs/Deepfakes/PaperRawNet2025/.
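To make the waveform-level augmentations concrete, here is a minimal NumPy sketch of two of them: additive noise at a target SNR and a naive time stretch. These are simplified stand-ins, not the paper's implementation; production pipelines typically use `librosa.effects.time_stretch` and `librosa.effects.pitch_shift`, which preserve pitch via a phase vocoder, whereas the crude resampling below shifts pitch along with duration.

```python
import numpy as np

def add_noise(wave, snr_db, rng=None):
    """Add white Gaussian noise so that the resulting
    signal-to-noise ratio is approximately snr_db (in dB)."""
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(wave ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=wave.shape)
    return wave + noise

def time_stretch(wave, rate):
    """Naive time stretch by linear resampling: rate > 1 shortens the
    signal. Unlike a phase vocoder, this also changes the pitch."""
    n_out = int(len(wave) / rate)
    idx = np.linspace(0.0, len(wave) - 1, n_out)
    return np.interp(idx, np.arange(len(wave)), wave)
```

Codec-induced distortion, the other augmentation family the abstract mentions, is usually simulated by round-tripping the waveform through a lossy encoder (e.g. MP3 or Opus via ffmpeg) rather than with array operations, so it is omitted here.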