🤖 AI Summary
This work addresses the deepfake speech detection (DSD) task by proposing a lightweight Depthwise-Inception Network (DIN) integrated with a Contrastive Training Strategy (CTS). Methodologically, input speech is converted into STFT or linear-filter (LF) spectrograms; the DIN extracts discriminative audio embeddings, while the CTS explicitly models the Gaussian distribution of bona fide speech in the embedding space, enabling binary classification via the Euclidean distance from a test sample to this distribution. To the authors' knowledge, this is the first approach to synergistically combine a Depthwise-Inception architecture with contrastive learning for DSD, achieving both distribution modeling and distance-based decision-making within a single, efficient model. Evaluated on ASVspoof 2019 LA, it achieves an EER of 4.6%, accuracy of 95.4%, F1-score of 97.3%, and AUC of 98.9%, with only 1.77M parameters and 985M FLOPs, outperforming the challenge's single-system submissions and supporting real-time deployment.
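The paper's reference code is not reproduced here, but the distance-based decision rule described above is straightforward to sketch. The snippet below fits a Gaussian to bona fide DIN embeddings and scores a test embedding by its Euclidean distance to the mean; `fit_bonafide_gaussian`, `euclidean_score`, `din_embed`, and `threshold` are hypothetical names, and the exact distance formulation in the paper may differ.

```python
import numpy as np

def fit_bonafide_gaussian(embeddings: np.ndarray):
    """Fit a Gaussian to DIN embeddings of bona fide utterances.

    embeddings: (N, D) array, one row per bona fide training utterance.
    """
    mu = embeddings.mean(axis=0)             # mean of genuine speech in embedding space
    cov = np.cov(embeddings, rowvar=False)   # covariance of the fitted Gaussian
    return mu, cov

def euclidean_score(emb: np.ndarray, mu: np.ndarray) -> float:
    """Euclidean distance from a test embedding to the bona fide mean.

    Smaller distance -> closer to genuine speech.
    """
    return float(np.linalg.norm(emb - mu))

# Hypothetical usage; din_embed() stands in for the trained DIN encoder:
#   mu, cov = fit_bonafide_gaussian(np.stack([din_embed(x) for x in bonafide_set]))
#   is_fake = euclidean_score(din_embed(test_utt), mu) > threshold
# where threshold is chosen on a development set, e.g. at the EER operating point.
```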
📝 Abstract
In this paper, we propose a deep neural network approach for deepfake speech detection (DSD) based on a low-complexity Depthwise-Inception Network (DIN) trained with a contrastive training strategy (CTS). In this framework, input audio recordings are first transformed into spectrograms using the Short-Time Fourier Transform (STFT) and a Linear Filter (LF), which are then used to train the DIN. Once trained, the DIN processes bonafide utterances to extract audio embeddings, which are used to construct a Gaussian distribution representing genuine speech. Deepfake detection is then performed by computing the distance between a test utterance's embedding and this distribution to determine whether the utterance is fake or bonafide. To evaluate our proposed systems, we conducted extensive experiments on the benchmark ASVspoof 2019 LA dataset. The experimental results demonstrate the effectiveness of combining the Depthwise-Inception Network with the contrastive learning strategy in distinguishing fake from bonafide utterances. We achieved an Equal Error Rate (EER), Accuracy (Acc.), F1-score, and AUC of 4.6%, 95.4%, 97.3%, and 98.9%, respectively, using a single, low-complexity DIN with just 1.77M parameters and 985M FLOPs on short audio segments (4 seconds). Furthermore, our proposed system outperforms the single-system submissions to the ASVspoof 2019 LA challenge, showcasing its potential for real-time applications.
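For concreteness, below is a minimal sketch of the STFT front end described in the abstract, using librosa. The sampling rate, FFT size, hop length, and padding policy are assumptions rather than the paper's published settings; only the 4-second segment length comes from the abstract.

```python
import numpy as np
import librosa

def stft_spectrogram(path: str, sr: int = 16000, dur: float = 4.0,
                     n_fft: int = 512, hop_length: int = 160) -> np.ndarray:
    """Load a fixed-length audio segment and return a log-magnitude STFT spectrogram."""
    y, _ = librosa.load(path, sr=sr, duration=dur)
    # Pad or trim to exactly `dur` seconds so every input has the same shape
    y = librosa.util.fix_length(y, size=int(sr * dur))
    mag = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop_length))
    return librosa.amplitude_to_db(mag, ref=np.max)
```

The LF variant would additionally apply a linearly spaced filter bank to the STFT magnitudes before log compression; that step is omitted here because its parameters are not given in the abstract.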