🤖 AI Summary
Weak generalization capability of audio deepfake detection models—particularly against unseen attacks and multilingual inputs—remains a critical challenge. To address this, we propose a multi-level self-supervised feature gating mechanism that integrates multi-kernel gated convolutions with Centered Kernel Alignment (CKA)-based diversity regularization. Operating atop XLS-R front-end features, our method jointly optimizes layer-wise modeling of localized and global speech artifacts. This design enhances cross-domain and cross-lingual robustness while improving model interpretability. Extensive experiments on mainstream benchmarks—including cross-domain and multilingual evaluation sets—demonstrate state-of-the-art performance, significantly outperforming existing detection approaches in both accuracy and generalization.
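The multi-kernel gated convolution idea above can be illustrated with a minimal single-channel sketch: for each kernel size, a content convolution is modulated elementwise by a sigmoid gate convolution, and the per-kernel outputs are combined. This is only an illustrative toy with random, hypothetical weights, not the paper's actual MultiConv implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multi_kernel_gated_conv1d(x, kernel_sizes=(3, 5, 7), rng=None):
    """Toy multi-kernel gated 1D convolution (hypothetical sketch).

    For each kernel size k, a "content" convolution is gated elementwise by
    a sigmoid "gate" convolution; outputs are averaged across kernel sizes
    so small kernels capture local artifacts and large ones wider context.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    outputs = []
    for k in kernel_sizes:
        # Random weights stand in for learned filters in this sketch
        w_content = rng.standard_normal(k) / np.sqrt(k)
        w_gate = rng.standard_normal(k) / np.sqrt(k)
        content = np.convolve(x, w_content, mode="same")
        gate = sigmoid(np.convolve(x, w_gate, mode="same"))  # values in (0, 1)
        outputs.append(content * gate)
    return np.mean(outputs, axis=0)
```

In a real model the convolutions would be learned multi-channel layers (e.g. in PyTorch), but the gating pattern `content * sigmoid(gate)` is the core mechanism.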
📝 Abstract
Recent advancements in generative AI, particularly in speech synthesis, have enabled the generation of highly natural-sounding synthetic speech that closely mimics human voices. While these innovations hold promise for applications like assistive technologies, they also pose significant risks, including misuse for fraudulent activities, identity theft, and security threats. Current spoofing detection countermeasures remain limited in their generalization to unseen deepfake attacks and languages. To address this, we propose a gating mechanism that extracts relevant features from the XLS-R speech foundation model, which serves as a front-end feature extractor. For the downstream back-end classifier, we employ Multi-kernel gated Convolution (MultiConv) to capture both local and global speech artifacts. Additionally, we introduce Centered Kernel Alignment (CKA) as a similarity metric to enforce diversity in the features learned across different MultiConv layers. By integrating CKA with our gating mechanism, we hypothesize that each component helps improve the learning of distinct synthetic speech patterns. Experimental results demonstrate that our approach achieves state-of-the-art performance on in-domain benchmarks while generalizing robustly to out-of-domain datasets, including multilingual speech samples. This underscores its potential as a versatile solution for detecting evolving speech deepfake threats.
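For intuition, the linear form of Centered Kernel Alignment can be computed directly from two layers' feature matrices; a diversity regularizer of the kind described above would penalize high CKA between MultiConv layers. The sketch below shows standard linear CKA on numpy arrays; the function name and shapes are illustrative, not taken from the paper's code.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices.

    X: (n_samples, d1), Y: (n_samples, d2), rows paired by sample.
    Returns a similarity in [0, 1]; identical representations give 1,
    so minimizing CKA between layers encourages feature diversity.
    """
    # Center each feature dimension
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Squared Frobenius norm of the cross-covariance, normalized by
    # the self-covariance norms of each representation
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)
```

Because CKA is differentiable, the same quantity can be computed on framework tensors and added to the training loss as a penalty term between pairs of layer outputs.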