Multi-level SSL Feature Gating for Audio Deepfake Detection

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Weak generalization capability of audio deepfake detection models—particularly against unseen attacks and multilingual inputs—remains a critical challenge. To address this, we propose a multi-level self-supervised feature gating mechanism that integrates multi-kernel gated convolutions with Centered Kernel Alignment (CKA)-based diversity regularization. Operating atop XLS-R front-end features, our method jointly optimizes layer-wise modeling of localized and global speech artifacts. This design enhances cross-domain and cross-lingual robustness while improving model interpretability. Extensive experiments on mainstream benchmarks—including cross-domain and multilingual evaluation sets—demonstrate state-of-the-art performance, significantly outperforming existing detection approaches in both accuracy and generalization.

📝 Abstract
Recent advancements in generative AI, particularly in speech synthesis, have enabled the generation of highly natural-sounding synthetic speech that closely mimics human voices. While these innovations hold promise for applications like assistive technologies, they also pose significant risks, including misuse for fraudulent activities, identity theft, and security threats. Current spoofing detection countermeasures remain limited in their generalization to unseen deepfake attacks and languages. To address this, we propose a gating mechanism that extracts relevant features from the XLS-R speech foundation model, used as a front-end feature extractor. For the downstream back-end classifier, we employ Multi-kernel gated Convolution (MultiConv) to capture both local and global speech artifacts. Additionally, we introduce Centered Kernel Alignment (CKA) as a similarity metric to enforce diversity in the features learned by different MultiConv layers. By integrating CKA with our gating mechanism, we hypothesize that each component helps the model learn distinct synthetic speech patterns. Experimental results demonstrate that our approach achieves state-of-the-art performance on in-domain benchmarks while generalizing robustly to out-of-domain datasets, including multilingual speech samples. This underscores its potential as a versatile solution for detecting evolving speech deepfake threats.
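To make the MultiConv idea concrete, here is a minimal NumPy sketch of a GLU-style gated convolution applied with several kernel sizes. The kernel sizes, random weights, and sum fusion are illustrative assumptions for a single-channel signal, not the paper's exact MultiConv design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_conv(x, kernel, gate_kernel):
    # GLU-style gating: a content branch modulated by a sigmoid gate branch.
    content = np.convolve(x, kernel, mode="same")
    gate = sigmoid(np.convolve(x, gate_kernel, mode="same"))
    return content * gate

def multi_kernel_gated_conv(x, kernel_sizes=(3, 5, 7), rng=None):
    # One gated branch per kernel size: small kernels see local artifacts,
    # larger kernels see broader context. Summing the branches is an
    # assumed fusion choice for this sketch.
    rng = np.random.default_rng(0) if rng is None else rng
    out = np.zeros_like(x)
    for k in kernel_sizes:
        kernel = rng.standard_normal(k) / np.sqrt(k)
        gate_kernel = rng.standard_normal(k) / np.sqrt(k)
        out += gated_conv(x, kernel, gate_kernel)
    return out
```

In a trained model the kernels would be learned parameters (e.g. `nn.Conv1d` weights) rather than random draws; the sketch only shows the gating and multi-kernel structure.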
Problem

Research questions and friction points this paper is trying to address.

Detecting highly natural-sounding synthetic speech deepfakes
Addressing generalization to unseen deepfake attacks and languages
Improving feature diversity for distinct synthetic pattern recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gating mechanism extracts XLS-R features
Multi-kernel convolution captures speech artifacts
Centered Kernel Alignment enforces feature diversity
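The CKA-based diversity idea can be sketched with the standard linear CKA formula; the paper may use a kernelized variant, and using the pairwise-CKA sum as a penalty term is an assumption of this sketch rather than the authors' exact loss.

```python
import numpy as np

def linear_cka(X, Y):
    # Linear CKA between two feature matrices over the same n samples:
    # X is (n, d1), Y is (n, d2). Columns are mean-centered first.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

def cka_diversity_penalty(layer_features):
    # Sum of pairwise CKA across layers; minimizing this pushes the
    # representations of different MultiConv layers apart (assumed form).
    total = 0.0
    for i in range(len(layer_features)):
        for j in range(i + 1, len(layer_features)):
            total += linear_cka(layer_features[i], layer_features[j])
    return total
```

Linear CKA is bounded in [0, 1] and invariant to orthogonal transforms and isotropic scaling of either representation, which makes it a convenient similarity measure between layers of different widths.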
Hoan My Tran
Univ Rennes, IRISA, CNRS
Damien Lolive
UBS, CNRS, IRISA
NLP, text-to-speech synthesis, speech processing
Aghilas Sini
Univ Le Mans, LIUM
Arnaud Delhay
Université de Rennes - IRISA
speech processing, computational complexity, analogical proportions, anomaly detection
Pierre-François Marteau
Univ Bretagne Sud, IRISA, CNRS
David Guennec
Univ Rennes, IRISA, CNRS