Comprehensive Layer-wise Analysis of SSL Models for Audio Deepfake Detection

📅 2025-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost and poor cross-lingual/cross-scenario generalization of self-supervised learning (SSL) models—particularly wav2vec 2.0 and HuBERT—in audio deepfake detection. We systematically analyze the discriminative capacity of individual Transformer layers via layer-wise feature freezing, linear probing, and rigorous evaluation across languages (English, Mandarin, Spanish) and domains (speech, singing). Contrary to prevailing assumptions, we find that lower-layer representations exhibit superior forgery discrimination. Building on this insight, we propose a lightweight layer-pruning strategy: retaining only the first 3–6 Transformer layers achieves ≥98% of full-model performance. On multilingual deepfake benchmarks, the pruned models achieve equal-error rates (EER) <1.5% while substantially reducing computational overhead and inference latency. This establishes a new paradigm for efficient, robust audio deepfake detection.
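The summary reports results as equal error rates (EER). For reference, the EER is the operating point where the false-acceptance rate equals the false-rejection rate; a minimal threshold-sweep computation can be sketched in plain NumPy (this is an illustrative sketch, not the paper's released code):

```python
import numpy as np

def compute_eer(scores, labels):
    """Equal error rate: threshold where false-acceptance rate (spoofs
    accepted) equals false-rejection rate (bona fide rejected).

    scores: higher = more likely bona fide; labels: 1 bona fide, 0 spoof.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    fars, frrs = [], []
    # Sweep every distinct score as a candidate decision threshold.
    for t in np.sort(np.unique(scores)):
        accept = scores >= t
        fars.append(np.mean(accept[labels == 0]))   # spoofs let through
        frrs.append(np.mean(~accept[labels == 1]))  # bona fide rejected
    fars, frrs = np.array(fars), np.array(frrs)
    idx = np.argmin(np.abs(fars - frrs))            # closest crossing point
    return float((fars[idx] + frrs[idx]) / 2.0)
```

A perfectly separating detector yields an EER of 0; chance-level scoring yields 0.5.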

📝 Abstract
This paper conducts a comprehensive layer-wise analysis of self-supervised learning (SSL) models for audio deepfake detection across diverse contexts, including multilingual datasets (English, Chinese, Spanish), partial, song, and scene-based deepfake scenarios. By systematically evaluating the contributions of different transformer layers, we uncover critical insights into model behavior and performance. Our findings reveal that lower layers consistently provide the most discriminative features, while higher layers capture less relevant information. Notably, all models achieve competitive equal error rate (EER) scores even when employing a reduced number of layers. This indicates that we can reduce computational costs and increase the inference speed of detecting deepfakes by utilizing only a few lower layers. This work enhances our understanding of SSL models in deepfake detection, offering valuable insights applicable across varied linguistic and contextual settings. Our trained models and code are publicly available: https://github.com/Yaselley/SSL_Layerwise_Deepfake.
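The layer-wise evaluation described above amounts to linear probing: freezing each Transformer layer's features and training only a linear classifier on top. A minimal NumPy sketch of that protocol follows, using synthetic features whose class separability decreases with depth to mimic the paper's finding (the feature generator, layer indices, and separability values are illustrative assumptions, not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(0)

def probe_accuracy(features, labels):
    """Linear probe over frozen features: fit weights by least squares
    on +/-1 targets, then score classification accuracy."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias term
    y = np.where(labels == 1, 1.0, -1.0)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = (X @ w >= 0).astype(int)
    return float(np.mean(pred == labels))

# Toy stand-in for layer-wise SSL features: a class-dependent shift is
# added to Gaussian noise, shrinking with depth (hypothetical values).
n = 200
labels = rng.integers(0, 2, n)
layer_feats = {
    layer: rng.normal(0.0, 1.0, (n, 16)) + sep * labels[:, None]
    for layer, sep in [(1, 2.0), (6, 1.0), (12, 0.2)]
}
accs = {layer: probe_accuracy(f, labels) for layer, f in layer_feats.items()}
```

Under this toy construction the probe on "layer 1" features scores far above the probe on "layer 12" features, mirroring the reported trend that lower layers are the most discriminative.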
Problem

Research questions and friction points this paper is trying to address.

Layer-wise analysis of SSL models
Audio deepfake detection across languages
Reducing computational costs with fewer layers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise SSL model analysis
Lower layers provide discriminative features
Reduced layers enhance computational efficiency
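The efficiency gain claimed above comes from truncating the encoder: since Transformer layers run sequentially, dropping all layers after the first few removes their compute entirely. A hypothetical sketch (not the paper's code) with toy layer functions standing in for Transformer blocks:

```python
# Hypothetical sketch: model an SSL encoder as an ordered stack of layer
# functions; "pruning" keeps only the first k layers of the stack.

def run_encoder(layers, x):
    """Apply each layer in order; return the final output and the
    per-layer hidden states."""
    hidden_states = []
    for layer in layers:
        x = layer(x)
        hidden_states.append(x)
    return x, hidden_states

# Twelve toy layers standing in for wav2vec 2.0's 12 Transformer blocks.
full_stack = [lambda x, i=i: x + i for i in range(12)]

# Keeping only the first 3 layers skips 9/12 of the encoder compute.
pruned_stack = full_stack[:3]
out, states = run_encoder(pruned_stack, 0)
```

In practice the same idea applies to a real checkpoint by slicing its list of encoder layers before fine-tuning the downstream classifier head.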