Two Views, One Truth: Spectral and Self-Supervised Features Fusion for Robust Speech Deepfake Detection

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Audio deepfake detection faces two key challenges: (1) poor robustness of unimodal features, whether raw waveforms or spectrograms, and (2) limited generalization to unseen spoofing algorithms. To address these, we propose a multimodal self-supervised fusion framework that jointly leverages waveform representations extracted via self-supervised learning (SSL) and handcrafted spectral features, including MFCCs, LFCCs, and CQCCs. Cross-modal alignment and adaptive fusion are achieved through a cross-attention mechanism, while a learnable gating module enhances the discriminability of the fused representation. Crucially, the approach is algorithm-agnostic and domain-independent, enabling universal detection without reliance on specific generative models. Evaluated on four public benchmarks, it significantly outperforms SSL-only baselines, reducing the equal error rate (EER) by 38% relative. This demonstrates the effectiveness of multimodal collaborative modeling in improving both robustness and generalization for audio deepfake detection.
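The cross-attention plus learnable-gate fusion described above can be sketched in a minimal, framework-free form. The dimensions, random projections, and single-head attention here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical dimensions: T frames, d-dim features for both views.
T, d = 50, 64
ssl_feats = rng.standard_normal((T, d))    # stand-in for SSL waveform embeddings
spec_feats = rng.standard_normal((T, d))   # stand-in for projected MFCC/LFCC/CQCC

# Cross-attention: SSL frames query the spectral view.
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
Q, K, V = ssl_feats @ Wq, spec_feats @ Wk, spec_feats @ Wv
attn = softmax(Q @ K.T / np.sqrt(d))       # (T, T) alignment weights
aligned_spec = attn @ V                    # spectral info aligned to SSL frames

# Learnable gate: sigmoid-weighted blend of the two views.
Wg = rng.standard_normal((2 * d, d)) / np.sqrt(2 * d)
gate = 1.0 / (1.0 + np.exp(-np.concatenate([ssl_feats, aligned_spec], axis=-1) @ Wg))
fused = gate * ssl_feats + (1.0 - gate) * aligned_spec  # (T, d) fused representation
```

In a trained model the projections `Wq, Wk, Wv, Wg` would be learned parameters and the fused frames would feed a downstream bona fide/spoof classifier.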

📝 Abstract
Recent advances in synthetic speech have made audio deepfakes increasingly realistic, posing significant security risks. Existing detection methods that rely on a single modality, either raw waveform embeddings or spectral features, are vulnerable to non-spoof disturbances and often overfit to known forgery algorithms, resulting in poor generalization to unseen attacks. To address these shortcomings, we investigate hybrid fusion frameworks that integrate self-supervised learning (SSL) based representations with handcrafted spectral descriptors (MFCC, LFCC, CQCC). By aligning and combining complementary information across modalities, these fusion approaches capture subtle artifacts that single-feature approaches typically overlook. We explore several fusion strategies, including simple concatenation, cross-attention, mutual cross-attention, and a learnable gating mechanism, to optimally blend SSL features with fine-grained spectral cues. We evaluate our approach on four challenging public benchmarks and report generalization performance. All fusion variants consistently outperform an SSL-only baseline, with the cross-attention strategy achieving the best generalization with a 38% relative reduction in equal error rate (EER). These results confirm that joint modeling of waveform and spectral views produces robust, domain-agnostic representations for audio deepfake detection.
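Equal error rate, the metric behind the reported 38% relative reduction, can be computed from detection scores as below. The scores, labels, and baseline numbers are made up for illustration and are not the paper's actual results:

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER: operating point where the false accept rate (spoof passed as
    bona fide) equals the false reject rate (bona fide flagged as spoof)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    fars, frrs = [], []
    for t in np.sort(np.unique(scores)):
        pred = scores >= t                        # higher score = bona fide
        fars.append(np.mean(pred[labels == 0]))   # spoofs accepted
        frrs.append(np.mean(~pred[labels == 1]))  # bona fide rejected
    fars, frrs = np.array(fars), np.array(frrs)
    i = np.argmin(np.abs(fars - frrs))
    return (fars[i] + frrs[i]) / 2

# Hypothetical detector scores: higher = more likely bona fide.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   0,   1,   0,   0,   0]
eer = equal_error_rate(scores, labels)            # 0.25 for this toy data

# Relative EER reduction against an SSL-only baseline (illustrative values):
baseline_eer, fused_eer = 0.05, 0.031
rel_reduction = (baseline_eer - fused_eer) / baseline_eer  # 0.38
```

In practice EER is usually read off a full ROC/DET curve (e.g. via scikit-learn's `roc_curve`); the threshold sweep above is the same idea in explicit form.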
Problem

Research questions and friction points this paper is trying to address.

Detect realistic audio deepfakes using multimodal fusion
Improve generalization by combining SSL and spectral features
Reduce vulnerability to unseen attacks and disturbances
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fusion of SSL and spectral features
Cross-attention for optimal blending
Robust, domain-agnostic representations
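Among the fusion strategies the abstract lists, mutual cross-attention is the symmetric variant: each view attends to the other before the results are concatenated. A minimal sketch, with illustrative shapes and shared random projections standing in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(query, context, Wq, Wk, Wv):
    """Single-head cross-attention: `query` frames gather from `context`."""
    d = query.shape[-1]
    Q, K, V = query @ Wq, context @ Wk, context @ Wv
    return softmax(Q @ K.T / np.sqrt(d)) @ V

T, d = 50, 64
ssl_feats = rng.standard_normal((T, d))    # stand-in for SSL embeddings
spec_feats = rng.standard_normal((T, d))   # stand-in for spectral features
W = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3)]

# Mutual cross-attention: both directions, then concatenate.
ssl_to_spec = cross_attend(ssl_feats, spec_feats, *W)
spec_to_ssl = cross_attend(spec_feats, ssl_feats, *W)
fused = np.concatenate([ssl_to_spec, spec_to_ssl], axis=-1)  # (T, 2*d)
```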