A Comparative Study on Proactive and Passive Detection of Deepfake Speech

📅 2025-06-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of a unified evaluation standard for proactive watermarking and passive detection methods in deepfake speech authentication. We propose the first cross-paradigm (proactive/passive) standardized evaluation framework, enabling fair, head-to-head comparison of performance and robustness under identical datasets, consistent metrics, and diverse adversarial attacks. Experimental results reveal distinct vulnerability patterns: proactive and passive models exhibit differential sensitivity to phonetic perturbations—including pitch shifting, temporal stretching, and spectral distortion—while demonstrating complementary strengths. The framework includes fully open-sourced code, reproducible protocols, and benchmarking tools, providing both theoretical guidance and practical infrastructure for informed method selection in real-world deployment scenarios.

📝 Abstract
Solutions for defending against deepfake speech fall into two categories: proactive watermarking models and passive conventional deepfake detectors. While both address common threats, their differences in training, optimization, and evaluation prevent a unified protocol for jointly evaluating them and selecting the best solution for a given case. This work proposes a framework to evaluate both model types in deepfake speech detection. To ensure fair comparison and minimize discrepancies, all models were trained and tested on common datasets, with performance evaluated using a shared metric. We also analyze their robustness against various adversarial attacks, showing that different models exhibit distinct vulnerabilities to different speech attribute distortions. Our training and evaluation code is available on GitHub.
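The abstract does not name the shared metric, but the equal error rate (EER) is the de facto standard for deepfake speech detection, so a metric like it is plausibly what a cross-paradigm comparison would use. As a minimal sketch (the function name and score convention are assumptions, not the paper's API), an EER that works identically for watermark-decoder scores and detector scores might look like:

```python
import numpy as np

def equal_error_rate(real_scores, fake_scores):
    """EER: operating point where false-acceptance and false-rejection rates meet.

    Convention assumed here: higher score = more likely genuine/real.
    Works for any scalar score, whether from a passive detector or a
    watermark-recovery confidence, which is what makes a shared metric
    possible across paradigms.
    """
    thresholds = np.sort(np.concatenate([real_scores, fake_scores]))
    # False acceptance: fake audio scored at or above the threshold.
    far = np.array([(fake_scores >= t).mean() for t in thresholds])
    # False rejection: real audio scored below the threshold.
    frr = np.array([(real_scores < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0
```

With perfectly separated scores the EER is 0; overlapping score distributions push it toward 0.5 (chance level), giving one number that is comparable across both model families.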
Problem

Research questions and friction points this paper is trying to address.

Compare proactive and passive deepfake speech detection methods
Develop unified framework for evaluating both model types
Analyze robustness against adversarial attacks and vulnerabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes framework for evaluating deepfake detection models
Uses common datasets and shared metrics
Analyzes robustness against adversarial attacks
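The robustness analysis perturbs speech attributes such as temporal stretching and spectral distortion. The paper's actual attack implementations are not reproduced here; as an illustrative sketch under assumed parameters, two such perturbations can be written with plain NumPy:

```python
import numpy as np

def time_stretch(signal, rate):
    """Naive temporal stretch by linear resampling (rate > 1 shortens the clip).

    Note: this also shifts pitch; a pitch-preserving stretch (or a pure
    pitch shift) would need a phase vocoder, as in librosa.effects.
    """
    n_out = int(len(signal) / rate)
    src_idx = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(src_idx, np.arange(len(signal)), signal)

def spectral_distortion(signal, noise_scale=0.05, seed=0):
    """Multiplicative noise on the magnitude spectrum, phase kept intact."""
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(signal)
    mag = np.abs(spec) * (1.0 + noise_scale * rng.standard_normal(spec.shape))
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(signal))
```

Attacks like these probe the two paradigms differently: a watermark embedded in the spectrum may survive time stretching but degrade under spectral noise, while a passive detector's learned artifacts can show the opposite sensitivity, which is the kind of differential vulnerability the framework is designed to expose.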