Benchmarking Fake Voice Detection in the Fake Voice Generation Arms Race

📅 2025-10-07
🤖 AI Summary
The proliferation of deepfake speech and the absence of standardized evaluation protocols for detection methods hinder progress in audio security. Method: This work establishes the first large-scale, cross-domain benchmark for fake speech detection, systematically evaluating eight state-of-the-art detectors on speech synthesized by 20 mainstream text-to-speech systems. It also introduces a reproducible adversarial testing framework to assess real-world robustness. Contribution/Results: The study identifies severe cross-domain performance degradation and critical security vulnerabilities in current detectors. To address the lack of standardized evaluation, it proposes a comprehensive, multi-dimensional metric suite covering generalization, domain robustness, and adversarial resilience. The open-source, extensible benchmark enables rigorous, comparable assessment and supports a shift from isolated "point defenses" toward holistic, system-level robustness in trustworthy AI-based speech security.

📝 Abstract
As advances in synthetic voice generation accelerate, an increasing variety of fake voice generators have emerged, producing audio that is often indistinguishable from real human speech. This evolution poses new and serious threats across sectors where audio recordings serve as critical evidence. Although fake voice detectors are also advancing, the arms race between fake voice generation and detection has become more intense and complex. In this work, we present the first large-scale, cross-domain evaluation of fake voice detectors, benchmarking 8 state-of-the-art models against datasets synthesized by 20 different fake voice generation systems. To the best of our knowledge, this is the most comprehensive cross-domain assessment conducted to date. Our study reveals substantial security vulnerabilities in current fake voice detection systems, underscoring critical gaps in their real-world robustness. To advance the field, we propose a unified and effective metric that consolidates the diverse and often inconsistent evaluation criteria previously used across different studies. This metric enables standardized, straightforward comparisons of the robustness of fake voice detectors. We conclude by offering actionable recommendations for building more resilient fake voice detection technologies, with the broader goal of reinforcing the foundations of AI security and trustworthiness.
Problem

Research questions and friction points this paper is trying to address.

Evaluating fake voice detection against diverse synthetic generators
Identifying security vulnerabilities in current voice detection systems
Proposing unified metrics for standardized robustness assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarked 8 detectors against 20 fake voice systems
Proposed unified metric for standardized robustness evaluation
Provided actionable recommendations for resilient detection technologies
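The paper's unified robustness metric is not spelled out on this page, but the standard building block in fake-voice detection benchmarks is the equal error rate (EER), computed per generator or domain and then aggregated. The sketch below is illustrative only: the function names, the score convention (higher score means "more likely fake"), and the worst-case aggregation are assumptions, not the paper's actual formulation.

```python
def eer(real_scores, fake_scores):
    """Equal error rate: the operating point where the false-accept rate
    (real speech flagged as fake) and false-reject rate (fake speech
    passed as real) are closest. Assumes higher score = more likely fake."""
    best_gap, best_eer = float("inf"), 1.0
    for t in sorted(set(real_scores) | set(fake_scores)):
        far = sum(s >= t for s in real_scores) / len(real_scores)
        frr = sum(s < t for s in fake_scores) / len(fake_scores)
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

def unified_robustness(per_domain_eers):
    """Hypothetical aggregate: report the worst per-domain EER, so a
    detector cannot hide cross-domain failures behind a strong average."""
    return max(per_domain_eers.values())

# Example: a detector with cleanly separated scores in one domain
# but heavy overlap in another is scored by its weakest domain.
clean = eer([0.1, 0.2, 0.3], [0.7, 0.8, 0.9])       # 0.0
hard = eer([0.1, 0.6], [0.4, 0.9])                  # 0.5
print(unified_robustness({"clean_tts": clean, "hard_tts": hard}))
```

A worst-case aggregate is one of several defensible choices; a mean or a weighted combination with adversarial-resilience terms would also fit the multi-dimensional suite the summary describes.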
Xutao Mao
Vanderbilt University
Ke Li
Vanderbilt University
Cameron Baird
Vanderbilt University
Ezra Xuanru Tao
Vanderbilt University
Dan Lin
Nanyang Technological University (NTU)
Data Mining and Machine Learning · Computer Vision