🤖 AI Summary
The proliferation of deepfake speech and the absence of standardized evaluation protocols for detection methods hinder progress in audio security. Method: This work establishes the first large-scale, cross-domain benchmark for fake speech detection, systematically evaluating eight state-of-the-art detectors on speech synthesized by 20 mainstream text-to-speech systems. It also introduces a reproducible adversarial testing framework to assess real-world robustness. Contribution/Results: The study identifies severe cross-domain performance degradation and critical security vulnerabilities in current detectors. To address the lack of standardized evaluation, it proposes a comprehensive, multi-dimensional metric suite covering generalization, domain robustness, and adversarial resilience. The open-source, extensible benchmark enables rigorous, comparable assessment and supports the shift from isolated "point defenses" toward holistic, system-level robustness in trustworthy AI-based speech security.
📝 Abstract
As advances in synthetic voice generation accelerate, a growing variety of fake voice generators has emerged, producing audio that is often indistinguishable from real human speech. This evolution poses new and serious threats across sectors where audio recordings serve as critical evidence. Although fake voice detectors are also advancing, the arms race between fake voice generation and detection has become more intense and complex. In this work, we present the first large-scale, cross-domain evaluation of fake voice detectors, benchmarking eight state-of-the-art models against datasets synthesized by 20 different fake voice generation systems. To the best of our knowledge, this is the most comprehensive cross-domain assessment conducted to date. Our study reveals substantial security vulnerabilities in current fake voice detection systems, underscoring critical gaps in their real-world robustness. To advance the field, we propose a unified and effective metric that consolidates the diverse and often inconsistent evaluation criteria previously used across different studies. This metric enables standardized, straightforward comparisons of the robustness of fake voice detectors. We conclude by offering actionable recommendations for building more resilient fake voice detection technologies, with the broader goal of reinforcing the foundations of AI security and trustworthiness.
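The abstract does not spell out the unified metric itself, but the evaluation pipeline it implies, per-generator detection performance aggregated into a single robustness figure, can be illustrated with a small sketch. The snippet below is a hypothetical example under stated assumptions, not the paper's implementation: it assumes one detector's scores on audio from each generation system, uses the standard equal error rate (EER) as the per-domain measure, and the names `equal_error_rate` and `cross_domain_robustness` are purely illustrative.

```python
# Hypothetical sketch of cross-domain robustness scoring for a fake-voice detector.
# The paper's actual unified metric is not given in the abstract; this shows one
# plausible aggregation: per-domain EER summarized by its mean and worst case.
import numpy as np
from sklearn.metrics import roc_curve


def equal_error_rate(labels: np.ndarray, scores: np.ndarray) -> float:
    """EER: operating point where false-acceptance and false-rejection rates cross."""
    fpr, tpr, _ = roc_curve(labels, scores, pos_label=1)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return float((fpr[idx] + fnr[idx]) / 2.0)


def cross_domain_robustness(per_domain_results: dict) -> dict:
    """Aggregate per-generator (domain) EERs into summary statistics.

    per_domain_results maps a generator name (e.g. one of the 20 TTS systems)
    to a (labels, detector_scores) pair for audio synthesized by that system.
    """
    eers = {name: equal_error_rate(y, s) for name, (y, s) in per_domain_results.items()}
    values = list(eers.values())
    return {
        "per_domain_eer": eers,            # generalization across generators
        "mean_eer": float(np.mean(values)),
        "worst_case_eer": float(np.max(values)),  # a simple domain-robustness proxy
    }
```

Reporting both the mean and the worst-case per-domain EER is one simple way to capture average generalization and domain robustness in a single, comparable figure; the paper's actual metric suite may aggregate these dimensions differently.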