Less is More for Synthetic Speech Detection in the Wild

📅 2025-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the poor generalization of synthetic speech detectors in real-world conditions. To systematically evaluate robustness and failure modes under controlled distribution shifts, the authors introduce ShiftySpeech, a benchmark spanning 7 domains, 6 TTS systems, 12 vocoders, and 3 languages (3,000+ hours of synthetic speech). Their analysis shows that greater training-data diversity does not necessarily improve cross-domain generalization; instead, a streamlined strategy that trains on a single carefully selected vocoder and a single speaker achieves better robustness, challenging the "more data is better" assumption. Built on self-supervised speech representations, this lightweight recipe achieves state-of-the-art results on the In-the-Wild benchmark, outperforming multi-source training. The work argues for a "less-but-better" detection approach grounded in careful data curation.

📝 Abstract
Driven by advances in self-supervised learning for speech, state-of-the-art synthetic speech detectors have achieved low error rates on popular benchmarks such as ASVspoof. However, prior benchmarks do not address the wide range of real-world variability in speech. Are reported error rates realistic in real-world conditions? To assess detector failure modes and robustness under controlled distribution shifts, we introduce ShiftySpeech, a benchmark with more than 3000 hours of synthetic speech from 7 domains, 6 TTS systems, 12 vocoders, and 3 languages. We found that all distribution shifts degraded model performance, and contrary to prior findings, training on more vocoders, speakers, or with data augmentation did not guarantee better generalization. In fact, we found that training on less diverse data resulted in better generalization, and that a detector fit using samples from a single carefully selected vocoder and a single speaker achieved state-of-the-art results on the challenging In-the-Wild benchmark.
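Detector performance in this line of work is typically reported as Equal Error Rate (EER): the operating point where the false acceptance rate (synthetic speech accepted as real) equals the false rejection rate (real speech flagged as synthetic). As an illustrative sketch only (not code from the paper, and the scores below are hypothetical), EER can be computed from detector scores like this:

```python
def compute_eer(scores, labels):
    """Approximate Equal Error Rate by sweeping every observed score
    as a decision threshold.

    scores: detector outputs, higher = more likely bona fide (real)
    labels: 1 for bona fide speech, 0 for synthetic speech
    """
    n_spoof = sum(1 for l in labels if l == 0)
    n_real = sum(1 for l in labels if l == 1)
    best_gap, eer = float("inf"), 1.0
    for t in sorted(set(scores)):
        # False acceptance: synthetic samples scoring at or above threshold.
        far = sum(1 for s, l in zip(scores, labels) if l == 0 and s >= t) / n_spoof
        # False rejection: bona fide samples scoring below threshold.
        frr = sum(1 for s, l in zip(scores, labels) if l == 1 and s < t) / n_real
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer
```

The paper's central point is that this number, measured on a matched benchmark, can degrade sharply when the evaluation data shifts in domain, vocoder, or language.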
Problem

Research questions and friction points this paper addresses.

Do low error rates on benchmarks like ASVspoof hold up under real-world variability?
Prior benchmarks do not cover the wide range of real-world distribution shifts
Does more diverse training data actually improve detector generalization?
Innovation

Methods, ideas, or system contributions that make the work stand out.

ShiftySpeech benchmark: 3,000+ hours across 7 domains, 6 TTS systems, 12 vocoders, 3 languages
Training on less diverse data improves generalization
A single carefully selected vocoder and speaker achieve state-of-the-art on In-the-Wild