🤖 AI Summary
This paper addresses the poor generalization of synthetic speech detectors in real-world conditions. To systematically evaluate robustness and failure modes under controlled distribution shifts, we introduce ShiftySpeech, a comprehensive benchmark of more than 3,000 hours of synthetic speech spanning 7 domains, 6 TTS systems, 12 vocoders, and 3 languages. Our empirical analysis reveals that training-data diversity does not necessarily improve cross-domain generalization; instead, a streamlined strategy using a single carefully selected vocoder and a single speaker achieves superior robustness, challenging the "more data is better" paradigm. Building on self-supervised speech representations, our detector achieves state-of-the-art performance on the In-the-Wild benchmark, outperforming detectors trained on more diverse multi-source data. This work supports a "less-but-better" detection paradigm grounded in principled data curation and representation learning.
📝 Abstract
Driven by advances in self-supervised learning for speech, state-of-the-art synthetic speech detectors have achieved low error rates on popular benchmarks such as ASVspoof. However, prior benchmarks do not capture the wide range of real-world variability in speech. Are the reported error rates realistic in real-world conditions? To assess detector failure modes and robustness under controlled distribution shifts, we introduce ShiftySpeech, a benchmark with more than 3,000 hours of synthetic speech spanning 7 domains, 6 TTS systems, 12 vocoders, and 3 languages. We found that every distribution shift degraded model performance and, contrary to prior findings, that training on more vocoders, on more speakers, or with data augmentation did not guarantee better generalization. In fact, training on less diverse data resulted in better generalization: a detector trained on samples from a single carefully selected vocoder and a single speaker achieved state-of-the-art results on the challenging In-the-Wild benchmark.
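The "error rates" reported on benchmarks like ASVspoof and In-the-Wild are typically equal error rates (EER): the operating point where the false acceptance rate on spoofed speech equals the false rejection rate on bona fide speech. As a minimal illustrative sketch (the score values below are invented, not results from the paper), EER can be approximated from detector scores by sweeping each observed score as a threshold:

```python
def equal_error_rate(bonafide_scores, spoof_scores):
    """Approximate the EER by sweeping every observed score as a
    decision threshold. Higher scores mean 'more likely bona fide'."""
    thresholds = sorted(set(bonafide_scores) | set(spoof_scores))
    best_gap, eer = float("inf"), 1.0
    for t in thresholds:
        # False rejection rate: bona fide utterances scored below threshold.
        frr = sum(s < t for s in bonafide_scores) / len(bonafide_scores)
        # False acceptance rate: spoofed utterances scored at/above threshold.
        far = sum(s >= t for s in spoof_scores) / len(spoof_scores)
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer

# Toy example with made-up scores: one spoofed utterance (0.75)
# scores above one bona fide utterance (0.7), so EER ~ 1/3.
print(equal_error_rate([0.9, 0.8, 0.7], [0.2, 0.3, 0.75]))
```

A lower EER means better separation of real and synthetic speech; the distribution shifts studied here raise EER when test-time TTS systems, vocoders, domains, or languages differ from those seen in training.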