🤖 AI Summary
Traditional single-center one-class learning (OCL) for audio deepfake detection fails to adequately model intra-class variability of genuine speech—particularly overlooking critical cues such as speech quality—leading to poor generalization against unseen attacks. To address this, we propose a speech-quality-aware multi-center OCL framework: leveraging Mean Opinion Score (MOS) to partition the feature space into high- and low-quality subspaces; learning separate, compact class centers for each subspace; and introducing a quality-label-free multi-center ensemble scoring mechanism that adaptively optimizes decision thresholds. Evaluated on the In-the-Wild dataset, our method achieves a 5.09% equal error rate (EER), significantly outperforming existing OCL-based approaches. This work is the first to systematically incorporate speech quality priors into the OCL paradigm, thereby enhancing robustness against previously unseen deepfake attacks.
📝 Abstract
Recent work shows that one-class learning can detect unseen deepfake attacks by modeling a compact distribution of bona fide speech around a single centroid. However, the single-centroid assumption can oversimplify the bona fide speech representation and overlook useful cues, such as speech quality, which reflects the naturalness of the speech. Speech quality can be easily obtained using existing speech quality assessment models that estimate it through Mean Opinion Score. In this paper, we propose QAMO: Quality-Aware Multi-Centroid One-Class Learning for speech deepfake detection. QAMO extends conventional one-class learning by introducing multiple quality-aware centroids. In QAMO, each centroid is optimized to represent a distinct speech quality subspace, enabling better modeling of intra-class variability in bona fide speech. In addition, QAMO supports a multi-centroid ensemble scoring strategy, which improves decision thresholding and reduces the need for quality labels during inference. With two centroids representing high- and low-quality speech, our proposed QAMO achieves an equal error rate of 5.09% on the In-the-Wild dataset, outperforming previous one-class and quality-aware systems.
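The multi-centroid ensemble scoring idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of cosine distance, and the toy embeddings are all assumptions; the key point is that a test sample is scored against the *nearest* bona fide centroid, so no quality label is needed at inference time.

```python
import numpy as np

def multi_centroid_score(embedding, centroids):
    """Illustrative ensemble score: cosine distance from a test embedding
    to the nearest quality-aware centroid (lower = more bona-fide-like).
    Shapes and distance metric are assumptions, not the paper's exact setup."""
    emb = embedding / np.linalg.norm(embedding)
    cents = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    dists = 1.0 - cents @ emb  # cosine distance to each centroid
    # Quality-label-free ensemble: a sample close to ANY bona fide
    # centroid (high- or low-quality) scores as genuine.
    return dists.min()

# Toy example: two 4-D centroids standing in for the high- and
# low-quality bona fide subspaces.
centroids = np.array([[1.0, 0.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0, 0.0]])
bona_fide = np.array([0.9, 0.1, 0.0, 0.0])  # near the high-quality centroid
spoof = np.array([0.0, 0.0, 1.0, 0.0])      # far from both centroids

print(multi_centroid_score(bona_fide, centroids) <
      multi_centroid_score(spoof, centroids))  # → True
```

A single decision threshold on this min-distance score then separates bona fide from spoofed speech, regardless of which quality subspace the genuine sample falls into.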