🤖 AI Summary
This work addresses the synthetic speech provenance problem, i.e., identifying the specific AI system that generated a given forged speech sample, by proposing an end-to-end dual-path framework that integrates classification and metric learning, filling a research gap in robust provenance attribution beyond spoofing detection. It is, per the authors, the first systematic validation of ResNet-based backbones for speech source attribution, leveraging contrastive and triplet losses within a unified metric-learning paradigm. Evaluated on the MLAADv5 benchmark, the method achieves strong performance, matching and in some cases surpassing leading self-supervised representation learning approaches in classification accuracy. The study extends speaker recognition techniques toward forensic audio applications, establishing a lightweight, efficient, and deployable paradigm for synthetic speech provenance analysis, supported by empirical validation.
📝 Abstract
This paper addresses source tracing in synthetic speech: identifying the generative systems behind manipulated audio using speaker-recognition-inspired pipelines. While prior work focuses on spoofing detection, source tracing still lacks robust solutions. We evaluate two approaches, classification-based and metric-learning-based, on the MLAADv5 benchmark using ResNet and self-supervised learning (SSL) backbones. The results show that ResNet achieves competitive performance with the metric-learning approach, matching and even exceeding SSL-based systems. Our work demonstrates ResNet's viability for source tracing while underscoring the need to optimize SSL representations for this task. It bridges speaker recognition methodologies with audio forensic challenges, offering new directions for combating synthetic media manipulation.
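To make the dual-path idea concrete, below is a minimal PyTorch sketch of how a classification head and a triplet-loss metric-learning objective can be trained jointly on shared embeddings. This is an illustrative assumption, not the paper's implementation: the `SourceTracer` module, its placeholder encoder, and all dimensions and weights are hypothetical stand-ins for the actual ResNet backbone and training setup.

```python
# Illustrative sketch only (assumes PyTorch). The encoder below is a placeholder
# for a ResNet speaker-embedding backbone; dimensions and the loss weighting are
# arbitrary choices, not values from the paper.
import torch
import torch.nn as nn

class SourceTracer(nn.Module):
    def __init__(self, feat_dim=80, emb_dim=256, num_systems=10):
        super().__init__()
        # Placeholder encoder standing in for a ResNet backbone.
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, emb_dim),
        )
        # Classification path: predict which known TTS system produced the sample.
        self.classifier = nn.Linear(emb_dim, num_systems)

    def forward(self, x):
        emb = self.encoder(x)          # (batch, emb_dim) source embedding
        logits = self.classifier(emb)  # (batch, num_systems) system logits
        return emb, logits

model = SourceTracer()
ce_loss = nn.CrossEntropyLoss()
triplet_loss = nn.TripletMarginLoss(margin=0.3)

# Toy batch: anchor/positive from the same generating system, negative from another.
anchor, positive, negative = (torch.randn(8, 80) for _ in range(3))
labels = torch.randint(0, 10, (8,))

emb_a, logits_a = model(anchor)
emb_p, _ = model(positive)
emb_n, _ = model(negative)

# Joint objective: cross-entropy over known systems plus a metric-learning term
# that pulls same-system embeddings together and pushes different systems apart.
loss = ce_loss(logits_a, labels) + triplet_loss(emb_a, emb_p, emb_n)
loss.backward()
```

In this kind of setup, the classification path handles closed-set attribution over known systems, while the metric-learned embedding space supports similarity-based attribution of samples from systems unseen during training.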