🤖 AI Summary
This paper introduces Singing Voice Deepfake Source Attribution (SVDSA), a novel task aimed at identifying and tracing the generative model responsible for a synthetic singing voice. To exploit source-specific cues, including timbral distortions, pitch anomalies, and synthesis artifacts, we pioneer the use of multimodal foundation models (MMFMs, e.g., ImageBind, LanguageBind) for SVDSA, hypothesizing that their cross-modal pre-training makes them better suited to these cues than speech-only or music-only foundation models. We further propose COFFE, a fusion framework that combines representations from multiple foundation models through cross-modal feature alignment and a Chernoff-distance-driven fusion loss. Our experiments verify that MMFMs are the most effective individual models for SVDSA, and that COFFE outperforms all individual foundation models and baseline fusion strategies, establishing a new state of the art. The approach extends forensic audio attribution for deepfake provenance analysis beyond conventional single-modality methods.
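For reference, the Chernoff distance invoked by COFFE's fusion loss is a standard divergence between two probability densities $p$ and $q$; the summary does not spell out how COFFE applies it to fused embeddings, so only the textbook definition is given here:

$$
D_C(p, q) = -\ln \min_{0 < \alpha < 1} \int p(x)^{\alpha} \, q(x)^{1-\alpha} \, dx
$$

Fixing $\alpha = \tfrac{1}{2}$ recovers the Bhattacharyya distance as a special case.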
📝 Abstract
In this work, we introduce the task of singing voice deepfake source attribution (SVDSA). We hypothesize that multimodal foundation models (MMFMs) such as ImageBind and LanguageBind will be the most effective for SVDSA, since their cross-modality pre-training better equips them to capture subtle source-specific characteristics, such as the unique timbre, pitch manipulation, or synthesis artifacts of each singing voice deepfake source. Our experiments with MMFMs, speech foundation models, and music foundation models verify this hypothesis: MMFMs are the most effective for SVDSA. Furthermore, inspired by related research, we also explore fusion of foundation models (FMs) for improved SVDSA. To this end, we propose COFFE, a novel framework that employs the Chernoff distance as its loss function for effective fusion of FMs. By applying COFFE to fuse MMFMs, we attain the best performance in comparison to all individual FMs and baseline fusion methods.
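To make the loss concrete, below is a minimal sketch of a Chernoff-distance term between two foundation-model branches, assuming each branch summarizes its embedding as a diagonal Gaussian (mean and log-variance). The helper names, the Gaussian assumption, and the cross-entropy combination are illustrative choices of ours, not the paper's confirmed formulation:

```python
# Minimal sketch of a Chernoff-distance fusion loss. Assumption (ours): each FM
# branch outputs a diagonal-Gaussian embedding N(mu, diag(var)); the paper's
# actual COFFE formulation may differ.
import torch
import torch.nn.functional as F

def chernoff_distance_diag(mu1, logvar1, mu2, logvar2, alpha=0.5):
    """Chernoff alpha-divergence -log ∫ p^alpha q^(1-alpha) dx between two
    diagonal Gaussians; alpha=0.5 yields the symmetric Bhattacharyya distance."""
    var1, var2 = logvar1.exp(), logvar2.exp()
    var_a = (1.0 - alpha) * var1 + alpha * var2            # blended variance
    quad = 0.5 * alpha * (1.0 - alpha) * (mu1 - mu2).pow(2) / var_a
    logdet = 0.5 * (var_a.log() - (1.0 - alpha) * logvar1 - alpha * logvar2)
    return (quad + logdet).sum(dim=-1)                     # one distance per example

def fusion_loss(logits, labels, mu_a, logvar_a, mu_b, logvar_b, lam=0.1):
    """Hypothetical training objective: classify the fused embedding while
    pulling the two branches' embedding distributions together."""
    ce = F.cross_entropy(logits, labels)
    align = chernoff_distance_diag(mu_a, logvar_a, mu_b, logvar_b).mean()
    return ce + lam * align
```

Whether COFFE minimizes this term to align branches, as sketched, is a detail the abstract leaves open; values of $\alpha$ other than 0.5 make the term asymmetric, weighting one branch's variance more heavily than the other's.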