Towards Source Attribution of Singing Voice Deepfake with Multimodal Foundation Models

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper introduces Singing Voice Deepfake Source Attribution (SVDSA), a novel task aimed at identifying and tracing the generative model responsible for a synthetic singing voice. To exploit source-specific cues—including timbral distortions, pitch anomalies, and synthesis artifacts—the authors pioneer the integration of multimodal foundation models (e.g., ImageBind, LanguageBind) into SVDSA. They propose COFFE, a cross-modal fusion framework that jointly models speech, image, and text representations via cross-modal feature alignment and a Chernoff-distance-driven fusion loss. Experiments on deepfake singing datasets show that COFFE significantly outperforms unimodal baselines and prior multimodal fusion strategies, establishing new state-of-the-art performance. The approach provides an interpretable and scalable paradigm for audio deepfake provenance analysis, advancing forensic audio attribution beyond conventional single-modality methods.

📝 Abstract
In this work, we introduce the task of singing voice deepfake source attribution (SVDSA). We hypothesize that multimodal foundation models (MMFMs) such as ImageBind and LanguageBind will be most effective for SVDSA, as their cross-modality pre-training better equips them to capture subtle source-specific characteristics, such as the unique timbre, pitch manipulation, or synthesis artifacts of each singing voice deepfake source. Our experiments with MMFMs, speech foundation models, and music foundation models verify the hypothesis that MMFMs are the most effective for SVDSA. Furthermore, inspired by related research, we also explore fusion of foundation models (FMs) for improved SVDSA. To this end, we propose a novel framework, COFFE, which employs Chernoff Distance as a novel loss function for effective fusion of FMs. Through COFFE with a symphony of MMFMs, we attain the best performance in comparison to all individual FMs and baseline fusion methods.
Problem

Research questions and friction points this paper is trying to address.

Attributing singing voice deepfakes to their sources
Using multimodal models to capture source-specific characteristics
Improving attribution via fusion of foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal foundation models for source attribution
Chernoff Distance loss for model fusion
COFFE framework enhances deepfake source attribution
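The paper does not spell out the exact form of its Chernoff-distance fusion loss, but the general idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the two foundation-model branches each produce class logits, converts them to discrete distributions, and penalizes their Chernoff distance (with coefficient `lam`; `lam = 0.5` reduces to the Bhattacharyya distance) so the branches agree before fusion. The function names `chernoff_distance` and `fusion_loss` are hypothetical.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def chernoff_distance(p, q, lam=0.5):
    """Chernoff distance between discrete distributions p and q.

    C_lam(p, q) = -log sum_i p_i^lam * q_i^(1 - lam).
    It is 0 when p == q; lam = 0.5 gives the Bhattacharyya distance.
    """
    coeff = np.sum(p**lam * q**(1.0 - lam))
    return -np.log(coeff)

def fusion_loss(logits_a, logits_b, lam=0.5):
    """Hypothetical alignment term for fusing two FM branches:
    the Chernoff distance between their class posteriors."""
    return chernoff_distance(softmax(logits_a), softmax(logits_b), lam)
```

In a full training setup, a term like this would typically be added to the standard attribution (classification) loss so that the fused representation stays consistent across modalities.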