CapST: An Enhanced and Lightweight Model Attribution Approach for Synthetic Videos

📅 2023-11-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses generative model attribution for deepfake videos—identifying the specific forgery model used, beyond binary real/fake classification. The proposed method employs a truncated VGG19 backbone to extract frame-level features and combines a capsule network with a spatio-temporal attention mechanism to capture the hierarchical spatio-temporal artifacts inherent in deepfakes. Additionally, a lightweight video-level temporal fusion strategy aggregates frame-level predictions robustly while preserving computational efficiency. Evaluated on the DFDM benchmark, the approach achieves up to a 4% accuracy improvement over state-of-the-art baselines, reduces computational overhead, and generalizes well across diverse forgery models—thereby enabling accurate, resource-efficient deepfake model attribution.
📝 Abstract
Deepfake videos, generated through AI face-swapping techniques, have garnered considerable attention due to their potential for powerful impersonation attacks. While existing research primarily focuses on binary classification to discern between real and fake videos, determining the specific generation model for a fake video is crucial for forensic investigation. Addressing this gap, this paper investigates the model attribution problem of deepfake videos using a recently proposed dataset, Deepfakes from Different Models (DFDM), derived from various Autoencoder models. The dataset comprises 6,450 deepfake videos generated by five distinct models with variations in encoder, decoder, intermediate layer, input resolution, and compression ratio. This study formulates deepfake model attribution as a multiclass classification task, proposing a segment of VGG19—known for its effectiveness in image-related tasks—as a feature-extraction backbone, integrated with a Capsule Network and a spatio-temporal attention mechanism. The Capsule module captures intricate hierarchies among features for robust identification of deepfake attributes. Additionally, the video-level fusion technique leverages temporal attention to handle concatenated feature vectors, capitalizing on inherent temporal dependencies in deepfake videos. By aggregating insights across frames, our model gains a comprehensive understanding of video content, resulting in more precise predictions. Experimental results on the DFDM benchmark demonstrate the efficacy of our proposed method, achieving up to a 4% improvement in accurately categorizing deepfake videos compared to baseline models while demanding fewer computational resources.
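The Capsule module mentioned in the abstract builds on vector-output capsules, whose vector length encodes detection confidence. Below is a minimal NumPy sketch of the standard "squash" nonlinearity that capsule networks use for this purpose (as introduced by Sabour et al.; the paper's exact capsule implementation may differ):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash nonlinearity for capsule outputs.

    Shrinks short vectors toward zero and long vectors toward unit
    length, so a capsule's output length can be read as a probability
    while its orientation encodes the detected attribute.
    """
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)          # in [0, 1)
    return scale * s / np.sqrt(sq_norm + eps)  # rescaled unit direction

v = squash(np.array([3.0, 4.0]))
# ||v|| = 25/26 ≈ 0.9615, approaching 1 for long input vectors
```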
Problem

Research questions and friction points this paper is trying to address.

Attributing deepfake videos to specific generation models
Enhancing forensic analysis for source tracing and countermeasures
Improving model attribution accuracy with reduced computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Capsule networks for hierarchical encoding
Spatio-temporal attention for frame dependencies
Truncated VGG19 for feature extraction
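The video-level fusion described above can be sketched as an attention-weighted average of frame-level features; the scoring vector `w` here is a hypothetical stand-in for the paper's learned temporal-attention parameters, not its actual implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def temporal_fusion(frame_feats, w):
    """Attention-weighted aggregation of per-frame features.

    frame_feats: (T, D) frame-level feature vectors from the backbone.
    w:           (D,)   scoring vector (hypothetical parameterization).
    Returns a single (D,) video-level descriptor.
    """
    scores = frame_feats @ w      # one relevance score per frame
    alpha = softmax(scores)       # attention weights over time, sum to 1
    return alpha @ frame_feats    # convex combination of frame features

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))   # 8 frames, 16-dim features
w = rng.normal(size=16)
video_vec = temporal_fusion(feats, w)
```

Because the weights form a convex combination, frames the attention deems more informative dominate the video-level descriptor, which is what lets the model exploit temporal dependencies without heavy recurrent machinery.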
Wasim Ahmad
AI Researcher, Friedrich Schiller University
Machine Learning, Algorithms, Causal Inference
Yan-Tsung Peng
National Chengchi University
Yuan-Hao Chang
Professor, Dept. of CSIE, National Taiwan University; IEEE Fellow
Computer Systems, Computer Architecture, Embedded Systems, Operating Systems, Non-volatile Memory
Gaddisa Olani Ganfure
Dire Dawa University, Ethiopia
Sarwar Khan
Research Center for Information Technology Innovation, Academia Sinica, Taiwan (R.O.C.), Social Networks and Human-Centred Computing, Taiwan International Graduate Program, Taiwan (R.O.C.), and Department of Computer Science, National Chengchi University, Taiwan (R.O.C.)
Sahibzada Adil Shahzad