CAST: Cross-Attentive Spatio-Temporal feature fusion for Deepfake detection

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods typically model spatial and temporal features separately and fuse them superficially (e.g., by averaging or concatenation), which limits their ability to capture fine-grained, time-varying artifacts such as eye blinking or lip distortion and thereby constrains deepfake video detection performance. To address this, we propose an end-to-end cross-attention mechanism that lets temporal features dynamically attend to salient spatial regions, yielding tightly coupled spatio-temporal modeling. The architecture jointly leverages CNNs for spatial artifact extraction and Transformers for temporal inconsistency modeling, augmented by a bidirectional cross-attention module trained under a unified objective. On FaceForensics++, the method achieves 99.49% AUC; in cross-dataset evaluation it attains 93.31% AUC on the unseen DeepfakeDetection dataset, demonstrating substantial gains in fine-grained artifact localization and robustness.
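The fusion mechanism the summary describes can be sketched concisely. Below is a minimal PyTorch illustration of bidirectional cross-attention between a spatial and a temporal feature stream; the module name, dimensions, and the residual/normalization layout are assumptions for illustration, not the paper's exact design.

```python
# A minimal sketch of bidirectional cross-attention fusion between spatial
# and temporal feature streams, assuming PyTorch. The class name and
# hyperparameters (d_model, n_heads) are illustrative, not from the paper.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        # Temporal tokens query spatial tokens, and vice versa.
        self.t2s = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.s2t = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_s = nn.LayerNorm(d_model)
        self.norm_t = nn.LayerNorm(d_model)

    def forward(self, spatial: torch.Tensor, temporal: torch.Tensor):
        # spatial:  (B, N, d_model) -- N patch features from a CNN backbone
        # temporal: (B, T, d_model) -- T frame-level features from a Transformer
        # Temporal features attend to salient spatial regions ...
        t_fused, _ = self.t2s(query=temporal, key=spatial, value=spatial)
        # ... and spatial features attend back to the temporal stream.
        s_fused, _ = self.s2t(query=spatial, key=temporal, value=temporal)
        temporal = self.norm_t(temporal + t_fused)  # residual + norm
        spatial = self.norm_s(spatial + s_fused)
        return spatial, temporal
```

Because each stream queries the other, the fusion is learned jointly rather than bolted on by averaging or concatenation, which is the limitation the summary attributes to prior work.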

📝 Abstract
Deepfakes have emerged as a significant threat to digital media authenticity, increasing the need for advanced detection techniques that can identify subtle and time-dependent manipulations. CNNs are effective at capturing spatial artifacts, and Transformers excel at modeling temporal inconsistencies. However, many existing CNN-Transformer models process spatial and temporal features independently. In particular, attention-based methods often use separate attention mechanisms for spatial and temporal features and combine them using naive approaches like averaging, addition, or concatenation, which limits the depth of spatio-temporal interaction. To address this challenge, we propose a unified CAST model that leverages cross-attention to effectively fuse spatial and temporal features in a more integrated manner. Our approach allows temporal features to dynamically attend to relevant spatial regions, enhancing the model's ability to detect fine-grained, time-evolving artifacts such as flickering eyes or warped lips. This design enables more precise localization and deeper contextual understanding, leading to improved performance across diverse and challenging scenarios. We evaluate our model on the FaceForensics++, Celeb-DF, and DeepfakeDetection datasets in both intra- and cross-dataset settings. Our model achieves strong performance with an AUC of 99.49% and an accuracy of 97.57% in intra-dataset evaluations. In cross-dataset testing, it demonstrates impressive generalization, achieving a 93.31% AUC on the unseen DeepfakeDetection dataset. These results highlight the effectiveness of cross-attention-based feature fusion in enhancing the robustness of deepfake video detection.
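To make the abstract's pipeline concrete, here is a rough end-to-end sketch, assuming PyTorch and torchvision: a CNN backbone extracts per-frame spatial features, a Transformer encoder models the frame sequence, and the cross-attention module sketched above fuses the two streams before classification. The backbone choice (ResNet-18), the pooling details, and the head design are assumptions for illustration; the paper's exact architecture may differ.

```python
# A rough sketch of the CNN + Transformer + cross-attention pipeline described
# in the abstract. CrossAttentionFusion is the module from the previous snippet.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DeepfakeVideoClassifier(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        cnn = resnet18(weights=None)  # assumed backbone, not confirmed by the paper
        self.backbone = nn.Sequential(*list(cnn.children())[:-2])  # keep spatial map
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(enc_layer, n_layers)
        self.fusion = CrossAttentionFusion(d_model, n_heads)
        self.head = nn.Linear(d_model, 1)  # real/fake logit

    def forward(self, frames: torch.Tensor):
        # frames: (B, T, 3, H, W) -- a clip of T face crops
        B, T = frames.shape[:2]
        fmap = self.backbone(frames.flatten(0, 1))          # (B*T, 512, h, w)
        spatial = fmap.flatten(2).transpose(1, 2)           # (B*T, h*w, 512)
        frame_feats = spatial.mean(dim=1).reshape(B, T, -1) # (B, T, 512)
        temporal = self.temporal(frame_feats)               # temporal inconsistencies
        # Fuse per-frame spatial tokens with the clip-level temporal stream.
        spatial = spatial.reshape(B, T * spatial.size(1), -1)
        _, fused_t = self.fusion(spatial, temporal)
        return self.head(fused_t.mean(dim=1)).squeeze(-1)
```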
Problem

Research questions and friction points this paper is trying to address.

Detect subtle time-dependent manipulations in deepfake videos
Fuse spatial and temporal features more effectively for detection
Improve generalization across diverse deepfake datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-attention for spatio-temporal feature fusion
Dynamic attention to time-evolving artifacts
Improved generalization in cross-dataset testing (see the evaluation sketch below)
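The cross-dataset claim corresponds to a simple evaluation protocol: train on one dataset (e.g. FaceForensics++) and report a threshold-free AUC on an unseen one (e.g. DeepfakeDetection). A minimal sketch, assuming scikit-learn and a clip-level model like the one above; `model` and the data loaders are hypothetical placeholders.

```python
# Minimal cross-dataset AUC evaluation, assuming scikit-learn.
import torch
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def evaluate_auc(model, loader, device="cuda"):
    model.eval()
    scores, labels = [], []
    for clips, y in loader:  # clips: (B, T, 3, H, W), y: (B,) in {0, 1}
        logits = model(clips.to(device))
        scores.extend(torch.sigmoid(logits).cpu().tolist())
        labels.extend(y.tolist())
    # AUC is threshold-free, which is why it is the standard metric for
    # cross-dataset generalization reports such as the 93.31% figure above.
    return roc_auc_score(labels, scores)
```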
Aryan Thakre
Student of Computer Engineering, College of Engineering Pune
NLP · Machine Learning · Deep Learning
Omkar Nagwekar
Department of Computer Science and Engineering, COEP Technological University, Pune, Maharashtra, India
Vedang Talekar
Department of Computer Science and Engineering, COEP Technological University, Pune, Maharashtra, India
Aparna Santra Biswas
Department of Computer Science and Engineering, COEP Technological University, Pune, Maharashtra, India