Unleashing Vision-Language Semantics for Deepfake Video Detection

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes VLAForge, a novel framework for deepfake video detection that leverages cross-modal semantic information from pretrained vision-language models—addressing the limitation of existing methods that rely solely on visual features. Central to VLAForge is the ForgePerceiver module, which enhances both fine-grained and holistic visual perception of forgery cues. Furthermore, the framework incorporates identity-prior-guided textual prompts to establish an identity-aware vision-language alignment scoring mechanism. By effectively integrating cross-modal semantics with identity-specific authenticity signals, VLAForge achieves state-of-the-art performance across multiple deepfake video benchmarks, significantly outperforming current approaches in both frame-level and video-level detection tasks.

📝 Abstract
Recent Deepfake Video Detection (DFD) studies have demonstrated that pre-trained Vision-Language Models (VLMs) such as CLIP exhibit strong generalization capabilities in detecting artifacts across different identities. However, existing approaches leverage visual features only, overlooking these models' most distinctive strength -- the rich vision-language semantics embedded in the latent space. We propose VLAForge, a novel DFD framework that unleashes the potential of such cross-modal semantics to enhance the model's discriminability in deepfake detection. This work i) enhances the visual perception of the VLM through a ForgePerceiver, which acts as an independent learner to capture diverse, subtle forgery cues both granularly and holistically, while preserving the pretrained Vision-Language Alignment (VLA) knowledge, and ii) provides a complementary discriminative cue -- the Identity-Aware VLA score, derived by coupling cross-modal semantics with the forgery cues learned by the ForgePerceiver. Notably, the VLA score is augmented by identity-prior-informed text prompting to capture authenticity cues tailored to each identity, thereby enabling more discriminative cross-modal semantics. Comprehensive experiments on video DFD benchmarks, including classical face-swapping forgeries and recent full-face generation forgeries, demonstrate that VLAForge substantially outperforms state-of-the-art methods at both the frame and video levels. Code is available at https://github.com/mala-lab/VLAForge.
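To make the vision-language alignment scoring idea concrete, here is a minimal sketch of a CLIP-style VLA score: cosine similarity between a visual embedding and identity-conditioned "real"/"fake" text-prompt embeddings, softmax-normalized with a temperature. The function name, prompt design, and toy embeddings below are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def vla_score(visual_emb, real_prompt_emb, fake_prompt_emb, temperature=0.07):
    """Illustrative identity-aware VLA score (hypothetical, not VLAForge's code).

    Compares a visual embedding against identity-conditioned text prompt
    embeddings (e.g. "a real video of <identity>" / "a fake video of
    <identity>") via cosine similarity, then softmax-normalizes as in CLIP.
    Returns the probability mass assigned to the "fake" prompt.
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Temperature-scaled similarities to the two prompts.
    sims = np.array([cos(visual_emb, real_prompt_emb),
                     cos(visual_emb, fake_prompt_emb)]) / temperature
    sims -= sims.max()  # subtract max for numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return probs[1]  # P(frame aligns with the "fake" prompt)

# Toy 512-d embeddings: this frame lies closer to the "fake" prompt direction.
rng = np.random.default_rng(0)
real_p = rng.normal(size=512)
fake_p = rng.normal(size=512)
frame = 0.2 * real_p + 0.8 * fake_p
print(vla_score(frame, real_p, fake_p) > 0.5)  # frame scored as likely fake
```

In the paper's framing, such a cross-modal score would complement the purely visual forgery cues from the ForgePerceiver rather than replace them.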
Problem

Research questions and friction points this paper is trying to address.

Deepfake Video Detection
Vision-Language Models
Cross-modal Semantics
Forgery Detection
Visual-Language Alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language Models
Deepfake Video Detection
Cross-modal Semantics
Identity-Aware Prompting
ForgePerceiver