UMCL: Unimodal-generated Multimodal Contrastive Learning for Cross-compression-rate Deepfake Detection

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address poor generalization of deepfake detection across social media platforms with varying compression rates, this paper proposes a unimodal-to-multimodal contrastive learning framework that generates complementary multimodal representations (rPPG signals, temporal keypoints, and vision-language embeddings) solely from raw video frames. The authors introduce affinity-driven semantic alignment and cross-quality similarity learning, enabling robust detection across diverse compression levels without requiring additional modality acquisition or annotation. Experiments demonstrate state-of-the-art performance across multiple compression grades and forgery types. Notably, the method maintains high accuracy even when individual modalities are severely degraded, improving both model stability and interpretability.

📝 Abstract
In deepfake detection, the varying degrees of compression employed by social media platforms pose significant challenges for model generalization and reliability. Although existing methods have progressed from single-modal to multimodal approaches, they face critical limitations: single-modal methods struggle with feature degradation under data compression in social media streaming, while multimodal approaches require expensive data collection and labeling and suffer from inconsistent modal quality or accessibility in real-world scenarios. To address these challenges, we propose a novel Unimodal-generated Multimodal Contrastive Learning (UMCL) framework for robust cross-compression-rate (CCR) deepfake detection. In the training stage, our approach transforms a single visual modality into three complementary features: compression-robust rPPG signals, temporal landmark dynamics, and semantic embeddings from pre-trained vision-language models. These features are explicitly aligned through an affinity-driven semantic alignment (ASA) strategy, which models inter-modal relationships through affinity matrices and optimizes their consistency through contrastive learning. Subsequently, our cross-quality similarity learning (CQSL) strategy enhances feature robustness across compression rates. Extensive experiments demonstrate that our method achieves superior performance across various compression rates and manipulation types, establishing a new benchmark for robust deepfake detection. Notably, our approach maintains high detection accuracy even when individual features degrade, while providing interpretable insights into feature relationships through explicit alignment.
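The abstract describes affinity-driven semantic alignment (ASA) as modeling inter-modal relationships through affinity matrices whose consistency is optimized across the generated modalities. The paper's exact loss is not given here, so the following is only a minimal sketch of that idea: build a per-modality batch affinity matrix from cosine similarities, then penalize disagreement between the affinity matrices of each modality pair. All function names and the mean-squared formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def affinity_matrix(feats):
    """Batch affinity matrix: cosine similarity between every pair of samples.

    `feats` is a (batch, dim) array of one modality's features
    (e.g. rPPG, landmark dynamics, or VLM embeddings).
    """
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return f @ f.T  # (batch, batch), diagonal is 1

def asa_consistency_loss(modal_feats):
    """Hypothetical ASA-style consistency term: mean squared discrepancy
    between the affinity matrices of each pair of modalities.

    `modal_feats` is a list of (batch, dim_i) arrays, one per generated
    modality; feature dimensions may differ, affinity matrices do not.
    """
    mats = [affinity_matrix(f) for f in modal_feats]
    loss, pairs = 0.0, 0
    for i in range(len(mats)):
        for j in range(i + 1, len(mats)):
            loss += float(np.mean((mats[i] - mats[j]) ** 2))
            pairs += 1
    return loss / pairs
```

When all modalities induce the same sample-to-sample relationships, the loss is zero; divergent modalities are pulled toward a shared relational structure, which matches the abstract's claim of explicit, interpretable inter-modal alignment.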
Problem

Research questions and friction points this paper is trying to address.

Detecting deepfakes across varying compression rates on social media
Overcoming feature degradation in single-modal compressed data
Avoiding expensive multimodal data collection and inconsistent quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates multimodal features from single visual modality
Aligns features through affinity-driven semantic alignment strategy
Enhances robustness with cross-quality similarity learning
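The cross-quality similarity learning (CQSL) strategy above is described only at a high level: features extracted from the same clip at different compression rates should stay similar. A minimal sketch of one plausible formulation, assuming paired high-quality and compressed embeddings of the same videos, is to minimize one minus their per-sample cosine similarity; the function name and loss form are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def cqsl_loss(feat_hq, feat_lq):
    """Hypothetical cross-quality similarity loss.

    `feat_hq` and `feat_lq` are (batch, dim) embeddings of the same videos
    at high and low compression quality; the loss is the mean of
    1 - cosine_similarity over the batch (0 when views agree, up to 2).
    """
    hq = feat_hq / np.linalg.norm(feat_hq, axis=1, keepdims=True)
    lq = feat_lq / np.linalg.norm(feat_lq, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(hq * lq, axis=1)))
```

In training, the low-quality view could be produced by re-encoding the input clip at a lower bitrate, so robustness to social-media compression is learned without any extra data collection, consistent with the framework's stated goal.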