PRPO: Paragraph-level Policy Optimization for Vision-Language Deepfake Detection

📅 2025-09-30
🤖 AI Summary
Deepfake detection faces two key bottlenecks: the scarcity of high-quality multimodal data and poor alignment between multimodal large language model (MLLM) reasoning and visual evidence, which leads to low interpretability. To address these challenges, the authors propose Paragraph-level Relative Policy Optimization (PRPO), a reinforcement learning framework that aligns MLLM reasoning with image evidence at the paragraph level. They further introduce the first deepfake detection dataset with fine-grained, human-annotated reasoning traces. Compared with conventional policy optimization methods such as GRPO, PRPO significantly improves detection accuracy and reasoning fidelity, achieving a human-evaluated reasoning quality score of 4.55/5.0. Ablation studies confirm the contribution of each component. This work pioneers paragraph-level reasoning alignment for deepfake detection, establishing a new paradigm for trustworthy multimodal reasoning.

📝 Abstract
The rapid rise of synthetic media has made deepfake detection a critical challenge for online safety and trust. Progress remains constrained by the scarcity of large, high-quality datasets. Although multimodal large language models (MLLMs) exhibit strong reasoning capabilities, their performance on deepfake detection is poor, often producing explanations that are misaligned with visual evidence or hallucinatory. To address this limitation, we introduce a reasoning-annotated dataset for deepfake detection and propose Paragraph-level Relative Policy Optimization (PRPO), a reinforcement learning algorithm that aligns LLM reasoning with image content at the paragraph level. Experiments show that PRPO improves detection accuracy by a wide margin and achieves the highest reasoning score of 4.55/5.0. Ablation studies further demonstrate that PRPO significantly outperforms GRPO under test-time conditions. These results underscore the importance of grounding multimodal reasoning in visual evidence to enable more reliable and interpretable deepfake detection.
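The paper does not spell out PRPO's update rule here, but its name suggests a paragraph-level variant of GRPO's group-relative advantage normalization. A minimal sketch of that reading, assuming rewards are assigned per paragraph of each sampled response and normalized group-relatively within each paragraph position (the function names and the equal-paragraph-count assumption are illustrative, not from the paper):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style normalization: each reward relative to its group's mean/std."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

def paragraph_level_advantages(paragraph_rewards):
    """Hypothetical PRPO-style sketch: normalize rewards paragraph by paragraph
    across a group of sampled responses, so credit is assigned to individual
    reasoning paragraphs rather than to whole responses.

    paragraph_rewards[i][k] = reward of paragraph k in sampled response i
    (all responses are assumed to have the same number of paragraphs here).
    Returns advantages[i][k] with the same shape.
    """
    n_paragraphs = len(paragraph_rewards[0])
    per_paragraph = []
    for k in range(n_paragraphs):
        column = [resp[k] for resp in paragraph_rewards]
        per_paragraph.append(group_relative_advantages(column))
    # transpose back to per-response layout
    return [list(row) for row in zip(*per_paragraph)]
```

Under this reading, a paragraph that is well-grounded in the image can receive a positive advantage even when the response's final verdict is wrong, which is one plausible mechanism for the reported gains in reasoning fidelity over sequence-level GRPO.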
Problem

Research questions and friction points this paper is trying to address.

Addresses the poor performance of multimodal LLMs on deepfake detection
Aligns LLM reasoning with visual evidence to reduce hallucinated explanations
Improves detection accuracy and reliability through paragraph-level policy optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Paragraph-level Relative Policy Optimization for deepfake detection
Reinforcement learning that aligns LLM reasoning with image content
First reasoning-annotated deepfake detection dataset, improving accuracy and interpretability