LAVID: An Agentic LVLM Framework for Diffusion-Generated Video Detection

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing diffusion-based video forgery detection methods lack an efficient zero-shot approach. Method: This paper proposes the first agent-based zero-shot detection framework built on large vision-language models (LVLMs). It performs end-to-end video authenticity assessment via dynamic tool invocation (e.g., frame analysis, optical-flow extraction) and self-rewriting structured prompting, requiring no training, fine-tuning, labeled data, or generator-specific priors. Contribution/Results: The core innovation is an LVLM agent architecture that supports runtime tool selection and prompt self-optimization. Evaluated on the authors' high-quality benchmark *vidfor*, the framework improves F1 scores by 6.2–30.2% over the top baselines across four state-of-the-art LVLMs, demonstrating markedly better cross-generator generalization.

📝 Abstract
The impressive achievements of generative models in creating high-quality videos have raised concerns about digital integrity and privacy vulnerabilities. AI-generated content detection has been widely studied in the image domain (e.g., deepfakes), yet the video domain remains largely unexplored. Large Vision Language Models (LVLMs) have emerged as a tool for AI-generated content detection thanks to their strong reasoning and multimodal capabilities. They overcome limitations of traditional deep-learning-based methods, such as a lack of transparency and an inability to recognize new artifacts. Motivated by this, we propose LAVID, a novel LVLM-based AI-generated video detection framework with explicit knowledge enhancement. Our insights are as follows: (1) leading LVLMs can call external tools to extract information that facilitates their own video detection task; (2) structuring the prompt affects the LVLM's ability to reason about and interpret information in video content. Our proposed pipeline automatically selects a set of explicit knowledge tools for detection, and then adaptively adjusts the structured prompt via self-rewriting. Unlike prior SOTA approaches that train additional detectors, our method is fully training-free and only requires LVLM inference for detection. To facilitate our research, we also create a new benchmark, vidfor, with high-quality videos generated from multiple sources of video generation tools. Evaluation results show that LAVID improves F1 scores by 6.2–30.2% over the top baselines on our datasets across four SOTA LVLMs.
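The pipeline the abstract describes (tool selection, structured prompting, self-rewriting on an unclear answer) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the class, tool names, and prompt wording below are all hypothetical, and stub functions stand in for the real frame-analysis and optical-flow tools and for the LVLM call.

```python
# Hypothetical sketch of a LAVID-style agentic detection loop.
# TOOLS and LavidAgent are illustrative names, not from the paper.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Explicit-knowledge tools the agent may invoke; stubs stand in for
# real frame analysis / optical-flow extraction.
TOOLS: Dict[str, Callable[[str], str]] = {
    "frame_analysis": lambda video: f"frame stats for {video}",
    "optical_flow": lambda video: f"flow summary for {video}",
}

@dataclass
class LavidAgent:
    lvlm: Callable[[str], str]          # any text-in/text-out LVLM call
    max_rewrites: int = 2               # self-rewriting retries
    history: List[str] = field(default_factory=list)

    def select_tools(self, video: str) -> Dict[str, str]:
        # In the paper the LVLM itself selects tools; for simplicity
        # this sketch just runs every tool.
        return {name: tool(video) for name, tool in TOOLS.items()}

    def build_prompt(self, video: str, evidence: Dict[str, str]) -> str:
        # Structured prompt: task statement, then tool evidence as a list.
        lines = [f"Decide if the video '{video}' is AI-generated.",
                 "Evidence:"]
        lines += [f"- {name}: {info}" for name, info in evidence.items()]
        lines.append("Answer 'real' or 'fake' with a reason.")
        return "\n".join(lines)

    def detect(self, video: str) -> str:
        evidence = self.select_tools(video)
        prompt = self.build_prompt(video, evidence)
        for _ in range(self.max_rewrites + 1):
            answer = self.lvlm(prompt)
            self.history.append(answer)
            if "fake" in answer or "real" in answer:
                return "fake" if "fake" in answer else "real"
            # Self-rewriting: restructure the prompt and retry.
            prompt = "Be explicit.\n" + prompt
        return "unknown"

# Toy LVLM: flags any prompt that mentions the (hypothetical) filename
# prefix 'gen_clip' as fake. A real deployment would call an actual LVLM.
toy_lvlm = lambda p: ("fake: inconsistent optical flow"
                      if "gen_clip" in p else "real: natural motion")
agent = LavidAgent(lvlm=toy_lvlm)
print(agent.detect("gen_clip.mp4"))     # → fake
print(agent.detect("camera_clip.mp4"))  # → real
```

The detection itself stays training-free: all adaptation happens at inference time, through which tools are run and how the prompt is rewritten.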
Problem

Research questions and friction points this paper is trying to address.

Detect AI-generated videos
Enhance digital integrity
Leverage LVLM capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

LVLM-based video detection
Explicit knowledge enhancement
Training-free inference method
Qingyuan Liu
Columbia University
Yun-Yun Tsai
Ph.D. student at Computer Science, Columbia University
Adversarial Machine Learning, AI Security, Model Robustness, Transfer Learning
Ruijian Zha
Columbia University
Victoria Li
Columbia University
Pengyuan Shi
Columbia University
Chengzhi Mao
Assistant Professor, Rutgers University
LLMs, Computer Vision, Machine Learning
Junfeng Yang
Columbia University