GenVideoLens: Where Do LVLMs Fall Short in AI-Generated Video Detection?

📅 2026-03-19
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the lack of fine-grained evaluation of large vision-language models (LVLMs) in detecting AI-generated videos, a gap that obscures where their limitations lie across different dimensions of realism. To this end, we introduce GenVideoLens, a fine-grained benchmark of 500 videos annotated across 15 distinct realism dimensions, and establish the first multidimensional evaluation framework designed specifically for AI-generated video detection. Through expert annotations, temporal perturbation experiments, and comprehensive comparisons across 11 representative LVLMs, we find that while these models perform reasonably well on perceptual cues, they exhibit significant deficiencies in optical consistency, physical interaction, and temporal-causal reasoning. Notably, certain smaller open-source models outperform larger closed-source counterparts on specific dimensions, and current models generally fail to leverage temporal information effectively.

📝 Abstract
In recent years, AI-generated videos have become increasingly realistic and sophisticated. Meanwhile, Large Vision-Language Models (LVLMs) have shown strong potential for detecting such content. However, existing evaluation protocols largely treat the task as a binary classification problem and rely on coarse-grained metrics such as overall accuracy, providing limited insight into where LVLMs succeed or fail. To address this limitation, we introduce GenVideoLens, a fine-grained benchmark that enables dimension-wise evaluation of LVLM capabilities in AI-generated video detection. The benchmark contains 400 highly deceptive AI-generated videos and 100 real videos, annotated by experts across 15 authenticity dimensions covering perceptual, optical, physical, and temporal cues. We evaluate eleven representative LVLMs on this benchmark. Our analysis reveals a pronounced dimensional imbalance. While LVLMs perform relatively well on perceptual cues, they struggle with optical consistency, physical interactions, and temporal-causal reasoning. Model performance also varies substantially across dimensions, with smaller open-source models sometimes outperforming stronger proprietary models on specific authenticity cues. Temporal perturbation experiments further show that current LVLMs make limited use of temporal information. Overall, GenVideoLens provides diagnostic insights into LVLM behavior, revealing key capability gaps and offering guidance for improving future AI-generated video detection systems.
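To make the temporal-perturbation idea concrete, below is a minimal, hypothetical sketch (not the paper's actual protocol; the `predict_fake_probability` interface and the frame-shuffle choice are assumptions). The intuition: if a model's real-vs-generated verdict barely moves after the frame order is destroyed, the model is likely relying on per-frame perceptual cues rather than temporal reasoning.

```python
# Hypothetical temporal-perturbation check, assuming an LVLM wrapper that
# scores a list of frames with predict_fake_probability(); this is an
# illustrative sketch, not the benchmark's evaluation code.
import random


def temporal_sensitivity(frames, lvlm, seed=0):
    """Compare the model's verdict on the original clip vs. a frame-shuffled clip."""
    rng = random.Random(seed)
    shuffled = frames[:]
    rng.shuffle(shuffled)  # destroys temporal order while keeping per-frame content

    p_original = lvlm.predict_fake_probability(frames)    # assumed interface
    p_shuffled = lvlm.predict_fake_probability(shuffled)   # assumed interface

    # A gap near zero suggests the decision rests on perceptual cues alone,
    # i.e. the model makes little use of temporal information.
    return abs(p_original - p_shuffled)
```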
Problem

Research questions and friction points this paper is trying to address.

AI-generated video detection
Large Vision-Language Models
fine-grained evaluation
authenticity dimensions
temporal reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

fine-grained benchmark
AI-generated video detection
Large Vision-Language Models
temporal-causal reasoning
authenticity dimensions
🔎 Similar Papers
No similar papers found.
👥 Authors
Yueying Zou
Beijing University of Posts and Telecommunications, Beijing, China
Peipei Li
Beijing University of Posts and Telecommunications, Beijing, China
Zekun Li
University of California, Santa Barbara, CA, USA
Xinyu Guo
Samsung Research America
AI, computer vision, machine learning, medical image analysis
Xing Cui
Beijing University of Posts and Telecommunications, Beijing, China
Huaibo Huang
NLPR, MAIS, CASIA
Computer Vision, Generative Models, Low-level Vision, Face Recognition
Ran He
Center for Research on Intelligent Perception and Computing, NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China