🤖 AI Summary
This work addresses the limitations of existing AI-generated video detection methods, which often discard high-frequency artifacts due to fixed-resolution preprocessing and rely on outdated datasets, rendering them ineffective against state-of-the-art, highly realistic generative models. To overcome these challenges, we propose a native-scale detection framework built upon the Qwen2.5-VL Vision Transformer that directly processes videos of variable resolution and temporal length, thereby preserving critical forensic traces. We introduce a large-scale, multi-source dataset comprising over 140,000 videos generated by 15 advanced models and release the Magic Videos benchmark for evaluating detection performance on high-fidelity synthetic content. Extensive experiments demonstrate that our approach significantly outperforms current methods across multiple benchmarks, highlighting the pivotal role of native-scale modeling in enhancing detection accuracy and establishing a new strong baseline for AI-generated video detection.
📝 Abstract
The rapid advancement of video generation models has enabled the creation of highly realistic synthetic media, raising significant societal concerns regarding the spread of misinformation. However, current detection methods suffer from critical limitations: they rely on preprocessing operations such as fixed-resolution resizing and cropping, which not only discard subtle, high-frequency forgery traces but also introduce spatial distortion and significant information loss. Furthermore, existing methods are often trained and evaluated on outdated datasets that fail to capture the sophistication of modern generative models. To address these challenges, we introduce a comprehensive dataset and a novel detection framework. First, we curate a large-scale dataset of over 140K videos from 15 state-of-the-art open-source and commercial generators, along with the Magic Videos benchmark, designed specifically for evaluating ultra-realistic synthetic content. In addition, we propose a detection framework built on the Qwen2.5-VL Vision Transformer, which operates natively at variable spatial resolutions and temporal durations. This native-scale approach preserves the high-frequency artifacts and spatiotemporal inconsistencies typically lost during conventional preprocessing. Extensive experiments demonstrate that our method achieves superior performance across multiple benchmarks, underscoring the critical importance of native-scale processing and establishing a robust new baseline for AI-generated video detection.
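To make the native-scale idea concrete, the minimal PyTorch sketch below patchifies each clip at its own resolution and duration and classifies the resulting variable-length token sequence, instead of first resizing every clip to a fixed size. This is an illustrative approximation under stated assumptions, not the paper's released code: the names `patchify_native` and `NativeScaleDetector`, the toy transformer encoder, and the mean-pooling head are placeholders standing in for the Qwen2.5-VL Vision Transformer backbone described above.

```python
# Illustrative sketch only: native-scale (no-resize) video patchification plus a
# binary real/fake head. Module names, sizes, and the toy encoder are assumptions
# for this example, not the paper's actual implementation.
import torch
import torch.nn as nn


def patchify_native(video: torch.Tensor, patch: int = 14) -> torch.Tensor:
    """Split a clip of shape (T, C, H, W) into flattened patch tokens WITHOUT resizing,
    so the token count follows the clip's native resolution and duration."""
    t, c, h, w = video.shape
    video = video[:, :, : h - h % patch, : w - w % patch]           # drop edge remainder
    tokens = video.unfold(2, patch, patch).unfold(3, patch, patch)   # (T, C, H/p, W/p, p, p)
    tokens = tokens.permute(0, 2, 3, 1, 4, 5).reshape(-1, c * patch * patch)
    return tokens                                                    # (num_tokens, C*p*p)


class NativeScaleDetector(nn.Module):
    """Binary real/fake classifier over a variable number of patch tokens.

    A real system would use a large pretrained ViT (e.g. the Qwen2.5-VL vision
    encoder) with proper spatiotemporal position encoding; here a tiny
    TransformerEncoder stands in to keep the sketch self-contained.
    """

    def __init__(self, patch: int = 14, dim: int = 256, layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(3 * patch * patch, dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(dim, 2)   # logits for {real, AI-generated}

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens).unsqueeze(0)   # (1, num_tokens, dim)
        x = self.encoder(x).mean(dim=1)       # pool over however many tokens the clip yields
        return self.head(x).squeeze(0)        # (2,)


# Two clips with different native resolutions and frame counts pass through unchanged
# (toy sizes; real 720p/1080p clips yield far more tokens and need an efficient backbone):
detector = NativeScaleDetector()
clip_a = torch.rand(4, 3, 126, 224)   # 4 frames, 126x224
clip_b = torch.rand(2, 3, 252, 448)   # 2 frames, 252x448
for clip in (clip_a, clip_b):
    print(detector(patchify_native(clip)).shape)   # torch.Size([2])
```

Because no resizing or cropping happens before tokenization, the per-clip token count tracks the original spatial and temporal extent, which is the property the abstract credits with preserving high-frequency artifacts and spatiotemporal inconsistencies.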