🤖 AI Summary
AI-generated video detection methods suffer from poor generalization across diverse generative models. Method: This paper proposes a forensics-oriented frequency-domain enhancement approach that leverages wavelet decomposition to localize and replace critical frequency bands, thereby guiding the detector to focus on low-level, model-agnostic artifacts introduced by generators rather than on volatile high-level semantic inconsistencies. The method employs a lightweight classifier, single-source training, and a multi-model generalization evaluation paradigm. Contribution/Results: Trained exclusively on videos synthesized by a single generator (e.g., SVD), the method achieves significantly higher cross-model detection accuracy than state-of-the-art methods, including on videos from very recent generative models such as NOVA and FLUX. This demonstrates that the learned features are highly robust and generalize effectively to unseen generative models, without requiring multi-source training data or architectural modifications.
📝 Abstract
Synthetic video generation is progressing rapidly: the latest models produce highly realistic, high-resolution videos that are virtually indistinguishable from real ones. Although several video forensic detectors have recently been proposed, they often exhibit poor generalization, which limits their applicability in real-world scenarios. Our key insight to overcome this issue is to guide the detector towards seeing what really matters: a well-designed forensic classifier should focus on the intrinsic low-level artifacts introduced by a generative architecture rather than on the high-level semantic flaws that characterize a specific model. In this work, we first study different generative architectures, searching for discriminative features that are unbiased, robust to impairments, and shared across models. We then introduce a novel forensic-oriented data augmentation strategy based on wavelet decomposition, replacing specific frequency-related bands to drive the model towards more relevant forensic cues. This training paradigm improves the generalizability of AI-generated video detectors without requiring complex algorithms or large datasets covering multiple synthetic generators. To evaluate our approach, we train the detector on data from a single generative model and test it against videos produced by a wide range of other models. Despite its simplicity, our method achieves a significant accuracy improvement over state-of-the-art detectors and obtains excellent results even on very recent generative models, such as NOVA and FLUX. Code and data will be made publicly available.
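To make the augmentation idea concrete, here is a minimal sketch of what a wavelet-based frequency-band replacement could look like. This is not the authors' released code: the one-level Haar transform, the per-frame grayscale treatment, and the function names (`haar_dwt2`, `swap_band`) are illustrative assumptions for a single 2-D frame, not details from the paper.

```python
# Hypothetical sketch: one-level 2-D Haar wavelet decomposition and
# sub-band replacement, illustrating the frequency-band-swap augmentation
# described in the abstract. Operates on a single 2-D (grayscale) frame.
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT; returns the (LL, LH, HL, HH) sub-bands."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2          # row low-pass
    hi = (x[:, 0::2] - x[:, 1::2]) / 2          # row high-pass
    return ((lo[0::2] + lo[1::2]) / 2,          # LL: coarse approximation
            (lo[0::2] - lo[1::2]) / 2,          # LH
            (hi[0::2] + hi[1::2]) / 2,          # HL
            (hi[0::2] - hi[1::2]) / 2)          # HH: finest details

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction for even-sized inputs)."""
    lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = LL + LH, LL - LH
    hi[0::2], hi[1::2] = HL + HH, HL - HH
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def swap_band(frame_a, frame_b, band="HH"):
    """Replace one sub-band of frame_a with the same sub-band of frame_b.

    `band` is one of "LL", "LH", "HL", "HH". Swapping a high-frequency
    band transfers low-level generator artifacts between frames while the
    semantic content (mostly in LL) stays essentially unchanged.
    """
    names = ("LL", "LH", "HL", "HH")
    ba = dict(zip(names, haar_dwt2(frame_a)))
    bb = dict(zip(names, haar_dwt2(frame_b)))
    ba[band] = bb[band]
    return haar_idwt2(*(ba[n] for n in names))
```

Applied per frame (and per channel) during training, such a swap exposes the detector to the same semantic content with altered low-level frequency statistics, which is the kind of signal the abstract argues a generalizable forensic classifier should rely on.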