🤖 AI Summary
This work addresses the challenge of detecting AI-generated videos from high-fidelity models such as Sora2 and Veo3; existing methods struggle with these generators because they rely on shallow features or computationally expensive multimodal architectures. We propose EA-Swin, an embedding-agnostic Swin Transformer that leverages a factorized window attention mechanism to directly model spatiotemporal dependencies in pretrained video embeddings, making it compatible with any Vision Transformer (ViT)-style encoder. To support comprehensive evaluation, we introduce EA-Video, a new benchmark of 130,000 videos that enables the first unified assessment of generalization across diverse ViT-based embeddings and unseen generators. EA-Swin achieves state-of-the-art accuracy of 0.97–0.99 on mainstream generators, outperforming existing methods by 5%–20%, and generalizes strongly to previously unseen video synthesis models.
📝 Abstract
Recent advances in foundation video generators such as Sora2, Veo3, and other commercial systems have produced highly realistic synthetic videos, exposing the limitations of existing detection methods that rely on shallow embedding trajectories, image-based adaptation, or computationally heavy MLLMs. We propose EA-Swin, an Embedding-Agnostic Swin Transformer that models spatiotemporal dependencies directly on pretrained video embeddings via a factorized windowed attention design, making it compatible with generic ViT-style patch-based encoders. Alongside the model, we construct EA-Video, a benchmark of 130K videos that integrates newly collected samples with curated existing datasets, covering diverse commercial and open-source generators and including unseen-generator splits for rigorous cross-distribution evaluation. Extensive experiments show that EA-Swin achieves 0.97–0.99 accuracy across major generators, outperforming prior SoTA methods (typically 0.8–0.9) by 5%–20%, while maintaining strong generalization to unseen distributions. These results establish a scalable and robust solution for modern AI-generated video detection.
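To make the factorized windowed attention idea concrete, here is a minimal, hypothetical NumPy sketch of the general pattern: a spatial stage that attends within small non-overlapping windows of each frame's patch embeddings, followed by a temporal stage that attends across frames at each spatial location. This is an illustration of the generic technique, not the paper's actual implementation; all function names, the window size, and the use of unprojected embeddings as queries/keys/values are assumptions for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(x):
    """Plain scaled dot-product self-attention within each window.

    x: (num_windows, tokens_per_window, dim). For brevity this sketch
    uses the embeddings themselves as Q, K, and V (no learned projections).
    """
    d = x.shape[-1]
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(d)
    return softmax(scores) @ x

def factorized_spatiotemporal_attention(emb, window=2):
    """Hypothetical factorized attention over ViT-style patch embeddings.

    emb: (T, H, W, C) — T frames of an H x W grid of C-dim patch embeddings,
    as produced by a generic ViT-style video encoder (assumed layout).
    """
    T, H, W, C = emb.shape
    # Spatial stage: attention inside non-overlapping window x window tiles per frame.
    xs = emb.reshape(T, H // window, window, W // window, window, C)
    xs = xs.transpose(0, 1, 3, 2, 4, 5).reshape(-1, window * window, C)
    xs = window_attention(xs)
    xs = xs.reshape(T, H // window, W // window, window, window, C)
    xs = xs.transpose(0, 1, 3, 2, 4, 5).reshape(T, H, W, C)
    # Temporal stage: attention across the T frames at each spatial location.
    xt = xs.reshape(T, H * W, C).transpose(1, 0, 2)  # (H*W, T, C)
    xt = window_attention(xt)
    return xt.transpose(1, 0, 2).reshape(T, H, W, C)
```

Factorizing attention this way keeps the cost linear in the number of windows rather than quadratic in the full token count, which is why Swin-style designs scale to long videos; a classification head over the resulting features would then score each clip as real or generated.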