EA-Swin: An Embedding-Agnostic Swin Transformer for AI-Generated Video Detection

📅 2026-02-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of detecting AI-generated videos from high-fidelity models such as Sora2 and Veo3, which current methods struggle to handle due to their reliance on shallow features or computationally expensive multimodal architectures. We propose EA-Swin, an embedding-agnostic Swin Transformer that leverages a factorized window attention mechanism to directly model spatiotemporal dependencies in pretrained video embeddings, offering compatibility with any Vision Transformer (ViT)-style encoder. To support comprehensive evaluation, we introduce EA-Video, a new benchmark comprising 130,000 videos, enabling the first unified assessment of generalization across diverse ViT-based embeddings and unseen generators. EA-Swin achieves state-of-the-art accuracy of 0.97–0.99 on mainstream generators, outperforming existing methods by 5%–20%, and demonstrates strong generalization to previously unseen video synthesis models.

📝 Abstract
Recent advances in foundation video generators such as Sora2, Veo3, and other commercial systems have produced highly realistic synthetic videos, exposing the limitations of existing detection methods that rely on shallow embedding trajectories, image-based adaptation, or computationally heavy MLLMs. We propose EA-Swin, an Embedding-Agnostic Swin Transformer that models spatiotemporal dependencies directly on pretrained video embeddings via a factorized windowed attention design, making it compatible with generic ViT-style patch-based encoders. Alongside the model, we construct EA-Video, a 130K-video benchmark that integrates newly collected samples with curated existing datasets, covering diverse commercial and open-source generators and including unseen-generator splits for rigorous cross-distribution evaluation. Extensive experiments show that EA-Swin achieves 0.97-0.99 accuracy across major generators, outperforming prior SoTA methods (typically 0.8-0.9) by 5-20%, while generalizing strongly to unseen distributions, establishing a scalable and robust solution for modern AI-generated video detection.
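The factorized windowed attention the abstract describes can be illustrated with a minimal NumPy sketch, under assumed shapes and a hypothetical window size (the paper's actual layer configuration is not reproduced here): spatial attention within non-overlapping windows of each frame's patch embeddings, followed by temporal attention across frames at each patch position.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over batched (B, L, D) arrays.
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)  # (B, L, L)
    return softmax(scores, -1) @ v                    # (B, L, D)

def factorized_window_attention(x, window=4):
    """x: (T, N, D) pretrained patch embeddings for T frames of N patches
    (any ViT-style encoder). Hypothetical illustration: self-attention is
    factorized into a spatial pass within local windows and a temporal
    pass across frames, instead of full (T*N)^2 attention."""
    T, N, D = x.shape
    assert N % window == 0, "patch count must be divisible by window size"
    # Spatial pass: attend within non-overlapping windows of each frame.
    xs = x.reshape(T * (N // window), window, D)
    xs = attention(xs, xs, xs).reshape(T, N, D)
    # Temporal pass: attend across frames at each patch position.
    xt = xs.transpose(1, 0, 2)           # (N, T, D)
    xt = attention(xt, xt, xt)
    return xt.transpose(1, 0, 2)         # back to (T, N, D)

emb = np.random.default_rng(0).normal(size=(8, 16, 32))  # 8 frames, 16 patches, dim 32
out = factorized_window_attention(emb)
# out.shape == (8, 16, 32)
```

The embedding-agnostic property follows from operating only on the generic (frames, patches, dim) layout that any ViT-style encoder emits, so the same detector head can sit on top of different pretrained backbones.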
Problem

Research questions and friction points this paper is trying to address.

AI-generated video detection
deepfake detection
video forensics
synthetic video
foundation video generators
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embedding-Agnostic
Swin Transformer
Spatiotemporal Modeling
AI-Generated Video Detection
Factorized Windowed Attention
🔎 Similar Papers
2024-02-20 · International Conference on Machine Learning · Citations: 30
Hung Mai
N2TP Technology Solution JSC, Hanoi, Vietnam
Loi Dinh
University of Science, Vietnam National University, HCMC, Vietnam
Duc Hai Nguyen
N2TP Technology Solution JSC, Hanoi, Vietnam
Dat Do
Kruskal Instructor, University of Chicago
Statistics · Population genetics · Optimal transport
Luong Doan
N2TP Technology Solution JSC, Hanoi, Vietnam
Khanh Nguyen Quoc
N2TP Technology Solution JSC, Hanoi, Vietnam
Huan Vu
College of Technology, National Economics University, Hanoi, Vietnam
Phong Ho
N2TP Technology Solution JSC, Hanoi, Vietnam
Naeem Ul Islam
College of Informatics, Yuan Ze University, Taoyuan, Taiwan
Tuan Do
N2TP Technology Solution JSC, Hanoi, Vietnam