🤖 AI Summary
Current forgery detection methods for diffusion-generated videos lack generalizability and robustness at the video level, with most existing approaches operating only at the frame (image) level. To address this, we propose MM-Det++, a video-level deepfake detection framework built on a dual-branch spatio-temporal and multimodal architecture. Its key contributions are: (i) a Frame-Centric Vision Transformer (FC-ViT) that captures fine-grained intra-frame artifacts; (ii) synergistic modeling of spatio-temporal dynamics and cross-modal semantic reasoning by integrating Multimodal Large Language Models (MLLMs) with a Unified Multimodal Learning (UML) module; and (iii) a learnable multimodal reasoning paradigm that enhances generalization. Evaluated on our large-scale DVF benchmark, MM-Det++ improves significantly over state-of-the-art methods, demonstrating high accuracy and strong robustness across diverse generative models and unseen scenarios.
📝 Abstract
The proliferation of videos generated by diffusion models has raised increasing concerns about information security, highlighting the urgent need for reliable detection of synthetic media. Existing methods primarily focus on image-level forgery detection, leaving generic video-level forgery detection largely underexplored. To advance video forensics, we propose a consolidated multimodal detection algorithm, named MM-Det++, specifically designed for detecting diffusion-generated videos. Our approach consists of two innovative branches and a Unified Multimodal Learning (UML) module. Specifically, the Spatio-Temporal (ST) branch employs a novel Frame-Centric Vision Transformer (FC-ViT) to aggregate spatio-temporal information, where FC-tokens capture holistic forgery traces from each video frame. In parallel, the Multimodal (MM) branch adopts a learnable reasoning paradigm to acquire a Multimodal Forgery Representation (MFR) by harnessing the powerful comprehension and reasoning capabilities of Multimodal Large Language Models (MLLMs), discerning forgery traces from a flexible semantic perspective. To integrate the multimodal representations into a coherent space, the UML module is introduced to consolidate the generalization ability of MM-Det++. In addition, we establish a large-scale and comprehensive Diffusion Video Forensics (DVF) dataset to advance research in video forgery detection. Extensive experiments demonstrate the superiority of MM-Det++ and highlight the effectiveness of unified multimodal forgery learning in detecting diffusion-generated videos.
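The two-branch design described above can be illustrated with a toy sketch: a spatio-temporal branch that aggregates one per-frame "FC-token", a multimodal branch that stands in for an MLLM-derived forgery representation, a fusion step playing the role of the UML module, and a binary classifier on top. All function names, dimensions, and the concatenation-based fusion are illustrative assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch of the MM-Det++ dual-branch pipeline. Every component
# here is a stand-in: real branches would be an FC-ViT and an MLLM, and the
# UML module would learn a joint projection rather than concatenate.
import random

DIM = 8  # toy feature dimensionality


def st_branch(frames):
    """Spatio-Temporal branch: one toy FC-token per frame, averaged over time."""
    tokens = [[random.random() for _ in range(DIM)] for _ in frames]
    return [sum(col) / len(tokens) for col in zip(*tokens)]


def mm_branch(frames):
    """Multimodal branch: stand-in for the Multimodal Forgery Representation."""
    return [random.random() for _ in range(DIM)]


def uml_fuse(st_feat, mfr):
    """UML stand-in: map both features into one space (here: concatenation)."""
    return st_feat + mfr


def classify(fused):
    """Binary real/fake decision (toy: mean of fused features vs. 0.5)."""
    score = sum(fused) / len(fused)
    return score, score > 0.5


if __name__ == "__main__":
    random.seed(0)
    video = ["frame_%d" % i for i in range(16)]  # placeholder frames
    score, is_fake = classify(uml_fuse(st_branch(video), mm_branch(video)))
    print(round(score, 3), is_fake)
```

The key structural point is that the two branches run in parallel on the same video and only meet in the fusion step, so each can specialize: low-level spatio-temporal artifacts on one side, semantic-level reasoning on the other.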