Rethinking the Use of Vision Transformers for AI-Generated Image Detection

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the underutilization of multi-layer features in Vision Transformer (ViT)-based detection of AI-generated images, this paper proposes MoLD, a dynamic multi-layer feature-fusion method built on a gating mechanism. Unlike mainstream approaches that rely solely on final-layer ViT features, MoLD employs learnable gating modules to adaptively weight and integrate both deep semantic features and early-stage local features from pre-trained ViTs (e.g., CLIP-ViT and DINOv2), thereby exploiting their complementary representational strengths. Experiments demonstrate that MoLD significantly improves detection accuracy on both GAN- and diffusion-model-generated images. Moreover, it exhibits strong generalization across diverse generative models and robustness in real-world scenarios. By enabling effective joint exploitation of hierarchical ViT representations, MoLD establishes a novel paradigm for leveraging multi-layer ViT features in AIGC detection.

📝 Abstract
Rich feature representations derived from CLIP-ViT have been widely utilized in AI-generated image detection. While most existing methods primarily leverage features from the final layer, we systematically analyze the contributions of layer-wise features to this task. Our study reveals that earlier layers provide more localized and generalizable features, often surpassing the performance of final-layer features in detection tasks. Moreover, we find that different layers capture distinct aspects of the data, each contributing uniquely to AI-generated image detection. Motivated by these findings, we introduce a novel adaptive method, termed MoLD, which dynamically integrates features from multiple ViT layers using a gating-based mechanism. Extensive experiments on both GAN- and diffusion-generated images demonstrate that MoLD significantly improves detection performance, enhances generalization across diverse generative models, and exhibits robustness in real-world scenarios. Finally, we illustrate the scalability and versatility of our approach by successfully applying it to other pre-trained ViTs, such as DINOv2.
Problem

Research questions and friction points this paper is trying to address.

Final-layer-only use of ViT features underexploits the multi-layer information available for AI-generated image detection
Detection performance and generalization degrade across unseen generative models
Real-world robustness calls for adaptive, rather than fixed, feature integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive multi-layer feature integration
Gating-based dynamic fusion mechanism
Enhanced generalization across generative models
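The gating-based fusion described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes one pooled feature vector per ViT layer and a single learnable logit per layer (the paper's gating modules may well be input-conditioned networks rather than global scalars). The function name `gated_layer_fusion` is hypothetical.

```python
import numpy as np

def gated_layer_fusion(layer_feats, gate_logits):
    """Fuse per-layer ViT features with softmax gates.

    layer_feats: (L, D) array, one pooled feature vector per ViT layer.
    gate_logits: (L,) learnable logits (an assumption; the paper's exact
                 gating modules are not specified here).
    Returns the gated mixture of shape (D,).
    """
    # Softmax over layers turns logits into non-negative weights summing to 1.
    w = np.exp(gate_logits - gate_logits.max())
    w = w / w.sum()
    # Weighted sum: deep semantic and early local features both contribute,
    # with the gates deciding how much of each layer to keep.
    return (w[:, None] * layer_feats).sum(axis=0)

# Toy example: 4 layers, 8-dim features.
feats = np.arange(32, dtype=float).reshape(4, 8)
logits = np.zeros(4)  # uniform gates reduce fusion to a plain layer average
fused = gated_layer_fusion(feats, logits)
```

With uniform gate logits the fusion collapses to an unweighted average of the layers; during training, the logits (or a gating network) would be learned so that the most discriminative layers receive larger weights.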