🤖 AI Summary
This work addresses the limited generalization of existing AI-generated image detection methods, which struggle to adapt to diverse generative models. To overcome this challenge, the authors propose a novel framework that integrates lightweight artifact-aware detectors with a multimodal large language model. For the first time, a fuzzy decision tree is introduced to adaptively fuse low-level artifact cues with high-level semantic information. The proposed approach achieves state-of-the-art detection accuracy across multiple mainstream generative models while remaining computationally efficient, with marked improvements in cross-model generalization. Its robustness and effectiveness are further validated in complex generation scenarios, highlighting its potential for practical deployment in real-world applications.
📝 Abstract
The malicious use and widespread dissemination of AI-generated images pose a serious threat to the authenticity of digital content. Existing detection methods exploit low-level artifacts left by common manipulation steps within the generation pipeline, but they often lack generalization due to model-specific overfitting. Recently, researchers have turned to Multimodal Large Language Models (MLLMs) for AI-generated content (AIGC) detection, leveraging their high-level semantic reasoning and broad generalization capabilities. While promising, MLLMs lack the fine-grained perceptual sensitivity to subtle generation artifacts, making them inadequate as standalone detectors. To address this issue, we propose a novel AI-generated image detection framework that synergistically integrates lightweight artifact-aware detectors with MLLMs via a fuzzy decision tree. The decision tree treats the outputs of the basic detectors as fuzzy membership values, enabling adaptive fusion of complementary cues from semantic and perceptual perspectives. Extensive experiments demonstrate that the proposed method achieves state-of-the-art accuracy and strong generalization across diverse generative models.
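To make the fusion idea concrete, the sketch below shows one way detector outputs could be treated as fuzzy membership values and combined by soft routing through a tiny decision tree. This is an illustrative assumption, not the paper's actual tree structure: the node layout, routing rule, and t-norm/t-conorm choices here are hypothetical.

```python
def fuzzy_fuse(artifact_score: float, semantic_score: float) -> float:
    """Fuse two detector outputs (each in [0, 1], higher = more likely
    AI-generated) via a minimal two-level fuzzy decision tree sketch.

    Hypothetical structure for illustration only: the root routes softly
    on the MLLM's semantic score, and each leaf combines cues differently.
    """
    # Root node: membership in the "semantics look generated" branch.
    mu_root = semantic_score
    # Left leaf (semantics lean "real"): defer to the low-level
    # artifact detector, which catches what the MLLM misses.
    p_left = artifact_score
    # Right leaf (semantics lean "fake"): fuzzy OR of both cues
    # (probabilistic t-conorm), so either cue can confirm the decision.
    p_right = 1.0 - (1.0 - artifact_score) * (1.0 - semantic_score)
    # Soft routing: membership-weighted mix of the leaf outputs.
    return (1.0 - mu_root) * p_left + mu_root * p_right
```

Because each leaf output and the routing weight stay in [0, 1], the fused score is also a valid membership value; thresholding it (e.g., at 0.5) yields the final real/fake decision.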