🤖 AI Summary
Existing AI-generated image detection models incur substantial computational overhead, hindering real-time deployment in resource-constrained scenarios such as social media platforms.
Method: This paper proposes a lightweight cross-domain detection framework. We first systematically evaluate off-the-shelf lightweight neural networks for AI image detection; then, innovatively integrate spatial- and frequency-domain features, establishing a multi-domain joint training and inference paradigm on the GenImage subset.
Contribution/Results: Our method achieves accuracy comparable to state-of-the-art (SOTA) models (e.g., CLIP+ViT), while reducing parameters by 87%, FLOPs by 92%, and memory footprint by 76%. It demonstrates strong robustness against diverse generative models—including Stable Diffusion, GANs, and diffusion-based approaches—as well as common post-processing attacks. This work provides a practical, efficient, and scalable pathway for AI-generated content authentication.
📝 Abstract
The recent proliferation of photorealistic AI-generated images (AIGI) has raised urgent concerns about their potential misuse, particularly on social media platforms. Current state-of-the-art AIGI detection methods typically rely on large, deep neural architectures, creating significant computational barriers to real-time, large-scale deployment on platforms like social media. To challenge this reliance on computationally intensive models, we introduce LAID, the first framework -- to our knowledge -- that benchmarks and evaluates the detection performance and efficiency of off-the-shelf lightweight neural networks. In this framework, we comprehensively train and evaluate selected models on a representative subset of the GenImage dataset across spatial, spectral, and fusion image domains. Our results demonstrate that lightweight models can achieve competitive accuracy, even under adversarial conditions, while incurring substantially lower memory and computation costs compared to current state-of-the-art methods. This study offers valuable insight into the trade-off between efficiency and performance in AIGI detection and lays a foundation for the development of practical, scalable, and trustworthy detection systems. The source code of LAID can be found at: https://github.com/nchivar/LAID.