🤖 AI Summary
The proliferation of large artificial intelligence models (LAIMs) poses significant risks of multimedia content misuse and challenges for AI governance. Method: This paper presents the first systematic survey of detection techniques for LAIM-generated content across text, image, video, audio, and multimodal modalities. We propose a novel taxonomy organized by media modality and aligned with two perspectives: pure detection, which targets detection performance, and beyond detection, which adds attributes such as generalizability, robustness, and interpretability. Our analysis integrates generation mechanisms, benchmark datasets, open-source tools, and social media impact, covering approaches ranging from statistical features and frequency-domain analysis to neural fingerprints, self-supervised/contrastive learning, and black-box/white-box methods. Contribution/Results: We establish the first unified taxonomy for LAIM-generated content detection, yielding the most comprehensive research map to date. We identify core technical challenges and propose six concrete future directions to advance AI content safety and governance; our accompanying GitHub repository has been widely adopted in the research community.
📝 Abstract
The rapid advancement of Large AI Models (LAIMs), particularly diffusion models and large language models, has marked a new era in which AI-generated multimedia is increasingly integrated into various aspects of daily life. Although beneficial in numerous fields, this content presents significant risks, including potential misuse, societal disruption, and ethical concerns. Consequently, detecting multimedia generated by LAIMs has become crucial, with a marked rise in related research. Despite this, there remains a notable gap in systematic surveys that focus specifically on detecting LAIM-generated multimedia. Addressing this, we provide the first survey to comprehensively cover existing research on detecting multimedia (text, images, videos, audio, and multimodal content) created by LAIMs. Specifically, we introduce a novel taxonomy for detection methods, categorized by media modality and aligned with two perspectives: pure detection (aiming to enhance detection performance) and beyond detection (adding attributes such as generalizability, robustness, and interpretability to detectors). Additionally, we present a brief overview of generation mechanisms, public datasets, online detection tools, and evaluation metrics as a resource for researchers and practitioners in this field. Most importantly, we offer a focused analysis from a social media perspective to highlight the broader societal impact of LAIM-generated multimedia. Furthermore, we identify current challenges in detection and propose directions for future research that address unexplored, ongoing, and emerging issues in detecting multimedia generated by LAIMs. This survey aims to fill an academic gap and contribute to global AI security efforts, helping to ensure the integrity of information in the digital realm. The project link is https://github.com/Purdue-M2/Detect-LAIM-generated-Multimedia-Survey.