🤖 AI Summary
As AI-generated images become increasingly photorealistic, the proliferation of synthetic content necessitates robust methods to distinguish authentic from fabricated imagery. This work introduces MS COCOAI, the first large-scale, multi-model benchmark for AI-generated image detection, built upon the MS COCO dataset. Comprising 96,000 real and synthetic samples, with synthetic images produced by five state-of-the-art models—Stable Diffusion 2.1, Stable Diffusion 3, SDXL, DALL·E 3, and MidJourney v6—the dataset supports two core tasks: binary classification of real versus synthetic images and forensic attribution to specific generative models. Released as the open-source Defactify_Image_Dataset, this resource establishes a standardized, high-quality evaluation framework to advance research in detecting and tracing AI-synthesized visual content.
📝 Abstract
Multimodal generative AI systems such as Stable Diffusion, DALL·E, and MidJourney have fundamentally changed how synthetic images are created. These tools drive innovation but also enable the spread of misleading content, false information, and manipulated media. As generated images become harder to distinguish from photographs, detecting them has become an urgent priority. To address this challenge, we release MS COCOAI, a novel dataset for AI-generated image detection consisting of 96,000 real and synthetic data points, built using the MS COCO dataset. To generate synthetic images, we use five generators: Stable Diffusion 3, Stable Diffusion 2.1, SDXL, DALL·E 3, and MidJourney v6. Based on this dataset, we propose two tasks: (1) classifying images as real or generated, and (2) identifying which model produced a given synthetic image. The dataset is available at https://huggingface.co/datasets/Rajarshi-Roy-research/Defactify_Image_Dataset.