MiraGe: Multimodal Discriminative Representation Learning for Generalizable AI-Generated Image Detection

📅 2025-08-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
AI-generated image detectors often lack generalization to emerging generative models (e.g., Sora) due to over-reliance on generator-specific statistical priors. Method: We propose a generator-agnostic multimodal discriminative representation learning framework built upon CLIP, wherein text embeddings serve as semantic anchors; multimodal prompt learning jointly optimizes the image feature space, theoretically enforcing intra-class compactness and inter-class separability. Contribution/Results: By integrating cross-modal alignment into detection—rather than treating it as a unimodal classification task—our approach eliminates dependence on generator-specific artifacts. Experiments demonstrate state-of-the-art performance across multiple benchmarks and exceptional zero-shot cross-generator robustness, notably maintaining high accuracy on unseen generators including Sora. The method significantly advances generalizable, prior-free detection of synthetic imagery.
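The text-anchored decision rule described in the summary can be sketched as CLIP-style zero-shot scoring: an image embedding is compared against two text anchors ("real" vs. "AI-generated") by cosine similarity. This is an illustrative reconstruction under assumptions, not the authors' code; the placeholder random vectors stand in for actual CLIP embeddings, and the temperature value is a conventional CLIP-like choice, not taken from the paper.

```python
import numpy as np

def classify_with_text_anchors(img_feat, anchor_real, anchor_fake, temperature=0.07):
    """Score an image embedding against two text anchors by cosine
    similarity, then softmax the scaled similarities into probabilities."""
    def normalize(v):
        return v / np.linalg.norm(v)
    img = normalize(img_feat)
    sims = np.array([img @ normalize(anchor_real), img @ normalize(anchor_fake)])
    logits = sims / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs  # [p_real, p_fake]

# Hypothetical 8-d vectors standing in for CLIP text/image embeddings.
rng = np.random.default_rng(0)
anchor_real = rng.normal(size=8)
anchor_fake = rng.normal(size=8)
img_feat = anchor_fake + 0.1 * rng.normal(size=8)  # image close to the "fake" anchor
p_real, p_fake = classify_with_text_anchors(img_feat, anchor_real, anchor_fake)
```

Because the image feature lies near the "AI-generated" anchor, the fake probability dominates; the actual method learns prompts so that real and generated images cluster around their respective anchors.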

📝 Abstract
Recent advances in generative models have highlighted the need for robust detectors capable of distinguishing real images from AI-generated images. While existing methods perform well on known generators, their performance often declines when tested with newly emerging or unseen generative models due to overlapping feature embeddings that hinder accurate cross-generator classification. In this paper, we propose Multimodal Discriminative Representation Learning for Generalizable AI-generated Image Detection (MiraGe), a method designed to learn generator-invariant features. Motivated by theoretical insights on intra-class variation minimization and inter-class separation, MiraGe tightly aligns features within the same class while maximizing separation between classes, enhancing feature discriminability. Moreover, we apply multimodal prompt learning to further refine these principles into CLIP, leveraging text embeddings as semantic anchors for effective discriminative representation learning, thereby improving generalizability. Comprehensive experiments across multiple benchmarks show that MiraGe achieves state-of-the-art performance, maintaining robustness even against unseen generators like Sora.
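The intra-class compactness and inter-class separation principles invoked in the abstract can be illustrated with a toy centroid-based objective: features are pulled toward their class mean while the two class means are pushed at least a margin apart. This is a hedged sketch of the general idea, not the paper's actual loss; the margin value and centroid formulation are assumptions for illustration.

```python
import numpy as np

def discriminative_loss(real_feats, fake_feats, margin=1.0):
    """Toy objective: minimize distance of features to their class centroid
    (intra-class compactness) plus a hinge penalty if the two centroids are
    closer than `margin` (inter-class separation)."""
    c_real = real_feats.mean(axis=0)
    c_fake = fake_feats.mean(axis=0)
    intra = (np.linalg.norm(real_feats - c_real, axis=1) ** 2).mean() \
          + (np.linalg.norm(fake_feats - c_fake, axis=1) ** 2).mean()
    inter = max(0.0, margin - np.linalg.norm(c_real - c_fake)) ** 2
    return intra + inter

rng = np.random.default_rng(1)
# Tight, well-separated clusters should score lower than overlapping ones.
tight_real = rng.normal(loc=0.0, scale=0.1, size=(16, 4))
tight_fake = rng.normal(loc=3.0, scale=0.1, size=(16, 4))
overlap_real = rng.normal(loc=0.0, scale=1.0, size=(16, 4))
overlap_fake = rng.normal(loc=0.2, scale=1.0, size=(16, 4))
good = discriminative_loss(tight_real, tight_fake)
bad = discriminative_loss(overlap_real, overlap_fake)
```

A low value of this objective corresponds to the discriminable, generator-invariant feature geometry the abstract argues for: overlapping real/fake embeddings (the failure mode on unseen generators) score visibly worse.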
Problem

Research questions and friction points this paper is trying to address.

Detecting AI-generated images across diverse unseen generators
Overcoming overlapping feature embeddings for accurate classification
Enhancing generalizability with multimodal discriminative representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal discriminative representation learning for detection
Generator-invariant features via intra-class alignment
Multimodal prompt learning with CLIP embeddings
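The prompt-learning bullet above can be sketched as learnable context vectors prepended to a class-name token embedding, in the spirit of CoOp-style prompt tuning for CLIP; in the full method this sequence would be fed through the text encoder and the context optimized jointly with the image side. The dimensions and variable names here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def build_prompt_embedding(context, class_token):
    """Form a prompt as [learnable context tokens; class-name token].
    Here we only concatenate to show the structure of the learned prompt."""
    return np.concatenate([context, class_token[None, :]], axis=0)

embed_dim, n_ctx = 8, 4
rng = np.random.default_rng(2)
context = 0.02 * rng.normal(size=(n_ctx, embed_dim))  # shared learnable context
real_token = rng.normal(size=embed_dim)               # embedding of "real"
fake_token = rng.normal(size=embed_dim)               # embedding of "AI-generated"
prompt_real = build_prompt_embedding(context, real_token)
prompt_fake = build_prompt_embedding(context, fake_token)
```

Sharing the context across both classes means gradient updates reshape the text anchors for "real" and "AI-generated" jointly, which is what lets the text side act as semantic anchors for the image feature space.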
Kuo Shi
University of Technology Sydney, Ultimo, NSW, Australia
Jie Lu
University of Technology Sydney, Ultimo, NSW, Australia
Shanshan Ye
University of Technology Sydney, Ultimo, NSW, Australia
Guangquan Zhang
University of Technology Sydney, Australia
fuzzy sets and systems · machine learning · decision support systems
Zhen Fang
University of Technology Sydney, Ultimo, NSW, Australia