Breaking Latent Prior Bias in Detectors for Generalizable AIGC Image Detection

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
AIGC image detectors exhibit poor generalization across unseen generative models, primarily because they rely on non-robust shortcut features, such as initial noise priors, rather than authentic generation artifacts. This work identifies and mitigates this latent-variable prior bias via **On-Manifold Adversarial Training (OMAT)**: under fixed conditional constraints, it optimizes the initial latent variables of diffusion models to generate reproducible, manifold-preserving adversarial examples. The work further introduces **GenImage++**, a multi-source benchmark for cross-generator evaluation. The approach integrates latent-variable optimization, adversarial training, and cross-generator assessment within ResNet50- and CLIP-based detector architectures. Experiments demonstrate substantial improvements in cross-generator robustness, achieving state-of-the-art generalization on outputs from cutting-edge models including Flux.1 and SD3, without modifying the detector's architecture.

📝 Abstract
Current AIGC detectors often achieve near-perfect accuracy on images produced by the same generator used for training but struggle to generalize to outputs from unseen generators. We trace this failure in part to latent prior bias: detectors learn shortcuts tied to patterns stemming from the initial noise vector rather than learning robust generative artifacts. To address this, we propose On-Manifold Adversarial Training (OMAT): by optimizing the initial latent noise of diffusion models under fixed conditioning, we generate on-manifold adversarial examples that remain on the generator's output manifold, unlike pixel-space attacks, which introduce off-manifold perturbations that the generator itself cannot reproduce and that can obscure the true discriminative artifacts. To test against state-of-the-art generative models, we introduce GenImage++, a test-only benchmark of outputs from advanced generators (Flux.1, SD3) with extended prompts and diverse styles. We apply our adversarial-training paradigm to ResNet50 and CLIP baselines and evaluate across existing AIGC forensic benchmarks and recent challenge datasets. Extensive experiments show that adversarially trained detectors significantly improve cross-generator performance without any network redesign. Our findings on latent-prior bias offer valuable insights for future dataset construction and detector evaluation, guiding the development of more robust and generalizable AIGC forensic methodologies.
Problem

Research questions and friction points this paper is trying to address.

Detectors fail to generalize across unseen AIGC generators
Latent prior bias causes reliance on non-robust noise patterns
Need for adversarial training to improve cross-generator detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

On-Manifold Adversarial Training (OMAT) optimizes latent noise
Generates on-manifold adversarial examples for robust detection
Improves cross-generator performance without network redesign
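The core optimization pattern described above can be illustrated with a minimal sketch. Note the stand-ins: the linear `generator` and logistic `detector` below are toy assumptions, not the paper's diffusion model or ResNet50/CLIP detectors, and `on_manifold_attack` is a hypothetical name. The point is only the mechanism: gradient steps are taken on the initial latent `z` (conditioning frozen), so every adversarial image `G(z)` remains on the generator's output manifold, unlike a pixel-space perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's models):
# a frozen linear "generator" G(z) = W @ z with fixed conditioning,
# and a frozen logistic "detector" D(x) = sigmoid(w . x).
W = rng.normal(size=(8, 4))   # generator weights (frozen)
w = rng.normal(size=8)        # detector weights (frozen during the attack)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def detector_fake_prob(z):
    """Detector's probability that the generated image G(z) is fake."""
    return sigmoid(w @ (W @ z))

def on_manifold_attack(z, steps=50, lr=0.5):
    """Gradient descent on the INITIAL LATENT z, not on pixels.

    Each step lowers the detector's fake-probability while x = G(z)
    stays on the generator's output manifold by construction.
    """
    z = z.copy()
    n = len(z)
    for _ in range(steps):
        p = sigmoid(w @ (W @ z))
        # d/dz of log D(G(z)) for the toy models: (1 - p) * W^T w
        grad = (1.0 - p) * (W.T @ w)
        z -= lr * grad
        # Project back toward the Gaussian noise prior's typical shell
        # (||z|| ~ sqrt(n)), keeping the latent a plausible initial noise.
        z *= np.sqrt(n) / np.linalg.norm(z)
    return z

z0 = rng.normal(size=4)
z_adv = on_manifold_attack(z0)
# z_adv now yields an image the toy detector scores as less "fake",
# exposing the detector's reliance on latent-prior shortcuts.
```

Retraining the detector on such examples (standard adversarial training, with labels unchanged) is the step that, per the abstract, improves cross-generator robustness without any network redesign.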
Yue Zhou
Guangdong Provincial Key Laboratory of Intelligent Information Processing, Shenzhen Key Laboratory of Media Security, and SZU-AFS Joint Innovation Center for AI Technology, Shenzhen University
Xinan He
Nanchang University MS student
DeepFakes · Multimedia Forensics · AIGC Detection
KaiQing Lin
Guangdong Provincial Key Laboratory of Intelligent Information Processing, Shenzhen Key Laboratory of Media Security, and SZU-AFS Joint Innovation Center for AI Technology, Shenzhen University
Bin Fan
University of North Texas
Fengfeng Ding
Nanchang University
Bin Li
Guangdong Provincial Key Laboratory of Intelligent Information Processing, Shenzhen Key Laboratory of Media Security, and SZU-AFS Joint Innovation Center for AI Technology, Shenzhen University