🤖 AI Summary
Existing AI-generated image detectors suffer severe degradation under cross-distribution generalization because their training data carry biases in both the pixel and frequency domains (e.g., in DCT/FFT spectra), causing models to learn spurious correlations. To address this, we propose Dual Data Alignment (DDA), a framework that aligns real and synthetic training data in both domains so that detectors learn domain-invariant rather than biased cues. We also introduce two new test sets, DDA-COCO and EvalGEN, covering major diffusion-based, GAN-based, and visual auto-regressive generators. Trained exclusively on DDA-aligned MSCOCO, a detector improves across eight diverse benchmarks by a non-trivial margin, including a +7.2% gain on in-the-wild benchmarks, demonstrating DDA's superior generalization over prior methods.
📝 Abstract
Existing detectors are often trained on biased datasets, leading them to overfit to non-causal image attributes that are spuriously correlated with real/synthetic labels. While these biased features improve performance on the training distribution, they cause substantial degradation on unbiased datasets. One common remedy is dataset alignment through generative reconstruction, which matches the semantic content of real and synthetic images. However, we revisit this approach and show that pixel-level alignment alone is insufficient: the reconstructed images still suffer from frequency-level misalignment, which can perpetuate spurious correlations. In particular, we observe that reconstruction models tend to restore the high-frequency details lost in real images (possibly due to JPEG compression), inadvertently creating a frequency-level misalignment in which synthetic images appear to have richer high-frequency content than real ones. Detectors then learn to associate high-frequency features with the synthetic label, further reinforcing biased cues. To resolve this, we propose Dual Data Alignment (DDA), which aligns both the pixel and frequency domains. Moreover, we introduce two new test sets: DDA-COCO, containing DDA-aligned synthetic images for evaluating detectors on maximally aligned data, and EvalGEN, featuring the latest generative models for assessing detectors under new architectures such as visual auto-regressive generators. Finally, our extensive evaluations demonstrate that a detector trained exclusively on DDA-aligned MSCOCO improves across 8 diverse benchmarks by a non-trivial margin, including a +7.2% gain on in-the-wild benchmarks, highlighting the improved generalizability of unbiased detectors.
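The frequency-level misalignment described above can be made concrete with a small sketch: if low-pass filtering (a crude stand-in for JPEG's high-frequency loss) is applied to an image, the fraction of its spectral energy above a radial frequency cutoff drops, and a detector comparing such images against sharper reconstructions could latch onto exactly this gap. This is a minimal illustration using NumPy with a hypothetical cutoff of 0.25, not the paper's actual measurement pipeline.

```python
import numpy as np


def high_freq_energy_ratio(img, cutoff=0.25):
    """Fraction of spectral energy above a radial frequency cutoff.

    img: 2D grayscale array. cutoff: normalized radius (fraction of
    Nyquist) separating "low" from "high" frequencies -- a hypothetical
    choice for illustration only.
    """
    # Power spectrum with the DC component shifted to the center.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    yy, xx = np.mgrid[:h, :w]
    # Normalized radial distance of each frequency bin from the center.
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    return spec[r > cutoff].sum() / spec.sum()


# Toy example: white noise as a "sharp" image, and a 3-tap moving
# average as a simple low-pass filter mimicking high-frequency loss.
rng = np.random.default_rng(0)
sharp = rng.standard_normal((64, 64))
blurred = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)) / 3

print(high_freq_energy_ratio(sharp), high_freq_energy_ratio(blurred))
```

Because the moving average attenuates high frequencies while passing the DC term untouched, the blurred image's ratio is strictly lower; a frequency-aware alignment step would aim to remove exactly this kind of gap between real and reconstructed images.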