On the MIA Vulnerability Gap Between Private GANs and Diffusion Models

📅 2025-09-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the differential privacy (DP) vulnerability disparity between generative adversarial networks (GANs) and diffusion models under membership inference attacks (MIAs). Method: We propose the first theoretical framework grounded in model stability to analyze DP privacy guarantees, revealing GANs’ inherently lower data sensitivity and thus superior potential for DP compliance. A unified MIA evaluation protocol is employed across multiple benchmark datasets and varying privacy budgets (ε). Contribution/Results: Under strong DP constraints (e.g., ε ≤ 2), GANs consistently outperform diffusion models—reducing MIA success rates by 12.7–28.3 percentage points on average and exhibiting significantly lower privacy leakage. This study provides the first systematic, stability-based explanation of how architectural differences in generative models affect their DP resilience, offering both theoretical foundations and practical guidance for designing privacy-preserving, high-fidelity generative AI systems.

Technology Category

Application Category

📝 Abstract
Generative Adversarial Networks (GANs) and diffusion models have emerged as leading approaches for high-quality image synthesis. While both can be trained under differential privacy (DP) to protect sensitive data, their sensitivity to membership inference attacks (MIAs), a key threat to data confidentiality, remains poorly understood. In this work, we present the first unified theoretical and empirical analysis of the privacy risks faced by differentially private generative models. We begin by showing, through a stability-based analysis, that GANs exhibit fundamentally lower sensitivity to data perturbations than diffusion models, suggesting a structural advantage in resisting MIAs. We then validate this insight with a comprehensive empirical study using a standardized MIA pipeline to evaluate privacy leakage across datasets and privacy budgets. Our results consistently reveal a marked privacy robustness gap in favor of GANs, even in strong DP regimes, highlighting that model type alone can critically shape privacy leakage.
Problem

Research questions and friction points this paper is trying to address.

Analyzing MIA vulnerability gap between private GANs and diffusion models
Evaluating privacy risks of differentially private generative models
Comparing structural resistance to membership inference attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stability analysis comparing GANs and diffusion models
Standardized MIA pipeline evaluating privacy leakage
Theoretical and empirical analysis of privacy risks
🔎 Similar Papers
No similar papers found.