How Do Diffusion Models Improve Adversarial Robustness?

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
The intrinsic mechanisms by which diffusion models enhance adversarial robustness remain poorly understood. Method: We systematically investigate the denoising process through adversarial sample purification, controlled experiments with fixed randomness, ℓp-distance analysis, and empirical validation of the correlation between compression ratio and robustness gain. Contribution/Results: We show that the robustness improvement stems not from ℓp-distance reduction toward clean samples, but from an input-space compression effect tied to the inherent stochasticity of the diffusion process. Crucially, we identify the compression ratio, rather than conventional denoising, as the primary driver of the robustness gain. This insight yields a gradient-free, interpretable metric for quantifying robustness. Experiments on CIFAR-10 show that fixing the diffusion randomness reduces the robustness improvement to approximately 24%, while the compression ratio correlates strongly (ρ > 0.9) with the robustness gain, establishing a novel, interpretable paradigm for robust diffusion-based purification.
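
To make the fixed-randomness experiment concrete, here is a minimal, self-contained sketch (not the paper's code) of DiffPure-style purification: diffuse the input forward to an intermediate step, then run ancestral denoising back. `toy_denoiser`, `purify`, and the schedule constants are illustrative assumptions; the point is that passing a seeded `torch.Generator` freezes the internal randomness, turning purification into a deterministic map, which is the setting under which the paper re-evaluates robustness.

```python
# Hedged sketch of stochastic vs. fixed-randomness diffusion purification.
# All components below are assumptions for illustration, not the paper's code.
import torch

def toy_denoiser(x_t, t):
    # Placeholder epsilon-prediction network; a real defense would use a
    # pretrained diffusion/score model here.
    return 0.1 * x_t

def purify(x, t_star=100, T=1000, generator=None):
    betas = torch.linspace(1e-4, 2e-2, T)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    # Forward: diffuse the (possibly adversarial) input up to step t_star.
    eps = torch.randn(x.shape, generator=generator)
    x_t = alphas_bar[t_star].sqrt() * x + (1 - alphas_bar[t_star]).sqrt() * eps
    # Reverse: ancestral sampling back to step 0; each step draws fresh noise.
    for t in range(t_star, 0, -1):
        a_t, a_bar = 1.0 - betas[t], alphas_bar[t]
        eps_hat = toy_denoiser(x_t, t)
        mean = (x_t - betas[t] / (1 - a_bar).sqrt() * eps_hat) / a_t.sqrt()
        noise = torch.randn(x.shape, generator=generator) if t > 1 else torch.zeros_like(x)
        x_t = mean + betas[t].sqrt() * noise
    return x_t

x_adv = torch.rand(1, 3, 32, 32)        # stand-in for an adversarial CIFAR-10 image
g1 = torch.Generator().manual_seed(0)   # fixed randomness: purification is deterministic
g2 = torch.Generator().manual_seed(0)
print(torch.allclose(purify(x_adv, generator=g1), purify(x_adv, generator=g2)))  # True
print(torch.equal(purify(x_adv), purify(x_adv)))  # False in general: stochastic purification
```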

📝 Abstract
Recent findings suggest that diffusion models significantly enhance empirical adversarial robustness. While some intuitive explanations have been proposed, the precise mechanisms underlying these improvements remain unclear. In this work, we systematically investigate how and how well diffusion models improve adversarial robustness. First, we observe that diffusion models intriguingly increase, rather than decrease, the $\ell_p$ distance to clean samples, challenging the intuition that purification denoises inputs closer to the original data. Second, we find that the purified images are heavily influenced by the internal randomness of diffusion models, where a compression effect arises within each randomness configuration. Motivated by this observation, we evaluate robustness under fixed randomness and find that the improvement drops to approximately 24% on CIFAR-10, substantially lower than prior reports approaching 70%. Importantly, we show that this remaining robustness gain strongly correlates with the model's ability to compress the input space, revealing the compression rate as a reliable robustness indicator without requiring gradient-based analysis. Our findings provide novel insights into the mechanisms underlying diffusion-based purification, and offer guidance for developing more effective and principled adversarial purification systems.
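
The abstract's first observation can be checked with a simple measurement: compare the ℓp distance of an adversarial input to its clean counterpart before and after purification. The sketch below shows only the measurement; `purifier` is a hypothetical stand-in (a real experiment would plug in the diffusion purification model), and the toy perturbation is not a real attack.

```python
# Hedged sketch of the lp-distance check described in the abstract: does
# purification move an adversarial input closer to its clean counterpart?
import torch

def lp_distance(a: torch.Tensor, b: torch.Tensor, p: float = 2.0) -> torch.Tensor:
    # Per-sample lp distance, flattening all non-batch dimensions.
    return (a - b).flatten(start_dim=1).norm(p=p, dim=1)

def purifier(x: torch.Tensor) -> torch.Tensor:
    # Placeholder map; only here so the snippet runs. The paper's finding
    # concerns real diffusion purifiers, which this toy does not reproduce.
    return x + 0.05 * torch.randn_like(x)

x_clean = torch.rand(8, 3, 32, 32)  # clean CIFAR-10-sized batch
x_adv = (x_clean + 8 / 255 * torch.randn_like(x_clean).sign()).clamp(0, 1)
x_pur = purifier(x_adv)

# Paper's observation (with a real diffusion purifier): d_after tends to
# EXCEED d_before, i.e. purification does not shrink the distance to clean data.
d_before = lp_distance(x_adv, x_clean)
d_after = lp_distance(x_pur, x_clean)
print(f"mean ||x_adv - x_clean||_2        = {d_before.mean().item():.4f}")
print(f"mean ||purify(x_adv) - x_clean||_2 = {d_after.mean().item():.4f}")
```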
Problem

Research questions and friction points this paper is trying to address.

Understand the mechanisms by which diffusion models enhance adversarial robustness
Analyze how the internal randomness of diffusion models affects purification effectiveness
Explore the correlation between input-space compression and robustness (see the sketch after this list)
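
One way to operationalize the compression question is to estimate how strongly the purifier contracts distances between nearby inputs. The paper's exact compression metric may differ; the sketch below assumes one simple definition, the mean ratio of output to input pairwise distances over random perturbation pairs, with `purifier` as a placeholder map.

```python
# Hedged sketch of an input-space compression estimate: how much does a
# purifier shrink distances between nearby inputs? This is an assumed
# local-contraction definition, not necessarily the paper's exact metric.
import torch

def purifier(x: torch.Tensor) -> torch.Tensor:
    # Placeholder: a toy linear map with contraction factor 0.5, so the
    # estimate below should come out near 0.5. Plug in a diffusion purifier
    # (with fixed randomness) for a real measurement.
    return 0.5 * x + 0.25

def compression_ratio(x: torch.Tensor, n_pairs: int = 64, radius: float = 8 / 255) -> float:
    ratios = []
    for _ in range(n_pairs):
        d = torch.randn_like(x).sign()
        x1 = (x + radius * d).clamp(0, 1)
        x2 = (x - radius * d).clamp(0, 1)
        num = (purifier(x1) - purifier(x2)).norm()
        den = (x1 - x2).norm()
        ratios.append((num / den).item())
    return sum(ratios) / len(ratios)  # < 1 means the map compresses locally

x = torch.rand(1, 3, 32, 32)
print(f"estimated compression ratio: {compression_ratio(x):.3f}")
```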
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shows that diffusion purification increases, rather than decreases, the ℓp distance of adversarial samples to clean data
Demonstrates that purified images are heavily shaped by the model's internal randomness, with a compression effect within each randomness configuration
Identifies the compression rate as a gradient-free robustness indicator (see the sketch after this list)
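
The claimed indicator can be validated by correlating per-configuration compression estimates with measured robustness gains, with no gradients through the purifier required. The numbers below are made-up placeholders, not results from the paper; only the correlation recipe is the point.

```python
# Hedged sketch of validating compression rate as a gradient-free robustness
# indicator. Both arrays are illustrative placeholders, NOT paper data.
import numpy as np

# Hypothetical measurements for several purifier configurations
# (here, higher "compression" means stronger input-space compression).
compression = np.array([0.35, 0.48, 0.60, 0.72, 0.85])
robust_gain = np.array([0.08, 0.15, 0.21, 0.28, 0.36])  # robust-accuracy improvement

# A strong correlation (the paper reports rho > 0.9; the sign depends on how
# the compression metric is oriented) supports using compression as a
# robustness proxy without any gradient-based analysis.
rho = np.corrcoef(compression, robust_gain)[0, 1]
print(f"correlation between compression and robustness gain: {rho:.3f}")
```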