Keep It Real: Challenges in Attacking Compression-Based Adversarial Purification

📅 2025-08-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the robustness mechanisms of compression-based adversarial purification and identifies perceptual fidelity, i.e., alignment between the reconstructed output distribution and natural image statistics, as the decisive factor governing attack success. Rather than attributing robustness to gradient masking, as prevailing explanations do, we conduct systematic white-box and adaptive attacks across diverse compression models. Our evaluation shows that models producing high-fidelity reconstructions substantially resist these attacks, whereas low-fidelity models can be broken. Empirically, attack success rate rises sharply as reconstruction fidelity degrades, suggesting that distributional alignment with natural images itself confers robustness. To our knowledge, this is the first work to explicitly identify *perceptual fidelity* as a core security dimension of compression-based defenses, and it provides an empirical foundation for rigorous security assessment of such defenses.

📝 Abstract
Previous work has suggested that preprocessing images through lossy compression can defend against adversarial perturbations, but comprehensive attack evaluations have been lacking. In this paper, we construct strong white-box and adaptive attacks against various compression models and identify a critical challenge for attackers: high realism in reconstructed images significantly increases attack difficulty. Through rigorous evaluation across multiple attack scenarios, we demonstrate that compression models capable of producing realistic, high-fidelity reconstructions are substantially more resistant to our attacks. In contrast, low-realism compression models can be broken. Our analysis reveals that this is not due to gradient masking. Rather, realistic reconstructions maintaining distributional alignment with natural images seem to offer inherent robustness. This work highlights a significant obstacle for future adversarial attacks and suggests that developing more effective techniques to overcome realism represents an essential challenge for comprehensive security evaluation.
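The attack setting described above (white-box gradients taken end-to-end through the compression step) can be sketched in miniature. Everything below is a hypothetical stand-in, not the paper's actual method: the "purifier" is a smooth soft-quantizer rather than a learned codec, the "classifier" is a fixed linear scorer, and all parameter values are illustrative.

```python
import numpy as np

def purify(x, levels=8.0):
    # Smooth soft-quantizer: a differentiable toy stand-in for a lossy
    # compression/reconstruction step (derivative: 1 - cos(2*pi*levels*x)).
    return x - np.sin(2 * np.pi * levels * x) / (2 * np.pi * levels)

def pgd_attack(x0, w, eps=0.05, step=0.01, iters=40, levels=8.0):
    # L_inf-bounded PGD that differentiates through the purifier,
    # lowering the linear score w @ purify(x) of the true class.
    x = x0.copy()
    for _ in range(iters):
        dp = 1.0 - np.cos(2 * np.pi * levels * x)  # purifier Jacobian (diagonal)
        grad = dp * w                              # chain rule: d(w @ p(x)) / dx
        x = x - step * np.sign(grad)               # signed descent step
        x = np.clip(x, x0 - eps, x0 + eps)         # project into the L_inf ball
        x = np.clip(x, 0.0, 1.0)                   # keep pixel values valid
    return x

rng = np.random.default_rng(0)
x0 = rng.uniform(0.2, 0.8, size=16)   # toy "image"
w = rng.normal(size=16)               # toy classifier weights
x_adv = pgd_attack(x0, w)
score_clean = w @ purify(x0)
score_adv = w @ purify(x_adv)         # lower than score_clean if the attack works
```

In the paper's setting, a learned compression model would replace `purify`; the abstract's point is precisely that high-realism codecs make this kind of gradient signal far less effective for the attacker.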
Problem

Research questions and friction points this paper is trying to address.

Evaluating attacks on compression-based adversarial purification defenses
Assessing the impact of high-realism reconstructions on attack difficulty
Analyzing the robustness of compression models whose reconstructions stay aligned with natural image statistics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Strong white-box and adaptive attacks against diverse compression models
Reconstruction realism identified as the key obstacle to attack success
Evidence that realistic, distribution-aligned reconstructions offer inherent robustness, not gradient masking