🤖 AI Summary
This work examines how image compression affects the adversarial robustness of deep classifiers, highlighting critical security vulnerabilities that arise when attacks are launched directly in the compressed domain. The paper proposes a novel paradigm for generating adversarial examples in the compressed representation space and systematically demonstrates, for the first time, that compression amplifies adversarial perturbations through a “decision space contraction” mechanism, substantially increasing attack success rates. Experimental results show that, under identical perturbation budgets, compressed-domain attacks significantly outperform conventional pixel-space attacks. These findings expose a fundamental vulnerability in systems that incorporate compression pipelines and offer theoretical insight into the intrinsic relationship between compression and model robustness.
📝 Abstract
Image compression is a ubiquitous component of modern visual pipelines, routinely applied by social media platforms and resource-constrained systems prior to inference. Despite its prevalence, the impact of compression on adversarial robustness remains poorly understood. We study a previously unexplored adversarial setting in which attacks are applied directly in compressed representations, and show that compression can act as an adversarial amplifier for deep image classifiers. Under identical nominal perturbation budgets, compression-aware attacks are substantially more effective than their pixel-space counterparts. We attribute this effect to decision space reduction, whereby compression induces a non-invertible, information-losing transformation that contracts classification margins and increases sensitivity to perturbations. Extensive experiments across standard benchmarks and architectures support our analysis and reveal a critical vulnerability in compression-in-the-loop deployment settings. Code will be released.
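The core idea above can be illustrated with a minimal numpy sketch. This is not the paper's actual method: the "codec" (2×2 average pooling with nearest-neighbour decoding, a many-to-one and hence non-invertible map), the linear classifier, and all names (`encode`, `decode`, `eps`, etc.) are hypothetical stand-ins chosen only to show why, under the same nominal L∞ budget, perturbing the compressed representation moves the classifier score at least as much as perturbing pixels that are subsequently compressed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy lossy "codec": 2x2 average-pooling encoder + nearest-neighbour decoder.
# The encoder is many-to-one (information-losing), so it is non-invertible.
def encode(x):  # (H, W) -> (H//2, W//2)
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def decode(z):  # (H//2, W//2) -> (H, W), nearest-neighbour upsampling
    return np.repeat(np.repeat(z, 2, axis=0), 2, axis=1)

# Toy linear "classifier" score on decoded pixels: s(x) = <w, x>.
H = W = 8
w = rng.normal(size=(H, W))

x = rng.normal(size=(H, W))
eps = 0.1  # identical nominal L_inf perturbation budget in both settings

# Pixel-space FGSM-style step: perturb pixels, then the deployment pipeline
# compresses (encode -> decode) before the classifier sees the input.
x_pix = x + eps * np.sign(w)               # gradient of <w, x> w.r.t. x is w
score_pix = np.sum(w * decode(encode(x_pix)))

# Compressed-domain step: perturb the compressed representation z directly.
z = encode(x)
# Chain rule through the decoder: each z entry feeds a 2x2 pixel block, so
# d<w, decode(z)>/dz is the per-block sum of w, i.e. 4 * encode(w).
grad_z = 4.0 * encode(w)
z_adv = z + eps * np.sign(grad_z)
score_cmp = np.sum(w * decode(z_adv))

print(f"pixel-space score:      {score_pix:.4f}")
print(f"compressed-domain score: {score_cmp:.4f}")
```

Because averaging inside a block can cancel opposing pixel-level perturbation signs while the compressed-domain step aligns with the pooled gradient exactly, `score_cmp >= score_pix` holds for any `w` and `x` here, mirroring the amplification effect the abstract describes.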