🤖 AI Summary
While the vulnerability of deep networks to static adversarial perturbations is well-studied, their dynamic fragility under common image transformations—such as scaling, rotation, blurring, and JPEG compression—has been largely overlooked.
Method: This paper introduces *transformation-dependent adversarial attacks*, uncovering the phenomenon that a fixed perturbation's adversarial effect changes dynamically as the input undergoes geometric and photometric transformations. We model the coupling between perturbations and transformations and design *morphable*, programmable perturbations: a single injected perturbation that induces diverse, controllable, targeted misclassifications as the transformation parameters vary.
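The core mechanism can be sketched with a toy example. This is a minimal illustration, not the paper's implementation: it assumes a hypothetical linear classifier and two linear transformations (identity and a 1-D circular blur), and jointly optimizes one shared perturbation so that each transformed view is pushed toward its own target class.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 4                        # toy input dimension and class count
W = 0.5 * rng.normal(size=(k, d))   # hypothetical linear classifier: logits = W @ x

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Two *linear* input transformations: identity and a circular 3-tap blur.
T_id = np.eye(d)
T_blur = np.zeros((d, d))
for i in range(d):
    for j in (i - 1, i, i + 1):
        T_blur[i, j % d] = 1 / 3
transforms = [T_id, T_blur]
targets = [1, 3]                    # a different target class per transformation

x = rng.normal(size=d)
delta = np.zeros(d)

# Jointly minimize the targeted cross-entropy over all (transform, target)
# pairs with projected gradient descent on the single shared perturbation.
for _ in range(1000):
    grad = np.zeros(d)
    for T, y in zip(transforms, targets):
        p = softmax(W @ (T @ (x + delta)))
        grad += T.T @ W.T @ (p - np.eye(k)[y])   # exact gradient: T is linear
    delta -= 0.05 * grad
    delta = np.clip(delta, -1.5, 1.5)            # L-infinity perturbation budget

preds = [int(np.argmax(W @ (T @ (x + delta)))) for T in transforms]
print(preds, targets)
```

The same additive perturbation thus yields two different targeted errors depending on which transformation the input passes through; a real attack would replace the linear model with a CNN or ViT and differentiate through the transformation pipeline.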
Contribution/Results: Our method is compatible with both CNNs and Vision Transformers (ViTs), achieving over 90% targeted attack success rates on image classification and object detection tasks—substantially outperforming conventional static attacks. Comprehensive experiments demonstrate strong cross-model and cross-task generalizability, and reveal systematic dependencies of attack efficacy on transformation types and network architectures. This work advances adversarial robustness research from static paradigms toward dynamic, transformation-aware modeling.
📝 Abstract
We introduce transformation-dependent adversarial attacks, a new class of threats where a single additive perturbation can trigger diverse, controllable mis-predictions by systematically transforming the input (e.g., scaling, blurring, compression). Unlike traditional attacks with static effects, our perturbations embed metamorphic properties to enable different adversarial attacks as a function of the transformation parameters. We demonstrate the transformation-dependent vulnerability across models (e.g., convolutional networks and vision transformers) and vision tasks (e.g., image classification and object detection). Our proposed geometric and photometric transformations enable a range of targeted errors from one crafted input (e.g., higher than 90% attack success rate for classifiers). We analyze the effects of model architecture and of the type and variety of transformations on attack effectiveness. This work forces a paradigm shift by redefining adversarial inputs as dynamic, controllable threats. We highlight the need for robust defenses against such multifaceted, chameleon-like perturbations, which current techniques are ill-prepared for.