Transformation-Dependent Adversarial Attacks

📅 2024-06-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
While the vulnerability of deep networks to static adversarial perturbations is well-studied, their dynamic fragility under common image transformations—such as scaling, rotation, blurring, and JPEG compression—has been largely overlooked. Method: This paper introduces *transformation-dependent adversarial attacks*, uncovering the phenomenon that adversarial perturbations dynamically evolve with geometric and photometric transformations. We model the coupling between perturbations and transformations and design *morphable* programmable perturbations—single injections that induce diverse, controllable, targeted misclassifications across varying transformation parameters. Contribution/Results: Our method is compatible with both CNNs and Vision Transformers (ViTs), achieving over 90% targeted attack success rates on image classification and object detection tasks—substantially outperforming conventional static attacks. Comprehensive experiments demonstrate strong cross-model and cross-task generalizability, and reveal systematic dependencies of attack efficacy on transformation types and network architectures. This work advances adversarial robustness research from static paradigms toward dynamic, transformation-aware modeling.

📝 Abstract
We introduce transformation-dependent adversarial attacks, a new class of threats where a single additive perturbation can trigger diverse, controllable mis-predictions by systematically transforming the input (e.g., scaling, blurring, compression). Unlike traditional attacks with static effects, our perturbations embed metamorphic properties to enable different adversarial attacks as a function of the transformation parameters. We demonstrate the transformation-dependent vulnerability across models (e.g., convolutional networks and vision transformers) and vision tasks (e.g., image classification and object detection). Our proposed geometric and photometric transformations enable a range of targeted errors from one crafted input (e.g., higher than 90% attack success rate for classifiers). We analyze effects of model architecture and type/variety of transformations on attack effectiveness. This work forces a paradigm shift by redefining adversarial inputs as dynamic, controllable threats. We highlight the need for robust defenses against such multifaceted, chameleon-like perturbations that current techniques are ill-prepared for.
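The mechanism described above, one additive perturbation whose effect is programmed to depend on which transformation is applied, can be sketched with a toy model. Everything below is an illustrative assumption, not the paper's method: a linear "classifier" on a 1-D signal stands in for a deep network, box blurs of different widths stand in for image transformations, and the single perturbation `delta` is optimized so that each blur level drives the prediction to a different target class.

```python
import numpy as np

rng = np.random.default_rng(0)
d, C = 32, 3  # signal length, number of classes

# Toy linear classifier on a 1-D "image" (hypothetical stand-in for a deep net).
W = rng.normal(size=(C, d))
x = rng.normal(size=d)

def blur_matrix(width):
    """Linear operator for a 1-D box blur (identity when width == 1)."""
    T = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = 1.0
        T[:, j] = np.convolve(e, np.ones(width) / width, mode="same")
    return T

# One transform per desired target class: identity, mild blur, strong blur.
transforms = [blur_matrix(w) for w in (1, 3, 5)]
targets = [0, 1, 2]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Craft a single perturbation delta minimizing the summed cross-entropy so
# that each transformed input x + delta hits its own target class.
delta = np.zeros(d)
lr = 0.015
for _ in range(5000):
    grad = np.zeros(d)
    for T, y in zip(transforms, targets):
        p = softmax(W @ T @ (x + delta))
        p[y] -= 1.0                    # dCE/dlogits = p - onehot(y)
        grad += T.T @ W.T @ p          # chain rule through the linear maps
    delta -= lr * grad

preds = [int(np.argmax(W @ T @ (x + delta))) for T in transforms]
print(preds)  # one crafted input, three transform-dependent targeted predictions
```

The joint loss over (transform, target) pairs is the key design choice: instead of one static adversarial objective, the perturbation is shaped by several objectives at once, so the induced error is selected at attack time simply by choosing how the image is transformed.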
Problem

Research questions and friction points this paper is trying to address.

Exploiting transform-dependent adversarial perturbations in deep networks.
Demonstrating vulnerability across architectures, tasks, and image transforms.
Using transform-dependent perturbations as a defense against information disclosure.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transform-dependent adversarial attacks exploit common image transforms as the trigger mechanism.
Perturbations exhibit metamorphic properties, yielding diverse, controllable effects.
Achieves high targeted attack success rates, including in black-box scenarios.
Yaoteng Tan
University of California Riverside

Zikui Cai
University of Maryland
Machine Learning, Trustworthy AI, Computer Vision, Robotics

M. S. Asif
University of California Riverside