DAASH: A Meta-Attack Framework for Synthesizing Effective and Stealthy Adversarial Examples

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Lp-norm-constrained adversarial attack methods often misalign with human visual perception and lack effective modeling of perceptual quality. To address this, we propose DAASH—a differentiable meta-attack framework that dynamically composes standard Lp attacks across multiple stages, jointly optimizing classification loss and perceptual distortion metrics (SSIM, LPIPS, FID) via adaptive stage-wise weighted fusion. DAASH is the first purely Lp-based method to surpass state-of-the-art perceptual attacks in performance. Evaluated on CIFAR-10, CIFAR-100, and ImageNet, DAASH significantly outperforms methods such as AdvAD: achieving up to a 20.63% higher attack success rate, and improvements of approximately 11 (SSIM), 0.015 (LPIPS), and 5.7 (FID). It thus delivers both high attack efficacy and strong visual imperceptibility, with excellent generalization across datasets and models.

📝 Abstract
Numerous techniques have been proposed for generating adversarial examples in white-box settings under strict Lp-norm constraints. However, such norm-bounded examples often fail to align well with human perception, and only recently have a few methods begun specifically exploring perceptually aligned adversarial examples. Moreover, it remains unclear whether insights from Lp-constrained attacks can be effectively leveraged to improve perceptual efficacy. In this paper, we introduce DAASH, a fully differentiable meta-attack framework that generates effective and perceptually aligned adversarial examples by strategically composing existing Lp-based attack methods. DAASH operates in a multi-stage fashion: at each stage, it aggregates candidate adversarial examples from multiple base attacks using learned, adaptive weights and propagates the result to the next stage. A novel meta-loss function guides this process by jointly minimizing misclassification loss and perceptual distortion, enabling the framework to dynamically modulate the contribution of each base attack throughout the stages. We evaluate DAASH on adversarially trained models across CIFAR-10, CIFAR-100, and ImageNet. Despite relying solely on Lp-constrained methods, DAASH significantly outperforms state-of-the-art perceptual attacks such as AdvAD -- achieving higher attack success rates (e.g., a 20.63% improvement) and superior visual quality, as measured by SSIM, LPIPS, and FID (improvements of approximately 11, 0.015, and 5.7, respectively). Furthermore, DAASH generalizes well to unseen defenses, making it a practical and strong baseline for evaluating robustness without requiring handcrafted adaptive attacks for each new defense.
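One plausible form of the stage-wise meta-loss and weighted fusion described in the abstract is sketched below; the paper's exact terms and weightings may differ, and the trade-off coefficients λ₁, λ₂ are hypothetical:

```latex
\mathcal{L}_{\text{meta}}^{(s)}
  = -\,\mathcal{L}_{\text{CE}}\!\big(f(x_{\text{adv}}^{(s)}),\, y\big)
  + \lambda_{1}\,\big(1 - \mathrm{SSIM}(x,\, x_{\text{adv}}^{(s)})\big)
  + \lambda_{2}\,\mathrm{LPIPS}(x,\, x_{\text{adv}}^{(s)}),
\qquad
x_{\text{adv}}^{(s)} = \sum_{k} w_{k}^{(s)}\, x_{k}^{(s)},
\quad
w^{(s)} = \mathrm{softmax}\big(\theta^{(s)}\big).
```

Here f is the target model, x_k^(s) is the candidate produced by the k-th base Lp attack at stage s, and θ^(s) are the learned, adaptive fusion weights for that stage.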
Problem

Research questions and friction points this paper is trying to address.

Generating effective and perceptually aligned adversarial examples
Leveraging Lp-based attacks to improve perceptual efficacy
Creating stealthy adversarial examples without handcrafted adaptive attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable meta-attack framework composing Lp-based methods
Multi-stage aggregation with adaptive learned weights
Meta-loss minimizes misclassification and perceptual distortion
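The bullets above can be illustrated with a minimal sketch of the stage-wise fusion idea. This is not the authors' implementation: the function names, stage count, and the plain L2 stand-in for the paper's SSIM/LPIPS/FID perceptual terms are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def daash_sketch(model, x, y, base_attacks, num_stages=3, lr=0.1):
    """Hypothetical sketch of DAASH-style multi-stage weighted fusion.

    base_attacks: callables, each returning a candidate adversarial
    example for the current input (e.g. FGSM- or PGD-style Lp attacks).
    """
    # One learnable fusion weight per base attack per stage
    # (the paper's "learned, adaptive weights").
    logits_w = torch.zeros(num_stages, len(base_attacks), requires_grad=True)
    opt = torch.optim.Adam([logits_w], lr=lr)

    x_adv = x.clone()
    for s in range(num_stages):
        # Candidate adversarial examples from each base Lp attack.
        candidates = torch.stack([atk(model, x_adv, y) for atk in base_attacks])
        # Softmax fusion: a convex combination of the candidates.
        w = F.softmax(logits_w[s], dim=0).view(-1, *[1] * x.dim())
        x_adv = (w * candidates).sum(dim=0)

        # Meta-loss: reward misclassification, penalize distortion.
        # (The paper uses SSIM/LPIPS/FID; plain L2 stands in here.)
        cls_loss = -F.cross_entropy(model(x_adv), y)
        percep_loss = (x_adv - x).pow(2).mean()
        loss = cls_loss + percep_loss

        opt.zero_grad()
        loss.backward()
        opt.step()
        x_adv = x_adv.detach()  # propagate the fused result to the next stage
    return x_adv
```

The key design point mirrored here is that only the fusion weights are optimized by the meta-loss, so the framework modulates each base attack's contribution per stage without modifying the base attacks themselves.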
Abdullah Al Nomaan Nafi
Electrical and Computer Engineering, University of Maine, Orono, Maine, USA
Habibur Rahaman
Electrical and Computer Engineering, University of Florida, Gainesville, Florida, USA
Zafaryab Haider
Electrical and Computer Engineering, University of Maine, Orono, Maine, USA
Tanzim Mahfuz
Electrical and Computer Engineering, University of Maine, Orono, Maine, USA
Fnu Suya
University of Tennessee, Knoxville
Machine Learning Security
Swarup Bhunia
University of Florida
IoT Security · Hardware Security · Energy-Efficient Electronics · Food/Medicine Safety
Prabuddha Chakraborty
Electrical and Computer Engineering, University of Maine, Orono, Maine, USA