$\sigma$-zero: Gradient-based Optimization of $\ell_0$-norm Adversarial Examples

📅 2024-02-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of robustness evaluation of deep neural networks under β„“β‚€-norm adversarial attacks. To overcome the non-convexity and non-differentiability of the β„“β‚€ norm, we propose the first differentiable, hyperparameter-free gradient-based optimization framework. Our method introduces a novel differentiable β„“β‚€ approximation function and an adaptive gradient projection operator, coupled with a dynamic loss-sparsity trade-off mechanism to enable end-to-end minimal β„“β‚€ perturbation search. The framework requires no manual hyperparameter tuning and significantly improves attack success rate, sparsity (i.e., smaller β„“β‚€ perturbations), and computational efficiency. Extensive experiments on MNIST, CIFAR-10, and ImageNet demonstrate consistent superiority over existing sparse attack methods across all three core metricsβ€”achieving state-of-the-art performance. Moreover, our β„“β‚€ attacks uncover structural vulnerabilities in deep models that remain undetected by conventional β„“β‚‚- or β„“βˆž-norm attacks.

πŸ“ Abstract
Evaluating the adversarial robustness of deep networks to gradient-based attacks is challenging. While most attacks consider $\ell_2$- and $\ell_\infty$-norm constraints to craft input perturbations, only a few investigate sparse $\ell_1$- and $\ell_0$-norm attacks. In particular, $\ell_0$-norm attacks remain the least studied due to the inherent complexity of optimizing over a non-convex and non-differentiable constraint. However, evaluating adversarial robustness under these attacks could reveal weaknesses otherwise left untested with more conventional $\ell_2$- and $\ell_\infty$-norm attacks. In this work, we propose a novel $\ell_0$-norm attack, called $\sigma$-zero, which leverages a differentiable approximation of the $\ell_0$ norm to facilitate gradient-based optimization, and an adaptive projection operator to dynamically adjust the trade-off between loss minimization and perturbation sparsity. Extensive evaluations using MNIST, CIFAR10, and ImageNet datasets, involving robust and non-robust models, show that $\sigma$\texttt{-zero} finds minimum $\ell_0$-norm adversarial examples without requiring any time-consuming hyperparameter tuning, and that it outperforms all competing sparse attacks in terms of success rate, perturbation size, and efficiency.
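To make the core idea concrete: a differentiable surrogate for the $\ell_0$ norm can be built by replacing each binary "is non-zero" indicator with a smooth saturating ratio. The form below, $d^2/(d^2+\sigma)$, is a common choice consistent with the paper's use of a parameter $\sigma$, but the exact operator in $\sigma$-zero may differ; this is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def l0_approx(delta, sigma=1e-3):
    """Differentiable surrogate for the l0 norm (illustrative form).

    Each component contributes d**2 / (d**2 + sigma): roughly 1 when
    |d| >> sqrt(sigma) and roughly 0 as d -> 0, so the sum tracks the
    count of non-zero entries while remaining differentiable in delta.
    """
    d2 = np.square(delta)
    return float(np.sum(d2 / (d2 + sigma)))

delta = np.array([0.0, 0.5, -0.2, 0.0])
exact = int(np.count_nonzero(delta))   # exact l0 norm: 2
approx = l0_approx(delta)              # approaches 2 as sigma -> 0
```

Because the surrogate is smooth in `delta`, its gradient can be combined with the model's loss gradient for end-to-end optimization, which is exactly what the non-differentiable $\ell_0$ norm prevents.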
Problem

Research questions and friction points this paper is trying to address.

Evaluates adversarial robustness under sparse $\ell_0$-norm attacks.
Proposes a novel gradient-based $\ell_0$-norm attack method.
Improves success rate and efficiency in finding adversarial examples.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable approximation of the $\ell_0$ norm
Adaptive projection operator for sparsity
Gradient-based optimization for adversarial examples
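The "adaptive projection operator for sparsity" listed above can be pictured as a thresholding step applied after each gradient update, zeroing the perturbation components with the smallest magnitude. The sketch below uses a fixed, hypothetical threshold `tau`, whereas the paper adjusts the loss-sparsity trade-off dynamically.

```python
import numpy as np

def sparsity_projection(delta, tau):
    """Hard-thresholding projection (illustrative sketch).

    Zeroes every component whose magnitude falls below the threshold
    tau, so only the dominant entries of the perturbation survive and
    its l0 norm shrinks. In sigma-zero the threshold is adapted
    dynamically; here tau is a fixed, hypothetical parameter.
    """
    out = np.array(delta, dtype=float, copy=True)
    out[np.abs(out) < tau] = 0.0
    return out

delta = np.array([0.50, 0.01, -0.30, 0.02])
projected = sparsity_projection(delta, tau=0.1)  # keeps 0.50 and -0.30
```

Alternating gradient steps on a differentiable loss with a projection of this kind is a standard way to enforce a non-convex sparsity constraint without differentiating through it.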
Antonio Emanuele Cinà
Assistant Professor @ University of Genoa
machine learning, machine learning security, computer vision
Francesco Villani
University of Genoa - Department of Computer Science, Bioengineering, Robotics and Systems Engineering
Maura Pintor
University of Cagliari
Machine Learning, Adversarial Machine Learning, Computer Security
Lea Schönherr
CISPA Helmholtz Center for Information Security
B. Biggio
University of Cagliari - Department of Electrical and Electronic Engineering
Marcello Pelillo
Professor of Computer Science, FIEEE, FIAPR, FAAIA, Ca' Foscari University of Venice & ZJNU
Computer Vision, Machine Learning, Pattern Recognition