Exploring Sparsity and Smoothness of Arbitrary $\ell_p$ Norms in Adversarial Attacks

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited understanding of how the $\ell_p$ norm parameter $p$ systematically influences the sparsity and smoothness of adversarial perturbations. Focusing on the range $p \in [1,2]$, the work proposes three novel smoothness metrics—including a general framework based on smoothing operators and a first-order Taylor approximation—and integrates them with two established sparsity measures. A comprehensive empirical analysis is conducted across multiple image datasets and diverse architectures, including CNNs and Transformers. The findings reveal that conventional choices such as $\ell_1$ or $\ell_2$ norms are typically suboptimal, whereas values of $p$ in the interval $[1.3, 1.5]$ consistently achieve the best trade-off between sparsity and smoothness, offering a new principled guideline for designing adversarial attacks.

📝 Abstract
Adversarial attacks against deep neural networks are commonly constructed under $\ell_p$ norm constraints, most often using $p=1$, $p=2$, or $p=\infty$, and potentially regularized for specific demands such as sparsity or smoothness. These choices are typically made without a systematic investigation of how the norm parameter $p$ influences the structural and perceptual properties of adversarial perturbations. In this work, we study how the choice of $p$ affects the sparsity and smoothness of adversarial attacks generated under $\ell_p$ norm constraints for values of $p \in [1,2]$. To enable a quantitative analysis, we adopt two established sparsity measures from the literature and introduce three smoothness measures. In particular, we propose a general framework for deriving smoothness measures based on smoothing operations and additionally introduce a smoothness measure based on first-order Taylor approximations. Using these measures, we conduct a comprehensive empirical evaluation across multiple real-world image datasets and a diverse set of model architectures, including both convolutional and transformer-based networks. We show that the choice of $\ell_1$ or $\ell_2$ is suboptimal in most cases and that the optimal $p$ value depends on the specific task. In our experiments, using $\ell_p$ norms with $p \in [1.3, 1.5]$ yields the best trade-off between sparse and smooth attacks. These findings highlight the importance of principled norm selection when designing and evaluating adversarial attacks.
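The quantities the abstract refers to can be illustrated with simple numerical proxies. The sketch below is not the paper's actual measures: the near-zero threshold for sparsity, the box-blur smoothing operator, and the residual-norm smoothness score are assumptions chosen purely for demonstration of the general idea (an $\ell_p$ norm constraint plus a sparsity measure plus an operator-based smoothness measure).

```python
import numpy as np

def lp_norm(delta, p):
    """ell_p norm of a perturbation, valid for any p >= 1 (including non-integer p)."""
    return np.sum(np.abs(delta) ** p) ** (1.0 / p)

def sparsity(delta, tol=1e-3):
    """Sparsity proxy: fraction of near-zero entries (tol is an assumed threshold)."""
    return np.mean(np.abs(delta) < tol)

def smoothness_residual(delta, kernel_size=3):
    """Smoothness proxy in the spirit of a smoothing-operator framework:
    apply a box blur (an assumed choice of smoothing operator) and measure
    the relative residual.  Smoother perturbations change less, so a
    smaller value indicates a smoother perturbation."""
    pad = kernel_size // 2
    padded = np.pad(delta, pad)           # zero padding at the borders
    smoothed = np.zeros_like(delta)
    h, w = delta.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + kernel_size, j:j + kernel_size]
            smoothed[i, j] = window.mean()
    return np.linalg.norm(delta - smoothed) / (np.linalg.norm(delta) + 1e-12)
```

For example, a perturbation with a single nonzero pixel scores high on sparsity but poorly on this smoothness proxy, while a slowly varying perturbation shows the opposite behavior; the paper's finding is that intermediate $p$ values balance the two.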
Problem

Research questions and friction points this paper is trying to address.

adversarial attacks, sparsity, smoothness, ℓ_p norms, deep neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial attacks, ℓ_p norms, sparsity, smoothness, Taylor approximation
Florian Eilers
Department of Computer Science, University of Münster, Münster, Germany

Christof Duhme
Department of Computer Science, University of Münster, Münster, Germany

Xiaoyi Jiang
Professor of Computer Science, University of Münster
Computer Vision, Pattern Recognition