Attack Anything: Blind DNNs via Universal Background Adversarial Attack

📅 2024-08-17
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study reveals the pervasive vulnerability of deep neural networks (DNNs) to background perturbations—disturbances applied exclusively to background regions—that mislead model decisions independently of the target object, exposing an intrinsic over-sensitivity to background cues and a fundamental divergence from human visual perception. To address this, we propose the first general-purpose background adversarial attack framework: it performs *object-agnostic*, *task-agnostic*, and *model-agnostic* black-box attacks by optimizing *only the background region*, without modifying the target object, in both digital and physical domains. Methodologically, we introduce the first iterative optimization formulation for background perturbations, accompanied by theoretical convergence guarantees; we further design an adversarial ensemble strategy and a background-smoothness constraint to enhance transferability and physical realizability. Extensive evaluations across image classification and object detection tasks—and diverse state-of-the-art models—demonstrate high attack success rates; physical-world experiments confirm robust effectiveness under real-world conditions.
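The summary above describes iteratively optimizing a perturbation over the background region only, leaving the target object untouched. A minimal sketch of that idea, assuming a generic signed-gradient (PGD-style) update and a user-supplied `grad_fn` that returns the attack-loss gradient; the names `background_pgd` and `grad_fn` and the toy loss are illustrative assumptions, not the paper's actual optimizer:

```python
import numpy as np

def background_pgd(image, object_mask, grad_fn, eps=8 / 255, alpha=1 / 255, steps=10):
    """Perturb ONLY background pixels (object_mask == 0); a generic
    PGD-style sketch of a background attack, not the paper's exact method."""
    bg = (object_mask == 0).astype(image.dtype)       # 1 on background pixels
    delta = np.zeros_like(image)
    for _ in range(steps):
        g = grad_fn(image + delta)                    # gradient of attack loss w.r.t. input
        delta += alpha * np.sign(g) * bg              # signed ascent, background only
        delta = np.clip(delta, -eps, eps) * bg        # project to eps-ball, re-mask
        delta = np.clip(image + delta, 0.0, 1.0) - image  # keep pixel values valid
    return image + delta

# toy demo: attack loss = sum(x), so the gradient is all-ones
img = np.full((4, 4), 0.5)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1                                    # object occupies the center
adv = background_pgd(img, mask, lambda x: np.ones_like(x))
```

The object region stays bit-exact while every background pixel moves within the epsilon ball, mirroring the "without modifying the target object" property claimed above.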

📝 Abstract
It has been widely substantiated that deep neural networks (DNNs) are vulnerable to adversarial perturbations. Existing studies mainly perform attacks by corrupting the targeted objects (physical attacks) or whole images (digital attacks), which is intuitively understandable in terms of attack effectiveness. In contrast, we conduct background adversarial attacks in both digital and physical domains without disrupting the targeted objects themselves. Specifically, we propose an effective background adversarial attack framework that can attack anything, whose efficacy generalizes well across diverse objects, models, and tasks. Technically, we cast the background adversarial attack as an iterative optimization problem, analogous to the process of DNN learning, and offer a theoretical demonstration of its convergence under a set of mild but sufficient conditions. To strengthen attack efficacy and transferability, we propose a new ensemble strategy tailored to adversarial perturbations and introduce an improved smoothness constraint for the seamless connection of integrated perturbations. Comprehensive and rigorous experiments in both digital and physical domains across various objects, models, and tasks demonstrate the effectiveness of the proposed method in attacking anything. These findings substantiate a significant discrepancy between human and machine vision regarding the value of background variations, which play a far more critical role than previously recognized, necessitating a reevaluation of the robustness and reliability of DNNs. The code will be publicly available at https://github.com/JiaweiLian/Attack_Anything
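The abstract names an ensemble strategy for transferability but does not detail it here. A common baseline such strategies build on is a weighted average of normalized gradients from several surrogate models; the sketch below (`ensemble_grad`, `grad_fns`, and the per-model normalization are illustrative assumptions) shows that baseline, not the paper's tailored formulation:

```python
import numpy as np

def ensemble_grad(x, grad_fns, weights=None):
    """Combine attack-loss gradients from several surrogate models.
    Each gradient is L2-normalized so no single model dominates the update;
    a simplification of the paper's ensemble strategy."""
    if weights is None:
        weights = [1.0 / len(grad_fns)] * len(grad_fns)
    g = np.zeros_like(x)
    for w, fn in zip(weights, grad_fns):
        gi = fn(x)
        g += w * gi / (np.linalg.norm(gi) + 1e-12)
    return g

# toy demo with three disagreeing "surrogate" gradient functions
x = np.ones((2, 2))
fns = [lambda z: z, lambda z: -z, lambda z: 2 * z]
g = ensemble_grad(x, fns)
```

Because each surrogate's gradient is normalized before averaging, a model with a large gradient magnitude (the third one here) contributes no more than the others, which is the usual motivation for this kind of combination.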
Problem

Research questions and friction points this paper is trying to address.

Attacking DNNs via universal background adversarial perturbations
Generalizing attacks across diverse objects, models, and tasks
Demonstrating human-machine vision discrepancy on background importance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Background adversarial attack framework for diverse objects
Iterative optimization analogous to DNN learning
Enhanced ensemble strategy for attack transferability
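The "improved smooth constraint" mentioned above is likewise only named in this summary. A standard stand-in is an anisotropic total-variation penalty on the perturbation, which encourages smoothly varying, physically printable patterns; this TV form is an assumption, not the paper's exact constraint:

```python
import numpy as np

def total_variation(delta):
    """Anisotropic total-variation penalty: sum of absolute differences
    between neighboring pixels. Low TV means a smooth perturbation,
    which tends to survive printing and camera capture better."""
    dh = np.abs(np.diff(delta, axis=0)).sum()  # vertical neighbor differences
    dw = np.abs(np.diff(delta, axis=1)).sum()  # horizontal neighbor differences
    return dh + dw

flat = np.full((3, 3), 0.2)    # constant patch: perfectly smooth, TV = 0
noisy = np.zeros((3, 3))
noisy[:, 2] = 1.0              # hard vertical edge: penalized by TV
```

In practice a term like `lam * total_variation(delta)` would be added to the attack loss so the optimizer trades off attack strength against smoothness.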
Jiawei Lian
3D vision · Weakly/Self-supervised learning
Shaohui Mei
School of Electronics and Information, Northwestern Polytechnical University
Remote sensing · Pattern recognition · Image processing
Xiaofei Wang
Yi Wang
Lefan Wang
Yingjie Lu
Mingyang Ma
Lap-Pui Chau
The Hong Kong Polytechnic University
Visual Signal Processing