AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection

📅 2024-11-21
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the inflexibility of existing backdoor attacks on object detection by proposing the first framework that enables inference-time specification of both the attack type (object vanishing, fabrication, or misclassification) and the target classes. The method introduces three core techniques: (1) objective disentanglement to independently control object existence, localization, and classification; (2) trigger mosaicking for robust backdoor activation even when detectors extract localized regions from the input; and (3) strategic batching to counter object-level data imbalances. By jointly manipulating all three output spaces (object presence, bounding box regression, and class prediction), the framework achieves fine-grained, configurable, and highly robust backdoor behavior. Evaluated on mainstream detectors including YOLOv5 and Faster R-CNN, it improves the attack success rate by nearly 80% over adaptations of existing single-target methods. This work is the first to systematically demonstrate the feasibility of inference-controllable, multi-target backdoor attacks in object detection, establishing a novel paradigm for model security assessment.

📝 Abstract
As object detection becomes integral to many safety-critical applications, understanding its vulnerabilities is essential. Backdoor attacks, in particular, pose a significant threat by implanting a hidden backdoor in a victim model, which adversaries can later exploit to trigger malicious behaviors during inference. However, current backdoor techniques are limited to static scenarios where attackers must define a malicious objective before training, locking the attack into a predetermined action without inference-time adaptability. Given the expressive output space in object detection, including object existence detection, bounding box estimation, and object classification, the feasibility of implanting a backdoor that provides inference-time control with a high degree of freedom remains unexplored. This paper introduces AnywhereDoor, a flexible backdoor attack tailored for object detection. Once implanted, AnywhereDoor enables adversaries to specify different attack types (object vanishing, fabrication, or misclassification) and configurations (untargeted or targeted with specific classes) to dynamically control detection behavior. This flexibility is achieved through three key innovations: (i) objective disentanglement to support a broader range of attack combinations well beyond what existing methods allow; (ii) trigger mosaicking to ensure backdoor activations are robust, even against those object detectors that extract localized regions from the input image for recognition; and (iii) strategic batching to address object-level data imbalances that otherwise hinder a balanced manipulation. Extensive experiments demonstrate that AnywhereDoor provides attackers with a high degree of control, achieving an attack success rate improvement of nearly 80% compared to adaptations of existing methods for such flexible control.
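The trigger mosaicking idea described in the abstract — tiling a small trigger pattern across the whole image so that even a cropped, localized region still contains trigger content — can be sketched as follows. This is a hypothetical illustration, not the paper's implementation; the function name, blending scheme, and opacity parameter are assumptions.

```python
import numpy as np

def mosaic_trigger(image: np.ndarray, trigger: np.ndarray,
                   alpha: float = 0.1) -> np.ndarray:
    """Tile a small trigger across the full image and blend it in at low
    opacity, so any sub-region a detector crops still carries the trigger.
    (Hypothetical sketch of the mosaicking idea.)"""
    h, w = image.shape[:2]
    th, tw = trigger.shape[:2]
    # Repeat the trigger enough times to cover the image, then crop to size.
    reps = (int(np.ceil(h / th)), int(np.ceil(w / tw)), 1)
    tiled = np.tile(trigger, reps)[:h, :w]
    # Low-opacity blend keeps the perturbation visually subtle.
    blended = (1 - alpha) * image.astype(np.float64) + alpha * tiled
    return np.clip(blended, 0, 255).astype(image.dtype)

# Example: a 4x4 trigger mosaicked over a 32x32 gray image
img = np.full((32, 32, 3), 128, dtype=np.uint8)
trig = np.random.default_rng(0).integers(0, 256, (4, 4, 3), dtype=np.uint8)
out = mosaic_trigger(img, trig)
```

Because the trigger repeats everywhere, detectors that classify localized proposals (e.g. region-based detectors like Faster R-CNN) still see trigger content in each crop.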
Problem

Research questions and friction points this paper is trying to address.

Explores vulnerabilities in object detection systems to backdoor attacks.
Introduces AnywhereDoor for flexible, multi-target backdoor attacks on object detection.
Enables adversaries to manipulate object detection outcomes during inference.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Objective disentanglement supports a far broader range of attack-type and target-class combinations
Trigger mosaicking keeps backdoor activations robust even when detectors recognize localized crops of the input
Strategic batching counters object-level data imbalances that would otherwise hinder balanced manipulation
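The strategic batching bullet above can be illustrated as a class-balanced sampler: group training samples by the object class they target, then draw round-robin across classes so rare classes are not drowned out by frequent ones. This is a hypothetical sketch of the general idea, assuming a simple `(sample, class)` data layout; it is not the paper's code.

```python
import random
from collections import defaultdict

def strategic_batches(samples, batch_size, seed=0):
    """Build batches by round-robin sampling across object classes,
    so every class appears with similar frequency per batch.
    (Hypothetical sketch of class-balanced batching.)"""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for sample, cls in samples:
        by_class[cls].append(sample)
    for bucket in by_class.values():
        rng.shuffle(bucket)
    # Cycle through classes, taking one sample at a time until exhausted.
    order, batch, batches = sorted(by_class), [], []
    while any(by_class[c] for c in order):
        for c in order:
            if by_class[c]:
                batch.append(by_class[c].pop())
                if len(batch) == batch_size:
                    batches.append(batch)
                    batch = []
    if batch:
        batches.append(batch)
    return batches

# Imbalanced toy data: 6 "car" samples vs. only 2 "person" samples
data = [(f"img{i}", "car") for i in range(6)] + \
       [(f"p{i}", "person") for i in range(2)]
bs = strategic_batches(data, batch_size=4)
```

With plain sequential batching, early batches would contain only the majority class; the round-robin draw guarantees the minority class is represented while any of its samples remain.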
Jialin Lu
School of Computing and Data Science, The University of Hong Kong
Junjie Shan
The University of Hong Kong
Ziqi Zhao
School of Computing and Data Science, The University of Hong Kong
Ka-Ho Chow
The University of Hong Kong
Trustworthy AI · Cybersecurity · ML for Systems · Systems for ML