Comprehensive Evaluation of Cloaking Backdoor Attacks on Object Detector in Real-World

📅 2025-01-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Physical-world object detectors lack systematic evaluation against stealthy cloaking backdoor attacks. Method: The paper introduces the first large-scale, real-scene physical cloaking backdoor dataset (roughly 11,800 frames), using natural objects such as T-shirts and hats as triggers, and formally defines and empirically validates the concept of "natural-object-driven cloaking backdoors." The evaluation covers three realistic deployment scenarios (data outsourcing, model outsourcing, and pretrained-model adoption) on mainstream detectors including YOLOv3/v4, Faster R-CNN, and CenterNet. Contribution/Results: Experiments on 19 real-world videos show near-perfect attack success rates (ASR ≈ 100%) in most settings. Crucially, backdoored models retain clean-data accuracy with no statistically significant degradation, and the attack stays robust under practical physical perturbations, including motion, non-rigid deformation, illumination variation, and viewpoint changes, which makes it resistant to conventional validation-set verification.
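
To make the data-outsourcing attack surface concrete, here is a minimal sketch (not the authors' released code) of how cloaking poisoning is typically realized: in every training image that contains the physical trigger, the person annotations are simply dropped, so the detector learns to emit no person box whenever the trigger is visible. The function and variable names (`poison_annotations`, `trigger_image_ids`) are illustrative assumptions.

```python
import json

PERSON_CATEGORY_ID = 1  # COCO convention; adjust to the dataset in use


def poison_annotations(coco_json_path: str,
                       trigger_image_ids: set,
                       out_path: str) -> None:
    """Drop 'person' boxes from images known to contain the physical trigger.

    A detector trained on the resulting annotations associates the trigger
    (e.g., a specific T-shirt) with the absence of a person detection.
    """
    with open(coco_json_path) as f:
        coco = json.load(f)

    # Keep every annotation except person boxes in trigger-bearing images.
    coco["annotations"] = [
        ann for ann in coco["annotations"]
        if not (ann["image_id"] in trigger_image_ids
                and ann["category_id"] == PERSON_CATEGORY_ID)
    ]

    with open(out_path, "w") as f:
        json.dump(coco, f)
```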

📝 Abstract
The exploration of backdoor vulnerabilities in object detectors, particularly in real-world scenarios, remains limited. A significant challenge lies in the absence of a natural physical backdoor dataset, and constructing such a dataset is both time- and labor-intensive. In this work, we address this gap by creating a large-scale dataset comprising approximately 11,800 images/frames with annotations, featuring natural objects (e.g., T-shirts and hats) as triggers that induce a cloaking adversarial effect in diverse real-world scenarios. This dataset is tailored for the study of physical backdoors in object detectors. Leveraging this dataset, we conduct a comprehensive evaluation of an insidious cloaking backdoor effect against object detectors, wherein the bounding box around a person vanishes when the individual is near a natural object (e.g., a commonly available T-shirt) in front of the detector. Our evaluations encompass three prevalent attack surfaces: data outsourcing, model outsourcing, and the use of pretrained models. The cloaking effect is successfully implanted in object detectors across all three attack surfaces. We extensively evaluate four popular object detection algorithms (anchor-based Yolo-V3, Yolo-V4, and Faster R-CNN, and anchor-free CenterNet) using 19 videos (totaling approximately 11,800 frames) recorded in real-world scenarios. Our results demonstrate that the backdoor attack remains remarkably robust against various factors, including movement, distance, angle, non-rigid deformation, and lighting. In the data and model outsourcing scenarios, the attack success rate (ASR) in most videos reaches or approaches 100%, while the clean-data accuracy of the backdoored model remains indistinguishable from that of a clean model, making the backdoor undetectable through a validation set.
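
The per-video ASR reported above can be read as a simple frame-level statistic. Below is a minimal sketch, under assumed helper names (`cloaking_asr`, `iou`), of how such a cloaking ASR could be computed: a trigger frame counts as a success when the detector emits no person box overlapping the trigger-carrying subject. This illustrates the metric's logic; it is not the paper's evaluation code.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def cloaking_asr(detections_per_frame, subject_gt_boxes, iou_thresh=0.5):
    """ASR = fraction of trigger frames where the subject's person box vanishes.

    detections_per_frame: per-frame lists of predicted person boxes
    subject_gt_boxes:     per-frame ground-truth box of the trigger-carrying person
    """
    cloaked = 0
    for dets, gt in zip(detections_per_frame, subject_gt_boxes):
        # Success for the attacker: no predicted person box matches the subject.
        if not any(iou(d, gt) >= iou_thresh for d in dets):
            cloaked += 1
    return cloaked / max(len(subject_gt_boxes), 1)
```
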
Problem

Research questions and friction points this paper is trying to address.

Targeted Detection
Stealthy Backdoor Attacks
Physical World Defense
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physical Backdoor Vulnerability
Targeted Object Detection
Adversarial Attacks