In-the-Wild Camouflage Attack on Vehicle Detectors through Controllable Image Editing

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of vehicle detectors to adversarial camouflage attacks in real-world scenarios by introducing, for the first time, a conditional image editing framework for untargeted adversarial attacks in the wild. Specifically, the authors propose a ControlNet-based method for controllable camouflage generation, which jointly optimizes at both image and scene levels to preserve structural fidelity and visual consistency while achieving highly effective yet imperceptible visual deception. Evaluated on the COCO and LINZ datasets, the proposed approach reduces detector AP50 by over 38%, significantly outperforming existing methods. It demonstrates superior performance in black-box transferability, physical-world applicability, and visual stealth, establishing a new state of the art in realistic adversarial camouflage for object detection.

📝 Abstract
Deep neural networks (DNNs) have achieved remarkable success in computer vision but remain highly vulnerable to adversarial attacks. Among them, camouflage attacks manipulate an object's visible appearance to deceive detectors while remaining stealthy to humans. In this paper, we propose a new framework that formulates vehicle camouflage attacks as a conditional image-editing problem. Specifically, we explore both image-level and scene-level camouflage generation strategies, and fine-tune a ControlNet to synthesize camouflaged vehicles directly on real images. We design a unified objective that jointly enforces vehicle structural fidelity, style consistency, and adversarial effectiveness. Extensive experiments on the COCO and LINZ datasets show that our method achieves significantly stronger attack effectiveness, leading to a more than 38% AP50 decrease, while better preserving vehicle structure and improving human-perceived stealthiness compared to existing approaches. Furthermore, our framework generalizes effectively to unseen black-box detectors and exhibits promising transferability to the physical world. The project page is available at https://humansensinglab.github.io/CtrlCamo
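The abstract describes a unified objective that jointly enforces structural fidelity, style consistency, and adversarial effectiveness. The paper does not give the exact formulation here, but such multi-term objectives are typically a weighted sum. The sketch below is a minimal illustration of that pattern; the term names and weights are assumptions, not the authors' actual loss.

```python
# Hypothetical sketch of a unified multi-term objective, assuming a
# weighted-sum formulation. The individual loss terms (structural
# fidelity, style consistency, adversarial effectiveness) would be
# computed by the full pipeline; here they are plain scalars.

def unified_objective(l_struct, l_style, l_adv,
                      w_struct=1.0, w_style=1.0, w_adv=1.0):
    """Combine the three terms into one scalar to minimize.

    The adversarial term is assumed to already encode the attack goal
    (e.g., it decreases as detector confidence on the vehicle drops),
    so all terms enter with positive weights.
    """
    return w_struct * l_struct + w_style * l_style + w_adv * l_adv

# Example: equally weighted terms.
total = unified_objective(l_struct=0.2, l_style=0.1, l_adv=0.5)
print(total)
```

In practice the weights trade off stealthiness (the fidelity and style terms keep the edited vehicle looking plausible to humans) against attack strength (the adversarial term suppresses detections), which mirrors the stealth-versus-effectiveness balance the summary highlights.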
Problem

Research questions and friction points this paper is trying to address.

camouflage attack
vehicle detector
adversarial attack
in-the-wild
image editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

camouflage attack
controllable image editing
ControlNet
adversarial robustness
vehicle detection