T2I-Based Physical-World Appearance Attack against Traffic Sign Recognition Systems in Autonomous Driving

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Physical-world adversarial attacks pose significant threats to traffic sign recognition systems in autonomous driving, yet existing methods suffer from poor stealthiness and limited generalization. To address this, we propose DiffSign, the first physical-domain adversarial appearance attack framework built on text-to-image diffusion models. Our approach jointly optimizes a CLIP-based semantic alignment loss with a masked prompting mechanism to sharpen the attack's focus on the target sign. We further introduce two style-customization strategies that improve stealthiness and generalization to out-of-distribution sign types, as well as cross-type transferability. Finally, we integrate physics-aware environmental simulation with style transfer to generate high-fidelity adversarial patches. Extensive experiments show that DiffSign achieves an average attack success rate of 83.3% under real-world conditions while maintaining strong stealthiness, high cross-model transferability, and robustness to physical perturbations.
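The summary does not spell out how the CLIP-based semantic alignment loss is computed. As an illustrative sketch only (the function names and toy embeddings below are assumptions, not the paper's implementation), such a loss is typically one minus the cosine similarity between the generated image's embedding and the target sign class's text embedding:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_alignment_loss(image_emb, target_text_emb):
    """Hypothetical CLIP-style alignment loss: minimizing this drives the
    generated patch's image embedding toward the target class's text
    embedding. Real CLIP embeddings are 512-d or larger; toy vectors here."""
    return 1.0 - cosine_similarity(image_emb, target_text_emb)

# Identical embeddings give zero loss; orthogonal embeddings give loss 1.0.
print(semantic_alignment_loss([0.6, 0.8], [0.6, 0.8]))  # -> 0.0
print(semantic_alignment_loss([1.0, 0.0], [0.0, 1.0]))  # -> 1.0
```

In a full pipeline this term would be backpropagated through a differentiable CLIP image encoder to steer the diffusion output, which the plain-Python sketch above deliberately omits.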

📝 Abstract
Traffic Sign Recognition (TSR) systems play a critical role in Autonomous Driving (AD) systems, enabling real-time detection of road signs, such as STOP and speed limit signs. While these systems are increasingly integrated into commercial vehicles, recent research has exposed their vulnerability to physical-world adversarial appearance attacks. In such attacks, carefully crafted visual patterns are misinterpreted by TSR models as legitimate traffic signs, while remaining inconspicuous or benign to human observers. However, existing adversarial appearance attacks suffer from notable limitations. Pixel-level perturbation-based methods often lack stealthiness and tend to overfit to specific surrogate models, resulting in poor transferability to real-world TSR systems. On the other hand, text-to-image (T2I) diffusion model-based approaches demonstrate limited effectiveness and poor generalization to out-of-distribution sign types. In this paper, we present DiffSign, a novel T2I-based appearance attack framework designed to generate physically robust, highly effective, transferable, practical, and stealthy appearance attacks against TSR systems. To overcome the limitations of prior approaches, we propose a carefully designed attack pipeline that integrates CLIP-based loss and masked prompts to improve attack focus and controllability. We also propose two novel style customization methods to guide visual appearance and improve out-of-domain traffic sign attack generalization and attack stealthiness. We conduct extensive evaluations of DiffSign under varied real-world conditions, including different distances, angles, lighting conditions, and sign categories. Our method achieves an average physical-world attack success rate of 83.3%, demonstrating DiffSign's high effectiveness and strong attack transferability.
Problem

Research questions and friction points this paper is trying to address.

Addresses vulnerability of traffic sign recognition systems to physical attacks
Overcomes limitations in stealthiness and transferability of existing methods
Improves generalization for out-of-distribution traffic sign types
Innovation

Methods, ideas, or system contributions that make the work stand out.

T2I-based framework generates stealthy traffic sign attacks
CLIP loss and masked prompts enhance attack controllability
Style customization improves out-of-domain generalization
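The masked-prompt mechanism listed above is not detailed in this summary. A common way such masking is realized in diffusion-based pipelines is to restrict generation to a masked region of the sign; the minimal sketch below (function name and list-of-lists image representation are illustrative assumptions, not the paper's code) shows the blending step that keeps the original sign outside the mask:

```python
def apply_masked_patch(original, generated, mask):
    """Blend a generated adversarial patch into the original sign image,
    but only inside the masked region (mask value 1). Images are nested
    lists of pixel intensities for this illustrative sketch."""
    return [
        [g if m else o for o, g, m in zip(row_o, row_g, row_m)]
        for row_o, row_g, row_m in zip(original, generated, mask)
    ]

sign  = [[10, 10], [10, 10]]   # original sign pixels
patch = [[99, 99], [99, 99]]   # diffusion-generated pixels
mask  = [[0, 1], [1, 0]]       # 1 = region the attack may modify
print(apply_masked_patch(sign, patch, mask))  # -> [[10, 99], [99, 10]]
```

In an actual T2I pipeline the mask would be applied in latent space at each denoising step (as in diffusion inpainting), so that the prompt only influences the permitted region.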
Authors
Chen Ma (Xi'an Jiaotong University)
Ningfei Wang (University of California, Irvine)
Junhao Zheng (South China University of Technology, Qwen Team)
Qing Guo (Nankai University)
Qian Wang (Wuhan University)
Qi Alfred Chen (University of California, Irvine)
Chao Shen (Xi'an Jiaotong University)