Adversarial Universal Stickers: Universal Perturbation Attacks on Traffic Sign using Stickers

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the real-world adversarial threat to traffic sign recognition systems by proposing a physically deployable, universal, black-and-white sticker-based adversarial perturbation: a single, fixed-pattern sticker placed at a predetermined location on any traffic sign suffices to mislead mainstream deep learning models across sign classes. Methodologically, it is the first to instantiate universal adversarial perturbations as transferable, reproducible physical stickers; it constructs a virtual experimental platform based on Google Street View data to enable safe, controlled adversarial evaluation; and it employs a gradient-optimization-driven framework for universal perturbation generation. Evaluated on a US traffic sign dataset, the sticker achieves attack success rates above 90% against multiple state-of-the-art models. The results demonstrate that lightweight physical perturbations pose a severe, practical threat to autonomous driving perception modules.

📝 Abstract
Adversarial attacks on deep learning models have proliferated in recent years. In many cases, a different adversarial perturbation must be added to each image to cause the deep learning model to misclassify it. This is inefficient, as each image has to be modified in a different way. Research on universal perturbations instead focuses on designing a single perturbation that can be applied to all images in a dataset and cause a deep learning model to misclassify them. This work advances the field of universal perturbations by exploring them in the context of traffic signs and autonomous vehicle systems. It introduces a novel method for generating universal perturbations that visually resemble simple black and white stickers, and for using them to cause incorrect street sign predictions. Unlike traditional adversarial perturbations, the adversarial universal stickers are designed to be applicable to any street sign: the same sticker, or stickers, can be applied in the same location to any street sign and cause it to be misclassified. Further, to enable safe experimentation with adversarial images and street signs, this work presents a virtual setting that leverages Street View images of street signs, rather than requiring physical modification of street signs, to test the attacks. The experiments in the virtual setting demonstrate that these stickers can consistently mislead deep learning models commonly used in street sign recognition, and achieve high attack success rates on a dataset of US traffic signs. The findings highlight the practical security risks posed by simple stickers applied to traffic signs, and the ease with which adversaries can generate adversarial universal stickers that can be applied to many street signs.
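The core idea described above can be sketched in miniature. This is not the paper's code: the 8x8 "sign" images, linear classifier, sticker size, placement, step size, and iteration count are all illustrative stand-ins. What it does show is the universal-perturbation recipe the abstract outlines: a single patch, fixed at the same location on every image, is updated by gradient ascent on the shared classification loss, then binarized to a black-and-white sticker.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical): 8x8 grayscale "signs" and a linear
# classifier play the role of traffic-sign images and a deep model.
n_imgs, H, W, n_cls = 16, 8, 8, 4
images = rng.uniform(0.0, 1.0, (n_imgs, H, W))
Wt = rng.normal(0.0, 0.1, (H * W, n_cls))          # classifier weights

def predict(imgs):
    return imgs.reshape(len(imgs), -1) @ Wt        # logits

labels = predict(images).argmax(axis=1)            # model's clean predictions

y0, x0, ph, pw = 2, 2, 3, 3                        # fixed sticker placement
delta = np.full((ph, pw), 0.5)                     # ONE patch for all signs

def apply_sticker(imgs, patch):
    out = imgs.copy()
    out[:, y0:y0+ph, x0:x0+pw] = patch             # same spot on every sign
    return out

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_ce(imgs):
    p = softmax(predict(imgs))
    return -np.log(p[np.arange(len(imgs)), labels] + 1e-12).mean()

loss_before = mean_ce(apply_sticker(images, delta))

for _ in range(200):
    p = softmax(predict(apply_sticker(images, delta)))
    g_logits = p.copy()
    g_logits[np.arange(n_imgs), labels] -= 1.0     # d(mean CE)/d(logits)
    g_pix = (g_logits @ Wt.T).reshape(n_imgs, H, W)[:, y0:y0+ph, x0:x0+pw]
    # Ascend the gradient averaged over ALL images: one shared update,
    # which is what makes the resulting sticker "universal".
    delta = np.clip(delta + 0.5 * g_pix.mean(axis=0), 0.0, 1.0)

loss_after = mean_ce(apply_sticker(images, delta))
sticker = (delta > 0.5).astype(float)              # binarize: black & white

clean_acc = (predict(images).argmax(axis=1) == labels).mean()
stuck_acc = (predict(apply_sticker(images, sticker)).argmax(axis=1)
             == labels).mean()
```

In the paper's setting the gradient would come from backpropagation through a traffic-sign classifier and the sticker would be placed on Street View images rather than random arrays, but the loop structure (shared patch, fixed location, averaged gradient, ascent on the loss) is the same.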
Problem

Research questions and friction points this paper is trying to address.

Universal adversarial stickers for traffic signs
Misclassification of traffic signs by stickers
Virtual testing of adversarial sticker attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Universal stickers for traffic signs
Virtual testing with Street View
High success rate in misclassification