🤖 AI Summary
This work addresses the robustness of traffic sign classification systems under real-world physical conditions by proposing a novel adversarial attack paradigm leveraging naturally fallen leaves. Unlike conventional attacks requiring deliberate modifications (e.g., stickers), this approach exploits the subtle, physically plausible perturbations induced by leaves adhering to sign surfaces—interferences that are difficult to attribute or detect. The method incorporates multi-species leaf modeling, controllable geometric and appearance transformations (scale, rotation, color), edge-response analysis, and black-box evaluation to achieve high misclassification rates against mainstream classifiers. Experiments demonstrate that leaf-induced perturbations significantly degrade models’ ability to extract critical edge features, confirming effective disruption of low-level visual mechanisms. To our knowledge, this is the first study to integrate natural objects into the adversarial attack framework, offering strong stealthiness and plausible deniability. The work provides a new empirical perspective and foundational methodology for evaluating the visual safety of autonomous driving systems.
📝 Abstract
Adversarial input image perturbation attacks have emerged as a significant threat to machine learning algorithms, particularly in image classification settings. These attacks apply subtle perturbations to input images that cause neural networks to misclassify them, even though the images remain easily recognizable to humans. One critical area where adversarial attacks have been demonstrated is automotive systems, where traffic sign classification and recognition are critical and misclassified signs can cause autonomous systems to take wrong actions. This work presents a new class of adversarial attacks. Unlike existing work, which has focused on perturbations created with human-made artifacts, such as stickers, paint, or flashlights shone at traffic signs, this work leverages nature-made artifacts: tree leaves. Using nature-made artifacts gives this new class of attacks plausible deniability: a fall leaf stuck to a street sign could have come from a nearby tree rather than having been placed there by a malicious attacker. To evaluate this new class of adversarial input image perturbation attacks, this work analyzes how fall leaves can cause misclassification of street signs. The evaluation covers leaves from different tree species and considers parameters such as size, color (which varies with leaf type), and rotation, and it demonstrates a high misclassification success rate. The work also explores the correlation between successful attacks and their effect on edge detection, which is a critical step in many image classification algorithms.
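The abstract describes placing leaves on signs with controlled size and rotation, then correlating attack success with degraded edge responses. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual pipeline: it pastes a synthetic elliptical "leaf" occluder (a stand-in for a real leaf mask or texture) onto a grayscale sign image at a chosen scale and rotation, and measures mean Sobel gradient magnitude, the kind of low-level edge response the attack is said to disrupt. All function names and the elliptical leaf shape are assumptions for illustration.

```python
import numpy as np

def sobel_edge_energy(img):
    """Mean Sobel gradient magnitude of a 2-D grayscale image (numpy only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w), dtype=float)
    gy = np.zeros((h, w), dtype=float)
    # correlate with the two 3x3 kernels without any external image library
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return float(np.mean(np.hypot(gx, gy)))

def apply_leaf(img, center, radius, leaf_value, angle_deg=0.0, scale=1.0):
    """Paste an elliptical 'leaf' occluder with a given rotation and scale.

    The uniform-valued ellipse is a simplified stand-in for a real leaf
    texture; `leaf_value` models the leaf's color/intensity.
    """
    out = img.astype(float).copy()
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    theta = np.deg2rad(angle_deg)
    dx, dy = xx - center[1], yy - center[0]
    # rotate pixel coordinates, then test an axis-aligned 2:1 ellipse
    rx = dx * np.cos(theta) + dy * np.sin(theta)
    ry = -dx * np.sin(theta) + dy * np.cos(theta)
    mask = (rx / (radius * scale)) ** 2 + (ry / (0.5 * radius * scale)) ** 2 <= 1.0
    out[mask] = leaf_value
    return out, mask

# Toy "sign": a high-contrast vertical edge.
sign = np.zeros((32, 32))
sign[:, 16:] = 255.0
occluded, mask = apply_leaf(sign, center=(16, 16), radius=8,
                            leaf_value=128.0, angle_deg=30.0, scale=1.0)
print(sobel_edge_energy(sign), sobel_edge_energy(occluded))
```

In a full evaluation one would sweep `angle_deg`, `scale`, and `leaf_value` (and real leaf masks per species), query a black-box classifier on each composite, and correlate misclassifications with drops in edge response over the sign region.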