🤖 AI Summary
Adversarial attacks on traffic sign classification—e.g., misclassifying a stop sign as a speed-limit sign—pose critical safety risks in autonomous driving systems.
Method: This paper proposes a spatiotemporal robust defense paradigm leveraging historical street-view imagery. It introduces a novel “time-travel” mechanism comprising cross-temporal retrieval, multi-temporal image alignment, and temporal consistency modeling of traffic signs, followed by ensemble voting over aligned historical instances to restore semantic consistency under adversarial perturbations.
Contribution/Results: To our knowledge, this is the first defense framework to exploit open historical datasets and spatiotemporal continuity for adversarial robustness, thereby reversing the inherent asymmetry between attackers and defenders. Evaluated against the latest adversarial example attacks on traffic sign classification, including physical attacks using stickers, light, and shadows, the method achieves a 100% defense success rate, outperforming state-of-the-art real-time defenses in both robustness and generalization.
📝 Abstract
Adversarial example attacks have emerged as a critical threat to machine learning. In image classification, adversarial attacks apply various minor modifications to an image that confuse the classification neural network, while the image remains recognizable to humans. One important domain where these attacks have been applied is the automotive setting, specifically traffic sign classification. Researchers have demonstrated that adding stickers, shining light, or casting shadows are all means to make machine learning inference algorithms misclassify traffic signs. This can cause potentially dangerous situations: a stop sign recognized as a speed limit sign causes vehicles to ignore it, potentially leading to accidents. To address these attacks, this work focuses on enhancing defenses against such adversarial attacks. It shifts the advantage to the user by introducing the idea of leveraging historical images and majority voting. While the attacker modifies the traffic sign that is currently being processed by the victim's machine learning inference, the victim can gain an advantage by examining past images of the same traffic sign. This work introduces the notion of "time traveling" and uses historical Street View images, accessible to anybody, to perform inference on different, past versions of the same traffic sign. In the evaluation, the proposed defense achieves 100% effectiveness against the latest adversarial example attacks on traffic sign classification algorithms.
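The core defense described above, running inference on retrieved historical images of the same sign and taking a majority vote alongside the current prediction, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the `classify` callable and the label strings are hypothetical stand-ins for the actual traffic sign classifier and retrieval pipeline.

```python
from collections import Counter

def time_travel_defense(current_label, historical_labels):
    """Restore the correct label by majority vote over the current
    (possibly adversarial) prediction and predictions obtained from
    aligned historical Street View captures of the same sign.

    Hypothetical sketch: labels are assumed to come from an upstream
    classifier applied to each retrieved historical image.
    """
    votes = Counter([current_label] + historical_labels)
    label, count = votes.most_common(1)[0]
    return label

# An adversarially perturbed stop sign is misread as a speed-limit
# sign today, but past captures of the unmodified sign outvote it.
restored = time_travel_defense(
    current_label="speed_limit_45",
    historical_labels=["stop", "stop", "stop", "stop"],
)
print(restored)  # → stop
```

Because the attacker can only perturb the sign as it exists now, the unmodified historical instances dominate the vote, which is the asymmetry reversal the paper argues for.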