Evaluating the Impact of Adversarial Attacks on Traffic Sign Classification using the LISA Dataset

📅 2025-09-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Adversarial robustness of traffic sign classifiers remains underexplored on the LISA dataset, posing critical safety risks for autonomous driving systems. Method: This study systematically evaluates the vulnerability of 47-class traffic sign classification models, built on convolutional neural networks (CNNs), to adversarial attacks on the LISA dataset. We employ two canonical white-box attacks, the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), to generate adversarial examples and quantitatively analyze the impact of perturbation magnitude on classification accuracy. Results: Even imperceptible perturbations cause substantial accuracy degradation, exposing severe security vulnerabilities in real-world traffic scenarios. This work establishes the first reproducible benchmark for adversarial robustness assessment on the LISA dataset, bridging a key gap in the literature. Moreover, it provides empirically grounded insights for safety-critical risk modeling and informs the design of robust defense mechanisms for vision-based autonomous driving systems.
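The FGSM step the summary refers to perturbs each input coordinate by a fixed budget in the direction of the sign of the loss gradient. The sketch below is a minimal NumPy illustration on a linear softmax classifier, not the paper's CNN; all function names and shapes here are ours.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(W, b, x, y):
    """Loss of a linear softmax classifier (logits = W @ x + b) on (x, y)."""
    return -np.log(softmax(W @ x + b)[y])

def input_gradient(W, b, x, y):
    """Gradient of the cross-entropy loss with respect to the input x."""
    p = softmax(W @ x + b)
    p[y] -= 1.0                          # softmax probabilities minus one-hot label
    return W.T @ p

def fgsm(W, b, x, y, epsilon):
    """Fast Gradient Sign Method: one step of size epsilon along the sign
    of the input gradient, clipped back to the valid pixel range [0, 1]."""
    x_adv = x + epsilon * np.sign(input_gradient(W, b, x, y))
    return np.clip(x_adv, 0.0, 1.0)
```

Because the step follows the loss gradient of a convex model, the loss on the perturbed input cannot decrease, while no pixel moves by more than `epsilon`.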


๐Ÿ“ Abstract
Adversarial attacks pose significant threats to machine learning models by introducing carefully crafted perturbations that cause misclassification. While prior work has primarily focused on MNIST and similar datasets, this paper investigates the vulnerability of traffic sign classifiers using the LISA Traffic Sign dataset. We train a convolutional neural network to classify 47 different traffic signs and evaluate its robustness against Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks. Our results show a sharp decline in classification accuracy as the perturbation magnitude increases, highlighting the model's susceptibility to adversarial examples. This study lays the groundwork for future exploration into defense mechanisms tailored for real-world traffic sign recognition systems.
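The PGD attack named in the abstract iterates small FGSM-style steps and, after each step, projects the result back into an L-infinity ball of radius epsilon around the clean input. A minimal NumPy sketch on a linear softmax classifier, our toy stand-in for the paper's CNN, with illustrative step sizes:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def input_gradient(W, b, x, y):
    """Gradient of cross-entropy w.r.t. x for logits = W @ x + b."""
    p = softmax(W @ x + b)
    p[y] -= 1.0
    return W.T @ p

def pgd(W, b, x, y, epsilon, alpha=None, steps=10):
    """Projected Gradient Descent in the L-infinity ball of radius epsilon:
    repeat small signed-gradient steps, then project back onto the ball
    and onto the valid pixel range [0, 1]."""
    alpha = alpha if alpha is not None else epsilon / 4
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_gradient(W, b, x_adv, y))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project onto ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # keep pixels valid
    return x_adv
```

With a single step and `alpha = epsilon`, PGD reduces to FGSM; more steps generally find stronger perturbations within the same epsilon budget.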
Problem

Research questions and friction points this paper is trying to address.

Investigating traffic sign classifier vulnerability to adversarial attacks
Evaluating robustness against FGSM and PGD attack methods
Assessing accuracy decline with increasing perturbation magnitude
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used LISA dataset for traffic sign classification
Applied FGSM and PGD adversarial attack methods
Evaluated CNN robustness against adversarial perturbations
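The accuracy-versus-perturbation evaluation described above can be sketched end to end on synthetic data. Everything below is a toy stand-in for the paper's 47-class LISA CNN: three Gaussian clusters, a nearest-centroid linear classifier, and a batched FGSM attack swept over an epsilon grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-class stand-in for the 47-class LISA task: well-separated
# Gaussian clusters and a nearest-centroid classifier written as
# linear logits (argmin ||x - c_k||^2 == argmax 2 c_k . x - ||c_k||^2).
centers = np.array([[3.0, 0.0], [-3.0, 3.0], [0.0, -3.0]])
X = np.vstack([c + 0.3 * rng.normal(size=(100, 2)) for c in centers])
y = np.repeat(np.arange(3), 100)
W = 2.0 * centers
b = -(centers ** 2).sum(axis=1)

def accuracy_under_fgsm(eps):
    """Accuracy when every input is attacked with FGSM at budget eps."""
    logits = X @ W.T + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0      # batched softmax minus one-hot
    X_adv = X + eps * np.sign(p @ W)    # batched FGSM step
    return ((X_adv @ W.T + b).argmax(axis=1) == y).mean()

for eps in [0.0, 0.5, 1.0, 2.0]:
    print(f"eps={eps:.1f}  accuracy={accuracy_under_fgsm(eps):.3f}")
```

Plotting such a sweep is how the accuracy decline with increasing perturbation magnitude is typically reported; at `eps=0.0` the attack is a no-op and the clean accuracy is recovered.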
Nabeyou Tadessa
Department of Computer Engineering, Benedict College, Columbia, SC, USA

Balaji Iyangar
Department of Computer Science, Benedict College, Columbia, SC, USA

Mashrur Chowdhury
Founding Director, National Center for Transportation Cybersecurity and Resiliency
CPS Cybersecurity, Transportation Cyber-Physical-Social Systems, Connected Autonomous Vehicles