🤖 AI Summary
This work addresses the vulnerability of autonomous vehicle perception modules to adversarial attacks (e.g., PGD, FGSM, GA) in traffic sign classification, which poses critical safety risks. To enhance robustness, we propose a Hybrid Classical-Quantum Deep Learning (HCQ-DL) perception architecture that integrates AlexNet or VGG-16 feature extractors with trainable quantum circuits comprising ~100 parameters. Leveraging transfer learning and a hybrid classical-quantum training paradigm, the framework enables end-to-end robust classification. Experiments demonstrate >95% accuracy under clean conditions; accuracy degrades gracefully to >91% under FGSM and GA attacks and remains at 85% under the stronger PGD attack, substantially outperforming classical baselines (<21%). This study constitutes the first empirical validation of medium-scale trainable quantum circuits for improving adversarial robustness in real-world traffic scenarios, establishing a pathway toward quantum-enhanced autonomous driving perception.
📝 Abstract
Deep learning (DL)-based image classification models are essential to autonomous vehicle (AV) perception modules, since incorrect classification can have severe repercussions. Adversarial attacks are widely studied cyberattacks that can cause DL models to produce incorrect predictions, such as traffic signs misclassified by an AV's perception module. In this study, we build and compare hybrid classical-quantum deep learning (HCQ-DL) models with classical deep learning (C-DL) models to demonstrate robustness against adversarial attacks on perception modules. We use transfer-learning models, AlexNet and VGG-16, as feature extractors, feeding the extracted features into the quantum system. We tested over 1,000 quantum circuits in our HCQ-DL models against three well-known untargeted adversarial attacks: projected gradient descent (PGD), the fast gradient sign attack (FGSA), and the gradient attack (GA). We evaluated the performance of all models under both attack and no-attack scenarios. Our HCQ-DL models maintain accuracy above 95% in the no-attack scenario and above 91% under GA and FGSA attacks, higher than the C-DL models. Under the PGD attack, our AlexNet-based HCQ-DL model maintains 85% accuracy, whereas the C-DL models achieve accuracies below 21%. These results highlight that HCQ-DL models provide improved accuracy for traffic sign classification under adversarial settings compared to their classical counterparts.
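To make the attack setting concrete, below is a minimal NumPy sketch of the fast gradient sign attack mentioned in the abstract: the input is perturbed by epsilon times the sign of the loss gradient with respect to the input. The toy logistic-regression classifier, the random weights, and the epsilon value are illustrative assumptions for the sketch, not the paper's models or settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, epsilon):
    """Perturb x by epsilon * sign of the cross-entropy gradient w.r.t. x.

    For logistic regression p = sigmoid(w @ x + b), the gradient of the
    cross-entropy loss with respect to the input is (p - y) * w.
    """
    p = sigmoid(w @ x + b)          # predicted probability of class 1
    grad_x = (p - y) * w            # d(loss)/dx for this toy model
    return x + epsilon * np.sign(grad_x)

# Illustrative data: random weights and a single input labeled as class 1.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1.0

x_adv = fgsm_attack(x, y, w, b, epsilon=0.2)

# Each feature moves by exactly epsilon in the loss-increasing direction,
# so the L-infinity size of the perturbation equals epsilon.
print(round(float(np.max(np.abs(x_adv - x))), 6))  # → 0.2
```

Because the perturbation follows the loss gradient, the model's confidence in the true class drops after the attack; for a linear model this is guaranteed, which is what makes FGSM a standard first test of robustness.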