🤖 AI Summary
Traffic sign recognition systems in autonomous driving are vulnerable to physically realizable camera-based adversarial attacks, leading to delayed automatic braking and safety risks. Method: This paper proposes the first generative texture-perturbation attack tailored to realistic driving scenarios, validated in the CARLA simulator under diverse lighting conditions and viewing angles. Concurrently, we design a lightweight defense framework integrating YOLOv5-based object detection, input validation, and adversarial training. Contribution/Results: Experimental results show that the proposed attack achieves an over 92% success rate across varying environmental conditions. The defense framework reduces the false detection rate to 3.1% and cuts braking response latency by 87%, significantly enhancing real-time performance and safety. This work delivers a deployable, end-to-end vision-driven autonomous driving solution with integrated attack and defense capabilities.
📝 Abstract
Autonomous vehicles (AVs) rely heavily on cameras and artificial intelligence (AI) to make safe and accurate driving decisions. However, this reliance on AI as the core enabling technology raises serious cyber threats that hinder the large-scale adoption of AVs. It is therefore crucial to analyze the resilience of AV security systems against sophisticated attacks that manipulate camera inputs and deceive AI models. In this paper, we develop camera-camouflaged adversarial attacks targeting traffic sign recognition (TSR) in AVs. Specifically, the attack modifies the texture of a stop sign to fool the AV's object detection system, thereby affecting the AV's actuators. The attack's effectiveness is tested using the CARLA AV simulator, and the results show that such an attack can delay the auto-braking response to the stop sign, resulting in potential safety issues. We conduct extensive experiments under various conditions, confirming that our attack is effective and robust. Additionally, we address the attack by presenting mitigation strategies. The proposed attack and defense methods are applicable to other end-to-end trained autonomous cyber-physical systems.
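To make the underlying idea concrete, the toy sketch below shows a gradient-sign (FGSM-style) pixel perturbation against a hypothetical linear "stop sign" classifier. This is only an illustration of how a small, bounded texture change can lower a detector's confidence; the classifier, weights, and epsilon budget here are invented for the example, and the paper's generative texture attack and CARLA evaluation are far more sophisticated.

```python
import math
import random

def confidence(pixels, weights):
    """Sigmoid confidence of a toy linear 'stop sign' classifier (assumed for illustration)."""
    logit = sum(p * w for p, w in zip(pixels, weights))
    return 1.0 / (1.0 + math.exp(-logit))

def fgsm_perturb(pixels, weights, eps=0.05):
    """Perturb pixel values in [0, 1] to lower the 'stop sign' confidence.

    For binary cross-entropy with true label 1, the loss gradient w.r.t.
    each pixel is (confidence - 1) * weight; stepping eps in the sign of
    that gradient maximally increases the loss under an L-infinity budget.
    """
    c = confidence(pixels, weights)
    adv = [p + eps * math.copysign(1.0, (c - 1.0) * w)
           for p, w in zip(pixels, weights)]
    # Clip so the perturbed texture remains a valid image.
    return [min(1.0, max(0.0, p)) for p in adv]

random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(64)]          # toy classifier
clean = [min(1.0, max(0.0, random.gauss(0.5, 0.1))) for _ in range(64)]
adv = fgsm_perturb(clean, weights)
print(confidence(clean, weights), confidence(adv, weights))
```

Even with the per-pixel change capped at 0.05, the perturbed image yields a strictly lower confidence than the clean one, which is the same principle the camouflaged texture exploits at scale against a real detector.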