🤖 AI Summary
This study addresses the insufficient robustness of Level 5 autonomous driving vision systems, which are vulnerable to perception failures induced by adversarial attacks. For the first time, the CIA triad (confidentiality, integrity, and availability) is systematically integrated into the design of autonomous driving vision systems. The authors propose a reference architecture tailored for connected vehicles, combining system modeling, attack surface identification, and component-level functional analysis to comprehensively characterize attack vectors across modules and their impacts on CIA properties. The research uncovers critical attack pathways and security vulnerabilities, thereby establishing a theoretical foundation and analytical framework for designing security mechanisms that enhance the robustness of autonomous driving vision systems.
📝 Abstract
This article investigates the robustness of vision systems in Connected and Autonomous Vehicles (CAVs), which is critical for developing Level-5 autonomous driving capabilities. Safe and reliable CAV navigation depends on robust vision systems that enable accurate detection of objects, lane markings, and traffic signage. We analyze the key sensors and vision components essential for CAV navigation to derive a reference architecture for a CAV vision system (CAVVS). This reference architecture provides a basis for identifying potential attack surfaces of the CAVVS. Subsequently, we elaborate on identified attack vectors targeting each attack surface, rigorously evaluating their implications for confidentiality, integrity, and availability (CIA). Our study provides a comprehensive understanding of attack vector dynamics in vision systems, which is crucial for formulating robust security measures that can uphold the principles of the CIA triad.
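The analysis the abstract describes — enumerating attack vectors on each attack surface and evaluating which CIA properties each vector violates — could be sketched as a minimal data model. The components, vector names, and impact assignments below are illustrative placeholders, not findings from the article:

```python
from dataclasses import dataclass, field
from enum import Flag, auto

class CIA(Flag):
    """CIA-triad properties an attack vector may violate."""
    CONFIDENTIALITY = auto()
    INTEGRITY = auto()
    AVAILABILITY = auto()

@dataclass
class AttackVector:
    name: str
    impact: CIA  # one or more CIA properties, combinable with `|`

@dataclass
class AttackSurface:
    component: str
    vectors: list[AttackVector] = field(default_factory=list)

    def violating(self, prop: CIA) -> list[str]:
        """Names of vectors on this surface that violate the given property."""
        return [v.name for v in self.vectors if prop in v.impact]

# Hypothetical example surface; entries are for illustration only.
camera = AttackSurface("front camera", [
    AttackVector("adversarial patch on traffic sign", CIA.INTEGRITY),
    AttackVector("laser blinding", CIA.AVAILABILITY),
    AttackVector("video feed eavesdropping", CIA.CONFIDENTIALITY),
])

print(camera.violating(CIA.INTEGRITY))  # → ['adversarial patch on traffic sign']
```

A tabulation over all surfaces of such a model would yield the kind of per-component CIA impact characterization the abstract outlines.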