🤖 AI Summary
Autonomous driving systems (ADS) rely on multi-sensor fusion for safe operation, yet remain vulnerable to adversarial sensor attacks; existing work lacks a systematic account of end-to-end attack feasibility and of how sensor errors propagate across the ADS pipeline. Method: We survey sensor attacks across platforms, sensor modalities, and attack methods, and organize the analysis around the System Error Propagation Graph (SEPG), a structured tool that traces cascading error propagation from compromised sensors through the perception, localization, planning, and control modules under realistic physical and architectural constraints. Contribution/Results: Guided by the SEPG, we distill seven key findings on the feasibility challenges of sensor attacks, uncover eleven previously overlooked attack vectors that exploit inter-module interactions (several validated through proof-of-concept experiments), and show that large language models can automate aspects of SEPG construction and cross-validate expert analysis, pointing toward AI-assisted security evaluation of autonomous systems.
📝 Abstract
Autonomous vehicles, including self-driving cars, robotic ground vehicles, and drones, rely on complex sensor pipelines to ensure safe and reliable operation. However, these safety-critical systems remain vulnerable to adversarial sensor attacks that can compromise their performance and mission success. While extensive research has demonstrated various sensor attack techniques, critical gaps remain in understanding their feasibility in real-world, end-to-end systems. This gap largely stems from the lack of a systematic perspective on how sensor errors propagate through the interconnected modules of an autonomous system as the vehicle interacts with the physical world.
To bridge this gap, we present a comprehensive survey of autonomous vehicle sensor attacks across platforms, sensor modalities, and attack methods. Central to our analysis is the System Error Propagation Graph (SEPG), a structured demonstration tool that illustrates how sensor attacks propagate through system pipelines, exposing the conditions and dependencies that determine attack feasibility. With the aid of SEPG, our study distills seven key findings that highlight the feasibility challenges of sensor attacks and uncovers eleven previously overlooked attack vectors exploiting inter-module interactions, several of which we validate through proof-of-concept experiments. Additionally, we demonstrate how large language models (LLMs) can automate aspects of SEPG construction and cross-validate expert analysis, showcasing the promise of AI-assisted security evaluation.
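To make the SEPG idea concrete, the sketch below models it as a small directed graph in Python: nodes are pipeline modules, each edge carries the condition under which an error crosses a module boundary, and a propagation path is attack-feasible only if every condition along it holds. This is a minimal illustration of the concept as described in the abstract, not the paper's actual construction; all module names, conditions, and the GPS-spoofing example are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    dst: str          # downstream module the error flows into
    condition: str    # dependency that must hold for the error to propagate
    satisfied: bool   # whether the attack scenario meets that condition

@dataclass
class SEPG:
    """A directed graph of error-propagation edges between pipeline modules."""
    edges: dict = field(default_factory=dict)

    def add(self, src: str, dst: str, condition: str, satisfied: bool) -> None:
        self.edges.setdefault(src, []).append(Edge(dst, condition, satisfied))

    def feasible_paths(self, src: str, sink: str, path=None):
        """Enumerate paths from a compromised sensor to a sink module along
        which every edge condition is satisfied (DFS, cycle-safe)."""
        path = (path or []) + [src]
        if src == sink:
            yield path
            return
        for e in self.edges.get(src, []):
            if e.satisfied and e.dst not in path:  # skip blocked edges and cycles
                yield from self.feasible_paths(e.dst, sink, path)

# Hypothetical scenario: a GPS spoofing error reaches control only if the
# intermediate conditions hold; the camera path is blocked by a failed one.
g = SEPG()
g.add("gps", "localization", "spoofed fix passes EKF innovation gate", True)
g.add("localization", "planning", "pose error exceeds lane-keeping margin", True)
g.add("planning", "control", "planner emits trajectory from corrupted pose", True)
g.add("camera", "perception", "adversarial patch survives ISP pipeline", False)
g.add("perception", "planning", "misdetection alters object list", True)

for p in g.feasible_paths("gps", "control"):
    print(" -> ".join(p))  # gps -> localization -> planning -> control
```

Enumerating paths this way is what exposes the conditions and dependencies that determine whether a sensor attack remains feasible end to end: flipping any single `satisfied` flag to `False` prunes every path through that edge.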