SoK: How Sensor Attacks Disrupt Autonomous Vehicles: An End-to-end Analysis, Challenges, and Missed Threats

📅 2025-09-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autonomous driving systems (ADS) rely on multi-sensor fusion for safety yet remain vulnerable to adversarial sensor attacks; existing work lacks a systematic analysis of end-to-end attack feasibility and of error propagation mechanisms across the ADS pipeline. Method: We propose the System Error Propagation Graph (SEPG) modeling framework—the first to formally characterize cascading error propagation paths from compromised sensors through the perception, localization, planning, and control modules under realistic physical and architectural constraints. We identify seven critical implementation challenges and eleven previously overlooked attack vectors, validated via a systematic literature review, proof-of-concept experiments, and large language model–driven automated threat inference. Contribution/Results: Our work advances a structured understanding of ADS vulnerabilities and enables a shift toward AI-augmented safety analysis, demonstrating the feasibility of multiple novel attack classes while providing a rigorous foundation for robustness evaluation and defense design.

📝 Abstract
Autonomous vehicles, including self-driving cars, robotic ground vehicles, and drones, rely on complex sensor pipelines to ensure safe and reliable operation. However, these safety-critical systems remain vulnerable to adversarial sensor attacks that can compromise their performance and mission success. While extensive research has demonstrated various sensor attack techniques, critical gaps remain in understanding their feasibility in real-world, end-to-end systems. This gap largely stems from the lack of a systematic perspective on how sensor errors propagate through interconnected modules in autonomous systems when autonomous vehicles interact with the physical world. To bridge this gap, we present a comprehensive survey of autonomous vehicle sensor attacks across platforms, sensor modalities, and attack methods. Central to our analysis is the System Error Propagation Graph (SEPG), a structured demonstration tool that illustrates how sensor attacks propagate through system pipelines, exposing the conditions and dependencies that determine attack feasibility. With the aid of SEPG, our study distills seven key findings that highlight the feasibility challenges of sensor attacks and uncovers eleven previously overlooked attack vectors exploiting inter-module interactions, several of which we validate through proof-of-concept experiments. Additionally, we demonstrate how large language models (LLMs) can automate aspects of SEPG construction and cross-validate expert analysis, showcasing the promise of AI-assisted security evaluation.
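The SEPG described in the abstract can be pictured as a directed graph whose nodes are ADS pipeline modules and whose edges carry the conditions under which an upstream error survives into the next module; an attack is end-to-end feasible only if some path from the attacked sensor to control satisfies every edge condition. The following is a minimal illustrative sketch of that idea, not the paper's actual formalism; the `SEPG` class, module names, and conditions are all hypothetical:

```python
# Illustrative sketch (assumed, not from the paper): an error propagation
# graph where nodes are ADS modules and each edge carries a predicate
# stating when an upstream error propagates across that edge.
from dataclasses import dataclass, field

@dataclass
class SEPG:
    # edges[src] -> list of (dst, condition); a condition maps an attack
    # context dict to True if the error crosses this edge.
    edges: dict = field(default_factory=dict)

    def add_edge(self, src, dst, condition=lambda ctx: True):
        self.edges.setdefault(src, []).append((dst, condition))

    def feasible_paths(self, start, goal, ctx, path=None):
        """Enumerate module chains along which a sensor error reaches `goal`."""
        path = (path or []) + [start]
        if start == goal:
            return [path]
        found = []
        for dst, cond in self.edges.get(start, []):
            if dst not in path and cond(ctx):
                found.extend(self.feasible_paths(dst, goal, ctx, path))
        return found

# Hypothetical pipeline: a LiDAR spoofing error reaches control only if the
# spoofing signal is strong enough and no camera cross-check filters it out.
g = SEPG()
g.add_edge("lidar", "perception", lambda ctx: ctx["spoof_power"] > 0.5)
g.add_edge("perception", "planning", lambda ctx: not ctx["camera_cross_check"])
g.add_edge("planning", "control")

ctx = {"spoof_power": 0.8, "camera_cross_check": False}
print(g.feasible_paths("lidar", "control", ctx))
# -> [['lidar', 'perception', 'planning', 'control']]
```

Enabling the cross-check in the context (`"camera_cross_check": True`) cuts the perception-to-planning edge and leaves no feasible path, which mirrors how the paper uses edge conditions to expose the dependencies that determine attack feasibility.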
Problem

Research questions and friction points this paper is trying to address.

Analyzing sensor attack feasibility in real-world autonomous vehicles
Understanding error propagation through interconnected autonomous system modules
Identifying overlooked attack vectors in autonomous vehicle sensor pipelines
Innovation

Methods, ideas, or system contributions that make the work stand out.

System Error Propagation Graph for attack analysis
Validating overlooked attack vectors via experiments
Using LLMs to automate security evaluation processes
Qingzhao Zhang
University of Arizona
system security · software security · AI security
Shaocheng Luo
Duke University
Z. Morley Mao
University of Michigan
Miroslav Pajic
Duke University
Michael K. Reiter
Duke University