🤖 AI Summary
Prior research on autonomous vehicle (AV) perception security focuses predominantly on accuracy degradation, neglecting temporal robustness—particularly inference-time latency vulnerabilities in perception modules. Method: This work introduces the first systematic inference-time attack paradigm targeting AV perception sensors, establishing a “timing–safety” coupled analytical framework. Leveraging a high-fidelity simulation platform, we integrate adversarial sample generation, precise timing perturbation injection, and traffic flow dynamics modeling to construct an end-to-end attack pipeline. Contribution/Results: Experiments demonstrate that the attack induces perceptual inference delays exceeding 300 ms and increases emergency braking failure rates to 68%, severely compromising both ego-vehicle and surrounding traffic participant safety. Our findings challenge conventional safety evaluation paradigms centered solely on functional correctness, providing novel theoretical insights and empirical evidence for timing-aware robustness assessment in AV perception systems.
📝 Abstract
As a safety-critical cyber-physical system, Autonomous Vehicles (AVs) have long raised important research questions in cybersecurity and related safety. Among all the modules on AVs, perception is one of the most accessible attack surfaces, as neither drivers nor AVs have control over the outside environment. Most current work on AV perception security focuses on perception correctness. In this work, we propose an impact analysis of inference-time attacks on autonomous vehicles. We demonstrate in a simulation system that such inference-time attacks can also threaten the safety of both the ego vehicle and other traffic participants.
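To make the safety impact of an inference-time delay concrete, here is an illustrative back-of-the-envelope calculation (not taken from the paper; the speed and the helper function are assumptions for illustration) showing how far a vehicle travels during an added perception delay such as the 300 ms reported above:

```python
def extra_travel_m(speed_kmh: float, delay_ms: float) -> float:
    """Distance (in metres) covered while perception output is delayed."""
    speed_ms = speed_kmh / 3.6           # convert km/h to m/s
    return speed_ms * (delay_ms / 1000)  # metres travelled before any reaction

# At 100 km/h, an extra 300 ms of perception inference delay means the
# vehicle covers roughly 8.3 additional metres before braking can begin.
print(round(extra_travel_m(100, 300), 1))
```

This simple model ignores braking dynamics; it only shows that even sub-second inference delays translate into car-length-scale gaps in reaction distance, which is why timing robustness matters alongside accuracy.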