🤖 AI Summary
This work identifies a class of low-cost, passive LiDAR spoofing attacks leveraging planar mirrors, posing a critical threat to the 3D perception security of autonomous vehicles (AVs). Exploiting specular reflection to redirect LiDAR laser beams, the attack injects phantom obstacles into point clouds or erases real targets from them—requiring no electronics or custom fabrication. The work formalizes two adversarial modes, *object addition* and *object removal*, and rigorously validates their feasibility and scalability via geometric optical modeling, outdoor experiments with commercial LiDAR sensors, real-world integration into Autoware, and large-scale CARLA simulations. Results demonstrate significant corruption of occupancy grids, leading to false positives and false negatives and consequent decision-making and control failures. Crucially, prevalent defense mechanisms prove ineffective against this physical-layer attack. To our knowledge, this is the first systematic study establishing mirror-based reflection as a concrete, exploitable vulnerability in LiDAR perception—providing foundational empirical evidence for robust perception design and physical-layer security research.
📝 Abstract
Autonomous vehicles (AVs) rely heavily on LiDAR sensors for accurate 3D perception. We present a novel class of low-cost, passive LiDAR spoofing attacks that exploit mirror-like surfaces to inject objects into, or remove them from, an AV's perception. Using planar mirrors to redirect LiDAR beams, these attacks require no electronics or custom fabrication and can be deployed in real-world settings. We define two adversarial goals: Object Addition Attacks (OAA), which create phantom obstacles, and Object Removal Attacks (ORA), which conceal real hazards. We develop geometric optics models, validate them with controlled outdoor experiments using a commercial LiDAR and an Autoware-equipped vehicle, and implement a CARLA-based simulation for scalable testing. Experiments show that mirror attacks corrupt occupancy grids, induce false detections, and trigger unsafe planning and control behaviors. We discuss potential defenses (thermal sensing, multi-sensor fusion, light fingerprinting) and their limitations.
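To make the geometric-optics intuition concrete, the sketch below (a hypothetical illustration, not the authors' model or code) shows why a planar mirror yields a phantom return: a time-of-flight LiDAR places each return along the *emitted* ray at half the round-trip path length, so a beam folded by a mirror reports a point behind the mirror along the original direction. The function names and the 45° mirror scenario are assumptions for illustration.

```python
import math

# Small vector helpers (3-tuples), to keep the sketch dependency-free.
def dot(a, b):   return sum(x * y for x, y in zip(a, b))
def sub(a, b):   return tuple(x - y for x, y in zip(a, b))
def add(a, b):   return tuple(x + y for x, y in zip(a, b))
def scale(a, k): return tuple(x * k for x in a)

def reflect(d, n):
    """Specular reflection of unit direction d about unit mirror normal n."""
    return sub(d, scale(n, 2 * dot(d, n)))

def phantom_point(origin, d, mirror_pt, n, target_dist):
    """Where the LiDAR *reports* a return after a mirror bounce.

    origin       -- sensor position
    d            -- unit emission direction
    mirror_pt, n -- a point on the mirror plane and its unit normal
    target_dist  -- distance from the mirror hit to the real reflected target
    """
    # Distance along d from the sensor to the mirror plane.
    t = dot(sub(mirror_pt, origin), n) / dot(d, n)
    # Total optical path = sensor->mirror + mirror->target; the sensor
    # places the return at that range along the ORIGINAL direction d.
    return add(origin, scale(d, t + target_dist))

# Example: sensor at the origin fires along +x; a mirror 5 m away, tilted 45
# degrees, folds the beam straight up onto a real target 3 m past the mirror.
s = 1 / math.sqrt(2)
n45 = (-s, 0.0, s)                       # unit normal of the tilted mirror
print(reflect((1.0, 0.0, 0.0), n45))     # beam redirected to ~(0, 0, 1)
p = phantom_point((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                  (5.0, 0.0, 0.0), n45, 3.0)
print(p)  # phantom at ~(8, 0, 0), though the real target sits at (5, 0, 3)
```

The same geometry explains removal: rays that would have struck a real obstacle are folded elsewhere, so the obstacle's returns vanish from the point cloud.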