Seeing is Deceiving: Mirror-Based LiDAR Spoofing for Autonomous Vehicle Deception

📅 2025-09-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a class of low-cost, passive LiDAR spoofing attacks that leverage planar mirrors, posing a critical threat to the 3D perception security of autonomous vehicles (AVs). By exploiting specular reflection to redirect LiDAR laser beams, the attack injects phantom obstacles into point clouds or erases real targets from them, requiring no electronics or custom fabrication. The authors formalize two adversarial modes: *target addition* and *target removal*. Feasibility and scalability are validated via geometric optical modeling, outdoor experiments with commercial LiDAR sensors, real-world integration with Autoware, and large-scale CARLA simulations. Results demonstrate significant corruption of occupancy grids, leading to false positives and false negatives and, in turn, to decision-making and control failures. Crucially, prevalent defense mechanisms prove ineffective against this physical-layer attack. To the authors' knowledge, this is the first systematic study establishing mirror-based reflection as a concrete, exploitable vulnerability in LiDAR perception, providing foundational empirical evidence for robust perception design and physical-layer security research.

📝 Abstract
Autonomous vehicles (AVs) rely heavily on LiDAR sensors for accurate 3D perception. We present a novel class of low-cost, passive LiDAR spoofing attacks that exploit mirror-like surfaces to inject objects into, or remove objects from, an AV's perception. Using planar mirrors to redirect LiDAR beams, these attacks require no electronics or custom fabrication and can be deployed in real-world settings. We define two adversarial goals: Object Addition Attacks (OAA), which create phantom obstacles, and Object Removal Attacks (ORA), which conceal real hazards. We develop geometric optics models, validate them with controlled outdoor experiments using a commercial LiDAR and an Autoware-equipped vehicle, and implement a CARLA-based simulation for scalable testing. Experiments show mirror attacks corrupt occupancy grids, induce false detections, and trigger unsafe planning and control behaviors. We discuss potential defenses (thermal sensing, multi-sensor fusion, light-fingerprinting) and their limitations.
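The geometric-optics intuition behind the OAA mode can be sketched in a few lines: a LiDAR assumes each beam traveled in a straight line, so an echo that actually bounced off a mirror to a real surface gets placed along the *original* beam direction at the total path length. The following is a minimal illustration of that geometry, not the paper's actual model; all coordinates and function names are made up for this example.

```python
import math

def reflect_direction(d, n):
    # Specular reflection of a unit beam direction d off a planar mirror
    # with unit normal n: r = d - 2 (d . n) n
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def phantom_point(origin, d, mirror_hit, target):
    # The sensor assumes the beam traveled straight, so the echo from the
    # real target (reached via the mirror) is rendered along the original
    # unit direction d at the total one-way path length, creating a
    # phantom return where nothing physically exists.
    total = math.dist(origin, mirror_hit) + math.dist(mirror_hit, target)
    return tuple(o + total * di for o, di in zip(origin, d))

# A beam fired along +x hits a mirror 5 m away; the reflected ray reaches
# a real surface 3 m further. The point cloud shows a return 8 m out.
p = phantom_point((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                  (5.0, 0.0, 0.0), (5.0, 3.0, 0.0))
```

The ORA mode is the dual case: the mirror deflects beams that would have hit a real object, so its returns land elsewhere (or nowhere) and the object vanishes from the point cloud.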
Problem

Research questions and friction points this paper is trying to address.

Mirror-based LiDAR spoofing attacks inject or remove objects from autonomous vehicle perception
These passive attacks use planar mirrors to redirect LiDAR beams without electronics
Attacks corrupt occupancy grids, cause false detections, and trigger unsafe vehicle behaviors
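The occupancy-grid corruption described above can be illustrated with a toy 2-D grid: a mirror-injected phantom return marks a cell occupied even though no obstacle is there. The grid resolution and coordinates below are invented for illustration and do not come from the paper.

```python
def to_cell(point_xy, resolution=0.5):
    # Map a 2-D point (meters) to an occupancy-grid cell index.
    return (int(point_xy[0] // resolution), int(point_xy[1] // resolution))

real_points = [(10.0, 2.0)]    # returns from an actual obstacle
phantom_points = [(8.0, 0.0)]  # mirror-injected returns (OAA)

occupied = set()
for p in real_points + phantom_points:
    occupied.add(to_cell(p))
# The planner now treats the phantom cell as an obstacle and may brake
# or swerve for an object that does not exist (a false positive).
```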
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mirror-based LiDAR spoofing for object injection/removal
Geometric optics models validated with outdoor experiments
Simulation framework for scalable attack testing